---
license: apache-2.0
base_model: bert-base-uncased
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: best_model-sst-2-16-13
    results: []
---

# best_model-sst-2-16-13

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.6710
- Accuracy: 0.7188
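
For quick inference, a minimal sketch using the `pipeline` API is given below. It assumes the checkpoint is published as `simonycl/best_model-sst-2-16-13` and carries a binary sentiment classification head (suggested by the SST-2 naming, not stated elsewhere in this card); adjust the repo id to match the actual checkpoint.

```python
# Minimal inference sketch. The repository id and the binary sentiment labels
# are assumptions; adjust them to match the actual checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="simonycl/best_model-sst-2-16-13",  # assumed repo id
)

print(classifier("a gripping, well-acted drama"))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- label names depend on the saved config
```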

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
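
The list above maps onto `transformers.TrainingArguments` roughly as in the sketch below. This is a reconstruction from the listed values, not the original training script; `output_dir` is an assumption, and the Adam settings shown are also the library defaults.

```python
# Rough reconstruction of the hyperparameters above as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="best_model-sst-2-16-13",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
)
```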

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 1    | 0.6895          | 0.5312   |
| No log        | 2.0   | 2    | 0.6895          | 0.5312   |
| No log        | 3.0   | 3    | 0.6894          | 0.5312   |
| No log        | 4.0   | 4    | 0.6894          | 0.5312   |
| No log        | 5.0   | 5    | 0.6894          | 0.5312   |
| No log        | 6.0   | 6    | 0.6893          | 0.5312   |
| No log        | 7.0   | 7    | 0.6893          | 0.5312   |
| No log        | 8.0   | 8    | 0.6892          | 0.5312   |
| No log        | 9.0   | 9    | 0.6891          | 0.5312   |
| 0.7006        | 10.0  | 10   | 0.6890          | 0.5312   |
| 0.7006        | 11.0  | 11   | 0.6889          | 0.5312   |
| 0.7006        | 12.0  | 12   | 0.6888          | 0.5312   |
| 0.7006        | 13.0  | 13   | 0.6887          | 0.5312   |
| 0.7006        | 14.0  | 14   | 0.6886          | 0.5312   |
| 0.7006        | 15.0  | 15   | 0.6884          | 0.5312   |
| 0.7006        | 16.0  | 16   | 0.6883          | 0.5312   |
| 0.7006        | 17.0  | 17   | 0.6881          | 0.5312   |
| 0.7006        | 18.0  | 18   | 0.6879          | 0.5312   |
| 0.7006        | 19.0  | 19   | 0.6877          | 0.5312   |
| 0.6992        | 20.0  | 20   | 0.6875          | 0.5312   |
| 0.6992        | 21.0  | 21   | 0.6872          | 0.5312   |
| 0.6992        | 22.0  | 22   | 0.6870          | 0.5312   |
| 0.6992        | 23.0  | 23   | 0.6867          | 0.5      |
| 0.6992        | 24.0  | 24   | 0.6864          | 0.5      |
| 0.6992        | 25.0  | 25   | 0.6861          | 0.5      |
| 0.6992        | 26.0  | 26   | 0.6857          | 0.5      |
| 0.6992        | 27.0  | 27   | 0.6854          | 0.5      |
| 0.6992        | 28.0  | 28   | 0.6850          | 0.5      |
| 0.6992        | 29.0  | 29   | 0.6846          | 0.5      |
| 0.68          | 30.0  | 30   | 0.6842          | 0.5      |
| 0.68          | 31.0  | 31   | 0.6838          | 0.5      |
| 0.68          | 32.0  | 32   | 0.6833          | 0.5      |
| 0.68          | 33.0  | 33   | 0.6829          | 0.5      |
| 0.68          | 34.0  | 34   | 0.6824          | 0.5      |
| 0.68          | 35.0  | 35   | 0.6819          | 0.5      |
| 0.68          | 36.0  | 36   | 0.6814          | 0.5312   |
| 0.68          | 37.0  | 37   | 0.6808          | 0.5625   |
| 0.68          | 38.0  | 38   | 0.6802          | 0.5625   |
| 0.68          | 39.0  | 39   | 0.6796          | 0.5938   |
| 0.6655        | 40.0  | 40   | 0.6789          | 0.5938   |
| 0.6655        | 41.0  | 41   | 0.6783          | 0.5938   |
| 0.6655        | 42.0  | 42   | 0.6776          | 0.5938   |
| 0.6655        | 43.0  | 43   | 0.6769          | 0.6562   |
| 0.6655        | 44.0  | 44   | 0.6762          | 0.7188   |
| 0.6655        | 45.0  | 45   | 0.6754          | 0.7188   |
| 0.6655        | 46.0  | 46   | 0.6746          | 0.7188   |
| 0.6655        | 47.0  | 47   | 0.6737          | 0.75     |
| 0.6655        | 48.0  | 48   | 0.6728          | 0.75     |
| 0.6655        | 49.0  | 49   | 0.6719          | 0.75     |
| 0.6452        | 50.0  | 50   | 0.6710          | 0.7188   |
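
An accuracy of this kind can be recomputed with a loop like the one below. The GLUE SST-2 validation split is used purely as an example, since the card lists the training and evaluation data as unknown, and the repository id is assumed to be `simonycl/best_model-sst-2-16-13`; results on your own data will differ from the table above.

```python
# Hypothetical evaluation sketch: dataset choice and repo id are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "simonycl/best_model-sst-2-16-13"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

dataset = load_dataset("glue", "sst2", split="validation")  # example dataset only

correct = 0
for example in dataset:
    inputs = tokenizer(example["sentence"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(logits.argmax(dim=-1).item() == example["label"])

print(f"Accuracy: {correct / len(dataset):.4f}")
```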

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3