---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: best_model-sst-2-16-13
  results: []
---

# best_model-sst-2-16-13

This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.6655
- Accuracy: 0.5938
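
The base model and the accuracy metric suggest a binary sentence-classification head. Below is a minimal usage sketch; the Hub id `simonycl/best_model-sst-2-16-13` is inferred from this card's owner and model name and may differ from the actual repository id.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the checkpoint is published under this Hub id (owner/model name from this card).
model_id = "simonycl/best_model-sst-2-16-13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("a gripping, well-acted film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
# Without explicit label names, the mapping defaults to LABEL_0 / LABEL_1.
print(model.config.id2label[predicted_id])
```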

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
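
The list above maps onto the `transformers` `Trainer` API roughly as follows. This is a hedged reconstruction, not the actual training script: the SST-2 data loading, the 16-example training subset, and the 32-example evaluation subset are assumptions based on the model name and the accuracy granularity; only the hyperparameters come from this card.

```python
# Reconstruction sketch: dataset choice, subsampling, and metric wiring are assumptions.
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Assumption: a 16-example SST-2 training subset, as the "sst-2-16" name suggests.
raw = load_dataset("glue", "sst2")
train_split = raw["train"].shuffle(seed=42).select(range(16))
eval_split = raw["validation"].shuffle(seed=42).select(range(32))

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

train_split = train_split.map(tokenize, batched=True)
eval_split = eval_split.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

# Hyperparameters as listed above; the Adam betas and epsilon match the Trainer defaults.
args = TrainingArguments(
    output_dir="best_model-sst-2-16-13",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_split,
    eval_dataset=eval_split,
    compute_metrics=compute_metrics,
)
trainer.train()
```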

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 1    | 0.7052          | 0.5312   |
| No log        | 2.0   | 2    | 0.7051          | 0.5312   |
| No log        | 3.0   | 3    | 0.7050          | 0.5312   |
| No log        | 4.0   | 4    | 0.7048          | 0.5312   |
| No log        | 5.0   | 5    | 0.7045          | 0.5312   |
| No log        | 6.0   | 6    | 0.7042          | 0.5312   |
| No log        | 7.0   | 7    | 0.7038          | 0.5312   |
| No log        | 8.0   | 8    | 0.7034          | 0.5312   |
| No log        | 9.0   | 9    | 0.7029          | 0.5312   |
| 0.7492        | 10.0  | 10   | 0.7023          | 0.5312   |
| 0.7492        | 11.0  | 11   | 0.7017          | 0.5312   |
| 0.7492        | 12.0  | 12   | 0.7010          | 0.5312   |
| 0.7492        | 13.0  | 13   | 0.7002          | 0.5312   |
| 0.7492        | 14.0  | 14   | 0.6994          | 0.5312   |
| 0.7492        | 15.0  | 15   | 0.6985          | 0.5312   |
| 0.7492        | 16.0  | 16   | 0.6976          | 0.5312   |
| 0.7492        | 17.0  | 17   | 0.6966          | 0.5312   |
| 0.7492        | 18.0  | 18   | 0.6956          | 0.5312   |
| 0.7492        | 19.0  | 19   | 0.6946          | 0.5312   |
| 0.7304        | 20.0  | 20   | 0.6935          | 0.5312   |
| 0.7304        | 21.0  | 21   | 0.6924          | 0.5      |
| 0.7304        | 22.0  | 22   | 0.6912          | 0.5625   |
| 0.7304        | 23.0  | 23   | 0.6900          | 0.5625   |
| 0.7304        | 24.0  | 24   | 0.6887          | 0.5625   |
| 0.7304        | 25.0  | 25   | 0.6875          | 0.5625   |
| 0.7304        | 26.0  | 26   | 0.6861          | 0.5625   |
| 0.7304        | 27.0  | 27   | 0.6849          | 0.5625   |
| 0.7304        | 28.0  | 28   | 0.6836          | 0.5625   |
| 0.7304        | 29.0  | 29   | 0.6823          | 0.5625   |
| 0.6885        | 30.0  | 30   | 0.6812          | 0.5938   |
| 0.6885        | 31.0  | 31   | 0.6800          | 0.5625   |
| 0.6885        | 32.0  | 32   | 0.6789          | 0.5625   |
| 0.6885        | 33.0  | 33   | 0.6779          | 0.5312   |
| 0.6885        | 34.0  | 34   | 0.6772          | 0.5312   |
| 0.6885        | 35.0  | 35   | 0.6763          | 0.5312   |
| 0.6885        | 36.0  | 36   | 0.6753          | 0.5625   |
| 0.6885        | 37.0  | 37   | 0.6744          | 0.5625   |
| 0.6885        | 38.0  | 38   | 0.6734          | 0.5938   |
| 0.6885        | 39.0  | 39   | 0.6724          | 0.5938   |
| 0.6578        | 40.0  | 40   | 0.6715          | 0.625    |
| 0.6578        | 41.0  | 41   | 0.6707          | 0.5938   |
| 0.6578        | 42.0  | 42   | 0.6699          | 0.625    |
| 0.6578        | 43.0  | 43   | 0.6692          | 0.625    |
| 0.6578        | 44.0  | 44   | 0.6686          | 0.6562   |
| 0.6578        | 45.0  | 45   | 0.6681          | 0.6562   |
| 0.6578        | 46.0  | 46   | 0.6679          | 0.6562   |
| 0.6578        | 47.0  | 47   | 0.6677          | 0.5938   |
| 0.6578        | 48.0  | 48   | 0.6673          | 0.5938   |
| 0.6578        | 49.0  | 49   | 0.6665          | 0.5938   |
| 0.5964        | 50.0  | 50   | 0.6655          | 0.5938   |

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3