# bert_uncased_L-4_H-128_A-2_stsb
This model is a fine-tuned version of google/bert_uncased_L-4_H-128_A-2 on the GLUE STSB dataset. It achieves the following results on the evaluation set:
- Loss: 0.7195
- Pearson: 0.8255
- Spearmanr: 0.8275
- Combined Score: 0.8265
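The Combined Score is the arithmetic mean of the Pearson and Spearman correlations ((0.8255 + 0.8275) / 2 = 0.8265), the usual GLUE convention for STSB. A minimal NumPy sketch of the metric on toy data (the argsort-based ranking below is a tie-free simplification; the GLUE metric uses a tie-aware Spearman such as scipy's):

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation via NumPy's correlation-coefficient matrix.
    return float(np.corrcoef(a, b)[0, 1])

def spearman(a, b):
    # Spearman is Pearson applied to the ranks of the data.
    # argsort-of-argsort ranks assume no ties (a simplification).
    ranks = lambda x: np.argsort(np.argsort(x)).astype(float)
    return pearson(ranks(a), ranks(b))

# Toy gold similarity scores vs. predictions (not the STSB data).
gold = np.array([0.0, 1.2, 2.5, 3.8, 5.0])
pred = np.array([0.3, 1.0, 2.9, 3.5, 4.8])

p, s = pearson(gold, pred), spearman(gold, pred)
combined = (p + s) / 2  # "Combined Score" = mean of the two correlations
```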
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 50
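With a linear scheduler, the learning rate decays from 5e-05 toward zero over the run; the results table logs 23 optimizer steps per epoch, so 50 epochs give 1150 steps in total. A minimal sketch of the schedule, assuming no warmup (warmup steps are not listed above):

```python
LEARNING_RATE = 5e-05
STEPS_PER_EPOCH = 23      # from the training log: 23 steps per epoch
NUM_EPOCHS = 50
TOTAL_STEPS = STEPS_PER_EPOCH * NUM_EPOCHS  # 1150

def linear_lr(step, base_lr=LEARNING_RATE, total=TOTAL_STEPS):
    """Linear decay to zero, assuming zero warmup steps."""
    return base_lr * max(0.0, 1.0 - step / total)
```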
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|---|---|---|---|---|---|---|
| 6.5937 | 1.0 | 23 | 4.2841 | 0.1711 | 0.2041 | 0.1876 |
| 4.7932 | 2.0 | 46 | 3.4947 | 0.2599 | 0.2547 | 0.2573 |
| 3.9815 | 3.0 | 69 | 2.9842 | 0.1082 | 0.1081 | 0.1081 |
| 3.3791 | 4.0 | 92 | 2.6163 | 0.0598 | 0.0576 | 0.0587 |
| 2.8814 | 5.0 | 115 | 2.3940 | nan | nan | nan |
| 2.5786 | 6.0 | 138 | 2.2811 | nan | nan | nan |
| 2.3519 | 7.0 | 161 | 2.2501 | nan | nan | nan |
| 2.2133 | 8.0 | 184 | 2.2792 | 0.3315 | 0.3276 | 0.3295 |
| 2.0732 | 9.0 | 207 | 1.7190 | 0.6363 | 0.5993 | 0.6178 |
| 1.688 | 10.0 | 230 | 1.2774 | 0.7001 | 0.7069 | 0.7035 |
| 1.3697 | 11.0 | 253 | 1.2016 | 0.6980 | 0.7378 | 0.7179 |
| 1.197 | 12.0 | 276 | 1.1024 | 0.7279 | 0.7207 | 0.7243 |
| 1.0723 | 13.0 | 299 | 0.9007 | 0.7777 | 0.7922 | 0.7849 |
| 0.9748 | 14.0 | 322 | 0.8908 | 0.7814 | 0.7975 | 0.7895 |
| 0.923 | 15.0 | 345 | 0.8621 | 0.7905 | 0.8112 | 0.8008 |
| 0.8518 | 16.0 | 368 | 0.8973 | 0.7831 | 0.7829 | 0.7830 |
| 0.7957 | 17.0 | 391 | 0.7932 | 0.8057 | 0.8142 | 0.8100 |
| 0.7691 | 18.0 | 414 | 0.8001 | 0.8066 | 0.8182 | 0.8124 |
| 0.7204 | 19.0 | 437 | 0.7710 | 0.8134 | 0.8204 | 0.8169 |
| 0.6959 | 20.0 | 460 | 0.7704 | 0.8145 | 0.8228 | 0.8187 |
| 0.6458 | 21.0 | 483 | 0.7613 | 0.8175 | 0.8231 | 0.8203 |
| 0.6427 | 22.0 | 506 | 0.7614 | 0.8188 | 0.8233 | 0.8210 |
| 0.6307 | 23.0 | 529 | 0.7195 | 0.8255 | 0.8275 | 0.8265 |
| 0.5979 | 24.0 | 552 | 0.7540 | 0.8207 | 0.8243 | 0.8225 |
| 0.5774 | 25.0 | 575 | 0.7823 | 0.8183 | 0.8226 | 0.8204 |
| 0.5542 | 26.0 | 598 | 0.7305 | 0.8253 | 0.8278 | 0.8266 |
| 0.548 | 27.0 | 621 | 0.7279 | 0.8261 | 0.8278 | 0.8270 |
| 0.5295 | 28.0 | 644 | 0.7397 | 0.8254 | 0.8267 | 0.8260 |
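The headline metrics at the top of this card match the epoch-23 row (validation loss 0.7195), which suggests the reported checkpoint was selected by lowest validation loss rather than taken from the final epoch. A sketch of that selection over a few of the logged rows:

```python
# (epoch, validation_loss, pearson, spearmanr) for a few logged evaluations
log = [
    (22, 0.7614, 0.8188, 0.8233),
    (23, 0.7195, 0.8255, 0.8275),
    (24, 0.7540, 0.8207, 0.8243),
    (28, 0.7397, 0.8254, 0.8267),
]

# Pick the evaluation with the lowest validation loss, which is how
# the headline numbers appear to have been chosen (an assumption).
best = min(log, key=lambda row: row[1])
```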
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3