# roberta-large
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5329
- Accuracy: 0.9121
## Model description
More information needed
## Intended uses & limitations
More information needed
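
In lieu of documented usage notes, here is a minimal inference sketch. It assumes the checkpoint is published under the hub id listed in the model tree below and carries a sequence-classification head (the accuracy metric reported above points to a classification task); the label names are whatever the checkpoint's config defines.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hub id taken from the model tree at the bottom of this card; the task
# and label meanings are assumptions, since the training data is undocumented.
model_id = "jialicheng/imdb-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A wonderful, heartfelt film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to its configured label name.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```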
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
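
For reference, a minimal sketch of a `TrainingArguments` object that mirrors the settings above; `output_dir` is hypothetical, and `evaluation_strategy="epoch"` is inferred from the per-epoch results table below rather than stated on the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-finetuned",   # hypothetical path, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=256,
    seed=42,
    # Trainer's default optimizer (an AdamW variant) consumes these values,
    # matching the "Adam with betas=(0.9,0.999) and epsilon=1e-08" line above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # inferred from the per-epoch validation metrics
)
```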
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log        | 1.0   | 782   | 0.2406          | 0.8990   |
| 0.3075        | 2.0   | 1564  | 0.2539          | 0.8962   |
| 0.2171        | 3.0   | 2346  | 0.2650          | 0.9031   |
| 0.1697        | 4.0   | 3128  | 0.3427          | 0.8973   |
| 0.1697        | 5.0   | 3910  | 0.3241          | 0.9031   |
| 0.1339        | 6.0   | 4692  | 0.4141          | 0.9049   |
| 0.1038        | 7.0   | 5474  | 0.4572          | 0.8946   |
| 0.0922        | 8.0   | 6256  | 0.4154          | 0.9054   |
| 0.0676        | 9.0   | 7038  | 0.5020          | 0.8982   |
| 0.0676        | 10.0  | 7820  | 0.5070          | 0.9071   |
| 0.0568        | 11.0  | 8602  | 0.4826          | 0.9067   |
| 0.0443        | 12.0  | 9384  | 0.5104          | 0.9086   |
| 0.0313        | 13.0  | 10166 | 0.5456          | 0.9088   |
| 0.0313        | 14.0  | 10948 | 0.4740          | 0.9078   |
| 0.0245        | 15.0  | 11730 | 0.4977          | 0.9071   |
| 0.0227        | 16.0  | 12512 | 0.5136          | 0.9098   |
| 0.0175        | 17.0  | 13294 | 0.5131          | 0.9108   |
| 0.0173        | 18.0  | 14076 | 0.5370          | 0.9109   |
| 0.0173        | 19.0  | 14858 | 0.5344          | 0.9126   |
| 0.0152        | 20.0  | 15640 | 0.5329          | 0.9121   |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
## Model tree for jialicheng/imdb-roberta-large

- Base model: [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large)