# bart_bos
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.4379
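Since the base checkpoint is a summarization model, the fine-tune can be tried out with the `transformers` pipeline. The snippet below is a minimal sketch, assuming the checkpoint is published under this repo's id, `zera09/bart_bos`, and that the fine-tuning task is still summarization (the card does not state the task):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the summarization task is assumed
# from the bart-large-cnn base model, not stated in this card.
summarizer = pipeline("summarization", model="zera09/bart_bos")

text = "Replace this with the document you want to summarize."
summary = summarizer(text, max_length=128, min_length=16, do_sample=False)
print(summary[0]["summary_text"])
```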
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
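For reference, these settings map onto `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction sketch, not the original training script; `output_dir` is a placeholder, and the Adam betas and epsilon above are the library defaults, so they need no explicit flags.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="bart_bos",             # placeholder, not from the original run
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,                         # "Native AMP" mixed precision (requires a GPU)
    eval_strategy="epoch",             # matches the per-epoch validation in the table below
)
```

A `Seq2SeqTrainer` built from these arguments, the base model, and the (unspecified) train/eval datasets would reproduce the schedule shown in the results table below.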
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 32   | 1.9039          |
| No log        | 2.0   | 64   | 1.9118          |
| No log        | 3.0   | 96   | 1.9611          |
| No log        | 4.0   | 128  | 2.1126          |
| No log        | 5.0   | 160  | 2.3234          |
| No log        | 6.0   | 192  | 2.5468          |
| No log        | 7.0   | 224  | 2.6987          |
| No log        | 8.0   | 256  | 2.8041          |
| No log        | 9.0   | 288  | 2.9329          |
| No log        | 10.0  | 320  | 3.0530          |
| No log        | 11.0  | 352  | 3.1344          |
| No log        | 12.0  | 384  | 3.1571          |
| No log        | 13.0  | 416  | 3.2308          |
| No log        | 14.0  | 448  | 3.3060          |
| No log        | 15.0  | 480  | 3.3254          |
| 0.55          | 16.0  | 512  | 3.3449          |
| 0.55          | 17.0  | 544  | 3.3627          |
| 0.55          | 18.0  | 576  | 3.4195          |
| 0.55          | 19.0  | 608  | 3.4282          |
| 0.55          | 20.0  | 640  | 3.4379          |
### Framework versions
- Transformers 4.44.0
- PyTorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1