# results_t5small

This model is a fine-tuned version of [t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0067
- Bleu: 95.7548
- Wer: 0.0246
- Gen Len: 62.3093
## Model description
TODO
## Intended uses & limitations
TODO
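The intended use is not yet documented, but the checkpoint can be loaded as a standard text-to-text pipeline. The sketch below is an assumption based on the T5 architecture, not a documented usage example; the expected input format for this model is unknown:

```python
from transformers import pipeline


def load_rewriter(model_id: str = "spidersouris/genre-t5-small-60k"):
    """Load the fine-tuned checkpoint as a text2text pipeline.

    Downloads the model from the Hugging Face Hub on first call.
    """
    return pipeline("text2text-generation", model=model_id)


if __name__ == "__main__":
    rewriter = load_rewriter()
    # Placeholder input: the card does not document the expected format.
    print(rewriter("your input text here")[0]["generated_text"])
```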
## Training and evaluation data
TODO
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
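The hyperparameters above can be collected into a `Seq2SeqTrainingArguments` sketch. The output directory is a placeholder, and the Adam betas/epsilon match the `transformers` defaults, so they are not set explicitly; this is a reconstruction, not the original training script:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameter list above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="results_t5small",
    learning_rate=5e-4,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",   # Adam betas=(0.9, 0.999), eps=1e-8 are the defaults
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,                    # Native AMP mixed-precision training
)
```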
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu    | Wer    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.0195        | 0.4   | 500  | 0.0153          | 93.2753 | 0.0427 | 62.7732 |
| 0.0128        | 0.8   | 1000 | 0.0109          | 94.7989 | 0.0312 | 62.7715 |
| 0.0121        | 1.2   | 1500 | 0.0094          | 95.0581 | 0.0295 | 62.266  |
| 0.0105        | 1.6   | 2000 | 0.0087          | 95.2607 | 0.0286 | 62.1432 |
| 0.009         | 2.0   | 2500 | 0.0083          | 95.3219 | 0.0274 | 62.2098 |
| 0.0096        | 2.4   | 3000 | 0.0077          | 95.4354 | 0.0268 | 62.2532 |
| 0.0071        | 2.8   | 3500 | 0.0075          | 95.5026 | 0.0261 | 62.3815 |
| 0.0072        | 3.2   | 4000 | 0.0073          | 95.5419 | 0.0259 | 62.5688 |
| 0.0074        | 3.6   | 4500 | 0.0071          | 95.6607 | 0.0254 | 62.7457 |
| 0.0076        | 4.0   | 5000 | 0.0068          | 95.6779 | 0.0251 | 62.892  |
| 0.0074        | 4.4   | 5500 | 0.0068          | 95.7313 | 0.0248 | 62.8282 |
| 0.0077        | 4.8   | 6000 | 0.0067          | 95.7548 | 0.0246 | 62.3093 |
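The Wer column above is word error rate: the word-level edit distance (substitutions, insertions, deletions) between the model output and the reference, divided by the reference length. A minimal sketch of the computation, for reference only (not the evaluation script used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions to build the hypothesis from nothing
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

A Wer of 0.0246 thus means roughly 2.5 word edits per 100 reference words.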
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
## Model tree for spidersouris/genre-t5-small-60k

- Base model: [google-t5/t5-small](https://huggingface.co/google-t5/t5-small)
- Fine-tuned: this model