# flan-t5-large-extraction-cnndm_8000-all
This model is a fine-tuned version of google/flan-t5-large. The fine-tuning dataset is not documented in this card, although the repository name suggests a CNN/DailyMail-derived extraction set. The model achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 1.6960
- Rouge1: 35.1425
- Rouge2: 15.3877
- RougeL: 30.0992
- RougeLsum: 30.1879
- Gen Len: 19.0
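
The snippet below is a minimal usage sketch. The repository ID, the input prompt format, and the generation settings (beam search, output length capped near the reported Gen Len of 19) are assumptions, since the card does not document them.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repository ID is a placeholder; substitute the actual namespace/model name.
model_id = "flan-t5-large-extraction-cnndm_8000-all"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a CNN/DailyMail-style news article; the expected prompt format is not documented

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
# max_length=20 roughly mirrors the reported Gen Len of 19; adjust as needed.
summary_ids = model.generate(**inputs, max_length=20, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```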
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
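
A minimal sketch of how these settings might map onto a `Seq2SeqTrainingArguments` configuration; the output directory, the 200-step evaluation cadence (inferred from the results table below), and `predict_with_generate` are assumptions rather than documented settings.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-extraction-cnndm_8000-all",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=24,
    seed=1799,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,               # matches the 200-step cadence in the results table
    predict_with_generate=True,   # needed to compute ROUGE during evaluation (assumption)
)
```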
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|:--------------|:------|:-----|:----------------|:--------|:--------|:--------|:----------|:--------|
| 2.1837 | 0.2 | 200 | 1.8342 | 33.7673 | 14.4744 | 28.8398 | 28.8415 | 19.0 |
| 1.9557 | 0.4 | 400 | 1.7798 | 34.3577 | 14.8613 | 29.769 | 29.766 | 18.986 |
| 1.9219 | 0.6 | 600 | 1.7428 | 34.8589 | 15.4488 | 30.1084 | 30.1336 | 18.99 |
| 1.871 | 0.8 | 800 | 1.7408 | 35.001 | 15.597 | 30.3374 | 30.37 | 18.99 |
| 1.8729 | 1.0 | 1000 | 1.7502 | 34.9305 | 15.5718 | 30.1495 | 30.1513 | 19.0 |
| 1.7803 | 1.2 | 1200 | 1.7261 | 35.7504 | 15.4172 | 30.6898 | 30.7362 | 19.0 |
| 1.7674 | 1.4 | 1400 | 1.7214 | 35.9564 | 15.6508 | 30.3541 | 30.4292 | 19.0 |
| 1.7704 | 1.6 | 1600 | 1.7253 | 35.2706 | 15.7274 | 30.118 | 30.1324 | 19.0 |
| 1.7656 | 1.8 | 1800 | 1.6960 | 35.1425 | 15.3877 | 30.0992 | 30.1879 | 19.0 |
| 1.7545 | 2.0 | 2000 | 1.7186 | 34.6436 | 15.2712 | 29.9781 | 29.9698 | 19.0 |
| 1.6739 | 2.2 | 2200 | 1.7245 | 35.4083 | 15.8808 | 30.6222 | 30.6752 | 19.0 |
| 1.6836 | 2.4 | 2400 | 1.7212 | 35.1829 | 15.5181 | 30.2438 | 30.262 | 19.0 |
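
The ROUGE columns above can be recomputed with the `rouge` metric shipped with the Datasets library. This is a minimal sketch, assuming the card-style numbers are mid F-measure scaled by 100 (the convention used in the Transformers summarization examples) and that the `rouge_score` package is installed.

```python
from datasets import load_metric  # requires the rouge_score package

rouge = load_metric("rouge")

# Toy inputs; in practice, use model predictions and reference summaries.
predictions = ["the generated summary goes here"]
references = ["the reference summary goes here"]

result = rouge.compute(predictions=predictions, references=references)
# Each entry is an AggregateScore; card-style numbers are mid F-measure * 100.
print({k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()})
```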
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1