# rut5-base-absum-tech-support-calls

This model is a fine-tuned version of [cointegrated/rut5-base-absum](https://huggingface.co/cointegrated/rut5-base-absum) on an unknown dataset. It achieves the following results on the evaluation set (an inference sketch follows the metrics list):
- Loss: 1.4464
- Rouge-1: 0.5076
- Rouge-2: 0.3897
- Rouge-L: 0.4945
- Gen Len: 15.75
- Avg Rouge F: 0.4639
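
Since the card does not include a usage snippet, here is a minimal inference sketch with the `transformers` library. The repository ID is assumed from this card's title, and the decoding settings are illustrative, not the settings used for the evaluation above.

```python
# Minimal sketch; the repo ID and generation settings below are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_ID = "rut5-base-absum-tech-support-calls"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

def summarize(text: str, max_new_tokens: int = 64) -> str:
    """Summarize a (Russian) tech-support call transcript."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        num_beams=4,              # illustrative decoding choice
        no_repeat_ngram_size=3,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(summarize("Клиент не может войти в личный кабинет после смены пароля..."))
```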
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the hedged `Seq2SeqTrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
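
As a rough sketch, the hyperparameters above map onto `Seq2SeqTrainingArguments` as follows. The `output_dir`, the 50-step evaluation cadence (inferred from the results table below), and `predict_with_generate` are assumptions, not documented settings.

```python
# Sketch only: mirrors the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="rut5-base-absum-tech-support-calls",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=100,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",   # assumed: eval every 50 steps, per the table below
    eval_steps=50,
    predict_with_generate=True,    # assumed: needed for ROUGE / Gen Len during eval
)
```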
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-L | Gen Len | Avg Rouge F |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-----------:|
| 2.6017 | 2.78 | 50 | 2.0030 | 0.0 | 0.0 | 0.0 | 8.125 | 0.0 |
| 2.1413 | 5.56 | 100 | 1.5154 | 0.1125 | 0.0317 | 0.0958 | 11.5 | 0.08 |
| 1.6874 | 8.33 | 150 | 1.2364 | 0.3417 | 0.2312 | 0.325 | 13.25 | 0.2993 |
| 1.2272 | 11.11 | 200 | 1.1259 | 0.3605 | 0.2437 | 0.3291 | 14.25 | 0.3111 |
| 0.9384 | 13.89 | 250 | 1.0853 | 0.4505 | 0.3 | 0.4211 | 13.5 | 0.3905 |
| 0.7071 | 16.67 | 300 | 1.0607 | 0.3559 | 0.1368 | 0.3133 | 14.875 | 0.2687 |
| 0.5871 | 19.44 | 350 | 1.0346 | 0.5377 | 0.4194 | 0.5126 | 16.0 | 0.4899 |
| 0.4194 | 22.22 | 400 | 1.0672 | 0.5079 | 0.3819 | 0.4829 | 15.5 | 0.4576 |
| 0.3685 | 25.0 | 450 | 1.1284 | 0.5029 | 0.3835 | 0.4897 | 14.75 | 0.4587 |
| 0.2884 | 27.78 | 500 | 1.1729 | 0.5427 | 0.421 | 0.5164 | 15.875 | 0.4933 |
| 0.2368 | 30.56 | 550 | 1.1640 | 0.5326 | 0.421 | 0.5195 | 15.25 | 0.491 |
| 0.195 | 33.33 | 600 | 1.2053 | 0.5326 | 0.421 | 0.5195 | 15.25 | 0.491 |
| 0.1667 | 36.11 | 650 | 1.2525 | 0.4245 | 0.2717 | 0.4114 | 16.125 | 0.3692 |
| 0.1491 | 38.89 | 700 | 1.3346 | 0.5032 | 0.3897 | 0.4901 | 16.0 | 0.461 |
| 0.1122 | 41.67 | 750 | 1.3354 | 0.5094 | 0.4062 | 0.5094 | 15.375 | 0.475 |
| 0.1166 | 44.44 | 800 | 1.3685 | 0.5076 | 0.3897 | 0.4945 | 15.625 | 0.4639 |
| 0.0973 | 47.22 | 850 | 1.4157 | 0.5076 | 0.3897 | 0.4945 | 15.375 | 0.4639 |
| 0.0944 | 50.0 | 900 | 1.4523 | 0.5095 | 0.3897 | 0.4963 | 15.125 | 0.4652 |
| 0.0744 | 52.78 | 950 | 1.4221 | 0.5326 | 0.421 | 0.5195 | 15.25 | 0.491 |
| 0.0745 | 55.56 | 1000 | 1.4464 | 0.5076 | 0.3897 | 0.4945 | 15.75 | 0.4639 |
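
The card does not say how "Avg Rouge F" was computed; the sketch below assumes it is the mean of the three ROUGE F-scores and uses the `evaluate` library. A whitespace tokenizer is passed because `rouge_score`'s default tokenizer keeps only Latin alphanumerics, which would discard Cyrillic text.

```python
# Sketch of how the ROUGE columns above could be reproduced; treating
# "Avg Rouge F" as the mean of the three F-scores is an assumption.
import evaluate

rouge = evaluate.load("rouge")

def rouge_metrics(predictions: list[str], references: list[str]) -> dict:
    scores = rouge.compute(
        predictions=predictions,
        references=references,
        tokenizer=lambda text: text.split(),  # default tokenizer drops Cyrillic
    )
    scores["avg_rouge_f"] = (
        scores["rouge1"] + scores["rouge2"] + scores["rougeL"]
    ) / 3
    return scores
```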
### Framework versions

- Transformers 4.29.2
- PyTorch 2.0.1+cu118
- Tokenizers 0.13.3
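
One way to recreate a matching environment is to pin the versions listed above; using the CUDA 11.8 wheel index for PyTorch is an assumption about how the `+cu118` build was installed.

```bash
pip install transformers==4.29.2 tokenizers==0.13.3
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
```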