---
base_model: Hasanur525/deed-summarization_version_10
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: deed-summarization_version_11
    results: []
---

deed-summarization_version_11

This model is a fine-tuned version of Hasanur525/deed-summarization_version_10 on an unspecified dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the list):

  • Loss: 0.2748
  • Rouge1: 0.7615
  • Rouge2: 0.3638
  • Rougel: 0.7644
  • Rougelsum: 0.7534
  • Gen Len: 98.2164
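
The card does not yet document how to run the model, so the following is a minimal inference sketch. It assumes the checkpoint is a standard Hugging Face encoder-decoder (seq2seq) summarization model published as Hasanur525/deed-summarization_version_11; the model id, input text, and generation settings are illustrative assumptions, not values taken from this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub id, matching the model name on this card.
model_id = "Hasanur525/deed-summarization_version_11"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; replace with the deed text to be summarized.
text = "..."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```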

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 5000
  • num_epochs: 25
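
As a rough illustration, the hyperparameters above map onto a Seq2SeqTrainingArguments configuration as sketched below. The output_dir, evaluation strategy, and predict_with_generate values are assumptions rather than settings documented on this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="deed-summarization_version_11",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=5000,
    num_train_epochs=25,
    evaluation_strategy="epoch",   # assumed: the results table logs one evaluation per epoch
    predict_with_generate=True,    # assumed: needed to compute ROUGE and Gen Len during evaluation
)
# The Adam betas=(0.9, 0.999) and epsilon=1e-08 listed above match the Trainer's
# default optimizer settings, so no explicit optimizer arguments are shown here.
```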

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.1025        | 1.0   | 265  | 0.4128          | 0.3423 | 0.1522 | 0.3482 | 0.3493    | 98.4272 |
| 0.7311        | 2.0   | 530  | 0.4113          | 0.3324 | 0.1504 | 0.3405 | 0.3389    | 98.465  |
| 0.1826        | 3.0   | 795  | 0.4086          | 0.3511 | 0.1619 | 0.36   | 0.3585    | 98.328  |
| 0.6314        | 4.0   | 1060 | 0.4053          | 0.3198 | 0.1474 | 0.3222 | 0.3179    | 98.4565 |
| 0.4551        | 5.0   | 1325 | 0.4025          | 0.363  | 0.1659 | 0.3732 | 0.3694    | 98.3507 |
| 1.1978        | 6.0   | 1590 | 0.3960          | 0.3611 | 0.1386 | 0.3589 | 0.3577    | 98.3043 |
| 1.078         | 7.0   | 1855 | 0.3902          | 0.3158 | 0.1445 | 0.3112 | 0.3074    | 98.3809 |
| 0.2222        | 8.0   | 2120 | 0.3846          | 0.4959 | 0.2242 | 0.494  | 0.4793    | 98.2212 |
| 0.811         | 9.0   | 2385 | 0.3811          | 0.4641 | 0.2215 | 0.464  | 0.4499    | 98.2457 |
| 0.4816        | 10.0  | 2650 | 0.3713          | 0.436  | 0.217  | 0.439  | 0.4368    | 98.1881 |
| 0.2396        | 11.0  | 2915 | 0.3650          | 0.556  | 0.2677 | 0.5563 | 0.5475    | 98.2571 |
| 0.1897        | 12.0  | 3180 | 0.3601          | 0.6718 | 0.4061 | 0.6712 | 0.6631    | 98.1597 |
| 0.6071        | 13.0  | 3445 | 0.3498          | 0.5639 | 0.294  | 0.5623 | 0.5554    | 98.1096 |
| 0.3386        | 14.0  | 3710 | 0.3416          | 0.4915 | 0.2933 | 0.5002 | 0.4954    | 98.069  |
| 0.2921        | 15.0  | 3975 | 0.3342          | 0.4391 | 0.2676 | 0.4381 | 0.4342    | 97.7353 |
| 1.4814        | 16.0  | 4240 | 0.3261          | 0.5389 | 0.2966 | 0.5542 | 0.5466    | 98.0945 |
| 0.1891        | 17.0  | 4505 | 0.3167          | 0.4885 | 0.2725 | 0.5044 | 0.4923    | 98.2146 |
| 0.4877        | 18.0  | 4770 | 0.3090          | 0.6391 | 0.3774 | 0.6378 | 0.6224    | 98.2098 |
| 0.6804        | 19.0  | 5035 | 0.3016          | 0.766  | 0.4274 | 0.7649 | 0.7553    | 97.8828 |
| 0.1395        | 20.0  | 5300 | 0.2930          | 0.7208 | 0.3954 | 0.7478 | 0.7245    | 98.0955 |
| 0.4395        | 21.0  | 5565 | 0.2866          | 0.7457 | 0.406  | 0.7629 | 0.7453    | 97.9509 |
| 0.2215        | 22.0  | 5830 | 0.2820          | 0.6278 | 0.3099 | 0.6447 | 0.6288    | 98.0255 |
| 0.6845        | 23.0  | 6095 | 0.2775          | 0.7815 | 0.3541 | 0.7789 | 0.7629    | 98.1692 |
| 0.3637        | 24.0  | 6360 | 0.2753          | 0.819  | 0.3989 | 0.8195 | 0.8062    | 98.328  |
| 0.4836        | 25.0  | 6625 | 0.2748          | 0.7615 | 0.3638 | 0.7644 | 0.7534    | 98.2164 |
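
For reference, ROUGE scores of the kind reported in the table are typically computed with the `evaluate` library. This is a generic sketch with placeholder strings, not the exact evaluation code used for this model.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder predictions/references; in practice these are the model's
# generated summaries and the ground-truth summaries from the evaluation set.
scores = rouge.compute(
    predictions=["generated summary of a deed"],
    references=["reference summary of a deed"],
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```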

Framework versions

  • Transformers 4.37.2
  • PyTorch 2.1.0.dev20230811+cu121
  • Datasets 2.17.0
  • Tokenizers 0.15.2