---
license: mit
base_model: facebook/bart-large-xsum
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: text_shortening_model_v50
    results: []
---

# text_shortening_model_v50

This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on an unspecified dataset (the card does not name it). It achieves the following results on the evaluation set; a minimal inference sketch follows the list:

- Loss: 1.8296
- Rouge1: 0.5063
- Rouge2: 0.2803
- RougeL: 0.4415
- RougeLsum: 0.4405
- BERT precision: 0.8741
- BERT recall: 0.8787
- Average word count: 8.7857
- Max word count: 16
- Min word count: 3
- Average token count: 16.3942
- % shortened texts with length > 12 words: 11.9048
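
A minimal inference sketch, not part of the original card: the repo id `ldos/text_shortening_model_v50` is inferred from the model name above, and the generation limits are illustrative.

```python
# Hedged sketch: running the fine-tuned model for text shortening.
# Assumption: the checkpoint is published as "ldos/text_shortening_model_v50";
# max_length/min_length cap generated *tokens* (not words) and are illustrative.
from transformers import pipeline

shortener = pipeline("summarization", model="ldos/text_shortening_model_v50")

text = "An example of a longer sentence that we would like to shorten."
result = shortener(text, max_length=20, min_length=3, do_sample=False)
print(result[0]["summary_text"])
```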

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
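
A minimal sketch of a `Seq2SeqTrainer` setup matching these hyperparameters; this is a reconstruction, not the author's script, and the dataset variables are placeholders since the card does not name the data.

```python
# Hedged sketch of the training setup, reconstructed from the hyperparameters
# above. The dataset is not named in this card, so the datasets are placeholders.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "facebook/bart-large-xsum"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v50",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: the results table shows one eval per epoch
    predict_with_generate=True,   # needed so eval produces text for ROUGE/BERTScore
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)

train_dataset = eval_dataset = None  # placeholders: the card does not name the data

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
# trainer.train()  # uncomment once real tokenized datasets are supplied
```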

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | BERT precision | BERT recall | Avg word count | Max word count | Min word count | Avg token count | % texts > 12 words |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.3343 | 1.0 | 83 | 1.4625 | 0.5101 | 0.2866 | 0.4536 | 0.4527 | 0.8751 | 0.877 | 8.3042 | 19 | 4 | 15.1508 | 5.0265 |
| 0.7011 | 2.0 | 166 | 1.4296 | 0.5101 | 0.284 | 0.4548 | 0.4551 | 0.8736 | 0.8797 | 8.7593 | 18 | 5 | 16.0529 | 7.672 |
| 0.483 | 3.0 | 249 | 1.3880 | 0.5025 | 0.2819 | 0.4433 | 0.442 | 0.8722 | 0.8782 | 8.7698 | 18 | 5 | 14.8492 | 6.3492 |
| 0.3876 | 4.0 | 332 | 1.7614 | 0.4934 | 0.2653 | 0.4334 | 0.4327 | 0.8715 | 0.8725 | 8.2249 | 18 | 5 | 16.3042 | 5.5556 |
| 0.291 | 5.0 | 415 | 1.8296 | 0.5063 | 0.2803 | 0.4415 | 0.4405 | 0.8741 | 0.8787 | 8.7857 | 16 | 3 | 16.3942 | 11.9048 |
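
The card does not include the evaluation code; the sketch below shows one plausible way to compute the reported metric families with the `evaluate` library (an assumption, not the author's script).

```python
# Hedged sketch of how the reported metric families could be computed with the
# `evaluate` library; the actual evaluation code is not part of this card.
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

predictions = ["a short example output"]    # hypothetical model outputs
references = ["a short example reference"]  # hypothetical gold shortenings

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert = bertscore.compute(predictions=predictions, references=references, lang="en")

word_counts = [len(p.split()) for p in predictions]
print(rouge_scores["rouge1"], rouge_scores["rougeL"])
print(sum(bert["precision"]) / len(bert["precision"]))  # BERT precision
print(sum(word_counts) / len(word_counts),              # average word count
      max(word_counts), min(word_counts))               # max / min word count
print(100 * sum(c > 12 for c in word_counts) / len(word_counts))  # % > 12 words
```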

### Framework versions

- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3