---
license: mit
base_model: facebook/bart-large-cnn
tags:
  - generated_from_trainer
datasets:
  - cnn_dailymail
metrics:
  - rouge
model-index:
  - name: bart-large-cnn-finetuned-CNN-ML
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: cnn_dailymail
          type: cnn_dailymail
          config: 3.0.0
          split: test
          args: 3.0.0
        metrics:
          - name: Rouge1
            type: rouge
            value: 44.4382
---

bart-large-cnn-finetuned-CNN-ML

This model is a fine-tuned version of facebook/bart-large-cnn on the cnn_dailymail dataset. It achieves the following results on the evaluation set:

  • Loss: 2.1137
  • Rouge1: 44.4382
  • Rouge2: 20.686
  • RougeL: 29.9355
  • RougeLsum: 41.4113
  • Gen Len: 93.846
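
For quick inference, the snippet below is a minimal usage sketch. It assumes the checkpoint is published under the repo id satyanshu404/bart-large-cnn-finetuned-CNN-ML (inferred from this page, not verified), and the generation lengths are illustrative rather than taken from this card.

```python
from transformers import pipeline

# Assumed repo id; adjust if the checkpoint lives under a different name.
summarizer = pipeline(
    "summarization",
    model="satyanshu404/bart-large-cnn-finetuned-CNN-ML",
)

article = "(CNN) -- A long news article to summarize goes here ..."
# min/max lengths are illustrative; they roughly match the summary
# sizes that bart-large-cnn-style checkpoints produce.
summary = summarizer(article, min_length=56, max_length=142, do_sample=False)
print(summary[0]["summary_text"])
```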

Model description

This is facebook/bart-large-cnn, a BART-large sequence-to-sequence model for summarization, further fine-tuned on the cnn_dailymail dataset with the procedure described below.

Intended uses & limitations

The model is intended for abstractive summarization of English news articles in the style of CNN/DailyMail highlights. Its behavior on other domains and languages has not been evaluated here.

Training and evaluation data

The model was fine-tuned and evaluated on the cnn_dailymail dataset (config 3.0.0); the results above are reported on the test split.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
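
As a sketch, these settings map onto transformers.Seq2SeqTrainingArguments roughly as follows; output_dir and the evaluation/generation flags are assumptions, not taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-CNN-ML",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default,
    # so no explicit optimizer arguments are needed.
    evaluation_strategy="epoch",   # assumption: matches the per-epoch rows below
    predict_with_generate=True,    # assumption: needed to compute ROUGE
)
```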

Training results

Training Loss   Epoch   Step   Validation Loss   Rouge1    Rouge2    RougeL    RougeLsum   Gen Len
1.0341          1.0     1000   1.5412            43.0331   20.1656   29.6298   39.9858     83.22
0.6416          2.0     2000   1.8461            44.2294   20.5043   29.6298   41.1457     93.366
0.3766          3.0     3000   2.1137            44.4382   20.686    29.9355   41.4113     93.846
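
ROUGE numbers like those above can be computed with the evaluate library; the snippet below is a sketch with placeholder predictions and references.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholders: in practice these are the model's generated summaries
# and the cnn_dailymail reference highlights.
predictions = ["the generated summary for one article"]
references = ["the reference highlights for the same article"]

scores = rouge.compute(predictions=predictions, references=references)
# evaluate returns scores in [0, 1]; the table reports them scaled by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```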

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3