---
license: apache-2.0
tags:
  - summarization
  - generated_from_trainer
datasets:
  - samsum
metrics:
  - rouge
model-index:
  - name: bart-base-finetuned-samsum-en
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: samsum
          type: samsum
          args: samsum
        metrics:
          - name: Rouge1
            type: rouge
            value: 46.8825
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: test
        metrics:
          - name: ROUGE-1
            type: rouge
            value: 45.0692
            verified: true
          - name: ROUGE-2
            type: rouge
            value: 20.9049
            verified: true
          - name: ROUGE-L
            type: rouge
            value: 37.3128
            verified: true
          - name: ROUGE-LSUM
            type: rouge
            value: 40.662
            verified: true
          - name: loss
            type: loss
            value: 5.763935565948486
            verified: true
          - name: gen_len
            type: gen_len
            value: 18.4921
            verified: true
---

bart-base-finetuned-samsum-en

This model is a fine-tuned version of facebook/bart-base on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3676
  • Rouge1: 46.8825
  • Rouge2: 22.0923
  • Rougel: 39.7249
  • Rougelsum: 42.9187
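
The checkpoint can be used with the transformers summarization pipeline. The snippet below is a minimal sketch: the repo id santiviquez/bart-base-finetuned-samsum-en, the example dialogue, and the generation lengths are illustrative assumptions, not values taken from this card.

```python
from transformers import pipeline

# Assumed Hub repo id for this checkpoint; adjust if the model lives elsewhere.
summarizer = pipeline("summarization", model="santiviquez/bart-base-finetuned-samsum-en")

# Toy SAMSum-style dialogue (not taken from the dataset).
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)

# Generation settings are illustrative, not reported in the card.
summary = summarizer(dialogue, max_length=60, min_length=5)
print(summary[0]["summary_text"])
```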

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 10
  • eval_batch_size: 10
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
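
For reference, here is a sketch of how these settings map onto Seq2SeqTrainingArguments in transformers. The Adam betas and epsilon listed above are the optimizer defaults; output_dir, evaluation_strategy, and predict_with_generate are illustrative assumptions, not values reported in this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-finetuned-samsum-en",  # assumed output path
    learning_rate=5.6e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",    # assumed: matches the per-epoch validation results below
    predict_with_generate=True,     # assumed: lets the trainer compute ROUGE during evaluation
)
```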

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.5172        | 1.0   | 300  | 2.1613          | 47.4152 | 22.8106 | 39.93   | 43.3639   |
| 0.3627        | 2.0   | 600  | 2.2771          | 47.2676 | 22.6325 | 40.1345 | 43.19     |
| 0.2466        | 3.0   | 900  | 2.3676          | 46.8825 | 22.0923 | 39.7249 | 42.9187   |
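
ROUGE scores of this kind can be computed with the rouge metric from the datasets library (a compatible version is listed below). The sketch uses toy placeholder strings; it does not reproduce the exact numbers above, where real evaluation would decode model predictions for the samsum validation or test split.

```python
from datasets import load_metric

rouge = load_metric("rouge")

# Placeholder predictions/references; not outputs of this model.
predictions = ["Anna and Ben will meet for lunch at 12:30."]
references = ["Anna and Ben confirm lunch tomorrow at 12:30."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)

# The card reports mid F1 scores scaled to 0-100 (Rouge1, Rouge2, Rougel, Rougelsum).
for key in ["rouge1", "rouge2", "rougeL", "rougeLsum"]:
    print(key, round(scores[key].mid.fmeasure * 100, 4))
```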

Framework versions

  • Transformers 4.19.2
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1