bart-large-cnn-finetuned-scope-summarization

This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1120
  • ROUGE-1: 51.232
  • ROUGE-2: 37.3103
  • ROUGE-L: 39.2783
  • ROUGE-Lsum: 39.2011
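
As a minimal inference sketch, the checkpoint can be loaded by name from the Hugging Face Hub with the transformers summarization pipeline. The input text and generation call below are illustrative; no generation settings are specified in this card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
summarizer = pipeline(
    "summarization",
    model="nandavikas16/bart-large-cnn-finetuned-scope-summarization",
)

# Placeholder input; like its base model facebook/bart-large-cnn,
# the model expects English prose.
document = (
    "Scope documents describe the boundaries of a project: its goals, "
    "deliverables, assumptions, and exclusions. Summarization models "
    "can condense them into a short overview for stakeholders."
)
print(summarizer(document, do_sample=False)[0]["summary_text"])
```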

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
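
For reference, here is a hedged sketch of how these hyperparameters map onto transformers Seq2SeqTrainingArguments. The output_dir value is a placeholder and the Trainer's default AdamW-style optimizer is assumed; only the values listed above come from this card.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameter list above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-scope-summarization",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```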

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|--------------:|------:|-----:|----------------:|--------:|--------:|--------:|-----------:|
| 0.6379 | 1.0 | 40 | 0.2289 | 45.9991 | 29.5151 | 34.3864 | 34.3984 |
| 0.2731 | 2.0 | 80 | 0.1935 | 47.3991 | 33.1933 | 38.1538 | 38.0514 |
| 0.2362 | 3.0 | 120 | 0.1734 | 47.4125 | 32.2496 | 35.7852 | 35.8279 |
| 0.2220 | 4.0 | 160 | 0.1665 | 46.2226 | 32.0249 | 37.0160 | 36.8941 |
| 0.2005 | 5.0 | 200 | 0.1530 | 50.1647 | 35.1015 | 39.0526 | 39.0721 |
| 0.1971 | 6.0 | 240 | 0.1434 | 49.7914 | 35.5371 | 39.2372 | 39.2440 |
| 0.1754 | 7.0 | 280 | 0.1286 | 49.8482 | 35.7536 | 40.2412 | 40.2248 |
| 0.1777 | 8.0 | 320 | 0.1187 | 51.6342 | 38.2230 | 41.4109 | 41.3626 |
| 0.1555 | 9.0 | 360 | 0.1149 | 49.1858 | 36.1404 | 38.8570 | 38.7268 |
| 0.1415 | 10.0 | 400 | 0.1120 | 51.2320 | 37.3103 | 39.2783 | 39.2011 |
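
The ROUGE columns are on the usual 0-100 scale. A minimal sketch of how such scores are commonly computed with the evaluate library follows; evaluate and its rouge_score backend are assumptions here, as they are not among the framework versions listed below.

```python
import evaluate

# Compute ROUGE-1/2/L/Lsum between generated and reference summaries,
# scaled to 0-100 to match the table above.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a model-generated summary"],
    references=["the reference summary"],
)
print({k: round(v * 100, 4) for k, v in scores.items()})
```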

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.2.0+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1