
bart_CNN_NLP

This model is a fine-tuned version of facebook/bart-large-cnn on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):

  • Loss: 3.0479
  • Rouge1: 45.8751
  • Rouge2: 28.1917
  • RougeL: 42.0922
  • RougeLsum: 41.9934
  • Gen Len: 6433791.8333 (anomalous; likely a metric-logging artifact)
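The ROUGE scores above are on the 0–100 scale that the Hugging Face `evaluate` package produces when fractions are multiplied by 100. A minimal sketch of the typical computation, using placeholder predictions and references rather than this card's actual evaluation data:

```python
import evaluate

# Placeholder data; the card does not publish its evaluation set.
predictions = ["a fine-tuned bart model summarizes news articles"]
references = ["a bart model fine-tuned to summarize news articles"]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references)

# evaluate returns fractions in [0, 1]; the card reports them scaled by 100.
print({name: round(value * 100, 4) for name, value in scores.items()})
```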

Model description

More information needed

Intended uses & limitations

More information needed
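No usage notes were provided. Since the base checkpoint is facebook/bart-large-cnn, the model presumably targets abstractive summarization; a minimal, hedged loading sketch using the published repository id, Moatasem22/bart_CNN_NLP (the generation settings are illustrative, not values from this card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
summarizer = pipeline("summarization", model="Moatasem22/bart_CNN_NLP")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 years."
)
# max_length / min_length are illustrative defaults, not from the card.
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```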

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 4
  • label_smoothing_factor: 0.1
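A hedged sketch of how the list above maps onto `Seq2SeqTrainingArguments`; the output directory is a placeholder, and this is not the author's actual training script:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart_CNN_NLP",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=4,
    label_smoothing_factor=0.1,
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the Trainer defaults,
    # so they need no explicit arguments here.
)
```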

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len      |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------------:|
| 3.1748        | 0.4   | 40   | 3.1564          | 44.8208 | 26.6733 | 41.2873 | 41.226    | 6433791.8889 |
| 3.0649        | 0.8   | 80   | 2.9386          | 45.8469 | 27.8327 | 41.8543 | 41.8139   | 6433791.8556 |
| 2.6983        | 1.2   | 120  | 2.8712          | 47.7681 | 29.8568 | 43.9396 | 43.8816   | 6433791.8778 |
| 2.6725        | 1.6   | 160  | 2.8698          | 46.6433 | 29.2504 | 43.1299 | 43.0348   | 6433791.9333 |
| 2.7537        | 2.0   | 200  | 2.8534          | 47.0645 | 29.6233 | 43.5479 | 43.4841   | 6433791.8778 |
| 2.3728        | 2.4   | 240  | 2.9305          | 46.1673 | 28.848  | 42.6293 | 42.5577   | 6433791.8889 |
| 2.3572        | 2.8   | 280  | 2.9414          | 47.2408 | 29.4202 | 43.4668 | 43.3747   | 6433791.9    |
| 2.087         | 3.2   | 320  | 3.0366          | 46.652  | 28.7844 | 42.7646 | 42.6204   | 6433791.8778 |
| 2.1212        | 3.6   | 360  | 3.0169          | 46.6902 | 28.1997 | 42.5114 | 42.4226   | 6433791.8222 |
| 2.1264        | 4.0   | 400  | 3.0479          | 45.8751 | 28.1917 | 42.0922 | 41.9934   | 6433791.8333 |

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2