
bart-base-luong

This model is a fine-tuned version of facebook/bart-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2356
  • ROUGE-1: 45.4069
  • ROUGE-2: 23.2838
  • ROUGE-L: 39.4615
  • ROUGE-Lsum: 41.5905
  • Gen Len: 18.0
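
The ROUGE values above are on a 0–100 scale. As a minimal sketch, scores of this kind are typically computed with the Hugging Face `evaluate` library; the predictions and references below are placeholders, not the actual evaluation set:

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the cat sat on the mat"]         # model summaries (placeholder)
references = ["the cat was sitting on the mat"]  # gold summaries (placeholder)

scores = rouge.compute(predictions=predictions, references=references)
# `evaluate` returns values in [0, 1]; this card reports them scaled by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```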

Model description

More information needed

Intended uses & limitations

More information needed
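
The intended task is not documented, but the ROUGE metrics and the fixed generation length of ~18 tokens are consistent with abstractive summarization. A minimal usage sketch under that assumption:

```python
# Assumption: the checkpoint is used for summarization; neither the task
# nor the training data is documented in this card.
from transformers import pipeline

summarizer = pipeline("summarization", model="ntluongg/bart-base-luong")

text = "Insert the document you want to summarize here."  # placeholder input
summary = summarizer(text, max_length=20)  # chosen to match Gen Len ~18
print(summary[0]["summary_text"])
```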

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
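
For reference, a hedged reconstruction of these settings as `Seq2SeqTrainingArguments`; `output_dir` and `predict_with_generate` are assumptions not stated in the card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-luong",      # assumption: not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,     # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 match the transformers defaults.
    predict_with_generate=True,        # assumption: needed to report ROUGE / Gen Len
)
```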

Training results

| Training Loss | Epoch | Step  | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| 0.2702        | 1.0   | 2307  | 0.2429          | 43.0834 | 19.6597 | 36.4303 | 39.1751    | 18.0    |
| 0.2121        | 2.0   | 4615  | 0.2338          | 43.5038 | 20.3513 | 37.1389 | 39.418     | 18.0    |
| 0.1917        | 3.0   | 6922  | 0.2327          | 44.3658 | 21.3002 | 38.0506 | 40.4574    | 18.0    |
| 0.1768        | 4.0   | 9230  | 0.2304          | 44.761  | 22.2373 | 38.713  | 40.955     | 18.0    |
| 0.1658        | 5.0   | 11537 | 0.2310          | 45.176  | 22.8385 | 39.0963 | 41.2373    | 18.0    |
| 0.1567        | 6.0   | 13845 | 0.2327          | 45.2475 | 22.7529 | 38.9987 | 41.2975    | 18.0    |
| 0.1498        | 7.0   | 16152 | 0.2350          | 45.4093 | 22.9187 | 39.1624 | 41.4173    | 18.0    |
| 0.1444        | 8.0   | 18460 | 0.2340          | 45.6332 | 23.1632 | 39.5567 | 41.5893    | 18.0    |
| 0.1406        | 9.0   | 20767 | 0.2353          | 45.1827 | 22.7108 | 39.089  | 41.2022    | 18.0    |
| 0.1385        | 10.0  | 23070 | 0.2356          | 45.4069 | 23.2838 | 39.4615 | 41.5905    | 18.0    |

Framework versions

  • Transformers 4.36.1
  • Pytorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.15.2
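
A quick sanity check that a local environment matches these versions (a sketch; exact pinning may not be strictly required):

```python
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.36.1
print(torch.__version__)         # expected: 2.1.2
print(datasets.__version__)      # expected: 2.20.0
print(tokenizers.__version__)    # expected: 0.15.2
```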

Model size

  • 139M parameters
  • Tensor type: F32
  • Format: Safetensors