
fine-tune-bart

This model is a fine-tuned version of facebook/bart-base on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 0.8951
  • ROUGE-1: 0.3436
  • ROUGE-2: 0.1406
  • ROUGE-L: 0.3117
  • ROUGE-Lsum: 0.3108
  • Gen Len: 15.43
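
A minimal usage sketch, assuming the checkpoint is published on the Hub under this card's repository id (tanatapanun/fine-tune-bart) and is intended for summarization-style generation; the input text and generation lengths below are illustrative placeholders only.

```python
from transformers import pipeline

# Assumed Hub id taken from this card's repository name; point this at a
# local path instead if the checkpoint is not published.
summarizer = pipeline("summarization", model="tanatapanun/fine-tune-bart")

text = "Replace this with an input document of the kind the model was fine-tuned on."
# max_length/min_length are illustrative; the average generated length
# reported above is roughly 15 tokens.
result = summarizer(text, max_length=32, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```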

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
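
A minimal reproduction sketch using the Hugging Face Seq2SeqTrainer API, under the assumption of a standard summarization setup; `train_ds` and `eval_ds` are hypothetical placeholders for the undocumented tokenized datasets, and per-epoch evaluation is inferred from the results table that follows.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

train_ds = eval_ds = None  # placeholders: the fine-tuning dataset is not documented here

# Mirrors the hyperparameters listed above; the default adam_beta1/adam_beta2/
# adam_epsilon already correspond to betas=(0.9, 0.999) and epsilon=1e-08.
training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tune-bart",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # assumed from the per-epoch results table
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
# trainer.train()  # would fail until real tokenized datasets are supplied
```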

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log        | 1.0   | 301  | 0.7910          | 0.2441  | 0.0841  | 0.2149  | 0.2154     | 14.55   |
| 1.8181        | 2.0   | 602  | 0.7323          | 0.256   | 0.0926  | 0.2294  | 0.2291     | 13.25   |
| 1.8181        | 3.0   | 903  | 0.7217          | 0.2794  | 0.1079  | 0.2491  | 0.2465     | 14.48   |
| 0.6902        | 4.0   | 1204 | 0.7233          | 0.3095  | 0.1209  | 0.2782  | 0.277      | 14.38   |
| 0.5826        | 5.0   | 1505 | 0.7241          | 0.2985  | 0.1239  | 0.2628  | 0.2633     | 14.68   |
| 0.5826        | 6.0   | 1806 | 0.7184          | 0.3312  | 0.1309  | 0.2968  | 0.2978     | 15.53   |
| 0.4967        | 7.0   | 2107 | 0.7332          | 0.3127  | 0.1324  | 0.2856  | 0.2857     | 14.86   |
| 0.4967        | 8.0   | 2408 | 0.7419          | 0.3379  | 0.1391  | 0.3027  | 0.3035     | 14.7    |
| 0.429         | 9.0   | 2709 | 0.7580          | 0.3473  | 0.1417  | 0.318   | 0.3178     | 14.65   |
| 0.3799        | 10.0  | 3010 | 0.7505          | 0.338   | 0.1406  | 0.3057  | 0.3033     | 15.18   |
| 0.3799        | 11.0  | 3311 | 0.7783          | 0.3444  | 0.1341  | 0.3139  | 0.3126     | 15.12   |
| 0.341         | 12.0  | 3612 | 0.7893          | 0.3231  | 0.1294  | 0.2991  | 0.2993     | 14.97   |
| 0.341         | 13.0  | 3913 | 0.7957          | 0.347   | 0.1376  | 0.3105  | 0.3101     | 15.3    |
| 0.299         | 14.0  | 4214 | 0.8134          | 0.3275  | 0.1367  | 0.3023  | 0.3012     | 14.84   |
| 0.263         | 15.0  | 4515 | 0.8191          | 0.3125  | 0.1364  | 0.2873  | 0.2875     | 15.17   |
| 0.263         | 16.0  | 4816 | 0.8196          | 0.3276  | 0.1334  | 0.3011  | 0.2996     | 15.32   |
| 0.2394        | 17.0  | 5117 | 0.8389          | 0.3168  | 0.1244  | 0.2856  | 0.2881     | 15.07   |
| 0.2394        | 18.0  | 5418 | 0.8502          | 0.3398  | 0.1328  | 0.3123  | 0.3112     | 15.06   |
| 0.2157        | 19.0  | 5719 | 0.8584          | 0.3257  | 0.1197  | 0.2937  | 0.2936     | 15.36   |
| 0.1957        | 20.0  | 6020 | 0.8633          | 0.3325  | 0.1295  | 0.2986  | 0.2994     | 15.4    |
| 0.1957        | 21.0  | 6321 | 0.8620          | 0.3254  | 0.1208  | 0.2952  | 0.2949     | 15.28   |
| 0.181         | 22.0  | 6622 | 0.8762          | 0.3395  | 0.1306  | 0.3054  | 0.3045     | 15.27   |
| 0.181         | 23.0  | 6923 | 0.8775          | 0.3419  | 0.14    | 0.3137  | 0.3126     | 15.24   |
| 0.1622        | 24.0  | 7224 | 0.8780          | 0.3397  | 0.1311  | 0.3069  | 0.3063     | 15.15   |
| 0.1613        | 25.0  | 7525 | 0.8859          | 0.3231  | 0.1225  | 0.2887  | 0.288      | 15.14   |
| 0.1613        | 26.0  | 7826 | 0.8905          | 0.3289  | 0.1284  | 0.2953  | 0.2941     | 15.23   |
| 0.1463        | 27.0  | 8127 | 0.8883          | 0.3358  | 0.1303  | 0.3002  | 0.2988     | 15.19   |
| 0.1463        | 28.0  | 8428 | 0.8933          | 0.3414  | 0.139   | 0.3113  | 0.3098     | 15.5    |
| 0.1444        | 29.0  | 8729 | 0.8949          | 0.3449  | 0.1369  | 0.311   | 0.31       | 15.43   |
| 0.135         | 30.0  | 9030 | 0.8951          | 0.3436  | 0.1406  | 0.3117  | 0.3108     | 15.43   |
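
The ROUGE columns above are the kind of scores produced by the `evaluate` library's rouge metric inside a `compute_metrics` callback; the sketch below is a hedged illustration with placeholder strings, not a rerun of the actual evaluation set.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder prediction/reference pairs, for illustration only.
predictions = ["a short generated summary of the input document"]
references = ["a short reference summary of the input document"]

scores = rouge.compute(
    predictions=predictions,
    references=references,
    use_stemmer=True,
)
# Keys rouge1, rouge2, rougeL, rougeLsum map to the ROUGE-1/2/L/Lsum columns;
# "Gen Len" is typically the mean length of the generated token sequences.
print({k: round(v, 4) for k, v in scores.items()})
```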

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0
