---
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: cantonese-chinese-parallel-corpus-bart-compare-alpha
    results: []
---

cantonese-chinese-parallel-corpus-bart-compare-alpha

This model is a fine-tuned version of fnlp/bart-base-chinese on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2307
  • Bleu: 28.1911
  • Chrf: 27.3934
  • Gen Len: 13.1593
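The card does not yet include a usage example. Below is a minimal inference sketch, assuming this checkpoint loads the same way as its fnlp/bart-base-chinese base (a BERT-style tokenizer paired with BartForConditionalGeneration) and that the Hub repository id follows the model name above; the translation direction is not stated in the card, so the input text is purely illustrative.

```python
from transformers import BertTokenizer, BartForConditionalGeneration

# Assumed repository id; adjust to the actual Hub path of this checkpoint.
model_id = "raptorkwok/cantonese-chinese-parallel-corpus-bart-compare-alpha"

# fnlp/bart-base-chinese ships a BERT-style tokenizer, so the fine-tuned
# checkpoint is assumed to use the same pairing.
tokenizer = BertTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

text = "你哋今日去咗邊度？"  # illustrative Cantonese input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```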

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
  • mixed_precision_training: Native AMP
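For orientation, the listed values map onto Hugging Face Seq2SeqTrainingArguments roughly as in the sketch below; the actual training script is not part of this card, so output_dir, fp16, and predict_with_generate are assumptions (the Adam betas and epsilon above are the library defaults).

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir and predict_with_generate
# are illustrative additions, not taken from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="cantonese-chinese-parallel-corpus-bart-compare-alpha",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
    predict_with_generate=True,
)
```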

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Chrf    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|
| 1.8245        | 0.14  | 1000  | 1.5392          | 23.4094 | 22.9586 | 12.9471 |
| 1.6283        | 0.29  | 2000  | 1.4433          | 24.6312 | 24.1038 | 12.9882 |
| 1.5527        | 0.43  | 3000  | 1.4074          | 25.4368 | 24.7944 | 13.0385 |
| 1.5125        | 0.58  | 4000  | 1.3743          | 25.6532 | 25.1073 | 13.0069 |
| 1.4572        | 0.72  | 5000  | 1.3468          | 26.2054 | 25.6527 | 13.0221 |
| 1.451         | 0.87  | 6000  | 1.3249          | 26.3433 | 25.7717 | 13.0345 |
| 1.4087        | 1.01  | 7000  | 1.3162          | 26.7569 | 26.0931 | 13.1037 |
| 1.296         | 1.16  | 8000  | 1.2961          | 26.7816 | 26.1834 | 13.0488 |
| 1.285         | 1.3   | 9000  | 1.2881          | 27.1895 | 26.4474 | 13.1257 |
| 1.281         | 1.45  | 10000 | 1.2778          | 27.248  | 26.5723 | 13.072  |
| 1.2809        | 1.59  | 11000 | 1.2772          | 27.3645 | 26.7016 | 13.0937 |
| 1.2741        | 1.74  | 12000 | 1.2568          | 27.3857 | 26.7455 | 13.0646 |
| 1.2658        | 1.88  | 13000 | 1.2552          | 27.4927 | 26.8279 | 13.0988 |
| 1.2412        | 2.03  | 14000 | 1.2632          | 27.5154 | 26.9238 | 13.0482 |
| 1.1303        | 2.17  | 15000 | 1.2627          | 27.7288 | 27.0753 | 13.0828 |
| 1.1449        | 2.32  | 16000 | 1.2596          | 27.7628 | 27.1038 | 13.0667 |
| 1.1352        | 2.46  | 17000 | 1.2465          | 27.9487 | 27.1672 | 13.1585 |
| 1.151         | 2.61  | 18000 | 1.2426          | 27.9699 | 27.2496 | 13.1294 |
| 1.1361        | 2.75  | 19000 | 1.2348          | 27.9343 | 27.218  | 13.0994 |
| 1.1368        | 2.9   | 20000 | 1.2307          | 28.1911 | 27.3934 | 13.1593 |
| 1.1012        | 3.04  | 21000 | 1.2487          | 28.1384 | 27.4055 | 13.1253 |
| 1.0201        | 3.19  | 22000 | 1.2482          | 28.0577 | 27.3169 | 13.1299 |
| 1.0274        | 3.33  | 23000 | 1.2479          | 28.149  | 27.4087 | 13.1401 |
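The card does not state which implementations produced the Bleu and Chrf columns. A minimal sketch, assuming the sacrebleu-backed metrics from the evaluate library; the example strings and reference format are placeholders.

```python
import evaluate

# Assumed metric backends; the card does not state which implementations were used.
bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")

predictions = ["你们今天去了哪里？"]    # detokenized model outputs
references = [["你们今天去了哪里？"]]   # one list of reference strings per prediction

print(bleu.compute(predictions=predictions, references=references)["score"])
print(chrf.compute(predictions=predictions, references=references)["score"])
```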

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.1+cu117
  • Datasets 2.13.1
  • Tokenizers 0.13.3