---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: romaneng2nep_v2
    results: []
datasets:
  - syubraj/roman2nepali-transliteration
language:
  - ne
  - en
---

# romaneng2nep_v2

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [syubraj/roman2nepali-transliteration](https://huggingface.co/datasets/syubraj/roman2nepali-transliteration) dataset. It achieves the following results on the evaluation set:

- Loss: 2.7225
- Gen Len: 5.2131

## Model description

More information needed

## Intended uses & limitations

More information needed
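
Until usage is documented, the sketch below shows one plausible way to run transliteration with this checkpoint. The hub repo id `syubraj/romaneng2nep_v3` and the romanized example input are assumptions, not documented usage.

```python
# Minimal inference sketch (assumptions: repo id and input format).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "syubraj/romaneng2nep_v3"  # assumed hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Romanized Nepali in, Devanagari out (assumed input/output convention).
inputs = tokenizer("namaste", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)  # eval Gen Len is ~5 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```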

## Training and evaluation data

More information needed
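
The metadata lists syubraj/roman2nepali-transliteration as the dataset. A minimal sketch for inspecting it with the `datasets` library; the split and column names are not documented here, so check the printed features first.

```python
# Sketch: load and inspect the listed dataset; split/column names are assumptions.
from datasets import load_dataset

ds = load_dataset("syubraj/roman2nepali-transliteration")
print(ds)              # available splits and row counts
print(ds["train"][0])  # one example pair, assuming a "train" split exists
```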

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
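
A hedged sketch of how these values map onto `Seq2SeqTrainingArguments`; `output_dir`, the eval cadence, and `predict_with_generate` are assumptions inferred from the results table, not confirmed settings.

```python
# Sketch only: the listed hyperparameters mapped to Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="romaneng2nep",     # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    eval_strategy="steps",         # inferred: the table reports eval every 1000 steps
    eval_steps=1000,
    predict_with_generate=True,    # inferred: needed to report Gen Len / BLEU
)
```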

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 15.2574       | 0.0626 | 1000  | 5.7371          | 2.0266  |
| 6.1453        | 0.1251 | 2000  | 4.5094          | 4.5514  |
| 5.3182        | 0.1877 | 3000  | 4.0351          | 4.7656  |
| 4.9218        | 0.2503 | 4000  | 3.6947          | 4.9841  |
| 4.6397        | 0.3128 | 5000  | 3.4644          | 5.1216  |
| 4.433         | 0.3754 | 6000  | 3.3009          | 5.2036  |
| 4.2494        | 0.4380 | 7000  | 3.1525          | 5.1748  |
| 4.1467        | 0.5005 | 8000  | 3.0482          | 5.232   |
| 4.0272        | 0.5631 | 9000  | 2.9592          | 5.253   |
| 3.9598        | 0.6257 | 10000 | 2.8917          | 5.1893  |
| 3.9116        | 0.6882 | 11000 | 2.8292          | 5.2252  |
| 3.8435        | 0.7508 | 12000 | 2.7871          | 5.2148  |
| 3.8047        | 0.8134 | 13000 | 2.7574          | 5.2123  |
| 3.7818        | 0.8759 | 14000 | 2.7338          | 5.2409  |
| 3.7764        | 0.9385 | 15000 | 2.7225          | 5.2131  |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0