---
language:
- en
- ro
license: apache-2.0
tags:
- translation
- wmt16
- Lvxue
datasets:
- wmt16
metrics:
- sacrebleu
- bleu
model-index:
- name: Lvxue/finetuned-mt5-small-10epoch
  results:
  - task:
      type: translation
      name: Translation
    dataset:
      name: wmt16
      type: wmt16
      config: ro-en
      split: test
    metrics:
    - type: bleu
      value: 6.0012
      name: BLEU
      verified: true
    - type: loss
      value: 1.7407585382461548
      name: loss
      verified: true
    - type: gen_len
      value: 18.2281
      name: gen_len
      verified: true
  - task:
      type: translation
      name: Translation
    dataset:
      name: wmt16
      type: wmt16
      config: cs-en
      split: train
    metrics:
    - type: bleu
      value: 0.4125
      name: BLEU
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTUxNmNhNjY2Nzk3ZTRkOTM0ZjQ4ZDI2ZTZkZTBkMzBkYjc1Y2EyNmVlOWEzYjc3ZjIzOTA5MDk4M2RiMzgxMyIsInZlcnNpb24iOjF9.XzXIZo3AlYuFeaJwBcHJPFML4Um4MER4XbL3RDuwwZeOMDnqnMfspLcrbyvozyZjSC8JWWvpgJ192JflgDv1Cw
    - type: loss
      value: 6.76282262802124
      name: loss
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzI5ZDZkYzQxYTA3YzJjZWY0OWNhOGQ2YTIzOGJmMmY4NDQ4M2Y2ZTJmMWZiNWY0N2Y3Y2VjNDU4ZWMwZTM2OSIsInZlcnNpb24iOjF9.5yJy4N0hC3myiT0Q8kKNnKLbZnRiilaQA3dDxTNm7Sz-rNZqMTs9qUsX5S6Y4S4_hWD1Tny2xJle6o33IMDXBw
    - type: gen_len
      value: 18.4717
      name: gen_len
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdmZGE4ZTg5MGI1MmE3NjFmYzE2MGMyNWJiNGFiYjUwNWRjYjExNmUyNDQ2ZGY1OWJkNWQ4M2U2ZGVmMDYwMSIsInZlcnNpb24iOjF9.YX8bVSFJKTvCyeWRrgwt3W9EVbL9IcC6T4n2LGkz1AHc2aB3e-l8Wwg_yzfI_W3HBmVDveLGw6I7qUg1YgCcAA
---

# finetuned-mt5-small-10epoch

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set:
- Loss: 1.7274
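
No usage example is provided in the card, so a minimal inference sketch with the `transformers` library is shown below. The Romanian-to-English direction and the plain, prefix-free input format are assumptions, since the fine-tuning setup is not documented here.

```python
# Minimal inference sketch (assumptions: ro -> en direction, no task prefix).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Lvxue/finetuned-mt5-small-10epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example Romanian sentence; replace with your own input.
text = "Acesta este un test."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```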
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
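
Although the data handling is not documented, the metadata above lists sacreBLEU on the wmt16 ro-en test split. A minimal evaluation sketch under those assumptions, using the `datasets` and `evaluate` libraries, might look like this:

```python
# Hedged evaluation sketch: score a small slice of the wmt16 ro-en test split
# with sacreBLEU. The ro -> en direction is an assumption.
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Lvxue/finetuned-mt5-small-10epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

dataset = load_dataset("wmt16", "ro-en", split="test[:100]")  # small slice for speed
sacrebleu = evaluate.load("sacrebleu")

predictions, references = [], []
for example in dataset:
    inputs = tokenizer(example["translation"]["ro"], return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=64)
    predictions.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    references.append([example["translation"]["en"]])

print(sacrebleu.compute(predictions=predictions, references=references))
```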
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
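
For reference, the listed hyperparameters map onto `Seq2SeqTrainingArguments` roughly as shown below. The output directory is a placeholder, and any argument not listed above keeps its library default; this is a sketch, not the author's actual training script.

```python
# Hedged sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments
# (Transformers 4.20.1). output_dir is a placeholder, not the author's path.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="finetuned-mt5-small-10epoch",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,      # Adam betas/epsilon as listed (also the defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```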
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1