---
license: apache-2.0
datasets:
- opus_books
- iwslt2017
language:
- en
- nl
pipeline_tag: text2text-generation
tags:
- translation
metrics:
- bleu
- chrf
- chrf++
widget:
- text: '>>en<< Was het leuk?'
---

# Model Card for mt5-small nl-en translation
The mt5-small nl-en translation model is a finetuned version of google/mt5-small.
It was finetuned on 237k rows of the iwslt2017 dataset and roughly 38k rows of the opus_books dataset. The model was trained in multiple phases with different numbers of epochs and batch sizes.
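The original training script is not part of this card. As a rough, non-authoritative sketch, the source data could be assembled as follows; the dataset configuration names and the preprocessing are assumptions, and the `>>en<<` prefix matches the identifier the model expects at inference time.

```python
from datasets import load_dataset

# Assumed nl-en subset names; check the dataset cards for the exact configurations.
iwslt = load_dataset("iwslt2017", "iwslt2017-en-nl", split="train")
books = load_dataset("opus_books", "en-nl", split="train")

def to_pair(example):
    # Prefix the Dutch source with the >>en<< identifier the model expects.
    return {"source": ">>en<< " + example["translation"]["nl"],
            "target": example["translation"]["en"]}

train_pairs = [to_pair(ex) for ex in iwslt] + [to_pair(ex) for ex in books]
print(len(train_pairs), train_pairs[0])
```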
## How to use
### Install dependencies

```bash
pip install transformers
pip install sentencepiece
pip install protobuf
```
You can use the following code for model inference. The model was finetuned to expect a target-language identifier at the start of the input (here `>>en<<`); include it to get the best results.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Michielo/mt5-small_nl-en_translation")
model = AutoModelForSeq2SeqLM.from_pretrained("Michielo/mt5-small_nl-en_translation")

# generation settings (example values; tune as needed)
generation_config = GenerationConfig(max_new_tokens=128, num_beams=4)

# tokenize input
inputs = tokenizer(">>en<< Your Dutch text here", return_tensors="pt")

# calculate the output
outputs = model.generate(**inputs, generation_config=generation_config)

# decode and print
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
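As an alternative to calling `generate` directly, the same checkpoint can be wrapped in a `pipeline`. This is a minimal sketch rather than part of the original instructions, and `max_new_tokens=128` is an example value.

```python
from transformers import pipeline

# text2text-generation pipeline around the same checkpoint
translator = pipeline("text2text-generation", model="Michielo/mt5-small_nl-en_translation")

result = translator(">>en<< Was het leuk?", max_new_tokens=128)
print(result[0]["generated_text"])
```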
## Benchmarks
| Benchmark | Score  |
|-----------|--------|
| BLEU      | 51.92% |
| chrF      | 67.90% |
| chrF++    | 67.62% |
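The exact evaluation setup is not described in this card. As an illustration only, scores of this kind can be computed with the `evaluate` package; the sentences below are placeholders, not the actual test data.

```python
import evaluate

# placeholder outputs and references for illustration only
predictions = ["Was it fun?"]
references = [["Was it fun?"]]

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")

print("BLEU:  ", bleu.compute(predictions=predictions, references=references)["score"])
print("chrF:  ", chrf.compute(predictions=predictions, references=references)["score"])
# chrF++ additionally uses word n-grams (word_order=2)
print("chrF++:", chrf.compute(predictions=predictions, references=references, word_order=2)["score"])
```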
## License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.