Davlan committed on
Commit
3bde708
1 Parent(s): deb7145

Update README.md

datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# byt5-base-eng-yor-mt

## Model description
**byt5-base-eng-yor-mt** is a **machine translation** model from English to Yorùbá based on a fine-tuned byt5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
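A minimal inference sketch with the Transformers library might look as follows; the hub id `Davlan/byt5-base-eng-yor-mt` is an assumption based on this card's title, so substitute the actual repository id if it differs.

```python
# Sketch: translate English to Yorùbá with this model via transformers.
# NOTE: the model id below is assumed from the card title, not confirmed.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Davlan/byt5-base-eng-yor-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("I love you.", return_tensors="pt")
# Greedy decoding; tune generation parameters for better output.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```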

Specifically, this model is a *byt5-base* model that was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.

#### Limitations and bias
This model is limited by its training dataset and may not generalize well to all use cases in different domains.

## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.

## Training procedure
This model was trained on an NVIDIA V100 GPU.

## Eval results on Test set (BLEU score)
Fine-tuning byt5-base achieves **12.23 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82 BLEU.
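Scores like these are normally produced with a standard tool such as sacreBLEU. Purely as an illustration of the metric itself, a minimal corpus-level BLEU (geometric mean of clipped 1- to 4-gram precisions times a brevity penalty, one reference per hypothesis, no smoothing) can be sketched as:

```python
# Minimal corpus-level BLEU sketch (illustrative; use sacreBLEU for real evals).
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """BLEU on a 0-100 scale: geometric mean of clipped n-gram precisions
    times a brevity penalty. One reference per hypothesis, no smoothing."""
    matches = [0] * max_n
    totals = [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_ngrams, r_ngrams = ngrams(h, n), ngrams(r, n)
            # Clipped counts: an n-gram is credited at most as often
            # as it occurs in the reference.
            matches[n - 1] += sum((h_ngrams & r_ngrams).values())
            totals[n - 1] += sum(h_ngrams.values())
    if 0 in totals or 0 in matches:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_prec)
```

A perfect hypothesis scores 100; any missing or reordered n-grams lower the score.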