rockdrigoma committed
Commit
da9fda5
1 Parent(s): 23b7f6d

Update app.py

Files changed (1)
  1. app.py +78 -7
app.py CHANGED
@@ -1,13 +1,84 @@
  import gradio as gr
  from transformers import AutoModelForSeq2SeqLM
  from transformers import AutoTokenizer

- article='''
- # Team members
- - Emilio Alejandro Morales [(milmor)](https://huggingface.co/milmor)
- - Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- - Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- - Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)
  '''

  model = AutoModelForSeq2SeqLM.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
@@ -27,7 +98,7 @@ gr.Interface(
  ],
  theme="peach",
  title='🌽 Spanish to Nahuatl Automatic Translation',
- description='This model is a T5 Transformer (t5-small) fine-tuned on spanish and nahuatl sentences collected from the web. The dataset is normalized using "sep" normalization from py-elotl. For more details visit https://huggingface.co/hackathon-pln-es/t5-small-spanish-nahuatl',
  examples=[
  'conejo',
  'estrella',
 
  import gradio as gr
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ article='''
+ # t5-small-spanish-nahuatl
+ Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the task of neural machine translation is hard due to the lack of structured data. The most popular datasets, such as the Axolotl dataset and the bible-corpus, consist of only ~16,000 and ~7,000 samples, respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolotl dataset can be found written in more than three different ways. Therefore, in this work we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, and then make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report ChrF and BLEU results.
+
+
+ ## Model description
+ This model is a T5 Transformer ([t5-small](https://huggingface.co/t5-small)) fine-tuned on Spanish and Nahuatl sentences collected from the web. The dataset is normalized using 'sep' normalization from [py-elotl](https://github.com/ElotlMX/py-elotl).
+
+
+ ## Usage
+ ```python
  from transformers import AutoModelForSeq2SeqLM
  from transformers import AutoTokenizer

+ model = AutoModelForSeq2SeqLM.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
+ tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
+
+ model.eval()
+ sentence = 'muchas flores son blancas'
+ input_ids = tokenizer('translate Spanish to Nahuatl: ' + sentence, return_tensors='pt').input_ids
+ outputs = model.generate(input_ids)
+ # outputs = miak xochitl istak
+ outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
+ ```
+
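+ Equivalently, inference can go through the `text2text-generation` pipeline. The snippet below is an illustrative sketch rather than part of the original card; it assumes the standard pipeline output format (a list of dicts with a 'generated_text' key):
+
+ ```python
+ from transformers import pipeline
+
+ # the pipeline wraps tokenization, generation and decoding in a single call
+ translator = pipeline('text2text-generation', model='hackathon-pln-es/t5-small-spanish-nahuatl')
+ result = translator('translate Spanish to Nahuatl: muchas flores son blancas')
+ print(result[0]['generated_text'])  # expected: miak xochitl istak
+ ```
+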
+ ## Approach
+ ### Dataset
+ Since the Axolotl corpus contains misalignments, we select only the best-aligned samples (12,207 samples). We also use the [bible-corpus](https://github.com/christos-c/bible-corpus) (7,821 samples).
+
+ | Axolotl best aligned books |
+ |:-----------------------------------------------------:|
+ | Anales de Tlatelolco |
+ | Diario |
+ | Documentos nauas de la Ciudad de México del siglo XVI |
+ | Historia de México narrada en náhuatl y español |
+ | La tinta negra y roja (antología de poesía náhuatl) |
+ | Memorial Breve (Libro las ocho relaciones) |
+ | Método auto-didáctico náhuatl-español |
+ | Nican Mopohua |
+ | Quinta Relación (Libro las ocho relaciones) |
+ | Recetario Nahua de Milpa Alta D.F |
+ | Testimonios de la antigua palabra |
+ | Trece Poetas del Mundo Azteca |
+ | Una tortillita nomás - Se taxkaltsin saj |
+ | Vida económica de Tenochtitlan |
+
+ Also, to increase the amount of data, we collected 3,000 extra samples from the web.
+
+ ### Model and training
+ We employ two training stages using a multilingual T5-small. This model was chosen because it can handle different vocabularies and task prefixes. T5-small is pretrained on different tasks and languages (French, Romanian, English, German).
+
+ ### Training stage 1 (learning Spanish)
+ In training stage 1 we first introduce Spanish to the model. The objective is to learn a new language rich in data (Spanish) without losing the previously acquired knowledge. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs. We train the model until convergence, adding the prefix "Translate Spanish to English: ".
+
+ ### Training stage 2 (learning Nahuatl)
+ We use the pretrained Spanish-English model to learn Spanish-Nahuatl. Since the amount of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset to our dataset. This two-task training avoids overfitting and makes the model more robust.
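+
+ As an illustration (not the exact training script), the two-stage data preparation can be sketched as follows; the file paths and the loader are placeholders, while the task prefixes and the 20,000-sample mix follow the description above:
+
+ ```python
+ import csv
+ import random
+
+ def load_pairs(path):
+     # each row: source sentence <TAB> target sentence (placeholder files)
+     with open(path, encoding='utf-8') as f:
+         return [(row[0], row[1]) for row in csv.reader(f, delimiter='\t')]
+
+ def add_prefix(pairs, prefix):
+     # T5 is text-to-text: the task prefix is simply prepended to the source side
+     return [(prefix + src, tgt) for src, tgt in pairs]
+
+ # stage 1: learn to read Spanish with the large English-Spanish Anki corpus
+ anki = load_pairs('anki-spa-eng.tsv')  # ~118,964 (Spanish, English) pairs, placeholder path
+ stage1 = add_prefix(anki, 'Translate Spanish to English: ')
+
+ # stage 2: Spanish-Nahuatl pairs plus 20,000 Anki pairs to reduce overfitting
+ es_nah = load_pairs('es-nah.tsv')      # Axolotl + bible-corpus + web samples, placeholder path
+ stage2 = (add_prefix(es_nah, 'translate Spanish to Nahuatl: ')
+           + add_prefix(random.sample(anki, 20000), 'Translate Spanish to English: '))
+ ```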
+
+ ### Training setup
+ We train the models on the same datasets for 660k steps with a batch size of 16 and a learning rate of 2e-5.
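+
+ A minimal sketch of this setup with the Hugging Face `Seq2SeqTrainer` follows; the toy dataset and output directory are placeholders, while the t5-small checkpoint, batch size, learning rate and step count come from the description above:
+
+ ```python
+ from datasets import Dataset
+ from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
+                           DataCollatorForSeq2Seq, Seq2SeqTrainer,
+                           Seq2SeqTrainingArguments)
+
+ tokenizer = AutoTokenizer.from_pretrained('t5-small')
+ model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')
+
+ # toy stand-in for the real prefixed Spanish-Nahuatl training pairs
+ raw = Dataset.from_dict({
+     'source': ['translate Spanish to Nahuatl: muchas flores son blancas'],
+     'target': ['miak xochitl istak'],
+ })
+
+ def tokenize(batch):
+     enc = tokenizer(batch['source'], truncation=True)
+     enc['labels'] = tokenizer(batch['target'], truncation=True)['input_ids']
+     return enc
+
+ train_dataset = raw.map(tokenize, batched=True, remove_columns=['source', 'target'])
+
+ args = Seq2SeqTrainingArguments(
+     output_dir='t5-small-spanish-nahuatl',  # placeholder
+     per_device_train_batch_size=16,         # batch size = 16
+     learning_rate=2e-5,                     # learning rate = 2e-5
+     max_steps=660_000,                      # 660k steps
+ )
+
+ trainer = Seq2SeqTrainer(
+     model=model,
+     args=args,
+     train_dataset=train_dataset,
+     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
+ )
+ trainer.train()
+ ```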
+
+
+ ## Evaluation results
+ For a fair comparison, the models are evaluated on the same 505 validation Nahuatl sentences. We report the results using the ChrF and SacreBLEU Hugging Face metrics:
+
+ | English-Spanish pretraining | Validation loss | BLEU | ChrF |
+ |:----------------------------:|:---------------:|:----:|:-----:|
+ | False | 1.34 | 6.17 | 26.96 |
+ | True | 1.31 | 6.18 | 28.21 |
+
+ The English-Spanish pretraining improves BLEU and ChrF, and leads to faster convergence.
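+
+ For reference, both metrics can be computed with the Hugging Face `evaluate` library as sketched below; the prediction and reference lists are placeholders, not the actual 505-sentence validation set:
+
+ ```python
+ import evaluate
+
+ bleu = evaluate.load('sacrebleu')
+ chrf = evaluate.load('chrf')
+
+ # placeholders for model outputs and reference translations
+ predictions = ['miak xochitl istak']
+ references = [['miak xochitl istak']]  # one list of references per prediction
+
+ print(bleu.compute(predictions=predictions, references=references)['score'])
+ print(chrf.compute(predictions=predictions, references=references)['score'])
+ ```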
+
+ ## References
+ - Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.
+
+ - Ximena Gutierrez-Vasques, Gerardo Sierra, and Hernandez Isaac. 2016. Axolotl: A Web Accessible Parallel Corpus for Spanish-Nahuatl. In International Conference on Language Resources and Evaluation (LREC).
+
+ For more details, see the [model card](https://huggingface.co/hackathon-pln-es/t5-small-spanish-nahuatl).
  '''

  model = AutoModelForSeq2SeqLM.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
 
  ],
  theme="peach",
  title='🌽 Spanish to Nahuatl Automatic Translation',
+ description='This model is a T5 Transformer (t5-small) fine-tuned on Spanish and Nahuatl sentences collected from the web. The dataset is normalized using "sep" normalization from py-elotl.',
  examples=[
  'conejo',
  'estrella',