tiedeman committed on
Commit
541ff5f
1 Parent(s): 27ef404

Initial commit

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,205 @@
+ ---
+ library_name: transformers
+ language:
+ - akl
+ - bcl
+ - bik
+ - bto
+ - ceb
+ - cgc
+ - de
+ - en
+ - es
+ - fil
+ - fr
+ - gor
+ - hil
+ - ify
+ - ilo
+ - krj
+ - mbb
+ - mbt
+ - mog
+ - mrw
+ - msm
+ - mta
+ - obo
+ - pag
+ - pam
+ - pt
+ - sxn
+ - tbl
+ - war
+
+ tags:
+ - translation
+ - opus-mt-tc-bible
+
+ license: apache-2.0
+ model-index:
+ - name: opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa
+   results:
+   - task:
+       name: Translation multi-multi
+       type: translation
+       args: multi-multi
+     dataset:
+       name: tatoeba-test-v2020-07-28-v2023-09-26
+       type: tatoeba_mt
+       args: multi-multi
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 22.4
+     - name: chr-F
+       type: chrf
+       value: 0.39255
+ ---
+ # opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa
+
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [Citation Information](#citation-information)
+ - [Acknowledgements](#acknowledgements)
+
+ ## Model Details
+
+ Neural machine translation model for translating from Philippine languages (phi) to German, English, French, Portuguese, and Spanish (deu+eng+fra+por+spa).
+
+ This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using Hugging Face's transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
+
+ **Model Description:**
+ - **Developed by:** Language Technology Research Group at the University of Helsinki
+ - **Model Type:** Translation (transformer-big)
+ - **Release:** 2024-05-30
+ - **License:** Apache-2.0
+ - **Language(s):**
+   - Source Language(s): akl bcl bik bto ceb cgc fil gor hil ify ilo krj mbb mbt mog mrw msm mta obo pag pam sxn tbl war
+   - Target Language(s): deu eng fra por spa
+   - Valid Target Language Labels: >>deu<< >>eng<< >>fra<< >>por<< >>spa<< >>xxx<<
+ - **Original Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
+ - **Resources for more information:**
+   - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/phi-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
+   - [OPUS-MT-train GitHub repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+   - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
+   - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
+   - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
+   - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
+
+ This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form `>>id<<` (where `id` is a valid target language ID), e.g. `>>deu<<`. A sketch that loops over all five targets follows below.
+
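+ As an illustration (not part of the original card; the repository ID and the Cebuano example sentence are assumptions), the same model can serve all five targets simply by swapping the leading token:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ model_name = "Helsinki-NLP/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+
+ sentence = "Maayong buntag."  # Cebuano: "Good morning." (assumed example)
+ for lang in ("deu", "eng", "fra", "por", "spa"):
+     # prepend the target-language token, then translate
+     batch = tokenizer([f">>{lang}<< {sentence}"], return_tensors="pt")
+     out = model.generate(**batch)
+     print(lang, tokenizer.decode(out[0], skip_special_tokens=True))
+ ```
+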
+ ## Uses
+
+ This model can be used for translation and text-to-text generation.
+
+ ## Risks, Limitations and Biases
+
+ **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+ ## How to Get Started With the Model
+
+ A short code example:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ src_text = [
+     ">>deu<< Replace this with text in an accepted source language.",
+     ">>spa<< This is the second sentence."
+ ]
+
+ model_name = "pytorch-models/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+ translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
+
+ for t in translated:
+     print(tokenizer.decode(t, skip_special_tokens=True))
+ ```
+
+ You can also use OPUS-MT models with the transformers `pipeline` API, for example:
+
+ ```python
+ from transformers import pipeline
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa")
+ print(pipe(">>deu<< Replace this with text in an accepted source language."))
+ ```
+
+ ## Training
+
+ - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
+ - **Pre-processing**: SentencePiece (spm32k,spm32k); see the tokenization sketch after this list
+ - **Model Type:** transformer-big
+ - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
+ - **Training Scripts**: [GitHub repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+
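+ The `source.spm` and `target.spm` files in this repository are the trained SentencePiece models. As a minimal sketch (assuming the `sentencepiece` and `huggingface_hub` packages, neither of which is mentioned in the original card), the source-side tokenization can be inspected directly:
+
+ ```python
+ # Sketch only: load the spm32k source model and look at the subword pieces.
+ import sentencepiece as spm
+ from huggingface_hub import hf_hub_download
+
+ spm_path = hf_hub_download(
+     repo_id="Helsinki-NLP/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa",
+     filename="source.spm",
+ )
+ sp = spm.SentencePieceProcessor(model_file=spm_path)
+ print(sp.encode("Maayong buntag.", out_type=str))  # list of subword pieces
+ ```
+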
+ ## Evaluation
+
+ * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/phi-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
+ * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
+ * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
+ * benchmark results: [benchmark_results.txt](benchmark_results.txt) (a parsing sketch follows the table below)
+ * benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
+
+ | langpair | testset | chr-F | BLEU | #sent | #words |
+ |----------|---------|-------|------|-------|--------|
+ | multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.39255 | 22.4 | 5147 | 36912 |
+
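+ The `benchmark_results.txt` file added in this commit repeats the table above as one whitespace-separated line per language pair. A small parsing sketch (field order taken from the table header above):
+
+ ```python
+ # Sketch: read benchmark_results.txt rows (langpair, testset, chr-F, BLEU, #sent, #words).
+ with open("benchmark_results.txt") as f:
+     for line in f:
+         langpair, testset, chrf, bleu, n_sent, n_words = line.split()
+         print(f"{langpair} on {testset}: chr-F={chrf}, BLEU={bleu} ({n_sent} sentences)")
+ ```
+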
+ ## Citation Information
+
+ * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w), [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/), and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/). Please cite these if you use this model.
+
+ ```bibtex
+ @article{tiedemann2023democratizing,
+   title = {Democratizing neural machine translation with {OPUS-MT}},
+   author = {Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
+   journal = {Language Resources and Evaluation},
+   number = {58},
+   pages = {713--755},
+   year = {2023},
+   publisher = {Springer Nature},
+   issn = {1574-0218},
+   doi = {10.1007/s10579-023-09704-w}
+ }
+
+ @inproceedings{tiedemann-thottingal-2020-opus,
+   title = "{OPUS}-{MT} {--} Building open translation services for the World",
+   author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
+   booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Lisboa, Portugal",
+   publisher = "European Association for Machine Translation",
+   url = "https://aclanthology.org/2020.eamt-1.61",
+   pages = "479--480",
+ }
+
+ @inproceedings{tiedemann-2020-tatoeba,
+   title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
+   author = {Tiedemann, J{\"o}rg},
+   booktitle = "Proceedings of the Fifth Conference on Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Online",
+   publisher = "Association for Computational Linguistics",
+   url = "https://aclanthology.org/2020.wmt-1.139",
+   pages = "1174--1182",
+ }
+ ```
+
+ ## Acknowledgements
+
+ The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
+
+ ## Model conversion info
+
+ * transformers version: 4.45.1
+ * OPUS-MT git hash: 0882077
+ * port time: Tue Oct 8 12:40:26 EEST 2024
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1 @@
+ multi-multi tatoeba-test-v2020-07-28-v2023-09-26 0.39255 22.4 5147 36912
benchmark_translations.zip ADDED
File without changes
config.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "_name_or_path": "pytorch-models/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa",
+   "activation_dropout": 0.0,
+   "activation_function": "relu",
+   "architectures": [
+     "MarianMTModel"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 59759,
+   "decoder_vocab_size": 59760,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 6,
+   "eos_token_id": 355,
+   "forced_eos_token_id": null,
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "max_length": null,
+   "max_position_embeddings": 1024,
+   "model_type": "marian",
+   "normalize_embedding": false,
+   "num_beams": null,
+   "num_hidden_layers": 6,
+   "pad_token_id": 59759,
+   "scale_embedding": true,
+   "share_encoder_decoder_embeddings": true,
+   "static_position_embeddings": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.1",
+   "use_cache": true,
+   "vocab_size": 59760
+ }
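This is a standard Marian transformer-big configuration: hidden size 1024, 6 encoder and 6 decoder layers, 16 attention heads, and a shared 59,760-entry vocabulary. As a sanity-check sketch (not part of the commit; the Helsinki-NLP repo ID is an assumption), these parameters can be inspected without downloading the weights:

```python
# Sketch: load only config.json and print key architecture parameters.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa")
print(config.model_type, config.d_model, config.encoder_layers, config.vocab_size)
# expected output: marian 1024 6 59760
```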
generation_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "_from_model_config": true,
+   "bad_words_ids": [
+     [
+       59759
+     ]
+   ],
+   "bos_token_id": 0,
+   "decoder_start_token_id": 59759,
+   "eos_token_id": 355,
+   "forced_eos_token_id": 355,
+   "max_length": 512,
+   "num_beams": 4,
+   "pad_token_id": 59759,
+   "transformers_version": "4.45.1"
+ }
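These defaults mean `generate()` runs beam search with 4 beams up to 512 tokens, with the pad token (59759) blocked from being produced via `bad_words_ids`. They can be overridden per call; a brief sketch (repo ID assumed):

```python
# Sketch: override the shipped generation defaults for a single call.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>fra<< Good morning."], return_tensors="pt")
out = model.generate(**batch, num_beams=8, max_new_tokens=64)  # overrides num_beams=4
print(tokenizer.decode(out[0], skip_special_tokens=True))
```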
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:529488d2ec2292ab8e2699ceccce35bcc5fc4e9acd43c84f64a2a4e1c824be5e
+ size 950475120
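The weight files are stored via Git LFS, so the repository itself holds only this pointer: the SHA-256 object ID and the byte size. A hedged sketch (assuming the `huggingface_hub` package and the Helsinki-NLP repo ID) for verifying a downloaded copy against the pointer above:

```python
# Sketch: download model.safetensors and check it against the LFS pointer.
import hashlib
from huggingface_hub import hf_hub_download

EXPECTED_OID = "529488d2ec2292ab8e2699ceccce35bcc5fc4e9acd43c84f64a2a4e1c824be5e"

path = hf_hub_download(
    repo_id="Helsinki-NLP/opus-mt-tc-bible-big-phi-deu_eng_fra_por_spa",
    filename="model.safetensors",
)
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED_OID, "checksum mismatch"
print("checksum OK:", path)
```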
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c96535419b1afed8160a37e1cc62a62893b5a28b146272e56495b7f30a76f1b8
+ size 950526341
source.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0faae32b9b3446cb1b85c2c18ae3888f20b2461f64dbe4d335ae2d3cdf3f6766
+ size 787583
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b3ef4bd2511e5b6a4a7ffda8a7c387ba0e670061b874537d4d7ae16cfd8bcb4
+ size 811703
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"source_lang": "phi", "target_lang": "deu+eng+fra+por+spa", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30/phi-deu+eng+fra+por+spa", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff