tiedeman committed on
Commit
39acdc3
1 Parent(s): 45a66d7

Initial commit

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,230 @@
+ ---
+ library_name: transformers
+ language:
+ - af
+ - ang
+ - bar
+ - bi
+ - bzj
+ - da
+ - de
+ - djk
+ - drt
+ - en
+ - enm
+ - es
+ - fo
+ - fr
+ - frr
+ - fy
+ - gos
+ - got
+ - gsw
+ - hrx
+ - hwc
+ - icr
+ - is
+ - jam
+ - kri
+ - ksh
+ - lb
+ - li
+ - nb
+ - nds
+ - nl
+ - nn
+ - no
+ - non
+ - ofs
+ - pcm
+ - pdc
+ - pfl
+ - pih
+ - pis
+ - pt
+ - rop
+ - sco
+ - srm
+ - srn
+ - stq
+ - sv
+ - swg
+ - tcs
+ - tpi
+ - vls
+ - wae
+ - yi
+ - zea
+ 
+ tags:
+ - translation
+ - opus-mt-tc-bible
+ 
+ license: apache-2.0
+ model-index:
+ - name: opus-mt-tc-bible-big-deu_eng_fra_por_spa-gem
+   results:
+   - task:
+       name: Translation multi-multi
+       type: translation
+       args: multi-multi
+     dataset:
+       name: tatoeba-test-v2020-07-28-v2023-09-26
+       type: tatoeba_mt
+       args: multi-multi
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 49.6
+     - name: chr-F
+       type: chrf
+       value: 0.68215
+ ---
+ # opus-mt-tc-bible-big-deu_eng_fra_por_spa-gem
+ 
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [Citation Information](#citation-information)
+ - [Acknowledgements](#acknowledgements)
+ 
+ ## Model Details
+ 
+ Neural machine translation model for translating from German, English, French, Portuguese and Spanish (deu+eng+fra+por+spa) to Germanic languages (gem).
+ 
+ This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models are originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT framework written in pure C++, and then converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
+ 
+ **Model Description:**
+ - **Developed by:** Language Technology Research Group at the University of Helsinki
+ - **Model Type:** Translation (transformer-big)
+ - **Release:** 2024-05-30
+ - **License:** Apache-2.0
+ - **Language(s):**
+   - Source Language(s): deu eng fra por spa
+   - Target Language(s): afr ang bar bis bzj dan deu djk drt eng enm fao frr fry gos got gsw hrx hwc icr isl jam kri ksh lim ltz nds nld nno nob non nor ofs pcm pdc pfl pih pis rop sco srm srn stq swe swg tcs tpi vls wae yid zea
+   - Valid Target Language Labels: >>act<< >>afr<< >>afs<< >>aig<< >>ang<< >>ang_Latn<< >>bah<< >>bar<< >>bis<< >>bjs<< >>brc<< >>bzj<< >>bzk<< >>cim<< >>dan<< >>dcr<< >>deu<< >>djk<< >>djk_Latn<< >>drt<< >>drt_Latn<< >>dum<< >>eng<< >>enm<< >>enm_Latn<< >>fao<< >>fpe<< >>frk<< >>frr<< >>fry<< >>gcl<< >>gct<< >>geh<< >>gmh<< >>gml<< >>goh<< >>gos<< >>got<< >>got_Goth<< >>gpe<< >>gsw<< >>gul<< >>gyn<< >>hrx<< >>hrx_Latn<< >>hwc<< >>icr<< >>isl<< >>jam<< >>jut<< >>jvd<< >>kri<< >>ksh<< >>kww<< >>lim<< >>lng<< >>ltz<< >>mhn<< >>nds<< >>nld<< >>nno<< >>nob<< >>non<< >>nor<< >>nrn<< >>odt<< >>ofs<< >>ofs_Latn<< >>oor<< >>osx<< >>ovd<< >>pcm<< >>pdc<< >>pdt<< >>pey<< >>pfl<< >>pih<< >>pih_Latn<< >>pis<< >>rmg<< >>rop<< >>sco<< >>sdz<< >>skw<< >>sli<< >>srm<< >>srn<< >>stl<< >>stq<< >>svc<< >>swe<< >>swg<< >>sxu<< >>tch<< >>tcs<< >>tgh<< >>tpi<< >>trf<< >>twd<< >>uln<< >>vel<< >>vic<< >>vls<< >>vmf<< >>wae<< >>wep<< >>wes<< >>wym<< >>xvn<< >>xxx<< >>yec<< >>yid<< >>zea<<
+ - **Original Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-gem/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
+ - **Resources for more information:**
+   - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-gem/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
+   - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+   - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
+   - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
+   - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
+   - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
+ 
+ This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>afr<<`.
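+ 
+ The full set of valid labels can also be read directly from the tokenizer. A minimal sketch (an illustration, assuming the labels are stored as ordinary vocabulary entries of the form `>>id<<`):
+ 
+ ```python
+ from transformers import MarianTokenizer
+ 
+ tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-gem")
+ 
+ # Collect every vocabulary entry that looks like a target-language label.
+ labels = sorted(t for t in tokenizer.get_vocab() if t.startswith(">>") and t.endswith("<<"))
+ print(labels[:5])  # e.g. ['>>act<<', '>>afr<<', ...]
+ ```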
+ 
+ ## Uses
+ 
+ This model can be used for translation and text-to-text generation.
+ 
+ ## Risks, Limitations and Biases
+ 
+ **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+ 
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+ 
+ ## How to Get Started With the Model
+ 
+ A short code example:
+ 
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+ 
+ # Each input sentence starts with a target-language token.
+ src_text = [
+     ">>afr<< Replace this with text in an accepted source language.",
+     ">>zea<< This is the second sentence."
+ ]
+ 
+ model_name = "Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-gem"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+ translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
+ 
+ for t in translated:
+     print(tokenizer.decode(t, skip_special_tokens=True))
+ ```
+ 
+ You can also use OPUS-MT models with the transformers pipeline API, for example:
+ 
+ ```python
+ from transformers import pipeline
+ 
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-gem")
+ print(pipe(">>afr<< Replace this with text in an accepted source language."))
+ ```
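+ 
+ The pipeline also accepts a batch of sentences, each carrying its own target-language label; a small sketch:
+ 
+ ```python
+ print(pipe([">>afr<< One sentence per target language.", ">>zea<< Another sentence."]))
+ ```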
+ 
+ ## Training
+ 
+ - **Data:** opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
+ - **Pre-processing:** SentencePiece (spm32k,spm32k); see the sketch after this list
+ - **Model Type:** transformer-big
+ - **Original MarianNMT Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-gem/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
+ - **Training Scripts:** [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
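+ 
+ The `source.spm` and `target.spm` files shipped with this repository are the SentencePiece models used for pre-processing. A minimal sketch of inspecting one directly (an illustration; the `sentencepiece` package and a local copy of the file are assumed):
+ 
+ ```python
+ import sentencepiece as spm
+ 
+ # Load the source-side SentencePiece model and segment a sentence.
+ sp = spm.SentencePieceProcessor(model_file="source.spm")
+ print(sp.encode("This is the second sentence.", out_type=str))
+ ```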
+ 
+ ## Evaluation
+ 
+ * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-gem/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
+ * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-gem/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
+ * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-gem/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
+ * benchmark results: [benchmark_results.txt](benchmark_results.txt)
+ * benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
+ 
+ | langpair | testset | chr-F | BLEU | #sent | #words |
+ |----------|---------|-------|------|-------|--------|
+ | multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.68215 | 49.6 | 10000 | 83403 |
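+ 
+ The chr-F and BLEU numbers above can be recomputed from the linked test set translations with sacrebleu; a minimal sketch, assuming one hypothesis and one reference per line in local files (note that sacrebleu reports chrF on a 0–100 scale, while the table uses 0–1):
+ 
+ ```python
+ import sacrebleu
+ 
+ # hyp.txt / ref.txt are hypothetical file names for system output and references.
+ hyps = open("hyp.txt", encoding="utf-8").read().splitlines()
+ refs = open("ref.txt", encoding="utf-8").read().splitlines()
+ 
+ print(sacrebleu.corpus_bleu(hyps, [refs]).score)  # BLEU
+ print(sacrebleu.corpus_chrf(hyps, [refs]).score)  # chrF
+ ```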
+ 
+ ## Citation Information
+ 
+ * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w), [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite if you use this model)
+ 
+ ```bibtex
+ @article{tiedemann2023democratizing,
+   title={Democratizing neural machine translation with {OPUS-MT}},
+   author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
+   journal={Language Resources and Evaluation},
+   volume={58},
+   pages={713--755},
+   year={2023},
+   publisher={Springer Nature},
+   issn={1574-0218},
+   doi={10.1007/s10579-023-09704-w}
+ }
+ 
+ @inproceedings{tiedemann-thottingal-2020-opus,
+   title = "{OPUS}-{MT} {--} Building open translation services for the World",
+   author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
+   booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Lisboa, Portugal",
+   publisher = "European Association for Machine Translation",
+   url = "https://aclanthology.org/2020.eamt-1.61",
+   pages = "479--480",
+ }
+ 
+ @inproceedings{tiedemann-2020-tatoeba,
+   title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
+   author = {Tiedemann, J{\"o}rg},
+   booktitle = "Proceedings of the Fifth Conference on Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Online",
+   publisher = "Association for Computational Linguistics",
+   url = "https://aclanthology.org/2020.wmt-1.139",
+   pages = "1174--1182",
+ }
+ ```
+ 
+ ## Acknowledgements
+ 
+ The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
+ 
+ ## Model conversion info
+ 
+ * transformers version: 4.45.1
+ * OPUS-MT git hash: 0882077
+ * port time: Tue Oct 8 09:19:25 EEST 2024
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1 @@
+ multi-multi	tatoeba-test-v2020-07-28-v2023-09-26	0.68215	49.6	10000	83403
benchmark_translations.zip ADDED
File without changes
config.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "_name_or_path": "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-gem",
+   "activation_dropout": 0.0,
+   "activation_function": "relu",
+   "architectures": [
+     "MarianMTModel"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 49012,
+   "decoder_vocab_size": 49013,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 6,
+   "eos_token_id": 467,
+   "forced_eos_token_id": null,
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "max_length": null,
+   "max_position_embeddings": 1024,
+   "model_type": "marian",
+   "normalize_embedding": false,
+   "num_beams": null,
+   "num_hidden_layers": 6,
+   "pad_token_id": 49012,
+   "scale_embedding": true,
+   "share_encoder_decoder_embeddings": true,
+   "static_position_embeddings": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.1",
+   "use_cache": true,
+   "vocab_size": 49013
+ }
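For reference, a minimal sketch of inspecting these architecture parameters programmatically (an illustration using the Hub model id):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-gem")
# transformer-big: 6+6 layers, 16 heads, d_model 1024, shared ~49k vocabulary.
print(config.encoder_layers, config.decoder_layers, config.d_model, config.vocab_size)
```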
generation_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "_from_model_config": true,
+   "bad_words_ids": [
+     [
+       49012
+     ]
+   ],
+   "bos_token_id": 0,
+   "decoder_start_token_id": 49012,
+   "eos_token_id": 467,
+   "forced_eos_token_id": 467,
+   "max_length": 512,
+   "num_beams": 4,
+   "pad_token_id": 49012,
+   "transformers_version": "4.45.1"
+ }
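These defaults (beam search with 4 beams, maximum length 512) are picked up automatically by `generate()`; a minimal sketch of overriding them per call (an illustration, assuming `model` and `tokenizer` are loaded as in the README example):

```python
inputs = tokenizer([">>afr<< A sentence."], return_tensors="pt", padding=True)
out = model.generate(**inputs, num_beams=8, max_length=256)  # override the defaults
print(tokenizer.decode(out[0], skip_special_tokens=True))
```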
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8e482982f155289cf90946a5e9fe80c377a3d806c3598b8cd2f931478a556e89
+ size 906412420
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:732081d2d0584269a8704a624e3de27d0712cb0e996880cd7306d4b7c4f6f007
+ size 906463685
source.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79c8b7a9e57ea63a56aa0e8d18f5b332a6677dd9c9d1095348c5058013d753f0
+ size 807155
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bcb203d294a9365335ba0cd08c8537f6ae78c42009cb024eaf1c067bcb3b672
+ size 790660
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"source_lang": "deu+eng+fra+por+spa", "target_lang": "gem", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30/deu+eng+fra+por+spa-gem", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff