tiedeman committed
Commit 07437be
1 Parent(s): 0c2a941

Initial commit

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,200 @@
---
library_name: transformers
language:
- be
- bg
- bs
- cs
- csb
- cu
- dsb
- en
- hr
- hsb
- mk
- orv
- pl
- ru
- rue
- sh
- sk
- sl
- sr
- szl
- uk

tags:
- translation
- opus-mt-tc-bible

license: apache-2.0
model-index:
- name: opus-mt-tc-bible-big-sla-en
  results:
  - task:
      name: Translation multi-eng
      type: translation
      args: multi-eng
    dataset:
      name: tatoeba-test-v2020-07-28-v2023-09-26
      type: tatoeba_mt
      args: multi-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 55.6
    - name: chr-F
      type: chrf
      value: 0.70473
---
# opus-mt-tc-bible-big-sla-en

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)

## Model Details

Neural machine translation model for translating from Slavic languages (sla) to English (en).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using the Hugging Face transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release:** 2024-08-17
- **License:** Apache-2.0
- **Language(s):**
  - Source Language(s): bel bos bul ces chu cnr csb dsb hbs hrv hsb mkd multi orv pol rue rus slk slv srp szl ukr
  - Target Language(s): eng
- **Original Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Resources for more information:**
  - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/sla-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
  - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
  - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
  - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
  - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)

## Uses

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

## How to Get Started With the Model

A short code example:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "Nie winię ciebie.",
    "Так это по-немецки сказать нельзя."
]

model_name = "Helsinki-NLP/opus-mt-tc-bible-big-sla-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
# I don't blame you.
# You can't say that in German.
```
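
The converted checkpoint records decoding defaults in its `config.json` (beam search with 4 beams, `max_length` 512); they can be overridden per call to `generate`. A minimal sketch, reusing `model`, `tokenizer` and `src_text` from the example above:

```python
import torch

# run on GPU when available; the generate() arguments override the config defaults
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

inputs = tokenizer(src_text, return_tensors="pt", padding=True).to(device)
translated = model.generate(**inputs, num_beams=4, max_length=512)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```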

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-sla-en")
print(pipe("Nie winię ciebie."))

# expected output: [{'translation_text': "I don't blame you."}]
```
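
The pipeline also accepts a list of sentences and returns one dictionary per input, which is convenient for small batches (a usage sketch):

```python
# the translation pipeline returns [{'translation_text': ...}] for each input sentence
results = pipe([
    "Nie winię ciebie.",
    "Так это по-немецки сказать нельзя.",
])
for r in results:
    print(r["translation_text"])
```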

## Training

- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k); see the tokenization sketch after this list
- **Model Type**: transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
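
The SentencePiece models used for pre-processing ship with this repository as `source.spm` and `target.spm` (see the file listing below); `MarianTokenizer` loads them under the hood. A minimal sketch for inspecting the source-side segmentation directly with the `sentencepiece` library, assuming a local clone of the model repository:

```python
import sentencepiece as spm

# load the source-side SentencePiece model distributed with this checkpoint;
# the relative path assumes the model repository is cloned locally
sp = spm.SentencePieceProcessor(model_file="source.spm")
print(sp.encode("Nie winię ciebie.", out_type=str))
# prints the subword pieces the Marian encoder actually sees
```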

## Evaluation

* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/sla-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair  | testset                              | chr-F   | BLEU | #sent | #words |
|-----------|--------------------------------------|---------|------|-------|--------|
| multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.70473 | 55.6 | 10000 | 74777  |
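
Scores like those in the table can be recomputed with [sacrebleu](https://github.com/mjpost/sacrebleu). A minimal sketch, assuming `hyps.txt` and `refs.txt` are placeholder files holding system outputs and references, one sentence per line:

```python
import sacrebleu

# read hypotheses and references line by line (file names are placeholders)
with open("hyps.txt", encoding="utf-8") as f:
    hyps = [line.rstrip("\n") for line in f]
with open("refs.txt", encoding="utf-8") as f:
    refs = [line.rstrip("\n") for line in f]

# corpus-level metrics: a list of hypotheses and a list of reference streams
print(sacrebleu.corpus_bleu(hyps, [refs]).score)  # BLEU
print(sacrebleu.corpus_chrf(hyps, [refs]).score)  # chrF (sacrebleu uses a 0-100 scale)
```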

## Citation Information

* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w), [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)

```bibtex
@article{tiedemann2023democratizing,
  title={Democratizing neural machine translation with {OPUS-MT}},
  author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
  journal={Language Resources and Evaluation},
  number={58},
  pages={713--755},
  year={2023},
  publisher={Springer Nature},
  issn={1574-0218},
  doi={10.1007/s10579-023-09704-w}
}

@inproceedings{tiedemann-thottingal-2020-opus,
  title = "{OPUS}-{MT} {--} Building open translation services for the World",
  author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
  booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
  month = nov,
  year = "2020",
  address = "Lisboa, Portugal",
  publisher = "European Association for Machine Translation",
  url = "https://aclanthology.org/2020.eamt-1.61",
  pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
  title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
  author = {Tiedemann, J{\"o}rg},
  booktitle = "Proceedings of the Fifth Conference on Machine Translation",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2020.wmt-1.139",
  pages = "1174--1182",
}
```

## Acknowledgements

The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).

## Model conversion info

* transformers version: 4.45.1
* OPUS-MT git hash: a44ab31
* port time: Sun Oct 6 22:22:36 EEST 2024
* port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1 @@
multi-eng tatoeba-test-v2020-07-28-v2023-09-26 0.70473 55.6 10000 74777
benchmark_translations.zip ADDED
File without changes
config.json ADDED
@@ -0,0 +1,46 @@
{
  "_name_or_path": "pytorch-models/opus-mt-tc-bible-big-sla-en",
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "architectures": [
    "MarianMTModel"
  ],
  "attention_dropout": 0.0,
  "bad_words_ids": [
    [
      59890
    ]
  ],
  "bos_token_id": 0,
  "classifier_dropout": 0.0,
  "d_model": 1024,
  "decoder_attention_heads": 16,
  "decoder_ffn_dim": 4096,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 59890,
  "decoder_vocab_size": 59891,
  "dropout": 0.1,
  "encoder_attention_heads": 16,
  "encoder_ffn_dim": 4096,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 940,
  "forced_eos_token_id": 940,
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "max_length": 512,
  "max_position_embeddings": 1024,
  "model_type": "marian",
  "normalize_embedding": false,
  "num_beams": 4,
  "num_hidden_layers": 6,
  "pad_token_id": 59890,
  "scale_embedding": true,
  "share_encoder_decoder_embeddings": true,
  "static_position_embeddings": true,
  "torch_dtype": "float32",
  "transformers_version": "4.18.0.dev0",
  "use_cache": true,
  "vocab_size": 59891
}
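
A quick way to sanity-check these transformer-big settings after download is to load the config through the `transformers` config API (a small sketch):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-sla-en")
print(config.d_model, config.encoder_layers, config.decoder_layers)  # 1024 6 6
print(config.num_beams, config.max_length)  # generation defaults: 4 512
```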
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:015361595f7c4affcfbb5494a5cc5f1118010e812975f36b952b17001d710e5b
size 951074117
source.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c53dd33810479dcc8107fabf6bb09d78608b8d98c3d6dfec482ab803ceaceff
size 868827
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15e81d7bdd7357822c974e4b929b2b143d7085f8eb94f354bbb9b480a659fb4f
size 799108
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"source_lang": "sla", "target_lang": "en", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17/sla-en", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff