Update README.md

#3
by gpengzhi - opened
Files changed (1)
  1. README.md +6 -14
README.md CHANGED
```diff
@@ -40,26 +40,18 @@ language:
 - zh
 ---
 
-# Model Card for GemmaX2-28
-
-## Model Details
-
-### Model Description
-
-GemmaX2-28-2B-Pretrain is a language model that results from continual pretraining of Gemma2-2B on a mix of 56 billion tokens of monolingual and parallel data in 28 different languages: Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
-
-GemmaX2-28-2B-v0.1 is the model version of GemmaX2-28-2B-Pretrain after SFT.
+## Model Description
+
+GemmaX2-28-2B-v0.1 is an LLM-based translation model. It has been finetuned on GemmaX2-28-2B-Pretrain, which is a language model developed through continual pretraining of Gemma2-2B using a mix of 56 billion tokens from both monolingual and parallel data across 28 different languages. Please find more details in our paper: [Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study](https://arxiv.org/pdf/2502.02481).
 
 - **Developed by:** Xiaomi
-- **Model type:** A 2B parameter model base on Gemma2, we obtained GemmaX2-28-2B-Pretrain by continuing pre-training on a large amount of monolingual and parallel data. Afterward, GemmaX2-28-2B-v0.1 was derived through supervised fine-tuning on a small set of high-quality instruction data.
-- **Language(s):** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
-- **License:** gemma
-
-### Model Source
-
-- paper: [Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study](https://arxiv.org/pdf/2502.02481)
-
-### Model Performance
+- **Model type:** GemmaX2-28-2B-Pretrain is obtained by continually pretraining Gemma2-2B on a large amount of monolingual and parallel data. Subsequently, GemmaX2-28-2B-v0.1 is derived through supervised finetuning on a small set of high-quality translation instruction data.
+- **Languages:** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
+
+## Model Performance
 
 ![Experimental Result](main.png)
 
@@ -99,4 +91,4 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ## Limitations
 
-GemmaX2-28-2B-v0.1 supports only the 28 most commonly used languages and does not guarantee powerful translation performance for other languages. Additionally, we will continue to improve GemmaX2-28-2B's translation performance, and future models will be release in due course.
+GemmaX2-28-2B-v0.1 only supports the 28 languages listed above and does not guarantee strong translation performance for other languages. We will continue to enhance the translation performance of GemmaX2-28-2B, and future models will be released in due course.
```
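For context on the second hunk: its `@@` header quotes the last line of the README's Python usage snippet, `print(tokenizer.decode(outputs[0], skip_special_tokens=True))`, which this PR leaves unchanged. A minimal sketch of what such a snippet plausibly looks like, assuming the standard `transformers` causal-LM API; the repo id, prompt format, and generation settings below are illustrative assumptions, not taken from this diff:

```python
# Hypothetical usage sketch for GemmaX2-28-2B-v0.1. Only the final print
# line appears in the diff; everything else here is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelSpace/GemmaX2-28-2B-v0.1"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed translation prompt: name the source and target languages, give the
# source sentence ("I love machine translation"), and let the model complete.
text = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```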