haoranxu committed
Commit 473ed92
1 Parent(s): ff56011

Update README.md

Files changed (1): README.md +139 -3

README.md CHANGED
@@ -1,3 +1,139 @@
- ---
- license: mit
- ---
---
license: mit
datasets:
- oscar-corpus/OSCAR-2301
- allenai/nllb
- Helsinki-NLP/opus-100
language:
- en
- da
- nl
- de
- is
- 'no'
- sv
- af
- ca
- ro
- gl
- it
- pt
- es
- bg
- mk
- sr
- uk
- ru
- id
- ms
- th
- vi
- mg
- fr
- hu
- el
- cs
- pl
- lt
- lv
- ka
- zh
- ja
- ko
- fi
- et
- gu
- hi
- mr
- ne
- ur
- az
- kk
- ky
- tr
- uz
- ar
- he
- fa
base_model:
- haoranxu/ALMA-13B-Pretrain
- meta-llama/Llama-2-13b-hf
---

X-ALMA builds upon [ALMA-R](https://arxiv.org/pdf/2401.08417) by expanding support from 6 to 50 languages. It utilizes a plug-and-play architecture with language-specific modules, complemented by a carefully designed training recipe. This release includes the **X-ALMA pre-trained base model**.

X-ALMA-13B-Pretrain is pre-trained on 50 languages: en, da, nl, de, is, no, sv, af, ca, ro, gl, it, pt, es, bg, mk, sr, uk, ru, id, ms, th, vi, mg, fr, hu, el, cs, pl, lt, lv, ka, zh, ja, ko, fi, et, gu, hi, mr, ne, ur, az, kk, ky, tr, uz, ar, he, fa.

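This card hosts the multilingual pre-trained backbone itself. As a minimal sketch (standard `transformers` usage, not specific to X-ALMA), it loads like any other causal LM; for translation you would normally use one of the merged Group checkpoints or adapters listed below:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 13B multilingual pre-trained backbone (no language-specific module attached yet).
model = AutoModelForCausalLM.from_pretrained("haoranxu/X-ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/X-ALMA-13B-Pretrain", padding_side='left')
```
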
All X-ALMA checkpoints are released on Hugging Face:
| Models | Model Link | Description |
|:-------------:|:---------------:|:---------------:|
| X-ALMA | [haoranxu/X-ALMA](https://huggingface.co/haoranxu/X-ALMA) | X-ALMA model with all its modules |
| X-ALMA-13B-Pretrain | [haoranxu/X-ALMA-13B-Pretrain](https://huggingface.co/haoranxu/X-ALMA-13B-Pretrain) | X-ALMA 13B multilingual pre-trained base model |
| X-ALMA-Group1 | [haoranxu/X-ALMA-13B-Group1](https://huggingface.co/haoranxu/X-ALMA-13B-Group1) | X-ALMA group 1 language-specific module and the merged model |
| X-ALMA-Group2 | [haoranxu/X-ALMA-13B-Group2](https://huggingface.co/haoranxu/X-ALMA-13B-Group2) | X-ALMA group 2 language-specific module and the merged model |
| X-ALMA-Group3 | [haoranxu/X-ALMA-13B-Group3](https://huggingface.co/haoranxu/X-ALMA-13B-Group3) | X-ALMA group 3 language-specific module and the merged model |
| X-ALMA-Group4 | [haoranxu/X-ALMA-13B-Group4](https://huggingface.co/haoranxu/X-ALMA-13B-Group4) | X-ALMA group 4 language-specific module and the merged model |
| X-ALMA-Group5 | [haoranxu/X-ALMA-13B-Group5](https://huggingface.co/haoranxu/X-ALMA-13B-Group5) | X-ALMA group 5 language-specific module and the merged model |
| X-ALMA-Group6 | [haoranxu/X-ALMA-13B-Group6](https://huggingface.co/haoranxu/X-ALMA-13B-Group6) | X-ALMA group 6 language-specific module and the merged model |
| X-ALMA-Group7 | [haoranxu/X-ALMA-13B-Group7](https://huggingface.co/haoranxu/X-ALMA-13B-Group7) | X-ALMA group 7 language-specific module and the merged model |
| X-ALMA-Group8 | [haoranxu/X-ALMA-13B-Group8](https://huggingface.co/haoranxu/X-ALMA-13B-Group8) | X-ALMA group 8 language-specific module and the merged model |

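If you want to pre-download any of these checkpoints, a minimal sketch with the `huggingface_hub` client (an optional step, since `transformers` also downloads them automatically on first use):
```
from huggingface_hub import snapshot_download

# Fetch the pre-trained base checkpoint into the local Hugging Face cache.
snapshot_download(repo_id="haoranxu/X-ALMA-13B-Pretrain")
```
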
## A quick start
There are three ways to load X-ALMA for translation. Below is an example of translating "我爱机器翻译。" ("I love machine translation.") into English (X-ALMA is also able to perform multilingual open-ended QA).

**The first way**: loading the merged model, where the language-specific module has already been merged into the base model **(Recommended)**:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
from peft import PeftModel

# Each language group and the languages it covers; look up the group for the example language.
GROUP2LANG = {
    1: ["da", "nl", "de", "is", "no", "sv", "af"],
    2: ["ca", "ro", "gl", "it", "pt", "es"],
    3: ["bg", "mk", "sr", "uk", "ru"],
    4: ["id", "ms", "th", "vi", "mg", "fr"],
    5: ["hu", "el", "cs", "pl", "lt", "lv"],
    6: ["ka", "zh", "ja", "ko", "fi", "et"],
    7: ["gu", "hi", "mr", "ne", "ur"],
    8: ["az", "kk", "ky", "tr", "uz", "ar", "he", "fa"],
}
LANG2GROUP = {lang: str(group) for group, langs in GROUP2LANG.items() for lang in langs}
group_id = LANG2GROUP["zh"]

model = AutoModelForCausalLM.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"

# X-ALMA needs the chat template; ALMA and ALMA-R do not.
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```

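Note that `generated_ids` also contains the prompt tokens, so the decoded string starts with the prompt itself. A minimal sketch (reusing `input_ids` and `generated_ids` from the block above) for keeping only the newly generated text:
```
# Drop the prompt tokens and decode only what the model generated.
new_tokens = generated_ids[:, input_ids.shape[1]:]
translation = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0]
print(translation)  # e.g. an English translation of the source sentence
```
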
**The second way**: loading the base model and language-specific module **(Recommended)**:
```
model = AutoModelForCausalLM.from_pretrained("haoranxu/X-ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, f"haoranxu/X-ALMA-13B-Group{group_id}")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')
```

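When loading this way, generation works exactly as in the first example. Optionally, the adapter can be folded into the base weights with `peft`'s `merge_and_unload()` (a sketch; the merged Group checkpoints above already ship in this form):
```
# Optional: merge the language-specific LoRA module into the base weights,
# so inference no longer goes through the PEFT wrapper.
model = model.merge_and_unload()
```
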
**The third way**: loading the base model with all language-specific modules, MoE-style (requires large GPU memory):
```
from modeling_xalma import XALMAForCausalLM
model = XALMAForCausalLM.from_pretrained("haoranxu/X-ALMA", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/X-ALMA", padding_side='left')

# Pass `lang="zh"` to tell the model which language group module to use during generation
# (only needed for this third loading method).
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9, lang="zh")
```
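
For completeness, a minimal end-to-end sketch of this third way, assuming `modeling_xalma.py` from the [haoranxu/X-ALMA](https://huggingface.co/haoranxu/X-ALMA) repository is on your Python path; prompt construction is identical to the first example, only the extra `lang` argument differs:
```
import torch
from transformers import AutoTokenizer
from modeling_xalma import XALMAForCausalLM  # provided in the haoranxu/X-ALMA repository

model = XALMAForCausalLM.from_pretrained("haoranxu/X-ALMA", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/X-ALMA", padding_side='left')

prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

with torch.no_grad():
    # `lang` routes generation through the module for the requested language's group.
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9, lang="zh")
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```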