This is the 4-bit quantized GGUF model.

Original model: https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B


---
language:
- th
- en
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless

---


# SambaLingo-Thai-Chat-70B

<img src="/sambanovasystems/SambaLingo-Thai-Chat-70B/resolve/main/SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Thai-Chat-70B is a human-aligned chat model trained in Thai and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Thai-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Base-70B). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).

## Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post:** [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)

## Getting Started

### Loading Model With Hugging Face
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", torch_dtype="auto")
```

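Note that the snippet above loads the original full-precision checkpoint. This repository hosts the 4-bit GGUF quantization, which is intended for llama.cpp-compatible runtimes instead. Below is a minimal sketch using llama-cpp-python, assuming the library is installed and the GGUF file has been downloaded locally; the filename is a placeholder, not the actual file name in this repository.

```python
# Minimal sketch for the GGUF quantization, assuming llama-cpp-python is
# installed; the model_path filename is a placeholder, not the real file name.
from llama_cpp import Llama

llm = Llama(
    model_path="sambalingo-thai-chat-70b.Q4.gguf",  # placeholder GGUF path
    n_ctx=2048,       # matches the 2048-token training sequence length
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Use the chat template described in the Prompting Guidelines section below.
prompt = "<|user|>\nWhat is the capital of Thailand?</s>\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, temperature=0.8, top_p=0.9)
print(out["choices"][0]["text"])
```
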
### Interacting With Model Pipeline
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", use_fast=False)
messages = [
    {"role": "user", "content": "YOUR_QUESTION"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```

### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9

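With the pipeline from the section above, these settings can be passed straight through as generation arguments. A short sketch, where `pipe` and `prompt` are the objects defined in the earlier example:

```python
# Apply the suggested decoding parameters; generation kwargs are forwarded
# to model.generate() by the text-generation pipeline.
outputs = pipe(
    prompt,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"])
```
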

### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```

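Equivalently to `apply_chat_template` in the pipeline example above, the prompt string can be built by hand. A minimal sketch with an example question:

```python
# Substitute the user question into the chat template by hand.
question = "What is the capital of Thailand?"
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"
```
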

### Example Prompts and Generations
```
<|user|>
ประเทศไทยช่วงเช้าเคารพธงชาติเมื่อไร</s>
<|assistant|>
ในประเทศไทย เวลาเคารพธงชาติคือเวลา 08.00 น. และ 18.00 น. ทุกวัน ประชาชนจะยืนตรงและร้องเพลงชาติในช่วงเวลาเหล่านี้เพื่อเป็นสัญลักษณ์ของความรักชาติและความเคารพต่อประเทศ
```
(In English: the user asks "When is the national flag honored in Thailand in the morning?", and the model answers that the flag is honored at 08:00 and 18:00 every day, when people stand at attention and sing the national anthem as a symbol of patriotism and respect for the country.)

## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and comprises two stages: supervised fine-tuning (SFT) and direct preference optimization (DPO).

The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with a Google-translated version of the same dataset. It was trained for one epoch with a global batch size of 512 and a maximum sequence length of 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.

The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) datasets, mixed with 10% of the data Google translated. It was trained with a global batch size of 32 for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.

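As a rough illustration only (this is not the authors' training code), the reported DPO hyperparameters map onto TRL's `DPOTrainer` roughly as follows; `model`, `tokenizer`, and `preference_dataset` are placeholders, and argument names follow recent TRL releases.

```python
# Rough sketch of the reported DPO hyperparameters using TRL's DPOTrainer.
# NOT the authors' actual training code; model, tokenizer, and
# preference_dataset are placeholders for the SFT model, its tokenizer, and
# the mixed ultrafeedback / cai-conversation-harmless data.
from trl import DPOConfig, DPOTrainer

dpo_args = DPOConfig(
    output_dir="sambalingo-thai-dpo",  # placeholder output path
    beta=0.1,                          # DPO regularization factor from the card
    learning_rate=5e-7,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    per_device_train_batch_size=1,     # combine devices/accumulation to reach a global batch of 32
    max_length=2048,
)

trainer = DPOTrainer(
    model=model,
    args=dpo_args,
    train_dataset=preference_dataset,
    processing_class=tokenizer,
)
trainer.train()
```
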

## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.

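An illustrative sketch of this kind of vocabulary extension with the `transformers` API follows; it is not the exact procedure used for this model, and the token list is a placeholder.

```python
# Illustrative vocabulary extension: add new-language tokens to the tokenizer
# and resize the model's embedding matrix accordingly. The token list below is
# a placeholder for tokens learned from Thai text.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-70b-hf"
tokenizer = AutoTokenizer.from_pretrained(base, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto", torch_dtype="auto")

new_thai_tokens = ["ตัวอย่าง", "ภาษาไทย"]  # placeholder tokens
num_added = tokenizer.add_tokens(new_thai_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```
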
## Evaluation
For evaluation results, see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.


### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.


## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo

## Cite SambaLingo
```
@misc{csaki2024sambalingo,
      title={SambaLingo: Teaching Large Language Models New Languages},
      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
      year={2024},
      eprint={2404.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```