Update README.md
- orpo
---
# Model Summary

Gemma-2-9B-Chinese-Chat is the first instruction-tuned language model built upon [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) for Chinese & English users, with various abilities such as roleplaying and tool use.

Developed by: [Shenzhi Wang](https://shenzhi-wang.netlify.app) (王慎执) and [Yaowei Zheng](https://github.com/hiyouga) (郑耀威)

- License: [Gemma License](https://ai.google.dev/gemma/terms)
- Base Model: [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- Model Size: 9.24B
- Context length: 8K

# 1. Introduction
This is the first model specifically fine-tuned for Chinese & English users based on [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it), trained on a preference dataset of more than 100K preference pairs. The fine-tuning algorithm we employ is ORPO [1].

**Compared to the original [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it), our Gemma-2-9B-Chinese-Chat model significantly reduces the issues of "Chinese questions with English answers" and the mixing of Chinese and English in responses, with enhanced performance in roleplay, tool use, and math.**

[1] Hong, Jiwoo, Noah Lee, and James Thorne. "Reference-free Monolithic Preference Optimization with Odds Ratio." arXiv preprint arXiv:2403.07691 (2024).

Training framework: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).

Training details:
- epochs: 3
- learning rate: 3e-6
- learning rate scheduler type: cosine
- warmup ratio: 0.1
- cutoff len (i.e. context length): 8192
- orpo beta (i.e. $\lambda$ in the ORPO paper; see the sketch after this list): 0.05
- global batch size: 128
- fine-tuning type: full parameters
- optimizer: paged_adamw_32bit
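
The `orpo beta` above weights ORPO's odds-ratio term relative to the standard SFT loss. As a rough illustration (not the actual LLaMA-Factory code), here is a minimal PyTorch sketch of the ORPO objective from [1]; the hypothetical `chosen_logps` / `rejected_logps` stand for length-normalized (mean per-token) log-probabilities of the chosen and rejected responses:

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps: torch.Tensor, rejected_logps: torch.Tensor, beta: float = 0.05) -> torch.Tensor:
    # Log-odds of each response: log(p / (1 - p)), computed from log-probabilities.
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    # Odds-ratio term: push the chosen response's odds above the rejected one's.
    odds_ratio_loss = -F.logsigmoid(log_odds_chosen - log_odds_rejected).mean()
    # Standard SFT (negative log-likelihood) loss on the chosen responses.
    sft_loss = -chosen_logps.mean()
    # beta (lambda in the ORPO paper) weights the preference term; 0.05 here.
    return sft_loss + beta * odds_ratio_loss
```

With a small beta such as 0.05, the preference term acts as a mild regularizer on top of ordinary supervised fine-tuning.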
# 2. Usage
Please upgrade the `transformers` package to a version that supports Gemma-2 models; the version we used is `4.42.2`.
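
You can check the installed version before running the example below:

```python
import transformers

# Gemma-2 support requires a recent transformers release; we used 4.42.2.
print(transformers.__version__)
```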
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shenzhi-wang/Gemma-2-9B-Chinese-Chat"
dtype = torch.bfloat16

# Load the tokenizer and the model in bfloat16 on the GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

# Build the prompt with the model's chat template.
chat = [
    {"role": "user", "content": "写一首关于机器学习的诗。"},  # "Write a poem about machine learning."
]
input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate, then decode only the newly generated tokens.
outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1] :]
print(tokenizer.decode(response, skip_special_tokens=True))
```
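
To continue the conversation, append the model's reply and a new user turn, then re-apply the chat template. This is a minimal sketch assuming the chat template accepts alternating `user`/`assistant` turns, as the base Gemma-2 template does (it renders `assistant` as Gemma's `model` turn):

```python
# Continues the example above (reuses `model`, `tokenizer`, `chat`, and `response`).
chat.append({"role": "assistant", "content": tokenizer.decode(response, skip_special_tokens=True)})
chat.append({"role": "user", "content": "请把这首诗翻译成英文。"})  # "Please translate this poem into English."

input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=8192, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```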