c01zaut committed
Commit 72c45c9 · verified · 1 Parent(s): f70d89e

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,323 @@
+ ---
+ language:
+ - zh
+ - en
+ library_name: transformers
+ license: apache-2.0
+ pipeline_tag: text-generation
+ ---
+ # MiniCPM3-4B-RK3588-1.1.1
+
+ This version of MiniCPM3-4B has been converted to run on the RK3588 NPU using w8a8 quantization.
+
+ This model has been optimized with the following LoRA: [openbmb/MiniCPM3-RAG-LoRA](https://huggingface.co/openbmb/MiniCPM3-RAG-LoRA)
+
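+ For reference, here is a minimal sketch of how a LoRA like this can be merged into the base weights before NPU conversion, using the `peft` library. The merge step and output path are assumptions about how a build like this is typically produced, not a documented recipe for this repo:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import PeftModel
+
+ # Load the base model, then attach the RAG LoRA adapter.
+ base = AutoModelForCausalLM.from_pretrained("openbmb/MiniCPM3-4B", trust_remote_code=True)
+ model = PeftModel.from_pretrained(base, "openbmb/MiniCPM3-RAG-LoRA")
+
+ # Fold the LoRA deltas into the base weights and save a plain checkpoint.
+ model = model.merge_and_unload()
+ model.save_pretrained("./MiniCPM3-4B-rag-merged")  # hypothetical output dir
+ ```
+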
+ Compatible with RKLLM version: 1.1.1
+
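+ The w8a8 conversion itself can be sketched with the rkllm-toolkit Python API from the Official RKLLM GitHub linked below. Exact parameter names vary between toolkit releases, so treat this as an assumption-laden outline rather than the exact commands used for this repo:
+
+ ```python
+ from rkllm.api import RKLLM
+
+ llm = RKLLM()
+
+ # Load the (LoRA-merged) Hugging Face checkpoint from local disk.
+ ret = llm.load_huggingface(model="./MiniCPM3-4B-rag-merged")
+ assert ret == 0, "model load failed"
+
+ # Quantize to w8a8 and target the RK3588 NPU.
+ ret = llm.build(do_quantization=True, quantized_dtype="w8a8", target_platform="rk3588")
+ assert ret == 0, "build failed"
+
+ # Export the artifact consumed by the on-device RKLLM runtime.
+ ret = llm.export_rkllm("./MiniCPM3-4B-rk3588-w8a8.rkllm")
+ assert ret == 0, "export failed"
+ ```
+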
+ ### Useful links:
+ [Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
+
+ [RockchipNPU Reddit](https://reddit.com/r/RockchipNPU)
+
+ [EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
+
+ Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
+
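+ To run the converted model on the board, the RKLLM runtime's example binary can be used. A hypothetical invocation (binary name and argument order follow the demo in the Official RKLLM GitHub above and may differ between releases):
+
+ ```bash
+ # Usage: llm_demo <model_path> <max_new_tokens> <max_context_len>
+ ./llm_demo ./MiniCPM3-4B-rk3588-w8a8.rkllm 1024 2048
+ ```
+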
+ # Original Model Card for the base model, MiniCPM3-4B, below:
+
+ <div align="center">
+ <img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img>
+ </div>
+
+ <p align="center">
+ <a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">MiniCPM Repo</a> |
+ <a href="https://arxiv.org/abs/2404.06395" target="_blank">MiniCPM Paper</a> |
+ <a href="https://github.com/OpenBMB/MiniCPM-V/" target="_blank">MiniCPM-V Repo</a> |
+ Join us on <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
+
+ </p>
+
+ ## Introduction
+ MiniCPM3-4B is the third generation of the MiniCPM series. Its overall performance surpasses Phi-3.5-mini-Instruct and GPT-3.5-Turbo-0125, and it is comparable to many recent 7B-9B models.
+
+ Compared to MiniCPM1.0/MiniCPM2.0, MiniCPM3-4B has a more powerful and versatile skill set that enables more general usage, including function calling and a code interpreter (a sketch of the tool-calling entry point follows below). Please refer to [Advanced Features](https://github.com/OpenBMB/MiniCPM/tree/main?tab=readme-ov-file#%E8%BF%9B%E9%98%B6%E5%8A%9F%E8%83%BD) for usage guidelines.
+
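+ As a quick illustration of the function-calling interface, tool schemas can be passed to `apply_chat_template`, which this model's chat template renders into a Python-style function list in the system prompt. The tool definition below is hypothetical; only the `tools` argument itself is standard `transformers`:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM3-4B", trust_remote_code=True)
+
+ # A made-up tool schema for illustration (OpenAI-style function spec).
+ tools = [{
+     "type": "function",
+     "function": {
+         "name": "get_weather",
+         "description": "Get the current weather for a city.",
+         "parameters": {
+             "type": "object",
+             "properties": {"city": {"type": "string", "description": "City name"}},
+             "required": ["city"],
+         },
+     },
+ }]
+
+ prompt = tokenizer.apply_chat_template(
+     [{"role": "user", "content": "What is the weather in Beijing?"}],
+     tools=tools, tokenize=False, add_generation_prompt=True,
+ )
+ print(prompt)  # the system prompt now lists the callable functions
+ ```
+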
+ MiniCPM3-4B has a 32k context window. Equipped with LLMxMapReduce, it can in theory handle unlimited context without requiring a huge amount of memory.
+
+ ## Usage
+ ### Inference with Transformers
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ path = "openbmb/MiniCPM3-4B"
+ device = "cuda"
+
+ tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)
+
+ messages = [
+     {"role": "user", "content": "Recommend 5 tourist attractions in Beijing."},
+ ]
+ model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(device)
+
+ model_outputs = model.generate(
+     model_inputs,
+     max_new_tokens=1024,
+     top_p=0.7,
+     temperature=0.7
+ )
+
+ # Strip the prompt tokens so only the newly generated tokens are decoded.
+ output_token_ids = [
+     model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs))
+ ]
+
+ responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
+ print(responses)
+ ```
+
+ ### Inference with [vLLM](https://github.com/vllm-project/vllm)
+
+ For now, you need to install our forked version of vLLM:
+
+ ```bash
+ pip install git+https://github.com/OpenBMB/vllm.git@minicpm3
+ ```
+
+ ```python
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+
+ model_name = "openbmb/MiniCPM3-4B"
+ prompt = [{"role": "user", "content": "Recommend 5 tourist attractions in Beijing."}]
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
+
+ llm = LLM(
+     model=model_name,
+     trust_remote_code=True,
+     tensor_parallel_size=1
+ )
+ sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024, repetition_penalty=1.02)
+
+ outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
+
+ print(outputs[0].outputs[0].text)
+ ```
+
+ ## Evaluation Results
+
+ <table>
+     <tr>
+         <td>Benchmark</td>
+         <td>Qwen2-7B-Instruct</td>
+         <td>GLM-4-9B-Chat</td>
+         <td>Gemma2-9B-it</td>
+         <td>Llama3.1-8B-Instruct</td>
+         <td>GPT-3.5-Turbo-0125</td>
+         <td>Phi-3.5-mini-Instruct(3.8B)</td>
+         <td>MiniCPM3-4B</td>
+     </tr>
+     <tr>
+         <td colspan="8" align="left"><strong>English</strong></td>
+     </tr>
+     <tr>
+         <td>MMLU</td>
+         <td>70.5</td>
+         <td>72.4</td>
+         <td>72.6</td>
+         <td>69.4</td>
+         <td>69.2</td>
+         <td>68.4</td>
+         <td>67.2</td>
+     </tr>
+     <tr>
+         <td>BBH</td>
+         <td>64.9</td>
+         <td>76.3</td>
+         <td>65.2</td>
+         <td>67.8</td>
+         <td>70.3</td>
+         <td>68.6</td>
+         <td>70.2</td>
+     </tr>
+     <tr>
+         <td>MT-Bench</td>
+         <td>8.41</td>
+         <td>8.35</td>
+         <td>7.88</td>
+         <td>8.28</td>
+         <td>8.17</td>
+         <td>8.60</td>
+         <td>8.41</td>
+     </tr>
+     <tr>
+         <td>IFEVAL (Prompt Strict-Acc.)</td>
+         <td>51.0</td>
+         <td>64.5</td>
+         <td>71.9</td>
+         <td>71.5</td>
+         <td>58.8</td>
+         <td>49.4</td>
+         <td>68.4</td>
+     </tr>
+     <tr>
+         <td colspan="8" align="left"><strong>Chinese</strong></td>
+     </tr>
+     <tr>
+         <td>CMMLU</td>
+         <td>80.9</td>
+         <td>71.5</td>
+         <td>59.5</td>
+         <td>55.8</td>
+         <td>54.5</td>
+         <td>46.9</td>
+         <td>73.3</td>
+     </tr>
+     <tr>
+         <td>CEVAL</td>
+         <td>77.2</td>
+         <td>75.6</td>
+         <td>56.7</td>
+         <td>55.2</td>
+         <td>52.8</td>
+         <td>46.1</td>
+         <td>73.6</td>
+     </tr>
+     <tr>
+         <td>AlignBench v1.1</td>
+         <td>7.10</td>
+         <td>6.61</td>
+         <td>7.10</td>
+         <td>5.68</td>
+         <td>5.82</td>
+         <td>5.73</td>
+         <td>6.74</td>
+     </tr>
+     <tr>
+         <td>FollowBench-zh (SSR)</td>
+         <td>63.0</td>
+         <td>56.4</td>
+         <td>57.0</td>
+         <td>50.6</td>
+         <td>64.6</td>
+         <td>58.1</td>
+         <td>66.8</td>
+     </tr>
+     <tr>
+         <td colspan="8" align="left"><strong>Math</strong></td>
+     </tr>
+     <tr>
+         <td>MATH</td>
+         <td>49.6</td>
+         <td>50.6</td>
+         <td>46.0</td>
+         <td>51.9</td>
+         <td>41.8</td>
+         <td>46.4</td>
+         <td>46.6</td>
+     </tr>
+     <tr>
+         <td>GSM8K</td>
+         <td>82.3</td>
+         <td>79.6</td>
+         <td>79.7</td>
+         <td>84.5</td>
+         <td>76.4</td>
+         <td>82.7</td>
+         <td>81.1</td>
+     </tr>
+     <tr>
+         <td>MathBench</td>
+         <td>63.4</td>
+         <td>59.4</td>
+         <td>45.8</td>
+         <td>54.3</td>
+         <td>48.9</td>
+         <td>54.9</td>
+         <td>65.6</td>
+     </tr>
+     <tr>
+         <td colspan="8" align="left"><strong>Code</strong></td>
+     </tr>
+     <tr>
+         <td>HumanEval+</td>
+         <td>70.1</td>
+         <td>67.1</td>
+         <td>61.6</td>
+         <td>62.8</td>
+         <td>66.5</td>
+         <td>68.9</td>
+         <td>68.3</td>
+     </tr>
+     <tr>
+         <td>MBPP+</td>
+         <td>57.1</td>
+         <td>62.2</td>
+         <td>64.3</td>
+         <td>55.3</td>
+         <td>71.4</td>
+         <td>55.8</td>
+         <td>63.2</td>
+     </tr>
+     <tr>
+         <td>LiveCodeBench v3</td>
+         <td>22.2</td>
+         <td>20.2</td>
+         <td>19.2</td>
+         <td>20.4</td>
+         <td>24.0</td>
+         <td>19.6</td>
+         <td>22.6</td>
+     </tr>
+     <tr>
+         <td colspan="8" align="left"><strong>Function Call</strong></td>
+     </tr>
+     <tr>
+         <td>BFCL v2</td>
+         <td>71.6</td>
+         <td>70.1</td>
+         <td>19.2</td>
+         <td>73.3</td>
+         <td>75.4</td>
+         <td>48.4</td>
+         <td>76.0</td>
+     </tr>
+     <tr>
+         <td colspan="8" align="left"><strong>Overall</strong></td>
+     </tr>
+     <tr>
+         <td>Average</td>
+         <td>65.3</td>
+         <td>65.0</td>
+         <td>57.9</td>
+         <td>60.8</td>
+         <td>61.0</td>
+         <td>57.2</td>
+         <td><strong>66.3</strong></td>
+     </tr>
+ </table>
+
+
+ ## Statement
+ * As a language model, MiniCPM3-4B generates content by learning from a vast amount of text.
+ * However, it does not possess the ability to comprehend or express personal opinions or value judgments.
+ * Any content generated by MiniCPM3-4B does not represent the viewpoints or positions of the model developers.
+ * Therefore, when using content generated by MiniCPM3-4B, users should take full responsibility for evaluating and verifying it on their own.
+
+ ## LICENSE
+ * This repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
+ * The usage of MiniCPM3-4B model weights must strictly follow the [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
+ * The models and weights of MiniCPM3-4B are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, they are also available for free commercial use.
+
+ ## Citation
+
+ ```
+ @article{hu2024minicpm,
+   title={MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies},
+   author={Hu, Shengding and Tu, Yuge and Han, Xu and He, Chaoqun and Cui, Ganqu and Long, Xiang and Zheng, Zhi and Fang, Yewei and Huang, Yuxiang and Zhao, Weilin and others},
+   journal={arXiv preprint arXiv:2404.06395},
+   year={2024}
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "<|execute_end|>": 73444,
+   "<|execute_start|>": 73443,
+   "<|fim_middle|>": 73446,
+   "<|fim_prefix|>": 73445,
+   "<|fim_suffix|>": 73447,
+   "<|im_end|>": 73440,
+   "<|im_start|>": 73441,
+   "<|tool_call|>": 73442
+ }
config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "_name_or_path": "openbmb/MiniCPM3-4B",
+   "architectures": [
+     "MiniCPM3ForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_minicpm.MiniCPM3Config",
+     "AutoModel": "modeling_minicpm.MiniCPM3Model",
+     "AutoModelForCausalLM": "modeling_minicpm.MiniCPM3ForCausalLM",
+     "AutoModelForSeq2SeqLM": "modeling_minicpm.MiniCPM3ForCausalLM",
+     "AutoModelForSequenceClassification": "modeling_minicpm.MiniCPM3ForSequenceClassification"
+   },
+   "bos_token_id": 1,
+   "eos_token_id": [2, 73440],
+   "hidden_act": "silu",
+   "initializer_range": 0.1,
+   "hidden_size": 2560,
+   "num_hidden_layers": 62,
+   "intermediate_size": 6400,
+   "max_position_embeddings": 32768,
+   "model_type": "minicpm3",
+   "num_attention_heads": 40,
+   "num_key_value_heads": 40,
+   "qk_nope_head_dim": 64,
+   "qk_rope_head_dim": 32,
+   "q_lora_rank": 768,
+   "kv_lora_rank": 256,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "type": "longrope",
+     "long_factor": [1.0591234137867171, 1.1241891283591912, 1.2596935748670968, 1.5380380402321725, 2.093982484148734, 3.1446935121267696, 4.937952647693647, 7.524541999994549, 10.475458000005451, 13.062047352306353, 14.85530648787323, 15.906017515851266, 16.461961959767827, 16.740306425132907, 16.87581087164081, 16.940876586213285],
+     "short_factor": [1.0591234137867171, 1.1241891283591912, 1.2596935748670968, 1.5380380402321725, 2.093982484148734, 3.1446935121267696, 4.937952647693647, 7.524541999994549, 10.475458000005451, 13.062047352306353, 14.85530648787323, 15.906017515851266, 16.461961959767827, 16.740306425132907, 16.87581087164081, 16.940876586213285],
+     "original_max_position_embeddings": 32768
+   },
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.41.0",
+   "use_cache": true,
+   "vocab_size": 73448,
+   "scale_emb": 12,
+   "dim_model_base": 256,
+   "scale_depth": 1.4
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "do_sample": true,
+   "top_p": 0.8,
+   "temperature": 0.8,
+   "bos_token_id": 1,
+   "eos_token_id": [2, 73440]
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,81 @@
+ {
+   "additional_special_tokens": [
+     {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|tool_call|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|execute_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|execute_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,116 @@
+ {
+   "add_bos_token": false,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73440": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73441": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73442": {
+       "content": "<|tool_call|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73443": {
+       "content": "<|execute_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73444": {
+       "content": "<|execute_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73445": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73446": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "73447": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_end|>",
+     "<|im_start|>",
+     "<|tool_call|>",
+     "<|execute_start|>",
+     "<|execute_end|>",
+     "<|fim_prefix|>",
+     "<|fim_middle|>",
+     "<|fim_suffix|>"
+   ],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false,
+   "chat_template": "{%- macro json_to_python_type(param_name, json_spec) %}\n{%- set basic_type_map = {\n 'string': 'str',\n 'number': 'float',\n 'integer': 'int',\n 'boolean': 'bool',\n 'null': 'None'\n} %}\n\n{%- if json_spec.enum %}\n {{- param_name|title }}\n{%- elif basic_type_map[json_spec.type] is defined %}\n {{- basic_type_map[json_spec.type] }}\n{%- elif json_spec.type == 'array' %}\n {{- 'List[' + json_to_python_type(param_name, json_spec['items']) + ']' }}\n{%- elif json_spec.type == 'object' %}\n {{- 'Dict[str, ' + json_to_python_type(param_name, json_spec.additionalProperties if json_spec.additionalProperties else 'Any') + ']' if not json_spec.properties else param_name|title }}\n{%- elif json_spec.type is iterable %}\n {{- 'Union[' }}\n {%- for t in json_spec.type %}\n {{- json_to_python_type(param_name, {'type': t}) }}\n {{- ', ' if not loop.last }}\n {%- endfor %}\n {{- ']' }}\n{%- else %}\n {{- 'Any' }}\n{%- endif %}\n{%- endmacro %}\n\n{%- macro object_to_fields(json_spec, field_indent) %}\n {%- set o_ns = namespace(f = caller()) %}\n {%- for param_name, param_fields in json_spec.properties|items %}\n {%- if param_fields.enum %}\n {{- '\\n\\nclass ' + param_name|title + '(Enum):\\n' }}\n {%- for enum_option in param_fields.enum %}\n {{- ' enum_' + loop.index0|string + ' = ' + enum_option|tojson + '\\n' }}\n {%- endfor %}\n {%- elif param_fields.type == 'object' and param_fields.properties %}\n {%- call object_to_fields(param_fields, ' ') %}\n {{- '\\n\\nclass ' + param_name|title + '(BaseModel):\\n' }}\n {%- endcall %}\n {%- elif param_fields.type == 'array' and param_fields['items'] and param_fields['items'].type == 'object' and param_fields['items'].properties %}\n {%- call object_to_fields(param_fields['items'], ' ') %}\n {{- '\\n\\nclass ' + param_name|title + '(BaseModel):\\n' }}\n {%- endcall %}\n {%- endif %}\n {%- set param_default = param_fields.default|tojson if param_fields.default is string else param_fields.default|string if param_fields.default is defined else 'None' %}\n {%- set o_ns.f = o_ns.f + field_indent + param_name + ': ' %}\n {%- set o_ns.f = o_ns.f + ('Optional[' + json_to_python_type(param_name, param_fields) + ']' if param_name not in json_spec.required else json_to_python_type(param_name, param_fields)) %}\n {%- if not param_fields.title and not param_fields.description and not param_fields.pattern %}\n {%- set o_ns.f = o_ns.f + (' = ' + param_default if param_name not in json_spec.required else '') %}\n {%- else %}\n {%- set o_ns.f = o_ns.f + (' = Field(...' if param_name in json_spec.required else ' = Field(' + param_default) %}\n {%- set o_ns.f = o_ns.f + (', description=' + param_fields.description|tojson if param_fields.description else '') %}\n {%- set o_ns.f = o_ns.f + (', regex=' + param_fields.pattern|tojson if param_fields.pattern else '') %}\n {%- set o_ns.f = o_ns.f + (', title=' + param_fields.title|tojson if param_fields.title else '') %}\n {%- set o_ns.f = o_ns.f + ')' %}\n {%- endif %}\n {%- set o_ns.f = o_ns.f + '\\n' %}\n {%- endfor %}\n {{- o_ns.f }}\n{%- endmacro %}\n\n{%- macro tool_parser(tools) %}\n{%- for tool in tools %}\n {%- if tool.type is not defined or tool.type == 'function' %}\n {%- if tool.function is defined %}\n {%- set tool = tool.function %}\n {%- endif %}\n {%- set tool_params = tool.parameters if tool.parameters is defined else none %}\n {%- call object_to_fields(tool_params, ' ') %}\n {{- '\\n\\ndef ' + tool.name + '(' }}\n {%- if tool_params %}\n {%- for param_name, param_fields in tool_params.properties|items %}\n {%- set param_default = param_fields.default|tojson if param_fields.default is string else param_fields.default|string if param_fields.default is defined else 'None' %}\n {{- ', ' if loop.index0 != 0 }}\n {{- param_name }}\n {{- '=' + param_default if param_name not in tool_params.required }}\n {%- endfor %}\n {%- endif %}\n {{- '):\\n \"\"\"' }}\n {{- tool.description }}\n {{- '\\n\\n Args:\\n' if tool_params else '\\n' }}\n {%- endcall %}\n {{- ' \"\"\"\\n' }}\n {%- endif %}\n{%- endfor %}\n{%- endmacro %}\n\n{%- if messages[0]['role'] == 'system' %}\n {%- set loop_messages = messages[1:] %}\n {%- set system_message = messages[0]['content'] %}\n{%- else %}\n {%- set loop_messages = messages %}\n {%- set system_message = '' %}\n{%- endif %}\n{{- '<|im_start|>system\\n' + system_message if system_message or tools }}\n{%- if tools %}\n {{- '\\n# Functions\\nHere is a list of functions that you can invoke:\\n```python\\nfrom enum import Enum\\nfrom typing import List, Dict, Optional\\nfrom pydantic import BaseModel, Field\\n\\n' }}\n {{- tool_parser(tools) }}\n {{- \"\\n```\\n\\n# Function Call Rule and Output Format\\n- If the user's question can be answered without calling any function, please answer the user's question directly. In this situation, you should return your thought and answer the user's question directly.\\n- If the user cannot be answered without calling any function, and the user does not provide enough information to call functions, please ask the user for more information. In this situation, you should return your thought and ask the user for more information.\\n- If the user's question cannot be answered without calling any function, and the user has provided enough information to call functions to solve it, you should call the functions. In this situation, the assistant should return your thought and call the functions.\\n- Use default parameters unless the user has specified otherwise.\\n- You should answer in the following format:\\n\\n<|thought_start|>\\n{explain why the user's question can be answered without calling a function or why you should ask the user for more information or why you should call one or more functions and your plan to solve the user's question.}\\n<|thought_end|>\\n<|tool_call_start|>\\n```python\\nfunc1(params_name=params_value, params_name2=params_value2...)\\nfunc2(params)\\n```\\n<|tool_call_end|>\\n{answer the user's question directly or ask the user for more information}\" }}\n{%- endif %}\n{{- '<|im_end|>\\n' if system_message or tools }}\n{%- for message in loop_messages %}\n {%- set content = message.content %}\n {%- if message.role == 'assistant' and message.tool_calls %}\n {{- '<|im_start|>' + message.role + '\\n' }}\n {{- '<|thought_start|>\\n' + message.thought + '\\n<|thought_end|>\\n' if message.thought }}\n {{- '<|tool_call_start|>\\n```python\\n' }}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- tool_call.name + '(' }}\n {%- if tool_call.arguments is defined and tool_call.arguments|length > 0 %}\n {%- for param_name, param_value in tool_call.arguments|items %}\n {{- param_name + '=' + param_value|tojson }}\n {{- ',' if not loop.last }}\n {%- endfor %}\n {%- endif %}\n {{- ')\\n' }}\n {%- endfor %}\n {{- '```\\n<|tool_call_end|>\\n' }}\n {{- content if content and not content.startswith('<|tool_call_start|>') }}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == 'assistant' and message.thought %}\n {{- '<|im_start|>' + message.role + '\\n' + '<|thought_start|>\\n' + message.thought + '\\n<|thought_end|>\\n' + content + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endfor %}\n\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
+ }