This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) for the **Function Calling** task on non-synthetic data,
fully annotated by humans only, on the English version of the <ins>*DiTy/function-calling*</ins> dataset.
<!-- Provide a quick summary of what the model is/does. -->

## Model card tree

* [How to prepare your functions (tools) for *Function Calling*](#prepare_func_call)
* [Just use chat template for generation](#just_chat_template)
* [Prompt structure and expected content](#roles)

<br>

## Usage (HuggingFace Transformers)

### <a name="prepare_func_call"></a>How to prepare your functions (tools) for *Function Calling*

Write each tool as a plain *Python* function with a docstring describing what it does and what its parameters mean; `apply_chat_template` converts these docstrings into the JSON tool descriptions shown further below. For example:

```python
def get_weather(city: str):
    """
    A function that returns the weather in a given city.

    Args:
        city: The city to get the weather for.
    """
    ...  # your implementation here


def get_sunrise_sunset_times(city: str):
    """
    A function that returns the time of sunrise and sunset at the present moment, for a given city, in the form of a list: [sunrise_time, sunset_time].

    Args:
        city: The city to get the sunrise and sunset times for.
    """
    return ["6:00 AM", "6:00 PM"]
```
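
If you want to check what the chat template will actually see, recent versions of `transformers` ship a helper that converts such a docstring into a JSON schema. A minimal sketch, assuming your `transformers` version provides `transformers.utils.get_json_schema`:

```python
from transformers.utils import get_json_schema

# Inspect the JSON schema derived from the tool's signature and docstring.
print(get_json_schema(get_sunrise_sunset_times))
```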

### <a name="just_chat_template"></a>Just use chat template

Next, you need to download the model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "DiTy/gemma-2-9b-it-function-calling",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # use float16 or float32 if bfloat16 is not available to you.
    cache_dir=PATH_TO_MODEL_DIR,  # optional
)
tokenizer = AutoTokenizer.from_pretrained(
    "DiTy/gemma-2-9b-it-function-calling",
    cache_dir=PATH_TO_MODEL_DIR,  # optional
)
```

To get the result of generation, just use `apply_chat_template`. In order to take into account our written functions (tools),
we need to pass them as a list through the `tools` attribute and also use `add_generation_prompt=True`.
```python
history_messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required - "},
    {"role": "user", "content": "Hi, can you tell me the time of sunrise in Los Angeles?"},
]

inputs = tokenizer.apply_chat_template(
    history_messages,
    tokenize=False,
    add_generation_prompt=True,  # adding prompt for generation
    tools=[get_weather, get_sunrise_sunset_times],  # our functions (tools)
)
```

Then our `inputs` will look like this:
```
<bos><start_of_turn>user
You are a helpful assistant with access to the following functions. Use them if required - {
    "name": "get_weather",
    "description": "A function that returns the weather in a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "The city to get the weather for."
            }
        },
        "required": [
            "city"
        ]
    }
},
{
    "name": "get_sunrise_sunset_times",
    "description": "A function that returns the time of sunrise and sunset at the present moment, for a given city, in the form of a list: [sunrise_time, sunset_time].",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "The city to get the sunrise and sunset times for."
            }
        },
        "required": [
            "city"
        ]
    }
}

Hi, can you tell me the time of sunrise in Los Angeles?<end_of_turn>
<start_of_turn>model

```
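
Note that the rendered prompt already begins with `<bos>`: the chat template adds it itself. An optional sanity check:

```python
# The chat template has already prepended <bos>, so the tokenizer must not add it again.
assert inputs.startswith("<bos>")
```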

Now we can generate the model's response.
Be careful: after `apply_chat_template`, there is no need to *add special tokens* during tokenization. So, use `add_special_tokens=False`:
```python
terminator_ids = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<end_of_turn>"),
]

prompt_ids = tokenizer.encode(inputs, add_special_tokens=False, return_tensors='pt').to(model.device)
generated_ids = model.generate(
    prompt_ids,
    max_new_tokens=512,
    eos_token_id=terminator_ids,
    bos_token_id=tokenizer.bos_token_id,
)
generated_response = tokenizer.decode(generated_ids[0][prompt_ids.shape[-1]:], skip_special_tokens=False)  # `skip_special_tokens=False` for debug
```

We get the generation as a function call:
```
Function call: {"name": "get_sunrise_sunset_times", "arguments": {"city": "Los Angeles"}}<end_of_turn>
```
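
To execute it, you can parse this string and dispatch to the matching Python function. A minimal sketch (the `available_tools` table is a hypothetical helper, not part of the model card):

```python
import json

# Map tool names generated by the model to the actual Python functions.
available_tools = {
    "get_weather": get_weather,
    "get_sunrise_sunset_times": get_sunrise_sunset_times,
}

# Strip the "Function call: " prefix and the trailing <end_of_turn> token, then parse the JSON body.
call_body = generated_response.split("Function call:", 1)[1].rsplit("<end_of_turn>", 1)[0].strip()
call = json.loads(call_body)
function_result = available_tools[call["name"]](**call["arguments"])  # ["6:00 AM", "6:00 PM"]
```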

Great, now we can pick up and process the results with our *called function*, and then provide the model with the *function's response*:
```python
history_messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required - "},
    {"role": "user", "content": "Hi, can you tell me the time of sunrise in Los Angeles?"},
    {"role": "function-call", "content": '{"name": "get_sunrise_sunset_times", "arguments": {"city": "Los Angeles"}}'},
    {"role": "function-response", "content": '{"times_list": ["6:00 AM", "6:00 PM"]}'},  # a hypothetical response from our function
]

inputs = tokenizer.apply_chat_template(
    history_messages,
    tokenize=False,
    add_generation_prompt=True,  # adding prompt for generation
    tools=[get_weather, get_sunrise_sunset_times],  # our functions (tools)
)
```

Let's make sure the `inputs` are correct:
```
<bos><start_of_turn>user
You are a helpful assistant with access to the following functions. Use them if required - {
    "name": "get_weather",
    "description": "A function that returns the weather in a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "The city to get the weather for."
            }
        },
        "required": [
            "city"
        ]
    }
},
{
    "name": "get_sunrise_sunset_times",
    "description": "A function that returns the time of sunrise and sunset at the present moment, for a given city, in the form of a list: [sunrise_time, sunset_time].",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "The city to get the sunrise and sunset times for."
            }
        },
        "required": [
            "city"
        ]
    }
}

Hi, can you tell me the time of sunrise in Los Angeles?<end_of_turn>
<start_of_turn>model
Function call: {"name": "get_sunrise_sunset_times", "arguments": {"city": "Los Angeles"}}<end_of_turn>
<start_of_turn>user
Function response: {"times_list": ["6:00 AM", "6:00 PM"]}<end_of_turn>
<start_of_turn>model

```

Similarly, we generate a response from the model:
```python
prompt_ids = tokenizer.encode(inputs, add_special_tokens=False, return_tensors='pt').to(model.device)
generated_ids = model.generate(
    prompt_ids,
    max_new_tokens=512,
    eos_token_id=terminator_ids,
    bos_token_id=tokenizer.bos_token_id,
)
generated_response = tokenizer.decode(generated_ids[0][prompt_ids.shape[-1]:], skip_special_tokens=False)  # `skip_special_tokens=False` for debug
```

As a result, we get the model's response:
```
The sunrise time in Los Angeles is 6:00 AM.<end_of_turn>
```
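
The whole generate, call, respond cycle can also be automated. A minimal sketch, reusing the hypothetical `available_tools` table from the dispatch example above together with the `model`, `tokenizer`, tools, and `terminator_ids` already defined:

```python
import json

def generate_turn(history):
    """Render the history with the chat template and generate one model turn."""
    prompt = tokenizer.apply_chat_template(
        history,
        tokenize=False,
        add_generation_prompt=True,
        tools=[get_weather, get_sunrise_sunset_times],
    )
    prompt_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    out_ids = model.generate(prompt_ids, max_new_tokens=512, eos_token_id=terminator_ids)
    return tokenizer.decode(out_ids[0][prompt_ids.shape[-1]:], skip_special_tokens=True)

history = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required - "},
    {"role": "user", "content": "Hi, can you tell me the time of sunrise in Los Angeles?"},
]

response = generate_turn(history)
if response.startswith("Function call:"):
    # Execute the requested tool and feed its result back to the model.
    call = json.loads(response[len("Function call:"):].strip())
    result = available_tools[call["name"]](**call["arguments"])
    history.append({"role": "function-call", "content": json.dumps(call)})
    history.append({"role": "function-response", "content": json.dumps({"times_list": result})})
    response = generate_turn(history)
print(response)  # e.g. "The sunrise time in Los Angeles is 6:00 AM."
```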

## Usage via transformers `pipeline`

<details>
<summary>
Generation via pipeline
</summary>

```python
import torch
from transformers import pipeline


generation_pipeline = pipeline(
    "text-generation",
    model="DiTy/gemma-2-9b-it-function-calling",
    model_kwargs={
        "torch_dtype": torch.bfloat16,  # use float16 or float32 if bfloat16 is not supported on your hardware.
        "cache_dir": PATH_TO_MODEL_DIR,  # OPTIONAL
    },
    device_map="auto",
)

history_messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required - "},
    {"role": "user", "content": "Hi, can you tell me the time of sunrise in Los Angeles?"},
    {"role": "function-call", "content": '{"name": "get_sunrise_sunset_times", "arguments": {"city": "Los Angeles"}}'},
    {"role": "function-response", "content": '{"times_list": ["6:00 AM", "6:00 PM"]}'},
]

inputs = generation_pipeline.tokenizer.apply_chat_template(
    history_messages,
    tokenize=False,
    add_generation_prompt=True,
    tools=[get_weather, get_sunrise_sunset_times],
)

terminator_ids = [
    generation_pipeline.tokenizer.eos_token_id,
    generation_pipeline.tokenizer.convert_tokens_to_ids("<end_of_turn>"),
]

outputs = generation_pipeline(
    inputs,
    max_new_tokens=512,
    eos_token_id=terminator_ids,
)

print(outputs[0]["generated_text"][len(inputs):])
```

</details>

## <a name="roles"></a>Prompt structure and expected content

For the model to work correctly, it is assumed that `apply_chat_template` will be used.
The message history must be passed in a specific format:
```python
history_messages = [
    {"role": "...", "content": "..."},
    ...
]
```

The following roles are available for use:

* `system` - an optional role; its content is always placed at the very beginning, before the list of functions (tools) available to the model.
You can always use the standard option that was used during training: ***"You are a helpful assistant with access to the following functions. Use them if required - "***
* `user` - the user's request is passed through this role.
* `function-call` - the body of the function call is passed through this role.
Although the model is trained to generate a function call in the form of ***"Function call: {...}\<end_of_turn\>"***, you should still pass only the body ***"{...}"***
to the *"content"* field, since the ***"Function call: "*** prefix is added automatically by `apply_chat_template`.
* `function-response` - the response of the called function is passed through this role as a string containing a JSON object, for example ***'{"times_list": ["6:00 AM", "6:00 PM"]}'***.
* `model` - the model's own turns, whether a plain-text answer or a generated function-call body, are passed through this role.
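
Putting it together, a complete multi-turn history using these roles might look like the sketch below (the final user turn is a hypothetical follow-up, added here only for illustration):

```python
history_messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required - "},
    {"role": "user", "content": "Hi, can you tell me the time of sunrise in Los Angeles?"},
    {"role": "function-call", "content": '{"name": "get_sunrise_sunset_times", "arguments": {"city": "Los Angeles"}}'},
    {"role": "function-response", "content": '{"times_list": ["6:00 AM", "6:00 PM"]}'},
    {"role": "model", "content": "The sunrise time in Los Angeles is 6:00 AM."},
    {"role": "user", "content": "Thanks! And what time is the sunset?"},
]
```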