ThomasBaruzier committed on
Commit
15e8c0c
1 Parent(s): cffc50f

Update README.md

Files changed (1)
  1. README.md +48 -2
README.md CHANGED
@@ -303,7 +303,7 @@ Where to send questions or comments about the model Instructions on how to provi
 
 **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
 
- **Note**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner.
+ **<span style="text-decoration:underline;">Note</span>**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner.
 
 ## How to use
 
@@ -342,6 +342,52 @@ print(outputs[0]["generated_text"][-1])
 
 Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
 
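To make the quantization option mentioned above concrete, here is a minimal sketch of loading the model in 4-bit with bitsandbytes, assuming the `bitsandbytes` package is installed; the checkpoint name is only an example and should be replaced with the repository you actually intend to use:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Example checkpoint name; substitute the weights you actually intend to load.
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# 4-bit NF4 quantization via bitsandbytes (requires the bitsandbytes package).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```

The same `model` and `tokenizer` objects are reused in the tool-calling sketches below.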
+ ### Tool use with transformers
+ 
+ LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
+ 
+ Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
+ Here is a quick example showing a single simple tool:
+ 
+ ```python
+ # First, define a tool
+ def get_current_temperature(location: str) -> float:
+     """
+     Get the current temperature at a location.
+ 
+     Args:
+         location: The location to get the temperature for, in the format "City, Country"
+     Returns:
+         The current temperature at the specified location in the specified units, as a float.
+     """
+     return 22.  # A real function should probably actually get the temperature!
+ 
+ # Next, create a chat and apply the chat template
+ messages = [
+     {"role": "system", "content": "You are a bot that responds to weather queries."},
+     {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
+ ]
+ 
+ inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
+ ```
+ 
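The block above only builds the templated prompt; as the next paragraph notes, you then run generation on it as normal. A rough sketch of that step, reusing the `model` and `tokenizer` from the loading sketch above (or loaded any other way), might look like:

```python
# Re-apply the template, this time returning PyTorch tensors suitable for generate().
tool_inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**tool_inputs, max_new_tokens=256)

# Decode only the newly generated tokens; for a weather question the model may
# emit a tool call to get_current_temperature here.
print(tokenizer.decode(outputs[0][tool_inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```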
+ You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:
+ 
+ ```python
+ tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
+ messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
+ ```
+ 
+ and then call the tool and append the result, with the `tool` role, like so:
+ 
+ ```python
+ messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
+ ```
+ 
+ After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
+ 
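For completeness, a minimal sketch of that second `generate()` call, again reusing the objects from the sketches above, might be:

```python
# Render the conversation again, now including the assistant's tool call and the
# tool result, and let the model compose its final answer.
final_inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**final_inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][final_inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```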
 ### Use with `llama`
 
 Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
 
@@ -1229,4 +1275,4 @@ Finally, we put in place a set of resources including an [output reporting mecha
 
 The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
 
- But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
+ But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.