Commit 4d58a94 by ThomasBaruzier (parent: 2610633)

Update README.md

Files changed (1): README.md (+9 −1)
@@ -188,6 +188,8 @@ extra_gated_description: The information you provide will be collected, stored,
 extra_gated_button_content: Submit
 ---
 
+<br><hr>
+
 # Llama.cpp imatrix quantizations of meta-llama/Meta-Llama-3.1-8B-Instruct
 
 Using llama.cpp commit [b5e9546](https://github.com/ggerganov/llama.cpp/commit/b5e95468b1676e1e5c9d80d1eeeb26f542a38f42) for quantization, featuring llama 3.1 rope scaling factors. This fixes low-quality issues when using 8-128k context lengths.
@@ -196,6 +198,12 @@ Original model: [https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct](h
 
 All quants were made using the imatrix option and Bartowski's [calibration file](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
 
+Note: There is a new [chat template](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/discussions/53) coming to fix tool use, which has not been merged yet. This repo uses the current chat template, which doesn't support tool use. If you need this feature, you need to [edit the metadata](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_set_metadata.py) of the model yourself.
+
+<hr><br>
+
+# Original model card:
+
 ## Model Information
 
 The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
@@ -292,7 +300,7 @@ Where to send questions or comments about the model Instructions on how to provi
 
 **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.
 
-**<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner.
+**Note**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner.
 
 ## How to use
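The metadata edit mentioned in the diff's note (swapping in a different chat template until the fixed one is merged) could be sketched roughly as below. This is only an illustration, not a tested recipe: `model.gguf` and `template.jinja` are placeholder names, and it assumes a local llama.cpp checkout with the `gguf` Python package installed so that `gguf_set_metadata.py` can run.

```shell
# Sketch only: overwrite the chat template stored in a GGUF file's metadata.
# model.gguf and template.jinja are placeholder paths; adjust to your files.
# gguf_set_metadata.py ships with llama.cpp under gguf-py/scripts/.
python llama.cpp/gguf-py/scripts/gguf_set_metadata.py \
    model.gguf tokenizer.chat_template "$(cat template.jinja)"
```

Back up the GGUF file first, since the script edits the metadata in place.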