pek111 committed on
Commit
7f407ba
1 Parent(s): 827a574

Update README.md

Files changed (1)
  1. README.md +1296 -3
README.md CHANGED
@@ -1,3 +1,1296 @@
1
- ---
2
- license: llama3.1
3
- ---
1
+ ---
2
+ base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
3
+ language:
4
+ - en
5
+ - de
6
+ - fr
7
+ - it
8
+ - pt
9
+ - hi
10
+ - es
11
+ - th
12
+ license: llama3.1
13
+ pipeline_tag: text-generation
14
+ tags:
15
+ - facebook
16
+ - meta
17
+ - pytorch
18
+ - llama
19
+ - llama-3
20
+ - instruct
21
+ - chat
22
+ - conversational
23
+ - quantized
24
+ ---
25
+ # Meta Llama 3.1 8B Instruct - GGUF
26
+
27
+ ## Description
28
+
29
+ This repo contains GGUF format model files for [Meta's Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/).
30
+
31
+ You can [**jump to downloads**](#provided-files).
32
+
33
+ ### About GGUF
34
+
35
+ GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenization and support for special tokens. It also supports metadata and is designed to be extensible.
36
+
37
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
38
+
39
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
40
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
41
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
42
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
43
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
44
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
45
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
46
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
47
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance (including GPU support) and ease of use.
48
+
49
+ ## Prompt template
50
+
51
+ Llama 3 chat template:
52
+
53
+ ```
54
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
55
+
56
+ You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
57
+
58
+ How fast can a cheetah run?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
59
+
60
+ They can reach a top speed of about 75 mph (120 km/h).<|eot_id|>
61
+ ```
62
+
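+ If you build prompts programmatically, you don't have to assemble these special tokens by hand: the tokenizer ships with this chat template. A minimal sketch, assuming access to the gated `meta-llama/Meta-Llama-3.1-8B-Instruct` tokenizer (note the 3.1 template may also inject a dated default system header):
+
+ ```python
+ # Sketch: render the Llama 3 chat layout with the tokenizer's built-in chat template.
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "How fast can a cheetah run?"},
+ ]
+
+ # add_generation_prompt=True appends the trailing assistant header shown above.
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ print(prompt)
+ ```
+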
63
+ ## Compatibility
64
+
65
+ These quantised GGUFv3 files are compatible with llama.cpp from July 24th, 2024 onwards, as of commit [f19bf99c015d3d745143e8bb4f056e0ea015ad40](https://github.com/ggerganov/llama.cpp/commit/f19bf99c015d3d745143e8bb4f056e0ea015ad40).
66
+
67
+ They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.
68
+
69
+ ## Explanation of quantization methods
70
+ <details>
71
+ <summary>Click to see details</summary>
72
+
73
+ The new methods available are:
74
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
75
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
76
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
77
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
78
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
79
+
80
+ Refer to the Provided Files table below to see what files use which methods, and how.
81
+ </details>
82
+
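+ As a sanity check, the bits-per-weight figures follow directly from the block structure. A quick arithmetic sketch; the fp16 super-block scale counts here are assumptions based on llama.cpp's k-quants design, not the exact structs:
+
+ ```python
+ # Back-of-the-envelope bpw check for the k-quant layouts described above.
+ def bpw(qbits, nblocks, block_weights, scale_bits, has_mins, fp16_supers):
+     weights = nblocks * block_weights                     # weights per super-block
+     meta = nblocks * scale_bits * (2 if has_mins else 1)  # per-block scales (+ mins)
+     supers = fp16_supers * 16                             # fp16 super-block scale/min
+     return (weights * qbits + meta + supers) / weights
+
+ print(bpw(4, 8, 32, 6, True, 2))    # Q4_K -> 4.5
+ print(bpw(3, 16, 16, 6, False, 1))  # Q3_K -> 3.4375
+ print(bpw(6, 16, 16, 8, False, 1))  # Q6_K -> 6.5625
+ ```
+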
83
+ ## Provided files
84
+
85
+ | Name | Quant method | Bits | Size | Use case |
86
+ | ---- | ---- | ---- | ---- | ---- |
87
+ | [Meta-Llama-3.1-8B-Instruct.Q2_K.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q2_K.gguf) | Q2_K | 2 | 2.95 GB | smallest, significant quality loss - not recommended for most purposes |
88
+ | [Meta-Llama-3.1-8B-Instruct.Q3_K_S.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 3.41 GB | very small, high quality loss |
89
+ | [Meta-Llama-3.1-8B-Instruct.Q3_K_M.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 3.74 GB | very small, high quality loss |
90
+ | [Meta-Llama-3.1-8B-Instruct.Q3_K_L.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 4.02 GB | small, substantial quality loss |
91
+ | [Meta-Llama-3.1-8B-Instruct.Q4_0.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q4_0.gguf) | Q4_0 | 4 | 4.34 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
92
+ | [Meta-Llama-3.1-8B-Instruct.Q4_K_S.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 4.37 GB | small, greater quality loss |
93
+ | [Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 4.58 GB | medium, balanced quality - recommended |
94
+ | [Meta-Llama-3.1-8B-Instruct.Q5_0.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q5_0.gguf) | Q5_0 | 5 | 5.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
95
+ | [Meta-Llama-3.1-8B-Instruct.Q5_K_S.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 5.21 GB | large, low quality loss - recommended |
96
+ | [Meta-Llama-3.1-8B-Instruct.Q5_K_M.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 5.33 GB | large, very low quality loss - recommended |
97
+ | [Meta-Llama-3.1-8B-Instruct.Q6_K.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q6_K.gguf) | Q6_K | 6 | 6.14 GB | very large, extremely low quality loss |
98
+ | [Meta-Llama-3.1-8B-Instruct.Q8_0.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.Q8_0.gguf) | Q8_0 | 8 | 7.95 GB | very large, extremely low quality loss - not recommended |
99
+ | [Meta-Llama-3.1-8B-Instruct.BF16.gguf](https://huggingface.co/pek111/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct.BF16.gguf) | BF16 | 16 | 14.97 GB | largest, original quality - not recommended |
100
+
101
+ ## How to download GGUF files
102
+
103
+ **Note for manual downloaders:** You rarely want to clone the entire repo! Multiple different quantization formats are provided, and most users only want to pick and download a single file.
104
+
105
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
106
+
107
+ - LM Studio
108
+ - LoLLMS Web UI
109
+ - Faraday.dev
110
+
111
+ ### In `text-generation-webui`
112
+
113
+ Under Download Model, you can enter the model repo: pek111/Meta-Llama-3.1-8B-Instruct-GGUF, and below it, a specific filename to download, such as Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf.
114
+
115
+ Then click Download.
116
+
117
+ ### On the command line, including multiple files at once
118
+
119
+ I recommend using the `huggingface-hub` Python library:
120
+
121
+ ```shell
122
+ pip3 install "huggingface-hub>=0.17.1"
123
+ ```
124
+
125
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
126
+
127
+ ```shell
128
+ huggingface-cli download pek111/Meta-Llama-3.1-8B-Instruct-GGUF Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
129
+ ```
130
+
131
+ <details>
132
+ <summary>More advanced huggingface-cli download usage</summary>
133
+
134
+
135
+ You can also download multiple files at once with a pattern:
136
+
137
+ ```shell
138
+ huggingface-cli download pek111/Meta-Llama-3.1-8B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
139
+ ```
140
+
141
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
142
+
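+ The same download can also be scripted from Python, which is handy in notebooks; a minimal sketch using `huggingface_hub`'s `hf_hub_download`:
+
+ ```python
+ # Sketch: fetch a single GGUF file from Python rather than the CLI.
+ from huggingface_hub import hf_hub_download
+
+ path = hf_hub_download(
+     repo_id="pek111/Meta-Llama-3.1-8B-Instruct-GGUF",
+     filename="Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf",
+     local_dir=".",
+ )
+ print(path)  # local path to the downloaded file
+ ```
+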
143
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
144
+
145
+ ```shell
146
+ pip3 install hf_transfer
147
+ ```
148
+
149
+ And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
150
+
151
+ ```shell
152
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download pek111/Meta-Llama-3.1-8B-Instruct-GGUF Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
153
+ ```
154
+
155
+ Windows CLI users: use `set HF_HUB_ENABLE_HF_TRANSFER=1` (Command Prompt) or `$env:HF_HUB_ENABLE_HF_TRANSFER=1` (PowerShell) before running the download command.
156
+ </details>
157
+
158
+ ## Example `llama.cpp` command
159
+
160
+ Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
161
+
162
+ ```shell
163
+ ./main -ngl 32 -m Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
164
+ ```
165
+
166
+ Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
167
+
168
+ Change `-c 4096` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
169
+
170
+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
171
+
172
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
173
+
174
+ ## How to run in `text-generation-webui`
175
+
176
+ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
177
+
178
+ ## How to run from Python code
179
+
180
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
181
+
182
+ ### How to load this model from Python using llama-cpp-python
183
+
184
+ #### First install the package
185
+
186
+ ```shell
187
+ # Base llama-cpp-python with no GPU acceleration
188
+ pip install llama-cpp-python
189
+ # With NVidia CUDA acceleration
190
+ CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
191
+ # Or with OpenBLAS acceleration
192
+ CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
193
+ # Or with CLBLast acceleration
194
+ CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
195
+ # Or with AMD ROCm GPU acceleration (Linux only)
196
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
197
+ # Or with Metal GPU acceleration for macOS systems only
198
+ CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
199
+
200
+ # On Windows, set CMAKE_ARGS in PowerShell using this format, e.g. for NVIDIA CUDA:
201
+ $env:CMAKE_ARGS = "-DLLAMA_CUDA=on"
202
+ pip install llama_cpp_python --verbose
203
+ # If BLAS = 0, try a forced reinstall instead (Windows + CUDA; use `set` in Command Prompt or `$env:` in PowerShell)
204
+ set CMAKE_ARGS="-DLLAMA_CUDA=on"
205
+ set FORCE_CMAKE=1
206
+ $env:CMAKE_ARGS = "-DLLAMA_CUDA=on"
207
+ $env:FORCE_CMAKE = 1
208
+ python -m pip install "llama_cpp_python>=0.2.26" --verbose --force-reinstall --no-cache-dir
209
+ ```
210
+
211
+ #### Simple example code to load one of these GGUF models
212
+
213
+ ```python
214
+ import llama_cpp
215
+
216
+ llm_cpp = llama_cpp.Llama(
217
+ model_path="models/Meta-Llama-3.1-8B-Instruct.Q6_K.gguf", # Path to the model
218
+ n_threads=10, # CPU cores
219
+ n_batch=512, # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
220
+ n_gpu_layers=33, # Change this value based on your model and your GPU VRAM pool.
221
+ n_ctx=2048, # Max context length
222
+ )
223
+
224
+ prompt = """
225
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
226
+
227
+ You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
228
+
229
+ How fast can a cheetah run?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
230
+
231
+ """
232
+
233
+ response = llm_cpp(
234
+ prompt=prompt,
235
+ max_tokens=256,
236
+ temperature=0.5,
237
+ top_k=1,
238
+ repeat_penalty=1.1,
239
+ echo=True
240
+ )
241
+
242
+ print(response)
243
+ ```
244
+
245
+ #### Output
246
+
247
+ ```python
248
+ {
249
+ "id": "cmpl-b0971ce1-1607-42b3-b6dd-8bf8e324307a",
250
+ "object": "text_completion",
251
+ "created": 1721478196,
252
+ "model": "models/llama-3-typhoon-v1.5-8b-instruct.Q6_K.gguf",
253
+ "choices": [
254
+ {
255
+ "text": "\n<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant who're always speak Thai.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n1+1 เท่ากับเท่าไหร่<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n2",
256
+ "index": 0,
257
+ "logprobs": None,
258
+ "finish_reason": "stop",
259
+ }
260
+ ],
261
+ "usage": {"prompt_tokens": 41, "completion_tokens": 2, "total_tokens": 43},
262
+ }
263
+ ```
264
+
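+ If you would rather not format the special tokens yourself, recent llama-cpp-python builds can apply the chat template stored in the GGUF metadata. A minimal sketch using the library's `create_chat_completion` API (same model file as above):
+
+ ```python
+ import llama_cpp
+
+ llm = llama_cpp.Llama(
+     model_path="models/Meta-Llama-3.1-8B-Instruct.Q6_K.gguf",
+     n_gpu_layers=33,
+     n_ctx=2048,
+ )
+
+ # The messages are rendered with the chat template read from the GGUF metadata.
+ response = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "How fast can a cheetah run?"},
+     ],
+     max_tokens=256,
+     temperature=0.5,
+ )
+ print(response["choices"][0]["message"]["content"])
+ ```
+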
265
+ ## How to use with LangChain
266
+
267
+ Here are guides on using llama-cpp-python or ctransformers with LangChain, followed by a minimal example sketch:
268
+
269
+ * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
270
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
271
+
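+ As a starting point, a minimal sketch with LangChain's `LlamaCpp` wrapper from `langchain-community` (model path and layer count are placeholders; adjust to your setup):
+
+ ```python
+ # Sketch: use a local GGUF file through LangChain's community LlamaCpp wrapper.
+ from langchain_community.llms import LlamaCpp
+
+ llm = LlamaCpp(
+     model_path="Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf",
+     n_gpu_layers=33,  # set to 0 for CPU-only
+     n_ctx=2048,
+     temperature=0.7,
+ )
+
+ print(llm.invoke("Q: How fast can a cheetah run? A:"))
+ ```
+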
272
+ # Original model card: Meta's Llama 3.1 8B Instruct
273
+
274
+ ## Model Information
275
+
276
+ The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.
277
+
278
+ **Model developer:** Meta
279
+
280
+ **Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
281
+
282
+
283
+ <table>
284
+ <tr>
285
+ <td>
286
+ </td>
287
+ <td><strong>Training Data</strong>
288
+ </td>
289
+ <td><strong>Params</strong>
290
+ </td>
291
+ <td><strong>Input modalities</strong>
292
+ </td>
293
+ <td><strong>Output modalities</strong>
294
+ </td>
295
+ <td><strong>Context length</strong>
296
+ </td>
297
+ <td><strong>GQA</strong>
298
+ </td>
299
+ <td><strong>Token count</strong>
300
+ </td>
301
+ <td><strong>Knowledge cutoff</strong>
302
+ </td>
303
+ </tr>
304
+ <tr>
305
+ <td rowspan="3" >Llama 3.1 (text only)
306
+ </td>
307
+ <td rowspan="3" >A new mix of publicly available online data.
308
+ </td>
309
+ <td>8B
310
+ </td>
311
+ <td>Multilingual Text
312
+ </td>
313
+ <td>Multilingual Text and code
314
+ </td>
315
+ <td>128k
316
+ </td>
317
+ <td>Yes
318
+ </td>
319
+ <td rowspan="3" >15T+
320
+ </td>
321
+ <td rowspan="3" >December 2023
322
+ </td>
323
+ </tr>
324
+ <tr>
325
+ <td>70B
326
+ </td>
327
+ <td>Multilingual Text
328
+ </td>
329
+ <td>Multilingual Text and code
330
+ </td>
331
+ <td>128k
332
+ </td>
333
+ <td>Yes
334
+ </td>
335
+ </tr>
336
+ <tr>
337
+ <td>405B
338
+ </td>
339
+ <td>Multilingual Text
340
+ </td>
341
+ <td>Multilingual Text and code
342
+ </td>
343
+ <td>128k
344
+ </td>
345
+ <td>Yes
346
+ </td>
347
+ </tr>
348
+ </table>
349
+
350
+
351
+ **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
352
+
353
+ **Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
354
+
355
+ **Model Release Date:** July 23, 2024.
356
+
357
+ **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
358
+
359
+ **License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
360
+
361
+ **Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
362
+
363
+
364
+ ## Intended Use
365
+
366
+ **Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
367
+
368
+ **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
369
+
370
+ **<span style="text-decoration:underline;">Note</span>:** Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases they are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.
371
+
372
+ ## How to use
373
+
374
+ This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.
375
+
376
+ ### Use with transformers
377
+
378
+ Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function (see the sketch after the pipeline example).
379
+
380
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.
381
+
382
+ ```python
383
+ import transformers
384
+ import torch
385
+
386
+ model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
387
+
388
+ pipeline = transformers.pipeline(
389
+ "text-generation",
390
+ model=model_id,
391
+ model_kwargs={"torch_dtype": torch.bfloat16},
392
+ device_map="auto",
393
+ )
394
+
395
+ messages = [
396
+ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
397
+ {"role": "user", "content": "Who are you?"},
398
+ ]
399
+
400
+ outputs = pipeline(
401
+ messages,
402
+ max_new_tokens=256,
403
+ )
404
+ print(outputs[0]["generated_text"][-1])
405
+ ```
406
+
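+ For reference, the same conversation run with the Auto classes and `generate()` mentioned above; a short sketch, assuming access to the gated repo:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(input_ids, max_new_tokens=256)
+ # Decode only the newly generated tokens, skipping the prompt.
+ print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+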
407
+ Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantized versions, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
408
+
409
+ ### Use with `llama`
410
+
411
+ Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
412
+
413
+ To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
414
+
415
+ ```shell
416
+ huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
417
+ ```
418
+
419
+ ## Hardware and Software
420
+
421
+ **Training Factors** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
422
+
423
+ **Training utilized a cumulative** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
424
+
425
+
426
+ **Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
427
+
428
+
429
+ <table>
430
+ <tr>
431
+ <td>
432
+ </td>
433
+ <td><strong>Training Time (GPU hours)</strong>
434
+ </td>
435
+ <td><strong>Training Power Consumption (W)</strong>
436
+ </td>
437
+ <td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
438
+ <p>
439
+ <strong>(tons CO2eq)</strong>
440
+ </td>
441
+ <td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
442
+ <p>
443
+ <strong>(tons CO2eq)</strong>
444
+ </td>
445
+ </tr>
446
+ <tr>
447
+ <td>Llama 3.1 8B
448
+ </td>
449
+ <td>1.46M
450
+ </td>
451
+ <td>700
452
+ </td>
453
+ <td>420
454
+ </td>
455
+ <td>0
456
+ </td>
457
+ </tr>
458
+ <tr>
459
+ <td>Llama 3.1 70B
460
+ </td>
461
+ <td>7.0M
462
+ </td>
463
+ <td>700
464
+ </td>
465
+ <td>2,040
466
+ </td>
467
+ <td>0
468
+ </td>
469
+ </tr>
470
+ <tr>
471
+ <td>Llama 3.1 405B
472
+ </td>
473
+ <td>30.84M
474
+ </td>
475
+ <td>700
476
+ </td>
477
+ <td>8,930
478
+ </td>
479
+ <td>0
480
+ </td>
481
+ </tr>
482
+ <tr>
483
+ <td>Total
484
+ </td>
485
+ <td>39.3M
+ </td>
486
+ <td>
487
+ </td>
491
+ <td>11,390
492
+ </td>
493
+ <td>0
494
+ </td>
495
+ </tr>
496
+ </table>
497
+
498
+
499
+
500
+ The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
501
+
502
+
503
+ ## Training Data
504
+
505
+ **Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
506
+
507
+ **Data Freshness:** The pretraining data has a cutoff of December 2023.
508
+
509
+
510
+ ## Benchmark scores
511
+
512
+ In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
513
+
514
+ ### Base pretrained models
515
+
516
+
517
+ <table>
518
+ <tr>
519
+ <td><strong>Category</strong>
520
+ </td>
521
+ <td><strong>Benchmark</strong>
522
+ </td>
523
+ <td><strong># Shots</strong>
524
+ </td>
525
+ <td><strong>Metric</strong>
526
+ </td>
527
+ <td><strong>Llama 3 8B</strong>
528
+ </td>
529
+ <td><strong>Llama 3.1 8B</strong>
530
+ </td>
531
+ <td><strong>Llama 3 70B</strong>
532
+ </td>
533
+ <td><strong>Llama 3.1 70B</strong>
534
+ </td>
535
+ <td><strong>Llama 3.1 405B</strong>
536
+ </td>
537
+ </tr>
538
+ <tr>
539
+ <td rowspan="7" >General
540
+ </td>
541
+ <td>MMLU
542
+ </td>
543
+ <td>5
544
+ </td>
545
+ <td>macro_avg/acc_char
546
+ </td>
547
+ <td>66.7
548
+ </td>
549
+ <td>66.7
550
+ </td>
551
+ <td>79.5
552
+ </td>
553
+ <td>79.3
554
+ </td>
555
+ <td>85.2
556
+ </td>
557
+ </tr>
558
+ <tr>
559
+ <td>MMLU-Pro (CoT)
560
+ </td>
561
+ <td>5
562
+ </td>
563
+ <td>macro_avg/acc_char
564
+ </td>
565
+ <td>36.2
566
+ </td>
567
+ <td>37.1
568
+ </td>
569
+ <td>55.0
570
+ </td>
571
+ <td>53.8
572
+ </td>
573
+ <td>61.6
574
+ </td>
575
+ </tr>
576
+ <tr>
577
+ <td>AGIEval English
578
+ </td>
579
+ <td>3-5
580
+ </td>
581
+ <td>average/acc_char
582
+ </td>
583
+ <td>47.1
584
+ </td>
585
+ <td>47.8
586
+ </td>
587
+ <td>63.0
588
+ </td>
589
+ <td>64.6
590
+ </td>
591
+ <td>71.6
592
+ </td>
593
+ </tr>
594
+ <tr>
595
+ <td>CommonSenseQA
596
+ </td>
597
+ <td>7
598
+ </td>
599
+ <td>acc_char
600
+ </td>
601
+ <td>72.6
602
+ </td>
603
+ <td>75.0
604
+ </td>
605
+ <td>83.8
606
+ </td>
607
+ <td>84.1
608
+ </td>
609
+ <td>85.8
610
+ </td>
611
+ </tr>
612
+ <tr>
613
+ <td>Winogrande
614
+ </td>
615
+ <td>5
616
+ </td>
617
+ <td>acc_char
618
+ </td>
619
+ <td>-
620
+ </td>
621
+ <td>60.5
622
+ </td>
623
+ <td>-
624
+ </td>
625
+ <td>83.3
626
+ </td>
627
+ <td>86.7
628
+ </td>
629
+ </tr>
630
+ <tr>
631
+ <td>BIG-Bench Hard (CoT)
632
+ </td>
633
+ <td>3
634
+ </td>
635
+ <td>average/em
636
+ </td>
637
+ <td>61.1
638
+ </td>
639
+ <td>64.2
640
+ </td>
641
+ <td>81.3
642
+ </td>
643
+ <td>81.6
644
+ </td>
645
+ <td>85.9
646
+ </td>
647
+ </tr>
648
+ <tr>
649
+ <td>ARC-Challenge
650
+ </td>
651
+ <td>25
652
+ </td>
653
+ <td>acc_char
654
+ </td>
655
+ <td>79.4
656
+ </td>
657
+ <td>79.7
658
+ </td>
659
+ <td>93.1
660
+ </td>
661
+ <td>92.9
662
+ </td>
663
+ <td>96.1
664
+ </td>
665
+ </tr>
666
+ <tr>
667
+ <td>Knowledge reasoning
668
+ </td>
669
+ <td>TriviaQA-Wiki
670
+ </td>
671
+ <td>5
672
+ </td>
673
+ <td>em
674
+ </td>
675
+ <td>78.5
676
+ </td>
677
+ <td>77.6
678
+ </td>
679
+ <td>89.7
680
+ </td>
681
+ <td>89.8
682
+ </td>
683
+ <td>91.8
684
+ </td>
685
+ </tr>
686
+ <tr>
687
+ <td rowspan="4" >Reading comprehension
688
+ </td>
689
+ <td>SQuAD
690
+ </td>
691
+ <td>1
692
+ </td>
693
+ <td>em
694
+ </td>
695
+ <td>76.4
696
+ </td>
697
+ <td>77.0
698
+ </td>
699
+ <td>85.6
700
+ </td>
701
+ <td>81.8
702
+ </td>
703
+ <td>89.3
704
+ </td>
705
+ </tr>
706
+ <tr>
707
+ <td>QuAC (F1)
708
+ </td>
709
+ <td>1
710
+ </td>
711
+ <td>f1
712
+ </td>
713
+ <td>44.4
714
+ </td>
715
+ <td>44.9
716
+ </td>
717
+ <td>51.1
718
+ </td>
719
+ <td>51.1
720
+ </td>
721
+ <td>53.6
722
+ </td>
723
+ </tr>
724
+ <tr>
725
+ <td>BoolQ
726
+ </td>
727
+ <td>0
728
+ </td>
729
+ <td>acc_char
730
+ </td>
731
+ <td>75.7
732
+ </td>
733
+ <td>75.0
734
+ </td>
735
+ <td>79.0
736
+ </td>
737
+ <td>79.4
738
+ </td>
739
+ <td>80.0
740
+ </td>
741
+ </tr>
742
+ <tr>
743
+ <td>DROP (F1)
744
+ </td>
745
+ <td>3
746
+ </td>
747
+ <td>f1
748
+ </td>
749
+ <td>58.4
750
+ </td>
751
+ <td>59.5
752
+ </td>
753
+ <td>79.7
754
+ </td>
755
+ <td>79.6
756
+ </td>
757
+ <td>84.8
758
+ </td>
759
+ </tr>
760
+ </table>
761
+
762
+
763
+
764
+ ### Instruction tuned models
765
+
766
+
767
+ <table>
768
+ <tr>
769
+ <td><strong>Category</strong>
770
+ </td>
771
+ <td><strong>Benchmark</strong>
772
+ </td>
773
+ <td><strong># Shots</strong>
774
+ </td>
775
+ <td><strong>Metric</strong>
776
+ </td>
777
+ <td><strong>Llama 3 8B Instruct</strong>
778
+ </td>
779
+ <td><strong>Llama 3.1 8B Instruct</strong>
780
+ </td>
781
+ <td><strong>Llama 3 70B Instruct</strong>
782
+ </td>
783
+ <td><strong>Llama 3.1 70B Instruct</strong>
784
+ </td>
785
+ <td><strong>Llama 3.1 405B Instruct</strong>
786
+ </td>
787
+ </tr>
788
+ <tr>
789
+ <td rowspan="4" >General
790
+ </td>
791
+ <td>MMLU
792
+ </td>
793
+ <td>5
794
+ </td>
795
+ <td>macro_avg/acc
796
+ </td>
797
+ <td>68.5
798
+ </td>
799
+ <td>69.4
800
+ </td>
801
+ <td>82.0
802
+ </td>
803
+ <td>83.6
804
+ </td>
805
+ <td>87.3
806
+ </td>
807
+ </tr>
808
+ <tr>
809
+ <td>MMLU (CoT)
810
+ </td>
811
+ <td>0
812
+ </td>
813
+ <td>macro_avg/acc
814
+ </td>
815
+ <td>65.3
816
+ </td>
817
+ <td>73.0
818
+ </td>
819
+ <td>80.9
820
+ </td>
821
+ <td>86.0
822
+ </td>
823
+ <td>88.6
824
+ </td>
825
+ </tr>
826
+ <tr>
827
+ <td>MMLU-Pro (CoT)
828
+ </td>
829
+ <td>5
830
+ </td>
831
+ <td>micro_avg/acc_char
832
+ </td>
833
+ <td>45.5
834
+ </td>
835
+ <td>48.3
836
+ </td>
837
+ <td>63.4
838
+ </td>
839
+ <td>66.4
840
+ </td>
841
+ <td>73.3
842
+ </td>
843
+ </tr>
844
+ <tr>
845
+ <td>IFEval
846
+ </td>
847
+ <td>
848
+ </td>
849
+ <td>
850
+ </td>
851
+ <td>76.8
852
+ </td>
853
+ <td>80.4
854
+ </td>
855
+ <td>82.9
856
+ </td>
857
+ <td>87.5
858
+ </td>
859
+ <td>88.6
860
+ </td>
861
+ </tr>
862
+ <tr>
863
+ <td rowspan="2" >Reasoning
864
+ </td>
865
+ <td>ARC-C
866
+ </td>
867
+ <td>0
868
+ </td>
869
+ <td>acc
870
+ </td>
871
+ <td>82.4
872
+ </td>
873
+ <td>83.4
874
+ </td>
875
+ <td>94.4
876
+ </td>
877
+ <td>94.8
878
+ </td>
879
+ <td>96.9
880
+ </td>
881
+ </tr>
882
+ <tr>
883
+ <td>GPQA
884
+ </td>
885
+ <td>0
886
+ </td>
887
+ <td>em
888
+ </td>
889
+ <td>34.6
890
+ </td>
891
+ <td>30.4
892
+ </td>
893
+ <td>39.5
894
+ </td>
895
+ <td>41.7
896
+ </td>
897
+ <td>50.7
898
+ </td>
899
+ </tr>
900
+ <tr>
901
+ <td rowspan="4" >Code
902
+ </td>
903
+ <td>HumanEval
904
+ </td>
905
+ <td>0
906
+ </td>
907
+ <td>pass@1
908
+ </td>
909
+ <td>60.4
910
+ </td>
911
+ <td>72.6
912
+ </td>
913
+ <td>81.7
914
+ </td>
915
+ <td>80.5
916
+ </td>
917
+ <td>89.0
918
+ </td>
919
+ </tr>
920
+ <tr>
921
+ <td>MBPP ++ base version
922
+ </td>
923
+ <td>0
924
+ </td>
925
+ <td>pass@1
926
+ </td>
927
+ <td>70.6
928
+ </td>
929
+ <td>72.8
930
+ </td>
931
+ <td>82.5
932
+ </td>
933
+ <td>86.0
934
+ </td>
935
+ <td>88.6
936
+ </td>
937
+ </tr>
938
+ <tr>
939
+ <td>MultiPL-E HumanEval
940
+ </td>
941
+ <td>0
942
+ </td>
943
+ <td>pass@1
944
+ </td>
945
+ <td>-
946
+ </td>
947
+ <td>50.8
948
+ </td>
949
+ <td>-
950
+ </td>
951
+ <td>65.5
952
+ </td>
953
+ <td>75.2
954
+ </td>
955
+ </tr>
956
+ <tr>
957
+ <td>MultiPL-E MBPP
958
+ </td>
959
+ <td>0
960
+ </td>
961
+ <td>pass@1
962
+ </td>
963
+ <td>-
964
+ </td>
965
+ <td>52.4
966
+ </td>
967
+ <td>-
968
+ </td>
969
+ <td>62.0
970
+ </td>
971
+ <td>65.7
972
+ </td>
973
+ </tr>
974
+ <tr>
975
+ <td rowspan="2" >Math
976
+ </td>
977
+ <td>GSM-8K (CoT)
978
+ </td>
979
+ <td>8
980
+ </td>
981
+ <td>em_maj1@1
982
+ </td>
983
+ <td>80.6
984
+ </td>
985
+ <td>84.5
986
+ </td>
987
+ <td>93.0
988
+ </td>
989
+ <td>95.1
990
+ </td>
991
+ <td>96.8
992
+ </td>
993
+ </tr>
994
+ <tr>
995
+ <td>MATH (CoT)
996
+ </td>
997
+ <td>0
998
+ </td>
999
+ <td>final_em
1000
+ </td>
1001
+ <td>29.1
1002
+ </td>
1003
+ <td>51.9
1004
+ </td>
1005
+ <td>51.0
1006
+ </td>
1007
+ <td>68.0
1008
+ </td>
1009
+ <td>73.8
1010
+ </td>
1011
+ </tr>
1012
+ <tr>
1013
+ <td rowspan="4" >Tool Use
1014
+ </td>
1015
+ <td>API-Bank
1016
+ </td>
1017
+ <td>0
1018
+ </td>
1019
+ <td>acc
1020
+ </td>
1021
+ <td>48.3
1022
+ </td>
1023
+ <td>82.6
1024
+ </td>
1025
+ <td>85.1
1026
+ </td>
1027
+ <td>90.0
1028
+ </td>
1029
+ <td>92.0
1030
+ </td>
1031
+ </tr>
1032
+ <tr>
1033
+ <td>BFCL
1034
+ </td>
1035
+ <td>0
1036
+ </td>
1037
+ <td>acc
1038
+ </td>
1039
+ <td>60.3
1040
+ </td>
1041
+ <td>76.1
1042
+ </td>
1043
+ <td>83.0
1044
+ </td>
1045
+ <td>84.8
1046
+ </td>
1047
+ <td>88.5
1048
+ </td>
1049
+ </tr>
1050
+ <tr>
1051
+ <td>Gorilla Benchmark API Bench
1052
+ </td>
1053
+ <td>0
1054
+ </td>
1055
+ <td>acc
1056
+ </td>
1057
+ <td>1.7
1058
+ </td>
1059
+ <td>8.2
1060
+ </td>
1061
+ <td>14.7
1062
+ </td>
1063
+ <td>29.7
1064
+ </td>
1065
+ <td>35.3
1066
+ </td>
1067
+ </tr>
1068
+ <tr>
1069
+ <td>Nexus (0-shot)
1070
+ </td>
1071
+ <td>0
1072
+ </td>
1073
+ <td>macro_avg/acc
1074
+ </td>
1075
+ <td>18.1
1076
+ </td>
1077
+ <td>38.5
1078
+ </td>
1079
+ <td>47.8
1080
+ </td>
1081
+ <td>56.7
1082
+ </td>
1083
+ <td>58.7
1084
+ </td>
1085
+ </tr>
1086
+ <tr>
1087
+ <td>Multilingual
1088
+ </td>
1089
+ <td>Multilingual MGSM (CoT)
1090
+ </td>
1091
+ <td>0
1092
+ </td>
1093
+ <td>em
1094
+ </td>
1095
+ <td>-
1096
+ </td>
1097
+ <td>68.9
1098
+ </td>
1099
+ <td>-
1100
+ </td>
1101
+ <td>86.9
1102
+ </td>
1103
+ <td>91.6
1104
+ </td>
1105
+ </tr>
1106
+ </table>
1107
+
1108
+ #### Multilingual benchmarks
1109
+
1110
+ <table>
1111
+ <tr>
1112
+ <td><strong>Category</strong>
1113
+ </td>
1114
+ <td><strong>Benchmark</strong>
1115
+ </td>
1116
+ <td><strong>Language</strong>
1117
+ </td>
1118
+ <td><strong>Llama 3.1 8B</strong>
1119
+ </td>
1120
+ <td><strong>Llama 3.1 70B</strong>
1121
+ </td>
1122
+ <td><strong>Llama 3.1 405B</strong>
1123
+ </td>
1124
+ </tr>
1125
+ <tr>
1126
+ <td rowspan="9" ><strong>General</strong>
1127
+ </td>
1128
+ <td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
1129
+ </td>
1130
+ <td>Portuguese
1131
+ </td>
1132
+ <td>62.12
1133
+ </td>
1134
+ <td>80.13
1135
+ </td>
1136
+ <td>84.95
1137
+ </td>
1138
+ </tr>
1139
+ <tr>
1140
+ <td>Spanish
1141
+ </td>
1142
+ <td>62.45
1143
+ </td>
1144
+ <td>80.05
1145
+ </td>
1146
+ <td>85.08
1147
+ </td>
1148
+ </tr>
1149
+ <tr>
1150
+ <td>Italian
1151
+ </td>
1152
+ <td>61.63
1153
+ </td>
1154
+ <td>80.4
1155
+ </td>
1156
+ <td>85.04
1157
+ </td>
1158
+ </tr>
1159
+ <tr>
1160
+ <td>German
1161
+ </td>
1162
+ <td>60.59
1163
+ </td>
1164
+ <td>79.27
1165
+ </td>
1166
+ <td>84.36
1167
+ </td>
1168
+ </tr>
1169
+ <tr>
1170
+ <td>French
1171
+ </td>
1172
+ <td>62.34
1173
+ </td>
1174
+ <td>79.82
1175
+ </td>
1176
+ <td>84.66
1177
+ </td>
1178
+ </tr>
1179
+ <tr>
1180
+ <td>Hindi
1181
+ </td>
1182
+ <td>50.88
1183
+ </td>
1184
+ <td>74.52
1185
+ </td>
1186
+ <td>80.31
1187
+ </td>
1188
+ </tr>
1189
+ <tr>
1190
+ <td>Thai
1191
+ </td>
1192
+ <td>50.32
1193
+ </td>
1194
+ <td>72.95
1195
+ </td>
1196
+ <td>78.21
1197
+ </td>
1198
+ </tr>
1199
+ </table>
1200
+
1201
+
1202
+
1203
+ ## Responsibility & Safety
1204
+
1205
+ As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
1206
+
1207
+
1208
+
1209
+ * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
1210
+ * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
1211
+ * Provide protections for the community to help prevent the misuse of our models.
1212
+
1213
+
1214
+ ### Responsible deployment
1215
+
1216
+ Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/); refer to it to learn more.
1217
+
1218
+
1219
+ #### Llama 3.1 instruct
1220
+
1221
+ Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the developer workload needed to deploy safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.
1222
+
1223
+ **Fine-tuning data**
1224
+
1225
+ We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
1226
+
1227
+ **Refusals and Tone**
1228
+
1229
+ Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
1230
+
1231
+
1232
+ #### Llama 3.1 systems
1233
+
1234
+ **Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
1235
+
1236
+ As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard, and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default, so developers can benefit from system-level safety out of the box.
1237
+
1238
+
1239
+ #### New capabilities
1240
+
1241
+ Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
1242
+
1243
+ **Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.
1244
+
1245
+ **Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
1246
+
1247
+
1248
+ ### Evaluations
1249
+
1250
+ We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
1251
+
1252
+ Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual use, tool calls, coding, and memorization.
1253
+
1254
+ **Red teaming**
1255
+
1256
+ For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
1257
+
1258
+ We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
1259
+
1260
+
1261
+ ### Critical and other risks
1262
+
1263
+ We specifically focused our efforts on mitigating the following critical risk areas:
1264
+
1265
+ **1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
1266
+
1267
+ To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
1268
+
1269
+
1270
+ **2. Child Safety**
1271
+
1272
+ Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.
1273
+
1274
+ **3. Cyber attack enablement**
1275
+
1276
+ Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
1277
+
1278
+ Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
1279
+
1280
+ Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cybersecurity whitepaper to learn more.
1281
+
1282
+
1283
+ ### Community
1284
+
1285
+ Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).
1286
+
1287
+ We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
1288
+
1289
+ Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
1290
+
1291
+
1292
+ ## Ethical Considerations and Limitations
1293
+
1294
+ The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
1295
+
1296
+ But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.