Original model: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B

| [Meta-Llama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | 7.8823 +/- 0.04920 |
| [Meta-Llama-3.1-8B-Q2_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q2_K.gguf) | Q2_K | 3.18GB | 9.7262 +/- 0.06393 |

## Benchmark Results

| Benchmark | Quant type | Metric |
| -------- | ---------- | --------- |
| WinoGrande (0-shot) | BF16 | 73.7964 +/- 1.2359 |
| WinoGrande (0-shot) | Q8_0 | 74.1121 +/- 1.2311 |
| WinoGrande (0-shot) | Q6_K | 74.0126 +/- 1.2331 |
| WinoGrande (0-shot) | Q5_K_M | 74.8815 +/- 1.2194 |
| WinoGrande (0-shot) | Q4_K_M | 73.1650 +/- 1.2453 |
| WinoGrande (0-shot) | Q4_K_S | 74.4076 +/- 1.2269 |
| WinoGrande (0-shot) | Q3_K_L | 73.3807 +/- 1.2426 |
| WinoGrande (0-shot) | Q3_K_M | 72.8278 +/- 1.2507 |
| WinoGrande (0-shot) | Q3_K_S | 72.3539 +/- 1.2575 |
| WinoGrande (0-shot) | Q2_K | 68.4294 +/- 1.3063 |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed: