fedric95 committed
Commit 2473bb9
Parent: 633f85e

Update README.md

Files changed (1)
  1. README.md (+5, -1)
README.md CHANGED
@@ -210,6 +210,7 @@ git clone https://huggingface.co/meta-llama/Meta-Llama-3.1-8B
 ```
 cd llama.cpp
 python ./convert_hf_to_gguf.py ../Meta-Llama-3.1-8B --outtype bf16 --outfile ../Meta-Llama-3.1-8B.BF16.gguf
+python ./convert_hf_to_gguf.py ../Meta-Llama-3.1-8B --outtype f16 --outfile ../Meta-Llama-3.1-8B-FP16.gguf
 python ./convert_hf_to_gguf.py ../Meta-Llama-3.1-8B --outtype q8_0 --outfile ../Meta-Llama-3.1-8B-Q8_0.gguf
 ./llama-quantize ../Meta-Llama-3.1-8B.BF16.gguf ../Meta-Llama-3.1-8B-Q6_K.gguf Q6_K
 ./llama-quantize ../Meta-Llama-3.1-8B.BF16.gguf ../Meta-Llama-3.1-8B-Q5_K_S.gguf Q5_K_S
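
Once one of the quantized files above is produced, a quick generation run is a simple way to check that it loads and samples correctly. A minimal sketch, assuming a current llama.cpp build where the example binary is named `llama-cli`; the prompt and token count are arbitrary:

```
# Smoke-test a freshly quantized GGUF with a short generation run.
# The file name matches the quantize commands above; output goes to stdout.
./llama-cli -m ../Meta-Llama-3.1-8B-Q6_K.gguf \
  -p "The capital of France is" \
  -n 32
```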
@@ -243,7 +244,8 @@ cd llama.cpp
 
 | Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
 | -------- | ---------- | --------- | ----------- |
-| [Meta-Llama-3.1-8B-f32.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-BF16.gguf) | BF16 | 16.07GB | 6.4006 +/- 0.03938 |
+| [Meta-Llama-3.1-8B-BF16.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-BF16.gguf) | BF16 | 16.07GB | 6.4006 +/- 0.03938 |
+| [Meta-Llama-3.1-8B-FP16.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-FP16.gguf) | FP16 | 16.07GB | 6.4016 +/- 0.03939 |
 | [Meta-Llama-3.1-8B-Q8_0.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q8_0.gguf) | Q8_0 | 8.54GB | 6.4070 +/- 0.03941 |
 | [Meta-Llama-3.1-8B-Q6_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q6_K.gguf) | Q6_K | 6.60GB | 6.4231 +/- 0.03957 |
 | [Meta-Llama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | 6.4623 +/- 0.03987 |
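
The commit does not include the command behind the perplexity column; figures of this kind are typically produced with llama.cpp's `llama-perplexity` tool on the wikitext-2-raw test split named in the table header. A hedged sketch, in which the dataset URL and local paths are assumptions rather than part of this commit:

```
# Fetch the wikitext-2-raw test set and score one quant against it.
# Dataset URL and paths are illustrative, not taken from the README.
wget https://huggingface.co/datasets/ggml-org/ci/resolve/main/wikitext-2-raw-v1.zip
unzip wikitext-2-raw-v1.zip
./llama-perplexity -m ../Meta-Llama-3.1-8B-Q8_0.gguf -f wikitext-2-raw/wiki.test.raw
```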
@@ -255,6 +257,8 @@ cd llama.cpp
 | [Meta-Llama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | 7.8823 +/- 0.04920 |
 | [Meta-Llama-3.1-8B-Q2_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/Meta-Llama-3.1-8B-Q2_K.gguf) | Q2_K | 3.18GB | 9.7262 +/- 0.06393 |
 
+
+
 ## Downloading using huggingface-cli
 
 First, make sure you have huggingface-cli installed:
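
The context cuts off right where the README's download instructions begin. The install step that typically follows, plus a pull of one quant from the repo linked in the table above, might look like this; the `--include` pattern is illustrative:

```
# Install the CLI extra of huggingface_hub, then fetch a single quant.
# The repo ID comes from the table links; the include pattern is an example.
pip install -U "huggingface_hub[cli]"
huggingface-cli download fedric95/Meta-Llama-3.1-8B-GGUF \
  --include "Meta-Llama-3.1-8B-Q8_0.gguf" \
  --local-dir ./
```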