mradermacher committed
Commit 4388052 · verified · 1 parent: c578347

auto-patch README.md

Files changed (1):
  1. README.md +9 -1
README.md CHANGED

@@ -16,7 +16,7 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/perplexity-ai/r1-1776-distill-llama-70b
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-i1-GGUF
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -30,7 +30,15 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q2_K.gguf) | Q2_K | 26.5 | |
+| [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
+| [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
 | [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
+| [GGUF](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
+| [PART 1](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
+| [PART 1](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/r1-1776-distill-llama-70b-GGUF/resolve/main/r1-1776-distill-llama-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
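The Q6_K and Q8_0 entries in the new table are split into two parts because each exceeds Hugging Face's per-file size limit; as the README's usage section notes, multi-part GGUF files are rejoined by simple byte-level concatenation. A minimal Python sketch of that join, assuming both Q6_K parts have already been downloaded into the working directory (the file names are taken from the table above; nothing in this snippet ships with the repo):

```python
from pathlib import Path
import shutil

# Parts of one GGUF file, in order; they are plain byte-level splits.
parts = [
    Path("r1-1776-distill-llama-70b.Q6_K.gguf.part1of2"),
    Path("r1-1776-distill-llama-70b.Q6_K.gguf.part2of2"),
]
out = Path("r1-1776-distill-llama-70b.Q6_K.gguf")

with out.open("wb") as dst:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, dst)  # append each part to the output
```

The same pattern applies to the Q8_0 parts; on a Unix shell, `cat` of part1 then part2 into a single output file achieves the same result.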