mradermacher committed
Commit 166a86a
1 Parent(s): fb79288

Upload README.md with huggingface_hub

Files changed (1): README.md (+11 -1)
README.md CHANGED

@@ -7,9 +7,17 @@ quantized_by: mradermacher
 tags:
 - moe
 ---
-weighted/imatrix quants of https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
+## About
 
+weighted/imatrix quants of https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
 <!-- provided-files -->
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
 ## Provided Quants
 
 | Link | Type | Size/GB | Notes |
@@ -25,5 +33,7 @@ weighted/imatrix quants of https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
 | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | |
 | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.9 | fast, medium quality |
 | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.6 | fast, medium quality |
+| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.4 | best weighted quant |
+
 
 <!-- end -->