legraphista committed
Commit 9891491 • 1 Parent(s): da0eee2

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +123 -0

README.md ADDED
---
base_model: mistralai/Mistral-7B-v0.3
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---

# Mistral-7B-v0.3-IMat-GGUF
_Llama.cpp imatrix quantization of mistralai/Mistral-7B-v0.3_

Original Model: [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3003](https://github.com/ggerganov/llama.cpp/releases/tag/b3003)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)

- [Mistral-7B-v0.3-IMat-GGUF](#mistral-7b-v0-3-imat-gguf)
  - [Files](#files)
    - [IMatrix](#imatrix)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
  - [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
  - [Inference](#inference)
    - [Llama.cpp](#llama-cpp)
  - [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Mistral-7B-v0.3.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ No | - |
| Mistral-7B-v0.3.Q6_K | Q6_K | - | ⏳ Processing | ⚪ No | - |
| Mistral-7B-v0.3.Q4_K | Q4_K | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q3_K | Q3_K | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q2_K | Q2_K | - | ⏳ Processing | 🟢 Yes | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Mistral-7B-v0.3.FP16 | F16 | - | ⏳ Processing | ⚪ No | - |
| Mistral-7B-v0.3.BF16 | BF16 | - | ⏳ Processing | ⚪ No | - |
| Mistral-7B-v0.3.Q5_K | Q5_K | - | ⏳ Processing | ⚪ No | - |
| Mistral-7B-v0.3.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ No | - |
| Mistral-7B-v0.3.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 Yes | - |

81
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF --include "Mistral-7B-v0.3.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF --include "Mistral-7B-v0.3.Q8_0/*" --local-dir Mistral-7B-v0.3.Q8_0
# see the FAQ for merging GGUFs
```
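If you want several artifacts in one call (for example a quant plus the `imatrix.dat`), `--include` accepts multiple patterns. A minimal sketch; the file names below are examples, so adjust them to the quants listed in the tables above:
```
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF \
  --include "Mistral-7B-v0.3.Q6_K.gguf" "imatrix.dat" \
  --local-dir ./
```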
95

---

## Inference

### Llama.cpp
```
llama.cpp/main -m Mistral-7B-v0.3.Q8_0.gguf --color -i -p "prompt here"
```
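If you prefer an HTTP endpoint over the interactive CLI, llama.cpp also ships a server. A minimal sketch, assuming a build from the same era as above (the binary is named `server` in these releases; newer builds rename it to `llama-server`), with host, port, and context size chosen arbitrarily:
```
# start the llama.cpp HTTP server on the quantized model
llama.cpp/server -m Mistral-7B-v0.3.Q8_0.gguf -c 4096 --host 127.0.0.1 --port 8080

# in another shell: request a completion from the /completion endpoint
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128}'
```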
104

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).
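For context, here is a minimal sketch of how an importance matrix is typically computed and then fed into quantization with the llama.cpp tools of this era (`imatrix` and `quantize`; newer releases rename them to `llama-imatrix` / `llama-quantize`). The file names mirror this repo but are illustrative, and this is not necessarily the exact pipeline used here:
```
# 1) compute the importance matrix from a calibration text file
#    (the calibration data used for this repo is linked at the top of the card)
llama.cpp/imatrix -m Mistral-7B-v0.3.BF16.gguf -f imatrix.calibration.medium.raw -o imatrix.dat

# 2) quantize with the imatrix applied (the lower-bit quants in the tables above)
llama.cpp/quantize --imatrix imatrix.dat Mistral-7B-v0.3.BF16.gguf Mistral-7B-v0.3.IQ2_M.gguf IQ2_M

# 3) quantize without an imatrix (the larger quants, e.g. Q8_0/Q6_K, are static)
llama.cpp/quantize Mistral-7B-v0.3.BF16.gguf Mistral-7B-v0.3.Q8_0.gguf Q8_0
```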
111

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Mistral-7B-v0.3.Q8_0`)
3. Run `gguf-split --merge Mistral-7B-v0.3.Q8_0/Mistral-7B-v0.3.Q8_0-00001-of-XXXXX.gguf Mistral-7B-v0.3.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!