---
base_model: akjindal53244/Llama-3.1-Storm-8B
inference: false
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: gguf
license: llama3.1
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- llama-3.1
- conversational
- instruction following
- reasoning
- function calling
- mergekit
- finetuning
- axolotl
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# Llama-3.1-Storm-8B-IMat-GGUF
_Llama.cpp imatrix quantization of akjindal53244/Llama-3.1-Storm-8B_

Original Model: [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3606](https://github.com/ggerganov/llama.cpp/releases/tag/b3606)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [IMatrix](#imatrix)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Simple chat template](#simple-chat-template)
  - [Chat template with system prompt](#chat-template-with-system-prompt)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/Llama-3.1-Storm-8B-IMat-GGUF/blob/main/imatrix.dat)

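For reference, an importance matrix like the one linked above is typically computed with llama.cpp's `llama-imatrix` tool. A minimal sketch, assuming a full-precision GGUF and the calibration dataset linked above sit in the working directory (file names here are assumptions):
```
# Sketch: compute an importance matrix over a calibration dataset.
# `llama-imatrix` is the binary name in recent llama.cpp releases; adjust paths to your setup.
./llama-imatrix -m Llama-3.1-Storm-8B.BF16.gguf -f calibration_datav3.txt -o imatrix.dat
```
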
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Llama-3.1-Storm-8B.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Llama-3.1-Storm-8B.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| Llama-3.1-Storm-8B.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Llama-3.1-Storm-8B.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Llama-3.1-Storm-8B-IMat-GGUF --include "Llama-3.1-Storm-8B.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Llama-3.1-Storm-8B-IMat-GGUF --include "Llama-3.1-Storm-8B.Q8_0/*" --local-dir ./
# see the FAQ for merging GGUFs
```

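If you want every quant at once, omitting `--include` pulls the entire repository (a sketch; note that the full set of quants is large):
```
# Sketch: download the whole repository (all quants) into the current directory
huggingface-cli download legraphista/Llama-3.1-Storm-8B-IMat-GGUF --local-dir ./
```
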
---

## Inference

### Simple chat template
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>

{next_user_prompt}<|eot_id|>
```

### Chat template with system prompt
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>

{next_user_prompt}<|eot_id|>
```

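Rather than assembling these tags by hand, recent llama.cpp builds can apply the Llama 3 chat template for you in conversational mode (a sketch; the binary name and flag availability depend on your build):
```
# Sketch: conversational mode applies the model's chat template automatically;
# with -cnv, the -p string is used as the system prompt.
llama.cpp/llama-cli -m Llama-3.1-Storm-8B.Q8_0.gguf -cnv -p "You are a helpful assistant."
```
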
### Llama.cpp
```
llama.cpp/main -m Llama-3.1-Storm-8B.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

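As a concrete sketch, here is the same command with the simple chat template filled in (the question is only a placeholder; `$'...'` quoting turns `\n` into real newlines):
```
# Sketch: the simple chat template filled in with a placeholder user prompt
llama.cpp/main -m Llama-3.1-Storm-8B.Q8_0.gguf --color \
  -p $'<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'
```
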
---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower-bit quantizations benefit from the imatrix input (as per the HellaSwag results).

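For context, applying an imatrix during quantization looks roughly like this with llama.cpp's `llama-quantize` tool (a sketch; file names are assumptions):
```
# Sketch: quantize to a low-bit type with the importance matrix applied
./llama-quantize --imatrix imatrix.dat Llama-3.1-Storm-8B.BF16.gguf Llama-3.1-Storm-8B.IQ2_M.gguf IQ2_M
```
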
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split` (named `llama-gguf-split` in newer releases)
2. Locate your GGUF chunks folder (ex: `Llama-3.1-Storm-8B.Q8_0`)
3. Run `gguf-split --merge Llama-3.1-Storm-8B.Q8_0/Llama-3.1-Storm-8B.Q8_0-00001-of-XXXXX.gguf Llama-3.1-Storm-8B.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split, as in the sketch below.

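Note that merging is often unnecessary: llama.cpp can load a split model directly when pointed at the first chunk and picks up the remaining shards automatically (a sketch, keeping the shard-count placeholder from above):
```
# Sketch: load a split GGUF directly; no merge step required
llama.cpp/main -m Llama-3.1-Storm-8B.Q8_0/Llama-3.1-Storm-8B.Q8_0-00001-of-XXXXX.gguf --color -i -p "prompt here"
```
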
---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!