---
base_model: THUDM/glm-4-9b-chat-1m
inference: false
language:
- zh
- en
library_name: gguf
license: other
license_link: https://huggingface.co/THUDM/glm-4-9b-chat-1m/blob/main/LICENSE
license_name: glm-4
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- glm
- chatglm
- thudm
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---

# glm-4-9b-chat-1m-GGUF
_Llama.cpp static quantization of THUDM/glm-4-9b-chat-1m_

Original Model: [THUDM/glm-4-9b-chat-1m](https://huggingface.co/THUDM/glm-4-9b-chat-1m)
Original dtype: `BF16` (`bfloat16`)
Quantized with: [llama.cpp PR #6999](https://github.com/ggerganov/llama.cpp/pull/6999)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Simple chat template](#simple-chat-template)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat-1m.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat-1m.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q4_K_S | Q4_K_S | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ4_NL | IQ4_NL | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ4_XS | IQ4_XS | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q3_K_L | Q3_K_L | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q3_K_S | Q3_K_S | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ3_M | IQ3_M | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ3_S | IQ3_S | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ3_XS | IQ3_XS | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/glm-4-9b-chat-1m-GGUF --include "glm-4-9b-chat-1m.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/glm-4-9b-chat-1m-GGUF --include "glm-4-9b-chat-1m.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```

---

## Inference

### Simple chat template
```
[gMASK]<sop><|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```
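
For illustration, here is the template filled in with an invented two-turn exchange; ending the prompt right after `<|assistant|>` is what cues the model to generate the next reply:
```
[gMASK]<sop><|user|>
What does Q8_0 mean?<|assistant|>
It is an 8-bit llama.cpp quantization format.<|user|>
And Q4_K?<|assistant|>
```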

### Chat template with system prompt
```
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```
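
Likewise, a filled-in sketch of the system-prompt variant (the prompt contents are invented):
```
[gMASK]<sop><|system|>
You are a concise assistant.<|user|>
Summarize GGUF in one sentence.<|assistant|>
```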

### Llama.cpp
```
llama.cpp/main -m glm-4-9b-chat-1m.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
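
A minimal non-interactive sketch, assuming a bash shell (the `$'...'` quoting turns `\n` into a real newline so the prompt matches the simple template above; the question is invented):
```
# one-shot generation following the simple chat template
llama.cpp/main -m glm-4-9b-chat-1m.Q8_0.gguf --color \
  -p $'[gMASK]<sop><|user|>\nWhat does Q8_0 mean?<|assistant|>'
```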

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), only the lower quantizations appear to benefit from the IMatrix input (as measured by HellaSwag scores).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `glm-4-9b-chat-1m.Q8_0`)
3. Run `gguf-split --merge glm-4-9b-chat-1m.Q8_0/glm-4-9b-chat-1m.Q8_0-00001-of-XXXXX.gguf glm-4-9b-chat-1m.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split (or see the alternative sketch after this list).
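
As an alternative (worth verifying against your build), reasonably recent llama.cpp releases can load a split GGUF without merging; pointing `-m` at the first chunk is enough and the remaining chunks are picked up automatically:
```
# assumes a llama.cpp build with split-GGUF loading support
llama.cpp/main -m glm-4-9b-chat-1m.Q8_0/glm-4-9b-chat-1m.Q8_0-00001-of-XXXXX.gguf --color -i
```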

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!