---
base_model: THUDM/glm-4-9b-chat-1m
inference: false
language:
- zh
- en
library_name: gguf
license: other
license_link: https://huggingface.co./THUDM/glm-4-9b-chat-1m/blob/main/LICENSE
license_name: glm-4
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- glm
- chatglm
- thudm
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---

# glm-4-9b-chat-1m-GGUF
_Llama.cpp static quantization of THUDM/glm-4-9b-chat-1m_

Original Model: [THUDM/glm-4-9b-chat-1m](https://huggingface.co./THUDM/glm-4-9b-chat-1m)
Original dtype: `BF16` (`bfloat16`)
Quantized by: [llama.cpp/pull/6999](https://github.com/ggerganov/llama.cpp/pull/6999)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Simple chat template](#simple-chat-template)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat-1m.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| [glm-4-9b-chat-1m.Q6_K.gguf](https://huggingface.co./legraphista/glm-4-9b-chat-1m-GGUF/blob/main/glm-4-9b-chat-1m.Q6_K.gguf) | Q6_K | 8.33GB | ✅ Available | ⚪ Static | 📦 No |
| [glm-4-9b-chat-1m.Q4_K.gguf](https://huggingface.co./legraphista/glm-4-9b-chat-1m-GGUF/blob/main/glm-4-9b-chat-1m.Q4_K.gguf) | Q4_K | 6.31GB | ✅ Available | ⚪ Static | 📦 No |
| glm-4-9b-chat-1m.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat-1m.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| [glm-4-9b-chat-1m.Q6_K.gguf](https://huggingface.co./legraphista/glm-4-9b-chat-1m-GGUF/blob/main/glm-4-9b-chat-1m.Q6_K.gguf) | Q6_K | 8.33GB | ✅ Available | ⚪ Static | 📦 No |
| glm-4-9b-chat-1m.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| [glm-4-9b-chat-1m.Q4_K.gguf](https://huggingface.co./legraphista/glm-4-9b-chat-1m-GGUF/blob/main/glm-4-9b-chat-1m.Q4_K.gguf) | Q4_K | 6.31GB | ✅ Available | ⚪ Static | 📦 No |
| glm-4-9b-chat-1m.Q4_K_S | Q4_K_S | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ4_NL | IQ4_NL | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ4_XS | IQ4_XS | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q3_K_L | Q3_K_L | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q3_K_S | Q3_K_S | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ3_M | IQ3_M | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ3_S | IQ3_S | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.IQ3_XS | IQ3_XS | - | ⏳ Processing | ⚪ Static | - |
| glm-4-9b-chat-1m.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |

---

## Downloading using huggingface-cli

If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/glm-4-9b-chat-1m-GGUF --include "glm-4-9b-chat-1m.Q8_0.gguf" --local-dir ./
```
If the model file is big, it may have been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/glm-4-9b-chat-1m-GGUF --include "glm-4-9b-chat-1m.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
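The same downloads can also be scripted from Python with `huggingface_hub` (the package installed above). A minimal sketch, using the already-available Q6_K file as an example:
```python
# Sketch: download one quant programmatically instead of via huggingface-cli.
# Assumes huggingface_hub is installed (see pip command above).
from huggingface_hub import hf_hub_download

# Same repo as the CLI examples; swap the filename for the quant you want.
local_path = hf_hub_download(
    repo_id="legraphista/glm-4-9b-chat-1m-GGUF",
    filename="glm-4-9b-chat-1m.Q6_K.gguf",
    local_dir="./",
)
print(f"Downloaded to {local_path}")
```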
---

## Inference

### Simple chat template
```
[gMASK]<|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```

### Chat template with system prompt
```
[gMASK]<|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```

### Llama.cpp
```
llama.cpp/main -m glm-4-9b-chat-1m.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
A Python sketch that drives the same chat template with `llama-cpp-python` is included at the end of this card.

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `glm-4-9b-chat-1m.Q8_0`)
3. Run `gguf-split --merge glm-4-9b-chat-1m.Q8_0/glm-4-9b-chat-1m.Q8_0-00001-of-XXXXX.gguf glm-4-9b-chat-1m.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
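---

### Python inference via llama-cpp-python

A minimal sketch of running one of these quants from Python, assuming [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is installed (`pip install llama-cpp-python`) and a quant has been downloaded locally. The filename, context size, and generation settings below are illustrative only; the prompt string simply follows the chat template documented in the Inference section.
```python
# Sketch: load a local GGUF and prompt it with the GLM-4 chat template shown above.
from llama_cpp import Llama

llm = Llama(
    model_path="glm-4-9b-chat-1m.Q6_K.gguf",  # any downloaded quant from this repo
    n_ctx=8192,  # illustrative; raise it if you need a longer context
)

# Build the prompt by hand, following the "Chat template with system prompt" section.
system_prompt = "You are a helpful assistant."
user_prompt = "Summarize what GGUF is in one sentence."
prompt = (
    "[gMASK]<|system|>\n"
    f"{system_prompt}<|user|>\n"
    f"{user_prompt}<|assistant|>\n"
)

out = llm(
    prompt,
    max_tokens=256,
    stop=["<|user|>"],  # stop before the model opens a new user turn
)
print(out["choices"][0]["text"])
```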