c4ai-command-r-plus-08-2024-IMat-GGUF

Llama.cpp imatrix quantization of CohereForAI/c4ai-command-r-plus-08-2024

Original Model: CohereForAI/c4ai-command-r-plus-08-2024
Original dtype: FP16 (float16)
Quantized by: llama.cpp b3645
IMatrix dataset: here


Files

IMatrix

Status: ✅ Available
Link: here

Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| c4ai-command-r-plus-08-2024.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q6_K/* | Q6_K | 85.17GB | ✅ Available | ⚪ Static | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q4_K/* | Q4_K | 62.75GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q3_K/* | Q3_K | 50.98GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q2_K.gguf | Q2_K | 39.50GB | ✅ Available | 🟢 IMatrix | 📦 No |

All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| c4ai-command-r-plus-08-2024.FP16/* | F16 | 207.64GB | ✅ Available | ⚪ Static | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q6_K/* | Q6_K | 85.17GB | ✅ Available | ⚪ Static | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q5_K/* | Q5_K | 73.62GB | ✅ Available | ⚪ Static | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q5_K_S/* | Q5_K_S | 71.80GB | ✅ Available | ⚪ Static | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q4_K/* | Q4_K | 62.75GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q4_K_S/* | Q4_K_S | 59.64GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.IQ4_NL/* | IQ4_NL | 59.32GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.IQ4_XS/* | IQ4_XS | 56.20GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q3_K/* | Q3_K | 50.98GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q3_K_L/* | Q3_K_L | 55.40GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.Q3_K_S/* | Q3_K_S | 45.85GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.IQ3_M/* | IQ3_M | 47.68GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.IQ3_S/* | IQ3_S | 45.96GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| c4ai-command-r-plus-08-2024.IQ3_XS.gguf | IQ3_XS | 43.60GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.IQ3_XXS.gguf | IQ3_XXS | 40.66GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.Q2_K.gguf | Q2_K | 39.50GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.Q2_K_S.gguf | Q2_K_S | 36.60GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.IQ2_M.gguf | IQ2_M | 36.04GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.IQ2_S.gguf | IQ2_S | 33.32GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.IQ2_XS.gguf | IQ2_XS | 31.63GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.IQ2_XXS.gguf | IQ2_XXS | 28.61GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.IQ1_M.gguf | IQ1_M | 25.22GB | ✅ Available | 🟢 IMatrix | 📦 No |
| c4ai-command-r-plus-08-2024.IQ1_S.gguf | IQ1_S | 23.18GB | ✅ Available | 🟢 IMatrix | 📦 No |

Downloading using huggingface-cli

If you do not have huggingface-cli installed:

pip install -U "huggingface_hub[cli]"

Download the specific file you want:

huggingface-cli download legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF --include "c4ai-command-r-plus-08-2024.Q8_0.gguf" --local-dir ./
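
If you prefer Python, the same download can be done with the huggingface_hub library installed by the pip command above. A minimal sketch; pick any non-split filename from the tables above (Q2_K used here as an example):

from huggingface_hub import hf_hub_download

# Download a single (non-split) GGUF file from this repo.
path = hf_hub_download(
    repo_id="legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF",
    filename="c4ai-command-r-plus-08-2024.Q2_K.gguf",
    local_dir="./",
)
print(path)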

If the model file is large, it has been split into multiple files (see the Is Split column above). To download all parts to a local folder, run:

huggingface-cli download legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF --include "c4ai-command-r-plus-08-2024.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
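
The Python equivalent uses snapshot_download with an allow pattern that mirrors the --include flag. A sketch; substitute the quant folder you want:

from huggingface_hub import snapshot_download

# Download every chunk of a split quant (e.g. Q8_0) into ./c4ai-command-r-plus-08-2024.Q8_0/
snapshot_download(
    repo_id="legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF",
    allow_patterns=["c4ai-command-r-plus-08-2024.Q8_0/*"],
    local_dir="./",
)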

Inference

Simple chat template

<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{assistant_response}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{next_user_prompt}<|END_OF_TURN_TOKEN|>

Chat template with system prompt

<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{assistant_response}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{next_user_prompt}<|END_OF_TURN_TOKEN|>
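
When driving the model programmatically, the same template can be assembled with a small helper. A minimal sketch; build_prompt is a hypothetical helper written for this card, not part of any library, and the special tokens are exactly those shown above:

# Hypothetical helper that assembles the Command R chat template shown above.
def build_prompt(messages, system_prompt=None):
    """messages is a list of (role, text) pairs, role in {"user", "assistant"}."""
    token_for_role = {"user": "<|USER_TOKEN|>", "assistant": "<|CHATBOT_TOKEN|>"}
    prompt = "<BOS_TOKEN>"
    if system_prompt is not None:
        prompt += f"<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|>"
    for role, text in messages:
        prompt += f"<|START_OF_TURN_TOKEN|>{token_for_role[role]}{text}<|END_OF_TURN_TOKEN|>"
    # Open an assistant turn so generation continues as the chatbot.
    return prompt + "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"

print(build_prompt([("user", "Hello!")], system_prompt="You are a helpful assistant."))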

Llama.cpp

llama.cpp/main -m c4ai-command-r-plus-08-2024.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
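
As an alternative to the CLI (note that in recent llama.cpp builds the example binary is named llama-cli rather than main), the GGUF can also be loaded from Python via llama-cpp-python. A sketch, assuming pip install llama-cpp-python and a non-split quant on disk; create_chat_completion applies the chat template stored in the GGUF metadata, so no manual token formatting is needed:

from llama_cpp import Llama

# Load the quantized model; n_gpu_layers controls GPU offload.
llm = Llama(
    model_path="c4ai-command-r-plus-08-2024.Q2_K.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # -1 = offload all layers that fit on the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])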

FAQ

Why is the IMatrix not applied everywhere?

According to this investigation, only the lower quantizations appear to benefit from the imatrix input (as judged by HellaSwag results).

How do I merge a split GGUF?

  1. Make sure you have gguf-split available
  2. Locate your GGUF chunks folder (e.g. c4ai-command-r-plus-08-2024.Q8_0)
  3. Run gguf-split --merge c4ai-command-r-plus-08-2024.Q8_0/c4ai-command-r-plus-08-2024.Q8_0-00001-of-XXXXX.gguf c4ai-command-r-plus-08-2024.Q8_0.gguf
    • Make sure to point gguf-split to the first chunk of the split (a scripted version of this step follows below).
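
If you script this step, a small wrapper can locate the first chunk automatically. A sketch, assuming the gguf-split binary from llama.cpp is on your PATH and the chunks folder sits in the working directory:

import glob
import subprocess

# gguf-split must be pointed at the first chunk (-00001-of-...).
folder = "c4ai-command-r-plus-08-2024.Q8_0"
first_chunk = sorted(glob.glob(f"{folder}/*-00001-of-*.gguf"))[0]

# Merge all chunks into a single GGUF file next to the folder.
subprocess.run(["gguf-split", "--merge", first_chunk, f"{folder}.gguf"], check=True)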

Got a suggestion? Ping me @legraphista!
