Upload README.md with huggingface_hub
README.md CHANGED
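The commit title above says the file was pushed with huggingface_hub. A minimal sketch of how such an upload is typically done with that library, assuming you are already authenticated; the repo id is taken from the file links in the diff below, and the commit message simply reuses the title:

```python
# Sketch: push a local README.md to the model repo with huggingface_hub.
# Assumes prior login (huggingface-cli login) or an explicit token=... argument.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",                   # local file to upload
    path_in_repo="README.md",                      # destination path in the repo
    repo_id="bartowski/Qwen2.5-3B-Instruct-GGUF",  # repo id from the links below
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```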
@@ -1,12 +1,11 @@
 ---
-base_model: Qwen/Qwen2.5-3B-Instruct
-pipeline_tag: text-generation
 quantized_by: bartowski
+pipeline_tag: text-generation
 ---
 
 ## Llamacpp imatrix Quantizations of Qwen2.5-3B-Instruct
 
-Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3772">b3772</a> for quantization.
 
 Original model: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct
 
@@ -28,7 +27,7 @@ Run them in [LM Studio](https://lmstudio.ai/)
 
 | Filename | Quant type | File Size | Split | Description |
 | -------- | ---------- | --------- | ----- | ----------- |
-| [Qwen2.5-3B-Instruct-
+| [Qwen2.5-3B-Instruct-f16.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-f16.gguf) | f16 | 6.18GB | false | Full F16 weights. |
 | [Qwen2.5-3B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q8_0.gguf) | Q8_0 | 3.29GB | false | Extremely high quality, generally unneeded but max available quant. |
 | [Qwen2.5-3B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q6_K_L.gguf) | Q6_K_L | 2.61GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
 | [Qwen2.5-3B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q6_K.gguf) | Q6_K | 2.54GB | false | Very high quality, near perfect, *recommended*. |
@@ -45,7 +44,13 @@ Run them in [LM Studio](https://lmstudio.ai/)
 | [Qwen2.5-3B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 1.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
 | [Qwen2.5-3B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-IQ4_XS.gguf) | IQ4_XS | 1.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
 | [Qwen2.5-3B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q3_K_L.gguf) | Q3_K_L | 1.71GB | false | Lower quality but usable, good for low RAM availability. |
+| [Qwen2.5-3B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q3_K_M.gguf) | Q3_K_M | 1.59GB | false | Low quality. |
 | [Qwen2.5-3B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-IQ3_M.gguf) | IQ3_M | 1.49GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
+| [Qwen2.5-3B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q3_K_S.gguf) | Q3_K_S | 1.45GB | false | Low quality, not recommended. |
+| [Qwen2.5-3B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-IQ3_XS.gguf) | IQ3_XS | 1.39GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
+| [Qwen2.5-3B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q2_K_L.gguf) | Q2_K_L | 1.35GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
+| [Qwen2.5-3B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q2_K.gguf) | Q2_K | 1.27GB | false | Very low quality but surprisingly usable. |
+| [Qwen2.5-3B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-IQ2_M.gguf) | IQ2_M | 1.14GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
 
 ## Embed/output weights
 
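The table rows in the diff above link to the individual GGUF files in the repo. A minimal sketch of fetching one of them programmatically with huggingface_hub; the repo id and filename are taken from the Q6_K_L row, and picking that particular quant is only an illustration, any listed file works the same way:

```python
# Sketch: download one quant listed in the table with huggingface_hub.
from huggingface_hub import hf_hub_download

# Repo id and filename come from the Q6_K_L row above; substitute any other listed quant.
local_path = hf_hub_download(
    repo_id="bartowski/Qwen2.5-3B-Instruct-GGUF",
    filename="Qwen2.5-3B-Instruct-Q6_K_L.gguf",
)
print(local_path)  # cached path to the .gguf file, ready to load in llama.cpp or LM Studio
```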