Llamacpp quants
- .gitattributes +12 -0
- README.md +32 -0
- WhiteRabbitNeo-7B-v1.5a-Q2_K.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q3_K_L.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q3_K_M.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q3_K_S.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q4_0.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q4_K_S.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q5_0.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q5_K_M.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q5_K_S.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q6_K.gguf +3 -0
- WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,32 @@
---
license: other
license_name: deepseek
license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp Quantizations of WhiteRabbitNeo-7B-v1.5a

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2354">b2354</a> for quantization.

Original model: https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a

Download a single file (not the whole branch) from the table below; a short download sketch follows the table.

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf) | Q8_0 | 7.16GB | Extremely high quality, generally unneeded but max available quant. |
| [WhiteRabbitNeo-7B-v1.5a-Q6_K.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q6_K.gguf) | Q6_K | 5.53GB | Very high quality, near perfect, *recommended*. |
| [WhiteRabbitNeo-7B-v1.5a-Q5_K_M.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q5_K_M.gguf) | Q5_K_M | 4.78GB | High quality, very usable. |
| [WhiteRabbitNeo-7B-v1.5a-Q5_K_S.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q5_K_S.gguf) | Q5_K_S | 4.65GB | High quality, very usable. |
| [WhiteRabbitNeo-7B-v1.5a-Q5_0.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q5_0.gguf) | Q5_0 | 4.65GB | High quality, older format, generally not recommended. |
| [WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf) | Q4_K_M | 4.08GB | Good quality, similar to 4.25 bpw. |
| [WhiteRabbitNeo-7B-v1.5a-Q4_K_S.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q4_K_S.gguf) | Q4_K_S | 3.85GB | Slightly lower quality with small space savings. |
| [WhiteRabbitNeo-7B-v1.5a-Q4_0.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q4_0.gguf) | Q4_0 | 3.82GB | Decent quality, older format, generally not recommended. |
| [WhiteRabbitNeo-7B-v1.5a-Q3_K_L.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q3_K_L.gguf) | Q3_K_L | 3.59GB | Lower quality but usable, good for low RAM availability. |
| [WhiteRabbitNeo-7B-v1.5a-Q3_K_M.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q3_K_M.gguf) | Q3_K_M | 3.29GB | Even lower quality. |
| [WhiteRabbitNeo-7B-v1.5a-Q3_K_S.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q3_K_S.gguf) | Q3_K_S | 2.94GB | Low quality, not recommended. |
| [WhiteRabbitNeo-7B-v1.5a-Q2_K.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF//main/WhiteRabbitNeo-7B-v1.5a-Q2_K.gguf) | Q2_K | 2.53GB | Extremely low quality, *not* recommended. |
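For grabbing just one quant programmatically instead of cloning the branch, a minimal download sketch using the `huggingface_hub` Python client (the library choice and the Q4_K_M pick are illustrative assumptions, not something this repo prescribes):

```python
# Minimal sketch: fetch a single GGUF file from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF",
    filename="WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf",  # any filename from the table above
)
print(path)  # local path of the cached .gguf
```

Any filename from the table can be substituted; the per-file links above also work for plain browser downloads.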

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
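As a usage sketch beyond what the README states: one common way to run a quant like these is through the `llama-cpp-python` bindings on top of llama.cpp; the binding choice, context size, and prompt below are assumptions for illustration only.

```python
# Sketch only: assumes llama-cpp-python is installed and the Q4_K_M quant
# from this repo has already been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf",
    n_ctx=4096,  # placeholder context length, not specified by this repo
)

out = llm("Explain what a GGUF file is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```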
WhiteRabbitNeo-7B-v1.5a-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd63a343c88e992d1ecfaa7ef268b9e58aefa7ebb66adf82c9dc8a22c05b3330
size 2534104288
WhiteRabbitNeo-7B-v1.5a-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f1e6ee35813d98630d2414257c9c064ddf40c93acbe0f5ee41911bc2259ffc3
size 3598458080
WhiteRabbitNeo-7B-v1.5a-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b24ad0fcfac834051cb3bf75fd7bd328837683c7c63d6a8bf99c924258f4d390
size 3299351776
WhiteRabbitNeo-7B-v1.5a-Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc5f2715ac752f2fbb6372073d57a2ce3e89f50972a99ae9e4617edb4fceee8d
size 2949651680
WhiteRabbitNeo-7B-v1.5a-Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:07a57582a86a7cfbe17bc78a36c251b40c9c5fa19335996a25d7bebb0fecc515
size 3827293408
WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:339516dfc83ac6c031d67836f41eef865bb78b6ae79024e99fcbc19acceffd3d
size 4082490592
WhiteRabbitNeo-7B-v1.5a-Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa9a137d9fb9679c3bdd4220020d3390e5d05e090bb6e6091971bd887f1585c1
size 3858226400
WhiteRabbitNeo-7B-v1.5a-Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1e995f9d4b3dfec1ab5ee0df504b13141cf4cf9bae62e9ed47da8e8d71db14e
size 4653309152
WhiteRabbitNeo-7B-v1.5a-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89e99857bf0a69d1e7ed9b312fe4689a0232edd0a1967461a3d0f2bba933c02b
size 4784774368
WhiteRabbitNeo-7B-v1.5a-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84df22dfc2b7c6dfc70129defebc95a855501ad4d04b91ec51262327fecc9f9a
size 4653309152
WhiteRabbitNeo-7B-v1.5a-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a684839be5cc46a00a7bec3ef92fe66a5da85f9c5536ecb506086b4043c18ad1
size 5530950880
WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:adc85196571fb8128abc0092d7c0c8953016eedf8b7139f66d22f10f7d622e48
size 7163354336
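Each `.gguf` entry above is a Git LFS pointer: `oid sha256:` is the SHA-256 digest of the actual file and `size` is its byte count. A minimal integrity-check sketch, assuming the Q8_0 file has already been downloaded locally (expected values copied from the pointer above):

```python
# Sketch: verify a downloaded GGUF against its Git LFS pointer (oid + size).
import hashlib
import os

FILENAME = "WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf"
EXPECTED_OID = "adc85196571fb8128abc0092d7c0c8953016eedf8b7139f66d22f10f7d622e48"
EXPECTED_SIZE = 7163354336

# Cheap check first: byte count must match the pointer's size field.
assert os.path.getsize(FILENAME) == EXPECTED_SIZE, "size mismatch"

# Hash the file in chunks to avoid loading multiple GB into memory.
sha = hashlib.sha256()
with open(FILENAME, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("checksum OK")
```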