Add 4bit quants
These quantized models come in two variants: one made with groupsize 128 and another with act-order only. They are built with the GPTQ fork compatible with the 4bit-capable KoboldAI.
- 4bit-128g.safetensors +3 -0
- 4bit.safetensors +3 -0
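For context, "groupsize 128" means each contiguous block of 128 weights shares one quantization scale and zero-point. The sketch below is only a plain round-to-nearest illustration of that idea in NumPy; the actual GPTQ quantizer additionally reorders columns (act-order) and compensates rounding error using second-order statistics, which is omitted here. All function names are illustrative, not part of the GPTQ fork's API.

```python
import numpy as np

def quantize_groupwise_4bit(weights: np.ndarray, group_size: int = 128):
    """Round-to-nearest 4-bit quantization with a per-group scale/zero-point.

    Simplified illustration of group-wise quantization only; GPTQ itself
    adds Hessian-based error correction and optional activation ordering.
    """
    flat = weights.reshape(-1, group_size)        # one row per group of 128 weights
    w_min = flat.min(axis=1, keepdims=True)
    w_max = flat.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0                # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)      # guard against constant groups
    zero = np.round(-w_min / scale)
    q = np.clip(np.round(flat / scale + zero), 0, 15).astype(np.uint8)
    return q, scale, zero

def dequantize(q, scale, zero, shape):
    return ((q.astype(np.float32) - zero) * scale).reshape(shape)

# Quantize a random 4096x4096 layer and check the reconstruction error.
w = np.random.randn(4096, 4096).astype(np.float32)
q, s, z = quantize_groupwise_4bit(w, group_size=128)
w_hat = dequantize(q, s, z, w.shape)
print("mean abs error:", np.abs(w - w_hat).mean())
```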
4bit-128g.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4399ca5e6370b54764466964c216ac2905675e5b7581f06a501155b0fc67f7e9
+size 7455124842
4bit.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb5337473f956b0c49c58acd90a7565787e76b1d2b97bdc57cd4f4e71b7ca9f0
+size 7018652718
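The two added files are Git LFS pointers, not the weights themselves; LFS fetches the actual safetensors blobs identified by the sha256 oid and size recorded above. A minimal Python sketch of verifying a downloaded file against its pointer, assuming 4bit-128g.safetensors has already been fetched next to the script (helper names are illustrative, not repository tooling):

```python
import hashlib
from pathlib import Path

def parse_lfs_pointer(pointer_text: str) -> dict:
    """Parse a Git LFS pointer of the form shown in the diff above."""
    fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
    return {
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def verify_download(path: Path, expected_oid: str, expected_size: int) -> bool:
    """Check a downloaded weight file against the pointer's size and sha256."""
    if path.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4399ca5e6370b54764466964c216ac2905675e5b7581f06a501155b0fc67f7e9
size 7455124842
"""
info = parse_lfs_pointer(pointer)
print(verify_download(Path("4bit-128g.safetensors"), info["oid"], info["size"]))
```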