Update README.md
README.md CHANGED
@@ -19,7 +19,8 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q6_k
 quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q8_0
 and there is also a pure f16 in every directory.
-
+
+* [ZeroWw/L3-8B-Stheno-v3.3-32K-GGUF](https://huggingface.co/ZeroWw/L3-8B-Stheno-v3.3-32K-GGUF)
 * [ZeroWw/Llama-3-8B-Instruct-Gradient-1048k-GGUF](https://huggingface.co/ZeroWw/Llama-3-8B-Instruct-Gradient-1048k-GGUF)
 * [ZeroWw/Pythia-Chat-Base-7B-GGUF](https://huggingface.co/ZeroWw/Pythia-Chat-Base-7B-GGUF)
 * [ZeroWw/Yi-1.5-6B-Chat-GGUF](https://huggingface.co/ZeroWw/Yi-1.5-6B-Chat-GGUF)
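
For context, the quantize.exe commands in the README above operate on an f16 GGUF that has to be produced first from the original Hugging Face checkpoint. Below is a minimal sketch of that pipeline, assuming llama.cpp's convert_hf_to_gguf.py converter; the checkpoint directory and file names are illustrative, not taken from the commit.

# Convert the original checkpoint directory to a pure f16 GGUF (assumed converter script and flags).
python convert_hf_to_gguf.py ./Yi-1.5-6B-Chat --outtype f16 --outfile model.f16.gguf

# Quantize the bulk of the weights to q6_k (or q8_0 in the second command above) while keeping
# the output tensor and token embeddings at f16, re-quantizing if the source is already quantized.
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q6_k

Keeping the output and embedding tensors at f16 makes these files somewhat larger than standard q6_k/q8_0 quants, and the pure f16 file in each directory serves as the unquantized baseline.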