ZeroWw committed on
Commit 84158d4 · verified · 1 Parent(s): 0c189c3

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -21,6 +21,7 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
 and there is also a pure f16 and a pure q8 in every directory.
 
+* [ZeroWw/phillama-3.8b-v0.1-GGUF](https://huggingface.co/ZeroWw/phillama-3.8b-v0.1-GGUF)
 * [ZeroWw/codegeex4-all-9b-GGUF](https://huggingface.co/ZeroWw/codegeex4-all-9b-GGUF)
 * [ZeroWw/DeepSeek-V2-Lite-Chat-GGUF](https://huggingface.co/ZeroWw/DeepSeek-V2-Lite-Chat-GGUF)
 * [ZeroWw/NuminaMath-7B-TIR-GGUF](https://huggingface.co/ZeroWw/NuminaMath-7B-TIR-GGUF)
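
For reference, the quantization command quoted in the README context above appears to use llama.cpp's quantize tool. A minimal sketch of the same step on Linux, assuming the binary is built as `llama-quantize` (older llama.cpp builds name it `quantize`) and that `model.f16.gguf` is a full-precision GGUF export of the model:

  # requantize the f16 GGUF, forcing every tensor to the same type (q8_0)
  ./llama-quantize --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0

The mixed-precision variant shown in the hunk header (f16 output and token-embedding tensors) is truncated in the diff context, so its full output name and target type are not reproduced here.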