Update README.md
README.md
CHANGED
@@ -48,7 +48,10 @@ license: llama3.3
 This is a GPTQ-quantized 4-bit version of [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned).
 
 This is just a quantization test for GPTQ, using a single calibration sample: "gptqmodel is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."
-
+
+Although this was only a simple fine-tuning and quantization, it resolved [this discussion](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated/discussions/4).
+
+If you need your own data fine-tuned and quantized, please contact us: [email protected]
 
 ### Use with transformers
 
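For context, a minimal sketch of what the one-sample GPTQ run described above might look like with the gptqmodel library; the group size, output directory, and exact call sequence are assumptions, not the commands actually used for this checkpoint.

```python
from gptqmodel import GPTQModel, QuantizeConfig

# The single calibration sample mentioned in the model card.
calibration = [
    "gptqmodel is an easy-to-use model quantization library with "
    "user-friendly apis, based on GPTQ algorithm."
]

# 4-bit GPTQ; group_size=128 is a common default, not confirmed for this repo.
quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load(
    "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned", quant_config
)
model.quantize(calibration)             # run GPTQ calibration on the single sample
model.save("Llama-3.3-70B-GPTQ-Int4")   # hypothetical output directory
```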
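As a rough illustration of the "Use with transformers" section, a loading sketch under two assumptions: the repo id below is a guess at this checkpoint's name, and a GPTQ-capable backend (gptqmodel or optimum) is installed alongside transformers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual id of this quantized checkpoint.
model_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned-GPTQ-Int4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config ships with the repo, so no extra arguments are needed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```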