General discussion / GGUF-Imatrix quants.

by Lewdiculous

I should be uploading my set of GGUF-IQ-Imatrix quants of TheSpice-7b-v0.1.1 by early morning at:
https://huggingface.co./Lewdiculous/TheSpice-7b-v0.1.1-GGUF-IQ-Imatrix

The default list was requested in #16; if anyone feels that a quant outside of these should be added, feel free to chime in.

    quantization_options = [
        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
        "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
    ]
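
For context, here is a minimal sketch of how a list like this could be fed to llama.cpp's quantize tool together with an importance matrix. The binary path, the FP16 source GGUF name, and the imatrix file name are assumptions for illustration, not the actual script used for these uploads:

    import subprocess

    # Assumed local paths - adjust to your llama.cpp build and converted model.
    QUANTIZE_BIN = "./llama.cpp/quantize"
    SOURCE_GGUF = "TheSpice-7b-v0.1.1-F16.gguf"  # hypothetical FP16 conversion
    IMATRIX_FILE = "imatrix.dat"                 # importance matrix generated beforehand

    quantization_options = [
        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
        "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
    ]

    for quant in quantization_options:
        output_gguf = f"TheSpice-7b-v0.1.1-{quant}-imat.gguf"
        # quantize usage: quantize [--imatrix file] input.gguf output.gguf TYPE
        subprocess.run(
            [QUANTIZE_BIN, "--imatrix", IMATRIX_FILE, SOURCE_GGUF, output_gguf, quant],
            check=True,
        )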

Keep up the good work, @cgato!

I will say that, just speaking personally as a user, I would much prefer it if you released the model weights as .safetensors instead.

Pickle files (and the same applies to binaries derived from them) are a potential security risk, since they can allow remote code execution, and it's something we can simply avoid; .safetensors is already best practice in that sense.
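
If it helps anyone converting locally, a minimal sketch of saving and loading a state dict with the safetensors library instead of pickle-based torch.save; the model and file names here are just placeholders:

    import torch
    from safetensors.torch import save_file, load_file

    # Placeholder model; in practice this would be the full model's state dict.
    model = torch.nn.Linear(4, 4)

    # Saving: tensors are written as .safetensors, with no pickle involved.
    save_file(model.state_dict(), "model.safetensors")

    # Loading: only raw tensors are deserialized, so no arbitrary code can run.
    state_dict = load_file("model.safetensors")
    model.load_state_dict(state_dict)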
