pygmalion-13b-4bit-128g

Model description

Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.

Quantized from pygmalion-13b after decoding the XOR-format weights: https://huggingface.co./PygmalionAI/pygmalion-13b

Saved in safetensors format.

Quantization Information

Quantized with the CUDA branch of GPTQ-for-LLaMa: https://github.com/0cc4m/GPTQ-for-LLaMa

python llama.py --wbits 4 models/pygmalion-13b c4 --true-sequential --groupsize 128 --save_safetensors models/pygmalion-13b/4bit-128g.safetensors
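The --wbits 4 --groupsize 128 flags mean group-wise 4-bit quantization: each run of 128 consecutive weights shares one scale and zero-point. A minimal round-to-nearest sketch of that idea (the real GPTQ pass additionally minimizes layer output error using second-order information; all names here are illustrative, not part of the GPTQ-for-LLaMa API):

```python
# Illustrative round-to-nearest group quantization, NOT the actual GPTQ
# algorithm (GPTQ also applies Hessian-based error correction per layer).
def quantize_group(values, bits=4):
    qmax = (1 << bits) - 1                  # 15 levels above zero for 4-bit
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax or 1.0         # avoid zero scale for flat groups
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize_group(q, scale, lo):
    return [x * scale + lo for x in q]

def quantize_roundtrip(weights, groupsize=128, bits=4):
    """Quantize then dequantize, one scale/zero-point per group of weights."""
    out = []
    for i in range(0, len(weights), groupsize):
        group = weights[i:i + groupsize]
        q, scale, lo = quantize_group(group, bits)
        out.extend(dequantize_group(q, scale, lo))
    return out
```

Smaller group sizes track the weight distribution more closely (lower quantization error) at the cost of storing more scales; 128 is a common middle ground.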