# Quantized version of meta-llama/LlamaGuard-7b
## Model Description
The model meta-llama/LlamaGuard-7b was quantized to 4-bit with group size 128 and act-order=True using the AutoGPTQ integration in transformers (https://huggingface.co./blog/gptq-integration).
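For reference, quantization like this can be done through the GPTQ integration in transformers. The sketch below is a minimal example under assumed settings: the calibration dataset (`"c4"`) and the output directory are illustrative choices, not the exact configuration used for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit GPTQ with group_size=128 and act-order enabled (desc_act=True);
# the "c4" calibration dataset is an assumption, not a documented setting
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    dataset="c4",
    tokenizer=tokenizer,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq_config, device_map="auto"
)
model.save_pretrained("LlamaGuard-7b-GPTQ-4bit-128g-actorder_True")
tokenizer.save_pretrained("LlamaGuard-7b-GPTQ-4bit-128g-actorder_True")
```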
## Evaluation
To evaluate the quantized model and compare it with the full-precision model, I performed binary classification on the "toxicity" label using the ~5k-sample test split of lmsys/toxic-chat (a sketch of the evaluation loop follows the results below).
- **Full precision model:** Average Precision Score: 0.3625
- **4-bit quantized model:** Average Precision Score: 0.3450
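The exact evaluation code is not included in this card; the following is a minimal sketch of the setup described above. It assumes the `toxicchat0124` config of lmsys/toxic-chat with its `user_input` and `toxicity` columns, and scores toxicity as the probability that the model's first generated token is "unsafe" rather than "safe" (a simplification of LlamaGuard's full output parsing).

```python
import torch
from datasets import load_dataset
from sklearn.metrics import average_precision_score
from transformers import AutoModelForCausalLM, AutoTokenizer

# swap in "meta-llama/LlamaGuard-7b" to score the full-precision baseline
model_id = "SebastianSchramm/LlamaGuard-7b-GPTQ-4bit-128g-actorder_True"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# the "toxicchat0124" config name is an assumption
dataset = load_dataset("lmsys/toxic-chat", "toxicchat0124", split="test")

# first sub-token of "safe"/"unsafe"; simplistic but sufficient for scoring
safe_id = tokenizer.encode("safe", add_special_tokens=False)[0]
unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]

scores, labels = [], []
for example in dataset:
    chat = [{"role": "user", "content": example["user_input"]}]
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    with torch.no_grad():
        next_token_logits = model(input_ids).logits[0, -1]
    # toxicity score: P("unsafe") renormalized over the {safe, unsafe} pair
    pair_probs = torch.softmax(next_token_logits[[safe_id, unsafe_id]], dim=-1)
    scores.append(pair_probs[1].item())
    labels.append(example["toxicity"])

print(f"Average Precision Score: {average_precision_score(labels, scores):.4f}")
```

Running the same loop with the base model id gives the full-precision number for a like-for-like comparison.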