
huihui-ai/QwQ-32B-Preview-abliterated

This is an uncensored version of Qwen/QwQ-32B-Preview created with abliteration (see remove-refusals-with-transformers for details).
It is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.
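As a rough illustration of the idea behind abliteration (not the actual implementation used for this model), a "refusal direction" can be estimated from the difference of mean activations on harmful vs. harmless prompts, then projected out of the hidden states. The function names, shapes, and data below are all hypothetical:

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Unit vector from the mean harmless activation toward the mean harmful one.

    Both inputs are (n_samples, hidden_dim) arrays of hidden states
    collected at some layer; this is an illustrative sketch only.
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(hidden: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden state along `direction`."""
    return hidden - np.outer(hidden @ direction, direction)

# Toy demo: after ablation, activations have no component along the direction.
rng = np.random.default_rng(0)
harmful = rng.normal(size=(8, 4)) + np.array([2.0, 0.0, 0.0, 0.0])
harmless = rng.normal(size=(8, 4))
v = refusal_direction(harmful, harmless)
cleaned = ablate(harmful, v)
assert np.allclose(cleaned @ v, 0.0)
```

In practice this projection is baked into the model weights so that no runtime hook is needed, which is what makes the released checkpoint usable with standard inference stacks.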

ollama

You can use huihui_ai/qwq-abliterated directly:

ollama run huihui_ai/qwq-abliterated

Model tree for async0x42/QwQ-32B-Preview-abliterated-exl2_4.65bpw

Base model: Qwen/Qwen2.5-32B
Quantized (114): this model