# huihui-ai/r1-1776-distill-llama-70b-abliterated

This is an uncensored version of perplexity-ai/r1-1776-distill-llama-70b created with abliteration (see remove-refusals-with-transformers to learn more about it).
It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.
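
For intuition, the sketch below illustrates the core idea of abliteration in toy form: estimate a "refusal direction" as the difference of mean hidden-state activations between refusal-triggering and benign prompts, then project that direction out of the hidden states. The data, shapes, and layer choice here are made up for illustration; this is not the actual code used to produce this model.

```python
# Toy numpy sketch of abliteration: find a refusal direction and ablate it.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Hypothetical hidden states captured at one layer for two prompt sets:
harmful_acts = rng.normal(size=(32, d_model))    # prompts that trigger refusals
harmless_acts = rng.normal(size=(32, d_model))   # prompts that do not

# The refusal direction is the normalized difference of the mean activations.
refusal_dir = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate(h: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden state along `direction`."""
    return h - np.outer(h @ direction, direction)

h = rng.normal(size=(8, d_model))                # a batch of hidden states
h_ablated = ablate(h, refusal_dir)
assert np.allclose(h_ablated @ refusal_dir, 0)   # component fully removed
```

In the real procedure, this projection is applied to (or baked into) the model's weights so the network can no longer represent the refusal direction, rather than filtering activations at inference time.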

## Use with ollama

You can use huihui_ai/perplexity-ai-r1-abliterated directly:

```
ollama run huihui_ai/perplexity-ai-r1-abliterated
```
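
Once the model is pulled, Ollama also serves it over a local REST API, so it can be queried programmatically. Below is a minimal Python sketch using Ollama's standard `/api/generate` endpoint on the default port; the prompt is just a placeholder.

```python
# Query the locally running Ollama server over its REST API (stdlib only).
import json
import urllib.request

payload = {
    "model": "huihui_ai/perplexity-ai-r1-abliterated",
    "prompt": "Explain abliteration in one paragraph.",
    "stream": False,  # return a single JSON object instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```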

## Donation

If you like it, please click 'like' and follow us for more updates.
You can follow x.com/support_huihui to get the latest model information from huihui.ai.

Your donation helps us continue our development and improvement; even the price of a cup of coffee helps.

- bitcoin: `bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge`
