huihui-ai/FluentlyLM-Prinum-abliterated
This is an uncensored version of fluently-lm/FluentlyLM-Prinum created with abliteration (see remove-refusals-with-transformers to learn more about it).
This is a crude, proof-of-concept implementation of refusal removal from an LLM that does not rely on TransformerLens.
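For intuition, here is a minimal sketch of the general abliteration idea: estimate a "refusal direction" from the difference in mean activations between refused and answered prompts, then project that direction out of weights that write into the residual stream. This is not the exact script used to produce this model; all names, shapes, and tensors below are illustrative.

```python
# Illustrative sketch only: the tensors, shapes, and names are hypothetical,
# and the real procedure operates over the layers of the actual model.
import torch

hidden_size = 8   # residual-stream width (tiny here, just for the demo)
in_features = 16  # input width of the hypothetical projection being ablated

# Hypothetical mean hidden-state activations gathered from prompts the model
# refuses versus prompts it answers normally.
mean_refused = torch.randn(hidden_size)
mean_answered = torch.randn(hidden_size)

# The "refusal direction" is the normalized difference of the two means.
refusal_dir = mean_refused - mean_answered
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `weight`'s outputs that lies along `direction`.

    `weight` has shape (hidden_size, in_features) and writes into the residual
    stream; `direction` is a unit vector of shape (hidden_size,).
    """
    return weight - torch.outer(direction, direction @ weight)

# Hypothetical output-projection matrix of one layer.
W = torch.randn(hidden_size, in_features)
W_abliterated = ablate_direction(W, refusal_dir)

# After ablation, nothing this matrix writes has a component along the refusal direction.
print(torch.allclose(refusal_dir @ W_abliterated, torch.zeros(in_features), atol=1e-6))  # True
```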
Use with ollama
You can use huihui_ai/fluentlylm-prinum-abliterated directly:
ollama run huihui_ai/fluentlylm-prinum-abliterated
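Once the model has been pulled, you can also call it programmatically through the local Ollama HTTP API. The sketch below assumes the Ollama server is running on its default address (http://localhost:11434); the prompt text is just an example.

```python
import json
import urllib.request

payload = {
    "model": "huihui_ai/fluentlylm-prinum-abliterated",
    "prompt": "Explain what abliteration does in one sentence.",
    "stream": False,  # ask for a single JSON response instead of a stream
}

# POST the request to the local Ollama generate endpoint.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # The non-streaming response carries the generated text in the "response" field.
    print(json.loads(resp.read())["response"])
```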
Donation
If you like it, please click 'like' and follow us for more updates.
You can follow x.com/support_huihui to get the latest model information from huihui.ai.
Your donation helps us continue development and improvement; even the price of a cup of coffee makes a difference.
- bitcoin:
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
Evaluation results
Results reported on the Open LLM Leaderboard:

| Benchmark | Metric | Value |
|---|---|---|
| IFEval (0-shot) | strict accuracy | 80.900 |
| BBH (3-shot) | normalized accuracy | 59.480 |
| MATH Lvl 5 (4-shot) | exact match | 54.000 |
| GPQA (0-shot) | acc_norm | 18.230 |
| MuSR (0-shot) | acc_norm | 17.260 |
| MMLU-PRO (5-shot, test set) | accuracy | 53.420 |