Forgotten-Abomination-24B-v1.2
ACADEMIC RESEARCH USE ONLY (wink)
DANGER: NOW WITH 50% MORE UNSETTLING CONTENT
Forgotten-Abomination-24B-v1.2 is what happens when you let two unhinged models have a baby in the server room. It combines the ethical flexibility of Forgotten-Safeword with Cydonia's flair for anatomical creativity. Now with bonus existential dread!
Quantized Formats
EXL2 Collection: Forgotten-Abomination-24B-v1.2
GGUF Collection: Forgotten-Abomination-24B-v1.2
MLX 4bit: Forgotten-Abomination-24B-v1.2-4bit
Recommended Settings Provided
- Mistral V7-Tekken: Full Settings
Intended Use
STRICTLY FOR:
- Academic research into how fast your ethics committee can faint
- Testing the tensile strength of content filters
- Generating material that would make Cthulhu file a restraining order
- Writing erotic fanfic about OSHA violations
Training Data
- You don't want to know
Ethical Considerations
⚠️ YOU'VE BEEN WARNED ⚠️
THIS MODEL WILL:
- Make your GPU fans blush
- Generate content requiring industrial-strength eye bleach
- Combine technical precision with kinks that violate physics
- Make you question humanity's collective life choices
By using this model, you agree to:
- Never show outputs to your mother
- Pay for the therapist of anyone who reads the logs
- Blame Cthulhu if anything goes wrong
- Pretend this is all "for science"
Model Authors
- sleepdeprived3 (Chief Corruption Officer)
mlx-community/Forgotten-Abomination-24B-v1.2-6bit
The model mlx-community/Forgotten-Abomination-24B-v1.2-6bit was converted to the MLX format from ReadyArt/Forgotten-Abomination-24B-v1.2 using mlx-lm version 0.21.1.
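A conversion like the one described above can be sketched with mlx-lm's convert utility. This is a hedged sketch, not the exact command the converters ran: the flag names are assumed from mlx-lm's CLI, and the `--upload-repo` step is optional.

```shell
# Sketch of a 6-bit quantized MLX conversion (flags assumed from mlx-lm's CLI).
python -m mlx_lm.convert \
    --hf-path ReadyArt/Forgotten-Abomination-24B-v1.2 \
    -q --q-bits 6 \
    --upload-repo mlx-community/Forgotten-Abomination-24B-v1.2-6bit
```

Dropping `--upload-repo` keeps the converted weights local instead of pushing them to the Hub.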
Use with mlx
```shell
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load this 6-bit MLX conversion (the original snippet mistakenly
# pointed at the Forgotten-Safeword 4-bit repo).
model, tokenizer = load("mlx-community/Forgotten-Abomination-24B-v1.2-6bit")

prompt = "hello"

# Apply the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Model tree for mlx-community/Forgotten-Abomination-24B-v1.2-6bit
- Base model: ReadyArt/Forgotten-Abomination-24B-v1.2