Collection: GGUF LoRA adapters — adapters extracted from fine-tuned models using mergekit-extract-lora (16 items).
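A sketch of how such adapters can be produced, assuming mergekit's `mergekit-extract-lora` tool and llama.cpp's `convert_lora_to_gguf.py` converter; the model IDs, paths, and rank are illustrative, and flag names may differ between tool versions, so check the installed docs:

```shell
# Extract a low-rank (LoRA) approximation of the weight difference
# between a fine-tuned model and its base model.
# (Model IDs below are hypothetical placeholders.)
mergekit-extract-lora \
    some-org/model-finetuned \
    some-org/model-base \
    ./extracted-lora \
    --rank=32

# Convert the extracted PEFT-format adapter to GGUF so that
# llama.cpp can apply it at load time.
python convert_lora_to_gguf.py ./extracted-lora \
    --base some-org/model-base \
    --outfile model-lora.gguf
```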
Post: Fun fact: you can get an **abliterated** version of any DeepSeek-R1-Qwen model by applying one of these LoRA adapters (GGUF available!): ngxson/extracted-lora-mergekit-677d5c3eea0b6a7661201846
Post: Check out my collection of pre-made GGUF LoRA adapters! These let you run both the normal and the abliterated versions of popular models (Llama, Qwen, etc.) without doubling your VRAM usage: ngxson/gguf_lora_collection
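The VRAM saving comes from loading the shared base model once and switching behavior via a small adapter, rather than keeping two full copies of the weights. A minimal sketch using llama.cpp's `llama-cli` and its `--lora` flag; file names are illustrative assumptions:

```shell
# Normal behavior: run the base model alone.
# (GGUF file names here are hypothetical.)
llama-cli -m qwen-base.gguf -p "Hello"

# Abliterated behavior: same base weights, plus a small GGUF LoRA
# adapter applied at load time -- no second full model in VRAM.
llama-cli -m qwen-base.gguf --lora qwen-abliterated-lora.gguf -p "Hello"
```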
Post: I made a small tool that can be useful for debugging Ollama chat templates: ngxson/ollama_template_test — CC @bartowski, you may need this ;-)
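For context, Ollama chat templates are Go templates declared in a Modelfile, which is what a tool like this would exercise. A minimal illustrative fragment in ChatML style (the special tokens depend on the model; this is an example, not taken from the tool):

```
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```

A subtle bug in such a template (a missing end token, wrong field name, stray whitespace) silently degrades model output, which is why a dedicated template tester is handy.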