---
license: apache-2.0
tags:
- conversational
- mistral
- unsloth
base_model: unsloth/mistral-small-instruct-2409
---

## SorcererLM-22B

Because good things always come in threes!

**SorcererLM-22B** is here, rounding out the trinity of Mistral-Small-Instruct tunes from the [Quant Cartel](https://huggingface.co./Quant-Cartel).

## Prompt Format

* Basic: Mistral V2 & V3 Context / Instruct Templates (now available on the SillyTavern (ST) Staging branch)
* Advanced: TBA

## Quantized Versions

* Coming soon

## Training

For starters, this is a LoRA tune on top of Mistral-Small-Instruct-2409 and **not** a pruned version of [SorcererLM-8x22b](https://huggingface.co./rAIfle/SorcererLM-8x22b-bf16).

Trained with a whole lot of love on 1 epoch of cleaned and deduped c2 logs. This model is 100% 'born-local': the result of roughly 27 hours and a little bit of patience on a single RTX 4080 SUPER.

Because the hyperparameters and dataset intentionally mirror those used in the original Sorcerer 8x22b tune, this release can be considered its 'lite' counterpart, aiming to provide the same bespoke conversational experience at a smaller size and with reduced hardware requirements.

While all three share the same Mistral-Small-Instruct base, in contrast to its sisters [Mistral-Small-NovusKyver](https://huggingface.co./Envoid/Mistral-Small-NovusKyver) and [Acolyte-22B](https://huggingface.co./rAIfle/Acolyte-22B), this release did not SLERP the resulting model with the original in a 50/50 ratio post-training. Instead, alpha was dropped when the LoRA was merged with the full-precision weights in the final step.

## Acknowledgments

* First and foremost, a huge thank you to my brilliant teammates [envoid](https://huggingface.co./envoid/) and [rAIfle](https://huggingface.co./rAIfle/). Special shout-out to rAIfle for critical last-minute advice that got this one over the finish line.
* Props to unsloth as well for helping make this local tune possible.
* And of course, none of this would matter without users like you. Thank you :)

## Safety

...
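
## Usage

A minimal inference sketch with the standard Transformers stack, included for convenience while quantized versions are pending. The repository id below is an assumption (substitute the actual repo name), and the sampler settings are illustrative only, not a recommended configuration; the tokenizer's bundled Mistral chat template is used rather than hand-written `[INST]` tags.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id -- replace with the actual model repo.
model_id = "Quant-Cartel/SorcererLM-22B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build the prompt with the tokenizer's Mistral instruct template.
messages = [{"role": "user", "content": "Tell me a short story about a sorcerer."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Illustrative sampler settings; tune to taste.
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```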