This repo contains an EXL2 quantization of the original model: allura-org/MS-Meadowlark-22B
MS-Meadowlark-22B
A roleplay and storywriting model based on Mistral Small 22B.
GGUF models: https://huggingface.co./mradermacher/MS-Meadowlark-22B-GGUF/
Datasets used in this model:
- Dampfinchen/Creative_Writing_Multiturn at 16k context
- Fizzarolli/rosier-dataset + Alfitaria/body-inflation-org at 16k context
- ToastyPigeon/SpringDragon at 8k context
Each dataset was trained separately onto Mistral Small Instruct, and then the component models were merged along with nbeerbower/Mistral-Small-Gutenberg-Doppel-22B to create Meadowlark.
I tried different blends of the component models, and this one seems to be the most stable while retaining the creativity and unpredictability added by the training data.
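The card doesn't name the merge tool, so purely as an illustrative sketch of the idea (not the actual recipe), a naive equal-weight linear merge of component checkpoints could look like the following; the component paths are hypothetical placeholders, and the real blend was likely weighted differently:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical component checkpoints; placeholder paths, not the real artifacts.
component_ids = [
    "path/to/creative-writing-component",
    "path/to/rosier-bodyinf-component",
    "path/to/springdragon-component",
    "nbeerbower/Mistral-Small-Gutenberg-Doppel-22B",
]

# Load every component (needs a very large amount of RAM for 22B models).
models = [AutoModelForCausalLM.from_pretrained(m, torch_dtype=torch.bfloat16)
          for m in component_ids]

# Average each parameter tensor across the components (equal weights).
merged = models[0]
others = [dict(m.named_parameters()) for m in models[1:]]
with torch.no_grad():
    for name, param in merged.named_parameters():
        stacked = torch.stack([param] + [o[name] for o in others])
        param.copy_(stacked.mean(dim=0))

merged.save_pretrained("MS-Meadowlark-22B-merged")
```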
Instruct Format
Rosier/bodyinf and SpringDragon were trained in completion format. This model should work with Kobold Lite in Adventure Mode and Story Mode.
Creative_Writing_Multiturn and Gutenberg-Doppel were trained using the official instruct format of Mistral Small Instruct:
<s>[INST] {User message}[/INST] {Assistant response}</s>
This is the Mistral Small V2&V3 preset in SillyTavern and Kobold Lite.
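As a minimal sketch of assembling a prompt in that format (the helper name and turn handling below are my own illustration, not part of the model card):

```python
def build_mistral_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Assemble a Mistral Small Instruct prompt.

    history: prior (user, assistant) pairs; user_message: the new turn.
    """
    prompt = "<s>"
    for user_turn, assistant_turn in history:
        prompt += f"[INST] {user_turn}[/INST] {assistant_turn}</s>"
    prompt += f"[INST] {user_message}[/INST]"
    return prompt

# Example:
print(build_mistral_prompt([("Hi!", "Hello! How can I help?")], "Write a short scene."))
```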
For SillyTavern in particular, I've had better luck getting good output from Mistral Small using a custom instruct template that formats the assembled context as a single user turn. This prevents SillyTavern from confusing the model by assembling user/assistant turns in a nonstandard way. Note: this preset is not compatible with Stepped Thinking; use the Mistral V2&V3 preset for that.
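To illustrate the single-user-turn idea (a sketch of the concept only, not the actual SillyTavern template), the entire assembled context gets wrapped in one [INST] block:

```python
def single_turn_prompt(system: str, transcript: list[tuple[str, str]], new_message: str) -> str:
    # Flatten the whole chat into one user turn instead of alternating roles.
    lines = [system, ""]
    for speaker, text in transcript:
        lines.append(f"{speaker}: {text}")
    lines.append(f"User: {new_message}")
    return "<s>[INST] " + "\n".join(lines) + "[/INST]"
```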
Base model: unsloth/Mistral-Small-Instruct-2409
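To load the EXL2 quant itself, a minimal sketch with the exllamav2 Python API might look like this (assumes the weights from this repo are downloaded to a local directory; the sampler settings and prompt are arbitrary examples):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at a local download of this repo's EXL2 weights.
config = ExLlamaV2Config()
config.model_dir = "./MS-Meadowlark-22B-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.85  # arbitrary example value

prompt = "<s>[INST] Write the opening paragraph of a short story.[/INST]"
print(generator.generate_simple(prompt, settings, 200))
```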