---
base_model:
- inflatebot/thorn-0.5
- inflatebot/thorn-0.35
- inflatebot/thorn-0.55
- inflatebot/thorn-0.45
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

![Made with NovelAI](https://huggingface.co./inflatebot/guns-and-roses-r1/resolve/main/guns%20and%20roses.png)

`Quickest draw in the West.`

### NOTE:

If you are getting phrase repetition or nonsense outputs with SillyTavern, make sure that "Include names" is disabled under Advanced Formatting. Nemo models tend to exhibit these issues when it is enabled.

A re-application of the Helium-3 process to Mistral Nemo analogues. Experimental (as you can tell from the revision number); I'll be playing with this more over time.

Based ultimately on [Magnum-12B-V2](https://huggingface.co./anthracite-org/magnum-12b-v2) and [MN-12B-Rosier-v1](https://huggingface.co./Fizzarolli/MN-12b-Rosier-v1).

Quants available from [Reiterate3680](https://huggingface.co./Reiterate3680/guns-and-roses-r1-GGUF/tree/main).

Special thanks to Fizz and Toasty Pigeon.

Use **ChatML** formatting, as Rosier was trained from base (so it has no instruct format) and Magnum V2 was trained on ChatML!

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [inflatebot/thorn-0.35](https://huggingface.co./inflatebot/thorn-0.35) as the base.
### Models Merged

The following models were included in the merge:

* [inflatebot/thorn-0.5](https://huggingface.co./inflatebot/thorn-0.5)
* [inflatebot/thorn-0.55](https://huggingface.co./inflatebot/thorn-0.55)
* [inflatebot/thorn-0.45](https://huggingface.co./inflatebot/thorn-0.45)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: inflatebot/thorn-0.5
  - model: inflatebot/thorn-0.45
  - model: inflatebot/thorn-0.55
merge_method: model_stock
base_model: inflatebot/thorn-0.35
dtype: bfloat16
```
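Since the card recommends ChatML formatting, here is a minimal sketch of what that prompt layout looks like. The helper function name is hypothetical; in practice you would load the model's tokenizer with `transformers` and call `tokenizer.apply_chat_template` rather than building the string by hand.

```python
# Minimal sketch of the ChatML prompt format recommended above.
# format_chatml is a hypothetical helper for illustration only; a real
# pipeline should use the tokenizer's apply_chat_template instead.

def format_chatml(messages):
    """Wrap each message in ChatML <|im_start|>/<|im_end|> markers and
    open an assistant turn for the model to complete."""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Howdy, partner."},
])
print(prompt)
```

Each turn is delimited by `<|im_start|>{role}` and `<|im_end|>`, and the prompt ends with an open `<|im_start|>assistant` turn so generation continues as the assistant.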