---
base_model:
- OmnicromsBrain/TestmodelC
- OmnicromsBrain/Eros_Scribe-7b
- MrRobotoAI/Hathor-v4.4
- Aratako/SniffyOtter-7B-Novel-Writing-NSFW
- NousResearch/Yarn-Mistral-7b-128k
- MrRobotoAI/Hathor-v3.2
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method (`dare_linear`), with [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) as the base. DARE sparsifies each model's difference from the base before taking the weighted linear combination; a toy sketch of that arithmetic appears after the configuration below.

### Models Merged

The following models were included in the merge:

* [OmnicromsBrain/TestmodelC](https://huggingface.co/OmnicromsBrain/TestmodelC)
* [OmnicromsBrain/Eros_Scribe-7b](https://huggingface.co/OmnicromsBrain/Eros_Scribe-7b)
* [MrRobotoAI/Hathor-v4.4](https://huggingface.co/MrRobotoAI/Hathor-v4.4)
* [Aratako/SniffyOtter-7B-Novel-Writing-NSFW](https://huggingface.co/Aratako/SniffyOtter-7B-Novel-Writing-NSFW)
* [MrRobotoAI/Hathor-v3.2](https://huggingface.co/MrRobotoAI/Hathor-v3.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Yarn-Mistral-7b-128k
    parameters:
      weight: 0.1
      density: 0.9
  - model: MrRobotoAI/Hathor-v4.4
    parameters:
      weight: 0.15
      density: 0.9
  - model: MrRobotoAI/Hathor-v3.2
    parameters:
      weight: 0.15
      density: 0.9
  - model: OmnicromsBrain/TestmodelC
    parameters:
      weight: 0.2
      density: 0.9
  - model: OmnicromsBrain/Eros_Scribe-7b
    parameters:
      weight: 0.3
      density: 0.9
  - model: Aratako/SniffyOtter-7B-Novel-Writing-NSFW
    parameters:
      weight: 0.1
      density: 0.9
merge_method: dare_linear
base_model: NousResearch/Yarn-Mistral-7b-128k
parameters:
  normalize: true
dtype: float16
```
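For readers unfamiliar with DARE, the sketch below illustrates the drop-and-rescale arithmetic applied to each model's delta from the base before the weighted linear combination. This is a simplified toy based on the paper, assuming uniform Bernoulli dropping per tensor entry; it is not mergekit's actual implementation, and the helper name `dare_linear_merge` is purely illustrative.

```python
# Toy illustration of dare_linear on parameter tensors
# (an assumption based on the DARE paper, not mergekit's actual code).
import torch

def dare_linear_merge(base, finetuned, weights, density=0.9, normalize=True):
    """Return base + sum_i w_i * DARE(finetuned_i - base)."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = base.clone()
    for ft, w in zip(finetuned, weights):
        delta = ft - base
        # Keep each delta entry with probability `density`, zero the rest...
        keep = torch.rand_like(delta) < density
        # ...and rescale survivors by 1/density so the expected delta is unchanged.
        delta = torch.where(keep, delta / density, torch.zeros_like(delta))
        merged += w * delta
    return merged

# Tiny demo with random 4x4 tensors standing in for model parameters.
base = torch.randn(4, 4)
finetuned = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
print(dare_linear_merge(base, finetuned, weights=[0.2, 0.3, 0.5]))
```

With `density: 0.9`, roughly 10% of each delta is dropped, and `normalize: true` rescales the weights to sum to 1 (the six weights in the configuration above already do).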
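The merge can presumably be reproduced by saving the YAML above as `config.yml` and running mergekit's CLI, e.g. `mergekit-yaml config.yml ./output-model-directory`. The result is a standard Mistral-architecture causal LM, so a minimal inference sketch with `transformers` might look like the following; the local path is an assumption carried over from the command above (substitute this repository's Hub id when loading remotely).

```python
# Minimal inference sketch (assumed usage, not an official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./output-model-directory"  # hypothetical path from the mergekit command above

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,   # matches `dtype: float16` in the merge config
    device_map="auto",           # requires the `accelerate` package
)
# Note: the Yarn-Mistral base historically needed trust_remote_code=True for
# its 128k RoPE scaling; pass it to both calls above if loading fails.

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```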