---
base_model:
- Hasnonname/Qwen2.5-14B-Wheatear-v0
- Hasnonname/Qwen2.5-14B-Kestrel-v0
library_name: transformers
tags:
- mergekit
- merge
---
# Qwen2.5-14B-Kebab-v0

merge methodology taken from [Aletheia v1](https://huggingface.co./allura-org/TQ2.5-14B-Aletheia-v1)

hyperparams slightly modified from [Sugarquill v1](https://huggingface.co./allura-org/TQ2.5-14B-Sugarquill-v1)

dataset consists of creative writing, multiturn RP, and some general assistant tasks

thanks to the folks in the arli discord (owen and fizz in particular) for helping me with my axolotl config

quants available here: [Qwen2.5-14B-Kebab-v0-GGUF](https://huggingface.co./mradermacher/Qwen2.5-14B-Kebab-v0-GGUF)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [Hasnonname/Qwen2.5-14B-Wheatear-v0](https://huggingface.co./Hasnonname/Qwen2.5-14B-Wheatear-v0)
* [Hasnonname/Qwen2.5-14B-Kestrel-v0](https://huggingface.co./Hasnonname/Qwen2.5-14B-Kestrel-v0)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Hasnonname/Qwen2.5-14B-Wheatear-v0
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.7
slices:
- sources:
  - layer_range: [0, 48]
    model: Hasnonname/Qwen2.5-14B-Kestrel-v0
  - layer_range: [0, 48]
    model: Hasnonname/Qwen2.5-14B-Wheatear-v0
```
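For context, `merge_method: slerp` spherically interpolates each pair of corresponding weight tensors: it walks along the arc between the two weight vectors rather than the straight chord a linear average would take. The sketch below is a simplified illustration of the idea, not mergekit's exact implementation (which handles dtypes and other edge cases). Assuming mergekit's convention that `t = 0` returns the base model, the `t: 0.7` above should place the result closer to Kestrel than to the Wheatear base.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    A simplified per-tensor illustration of a SLERP merge, not
    mergekit's actual implementation.
    """
    # Measure the angle between the tensors via their unit vectors.
    u0 = v0.ravel() / (np.linalg.norm(v0) + eps)
    u1 = v1.ravel() / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(u0, u1), -1.0, 1.0))

    # Nearly colinear tensors: fall back to plain linear interpolation.
    if abs(dot) > 0.9995:
        return (1.0 - t) * v0 + t * v1

    omega = np.arccos(dot)  # angle between the two weight vectors
    sin_omega = np.sin(omega)
    coef0 = np.sin((1.0 - t) * omega) / sin_omega  # weight on v0
    coef1 = np.sin(t * omega) / sin_omega          # weight on v1; t=0.7 favors v1
    return coef0 * v0 + coef1 * v1

# Toy example: blend two small "tensors" with the card's t value.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
print(slerp(0.7, a, b))  # closer to b than to a, norm stays 1.0
```

In the toy example, a linear average with `t = 0.7` would shrink the norm to about 0.76; SLERP keeps the interpolant on the arc, which is one motivation for preferring it when blending weight directions.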
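Since the card declares `library_name: transformers`, here is a minimal loading sketch. It assumes the merged model inherits the standard Qwen2.5 chat template from its parents; the prompt is illustrative, not a recommendation from the model author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hasnonname/Qwen2.5-14B-Kebab-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write the opening line of a short story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```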