---
base_model:
- chargoddard/qwamma-14b-merge-v1
- arcee-train/Qwen2.5-14B-Instruct_arcee-qwen2-14B-v0.2
- Qwen/Qwen2.5-14B
library_name: transformers
tags:
- mergekit
- merge
---

# qwamma-14b-merge-v9

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-14B](https://huggingface.co./Qwen/Qwen2.5-14B) as the base.

### Models Merged

The following models were included in the merge:

* [chargoddard/qwamma-14b-merge-v1](https://huggingface.co./chargoddard/qwamma-14b-merge-v1)
* [arcee-train/Qwen2.5-14B-Instruct_arcee-qwen2-14B-v0.2](https://huggingface.co./arcee-train/Qwen2.5-14B-Instruct_arcee-qwen2-14B-v0.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: ties
base_model: Qwen/Qwen2.5-14B
models:
  - model: chargoddard/qwamma-14b-merge-v1
    parameters:
      density: 1.0
      weight: 1.0
  - model: arcee-train/Qwen2.5-14B-Instruct_arcee-qwen2-14B-v0.2
    parameters:
      density: 0.66
      weight:
        - filter: mlp
          value: [0, 0.3, 0.6, 0.1]
        - filter: self_attn
          value: [0, 0, 0.2, 0.1]
        - value: 0.1
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
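The list-valued `weight` entries above (e.g. `[0, 0.3, 0.6, 0.1]` for the `mlp` filter) define a gradient: mergekit linearly interpolates between the listed anchor values across the model's layers, so the second model contributes little at the first layers, peaks around two-thirds of the way through, and tapers off at the end. A minimal sketch of that interpolation (the `gradient_value` helper and the 48-layer count for Qwen2.5-14B are assumptions for illustration, not mergekit's actual API):

```python
def gradient_value(anchors, frac):
    # Piecewise-linear interpolation of the anchor list at a fractional
    # depth `frac` in [0, 1] (0 = first layer, 1 = last layer).
    if len(anchors) == 1:
        return anchors[0]
    scaled = frac * (len(anchors) - 1)
    i = min(int(scaled), len(anchors) - 2)  # segment index
    t = scaled - i                          # position within the segment
    return anchors[i] * (1 - t) + anchors[i + 1] * t

# Per-layer mlp weights for the second model, assuming 48 layers:
mlp_anchors = [0, 0.3, 0.6, 0.1]
per_layer = [gradient_value(mlp_anchors, i / 47) for i in range(48)]
print(per_layer[0], per_layer[-1])  # endpoints match the first/last anchors
```

The trailing `- value: 0.1` entry acts as a catch-all weight for tensors matched by neither the `mlp` nor the `self_attn` filter.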