---
base_model:
- win10/EVA-QwQ-32B-Preview
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- maldv/Qwentile2.5-32B-Instruct
- Sao10K/32B-Qwen2.5-Kunou-v1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Info

Trying to make something different here. Feel free to try it, like it or dislike it, and leave feedback; I'm not claiming anything about anything.

I made one quant myself: [q4ks_QwentileSwap.gguf](https://huggingface.co./Aryanne/QwentileSwap/blob/main/q4ks_QwentileSwap.gguf).

Thanks also to mradermacher for the other quants: [gguf](https://huggingface.co./mradermacher/QwentileSwap-GGUF) and [gguf_imatrix_quants](https://huggingface.co./mradermacher/QwentileSwap-i1-GGUF).

# merged

This is a merge of pre-trained language models created using my custom task_swapping method in [mergekit](https://github.com/Ar57m/mergekit/tree/swapping).

## Merge Details

### Merge Method

This model was merged using the task_swapping merge method with [win10/EVA-QwQ-32B-Preview](https://huggingface.co./win10/EVA-QwQ-32B-Preview) as the base.

### Models Merged

The following models were included in the merge:
* [ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3](https://huggingface.co./ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3)
* [maldv/Qwentile2.5-32B-Instruct](https://huggingface.co./maldv/Qwentile2.5-32B-Instruct)
* [Sao10K/32B-Qwen2.5-Kunou-v1](https://huggingface.co./Sao10K/32B-Qwen2.5-Kunou-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: win10/EVA-QwQ-32B-Preview
dtype: bfloat16
merge_method: task_swapping
slices:
- sources:
  - layer_range: [0, 64]
    model: maldv/Qwentile2.5-32B-Instruct
    parameters:
      diagonal_offset: 2.0 # ignored here
      random_mask: 0.666
      random_mask_seed: 888.0
      weight: 0.5
  - layer_range: [0, 64]
    model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
    parameters:
      diagonal_offset: 5.0
      weight: 0.75
  - layer_range: [0, 64]
    model: Sao10K/32B-Qwen2.5-Kunou-v1
    parameters:
      diagonal_offset: 2.0 # ignored here
      random_mask: 0.333
      random_mask_seed: 12347888.0
      weight: 0.5
  - layer_range: [0, 64]
    model: win10/EVA-QwQ-32B-Preview
```
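
# Usage

To reproduce the merge, save the YAML above as `config.yaml` and run it with the fork linked above. Assuming the fork keeps the standard mergekit CLI, the invocation would look like `mergekit-yaml config.yaml ./merged --cuda`; check the fork's `swapping` branch for any extra requirements.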
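
For inference with the full-precision weights, here is a minimal sketch using the standard transformers API. The repo id `Aryanne/QwentileSwap` is inferred from the quant link above, and the bfloat16 dtype matches the merge config; everything else is ordinary boilerplate, not anything specific to this model.

```python
# Minimal sketch: loading the merged model with transformers.
# Assumes the merged weights live at "Aryanne/QwentileSwap"
# (repo id inferred from the quant link) and that you have enough
# VRAM/RAM for a 32B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aryanne/QwentileSwap"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",           # shard across available GPUs
)

messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```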
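
The GGUF quants can be run with any llama.cpp-based stack. Below is a sketch using llama-cpp-python with the q4_K_S file linked in the Info section; the context size and GPU-offload settings are illustrative defaults, not recommendations specific to this merge.

```python
# Minimal sketch: running the q4_K_S GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the file linked above
# has been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="q4ks_QwentileSwap.gguf",
    n_ctx=8192,       # context window; adjust to taste
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short greeting."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```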