---
base_model:
- AiCloser/Qwen2.5-32B-AGI
- crestf411/Q2.5-32B-Slush
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- rombodawg/Rombos-LLM-V2.5-Qwen-32b
- unsloth/qwen2.5-32b-instruct
library_name: transformers
tags:
- mergekit
- merge
---
# Qwenwify-32B-v1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/qwen2.5-32b-instruct](https://huggingface.co./unsloth/qwen2.5-32b-instruct) as the base model.
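Conceptually, DARE randomly drops entries of each model's task vector (its delta from the base) and rescales the survivors so the expected delta is unchanged, and TIES then resolves sign conflicts between models before summing. The following is a toy NumPy sketch of that idea only; it is not mergekit's actual `dare_ties` implementation, which handles the `weight`, `epsilon`, and `lambda` parameters with its own conventions:

```python
import numpy as np

def dare_prune(delta, density, rng):
    """DARE: randomly keep a `density` fraction of delta entries and
    rescale survivors by 1/density so the expected delta is preserved."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetuned, weights, densities, seed=0):
    """Toy DARE-TIES merge of several fine-tuned tensors onto a base."""
    rng = np.random.default_rng(seed)
    # 1. Task vectors (delta from base), DARE-pruned and weighted.
    deltas = [dare_prune(ft - base, d, rng) * w
              for ft, w, d in zip(finetuned, weights, densities)]
    stacked = np.stack(deltas)
    # 2. TIES sign election: per parameter, keep only deltas whose sign
    #    agrees with the sign of the summed delta (majority direction).
    elected = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    # 3. Average over the models that actually contributed.
    counts = np.maximum(agree.sum(axis=0), 1)
    return base + merged_delta / counts
```

With a single donor model at weight 1.0 and density 1.0, this reduces to returning the fine-tuned weights unchanged, which is a useful sanity check on the logic.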
### Models Merged
The following models were included in the merge:
* [AiCloser/Qwen2.5-32B-AGI](https://huggingface.co./AiCloser/Qwen2.5-32B-AGI)
* [crestf411/Q2.5-32B-Slush](https://huggingface.co./crestf411/Q2.5-32B-Slush)
* [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co./EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2)
* [rombodawg/Rombos-LLM-V2.5-Qwen-32b](https://huggingface.co./rombodawg/Rombos-LLM-V2.5-Qwen-32b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      weight: 1.0
      density: 0.85
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
    parameters:
      weight: 0.28
      density: 0.75
  - model: crestf411/Q2.5-32B-Slush
    parameters:
      weight: 0.25
      density: 0.74
  - model: AiCloser/Qwen2.5-32B-AGI
    parameters:
      weight: 0.14
      density: 0.66
merge_method: dare_ties
base_model: unsloth/qwen2.5-32b-instruct
parameters:
  density: 0.84
  epsilon: 0.07
  lambda: 1.24
dtype: bfloat16
tokenizer_source: union
```
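To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` CLI. A sketch, assuming mergekit is installed and the config is saved as `config.yaml` (the output directory name is arbitrary; `--cuda` is optional and only useful on a GPU machine):

```shell
pip install mergekit
mergekit-yaml config.yaml ./Qwenwify-32B-v1 --cuda
```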