---
base_model:
- bunnycore/Qwen-2.5-3b-RP
- Replete-AI/Replete-LLM-V2.5-Qwen-3b
- bunnycore/Qwen-2.5-3b-Mix-Data-lora
library_name: transformers
tags:
- mergekit
- merge
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Qwen-2.5-3b-Evol-CoT-GGUF

This is a quantized version of [bunnycore/Qwen-2.5-3b-Evol-CoT](https://huggingface.co./bunnycore/Qwen-2.5-3b-Evol-CoT), created using llama.cpp. A short usage sketch is included at the end of this card.

# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [bunnycore/Qwen-2.5-3b-RP](https://huggingface.co./bunnycore/Qwen-2.5-3b-RP) as the base model.

### Models Merged

The following models were included in the merge:

* [Replete-AI/Replete-LLM-V2.5-Qwen-3b](https://huggingface.co./Replete-AI/Replete-LLM-V2.5-Qwen-3b) + [bunnycore/Qwen-2.5-3b-Mix-Data-lora](https://huggingface.co./bunnycore/Qwen-2.5-3b-Mix-Data-lora)
* [bunnycore/Qwen-2.5-3b-RP](https://huggingface.co./bunnycore/Qwen-2.5-3b-RP) + [bunnycore/Qwen-2.5-3b-Mix-Data-lora](https://huggingface.co./bunnycore/Qwen-2.5-3b-Mix-Data-lora)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Replete-AI/Replete-LLM-V2.5-Qwen-3b+bunnycore/Qwen-2.5-3b-Mix-Data-lora
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Qwen-2.5-3b-RP+bunnycore/Qwen-2.5-3b-Mix-Data-lora
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: bunnycore/Qwen-2.5-3b-RP
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
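For intuition about what `dare_ties` with `density: 0.5` and `weight: 0.5` does, here is a minimal conceptual sketch of the update rule on a single parameter tensor. This is not mergekit's actual implementation; the function names and the two-tensor setup are illustrative assumptions based on the DARE and TIES papers linked above.

```python
# Conceptual sketch of dare_ties on one parameter tensor (NOT mergekit's code).
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: randomly keep each delta entry with probability `density`,
    then rescale survivors by 1/density to preserve expected magnitude."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base: torch.Tensor, tuned: list[torch.Tensor],
              density: float, weights: list[float]) -> torch.Tensor:
    # Task vectors: each fine-tuned tensor's difference from the base.
    deltas = [dare(t - base, density) for t in tuned]
    weighted = [w * d for w, d in zip(weights, deltas)]
    # TIES sign election: pick the majority sign per element, then keep
    # only the components that agree with it before summing.
    sign = torch.stack(weighted).sum(dim=0).sign()
    agreed = [torch.where(d.sign() == sign, d, torch.zeros_like(d))
              for d in weighted]
    # With normalize: false (as in the config above), the summed deltas
    # are applied without renormalizing by the total weight.
    return base + torch.stack(agreed).sum(dim=0)

# Toy run with this card's values: two merged models, density=0.5, weight=0.5.
base = torch.randn(4, 4)
merged = dare_ties(base, [torch.randn(4, 4), torch.randn(4, 4)],
                   density=0.5, weights=[0.5, 0.5])
```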
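To run the GGUF quant locally, a minimal sketch with `llama-cpp-python` is shown below. The exact `.gguf` filename is an assumption; check the repository's file list for the quant level you want (e.g. Q4_K_M, Q8_0).

```python
# Minimal sketch: download one GGUF file from this repo and run a chat turn.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Qwen-2.5-3b-Evol-CoT-GGUF",
    filename="Qwen-2.5-3b-Evol-CoT.Q4_K_M.gguf",  # assumed filename; verify in the repo
)

llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Explain chain-of-thought prompting in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```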