---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Qwen2
- Coder
- Math
- Bunnycore
- Instruct
- OpenBookQA
- instruction-following
- long-form-generation
base_model:
- unsloth/Qwen2.5-Coder-1.5B-Instruct
---
### ⚠️ **Experimental Model - Pre-Alpha Warning**
Please note that **ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion** is currently in **Pre-Alpha** and under active **revision**. Some features may not perform as expected while the model remains experimental. We are continuously refining the merge, and future updates will improve performance and stability.
**Known Issues**:
- The quantized versions of this model may produce random tokens and exhibit unstable behavior.
- Further revisions are in progress to ensure better grammatical coherence and sentence generation.
# **ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion**
**ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion** is a cutting-edge merged model that combines the finest features from **instruction-following**, **coding**, **mathematical reasoning**, and **factual question-answering**. This powerhouse is designed for high performance in diverse technical, creative, and interactive tasks.
## 🌟 **Family Tree**
This model is the fusion of the following:
- [**cyixiao/qwen-1.5B-openbookqa**](https://huggingface.co./cyixiao/qwen-1.5B-openbookqa)
- [**unsloth/Qwen2.5-Coder-1.5B-Instruct**](https://huggingface.co./unsloth/Qwen2.5-Coder-1.5B-Instruct)
- [**Qwen/Qwen2.5-Math-1.5B-Instruct**](https://huggingface.co./Qwen/Qwen2.5-Math-1.5B-Instruct)
- [**bunnycore/Qwen2.5-1.5B-Matrix**](https://huggingface.co./bunnycore/Qwen2.5-1.5B-Matrix)
- [**Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini**](https://huggingface.co./Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini)
- [**Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3**](https://huggingface.co./Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3)
These models have been seamlessly blended to create a versatile AI that excels across multiple domains.
---
## 🧬 **Detailed Model Lineage**
### **A: cyixiao/qwen-1.5B-openbookqa**
- Focuses on factual knowledge and reasoning from the OpenBookQA dataset, providing strong question-answering capabilities.
### **B: unsloth/Qwen2.5-Coder-1.5B-Instruct**
- Tailored for **coding** and **instruction-following**, this model enhances the ability to generate code and follow precise instructions with ease.
### **C: Qwen/Qwen2.5-Math-1.5B-Instruct**
- This model specializes in **mathematical reasoning** and logical problem-solving, making it perfect for structured tasks that require high-level thinking.
### **D: bunnycore/Qwen2.5-1.5B-Matrix**
- A multi-purpose model that blends **instruction**, **math**, and **coding**, providing a well-rounded performance in both structured and creative tasks.
### **E: Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini**
- Fine-tuned on conversational and identity-specific tasks, this model contributes to the model’s ability to handle **conversation-heavy** tasks with clarity.
### **F: Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3**
- This model brings **uncensored** capabilities, ensuring that the AI is flexible and adaptable in open-ended and unrestricted instruction-following scenarios.
---
## 🛠️ **Merge Details**
The model was merged using the **DELLA merge method** with **bfloat16** precision, ensuring high performance across multiple task types. Here's the configuration used for the merge:
```yaml
merge_method: della
dtype: bfloat16
parameters:
  epsilon: 0.1
  lambda: 1.0
  normalize: true
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
models:
  - model: cyixiao/qwen-1.5B-openbookqa
    parameters:
      weight: 1
      density: 0.5
  - model: unsloth/Qwen2.5-Coder-1.5B-Instruct
    parameters:
      weight: 1
      density: 0.6
  - model: Qwen/Qwen2.5-Math-1.5B-Instruct
    parameters:
      weight: 1
      density: 0.55
  - model: bunnycore/Qwen2.5-1.5B-Matrix
    parameters:
      weight: 1
      density: 0.55
  - model: Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini
    parameters:
      weight: 1
      density: 0.45
  - model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3
    parameters:
      weight: 1
      density: 0.5
```
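Conceptually, DELLA prunes each source model's parameter deltas (model minus base) using magnitude-aware drop probabilities, rescales the survivors, and sums the results back onto the base model. The toy NumPy sketch below is **not** mergekit's implementation; the function name and the linear rank-to-probability mapping are simplifications for illustration only, but it shows what the `density`, `epsilon`, `lambda`, and per-model `weight` knobs in the config above control:

```python
import numpy as np

def della_merge_sketch(base, deltas, weights, densities, epsilon=0.1, lam=1.0, seed=0):
    """Toy sketch of DELLA-style delta merging over flat parameter vectors.

    For each model's delta (model - base):
      1. Rank parameters by magnitude; larger deltas get a lower drop
         probability, spread +/- epsilon/2 around the mean drop rate (1 - density).
      2. Stochastically drop parameters and rescale survivors by 1 / keep_prob.
      3. Weighted-sum the pruned deltas and add them back to the base, scaled by lambda.
    """
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base, dtype=float)
    total_w = float(sum(weights))
    for delta, w, density in zip(deltas, weights, densities):
        n = delta.size
        # ranks: 0 = smallest magnitude ... n-1 = largest magnitude
        ranks = np.argsort(np.argsort(np.abs(delta)))
        # drop probability decreases linearly with magnitude rank
        p_drop = (1.0 - density) + epsilon * (0.5 - ranks / max(n - 1, 1))
        p_drop = np.clip(p_drop, 0.0, 1.0)
        keep = rng.random(n) > p_drop
        # rescale survivors so the expected delta magnitude is preserved
        rescaled = delta / np.maximum(1.0 - p_drop, 1e-8)
        merged_delta += (w / total_w) * np.where(keep, rescaled, 0.0)
    return base + lam * merged_delta
```

With `density: 0.5`, roughly half of each model's delta parameters survive; `epsilon: 0.1` biases survival toward larger-magnitude deltas, and `normalize: true` corresponds to dividing by the total weight as above.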
---
## 🎯 **Key Features & Capabilities**
### **1. Coding and Instruction Following**:
This model excels in technical coding tasks thanks to the contributions from **Qwen2.5-Coder** and **Matrix**.
### **2. Mathematical Reasoning**:
With **Qwen2.5-Math-1.5B-Instruct**, the model is perfect for solving complex **mathematical problems** and structured logical tasks.
### **3. Conversational Abilities**:
Thanks to **Syed-Hasan-8503**'s fine-tuning on conversational and identity-specific tasks, the model handles complex dialogue and multi-turn exchanges with clarity.
### **4. Uncensored Versatility**:
Thanks to **Josiefied-Qwen2.5**, this model can operate without restrictions, making it ideal for **open-ended instruction-following**.
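As a Qwen2.5 derivative, the model expects the ChatML prompt layout with `<|im_start|>` / `<|im_end|>` markers. In practice, `tokenizer.apply_chat_template` from 🤗 Transformers builds this for you; the minimal sketch below (the helper name is ours, for illustration) shows the layout by hand:

```python
def build_chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts into Qwen's ChatML layout."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    # Trailing assistant header tells the model to start generating its reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)
```

For example, a system message plus a user turn produces `<|im_start|>system\n...<|im_end|>\n<|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n`, after which the model generates until it emits `<|im_end|>`.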
---
## 📜 **License**
This model is open-sourced under the **Apache-2.0 License**, allowing others to use and modify it freely, as long as they give proper attribution.
---
## 💡 **Tags**
- `merge`
- `Qwen`
- `Coder`
- `Math`
- `Bunnycore`
- `instruction-following`
- `long-form-generation`
---