---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- vortexmergekit
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- eldogbbhed/NeuralMonarchCoderPearlBeagle
---

# NeuralMonarchCoderPearlBeagle-T3Q-Mistral-Orca-Math-DPO-7b

This is a [TIES](https://arxiv.org/abs/2306.01708) merge of multiple models, brought together using the awesome [VortexMerge kit](https://colab.research.google.com/drive/1YjcvCLuNG1PK7Le6_4xhVU5VpzTwvGhk#scrollTo=UG5H2TK4gVyl), with [mlabonne/NeuralBeagle14-7B](https://huggingface.co./mlabonne/NeuralBeagle14-7B) as the base model.

Let's see what we've got in this merge:
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co./chihoonlee10/T3Q-Mistral-Orca-Math-DPO) 🚀
* [eldogbbhed/NeuralMonarchCoderPearlBeagle](https://huggingface.co./eldogbbhed/NeuralMonarchCoderPearlBeagle) 🚀

## 🧩 Configuration

```yaml
models:
  - model: mlabonne/NeuralBeagle14-7B
    # no parameters necessary for base model
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      density: 0.5
      weight: 0.5
  - model: eldogbbhed/NeuralMonarchCoderPearlBeagle
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
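
In this config, `density` keeps half of each fine-tuned model's task-vector parameters, `weight` scales each model's contribution to the merge (0.5 vs. 0.3), and `normalize: true` rescales those weights to sum to 1.

## 💻 Usage

To reproduce the merge, the config above can be saved as `config.yaml` and fed to mergekit's CLI via `mergekit-yaml config.yaml ./merged-model --cuda` (after `pip install mergekit`); the output path here is just a placeholder.

Here's a minimal inference sketch with 🤗 Transformers. The repo id is assumed from this card's title; adjust it if the weights live elsewhere.

```python
# Minimal sketch, assuming the merged weights are published under this (assumed) repo id.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "eldogbbhed/NeuralMonarchCoderPearlBeagle-T3Q-Mistral-Orca-Math-DPO-7b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Solve 12 * 17 step by step."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,  # matches the merge dtype above
    device_map="auto",
)
print(pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)[0]["generated_text"])
```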