This is a merge of pre-trained language models created using mergekit.
This model was merged using the della_linear merge method, with qwen/Qwen2.5-14b as the base.
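For intuition, the sketch below is a toy numpy rendering of what della_linear does (simplified from mergekit's DELLA implementation, not its actual code): each fine-tuned model's delta from the base is stochastically pruned with magnitude-dependent keep probabilities (set by `density` and `epsilon`), rescaled, combined as a weighted linear sum, scaled by `lambda`, and added back to the base weights.

```python
import numpy as np

def della_linear_sketch(base, finetunes, weights, densities,
                        lam=1.05, eps=0.04, normalize=True, rng=None):
    """Toy, flattened-tensor sketch of della_linear; illustration only."""
    rng = np.random.default_rng() if rng is None else rng
    deltas = []
    for ft, density in zip(finetunes, densities):
        delta = ft - base                              # task vector
        # Magnitude-adaptive pruning: higher-|delta| entries get a keep
        # probability slightly above `density`, lower ones slightly below,
        # spread across a window of width `eps`.
        ranks = np.argsort(np.argsort(np.abs(delta)))  # 0 = smallest magnitude
        keep_p = np.clip(
            density - eps / 2 + eps * ranks / max(delta.size - 1, 1), 0.0, 1.0
        )
        mask = rng.random(delta.size) < keep_p
        deltas.append(np.where(mask, delta / np.maximum(keep_p, 1e-8), 0.0))
    w = np.asarray(weights, dtype=float)
    if normalize:
        w = w / w.sum()                                # `normalize: true`
    merged = sum(wi * d for wi, d in zip(w, deltas))
    return base + lam * merged                         # `lambda` rescales the sum
```

With the configuration below, arcee-ai/SuperNova-Medius contributes the largest weight (10) at density 1 (effectively no pruning), while the remaining models are pruned at densities of 0.4 to 0.5.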
The following models were included in the merge:

* arcee-ai/SuperNova-Medius
* EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
* v000000/Qwen2.5-Lumen-14B
* allura-org/TQ2.5-14B-Aletheia-v1
* huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
The following YAML configuration was used to produce this model:
```yaml
merge_method: della_linear
dtype: float32
out_dtype: bfloat16
parameters:
  epsilon: 0.04
  lambda: 1.05
  normalize: true
base_model: qwen/Qwen2.5-14b
tokenizer_source: arcee-ai/SuperNova-Medius
models:
  - model: arcee-ai/SuperNova-Medius
    parameters:
      weight: 10
      density: 1
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
    parameters:
      weight: 7
      density: 0.5
  - model: v000000/Qwen2.5-Lumen-14B
    parameters:
      weight: 7
      density: 0.4
  - model: allura-org/TQ2.5-14B-Aletheia-v1
    parameters:
      weight: 8
      density: 0.4
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
    parameters:
      weight: 8
      density: 0.45
```
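To reproduce the merge, save the YAML above as `config.yaml` and drive mergekit's `mergekit-yaml` CLI; the sketch below assumes mergekit is installed (`pip install mergekit`) and uses placeholder paths.

```python
# Minimal sketch: run the merge via the mergekit-yaml CLI.
# "./merged-model" is a placeholder output directory; drop "--cuda" to merge on CPU.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged-model", "--cuda"],
    check=True,
)
```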
Detailed results can be found here.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 39.21 |
| IFEval (0-shot)     | 82.92 |
| BBH (3-shot)        | 49.75 |
| MATH Lvl 5 (4-shot) | 28.02 |
| GPQA (0-shot)       | 14.54 |
| MuSR (0-shot)       | 12.26 |
| MMLU-PRO (5-shot)   | 47.76 |
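For a quick local smoke test with transformers, a minimal loading sketch follows; the repository id is a placeholder for this model's actual repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/your-merged-model"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize model merging in one line."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```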