This model is currently ranked #1 among models with up to 15B parameters and #56 among all models on the Open LLM Leaderboard.
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SLERP merge method.
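SLERP (spherical linear interpolation) blends two models along the great-circle arc between their weight tensors rather than along a straight line, which preserves the magnitude of the blended weights better than a plain average. As a rough illustration, here is a minimal numpy sketch of the idea; this is not mergekit's internal code, and the function and variable names are mine:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    # Measure the angle between the two tensors, treating each as one long vector.
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    cos_omega = np.dot(v0_flat, v1_flat) / (
        np.linalg.norm(v0_flat) * np.linalg.norm(v1_flat) + eps
    )
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if np.sin(omega) < eps:
        return (1.0 - t) * v0 + t * v1
    # Standard SLERP: interpolate along the arc between v0 and v1.
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

# t = 0.0 returns the first model's tensor, t = 1.0 the second's,
# and t = 0.5 a spherical midpoint.
a, b = np.random.randn(4, 4), np.random.randn(4, 4)
mid = slerp(0.5, a, b)
```

In the configuration below, `t` plays this role and is scheduled per layer group: self-attention layers sweep from 0.0 to 1.0 across the depth of the stack, MLP layers follow the mirrored schedule, and all remaining tensors use a flat 0.5.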
The following models were included in the merge:

- sometimesanotion/Lamarck-14B-v0.7
- sometimesanotion/Qwenvergence-14B-v12-Prose-DS
The following YAML configuration was used to produce this model:
```yaml
base_model: sometimesanotion/Lamarck-14B-v0.7
dtype: bfloat16
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - layer_range: [0, 48]
        model: sometimesanotion/Lamarck-14B-v0.7
      - layer_range: [0, 48]
        model: sometimesanotion/Qwenvergence-14B-v12-Prose-DS
```
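To reproduce the merge, this configuration can be saved to a file and passed to mergekit's command-line tool, e.g. `mergekit-yaml config.yaml ./output-model-directory` (the config filename and output path here are illustrative).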
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 43.32 |
| IFEval (0-shot) | 76.56 |
| BBH (3-shot) | 50.33 |
| MATH Lvl 5 (4-shot) | 54.00 |
| GPQA (0-shot) | 15.10 |
| MuSR (0-shot) | 16.34 |
| MMLU-PRO (5-shot) | 47.59 |
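For reference, the reported Avg. is the arithmetic mean of the six benchmark scores above: (76.56 + 50.33 + 54.00 + 15.10 + 16.34 + 47.59) / 6 = 43.32.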