Lamarckvergence-14B

At the time of writing, this model ranks #1 among models up to 15B parameters and #56 among all models on the Open LLM Leaderboard.

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
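
SLERP (spherical linear interpolation) blends two checkpoints along an arc on a hypersphere rather than along a straight line, which preserves the magnitude of the interpolated weights better than plain averaging. The snippet below is a minimal, self-contained sketch of the idea for a single pair of weight tensors; it is an illustration only, not mergekit's actual implementation. In mergekit's SLERP method, the interpolation factor `t` is the parameter set in the configuration further down.

```python
import torch

def slerp(t: float, w1: torch.Tensor, w2: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns w1, t=1 returns w2; intermediate values follow the
    great-circle arc between the two flattened weight vectors.
    """
    v1 = w1.flatten().float()
    v2 = w2.flatten().float()
    # Angle between the two weight vectors.
    cos_theta = torch.dot(v1, v2) / (v1.norm() * v2.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1 - t) * w1 + t * w2
    sin_theta = torch.sin(theta)
    a = torch.sin((1 - t) * theta) / sin_theta
    b = torch.sin(t * theta) / sin_theta
    return (a * v1 + b * v2).reshape(w1.shape).to(w1.dtype)
```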

Models Merged

The following models were included in the merge:

- sometimesanotion/Lamarck-14B-v0.7 (base model)
- sometimesanotion/Qwenvergence-14B-v12-Prose-DS

Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: sometimesanotion/Lamarck-14B-v0.7
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 48]
    model: sometimesanotion/Lamarck-14B-v0.7
  - layer_range: [0, 48]
    model: sometimesanotion/Qwenvergence-14B-v12-Prose-DS
```
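
Here, `t` sets the interpolation weight between the two models for each tensor. The five values in each `filter` list form a gradient across the 48 layers: under the usual convention that t = 0 keeps the base model and t = 1 takes the other model, attention weights shift from Lamarck toward Qwenvergence with depth, the MLP weights follow the opposite gradient, and all remaining tensors use a constant t of 0.5. The sketch below illustrates one plausible way such an anchor list maps to per-layer values, with anchors spread evenly over the layers and linearly interpolated; mergekit's exact scheduling may differ. Saving the configuration as `config.yaml` and running `mergekit-yaml config.yaml ./merged` reproduces the merge.

```python
import numpy as np

# Illustrative assumption: each 5-value anchor list is spread evenly
# across the 48 layers and linearly interpolated between anchors.
anchors_attn = [0.0, 0.5, 0.3, 0.7, 1.0]  # self_attn filter
anchors_mlp = [1.0, 0.5, 0.7, 0.3, 0.0]   # mlp filter

layers = np.arange(48)
anchor_pos = np.linspace(0, 47, num=len(anchors_attn))

t_attn = np.interp(layers, anchor_pos, anchors_attn)
t_mlp = np.interp(layers, anchor_pos, anchors_mlp)

# Early layers: attention stays close to the base model (t near 0),
# while MLPs lean toward Qwenvergence (t near 1).
print(t_attn[:4].round(2), t_mlp[:4].round(2))
```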

Open LLM Leaderboard Evaluation Results

Detailed per-task results are available on the Open LLM Leaderboard.

| Metric              | Value (%) |
|---------------------|-----------|
| Avg.                | 43.32     |
| IFEval (0-shot)     | 76.56     |
| BBH (3-shot)        | 50.33     |
| MATH Lvl 5 (4-shot) | 54.00     |
| GPQA (0-shot)       | 15.10     |
| MuSR (0-shot)       | 16.34     |
| MMLU-PRO (5-shot)   | 47.59     |
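
The merged model can be loaded like any other Hugging Face checkpoint. The snippet below is a standard transformers usage sketch; the prompt and generation settings are illustrative, and it assumes the tokenizer ships a chat template, as Qwen-based models typically do.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/Lamarckvergence-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain SLERP model merging in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```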
