
Jacoby746/Casual-Magnum-34B

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
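mergekit's exact SLERP implementation may differ, but the core operation — interpolating along the great-circle arc between two flattened weight tensors instead of the straight line between them — can be sketched as follows (the helper name, epsilon, and colinearity threshold are illustrative assumptions, not mergekit internals):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t values move along the
    arc between them. Falls back to plain linear interpolation when the
    vectors are nearly colinear, where the spherical formula is unstable.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    if abs(dot) > 0.9995:  # nearly colinear: lerp is numerically safer
        return (1 - t) * v0 + t * v1
    omega = np.arccos(dot)  # angle between the two weight vectors
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```

Unlike a plain weighted average, SLERP preserves the angular geometry between the two parameter sets, which is why it is a popular choice for two-model merges.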

Models Merged

The following models were included in the merge:

  • NeverSleep/CausalLM-RP-34B
  • anthracite-org/magnum-v3-34b

Configuration

The following YAML configuration was used to produce this model:


```yaml
models:
  - model: ../models/anthracite-org_magnum-v3-34b
  - model: ../models/NeverSleep_CausalLM-RP-34B
merge_method: slerp
base_model: ../models/NeverSleep_CausalLM-RP-34B
parameters:
  t:
    - value: [0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.5, 0.4, 0.2, 0.1, 0]
  embed_slerp: true
dtype: float16
```
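The `t` list defines a gradient across the layer stack: the anchor values are spread evenly over the layers and linearly interpolated in between, so interpolation strength peaks at 0.6 (leaning toward magnum-v3-34b) in the middle layers and falls to 0 (pure base model) at both ends. Assuming that gradient semantics, the per-layer `t` can be computed as:

```python
import numpy as np

def layer_t(values, num_layers):
    """Expand a mergekit-style t gradient to one value per layer.

    The anchor values are placed evenly along [0, 1] and each layer's
    position on that interval is linearly interpolated between them.
    (Illustrative sketch; mergekit's internal expansion may differ.)
    """
    anchors = np.linspace(0.0, 1.0, len(values))
    layer_pos = np.linspace(0.0, 1.0, num_layers)
    return np.interp(layer_pos, anchors, values)
```

For example, `layer_t([0, 0.5, 0], 5)` yields a ramp up to 0.5 at the middle layer and back down to 0 at the ends.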

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 23.57 |
| IFEval (0-Shot)     | 19.30 |
| BBH (3-Shot)        | 43.05 |
| MATH Lvl 5 (4-Shot) | 7.85  |
| GPQA (0-shot)       | 16.33 |
| MuSR (0-shot)       | 8.40  |
| MMLU-PRO (5-shot)   | 46.49 |
The merged model has 34.4B parameters and is published as FP16 safetensors.
