# Llama_3.2_1b_Odyssea_V1.01
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Model Stock merge method, with meditsolutions/Llama-3.2-SUN-1B-chat as the base model.
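For intuition, here is a minimal NumPy sketch of the geometric idea behind Model Stock, applied to a single tensor with two fine-tuned checkpoints. This follows the interpolation formula from the Model Stock paper and is not mergekit's actual implementation, which operates over every parameter of the loaded checkpoints; all names and shapes are illustrative:

```python
import numpy as np

def model_stock_merge(base: np.ndarray, ft_a: np.ndarray, ft_b: np.ndarray) -> np.ndarray:
    """Toy per-tensor Model Stock merge for two fine-tuned checkpoints.

    Each fine-tune is treated as a task vector (its delta from the base).
    The ratio t = 2*cos(theta) / (1 + cos(theta)), where theta is the angle
    between the two task vectors, pulls the average of the fine-tunes back
    toward the base: the less the fine-tunes agree, the closer to the base.
    """
    delta_a = (ft_a - base).ravel()
    delta_b = (ft_b - base).ravel()
    cos_theta = delta_a @ delta_b / (np.linalg.norm(delta_a) * np.linalg.norm(delta_b))
    t = 2.0 * cos_theta / (1.0 + cos_theta)
    avg = (ft_a + ft_b) / 2.0
    return t * avg + (1.0 - t) * base

# Tiny demo on random matrices standing in for one transformer weight tensor.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
merged = model_stock_merge(base,
                           base + rng.normal(scale=0.1, size=(4, 4)),
                           base + rng.normal(scale=0.1, size=(4, 4)))
print(merged.shape)  # (4, 4)
```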
### Models Merged
The following models were included in the merge:

* Nexesenex/Llama_3.2_1b_AquaSyn_0.1
* Nexesenex/Llama_3.2_1b_Synopsys_0.1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
  - model: Nexesenex/Llama_3.2_1b_AquaSyn_0.1
    parameters:
      weight: 1.0
  - model: Nexesenex/Llama_3.2_1b_Synopsys_0.1
    parameters:
      weight: 1.0
base_model: meditsolutions/Llama-3.2-SUN-1B-chat
dtype: bfloat16
normalize: false
chat_template: auto
tokenizer:
  source: union
```
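To reproduce the merge, this configuration can be passed to mergekit, either via the `mergekit-yaml` CLI or through its Python API. A minimal sketch of the latter, assuming a recent mergekit release; the config path and output directory are illustrative:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; source models are pulled from the Hugging Face Hub as needed.
run_merge(
    merge_config,
    out_path="./Llama_3.2_1b_Odyssea_V1.01",  # illustrative output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```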
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric              | Value |
|---------------------|------:|
| Avg.                |  5.66 |
| IFEval (0-Shot)     | 24.95 |
| BBH (3-Shot)        |  2.85 |
| MATH Lvl 5 (4-Shot) |  1.74 |
| GPQA (0-shot)       |  0.78 |
| MuSR (0-shot)       |  1.92 |
| MMLU-PRO (5-shot)   |  1.69 |
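Since the configuration sets `chat_template: auto` and `dtype: bfloat16`, the merged model can be used as a standard chat model with the `transformers` library. A minimal sketch; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/Llama_3.2_1b_Odyssea_V1.01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt with the model's bundled chat template.
messages = [{"role": "user", "content": "Summarize the Model Stock merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```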