---
base_model:
- meta-llama/Llama-3.1-8B
- grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
license: llama3.1
model-index:
- name: DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 47.97
      name: averaged accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=grimjim%2FDeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.77
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=grimjim%2FDeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 21.98
      name: exact match
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=grimjim%2FDeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.74
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=grimjim%2FDeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.1
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=grimjim%2FDeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 32.85
      name: accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=grimjim%2FDeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B
      name: Open LLM Leaderboard
---
# DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

DeepSeek-R1-Distill-Llama-8B was merged in at a low weight (0.1) in the hope of improving the reasoning capability of the resulting model.

Built with Llama.
## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [meta-llama/Llama-3.1-8B](https://huggingface.co./meta-llama/Llama-3.1-8B) as the base.
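Conceptually, task arithmetic subtracts the base model from each fine-tuned model to obtain a "task vector", then adds a weighted sum of those vectors back onto the base. The sketch below illustrates this on toy 1-D arrays; it is illustrative only (mergekit operates per-tensor on full checkpoints), and `task_arithmetic_merge` is a hypothetical helper, not a mergekit API:

```python
import numpy as np

def task_arithmetic_merge(base, models, weights, normalize=True):
    """Merge parameter tensors as base + sum_i w_i * (model_i - base)."""
    weights = np.asarray(weights, dtype=float)
    if normalize:
        # Mirrors the `normalize: true` option: scale weights to sum to 1.
        weights = weights / weights.sum()
    merged = base.astype(float).copy()
    for w, m in zip(weights, models):
        merged += w * (m - base)  # (model_i - base) is the "task vector"
    return merged

# Toy 1-D tensors standing in for full model weights.
base = np.array([1.0, 1.0, 1.0])
model_a = np.array([2.0, 1.0, 1.0])  # task vector (1, 0, 0)
model_b = np.array([1.0, 3.0, 1.0])  # task vector (0, 2, 0)
print(task_arithmetic_merge(base, [model_a, model_b], weights=[0.1, 0.9]))
# approximately [1.1, 2.8, 1.0]
```

Note that the weights used in this card (0.1 and 0.9) already sum to 1, so normalization is a no-op here.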
### Models Merged

The following models were included in the merge:

* [grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B](https://huggingface.co./grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B)
* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co./deepseek-ai/DeepSeek-R1-Distill-Llama-8B)

### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: meta-llama/Llama-3.1-8B
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: true
models:
  - model: meta-llama/Llama-3.1-8B
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
    parameters:
      weight: 0.1
  - model: grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B
    parameters:
      weight: 0.9
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/grimjim__DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-details)!
Summarized results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/contents/viewer/default/train?q=grimjim%2FDeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 26.90|
|IFEval (0-Shot) | 47.97|
|BBH (3-Shot) | 32.77|
|MATH Lvl 5 (4-Shot)| 21.98|
|GPQA (0-shot) | 11.74|
|MuSR (0-shot) | 14.10|
|MMLU-PRO (5-shot) | 32.85|
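As a quick sanity check (not part of the leaderboard tooling), the reported Average is the unweighted mean of the six benchmark scores:

```python
# Benchmark scores from the table above.
scores = {
    "IFEval (0-Shot)": 47.97,
    "BBH (3-Shot)": 32.77,
    "MATH Lvl 5 (4-Shot)": 21.98,
    "GPQA (0-shot)": 11.74,
    "MuSR (0-shot)": 14.10,
    "MMLU-PRO (5-shot)": 32.85,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 26.90
```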