---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- flammenai/Mahou-1.5-mistral-nemo-12B
- nbeerbower/Mistral-Nemo-12B-abliterated-LORA
model-index:
- name: Mahou-1.5-mistral-nemo-12B-lorablated
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 68.25
      name: strict accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 36.08
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.29
      name: exact match
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.91
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 16.55
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.6
      name: accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
      name: Open LLM Leaderboard
---
# Mahou-1.5-mistral-nemo-12B-lorablated
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [flammenai/Mahou-1.5-mistral-nemo-12B](https://huggingface.co./flammenai/Mahou-1.5-mistral-nemo-12B) + [nbeerbower/Mistral-Nemo-12B-abliterated-LORA](https://huggingface.co./nbeerbower/Mistral-Nemo-12B-abliterated-LORA) (the base model with the LoRA applied) as the base.
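Conceptually, task arithmetic adds weighted parameter deltas ("task vectors") to the base weights: θ_merged = θ_base + Σᵢ wᵢ·(θᵢ − θ_base). A minimal sketch of the idea in plain Python (illustrative only; the function name is hypothetical and this is not mergekit's actual implementation):

```python
def task_arithmetic(base, tuned_models, weights):
    """Merge by adding weighted task vectors (tuned - base) onto the base.

    base: flattened base parameters (list of floats)
    tuned_models: one parameter list per fine-tuned model, same shape as base
    weights: one scalar weight per tuned model
    """
    merged = list(base)
    for tuned, w in zip(tuned_models, weights):
        for i, (t, b) in enumerate(zip(tuned, base)):
            merged[i] += w * (t - b)  # add the weighted task vector
    return merged

# With a single tuned model and weight 1.0 (as in this merge's config),
# the result is exactly the tuned model's parameters.
print(task_arithmetic([1.0, 2.0], [[1.5, 2.5]], [1.0]))  # -> [1.5, 2.5]
```

With `weight: 1.0` and a single source, as in the configuration below, task arithmetic reduces to taking the LoRA-applied model's weights directly.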
### Models Merged
No additional standalone models were merged; the configuration applies the [nbeerbower/Mistral-Nemo-12B-abliterated-LORA](https://huggingface.co./nbeerbower/Mistral-Nemo-12B-abliterated-LORA) adapter directly to the base model.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 40]
    model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
    parameters:
      weight: 1.0
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_nbeerbower__Mahou-1.5-mistral-nemo-12B-lorablated).
| Metric |Value|
|-------------------|----:|
|Avg. |26.45|
|IFEval (0-Shot) |68.25|
|BBH (3-Shot) |36.08|
|MATH Lvl 5 (4-Shot)| 5.29|
|GPQA (0-shot) | 3.91|
|MuSR (0-shot) |16.55|
|MMLU-PRO (5-shot) |28.60|
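The reported average is consistent with an unweighted mean of the six benchmark scores (an assumption about the leaderboard's aggregation, but the arithmetic checks out):

```python
# Scores from the table above; the average is their plain mean.
scores = {
    "IFEval (0-Shot)": 68.25,
    "BBH (3-Shot)": 36.08,
    "MATH Lvl 5 (4-Shot)": 5.29,
    "GPQA (0-shot)": 3.91,
    "MuSR (0-shot)": 16.55,
    "MMLU-PRO (5-shot)": 28.60,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # -> 26.45
```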