---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- zelk12/MT1-GB-gemma-2-9B
- zelk12/MT1-IMMMU-gemma-2-9B
model-index:
- name: MT1-gemma-2-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 79.47
      name: strict accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 44.16
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 13.37
      name: exact match
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.75
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.16
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.31
      name: accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-gemma-2-9B
      name: Open LLM Leaderboard
---
# MT1-gemma-2-9B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with [zelk12/MT1-GB-gemma-2-9B](https://huggingface.co./zelk12/MT1-GB-gemma-2-9B) as the base model. SLERP interpolates between the two parents' weights along an arc of a hypersphere rather than along a straight line, which tends to preserve each tensor's geometry better than plain linear averaging; see the sketch below.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-GB-gemma-2-9B](https://huggingface.co./zelk12/MT1-GB-gemma-2-9B)
* [zelk12/MT1-IMMMU-gemma-2-9B](https://huggingface.co./zelk12/MT1-IMMMU-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: zelk12/MT1-GB-gemma-2-9B
  - model: zelk12/MT1-IMMMU-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT1-GB-gemma-2-9B
dtype: bfloat16
parameters:
  t: 0.5
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_zelk12__MT1-gemma-2-9B).
| Metric |Value|
|-------------------|----:|
|Avg. |33.37|
|IFEval (0-Shot) |79.47|
|BBH (3-Shot) |44.16|
|MATH Lvl 5 (4-Shot)|13.37|
|GPQA (0-shot) |12.75|
|MuSR (0-shot) |13.16|
|MMLU-PRO (5-shot) |37.31|