Adding Evaluation Results

#1 · opened by prithivMLmods
Files changed (1)
  1. README.md +114 -0
README.md CHANGED
@@ -7,6 +7,105 @@ library_name: transformers
 tags:
 - mergekit
 - merge
+model-index:
+- name: Calcium-Opus-14B-Merge
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 49.49
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Merge
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 46.77
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Merge
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 33.08
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Merge
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 16.11
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Merge
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 20.93
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Merge
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 48.4
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FCalcium-Opus-14B-Merge
+      name: Open LLM Leaderboard
 ---
 # **Calcium-Opus-14B-Merge**

@@ -40,3 +139,18 @@ parameters:
 dtype: bfloat16
 tokenizer_source: "Qwen/Qwen2.5-14B-Instruct"
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Calcium-Opus-14B-Merge-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FCalcium-Opus-14B-Merge&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric              | Value (%) |
+|---------------------|----------:|
+| **Average**         |     35.80 |
+| IFEval (0-Shot)     |     49.49 |
+| BBH (3-Shot)        |     46.77 |
+| MATH Lvl 5 (4-Shot) |     33.08 |
+| GPQA (0-shot)       |     16.11 |
+| MuSR (0-shot)       |     20.93 |
+| MMLU-PRO (5-shot)   |     48.40 |
+
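
For reference, the `model-index` block added above is what makes these scores machine-readable on the Hub: `huggingface_hub` parses the card's YAML front matter into structured `EvalResult` objects. Below is a minimal sketch (not part of this PR) that reads the merged card and recomputes the table's Average, which is the unweighted mean of the six benchmark scores; it assumes `huggingface_hub` is installed and the PR has been merged into the repo.

```python
from huggingface_hub import ModelCard

# Load the model card; the YAML front matter (including the
# model-index block added in this PR) is parsed automatically.
card = ModelCard.load("prithivMLmods/Calcium-Opus-14B-Merge")

# Each model-index entry is exposed as a structured EvalResult.
for r in card.data.eval_results:
    print(f"{r.dataset_name:<20} {r.metric_type:<50} {r.metric_value}")

# The leaderboard "Average" is the unweighted mean of the six scores.
avg = sum(r.metric_value for r in card.data.eval_results) / len(card.data.eval_results)
print(f"Average: {avg:.2f}")  # ~35.80, matching the table above
```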