Update README.md
README.md CHANGED
@@ -23,6 +23,89 @@ Spaetzle-v85-7b is a merge of the following models using [LazyMergekit](https://
* [cstr/Spaetzle-v79-7b](https://huggingface.co/cstr/Spaetzle-v79-7b)
* [cstr/Spaetzle-v71-7b](https://huggingface.co/cstr/Spaetzle-v71-7b)
## Evaluation

EQ-Bench (v2_de): 65.32, with 171.0 of 171 answers parseable.

| Model                                                        |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|--------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Spaetzle-v85-7b](https://huggingface.co/cstr/Spaetzle-v85-7b)| 44.35| 75.99| 67.23| 46.55| 58.53|
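The overall average in the final column is the unweighted mean of the four benchmark averages:

```python
# Overall score = mean of the four benchmark averages from the table above.
scores = {"AGIEval": 44.35, "GPT4All": 75.99, "TruthfulQA": 67.23, "Bigbench": 46.55}
print(f"{sum(scores.values()) / len(scores):.2f}")  # 58.53
```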
### AGIEval

| Task                          |Version| Metric |Value|   |Stderr|
|-------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat               |      0|acc     |23.23|±  |  2.65|
|                               |       |acc_norm|22.44|±  |  2.62|
|agieval_logiqa_en              |      0|acc     |37.33|±  |  1.90|
|                               |       |acc_norm|37.94|±  |  1.90|
|agieval_lsat_ar                |      0|acc     |25.22|±  |  2.87|
|                               |       |acc_norm|23.04|±  |  2.78|
|agieval_lsat_lr                |      0|acc     |49.41|±  |  2.22|
|                               |       |acc_norm|50.78|±  |  2.22|
|agieval_lsat_rc                |      0|acc     |64.68|±  |  2.92|
|                               |       |acc_norm|63.20|±  |  2.95|
|agieval_sat_en                 |      0|acc     |77.67|±  |  2.91|
|                               |       |acc_norm|78.16|±  |  2.89|
|agieval_sat_en_without_passage |      0|acc     |46.12|±  |  3.48|
|                               |       |acc_norm|45.15|±  |  3.48|
|agieval_sat_math               |      0|acc     |35.45|±  |  3.23|
|                               |       |acc_norm|34.09|±  |  3.20|

Average: 44.35%
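Each benchmark average is in turn the unweighted mean of its per-task scores, taking `acc_norm` where it is reported and `acc` otherwise (and `multiple_choice_grade` for the BigBench tasks). A quick check against the AGIEval table:

```python
# acc_norm values from the AGIEval table above; their mean reproduces 44.35.
agieval = [22.44, 37.94, 23.04, 50.78, 63.20, 78.16, 45.15, 34.09]
print(f"{sum(agieval) / len(agieval):.2f}")  # 44.35
```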
### GPT4All

| Task        |Version| Metric |Value|   |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge|      0|acc     |63.82|±  |  1.40|
|             |       |acc_norm|64.76|±  |  1.40|
|arc_easy     |      0|acc     |85.90|±  |  0.71|
|             |       |acc_norm|82.32|±  |  0.78|
|boolq        |      1|acc     |87.61|±  |  0.58|
|hellaswag    |      0|acc     |67.39|±  |  0.47|
|             |       |acc_norm|85.36|±  |  0.35|
|openbookqa   |      0|acc     |38.80|±  |  2.18|
|             |       |acc_norm|48.80|±  |  2.24|
|piqa         |      0|acc     |83.03|±  |  0.88|
|             |       |acc_norm|84.17|±  |  0.85|
|winogrande   |      0|acc     |78.93|±  |  1.15|

Average: 75.99%
### TruthfulQA

| Task        |Version|Metric|Value|   |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc|      1|mc1   |50.80|±  |  1.75|
|             |       |mc2   |67.23|±  |  1.49|

Average: 67.23%
### Bigbench

| Task                                            |Version|        Metric       |Value|   |Stderr|
|-------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement                        |      0|multiple_choice_grade|54.74|±  |  3.62|
|bigbench_date_understanding                      |      0|multiple_choice_grade|68.29|±  |  2.43|
|bigbench_disambiguation_qa                       |      0|multiple_choice_grade|39.53|±  |  3.05|
|bigbench_geometric_shapes                        |      0|multiple_choice_grade|22.28|±  |  2.20|
|                                                 |       |exact_str_match      |12.26|±  |  1.73|
|bigbench_logical_deduction_five_objects          |      0|multiple_choice_grade|32.80|±  |  2.10|
|bigbench_logical_deduction_seven_objects         |      0|multiple_choice_grade|23.00|±  |  1.59|
|bigbench_logical_deduction_three_objects         |      0|multiple_choice_grade|59.00|±  |  2.84|
|bigbench_movie_recommendation                    |      0|multiple_choice_grade|45.60|±  |  2.23|
|bigbench_navigate                                |      0|multiple_choice_grade|51.10|±  |  1.58|
|bigbench_reasoning_about_colored_objects         |      0|multiple_choice_grade|70.10|±  |  1.02|
|bigbench_ruin_names                              |      0|multiple_choice_grade|52.68|±  |  2.36|
|bigbench_salient_translation_error_detection     |      0|multiple_choice_grade|33.57|±  |  1.50|
|bigbench_snarks                                  |      0|multiple_choice_grade|71.27|±  |  3.37|
|bigbench_sports_understanding                    |      0|multiple_choice_grade|74.54|±  |  1.39|
|bigbench_temporal_sequences                      |      0|multiple_choice_grade|40.00|±  |  1.55|
|bigbench_tracking_shuffled_objects_five_objects  |      0|multiple_choice_grade|21.52|±  |  1.16|
|bigbench_tracking_shuffled_objects_seven_objects |      0|multiple_choice_grade|18.86|±  |  0.94|
|bigbench_tracking_shuffled_objects_three_objects |      0|multiple_choice_grade|59.00|±  |  2.84|

Average: 46.55%
Average score: 58.53%
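The per-task tables match the output format of EleutherAI's lm-evaluation-harness. As a non-authoritative sketch of a re-run for the GPT4All group, assuming a recent harness release (>= 0.4) where the `lm_eval.simple_evaluate` entry point and these task names are available:

```python
# Sketch only: assumes lm-evaluation-harness >= 0.4; task names and
# model_args are illustrative, not the exact setup used for the tables above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=cstr/Spaetzle-v85-7b,dtype=bfloat16",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "openbookqa", "piqa", "winogrande"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```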
## 🧩 Configuration

```yaml