Update README.md
README.md
CHANGED
@@ -18,12 +18,11 @@ pipeline_tag: text-generation

| Metric | GPT-2-dolly | GPT-2 (base) |
|-----------------------|-------|-------|
-| Avg. |
-| ARC (25-shot) |
-| HellaSwag (10-shot) | 30.
-| MMLU (5-shot) |
-| TruthfulQA (0-shot) | **
-
+| Avg. | **30.91** | 29.99 |
+| ARC (25-shot) | **22.70** | 21.84 |
+| HellaSwag (10-shot) | 30.15 | **31.6** |
+| MMLU (5-shot) | 25.81 | **25.86** |
+| TruthfulQA (0-shot) | **44.97** | 40.67 |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
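
For reference, a minimal sketch of what one such benchmark run could look like using a recent release of the harness. This is an assumption-laden illustration, not the commit's own instructions: the leaderboard pins a specific harness version whose flags and metric keys may differ, and `gpt2-dolly` below is a placeholder model id, not necessarily the actual repo name.

```python
# Hypothetical sketch: score one leaderboard-style task with lm-evaluation-harness.
# Assumes a recent `lm-eval` release (pip install lm-eval); the leaderboard pins an
# older harness version, so exact flags and metric keys may differ there.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                          # Hugging Face transformers backend
    model_args="pretrained=gpt2-dolly",  # placeholder model id (assumption)
    tasks=["arc_challenge"],             # ARC is scored 25-shot in the table above
    num_fewshot=25,
)

# Per-task metrics (e.g. accuracy for ARC) live under the "results" key.
print(results["results"]["arc_challenge"])
```

With a recent release, the equivalent command-line invocation would be `lm_eval --model hf --model_args pretrained=gpt2-dolly --tasks arc_challenge --num_fewshot 25`; the remaining tasks (HellaSwag 10-shot, MMLU 5-shot, TruthfulQA 0-shot) follow the same pattern with their respective few-shot counts.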