Update README.md
README.md CHANGED
@@ -18,11 +18,12 @@ pipeline_tag: text-generation
 
 | Metric |lgaalves/gpt2-xl-camel-ai-physics |gpt2-xl (base) |
 |-----------------------|-------|-------|
-| Avg. |
-| ARC (25-shot) |
-| HellaSwag (10-shot) |
-| MMLU (5-shot) |
-| TruthfulQA (0-shot) |
+| Avg. | 36.51 | **36.66** |
+| ARC (25-shot) | 29.52 | **30.29** |
+| HellaSwag (10-shot) | 50.62 | **51.38** |
+| MMLU (5-shot) | **26.79** | 26.43 |
+| TruthfulQA (0-shot) | **39.12** | 38.54 |
+
 
 We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
 
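As a rough illustration of how such a run might look (this is not the model card's pinned reproduction recipe, and the HuggingFace LLM Leaderboard pins an older harness version whose task names and CLI differ), a recent lm-evaluation-harness release exposes a Python entry point that can score the model on one of the tasks in the table:

```python
# Hypothetical sketch: score the fine-tuned model on ARC (25-shot) with a
# recent lm-evaluation-harness release. Task names, defaults, and resulting
# scores may differ from the pinned Leaderboard setup used for the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # transformers-backed causal LM
    model_args="pretrained=lgaalves/gpt2-xl-camel-ai-physics",
    tasks=["arc_challenge"],   # HellaSwag, MMLU, and TruthfulQA run analogously
    num_fewshot=25,            # 25-shot, matching the ARC row in the table
    batch_size=8,
)

# Per-task metrics (accuracy, normalized accuracy, stderr, ...)
print(results["results"]["arc_challenge"])
```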