Update README.md
README.md
CHANGED
@@ -16,28 +16,3 @@ Unlike version 1 this model has no issues at fp16 or any quantizations.
 The model that was used to create this one is linked below:
 
 https://huggingface.co/meta-llama/Meta-Llama-3-8B
-
-
- Llama-3-11.5B-V2
-
- | Metric                           | Value |
- |----------------------------------|------:|
- | Avg.                             | 66.89 |
- | AI2 Reasoning Challenge (25-Shot)| 57.68 |
- | HellaSwag (10-Shot)              | 78.59 |
- | MMLU (5-Shot)                    | 65.39 |
- | TruthfulQA (0-shot)              | 35.86 |
- | Winogrande (5-shot)              | 74.74 |
- | GSM8k (5-shot)                   | 69.37 |
-
- Original Meta-Llama-3-8B
-
- | Metric                           | Value |
- |----------------------------------|------:|
- | Avg.                             | 62.87 |
- | AI2 Reasoning Challenge (25-Shot)| 59.47 |
- | HellaSwag (10-Shot)              | 82.09 |
- | MMLU (5-Shot)                    | 66.69 |
- | TruthfulQA (0-shot)              | 43.90 |
- | Winogrande (5-shot)              | 77.35 |
- | GSM8k (5-shot)                   | 45.34 |