leaderboard-pr-bot committed
Commit e566288
1 Parent(s): 54ddc56

Adding Evaluation Results


This is an automated PR created with https://huggingface.co./spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co./spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
README.md +14 -1
README.md CHANGED
@@ -28,4 +28,17 @@ I am purposingly leaving this license ambiguous (other than the fact you must co
 
  Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
 
- Either way, by using this model, you agree to completely indemnify me.
+ Either way, by using this model, you agree to completely indemnify me.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-1.4.1)
+
+ | Metric                | Value                     |
+ |-----------------------|---------------------------|
+ | Avg.                  | 43.46                     |
+ | ARC (25-shot)         | 55.12                     |
+ | HellaSwag (10-shot)   | 79.6                      |
+ | MMLU (5-shot)         | 45.17                     |
+ | TruthfulQA (0-shot)   | 40.29                     |
+ | Winogrande (5-shot)   | 74.27                     |
+ | GSM8K (5-shot)        | 2.81                      |
+ | DROP (3-shot)         | 6.96                      |
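
As a convenience, here is a minimal sketch of pulling the per-task details dataset linked above with the `datasets` library. The config name `harness_winogrande_5` and the `latest` split are assumptions based on how Open LLM Leaderboard details datasets are typically organized; check the dataset card for the configs actually published for this model.

```python
# Minimal sketch (not part of this PR): load the detailed leaderboard results.
# The config name and split below are assumptions; adjust to the configs
# listed on the dataset page.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-1.4.1",
    "harness_winogrande_5",  # assumed config name for the Winogrande run
    split="latest",          # assumed split holding the most recent results
)
print(details)
```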