---
license: apache-2.0
---

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 50.97 |
| ARC (25-shot)        | 62.03 |
| HellaSwag (10-shot)  | 83.8  |
| MMLU (5-shot)        | 58.39 |
| TruthfulQA (0-shot)  | 49.92 |
| Winogrande (5-shot)  | 77.27 |
| GSM8K (5-shot)       | 12.43 |
| DROP (3-shot)        | 12.96 |
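
These scores come from the Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness with the few-shot settings listed above. Below is a minimal sketch of rechecking one benchmark locally; it assumes a recent harness version, and the Hub repo path is a placeholder since the full path is not stated in this card.

```python
# Sketch: reproduce one leaderboard benchmark with lm-evaluation-harness.
# The repo id below is a placeholder -- substitute this model's actual Hub path.
from lm_eval import evaluator

model_repo = "<org-or-user>/LLaMA_2_13B_SFT_v0"  # placeholder path

results = evaluator.simple_evaluate(
    model="hf",                                  # Hugging Face causal-LM backend
    model_args=f"pretrained={model_repo},dtype=float16",
    tasks=["arc_challenge"],                     # ARC is scored 25-shot on the leaderboard
    num_fewshot=25,
    batch_size="auto",
)

# Metric key names vary by harness version (e.g. "acc_norm" vs "acc_norm,none").
print(results["results"]["arc_challenge"])
```

The same pattern applies to the other rows by swapping the task name and `num_fewshot` value (e.g. `hellaswag` at 10-shot, `gsm8k` at 5-shot); exact task identifiers depend on the harness version installed.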