# CodeLlama-7b-hf

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard details page for this model.

| Metric | Value |
|---|---|
| Avg. | 34.86 |
| ARC (25-shot) | 39.85 |
| HellaSwag (10-shot) | 59.58 |
| MMLU (5-shot) | 30.47 |
| TruthfulQA (0-shot) | 38.62 |
| Winogrande (5-shot) | 64.88 |
| GSM8K (5-shot) | 5.46 |
| DROP (3-shot) | 5.17 |
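
The reported average is the unweighted arithmetic mean of the seven benchmark scores above. A minimal sanity-check sketch (the `scores` dict is just the table restated, not an official API):

```python
# Unweighted mean of the seven benchmark scores from the table above.
scores = {
    "ARC (25-shot)": 39.85,
    "HellaSwag (10-shot)": 59.58,
    "MMLU (5-shot)": 30.47,
    "TruthfulQA (0-shot)": 38.62,
    "Winogrande (5-shot)": 64.88,
    "GSM8K (5-shot)": 5.46,
    "DROP (3-shot)": 5.17,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. {avg:.2f}")  # -> Avg. 34.86
```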