---
license: apache-2.0
datasets:
- pankajmathur/WizardLM_Orca
language:
- en
library_name: transformers
---
Trained for 2 epochs on pankajmathur's WizardLM_Orca dataset.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
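A minimal sketch of loading the model with `transformers` and applying the prompt template above. The repo id `harborwater/wizard-orca-3b` is inferred from the leaderboard details link below; adjust it if your copy lives under a different name.

```python
# Minimal usage sketch, assuming the repo id "harborwater/wizard-orca-3b"
# (inferred from the leaderboard link; not confirmed by the card itself).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "harborwater/wizard-orca-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt exactly as in the template, ending with a newline after
# "### RESPONSE:" so the model starts generating its answer.
prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```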
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_harborwater__wizard-orca-3b).
| Metric               | Value |
|----------------------|-------|
| Avg.                 | 35.93 |
| ARC (25-shot)        | 41.72 |
| HellaSwag (10-shot)  | 71.78 |
| MMLU (5-shot)        | 24.49 |
| TruthfulQA (0-shot)  | 40.04 |
| Winogrande (5-shot)  | 66.93 |
| GSM8K (5-shot)       | 1.06  |
| DROP (3-shot)        | 5.5   |