CHANGELOG_TEXT = f"""
## [2023-06-19]
- Added model type column
- Hid revision and 8bit columns since all models are the same atm
## [2023-06-16]
- Refactored code base
- Added new columns: number of parameters, hub likes, license
## [2023-06-13]
- Adjust description for TruthfulQA
## [2023-06-12]
- Add Human & GPT-4 Evaluations
## [2023-06-05]
- Increase concurrent thread count to 40
- Search models on ENTER
## [2023-06-02]
- Add a typeahead search bar
- Use webhooks to automatically spawn a new Space when someone opens a PR
- Start recording `submitted_time` for eval requests
- Limit AutoEvalColumn max-width
## [2023-05-30]
- Add a citation button
- Simplify Gradio layout
## [2023-05-29]
- Auto-restart every hour for the latest results
- Sync with the internal version (minor style changes)
## [2023-05-24]
- Add a baseline that has 25.0 for all values
- Add CHANGELOG
## [2023-05-23]
- Fix a CSS issue that made the leaderboard hard to read in dark mode
## [2023-05-22]
- Display a success/error message after submitting evaluation requests
- Reject duplicate submissions
- Do not display models whose results are incomplete
- Display separate queues for jobs with RUNNING, PENDING, and FINISHED status
## [2023-05-15]
- Fix a typo: from "TruthQA" to "TruthfulQA"
## [2023-05-10]
- Fix a bug that prevented auto-refresh
## [2023-05-10]
- Release the leaderboard to public
"""
TITLE = """<h1 align="center" id="space-title">🤗 Open LLM Leaderboard</h1>"""
INTRODUCTION_TEXT = f"""
📐 The 🤗 Open LLM Leaderboard aims to track, rank and evaluate LLMs and chatbots as they are released.
🤗 Anyone from the community can submit a model for automated evaluation on the 🤗 GPU cluster, as long as it is a 🤗 Transformers model with weights on the Hub. We also support evaluation of models with delta-weights for non-commercially licensed models, such as the original LLaMa release.
Other cool benchmarks for LLMs are developed at HuggingFace, go check them out: 🙋🤖 [human and GPT4 evals](https://huggingface.co/spaces/HuggingFaceH4/human_eval_llm_leaderboard), 🖥️ [performance benchmarks](https://huggingface.co/spaces/optimum/llm-perf-leaderboard)
🟢: Base pretrained model – 🔶: Instruction finetuned model – 🟦: Model finetuned with RL (read more details in "About" tab)
"""
LLM_BENCHMARKS_TEXT = f"""
# Context
With the plethora of large language models (LLMs) and chatbots being released week upon week, often with grandiose claims about their performance, it can be hard to identify the genuine progress made by the open-source community and which model is the current state of the art.
📈 We evaluate models on 4 key benchmarks from the <a href="https://github.com/EleutherAI/lm-evaluation-harness" target="_blank"> Eleuther AI Language Model Evaluation Harness </a>, a unified framework to test generative language models on a large number of different evaluation tasks.
- <a href="https://arxiv.org/abs/1803.05457" target="_blank"> AI2 Reasoning Challenge </a> (25-shot) - a set of grade-school science questions.
- <a href="https://arxiv.org/abs/1905.07830" target="_blank"> HellaSwag </a> (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- <a href="https://arxiv.org/abs/2009.03300" target="_blank"> MMLU </a> (5-shot) - a test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- <a href="https://arxiv.org/abs/2109.07958" target="_blank"> TruthfulQA </a> (0-shot) - a test to measure a modelβs propensity to reproduce falsehoods commonly found online. Note: TruthfulQA in the Harness is actually a minima a 6-shots task, as it is prepended by 6 examples systematically, even when launched using 0 for the number of few-shot examples.
For all these evaluations, a higher score is a better score.
We chose these benchmarks as they test a variety of reasoning and general knowledge across a wide variety of fields in 0-shot and few-shot settings.
# Some good practices before submitting a model
### 1) Make sure you can load your model and tokenizer using AutoClasses:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

revision = "main"  # or the branch/commit hash you plan to submit
config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModel.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
```
If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.
Note: make sure your model is public!
Note: if your model needs `trust_remote_code=True`, we do not support this option yet, but we are working on adding it, stay posted!
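On the visibility note above: here is a minimal sketch for flipping a private repo to public with `huggingface_hub`, assuming you are authenticated as the repo owner:
```python
# A minimal sketch: make a Hub model repo public (requires being logged
# in as the repo owner, e.g. via `huggingface-cli login`);
# "your model name" is a placeholder for your repo id.
from huggingface_hub import update_repo_visibility

update_repo_visibility(repo_id="your model name", private=False)
```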
### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of weights of your model to the `Extended Viewer`!
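For instance, a minimal sketch of the conversion, assuming your model already loads with the AutoClasses above:
```python
# A minimal sketch: re-save (or re-push) the weights with safetensors
# serialization enabled; "your model name" is a placeholder.
from transformers import AutoModel

model = AutoModel.from_pretrained("your model name")
model.save_pretrained("path/to/local/output", safe_serialization=True)
# or push the converted weights straight to your Hub repo:
model.push_to_hub("your model name", safe_serialization=True)
```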
### 3) Make sure your model has an open license!
This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model π€
### 4) Fill out your model card
When we add extra information about models to the leaderboard, it is automatically taken from the model card.
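As an illustration, here is a hedged sketch that fills in the card programmatically with `huggingface_hub`'s `ModelCard` (the metadata fields shown are examples, not requirements):
```python
# A hedged sketch: write a model card with license metadata and push it;
# "your model name" is a placeholder for your repo id.
from huggingface_hub import ModelCard

content = (
    "---\n"
    "license: apache-2.0\n"
    "language: en\n"
    "---\n"
    "# Model description\n"
    "Describe the architecture, training data, and intended use here.\n"
)
ModelCard(content).push_to_hub("your model name")
```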
# Reproducibility and details
### Details and logs
You can find:
- detailed numerical results in the `results` Hugging Face dataset: https://huggingface.co/datasets/open-llm-leaderboard/results
- details on the input/outputs for the models in the `details` Hugging Face dataset: https://huggingface.co/datasets/open-llm-leaderboard/details
- community queries and running status in the `requests` Hugging Face dataset: https://huggingface.co/datasets/open-llm-leaderboard/requests
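For example, a minimal sketch for pulling the raw files of one of these datasets locally (the file layout inside the repos may change over time):
```python
# A minimal sketch: download the raw files of the `results` dataset.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="open-llm-leaderboard/results",
    repo_type="dataset",
)
print(local_dir)  # local path containing the per-model result files
```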
### Reproducibility
To reproduce our results, here are the commands you can run, using [this version](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463) of the Eleuther AI Harness:
`python main.py --model=hf-causal --model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>" --tasks=<task_list> --num_fewshot=<n_few_shot> --batch_size=2 --output_path=<output_path>`
The total batch size we get for models which fit on one A100 node is 16 (8 GPUs * 2). If you don't use parallelism, adapt your batch size to fit.
*You can expect results to vary slightly for different batch sizes because of padding.*
The tasks and few-shot parameters are (a programmatic sketch follows the list):
- ARC: 25-shot, *arc_challenge* (`acc_norm`)
- HellaSwag: 10-shot, *hellaswag* (`acc_norm`)
- TruthfulQA: 0-shot, *truthfulqa_mc* (`mc2`)
- MMLU: 5-shot, *hendrycksTest-abstract_algebra,hendrycksTest-anatomy,hendrycksTest-astronomy,hendrycksTest-business_ethics,hendrycksTest-clinical_knowledge,hendrycksTest-college_biology,hendrycksTest-college_chemistry,hendrycksTest-college_computer_science,hendrycksTest-college_mathematics,hendrycksTest-college_medicine,hendrycksTest-college_physics,hendrycksTest-computer_security,hendrycksTest-conceptual_physics,hendrycksTest-econometrics,hendrycksTest-electrical_engineering,hendrycksTest-elementary_mathematics,hendrycksTest-formal_logic,hendrycksTest-global_facts,hendrycksTest-high_school_biology,hendrycksTest-high_school_chemistry,hendrycksTest-high_school_computer_science,hendrycksTest-high_school_european_history,hendrycksTest-high_school_geography,hendrycksTest-high_school_government_and_politics,hendrycksTest-high_school_macroeconomics,hendrycksTest-high_school_mathematics,hendrycksTest-high_school_microeconomics,hendrycksTest-high_school_physics,hendrycksTest-high_school_psychology,hendrycksTest-high_school_statistics,hendrycksTest-high_school_us_history,hendrycksTest-high_school_world_history,hendrycksTest-human_aging,hendrycksTest-human_sexuality,hendrycksTest-international_law,hendrycksTest-jurisprudence,hendrycksTest-logical_fallacies,hendrycksTest-machine_learning,hendrycksTest-management,hendrycksTest-marketing,hendrycksTest-medical_genetics,hendrycksTest-miscellaneous,hendrycksTest-moral_disputes,hendrycksTest-moral_scenarios,hendrycksTest-nutrition,hendrycksTest-philosophy,hendrycksTest-prehistory,hendrycksTest-professional_accounting,hendrycksTest-professional_law,hendrycksTest-professional_medicine,hendrycksTest-professional_psychology,hendrycksTest-public_relations,hendrycksTest-security_studies,hendrycksTest-sociology,hendrycksTest-us_foreign_policy,hendrycksTest-virology,hendrycksTest-world_religions* (`acc` of `all`)
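As a rough illustration, here is a hedged sketch of the equivalent programmatic call for a single task, using the pinned harness version above; `simple_evaluate` and its argument names follow that commit and may differ in later releases:
```python
# A hedged sketch: run one benchmark programmatically with the pinned
# version of the Eleuther AI Harness; the <...> placeholders are the
# same as in the command above.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=2,
)
print(results["results"])
```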
### Quantization
To get more information about quantization, see:
- 8 bits: [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration), [paper](https://arxiv.org/abs/2208.07339)
- 4 bits: [blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes), [paper](https://arxiv.org/abs/2305.14314)
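For reference, a minimal sketch of loading a model in 8-bit or 4-bit through `transformers` (flags as of mid-2023; newer releases route these options through `BitsAndBytesConfig`, and `bitsandbytes` must be installed):
```python
# A minimal sketch: load a model with bitsandbytes quantization;
# "your model name" is a placeholder and a CUDA GPU is assumed.
from transformers import AutoModelForCausalLM

model_8bit = AutoModelForCausalLM.from_pretrained(
    "your model name", load_in_8bit=True, device_map="auto"
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "your model name", load_in_4bit=True, device_map="auto"
)
```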
### Icons
🟢 means that the model is pretrained,
🔶 that it is finetuned,
🟦 that it was trained with RL.
If there is no icon, we have not uploaded the information on the model yet; feel free to open an issue with the model information!
# In case of model failure
If your model is displayed in the `FAILED` category, its execution stopped.
Make sure you have followed the above steps first.
If everything is done, check that you can launch the EleutherAI Harness on your model locally, using the above command without modification (you can add `--limit` to restrict the number of examples per task).
"""
EVALUATION_QUEUE_TEXT = f"""
# Evaluation Queue for the 🤗 Open LLM Leaderboard
These models will be automatically evaluated on the 🤗 cluster.
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
@misc{open-llm-leaderboard,
author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
title = {Open LLM Leaderboard},
year = {2023},
publisher = {Hugging Face},
howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}
@misc{clark2018think,
title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
year={2018},
eprint={1803.05457},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
@misc{zellers2019hellaswag,
title={HellaSwag: Can a Machine Really Finish Your Sentence?},
author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
year={2019},
eprint={1905.07830},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{hendrycks2021measuring,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
year={2021},
eprint={2009.03300},
archivePrefix={arXiv},
primaryClass={cs.CY}
}
@misc{lin2022truthfulqa,
title={TruthfulQA: Measuring How Models Mimic Human Falsehoods},
author={Stephanie Lin and Jacob Hilton and Owain Evans},
year={2022},
eprint={2109.07958},
archivePrefix={arXiv},
primaryClass={cs.CL}
}""" |