Clémentine committed
Commit eaace79 · 1 parent: bb149ba

Simplify About
src/display/about.py CHANGED (+6 -8)
@@ -10,15 +10,17 @@ The leaderboard's backend runs the great [Eleuther AI Language Model Evaluation
 """
 
 LLM_BENCHMARKS_TEXT = f"""
+Useful links: [FAQ](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/179), [Community resources](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/174), [Collection of best models](https://huggingface.co/collections/open-llm-leaderboard/llm-leaderboard-best-models-652d6c7965a4619fb5c27a03).
+
 # Context
 With the plethora of large language models (LLMs) and chatbots being released week upon week, often with grandiose claims of their performance, it can be hard to filter out the genuine progress that is being made by the open-source community and which model is the current state of the art.
 
 ## Icons
-{ModelType.PT.to_str(" : ")} model: new, base models, trained on a given corpora
-{ModelType.FT.to_str(" : ")} model: pretrained models finetuned on more data
+- {ModelType.PT.to_str(" : ")} model: new, base models, trained on a given corpora
+- {ModelType.FT.to_str(" : ")} model: pretrained models finetuned on more data
 Specific fine-tune subcategories (more adapted to chat):
-{ModelType.IFT.to_str(" : ")} model: instruction fine-tunes, which are model fine-tuned specifically on datasets of task instruction
-{ModelType.RL.to_str(" : ")} model: reinforcement fine-tunes, which usually change the model loss a bit with an added policy.
+- {ModelType.IFT.to_str(" : ")} model: instruction fine-tunes, which are model fine-tuned specifically on datasets of task instruction
+- {ModelType.RL.to_str(" : ")} model: reinforcement fine-tunes, which usually change the model loss a bit with an added policy.
 If there is no icon, we have not uploaded the information on the model yet, feel free to open an issue with the model information!
 
 "Flagged" indicates that this model has been flagged by the community, and should probably be ignored! Clicking the link will redirect you to the discussion about the model.
@@ -71,10 +73,6 @@ Side note on the baseline scores:
 To get more information about quantization, see:
 - 8 bits: [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration), [paper](https://arxiv.org/abs/2208.07339)
 - 4 bits: [blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes), [paper](https://arxiv.org/abs/2305.14314)
-
-## More resources
-If you still have questions, you can check our FAQ [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/179)!
-We also gather cool resources from the community, other teams, and other labs [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/174)!
 """
 
 EVALUATION_QUEUE_TEXT = """
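The first hunk's f-string interpolates `ModelType.PT.to_str(" : ")` and friends to render each icon with its label. As a rough sketch only, assuming a `ModelType` enum whose members carry an icon and a human-readable name (the emoji, labels, and the `ModelDetails` helper below are illustrative assumptions, not the leaderboard's actual code), the interpolation could be backed by something like this:

```python
# Illustrative sketch only: a ModelType enum compatible with the
# `{ModelType.PT.to_str(" : ")}` calls in LLM_BENCHMARKS_TEXT above.
# Emoji, labels, and the ModelDetails helper are assumptions, not the
# leaderboard's actual implementation.
from dataclasses import dataclass
from enum import Enum


@dataclass(frozen=True)
class ModelDetails:
    name: str    # human-readable category name
    symbol: str  # icon shown in the leaderboard table


class ModelType(Enum):
    PT = ModelDetails(name="pretrained", symbol="🟢")
    FT = ModelDetails(name="fine-tuned", symbol="🔶")
    IFT = ModelDetails(name="instruction-tuned", symbol="⭕")
    RL = ModelDetails(name="RL-tuned", symbol="🟦")

    def to_str(self, separator: str = " ") -> str:
        # e.g. ModelType.PT.to_str(" : ") -> "🟢 : pretrained"
        return f"{self.value.symbol}{separator}{self.value.name}"
```

With a definition along these lines, each icon bullet in `LLM_BENCHMARKS_TEXT` would render as, for example, "🟢 : pretrained model: new, base models, trained on a given corpora".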
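The quantization pointers kept in the second hunk reference the bitsandbytes integrations in `transformers`. As a companion illustration (not part of this commit), here is a minimal sketch of loading a causal LM in 4-bit the way those blog posts describe; the model id is a placeholder:

```python
# Illustrative sketch: 4-bit (NF4) loading with transformers + bitsandbytes,
# as covered by the blog posts linked in the diff. For 8-bit, pass
# BitsAndBytesConfig(load_in_8bit=True) instead.
# Requires: pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "org/model-name"  # placeholder, not a specific leaderboard model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # dispatch layers across available devices
)
```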