update link
constants.py +1 -1
constants.py CHANGED
@@ -21,7 +21,7 @@ INTRODUCTION_TEXT= """
 > URIAL Bench tests the capacity of base LLMs for alignment without introducing the factors of fine-tuning (learning rate, data, etc.), which are hard to control for fair comparisons.
 Specifically, we use [URIAL](https://github.com/Re-Align/URIAL/tree/main/run_scripts/mt-bench#run-urial-inference) to align a base LLM, and evaluate its performance on MT-Bench.
 
-- [π URIAL](https://arxiv.org/abs/2312.01552) uses
+- [π URIAL](https://arxiv.org/abs/2312.01552) uses K=3 constant [examples](https://github.com/Re-Align/URIAL/blob/main/urial_prompts/inst_1k_v4.help.txt.md) to align BASE LLMs with in-context learning.
 - [π MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) is a small, curated benchmark with two turns of instruction following tasks in 10 domains.
 
 
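For context on what the updated line describes: URIAL aligns a base (untuned) LLM purely through in-context learning, prepending a fixed set of K=3 demonstration instruction-response pairs and a short preamble to every query. The sketch below illustrates that prompt construction only; the preamble text, the example pairs, and the `# Query:` / `# Answer:` markers are placeholder assumptions, not the actual template from the linked `urial_prompts/inst_1k_v4.help.txt.md` file.

```python
# Minimal sketch of URIAL-style in-context alignment (illustrative only).
# The real URIAL prompt template lives in the linked urial_prompts file;
# the preamble, markers, and example pairs below are made-up placeholders.

# K=3 constant (static) demonstration pairs, reused verbatim for every query.
URIAL_EXAMPLES = [
    ("What is the capital of France?",
     "The capital of France is Paris."),
    ("Give me three tips for staying focused.",
     "1. Remove distractions. 2. Work in short blocks. 3. Take regular breaks."),
    ("Explain photosynthesis in one sentence.",
     "Photosynthesis is the process by which plants convert sunlight, water, "
     "and carbon dioxide into sugar and oxygen."),
]

PREAMBLE = "Below are conversations between a user and a helpful, honest assistant.\n"


def build_urial_prompt(user_query: str) -> str:
    """Prepend the K constant examples to the user's query, so a BASE LLM
    continues the pattern and produces an aligned-style answer."""
    blocks = [PREAMBLE]
    for question, answer in URIAL_EXAMPLES:
        blocks.append(f"# Query:\n{question}\n\n# Answer:\n{answer}\n")
    # The base model is expected to complete the final "# Answer:" section.
    blocks.append(f"# Query:\n{user_query}\n\n# Answer:\n")
    return "\n".join(blocks)


if __name__ == "__main__":
    print(build_urial_prompt("How do I write a polite follow-up email?"))
```

Because the demonstrations are constant, no gradient updates or fine-tuning hyperparameters are involved, which is what lets URIAL Bench compare base LLMs on MT-Bench without the confounds the introduction text mentions.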