You can try the code from [this repo](https://github.com/DAMO-NLP-SG/MT-LLaMA).
## Zero-shot Evaluation
We primarily follow the protocols of [Bigscience T0](https://openreview.net/forum?id=9Vrb9D0WI4) to assess the generalization capability of our Multi-task LLaMA to: (1) _**Unseen Datasets**_ (i.e., datasets from seen tasks); (2) _**Unseen Tasks**_.
#### Prompt Format
Extractive QA:
1. XQuAD, TyDiQA, MLQA, SQuAD
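The exact prompt templates ship with the MT-LLaMA repo and are not reproduced in this excerpt. As a hypothetical illustration only (the function name and wording below are our own, not the repo's), an extractive-QA prompt for these datasets might be assembled like this:

```python
# Hypothetical sketch -- NOT the template used by MT-LLaMA; see the repo
# for the actual prompt format.
def build_extractive_qa_prompt(context: str, question: str) -> str:
    """Compose a generic extractive-QA prompt from a passage and a question."""
    return (
        "Answer the question based on the passage.\n\n"
        f"Passage: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_extractive_qa_prompt(
    "LLaMA was released by Meta AI in February 2023.",
    "Who released LLaMA?",
)
print(prompt)
```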
Natural Language Inference:
| MT-LLaMA-7b | 88.0 | 54.9 | 52.2 | 49.6 | 79.1 |
## Acknowledgement
* Our training code is largely borrowed from [FastChat](https://github.com/lm-sys/FastChat).
* We are also grateful for the efforts behind [LLaMA](https://github.com/facebookresearch/llama) (from FAIR) and [T0](https://github.com/bigscience-workshop/t-zero) (from BigScience), which serve as the foundation of our work.