Update README.md (#2)
Update README.md (e45f5f86018e113f6cf2b3848718da98d79591e8)
Co-authored-by: Simoes <[email protected]>
README.md (changed)
@@ -19,18 +19,20 @@ We open-source Orca 2 to encourage further research on the development, evaluati
## What is Orca 2’s intended use(s)?

+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for
+ building better frontier models.

- ## How was Orca evaluated?
+ ## How was Orca 2 evaluated?

+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
+ to Section 6 and Appendix in the paper for details on evaluations.

## Model Details

+ Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
+ All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found at: LINK to Tech Report

+ Please refer to the LLaMA-2 technical report for details on the model architecture.

## License

@@ -41,8 +43,8 @@ Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/ll
## Bias, Risks, and Limitations

Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
- common limitations of other large language models or limitation
+ common limitations of other large language models or limitations caused by its training process,
+ including:

**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could