DachengZhang committed d35efa7 (parent: aae5291): Update README.md

README.md CHANGED
@@ -9,20 +9,27 @@ widget:
 pipeline_tag: text-generation
 ---
 
+[OrionStarAI/OrionStar-Yi-34B-Chat](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat/tree/main) with tensors renamed to match standard Llama modelling code.
+
+# Model Introduction
+
+- OrionStar-Yi-34B-Chat from OrionStarAI is based on the open-source Yi-34B model, fine-tuned on a high-quality corpus
+  of over 15 million sentences. OrionStar-Yi-34B-Chat aims to provide an excellent interactive experience for users in
+  the large-model community.
+
+- The Yi series models, open-sourced by the 01-ai team, have shown impressive performance on various benchmarks in
+  Chinese, English, and general domains. OrionStar-Yi-34B-Chat further explores the potential of Yi-34B. Through
+  extensive fine-tuning on a large, high-quality corpus, OrionStar-Yi-34B-Chat performs exceptionally well on
+  evaluation data. We strive to make it an outstanding open-source alternative in the ChatGPT domain!
+
+- Our fine-tuned model is completely open for academic research, but please adhere to the [agreement](#license) and
+  the [Yi License](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).
+
+# Model Evaluation Results
+
+We use [opencompass](https://opencompass.org.cn) to run a 5-shot evaluation on the following general-domain datasets.
+The evaluation results of the other models are taken
+from the [opencompass leaderboard](https://opencompass.org.cn/leaderboard-llm).
+
 | | C-Eval | MMLU | CMMLU |
 |---------------------------|-----------|--------|-----------|
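The first added line above says this repository is OrionStar-Yi-34B-Chat with tensors renamed so that standard Llama modelling code (e.g. transformers' `LlamaForCausalLM`, which expects keys like `model.layers.N.self_attn.q_proj.weight`) can load it directly. A minimal sketch of that kind of state-dict key renaming follows; the source key patterns below are hypothetical illustrations, since the actual OrionStar key names do not appear in this diff:

```python
import re

# Hypothetical source-key patterns -> standard Llama key names.
# These rules are illustrative only; the real OrionStar checkpoint
# may use different original names.
RENAME_RULES = [
    (re.compile(r"^transformer\.h\.(\d+)\.attn\.([qkvo])_proj\.(weight|bias)$"),
     r"model.layers.\1.self_attn.\2_proj.\3"),
    (re.compile(r"^transformer\.h\.(\d+)\.mlp\.(gate|up|down)_proj\.weight$"),
     r"model.layers.\1.mlp.\2_proj.weight"),
    (re.compile(r"^transformer\.ln_f\.weight$"), r"model.norm.weight"),
]


def rename_key(key: str) -> str:
    """Apply the first matching rule; keys already in Llama form pass through."""
    for pattern, replacement in RENAME_RULES:
        if pattern.match(key):
            return pattern.sub(replacement, key)
    return key


def rename_state_dict(state_dict: dict) -> dict:
    """Return a new state dict with every key mapped through rename_key."""
    return {rename_key(k): v for k, v in state_dict.items()}
```

Once every key matches the standard layout, the checkpoint can be saved back out and loaded with unmodified Llama modelling code, with no `trust_remote_code` needed.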