**Llama3-Chinese** is a large language model based on **Meta-Llama-3-8B**, fine-tuned with the [DoRA](https://arxiv.org/pdf/2402.09353.pdf) and [LoRA+](https://arxiv.org/pdf/2402.12354.pdf) training methods on 500k high-quality Chinese multi-turn SFT examples, 100k English multi-turn SFT examples, and 2k single-turn self-cognition examples.
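Since the model is fine-tuned on multi-turn chat data on a Llama 3 base, inference prompts should follow the standard Llama 3 chat template. Below is a minimal sketch of rendering a conversation by hand, assuming the stock Llama 3 special tokens (in practice, `tokenizer.apply_chat_template` from `transformers` produces this layout for you):

```python
def build_llama3_prompt(messages):
    """Render a multi-turn conversation in the Llama 3 chat format.

    `messages` is a list of {"role": ..., "content": ...} dicts, the same
    shape accepted by transformers' `tokenizer.apply_chat_template`.
    """
    prompt = "<|begin_of_text|>"
    for m in messages:
        # Each turn: role header, blank line, content, end-of-turn token.
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave the prompt open at an assistant header so the model continues it.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt


messages = [
    {"role": "user", "content": "你好"},
    {"role": "assistant", "content": "你好！有什么可以帮你？"},
    {"role": "user", "content": "介绍一下 Llama3-Chinese"},
]
print(build_llama3_prompt(messages))
```

The final assistant header is deliberately left unclosed so generation continues from there; stop decoding at `<|eot_id|>`.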
[GitHub](https://github.com/seanzhang-zhichen/llama3-chinese)
![DEMO](./images/vllm_web_demo.png)