---
library_name: peft
license: apache-2.0
language:
- mn
- en
tags:
- Mongolian
- QLora
- Llama3
- Instructed-model
---

### Model Description

<!-- Provide a longer summary of what this model is. -->

Mongolian-Llama3 is an instruction-tuned language model for Mongolian and English users, with abilities such as role-playing and tool use, built on the quantized Meta-Llama-3-8B model.

- Developed by: Dorjzodovsuren
- License: Llama-3 License
- Base model: llama-3-8b-bnb-4bit
- Model size: 8.03B parameters
- Context length: 8K tokens

## Bias, Risks, and Limitations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Due to hallucinations and the characteristics of the pretraining datasets, some information may be misleading, and answers may differ depending on the language of the query.

Please ask in <b>Mongolian</b> if possible.

## How to Get Started with the Model
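A minimal sketch of using the model with `transformers` and `peft`. The repository ids below (`Dorjzodovsuren/Mongolian-Llama3` for the adapter, `unsloth/llama-3-8b-bnb-4bit` for the 4-bit base) are assumptions, not confirmed by this card; substitute the actual ids. The prompt helper assumes the standard Llama-3 chat template.

```python
def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt with the Llama-3 chat template
    (template assumed from the Meta-Llama-3 base model)."""
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def load_model(adapter_id: str = "Dorjzodovsuren/Mongolian-Llama3",
               base_id: str = "unsloth/llama-3-8b-bnb-4bit"):
    """Load the quantized base model and apply the PEFT adapter.

    Both repo ids are illustrative assumptions. Imports are deferred so the
    prompt helper above can be used without the heavy dependencies installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(model, adapter_id)
    return model, tokenizer
```

For generation, tokenize `build_prompt(...)`, call `model.generate`, and decode; asking in Mongolian should give the most reliable answers, per the limitations above.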