Update README.md
README.md
CHANGED
@@ -6,10 +6,37 @@ language:
- en
pipeline_tag: text-generation
---

### Model Card for MDDDDR/Ko-Luxia-8B-it-v0.3
base_model: [Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)

### Basic usage
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model; device_map="auto" places the weights on the available GPU.
tokenizer = AutoTokenizer.from_pretrained("MDDDDR/Ko-Luxia-8B-it-v0.3")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/Ko-Luxia-8B-it-v0.3",
    device_map="auto",
    torch_dtype=torch.bfloat16
)

# Tokenize a Korean prompt ("What is an apple?") and move it to the GPU.
input_text = "사과가 뭐야?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate a completion with default settings and decode it back to text.
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
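
The `model.generate` call above uses the library's default generation settings. A variant with explicit generation parameters is sketched below; it reuses `model`, `tokenizer`, and `input_ids` from the snippet above, and the specific values (`max_new_tokens`, sampling settings) are illustrative choices rather than recommendations from this card.

```python
# Continues from the snippet above. Parameter values are illustrative, not tuned for this model.
outputs = model.generate(
    **input_ids,
    max_new_tokens=256,  # allow a longer completion than the default
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```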

### Training dataset
dataset: [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
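
For quick inspection, the dataset can be loaded with the `datasets` library. This is a minimal sketch that is not part of the original card; it assumes the usual single `train` split and makes no assumption about field names, so it simply prints the first record.

```python
# pip install datasets
from datasets import load_dataset

# Download the instruction data this model was fine-tuned on.
ds = load_dataset("kyujinpy/KOpen-platypus")

print(ds)              # available splits and row counts
print(ds["train"][0])  # first record, with whatever fields the dataset defines
```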

### Hardware
RTX 3090 Ti 24GB x 1

### Model Benchmark Results
| Tasks        | Version | Filter | n-shot | Metric |   |  Value |   | Stderr |
|--------------|--------:|--------|-------:|--------|---|-------:|---|-------:|
| kobest_boolq |       1 | none   |      0 | acc    | ↑ | 0.5278 | ± | 0.0133 |
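
The table follows the output format of EleutherAI's lm-evaluation-harness. Assuming that harness produced these numbers (the card does not state the tool or its version), the kobest_boolq row could be re-run along these lines; treat the exact API, task name spelling, and 0.4.x interface as assumptions to verify against the installed version.

```python
# pip install lm-eval  (EleutherAI lm-evaluation-harness; 0.4.x Python API assumed)
from lm_eval import simple_evaluate

# Zero-shot KoBEST BoolQ evaluation of the Hugging Face checkpoint.
results = simple_evaluate(
    model="hf",
    model_args="pretrained=MDDDDR/Ko-Luxia-8B-it-v0.3,dtype=bfloat16",
    tasks=["kobest_boolq"],
    num_fewshot=0,
)

print(results["results"]["kobest_boolq"])  # accuracy and stderr for the task
```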