---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
datasets:
- nlpai-lab/databricks-dolly-15k-ko
- kyujinpy/KOR-OpenOrca-Platypus-v3
---

**Input** Models take text input only.

**Output** Models generate text only.

**Base Model**  [beomi/Yi-Ko-6B](https://huggingface.co./beomi/Yi-Ko-6B)   

**Training Dataset**  
- [nlpai-lab/databricks-dolly-15k-ko](https://huggingface.co./datasets/nlpai-lab/databricks-dolly-15k-ko)
- [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co./datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
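For quick inspection, here is a minimal sketch of loading one of the training sets listed above with the `datasets` library; the `train` split name is an assumption:

```python
from datasets import load_dataset

# Load one of the listed training sets; the split name is assumed.
ds = load_dataset("nlpai-lab/databricks-dolly-15k-ko", split="train")
print(ds[0])  # inspect a single example
```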

# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "ifuseok/yi-ko-playtus-instruct-v0.2"

# Load the model in fp16 and place it automatically across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

# Prompt Example
```
<|system|>
This is the system message.<|endoftext|>
<|user|>
This is the user message.<|endoftext|>
<|assistant|>
This is the assistant message.<|endoftext|>
```
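Putting the two together, here is a minimal generation sketch that fills in the template above using the `model` and `tokenizer` from the implementation code; the example question, stop-token handling, and sampling parameters are illustrative assumptions, not settings published with this model:

```python
# Build a single-turn prompt following the template above.
prompt = (
    "<|system|>\n"
    "You are a helpful assistant.<|endoftext|>\n"          # assumed system message
    "<|user|>\n"
    "Explain overfitting in one sentence.<|endoftext|>\n"  # example question
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative limit
    do_sample=True,      # assumed sampling setup
    temperature=0.7,
    # Stop on the template's end-of-turn token (assumed to be in the vocab).
    eos_token_id=tokenizer.convert_tokens_to_ids("<|endoftext|>"),
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```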