---
language:
- ko
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

# **Ko-PlatYi-6B-O**  
<img src='./Ko-PlatYi.png' width=256>

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**   
Ko-PlatYi-6B-O is an auto-regressive language model based on the Yi-6B transformer architecture.

**Blog Link**  
Blog: [Coming soon...]  
Github: [Coming soon...]  

**Base Model**    
[beomi/Yi-Ko-6B](https://huggingface.co./beomi/Yi-Ko-6B)   

**Training Dataset**    
[kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co./datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
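
The training data can be inspected with the `datasets` library. A minimal sketch; the `train` split name is the Hugging Face default and an assumption here:

```python
from datasets import load_dataset

# Assumes the dataset repo exposes a default "train" split.
dataset = load_dataset("kyujinpy/KOR-OpenOrca-Platypus-v3", split="train")
print(dataset[0])  # inspect one instruction/response record
```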
 
# **Model Benchmark**

## Open leaderboard
- Results are tracked on the [Open Ko-LLM Leaderboard](https://huggingface.co./spaces/upstage/open-ko-llm-leaderboard).

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | CommonGen-V2 |
| --- | --- | --- | --- | --- | --- | --- |  
| **Ko-PlatYi-6B-O** | NaN | NaN | NaN | NaN | NaN | NaN |  
| Ko-PlatYi-6B | NaN | NaN | NaN | NaN | NaN | NaN |  
| Yi-Ko-6B | 48.79 | 41.04 | 53.39 | 46.28 | 41.64 | 61.63 |

*NaN indicates scores not yet reported on the leaderboard.*
  
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Ko-PlatYi-6B-O"

# Load the model in half precision and let device_map="auto"
# place the weights across available devices.
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
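
A minimal generation sketch building on the objects above. The prompt and decoding parameters are illustrative; the card does not prescribe a prompt template:

```python
# Hypothetical Korean prompt ("Where is the capital of Korea?");
# adjust to your own task.
prompt = "한국의 수도는 어디인가요?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

with torch.no_grad():
    output_ids = OpenOrca.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )
print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```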