---
tags:
    - text-generation
license: cc-by-nc-sa-4.0
language:
    - ko
base_model: megastudy/M-SOLAR-10.7B-v1.3
pipeline_tag: text-generation
---

# **DataVortexS-10.7B-dpo-v1.8**

<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">

## **Model Details**

### **Base Model**

[megastudy/M-SOLAR-10.7B-v1.3](https://huggingface.co./megastudy/M-SOLAR-10.7B-v1.3)

### **Trained On**

-   **OS**: Ubuntu 22.04
-   **GPU**: 4× H100 80GB
-   **transformers**: v4.36.2

### **Instruction Format**

The model follows the **Alpaca (Chat)** prompt format. For example:

```python
# Translation of the dialog below:
#   System:    "You are an AI assistant that helps people find information."
#   User:      "What is the capital of South Korea?"
#   Assistant: "The capital of South Korea is Seoul."
#   User:      "What is the total population of Seoul?"
text = """\
### System:
당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€.

### User:
λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?

### Assistant:
λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.

### User:
μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?
"""
```
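
The `### Role:` blocks can also be assembled programmatically. The sketch below is illustrative rather than part of the official card: the `build_alpaca_prompt` helper is hypothetical, and the trailing `### Assistant:` generation cue is an assumption about how the format prompts the next reply; the tokenizer's built-in `chat_template` (see *Implementation Code* below) is the authoritative rendering.

```python
# Illustrative sketch: build the Alpaca (Chat) prompt from message dicts.
# build_alpaca_prompt is a hypothetical helper, not part of the model repo;
# the trailing "### Assistant:" header (a generation cue) is an assumption.
ROLE_HEADERS = {
    "system": "### System:",
    "user": "### User:",
    "assistant": "### Assistant:",
}

def build_alpaca_prompt(messages):
    """Render messages into the '### Role:' block format shown above."""
    blocks = [f"{ROLE_HEADERS[m['role']]}\n{m['content']}" for m in messages]
    # End with an empty Assistant header so the model writes the next reply.
    return "\n\n".join(blocks) + "\n\n### Assistant:\n"

prompt = build_alpaca_prompt([
    {"role": "system", "content": "You are an AI assistant that helps people find information."},
    {"role": "user", "content": "What is the capital of South Korea?"},
])
print(prompt)
```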

## **Model Benchmark**

### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**

| Task             |       0-shot |         5-shot |       10-shot |      50-shot |
| :--------------- | -----------: | -------------: | ------------: | -----------: |
| kobest_boolq     |     0.375807 |       0.822623 |      0.828582 |     0.822529 |
| kobest_copa      |     0.539993 |       0.665979 |       0.67998 |     0.694997 |
| kobest_hellaswag |     0.405785 |       0.401975 |      0.438219 |     0.402962 |
| kobest_sentineg  |     0.794083 |        0.85276 |      0.883509 |     0.880932 |
| **Average**      | **0.528917** | **0.68583425** | **0.7075725** | **0.700355** |
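
Each value in the **Average** row is the unweighted mean of the four kobest task scores in that column, which a few lines of Python confirm:

```python
# Recompute the "Average" row: the unweighted mean of the four task
# scores (boolq, copa, hellaswag, sentineg) per shot setting.
scores = {
    "0-shot":  [0.375807, 0.539993, 0.405785, 0.794083],
    "5-shot":  [0.822623, 0.665979, 0.401975, 0.852760],
    "10-shot": [0.828582, 0.679980, 0.438219, 0.883509],
    "50-shot": [0.822529, 0.694997, 0.402962, 0.880932],
}
for shots, vals in scores.items():
    print(f"{shots}: {sum(vals) / len(vals):.8g}")
# 0-shot: 0.528917, 5-shot: 0.68583425, 10-shot: 0.7075725, 50-shot: 0.700355
```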

### **[Ko-LLM-Leaderboard](https://huggingface.co./spaces/upstage/open-ko-llm-leaderboard)**

Benchmarking is still in progress; the table below is a placeholder and will be updated once the leaderboard results are published.

| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
|       0 |      0 |            0 |       0 |             0 |               0 |

## **Implementation Code**

The tokenizer ships with a `chat_template` that encodes the instruction format above, so prompts can be built with `apply_chat_template` as in the code below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to run the model on

# Loading in half precision is optional, but keeps the 10.7B model's
# weights to roughly 21 GB instead of ~43 GB in float32.
model = AutoModelForCausalLM.from_pretrained(
    "Edentns/DataVortexS-10.7B-dpo-v1.8", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.8")

messages = [
    # System: "You are an AI assistant that helps people find information."
    {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."},
    # User: "What is the capital of South Korea?"
    {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"},
    # Assistant: "The capital of South Korea is Seoul."
    {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."},
    # User: "What is the total population of Seoul?"
    {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"}
]

# The tokenizer's chat_template renders the messages into the Alpaca (Chat)
# format; add_generation_prompt=True appends the Assistant header so the
# model continues with the next reply.
encodeds = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(decoded[0])
```
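
For interactive use you can print tokens as they are sampled instead of waiting for the full sequence. This variant is a sketch that reuses the `model`, `tokenizer`, and `model_inputs` objects above; `TextStreamer` is the standard transformers utility for streaming decoded output to stdout.

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
generated_ids = model.generate(
    model_inputs, max_new_tokens=1000, do_sample=True, streamer=streamer
)
```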

## **License**

The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.

<div align="center">
    <a href="https://edentns.com/">
        <img src="./Logo.png" alt="Logo" style="height: 3em;">
    </a>
</div>