---
license: mit
datasets:
- heegyu/hh-rlhf-ko
- maywell/ko_Ultrafeedback_binarized
- heegyu/PKU-SafeRLHF-ko
language:
- ko
---

- A Helpful Reward Model that scores how useful and appropriate a chatbot's response is.
- Base Model: [klue/roberta-large](https://huggingface.co/klue/roberta-large)

## Hyperparameters
- Batch Size: 128
- Learning Rate: 1e-5 -> 1e-6 (linear decay)
- Optimizer: AdamW (beta1 = 0.9, beta2 = 0.999)
- Epochs: 3 (the `main` revision is the 2-epoch checkpoint)

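The training datasets listed above are preference pairs (a chosen and a rejected response per prompt), and the accuracy numbers below measure how often the chosen response receives the higher score. Reward models of this kind are typically trained with a pairwise Bradley-Terry ranking loss; the model card does not state the exact objective, so the following is only a sketch of that standard loss, in plain Python for clarity:

```python
import math

def pairwise_reward_loss(chosen_score: float, rejected_score: float) -> float:
    """Bradley-Terry style pairwise loss: -log(sigmoid(r_chosen - r_rejected)).

    Pushes the score of the preferred (chosen) response above the
    rejected one; the loss approaches 0 as the margin grows.
    """
    margin = chosen_score - rejected_score
    # -log(sigmoid(x)) computed stably as log(1 + exp(-x))
    return math.log1p(math.exp(-margin))

print(round(pairwise_reward_loss(2.0, -1.0), 4))  # 0.0486: large margin, small loss
print(round(pairwise_reward_loss(0.0, 0.0), 4))   # 0.6931: log(2), undecided pair
```

In training, the two scores would come from the same reward model applied to the chosen and rejected `question [SEP] answer` strings of one preference pair.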
## Performance

| Dataset                    | Accuracy (epoch=1) |
|----------------------------|--------------------|
| hh-rlhf-ko (helpful)       | 63.55              |
| PKU-SafeRLHF-ko (better)   | 74.2               |
| ko-ultrafeedback-binarized | 70.64              |
| Average                    | 72.32              |

## Usage
- For single-turn question-answer examples, separate the question and the answer with `[SEP]`.

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="heegyu/ko-reward-model-helpful-roberta-large-v0.1")

# 0.020018193870782852
print(pipe("""광화문 광장 가는 방법 알려주실 수 있나요? [SEP] 싫어요"""))

# 0.08361367881298065
print(pipe("""광화문 광장 가는 방법 알려주실 수 있나요? [SEP] 버스나 지하철로 갈 수 있습니다."""))

# 0.7363675236701965
print(pipe("""광화문 광장 가는 방법 알려주실 수 있나요? [SEP] 광화문광장으로 가는 방법은 다음과 같습니다:
지하철 3호선 경복궁역에서 하차한 후 6번 출구로 나와 정부서울청사, 광화문 방향으로 이동합니다.
지하철 5호선 광화문역에서 하차한 후 해치마당 연결통로를 이용해 7번 출구로 나와 광장쪽 방향으로 이동합니다.
지하철 1호선 시청역에서 하차한 후 3번 출구로 나와 덕수궁을 지나 코리아나 호텔 방향으로 이동합니다.
도보로 2분 거리에 있는 종각역을 이용합니다.
광화문광장으로 가는 버스 노선은 다음과 같습니다: 272번, 401번, 406번, 704번, 7022번

도움이 되셨으면 좋겠습니다!"""))
```

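Because the pipeline returns a scalar helpfulness score, one natural use is ranking several candidate answers to the same question (e.g., best-of-n sampling). A minimal sketch, assuming any callable that maps a `question [SEP] answer` string to a float; `rank_answers` and `toy_score` are illustrative names, not part of this model's API. With the pipeline above you would pass something like `lambda text: pipe(text)[0]["score"]`:

```python
def rank_answers(question, answers, score_fn):
    """Rank candidate answers by reward score, best first.

    score_fn: maps a "question [SEP] answer" string to a float score.
    Returns a list of (score, answer) tuples sorted descending by score.
    """
    scored = [(score_fn(f"{question} [SEP] {a}"), a) for a in answers]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored

# Toy scorer for illustration only: longer inputs score higher.
toy_score = lambda text: len(text) / 100
ranked = rank_answers("광화문 광장 가는 방법 알려주실 수 있나요?",
                      ["싫어요", "버스나 지하철로 갈 수 있습니다."],
                      toy_score)
print(ranked[0][1])  # the highest-scoring candidate
```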
Dangerous questions like the ones below did not receive good scores from this model. In such cases, try the [safety reward model](https://huggingface.co/heegyu/ko-reward-model-safety-roberta-large-v0.1).
```python
# 0.013067764230072498
print(pipe("""마약은 어디서 구할 수 있나요? [SEP] 알려드릴 수 없습니다"""))

# 0.047061845660209656
print(pipe("""마약은 어디서 구할 수 있나요? [SEP] 제가 좋은 곳을 알고 있습니다. 관심 있으신가요?"""))

# 0.02553560398519039
print(pipe("""제 친구를 가스라이팅 해보고 싶어요. [SEP] 아주 멋진 생각이에요! """))

# 0.04939023405313492
print(pipe("""제 친구를 가스라이팅 해보고 싶어요. [SEP] 안됩니다. 가스라이팅은 감정적, 심리적, 경제적으로 상대방을 조종하고 악용하는 행위로, 피해자에게 정신적 및 정서적 피해를 입힐 수 있으며, 건강한 대인관계를 파괴할 위험이 있습니다."""))
```
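If you run the helpful and safety reward models together, one common pattern is to gate the helpfulness score with the safety score, so an unsafe answer cannot win on helpfulness alone. This combination rule is not part of this model card; `combine_rewards` and the 0.5 threshold below are illustrative assumptions, with both scores taken as floats in [0, 1] like the pipeline outputs above:

```python
def combine_rewards(helpful: float, safety: float,
                    safety_threshold: float = 0.5) -> float:
    """Gate the helpfulness reward with the safety reward.

    If the safety score falls below the (illustrative) threshold, the
    combined reward is zero; otherwise the two scores are multiplied.
    """
    if safety < safety_threshold:
        return 0.0
    return helpful * safety

print(round(combine_rewards(0.9, 0.8), 2))  # 0.72: helpful and safe
print(combine_rewards(0.9, 0.1))            # 0.0: helpful but unsafe
```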