---
tags:
- text-generation
license: cc-by-nc-4.0
language:
- ko
base_model: yanolja/Bookworm-10.7B-v0.4-DPO
pipeline_tag: text-generation
---
# DataVortexS-10.7B-dpo-v1.4
## Our Team

| Research & Engineering | Product Management |
|---|---|
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## Model Details

### Base Model

yanolja/Bookworm-10.7B-v0.4-DPO

### Trained On

- OS: Ubuntu 22.04
- GPU: H100 80GB × 4
- transformers: v4.36.2
### Instruction format

The model follows the ChatML format. For example:
```python
text = """\
<|im_start|>system
You are an AI assistant that helps people find information.<|im_end|>
<|im_start|>user
Where is the capital of South Korea?<|im_end|>
<|im_start|>assistant
The capital of South Korea is Seoul.<|im_end|>
<|im_start|>user
What is the total population of Seoul?<|im_end|>
<|im_start|>assistant
"""
```
## Model Benchmark

### Ko LM Eval Harness
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
|---|---|---|---|---|
| kobest_boolq | 0.757911 | 0.907177 | 0.924496 | 0.605075 |
| kobest_copa | 0.740605 | 0.801886 | 0.831886 | 0.849978 |
| kobest_hellaswag | 0.445176 | 0.454788 | 0.468654 | 0.45218 |
| kobest_sentineg | 0.415445 | 0.95214 | 0.962217 | 0.967254 |
| Average | 0.589784 | 0.778998 | 0.796813 | 0.718622 |
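The exact harness version used for this table is not stated. As one hedged reproduction sketch, recent releases of EleutherAI's lm-evaluation-harness (`pip install lm-eval`, v0.4+) include the KoBEST tasks and expose a Python entry point:

```python
# Hypothetical reproduction sketch, not the card's own evaluation script.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Edentns/DataVortexS-10.7B-dpo-v1.4",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=5,  # repeat with 0, 10, and 50 for the other columns
)
print(results["results"])
```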
### Ko-LLM-Leaderboard

| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|
| 53.81 | 52.05 | 62.93 | 53.59 | 50.42 | 50.06 |
## Implementation Code

This model ships with a `chat_template` in its tokenizer configuration, so the instruction format above is applied automatically. You can use the code below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.4")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.4")

messages = [
    {"role": "system", "content": "You are an AI assistant that helps people find information."},
    {"role": "user", "content": "Where is the capital of South Korea?"},
    {"role": "assistant", "content": "The capital of South Korea is Seoul."},
    {"role": "user", "content": "What is the total population of Seoul?"}
]

# Render the conversation with the bundled ChatML template and tokenize it;
# add_generation_prompt=True appends the assistant turn the model completes.
encodeds = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
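Note that `decoded[0]` includes the prompt and the ChatML special tokens. If you only want the assistant's reply, one variant (not from the original card) is to slice off the prompt tokens before decoding:

```python
# Decode only the newly generated tokens, dropping the prompt and
# the ChatML markers.
reply = tokenizer.batch_decode(
    generated_ids[:, model_inputs.shape[1]:], skip_special_tokens=True
)[0]
print(reply)
```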
## License

This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes with attribution.