---
language:
- ko
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Nexusflow/Athene-V2-Chat
- Nexusflow/Athene-V2-Agent
- anthracite-org/magnum-v4-72b
- Qwen/Qwen2.5-72B-Instruct
---
# spow12/MK_Nemo_12B
## Model Description

This model is a supervised fine-tuned version of Qwen/Qwen2.5-72B-Instruct, trained for Korean with DeepSpeed and trl.
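The exact training recipe is not published. As a rough illustration only, a minimal trl SFT setup under DeepSpeed might look like the sketch below; the dataset path, hyperparameters, and DeepSpeed config file are hypothetical placeholders, not the actual recipe.

```python
# Hypothetical sketch: korean_sft.jsonl, the hyperparameters, and ds_zero3.json
# are placeholders, not the recipe actually used for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("json", data_files="korean_sft.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-72B-Instruct",
    train_dataset=train_dataset,
    args=SFTConfig(
        output_dir="qwen2.5-72b-ko-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        bf16=True,
        deepspeed="ds_zero3.json",  # DeepSpeed ZeRO-3 config file (placeholder)
    ),
)
trainer.train()
```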
## Merge Method

```yaml
merge_method: model_stock
name: ChatWaifu_72B_V2.4
models:
  - model: Nexusflow/Athene-V2-Chat
  - model: Nexusflow/Athene-V2-Agent
  - model: Qwen/Qwen2.5-72B-Instruct_instruction_tunned(private)
  - model: anthracite-org/magnum-v4-72b
base_model: Qwen/Qwen2.5-72B-Instruct
dtype: bfloat16
tokenizer_source: base
```
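As a sketch of how such a config is applied with mergekit's Python API (file names and output path below are placeholders; note the private instruction-tuned checkpoint is not public, so this exact merge cannot be reproduced as-is):

```python
# Sketch of running the merge above; assumes the YAML is saved as config.yaml.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./ChatWaifu_72B_V2.4",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,  # honors tokenizer_source: base
    ),
)
```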
## Training Data

- Trained on public and private data (about 500K samples).
## Usage

```python
import torch
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = 'spow12/KoQwen_72B_v5.0'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # optional, requires flash-attn
    device_map='auto',
)
model.eval()
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)  # model is already dispatched via device_map
generation_configs = dict(
    max_new_tokens=2048,
    num_return_sequences=1,
    temperature=0.75,
    # repetition_penalty=1.1,
    do_sample=True,
    top_k=20,
    top_p=0.9,
    min_p=0.1,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    streamer=TextStreamer(tokenizer),  # optional; streaming requires num_beams=1
)
# Korean system prompt; roughly: "You are a kind chatbot and must answer the other
# party's requests as thoroughly and politely as possible. Carefully analyze the
# information the user provides, quickly grasp the user's intent, and generate
# answers accordingly. Always respond in very natural Korean."
sys_message = """당신은 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답해야합니다.
사용자가 제공하는 정보를 세심하게 분석하여 사용자의 의도를 신속하게 파악하고 그에 따라 답변을 생성해야합니다.
항상 매우 자연스러운 한국어로 응답하세요."""
message = [
    {
        'role': "system",
        'content': sys_message
    },
    {
        'role': 'user',
        'content': "현재의 경제 상황에 대해 어떻게 생각해?"  # "What do you think about the current economic situation?"
    }
]
conversation = pipe(message, **generation_configs)
print(conversation[-1])
```
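If you prefer calling `generate` directly rather than going through the pipeline, a sketch of the equivalent call using the tokenizer's chat template (sampling parameters mirror `generation_configs` above):

```python
# Alternative to the pipeline helper: apply the chat template and generate directly.
input_ids = tokenizer.apply_chat_template(
    message, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.75,
    top_k=20,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```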