---
license: cc-by-sa-4.0
---

**Join our Discord**

# License

cc-by-sa-4.0
# Model Details

**Base Model**
maywell/Synatra-10.7B-v0.4

**Trained On**
A100 80GB * 8

This model was trained with GPU resources provided by Sionic AI.
# Instruction format

It follows the Alpaca format.
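
For reference, here is a minimal sketch of the standard Alpaca prompt layout. The exact preamble emitted by this model's bundled chat_template may differ, so treat this as an illustration rather than the template itself:

```python
# A sketch of the standard Alpaca prompt layout; the preamble wording is
# an assumption and may not match this model's chat_template exactly.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

print(ALPACA_TEMPLATE.format(instruction="What kind of company is Nvidia?"))
```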
# Model Benchmark

TBD
# Implementation Code

Since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-kiqu-10.7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-kiqu-10.7B")

messages = [
    {"role": "user", "content": "엔비디아는 뭐 하는 기업이야?"},  # "What kind of company is Nvidia?"
]

# apply_chat_template renders the messages with the tokenizer's built-in
# instruction format and returns the input ids as a PyTorch tensor.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
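
Note that `batch_decode` returns the prompt together with the generated continuation. If you only want the model's reply, one option is to slice off the prompt tokens before decoding; a minimal sketch:

```python
# Decode only the newly generated tokens, dropping the echoed prompt.
new_tokens = generated_ids[:, model_inputs.shape[-1]:]
print(tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0])
```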