---
license: cc-by-sa-4.0
---
# Join our Discord
[Server Link](https://discord.gg/MrBt3PXdXc)
# **License**
**cc-by-sa-4.0**
# **Model Details**
**Base Model**
[maywell/Synatra-10.7B-v0.4](https://huggingface.co./maywell/Synatra-10.7B-v0.4)
**Trained On**
A100 80GB * 8
This model was trained with GPU resources provided by Sionic AI.
**Instruction format**
It follows the **Alpaca** format.
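For reference, a common Alpaca-style prompt layout is shown below as an illustration; the authoritative template is the `chat_template` bundled with the tokenizer.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```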
# **Model Benchmark**
TBD
# **Implementation Code**
Since the `chat_template` already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-kiqu-10.7B-v0.4")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-kiqu-10.7B-v0.4")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# apply_chat_template renders the Alpaca-style prompt and returns input ids
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
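If your GPU cannot hold the 10.7B model in full precision, you can load it in half precision instead. This is a minimal sketch, assuming `torch` and `accelerate` are installed (`device_map="auto"` lets Transformers place the weights automatically):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in float16 and let accelerate pick device placement
model = AutoModelForCausalLM.from_pretrained(
    "maywell/Synatra-kiqu-10.7B-v0.4",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-kiqu-10.7B-v0.4")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

generated_ids = model.generate(inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```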