---
license: mit
---
Demo on Google Colab: https://colab.research.google.com/drive/1i5plJtq_6HIOuk_x7D-LkYDpcd3SADLf?usp=sharing
As with [Qwen1.5-14B-Chat](https://huggingface.co./Qwen/Qwen1.5-14B-Chat), you can load this model with the standard `AutoModelForCausalLM` class:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to move the model inputs onto

model = AutoModelForCausalLM.from_pretrained(
    "ljsabc/Qwen-1.5-14B-Chat-Fujisaki",
    torch_dtype="auto",
    device_map="auto",
    # load_in_4bit=True  # optionally load in 4-bit to save GPU memory
)
tokenizer = AutoTokenizer.from_pretrained("ljsabc/Qwen-1.5-14B-Chat-Fujisaki")

# Prompt: "Please write a new tweet."
prompt = "请撰写一条新的推文。"
messages = [
    # System prompt: "You will role-play the Twitter user @ljsabc, writing your own
    # original tweets or replies to other people's tweets. All of your replies should
    # be written in Simplified Chinese."
    {"role": "system", "content": "你将扮演推特用户@ljsabc,你需要撰写你的原创推文或回复别人的推文。所有你的回复都应该使用简体中文书写。"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,   # sampling must be enabled for temperature/top_p to take effect
    temperature=0.95,
    top_p=0.99
)
# Strip the prompt tokens so only the newly generated tokens are decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
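
If GPU memory is limited, the commented-out `load_in_4bit` line above can be replaced with an explicit `BitsAndBytesConfig`. The following is a minimal sketch, assuming the `bitsandbytes` package is installed and a CUDA GPU is available; the chat-template and generation code stays the same.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization (requires the bitsandbytes package)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ljsabc/Qwen-1.5-14B-Chat-Fujisaki",
    quantization_config=quantization_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ljsabc/Qwen-1.5-14B-Chat-Fujisaki")
```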