Text Generation
Transformers
PyTorch
English
llama
text-generation-inference
Inference Endpoints

wrong chat template?

#9
by vinnitu - opened

I use this code to generate a prompt with the chat template, but it seems wrong. Or did I miss something?
Where can I find info about [INST] and <> in your docs?
And what about the "You are helpful, respectful..." system prompt?

from transformers import AutoTokenizer

model_name = "openaccess-ai-collective/wizard-mega-13b"

tokenizer = AutoTokenizer.from_pretrained(model_name)

chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]

# Render the conversation to a prompt string without tokenizing it
res = tokenizer.apply_chat_template(chat, tokenize=False)

print(res)
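For reference, the [INST] and <<SYS>> markers come from the Llama-2 chat format, where the system prompt is wrapped in <<SYS>> tags inside the first [INST] block. Below is a minimal sketch of how that format is typically assembled by hand; whether wizard-mega-13b actually expects this format (rather than its own template) is an assumption, and build_llama2_prompt is a hypothetical helper, not part of transformers.

```python
# Sketch of the Llama-2 style prompt layout using [INST] and <<SYS>> markers.
# Assumption: this is the format the question refers to; the model's real
# template may differ, so always prefer tokenizer.apply_chat_template.
DEFAULT_SYSTEM = "You are a helpful, respectful and honest assistant."

def build_llama2_prompt(messages, system=DEFAULT_SYSTEM):
    """Assemble a Llama-2 style prompt from alternating user/assistant turns."""
    prompt = ""
    for i, msg in enumerate(messages):
        if msg["role"] == "user":
            content = msg["content"]
            if i == 0 and system:
                # The system prompt goes inside the first [INST] block,
                # wrapped in <<SYS>> ... <</SYS>> tags.
                content = f"<<SYS>>\n{system}\n<</SYS>>\n\n{content}"
            prompt += f"<s>[INST] {content} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant replies follow the [/INST] marker and end the turn.
            prompt += f" {msg['content']} </s>"
    return prompt
```

Comparing this helper's output against what apply_chat_template prints is a quick way to see whether the model's stored template follows the Llama-2 convention or something else.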

