Conversation roles must alternate user/assistant/user/assistant/...
#57 opened by akanksh-bc
Hi all,
I'm using the Hugging Face transformers pipeline module to load the model. Here is the code I executed:
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
I want to explicitly include a system message to set the context, but I get this error when I run the code:
File "mistral/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 257, in __call__
return super().__call__(Chat(text_inputs), **kwargs)
File "mistral/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1254, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "mistral/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1260, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
File "mistral/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 276, in preprocess
inputs = self.tokenizer.apply_chat_template(
File "mistral/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1855, in apply_chat_template
rendered_chat = compiled_template.render(
File "mistral/lib/python3.10/site-packages/jinja2/environment.py", line 1304, in render
self.environment.handle_exception()
File "mistral/lib/python3.10/site-packages/jinja2/environment.py", line 939, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 1, in top-level template code
File "mistral/lib/python3.10/site-packages/jinja2/sandbox.py", line 394, in call
return __context.call(__obj, *args, **kwargs)
File "mistral/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1899, in raise_exception
raise TemplateError(message)
jinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/...
What am I doing wrong? Should I change any parameter while loading the model?
The problem seems to be that Mistral does not support the "system" role in its chat template. I made some changes, and the following script worked fine for me:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from transformers import BitsAndBytesConfig
model_name_or_path = "mistralai/Mistral-7B-Instruct-v0.3" # Replace with your model's name or path
# Create a BitsAndBytesConfig object for 4-bit precision
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16 # Specify the compute dtype, e.g., float16
)
# Load the model with 4-bit precision
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, quantization_config=bnb_config)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4096)
messages = [
# {"role": "user", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "You are a pirate chatbot who always responds in pirate speak!\nWho are you?"},
]
chatbot(messages)
Thank you!
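If you'd rather keep a separate system message in your own code, here is a minimal sketch of a helper that folds it into the first user turn before calling the pipeline (the function name and behavior are my own, not part of transformers):
def merge_system_into_user(messages):
    # Collect the content of any system messages and drop them from the list
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # Prepend the system text to the first user message
    if system_parts and rest and rest[0]["role"] == "user":
        rest[0] = {
            "role": "user",
            "content": "\n".join(system_parts) + "\n" + rest[0]["content"],
        }
    return rest

chatbot(merge_system_into_user([
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]))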
Will it still follow a chat template, given that the pipeline task is text generation?
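It should: the traceback above shows the text-generation pipeline calling tokenizer.apply_chat_template in its preprocess step whenever it is given a list of role/content dicts. You can render the prompt yourself to double-check (a sketch, reusing the tokenizer and user-only messages loaded above; the original system-role messages would raise the same TemplateError here):
rendered = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(rendered)  # the exact prompt string the pipeline will feed the model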