Is this tokenizer messed up?
I've noticed that the model sometimes returns \n\nUSER: at the end of its responses; however, I don't encounter this issue with your 13b-v2 version. Is the prompt format different between the two models? I'm using Vicuna formatting for multi-turn conversations.
Thanks to the team for all the work they put into this model, btw; it's very impressive.
I've seen this happen when the prompt template isn't correct. Try checking that your prompts are in the format the model expects.
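For reference, a Vicuna-style multi-turn prompt is usually assembled roughly like this. This is a minimal sketch; the exact separators and any end-of-turn tokens vary by model, so treat the strings here as assumptions and check the model card:

```python
def build_vicuna_prompt(turns):
    """Assemble a Vicuna-style prompt.

    turns: list of (user_msg, assistant_msg) tuples, where assistant_msg
    is None for the final turn the model should complete.
    Hypothetical helper; separator strings are assumed, not confirmed.
    """
    prompt = ""
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}"
    return prompt

print(build_vicuna_prompt([("Hello!", None)]))
# USER: Hello! ASSISTANT:
```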
Ahh, that's what it is: I was using the USER: ... ASSISTANT: template for 13b, but it looks like 70b expects ### User:\nWrite a python flask code for login management\n\n### Assistant:\n. Switching to that format fixed it, thank you!
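In case it helps anyone else, here is a sketch of the "###" template with the newlines written out explicitly. The separators come from the example in this thread rather than from the model card, and the helper itself is hypothetical:

```python
def build_hash_prompt(turns):
    """Assemble the '### User:' / '### Assistant:' style prompt.

    turns: list of (user_msg, assistant_msg) tuples, where assistant_msg
    is None for the final turn the model should complete.
    Separator strings taken from the example quoted above; assumed correct.
    """
    prompt = ""
    for user_msg, assistant_msg in turns:
        prompt += f"### User:\n{user_msg}\n\n### Assistant:\n"
        if assistant_msg is not None:
            prompt += f"{assistant_msg}\n\n"
    return prompt

print(repr(build_hash_prompt([
    ("Write a python flask code for login management", None),
])))
# '### User:\nWrite a python flask code for login management\n\n### Assistant:\n'
```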
Out of curiosity, are the newlines needed? E.g., the \n after ### User: and ### Assistant:, and the \n\n after the response? This prompt-template stuff has been hard for me to understand while experimenting with LLMs.
Thanks again!