Chat template?
Just to be sure: does this model use the Mistral chat template? The model card doesn't mention it explicitly, but seems to imply it.
i.e. `<|im_start|>` / `<|im_end|>`
The model uses a custom chat template, which is described in the tokenizer_config.json (https://huggingface.co./h2oai/h2o-danube2-1.8b-chat/blob/main/tokenizer_config.json#L33). This is picked up automatically when using the transformers pipeline.
For more information about chat templates, visit https://huggingface.co./docs/transformers/main/en/chat_templating
Added an example to the model card: `<|prompt|>Why is drinking water so healthy?</s><|answer|>`
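For anyone who can't go through the transformers pipeline, here is a minimal sketch (TypeScript, assuming a fetch-capable environment such as a browser or Node 18+) that pulls the template string straight from the repo's tokenizer_config.json. The `fetchChatTemplate` helper is my own; the `raw/main` URL pattern is Hugging Face's standard raw-file path.

```ts
// Sketch: read the chat template for a repo directly from its
// tokenizer_config.json on the Hugging Face Hub.
async function fetchChatTemplate(repo: string): Promise<string> {
  const url = `https://huggingface.co./${repo}/raw/main/tokenizer_config.json`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Failed to fetch ${url}: ${res.status}`);
  const config = await res.json();
  // The Jinja template string lives under the "chat_template" key.
  return config.chat_template as string;
}

fetchChatTemplate("h2oai/h2o-danube2-1.8b-chat").then(console.log);
```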
Thanks, now I know where to look for that stuff!
Unfortunately I can't use the transformers pipeline to manage this, since I'm trying to incorporate Danube into a 100% browser-based project. So I have to build the templating engine myself. That's why a clear example of what a fully assembled prompt should look like would be so helpful.
For example, see https://github.com/ngxson/wllama
Whoa, psinger read my mind :-) Thanks!
Is it all on one line? So even if there are multiple questions and answers, it remains on one line?
yeah one line:
`<|prompt|>Why is drinking water so healthy?</s><|answer|>It is healthy...</s><|prompt|>ok but....</s><|answer|>`
etc
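For anyone rolling their own templating engine in the browser, a minimal sketch of that format in TypeScript. The `Message` type and `buildDanubePrompt` name are placeholders of mine; the special tokens (`<|prompt|>`, `<|answer|>`, `</s>`) are the ones shown above.

```ts
// Sketch of a hand-rolled formatter for the h2o-danube2 chat template,
// reproducing the one-line multi-turn string from the example above.
interface Message {
  role: "user" | "assistant";
  content: string;
}

function buildDanubePrompt(messages: Message[]): string {
  let prompt = "";
  for (const m of messages) {
    if (m.role === "user") {
      prompt += `<|prompt|>${m.content}</s>`;
    } else {
      prompt += `<|answer|>${m.content}</s>`;
    }
  }
  // End with the answer tag so the model continues as the assistant.
  return prompt + "<|answer|>";
}

// Example: rebuilds the multi-turn prompt shown above.
console.log(buildDanubePrompt([
  { role: "user", content: "Why is drinking water so healthy?" },
  { role: "assistant", content: "It is healthy..." },
  { role: "user", content: "ok but...." },
]));
```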
My compliments on the model, by the way. Now that the prompt is working, it's doing really well for its size (Q5), and it has become a favourite of mine. I hope you will make a version with an even bigger context.