Fine tuning process for "<|im_start|>" and "<|im_end|>" tokens

#1
by ccdv - opened

Hi,
Your continuous fine-tuning technique requires fine-tuning the base model with LoRA.
Do you also fine-tune the embedding layer and the lm_head with LoRA?
It's difficult to match the instruct template without training the "<|im_start|>" and "<|im_end|>" tokens.
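For context, this is the kind of setup I have in mind (a minimal PEFT sketch, not necessarily your actual config):

```python
from peft import LoraConfig

# Hypothetical example: LoRA on the attention/MLP projections, with the
# embedding layer and lm_head trained fully via modules_to_save, so the
# rows for "<|im_start|>" and "<|im_end|>" actually get updated.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
```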

Thank you.

ccdv changed discussion title from Fine tuning process to Fine tuning process for "<|im_start|>" and "<|im_end|>" tokens

It just depends on the prompt template you are using. Generally you want to match the tokens of the target model's prompt template for ease of use, so I'd recommend fine-tuning with the same tokens, yes.
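For example, you can build your training text with the instruct tokenizer's chat template so the exact same "<|im_start|>" / "<|im_end|>" markers appear in your data (just a sketch; the model name here is a placeholder, not our exact setup):

```python
from transformers import AutoTokenizer

# Reuse the instruct model's chat template so training examples carry the
# same ChatML markers the target prompt template expects.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model
messages = [
    {"role": "user", "content": "What is LoRA?"},
    {"role": "assistant", "content": "A low-rank adaptation method for fine-tuning."},
]
text = tok.apply_chat_template(messages, tokenize=False)
print(text)  # contains <|im_start|>user ... <|im_end|> markers
```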

Thank you.
Can you share the LoRA config you used with Unsloth, please?
I'm trying to fine-tune a small Qwen base model on a niche dataset, but I can't find how to train the chat-template tokens.
