I tried to fine-tune the model with LoRA while also adding special tokens, but after fine-tuning, the model's output looks weird. I suspect this is because the special tokens were never trained, so the model cannot handle their embeddings well.
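
For reference, here is a rough sketch of what I think would need to change: making the embedding and output layers trainable alongside the LoRA adapters so the new token embeddings actually receive gradient updates. This assumes a Hugging Face causal LM with PEFT; the model name and token strings below are just placeholders, not my actual setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Add the new special tokens and grow the embedding matrix to match.
tokenizer.add_special_tokens({"additional_special_tokens": ["<tool_call>", "</tool_call>"]})  # placeholder tokens
model.resize_token_embeddings(len(tokenizer))

# modules_to_save marks the (resized) embedding and output layers as fully
# trainable and saves them with the adapter, so the new token embeddings get
# updated instead of staying at their random initialization.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Is this the right approach, or is there a better way to handle newly added special tokens with LoRA?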