This is a llama-13B based model. (Sorry, I forgot to put it in the model name.)

Base Model: GPT4-x-Alpaca full fine tune by Chavinlo -> https://huggingface.co./chavinlo/gpt4-x-alpaca

LORA fine tune using the Roleplay Instruct from GPT4 generated dataset -> https://github.com/teknium1/GPTeacher/tree/main/Roleplay

LORA Adapter Only: https://huggingface.co./ZeusLabs/gpt4-x-alpaca-rp-lora/tree/main/gpt-rp-instruct-1

The LORA has been merged into the model.

FYI: the latest HF Transformers generates BROKEN generations with this model. If your generations are terrible, try this pinned commit instead (first uninstall transformers):

pip install git+https://github.com/huggingface/transformers@9eae4aa57650c1dbe1becd4e0979f6ad1e572ac0

Instruct it the same way as alpaca / gpt4xalpaca:

```
### Instruction:
### Response:
```

or

```
### Instruction:
### Input:
### Response:
```
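If you'd rather start from the adapter-only repo above and merge it into the base model yourself, here is a minimal sketch using peft. It assumes a recent peft version and that the adapter files (adapter_config.json and the adapter weights) have been downloaded to a local `./gpt-rp-instruct-1` folder; the local paths are just examples, not part of this repo.

```python
# Sketch: fold the LORA adapter into the GPT4-x-Alpaca base model.
# Assumes: pip install peft, and adapter files downloaded to ./gpt-rp-instruct-1
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the full fine-tuned base model linked above.
base = LlamaForCausalLM.from_pretrained("chavinlo/gpt4-x-alpaca")
tokenizer = LlamaTokenizer.from_pretrained("chavinlo/gpt4-x-alpaca")

# Attach the roleplay LORA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, "./gpt-rp-instruct-1")

# Merge the LORA weights into the base weights so inference
# no longer needs peft, then save the standalone merged model.
model = model.merge_and_unload()
model.save_pretrained("./gpt4-x-alpaca-rp-merged")  # example output path
tokenizer.save_pretrained("./gpt4-x-alpaca-rp-merged")
```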
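And a minimal sketch of prompting with the template above. The model path, roleplay instruction, and sampling settings are only examples; adjust them to your setup.

```python
# Sketch: generate with the Alpaca-style instruction template.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "./gpt4-x-alpaca-rp-merged"  # example path to the merged weights
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"  # needs accelerate
)

# Build the prompt exactly in the ### Instruction / ### Response format.
prompt = (
    "### Instruction:\n"
    "You are a grizzled pirate captain. Greet a new crew member.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```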