---
license: gpl
---
This model uses https://huggingface.co./eachadea/vicuna-13b-1.1 as its base.
Finetuned on Teknium's GPTeacher dataset, the unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset.
Approximately 180k instructions in total, all generated by GPT-4 and all cleaned of OpenAI censorship ("As an AI language model", etc.).
The base model still carries OpenAI censorship. A new version will be released soon, trained on a cleaned Vicuna base from https://huggingface.co./datasets/anon8231489123/ShareGPT_Vicuna_unfiltere
Trained on 8x A100 80GB GPUs for 5 epochs, following the Alpaca DeepSpeed training code.
The Nous Research Instruct Dataset will be released soon.
- GPTeacher and Roleplay v2 by https://huggingface.co./teknium
- WizardLM by https://github.com/nlpxucan
- Nous Research Instruct Dataset by https://huggingface.co./karan4d and https://huggingface.co./huemin
Compute provided by our project sponsor, https://redmond.ai/