---
license: mit
datasets:
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
---

# GPT-2-dolly

**GPT-2-dolly** is an instruction fine-tuned model based on the GPT-2 transformer architecture.

### Benchmark Metrics

| Metric              | GPT-2-dolly | GPT-2 (base) |
|---------------------|-------------|--------------|
| Avg.                | 29.85       | **29.99**    |
| ARC (25-shot)       | 21.76       | **21.84**    |
| HellaSwag (10-shot) | 30.77       | **31.6**     |
| MMLU (5-shot)       | 24.66       | **25.86**    |
| TruthfulQA (0-shot) | **42.22**   | 40.67        |

We used the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmarks above, on the same version used for the HuggingFace Open LLM Leaderboard. A sketch for reproducing these results is included at the end of this card.

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **GPT-2-dolly** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English

### Prompt Template

```
### Instruction:

<prompt> (without the <>)

### Response:
```

### Training Dataset

`lgaalves/gpt2-dolly` was trained on the Databricks Dolly dataset [`databricks/databricks-dolly-15k`](https://huggingface.co./datasets/databricks/databricks-dolly-15k).

### Training Procedure

`lgaalves/gpt2-dolly` was instruction fine-tuned using LoRA on a single T4 GPU on Google Colab; training took about 1.5 hours. A sketch of such a setup is included at the end of this card.

# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it for a downstream task; see the usage example below. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.
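### Usage Example

The following is a minimal sketch of text generation with this model via the `transformers` library, using the prompt template above. The instruction text and the generation parameters (`max_new_tokens`, `do_sample`) are illustrative assumptions, not settings taken from this model's documentation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "lgaalves/gpt2-dolly"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Format the request with the prompt template from this card.
# The instruction below is an arbitrary example.
prompt = "### Instruction:\nName three uses of a language model.\n\n### Response:\n"

output = generator(prompt, max_new_tokens=128, do_sample=True)
print(output[0]["generated_text"])
```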
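### Reproducing the Benchmark Results

This is a minimal sketch, assuming the harness's Python API (`simple_evaluate`) as it existed in the v0.3.x releases used by the Open LLM Leaderboard; argument names vary across harness versions, so check the version you install. It runs only the 25-shot ARC task from the table above.

```python
from lm_eval import evaluator

# 25-shot ARC-Challenge, matching the "ARC (25-shot)" row above.
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=lgaalves/gpt2-dolly",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,  # illustrative; size to your GPU
)
print(results["results"])
```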
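### Fine-Tuning Sketch

The training procedure above mentions LoRA fine-tuning on the Dolly dataset. The snippet below is a minimal sketch of such a setup with the `peft` and `transformers` libraries; all hyperparameters (rank, alpha, learning rate, epochs, sequence length) are illustrative assumptions, since the actual training configuration was not published.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Wrap GPT-2 with LoRA adapters; rank and alpha are assumptions.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused QKV projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Render each Dolly record with the prompt template from this card.
def tokenize_example(example):
    text = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['response']}"
    )
    return tokenizer(text, truncation=True, max_length=512)

dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
dataset = dataset.map(tokenize_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-dolly",
        num_train_epochs=1,
        per_device_train_batch_size=8,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```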