Finetune Mistral, Gemma, and Llama 2-5x faster with 70% less memory via Unsloth!
We have a Google Colab Tesla T4 notebook for Llama-3 8b here: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing. Built with Meta Llama 3.
✨ Finetune for Free
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF or vLLM, or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3 8b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▶️ Start on Kaggle | 5x faster\* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
- This conversational notebook is useful for ShareGPT ChatML / Vicuna templates.
- This text completion notebook is for raw text.
- This DPO notebook replicates Zephyr.
- \* Kaggle provides 2x T4 GPUs, but we use only 1; due to multi-GPU overhead, the 1x T4 setup is 5x faster.
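For readers unsure what "ShareGPT ChatML" data looks like: a ShareGPT record is a list of `{"from": ..., "value": ...}` turns, and the ChatML template wraps each turn in `<|im_start|>`/`<|im_end|>` markers. The sketch below is an illustration of that mapping, not Unsloth's own preprocessing code; the function name and role mapping are hypothetical.

```python
# Illustrative sketch: render a ShareGPT-style record as ChatML text.
# (Not Unsloth's internal code; function name and role_map are assumptions.)

def sharegpt_to_chatml(record):
    """Convert a ShareGPT conversation dict into a single ChatML string."""
    role_map = {"human": "user", "gpt": "assistant", "system": "system"}
    parts = []
    for turn in record["conversations"]:
        role = role_map.get(turn["from"], turn["from"])
        parts.append(f"<|im_start|>{role}\n{turn['value']}<|im_end|>")
    return "\n".join(parts)

example = {
    "conversations": [
        {"from": "human", "value": "What is 2 + 2?"},
        {"from": "gpt", "value": "2 + 2 = 4."},
    ]
}
print(sharegpt_to_chatml(example))
```

Once data is in this shape, the conversational notebook's templating step handles the rest; the text completion notebook instead takes raw untemplated text.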