This is a LoRA adapter trained in 4-bit with 8k context for 1 epoch, using meta-llama/Meta-Llama-3-8B-Instruct as the base model.
The dataset is mpasila/LimaRP-PIPPA-Mix-8K-Context, which was made by combining grimulkan/LimaRP-augmented and KaraKaraWitch/PIPPA-ShareGPT-formatted.
This adapter was trained on the instruct model, not the base model. The version trained on the base model with the same dataset is available here: mpasila/Llama-3-LiPPA-LoRA-8B
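A minimal loading sketch with Unsloth (assuming Unsloth resolves the 4-bit base model from the adapter's config; the settings mirror the training values above):

```python
from unsloth import FastLanguageModel

# Load this LoRA adapter; Unsloth pulls in the recorded 4-bit base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mpasila/Llama-3-Instruct-LiPPA-LoRA-8B",
    max_seq_length=8192,  # trained with 8k context
    load_in_4bit=True,    # trained in 4-bit
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```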
Prompt format: Llama 3 Instruct
Note that Unsloth renamed the roles: `assistant` became `gpt` and `user` became `human`.
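Based on the note above, a single turn would look roughly like this (a sketch assuming the standard Llama 3 Instruct header layout with the renamed roles; the message text is illustrative):

```python
# Sketch of one turn in the Llama 3 Instruct format with renamed roles.
# Role headers use "human"/"gpt" per the Unsloth note above.
prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>human<|end_header_id|>\n\n"
    "Hello!<|eot_id|>"                             # example user turn
    "<|start_header_id|>gpt<|end_header_id|>\n\n"  # the model's reply is generated after this
)
```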
Uploaded model
- Developed by: mpasila
- License: Llama 3 Community License
- Finetuned from model: unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.