A LoRA trained for 1 epoch in 4-bit with 8k context, using mistralai/Mistral-Nemo-Base-2407 as the base model.

The dataset used is mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K, which was made by combining grimulkan/LimaRP-augmented, KaraKaraWitch/PIPPA-ShareGPT-formatted, and openerotica/freedom-rp.

Merged from this LoRA: mpasila/Mistral-freeLiPPA-12B
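To run the merged model locally, the standard Transformers API should suffice. The sketch below is illustrative: the repo name and BF16 dtype come from this card, while the generation settings are assumptions, and the prompt uses the ChatML format described below.

```python
# Minimal inference sketch using plain Transformers.
# Generation settings are illustrative assumptions, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Mistral-freeLiPPA-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# Build a prompt in the ChatML format described below.
prompt = (
    "<|im_start|>system\nYou are a roleplay partner.<|im_end|>\n"
    "<|im_start|>user\nHello there!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```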

Prompt format: ChatML

The prompt format was changed to ChatML, since using the Llama 3 Instruct template on a Mistral model could be confusing.
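For reference, ChatML wraps every turn in <|im_start|> and <|im_end|> markers, with the role name on the first line:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{model reply}<|im_end|>
```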

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-nemo-base-2407-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
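As a rough illustration of that pipeline, the sketch below shows a 4-bit LoRA run with Unsloth and TRL's SFTTrainer. Only the base model, 4-bit loading, 8k context, the dataset, and the single epoch come from this card; the LoRA rank, alpha, target modules, batch size, learning rate, and text field are assumptions, and the exact SFTTrainer signature varies across TRL versions.

```python
# Hedged sketch of the 4-bit LoRA training setup described above.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-nemo-base-2407-bnb-4bit",
    max_seq_length=8192,   # 8k context, per this card
    load_in_4bit=True,     # 4-bit training, per this card
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumed LoRA rank
    lora_alpha=16,   # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)

# Dataset name from this card; split and text field are assumptions.
dataset = load_dataset("mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumed field name
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # assumed
        gradient_accumulation_steps=4,   # assumed
        num_train_epochs=1,              # 1 epoch, per this card
        learning_rate=2e-4,              # assumed
        optim="adamw_8bit",
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```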
