
QLoRA fine-tune of Mixtral-8x22B-v0.1 on a combination of the Capybara and Airoboros datasets.

Uses the Mistral instruct format, like this: `[INST] Describe quantum computing to a layperson. [/INST]`
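
A minimal generation sketch with transformers (the 4-bit bitsandbytes load is an assumption to keep memory manageable; any loading scheme that fits your hardware works):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tdrussell/Mixtral-8x22B-Capyboros-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

# The user turn goes between [INST] and [/INST]; the model's reply follows [/INST].
prompt = "[INST] Describe quantum computing to a layperson. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```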

Model details:

  • Trained with QLoRA on four RTX 4090s, using my own qlora-pipe training script
  • LoRA rank 64 (an equivalent peft config is sketched after this list)
  • 4096-token sequence length
  • 2 epochs
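
For reference, the settings above roughly correspond to a peft LoraConfig like the following. This is a sketch, not the author's qlora-pipe configuration; the alpha, dropout, and target modules are assumptions, as the card does not state them:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                      # LoRA rank, as listed above
    lora_alpha=64,             # assumption: alpha is not stated on this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.0,          # assumption
    task_type="CAUSAL_LM",
)
```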

You can find the LoRA adapter files here. I have also uploaded a single quant (GGUF q4_k_s) here if you want to try it without quantizing yourself or waiting for someone else to make all the quants. It fits, with at least 16k of context, in 96GB of VRAM.
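
A minimal sketch of applying the adapter to the base model with peft (the base-model repo id and the adapter path below are placeholders; the links above point at the actual files):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Base-model repo id is an assumption; see the Mixtral-8x22B-v0.1 card for the official source.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x22B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Hypothetical path: point this at the downloaded LoRA adapter files.
model = PeftModel.from_pretrained(base, "path/to/capyboros-lora")
```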
