This is a simple experiment: one epoch of German ORPO training with QLoRA and Unsloth on Vezora/Mistral-22B-v0.2. A sketch of the setup is shown below.
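The following is a minimal sketch of such a run, assuming TRL's ORPOTrainer on top of Unsloth's 4-bit loading. The German preference dataset path, LoRA rank, batch size, and learning rate are illustrative placeholders, not the exact values used for this model.

```python
# Sketch: ORPO fine-tuning with QLoRA via Unsloth (hyperparameters are assumptions).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer

# Load the base model in 4-bit (QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Vezora/Mistral-22B-v0.2",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# A German preference dataset with "prompt", "chosen", "rejected" columns (placeholder name).
dataset = load_dataset("path/to/german_preference_dataset", split="train")

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(
        output_dir="mistral22b_orpo_de",
        num_train_epochs=1,              # one epoch, as stated above
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=8e-6,
        beta=0.1,                        # ORPO odds-ratio weight; illustrative value
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```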

Model size: 22.2B parameters (Safetensors, BF16).
