
Mistral 7B Zephyr DPO V2

The Zephyr DPO recipe applied on top of Mistral 7B (new recipe with ChatML format)

Model description

  • Model type: A 7.2B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
  • Language(s) (NLP): Primarily English
  • Finetuned from model: wandb/mistral-7b-zephyr-sft
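Below is a minimal usage sketch for chatting with the model via the standard transformers API. Since the card notes the model was trained with the ChatML format, the tokenizer's chat template should handle the prompt formatting; the generation settings shown are illustrative assumptions, not values from this card.

```python
# Minimal usage sketch (generation settings are assumed, not from this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wandb/mistral-7b-zephyr-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain DPO in one sentence."},
]
# apply_chat_template renders the conversation with the model's chat
# template (ChatML here) and appends the assistant generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```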

Recipe

We trained using the Alignment Handbook recipe, logging to W&B

Visit the W&B workspace here

Compute provided by Lambda Labs: one 8xA100 80GB node
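For readers who want the shape of the recipe in code, here is a rough sketch of a DPO run with TRL's DPOTrainer, which the Alignment Handbook builds on (assuming a recent TRL version where the tokenizer is passed as processing_class). The dataset and hyperparameters below are assumptions drawn from the public Zephyr recipe, not the exact values used for this model.

```python
# Rough DPO sketch with TRL; dataset and hyperparameters are assumptions
# taken from the public Zephyr recipe, not this exact run.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "wandb/mistral-7b-zephyr-sft"  # the SFT model this card lists
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Zephyr's DPO stage used binarized preference pairs (chosen/rejected).
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="mistral-7b-zephyr-dpo",
    beta=0.1,                       # KL penalty strength (assumed)
    per_device_train_batch_size=2,  # assumed
    report_to="wandb",              # log the run to W&B, as in the card
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```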

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                             | Value |
|------------------------------------|-------|
| Avg.                               | 63.22 |
| AI2 Reasoning Challenge (25-shot)  | 63.05 |
| HellaSwag (10-shot)                | 85.54 |
| MMLU (5-shot)                      | 61.88 |
| TruthfulQA (0-shot)                | 59.30 |
| Winogrande (5-shot)                | 78.53 |
| GSM8K (5-shot)                     | 31.01 |
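To reproduce numbers in this style, one would typically run EleutherAI's lm-evaluation-harness. A hedged sketch of its Python entry point follows; the task name and few-shot count are assumptions matching the table, and the leaderboard's exact harness version and settings may differ.

```python
# Sketch of re-running one leaderboard-style eval with lm-eval (v0.4+).
# Task name / few-shot count are assumptions matching the table above;
# the leaderboard's exact harness version and flags may differ.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=wandb/mistral-7b-zephyr-dpo,dtype=bfloat16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```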
