A DPO fine-tune of mhm-7b-v1.3 on Intel/orca_dpo_pairs.
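
This card does not include the training script. Below is a minimal, illustrative DPO sketch using TRL's `DPOTrainer` on Intel/orca_dpo_pairs; the base checkpoint id, hyperparameters, and prompt formatting are assumptions, and exact `DPOConfig`/`DPOTrainer` argument names vary between TRL releases.

```python
# Illustrative DPO fine-tuning sketch (not the author's actual script).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "h2m/mhm-7b-v1.3"  # assumed repo id of the merged base model

model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Intel/orca_dpo_pairs provides "system", "question", "chosen", "rejected";
# DPOTrainer expects "prompt", "chosen", "rejected".
raw = load_dataset("Intel/orca_dpo_pairs", split="train")
train_dataset = raw.map(
    lambda row: {
        "prompt": f"{row['system']}\n\n{row['question']}",
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    },
    remove_columns=raw.column_names,
)

args = DPOConfig(
    output_dir="mhm-7b-v1.3-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    beta=0.1,  # strength of the preference regularisation toward the reference model
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL releases use tokenizer= instead
)
trainer.train()
```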

Based on Mistral. Created using dare_ties and models from the Open LLM Leaderboard; over 3 merges involving 7 different models, this was the result.
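
The exact merge recipe is not listed here. As a rough illustration only, the sketch below writes a dare_ties configuration in mergekit's format and hands it to the `mergekit-yaml` CLI; the donor models, densities, and weights are hypothetical placeholders, not the 7 models actually used.

```python
# Hypothetical dare_ties merge configuration, written out for the mergekit CLI.
import yaml

merge_config = {
    "merge_method": "dare_ties",
    "base_model": "mistralai/Mistral-7B-v0.1",
    "models": [
        {"model": "mistralai/Mistral-7B-v0.1"},  # base model, no parameters needed
        {
            "model": "placeholder/leaderboard-model-a",  # hypothetical donor
            "parameters": {"density": 0.53, "weight": 0.5},
        },
        {
            "model": "placeholder/leaderboard-model-b",  # hypothetical donor
            "parameters": {"density": 0.53, "weight": 0.5},
        },
    ],
    "dtype": "float16",
}

with open("merge-config.yml", "w", encoding="utf-8") as fp:
    yaml.safe_dump(merge_config, fp, sort_keys=False)

# Then run the merge with:
#   mergekit-yaml merge-config.yml ./merged-model
```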

Just an experiment.
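
A minimal usage sketch with transformers, assuming the final checkpoint is published as `h2m/mhm-7b-v1.3-DPO-1` and loaded in FP16; the prompt and generation settings are only examples.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "h2m/mhm-7b-v1.3-DPO-1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate
)

prompt = "Explain the idea behind DPO fine-tuning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```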
