This model is a general-capability upgrade of Mistral-7B, trained on open-source data to improve multilingual ability, general knowledge, extended communication, and technical skill.

This model is recommended primarily as a stronger-than-Mistral-7B baseline for further finetuning, not for direct production deployment as a chat model. The user accepts full responsibility for all outputs.
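Since this checkpoint derives from Mistral-7B, prompts for quick evaluation or SFT data preparation would plausibly use the Mistral `[INST]` chat template. Whether this particular finetune kept that template is an assumption; the `build_prompt` helper below is a hypothetical sketch, and you should check the repo's tokenizer configuration before relying on it.

```python
# Hypothetical helper: formats a conversation in the Mistral-style [INST]
# chat template. Assumes this finetune inherited the base model's template,
# which this card does not confirm.
def build_prompt(turns):
    """turns: list of (user, assistant) pairs; the final assistant
    reply may be None when prompting for a new completion."""
    out = "<s>"
    for user, assistant in turns:
        out += f"[INST] {user} [/INST]"
        if assistant is not None:
            # Completed turns are closed with the end-of-sequence token.
            out += f" {assistant}</s>"
    return out

prompt = build_prompt([("Hello!", None)])
# → "<s>[INST] Hello! [/INST]"
```

The resulting string can be tokenized and fed to the model for generation, or used to serialize conversation datasets for further finetuning.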

Safetensors · Model size: 7.24B params · Tensor type: FP16
Model tree for adonlee/Mistral_7B_SFT_DPO_v0
Quantizations: 2 models
Spaces using this model: 6