---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Instruct-DPO
---
# QuantFactory/Mistral-7B-Instruct-DPO-GGUF
This is a quantized version of princeton-nlp/Mistral-7B-Instruct-DPO, created using llama.cpp.
## Model Description
This model was released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.
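A minimal sketch of downloading and running one of the GGUF files with llama.cpp. The exact filename and quantization level (`Q4_K_M` here) are assumptions — check the repository's file list for the variants actually published:

```shell
# Download one quantized file from the repo.
# NOTE: the filename below is an assumed example; list the repo's
# files on the Hub to see the available quantization levels.
huggingface-cli download QuantFactory/Mistral-7B-Instruct-DPO-GGUF \
  Mistral-7B-Instruct-DPO.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (build llama.cpp first to get llama-cli).
./llama-cli -m Mistral-7B-Instruct-DPO.Q4_K_M.gguf \
  -p "Explain preference optimization in one sentence." -n 128
```

Lower quantization levels (e.g. Q2, Q3) trade output quality for a smaller download and memory footprint; higher ones (Q6, Q8) stay closer to the full-precision model.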