---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Base-SFT-SimPO
---

# QuantFactory/Mistral-7B-Base-SFT-SimPO-GGUF
This is a quantized version of [princeton-nlp/Mistral-7B-Base-SFT-SimPO](https://huggingface.co./princeton-nlp/Mistral-7B-Base-SFT-SimPO) created using llama.cpp.
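
The GGUF files can be loaded with any llama.cpp-based runtime. Below is a minimal sketch using llama-cpp-python; the GGUF filename is an assumption and should be replaced with one of the quantization variants actually shipped in this repo.

```python
# Minimal sketch (not an official example) of running a GGUF file from this repo
# with llama-cpp-python. The filename below is hypothetical; substitute the
# quantization variant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Base-SFT-SimPO.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,  # context window
)

output = llm(
    "Explain preference optimization in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```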

# Model Description
This model is released with the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
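
For reference, a sketch of the reference-free objective described in the preprint (see the paper for the exact formulation and the hyperparameters \\(\beta\\) and \\(\gamma\\)):

$$
\mathcal{L}_{\text{SimPO}}(\pi_\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim \mathcal{D}}\left[\log \sigma\!\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) - \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) - \gamma\right)\right]
$$

where \\(y_w\\) and \\(y_l\\) are the preferred and dispreferred responses, and the length-normalized log-likelihood acts as the implicit reward, so no reference model is required.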