---
base_model: v000000/L3.1-Celestial-Stone-2x8B-DPO
library_name: transformers
tags:
- merge
- llama
- mixtral
- dpo
- llama-cpp
---
# L3.1-Celestial-Stone-2x8B-DPO (GGUFs)
This model was converted to GGUF format from [`v000000/L3.1-Celestial-Stone-2x8B-DPO`](https://huggingface.co./v000000/L3.1-Celestial-Stone-2x8B-DPO) using llama.cpp.
Refer to the [original model card](https://huggingface.co./v000000/L3.1-Celestial-Stone-2x8B-DPO) for more details on the model.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/PIC3kb7XL2f14YhLkrRsm.png)
# Ordered by quality:
* q8_0 (imatrix) --- 14.2 GB
* q6_k (imatrix) --- 11.2 GB
* q5_k_s (imatrix) --- 9.48 GB
* iq4_xs (imatrix) --- 7.44 GB
Need a quant that isn't listed here? See [mradermacher i1](https://huggingface.co./mradermacher/L3.1-Celestial-Stone-2x8B-DPO-i1-GGUF) for more imatrix quant types.
<i>imatrix calibration data (V2, 287 KB): randomized bartowski and kalomeze groups, ERP/RP snippets, working GPT-4 code, toxic QA, human messaging, randomized posts, stories, and novels.</i>
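
# Running a quant with llama-cpp-python
A minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the file name below is a placeholder, so point `model_path` at whichever quant you downloaded from this repo.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path -- replace with the quant you downloaded (e.g. the iq4_xs or q8_0 file).
llm = Llama(
    model_path="./L3.1-Celestial-Stone-2x8B-DPO.iq4_xs.gguf",
    n_ctx=8192,       # context window; raise it if you have the memory for it
    n_gpu_layers=-1,  # offload all layers to GPU when available; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set on a moonlit cliff."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```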