---
license: apache-2.0
tags:
- mistral
- conversational
- text-generation-inference
base_model: BeaverAI/mistral-doryV2-12b
library_name: transformers
---
> [!WARNING]
> **Sampling:**<br>
> Mistral-Nemo-12B is very sensitive to the temperature setting; start with values near **0.3**, otherwise you may get strange results. MistralAI mentions this in the [Transformers](https://huggingface.co./mistralai/Mistral-Nemo-Instruct-2407#transformers) section of their model card.<br>
> Flash-Attention also appears to have some odd effects with this model, though this is unconfirmed.
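
As a minimal sketch of the low-temperature recommendation, the example below uses the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings (the bindings, model path, and prompt are assumptions; any llama.cpp frontend with a temperature option works the same way):

```python
# Minimal sketch: run a GGUF quant of this model with temperature ~0.3,
# per the sampling warning above. Assumes llama-cpp-python is installed
# and a quant file has already been downloaded.
from llama_cpp import Llama

llm = Llama(model_path="mistral-doryV2-12b_Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Write two sentences about rivers.",  # placeholder prompt
    max_tokens=128,
    temperature=0.3,  # much higher values can produce weird results
)
print(out["choices"][0]["text"])
```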
**Original Model:**
[BeaverAI/mistral-doryV2-12b](https://huggingface.co./BeaverAI/mistral-doryV2-12b)

**How to Use:**
[llama.cpp](https://github.com/ggerganov/llama.cpp)
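
If you prefer to fetch a quant programmatically, here is a minimal sketch using the `huggingface_hub` client (the Q4_K_M choice is arbitrary; any filename from the table below works). The returned local path can then be passed to llama.cpp:

```python
# Minimal sketch: download a quant from this repo with huggingface_hub,
# then hand the local path to llama.cpp (e.g. llama-cli -m <path>).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="starble-dev/mistral-doryV2-12b-gguf",
    filename="mistral-doryV2-12b_Q4_K_M.gguf",  # pick any entry from the table below
)
print(path)
```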

**License:**
Apache 2.0
# Quants
| Name | Quant Type | Size |
| ---- | ---- | ---- |
| [mistral-doryV2-12b_Q2_K.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q2_K.gguf) | Q2_K | 4.79 GB |
| [mistral-doryV2-12b_Q3_K_M.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q3_K_M.gguf) | Q3_K_M | 6.08 GB |
| [mistral-doryV2-12b_Q4_K_M.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q4_K_M.gguf) | Q4_K_M | 7.48 GB |
| [mistral-doryV2-12b_Q5_K_M.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q5_K_M.gguf) | Q5_K_M | 8.73 GB |
| [mistral-doryV2-12b_Q6_K.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q6_K.gguf) | Q6_K | 10.1 GB |
| [mistral-doryV2-12b_Q8_0.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q8_0.gguf) | Q8_0 | 13.0 GB |