|
--- |
|
license: apache-2.0 |
|
tags: |
|
- mistral |
|
- conversational |
|
- text-generation-inference |
|
base_model: BeaverAI/mistral-doryV2-12b |
|
library_name: transformers |
|
--- |
|
|
|
> [!WARNING] |
|
> **Sampling:**<br> |
|
> Mistral-Nemo-12B is very sensitive to temperature; start with values near **0.3**, or the output may become incoherent. MistralAI notes this in the [Transformers](https://huggingface.co./mistralai/Mistral-Nemo-Instruct-2407#transformers) section of their model card. <br>
|
> Flash-Attention also appears to cause odd behavior with this model, though this is unconfirmed.
|
|
|
**Original Model:** |
|
[BeaverAI/mistral-doryV2-12b](https://huggingface.co./BeaverAI/mistral-doryV2-12b) |
|
|
|
**How to Use:** |
|
[llama.cpp](https://github.com/ggerganov/llama.cpp) |
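The GGUF files below can be run with any llama.cpp-compatible runtime. As a minimal sketch, the snippet assumes the `llama-cpp-python` bindings and `huggingface_hub` are installed; the chosen quant, context size, and GPU offload values are illustrative, and the low temperature follows the sampling warning above.

```python
# Minimal sketch using llama-cpp-python (an assumption; any llama.cpp-compatible
# runtime works). The quant, n_ctx, and n_gpu_layers values are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed below (Q4_K_M as an example).
model_path = hf_hub_download(
    repo_id="starble-dev/mistral-doryV2-12b-gguf",
    filename="mistral-doryV2-12b-Q4_K_M.gguf",
)

# Load the GGUF file; offload all layers to GPU if available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Keep temperature low (~0.3), per the sampling warning above.
output = llm(
    "Write a short greeting.",
    max_tokens=128,
    temperature=0.3,
)
print(output["choices"][0]["text"])
```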
|
|
|
**License:** |
|
Apache 2.0 |
|
|
|
# Quants |
|
| Name | Quant Type | Size | |
|
| ---- | ---- | ---- | |
|
| [mistral-doryV2-12b-Q2_K.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b-Q2_K.gguf) | Q2_K | 4.79 GB | |
|
| [mistral-doryV2-12b-Q3_K_M.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b-Q3_K_M.gguf) | Q3_K_M | 6.08 GB | |
|
| [mistral-doryV2-12b-Q4_K_M.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b-Q4_K_M.gguf) | Q4_K_M | 7.48 GB | |
|
| [mistral-doryV2-12b-Q5_K_M.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b-Q5_K_M.gguf) | Q5_K_M | 8.73 GB | |
|
| [mistral-doryV2-12b-Q6_K.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b-Q6_K.gguf) | Q6_K | 10.1 GB | |
|
| [mistral-doryV2-12b-Q8_0.gguf](https://huggingface.co./starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b-Q8_0.gguf) | Q8_0 | 13 GB | |