---
base_model: mistralai/Mistral-7B-v0.3
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---
# Mistral-7B-v0.3-IMat-GGUF
_Llama.cpp imatrix quantization of mistralai/Mistral-7B-v0.3_
Original Model: [mistralai/Mistral-7B-v0.3](https://huggingface.co./mistralai/Mistral-7B-v0.3)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3003](https://github.com/ggerganov/llama.cpp/releases/tag/b3003)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
- [Mistral-7B-v0.3-IMat-GGUF](#mistral-7b-v0-3-imat-gguf)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Mistral-7B-v0.3.Q8_0.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q8_0.gguf) | Q8_0 | 7.70GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q6_K.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q6_K.gguf) | Q6_K | 5.95GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q4_K.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q4_K.gguf) | Q4_K | 4.37GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.Q3_K.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q3_K.gguf) | Q3_K | 3.52GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.Q2_K.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q2_K.gguf) | Q2_K | 2.72GB | ✅ Available | 🟢 Yes | 📦 No |
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Mistral-7B-v0.3.FP16.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.FP16.gguf) | F16 | 14.50GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.BF16.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.BF16.gguf) | BF16 | 14.50GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q5_K.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q5_K.gguf) | Q5_K | 5.14GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q5_K_S.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.00GB | ✅ Available | ⚪ No | 📦 No |
| [Mistral-7B-v0.3.Q4_K_S.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.14GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.Q3_K_L.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q3_K_L.gguf) | Q3_K_L | 3.83GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.Q3_K_S.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.17GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.Q2_K_S.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.Q2_K_S.gguf) | Q2_K_S | 2.53GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.IQ4_NL.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.IQ4_NL.gguf) | IQ4_NL | 4.13GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.IQ4_XS.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.IQ4_XS.gguf) | IQ4_XS | 3.91GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.IQ3_M.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.IQ3_M.gguf) | IQ3_M | 3.29GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.IQ3_S.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.IQ3_S.gguf) | IQ3_S | 3.19GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.IQ3_XS.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.IQ3_XS.gguf) | IQ3_XS | 3.02GB | ✅ Available | 🟢 Yes | 📦 No |
| [Mistral-7B-v0.3.IQ3_XXS.gguf](https://huggingface.co./legraphista/Mistral-7B-v0.3-IMat-GGUF/blob/main/Mistral-7B-v0.3.IQ3_XXS.gguf) | IQ3_XXS | 2.83GB | ✅ Available | 🟢 Yes | 📦 No |
| Mistral-7B-v0.3.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 Yes | - |
| Mistral-7B-v0.3.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 Yes | - |
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF --include "Mistral-7B-v0.3.Q8_0.gguf" --local-dir ./
```
If the model file is large, it has been split into multiple files. To download them all into a local folder, run:
```
huggingface-cli download legraphista/Mistral-7B-v0.3-IMat-GGUF --include "Mistral-7B-v0.3.Q8_0/*" --local-dir Mistral-7B-v0.3.Q8_0
# see FAQ for merging GGUFs
```
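If you prefer Python, the same downloads can be scripted with the `huggingface_hub` library. This is a minimal sketch, not part of this repo; the quant filename is just an example:
```python
# Minimal sketch: download quants with the huggingface_hub Python library
# (equivalent to the CLI commands above).
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant:
path = hf_hub_download(
    repo_id="legraphista/Mistral-7B-v0.3-IMat-GGUF",
    filename="Mistral-7B-v0.3.Q8_0.gguf",
    local_dir=".",
)
print(path)

# Split quant: fetch every chunk in the subfolder (see FAQ for merging).
snapshot_download(
    repo_id="legraphista/Mistral-7B-v0.3-IMat-GGUF",
    allow_patterns=["Mistral-7B-v0.3.Q8_0/*"],
    local_dir=".",
)
```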
---
## Inference
### Llama.cpp
```
llama.cpp/main -m Mistral-7B-v0.3.Q8_0.gguf --color -i -p "prompt here"
```
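For programmatic use, the same GGUF also loads with the community `llama-cpp-python` bindings. A minimal sketch, assuming `pip install llama-cpp-python` and the quant in the working directory; parameters are illustrative:
```python
# Sketch: run the quant through the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-v0.3.Q8_0.gguf",
    n_ctx=4096,  # context window; adjust to your needs and available RAM
)

out = llm("prompt here", max_tokens=128)
print(out["choices"][0]["text"])
```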
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), lower-bit quantizations appear to be the only ones that benefit from the imatrix input (based on HellaSwag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Mistral-7B-v0.3.Q8_0`)
3. Run `gguf-split --merge Mistral-7B-v0.3.Q8_0/Mistral-7B-v0.3.Q8_0-00001-of-XXXXX.gguf Mistral-7B-v0.3.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
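If you would rather script steps 2 and 3, here is a minimal sketch using only the Python standard library. It assumes `gguf-split` is on your PATH and the chunk folder is named as in step 2:
```python
# Sketch: merge split GGUF chunks by shelling out to gguf-split.
# Assumes chunks follow the *-00001-of-*.gguf naming used by llama.cpp.
import glob
import subprocess

chunks = glob.glob("Mistral-7B-v0.3.Q8_0/*-00001-of-*.gguf")
if not chunks:
    raise SystemExit("first chunk not found")

# gguf-split locates the remaining chunks from the first one.
subprocess.run(
    ["gguf-split", "--merge", chunks[0], "Mistral-7B-v0.3.Q8_0.gguf"],
    check=True,
)
```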
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |