mathstral-7B-v0.1-iMat-GGUF

Quantized from fp16.

  • Weighted quantizations were created using the fp16 GGUF and groups_merged.txt over 105 chunks with n_ctx=512 (see the sketch below)
  • The static fp16 is also included in the repo
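
For reference, the commands below are a minimal sketch of how an importance matrix and a weighted quant like these can be produced with llama.cpp's tooling. File names are illustrative assumptions, and flags may differ on older llama.cpp builds:

```
# 1. Compute the importance matrix from the fp16 GGUF using groups_merged.txt,
#    processing 105 chunks at a context size of 512.
./llama-imatrix -m mathstral-7B-v0.1-f16.gguf \
    -f groups_merged.txt --chunks 105 -c 512 \
    -o mathstral-7B-v0.1.imatrix

# 2. Produce a weighted quant (Q4_K_M shown as an example) from the fp16 GGUF.
./llama-quantize --imatrix mathstral-7B-v0.1.imatrix \
    mathstral-7B-v0.1-f16.gguf mathstral-7B-v0.1-iMat-Q4_K_M.gguf Q4_K_M
```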

For a brief rundown of iMatrix quant performance, please see this PR.

All quants were verified as working before being uploaded to the repo, for your safety and convenience.

KL-Divergence Reference Chart

Tips: there's no need to download the entire repo; just pick one of the GGUF files (see the example below). As with other smaller 7B models, Q6 or larger is recommended for best results. On quants smaller than Q3, a repetition penalty of 1.05-1.3 and a min-p of 0.05 mitigated some issues, but set your expectations accordingly.
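
As an illustration, the snippet below downloads a single quant and runs it with llama.cpp using the sampling settings suggested above. The quant file name and prompt are assumptions; pick whichever quant suits your hardware:

```
# Download just one GGUF from the repo (file name is an assumed example).
huggingface-cli download InferenceIllusionist/mathstral-7B-v0.1-iMat-GGUF \
    mathstral-7B-v0.1-iMat-Q6_K.gguf --local-dir .

# Run it with llama.cpp; --repeat-penalty / --min-p are mainly useful
# on quants smaller than Q3, per the tip above.
./llama-cli -m mathstral-7B-v0.1-iMat-Q6_K.gguf \
    --repeat-penalty 1.1 --min-p 0.05 \
    -p "Solve: what is the derivative of x^2 sin(x)?"
```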

The original model card can be found here.

Format: GGUF
Model size: 7.25B params
Architecture: llama

Available quants: 1-bit through 8-bit (iMatrix-weighted), plus static 16-bit fp16.
