
Tess-3-7B-SFT-GGUF Quants

I used llama.cpp release b3166 to make several quants for the FLOP poor. Visit the original model card for more details: https://huggingface.co./migtissera/Tess-3-7B-SFT

Quants

| Filename | Quant type | File Size |
| --- | --- | --- |
| Tess-3-7B-SFT-f16.gguf | F16 | 14.5 GB |
| Tess-3-7B-SFT-gguf-q8_0.gguf | Q8_0 | 7.7 GB |
| Tess-3-7B-SFT-gguf-q6_k.gguf | Q6_K | 5.95 GB |
| Tess-3-7B-SFT-gguf-q5_k_s.gguf | Q5_K_S | 5 GB |
| Tess-3-7B-SFT-gguf-q4_k_m.gguf | Q4_K_M | 4.37 GB |
| Tess-3-7B-SFT-gguf-q4_k_s.gguf | Q4_K_S | 4.14 GB |
| Tess-3-7B-SFT-gguf-iq4_xs.gguf | IQ4_XS | 3.95 GB |
| Tess-3-7B-SFT-gguf-q3_k_l.gguf | Q3_K_L | 3.83 GB |
| Tess-3-7B-SFT-gguf-q3_k_m.gguf | Q3_K_M | 3.52 GB |
| Tess-3-7B-SFT-gguf-iq3_m.gguf | IQ3_M | 3.29 GB |
| Tess-3-7B-SFT-gguf-q3_k_s.gguf | Q3_K_S | 3.17 GB |
| Tess-3-7B-SFT-gguf-iq3_xs.gguf | IQ3_XS | 3.02 GB |
| Tess-3-7B-SFT-gguf-q2_k.gguf | Q2_K | 2.72 GB |
| Tess-3-7B-SFT-gguf-iq3_xxs.gguf | IQ3_XXS | 136 MB |
| Tess-3-7B-SFT-gguf-iq2_m.gguf | IQ2_M | 83.7 MB |
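As a rough way to compare quants, you can estimate the effective bits per weight from the file size in the table and the 7.25B parameter count. A minimal sketch (the helper function is illustrative, not part of any library):

```python
# Estimate effective bits per weight for a quant file.
# Sizes come from the table above; 7.25e9 params from the model metadata.
def bits_per_weight(file_size_gb: float, n_params: float) -> float:
    bits = file_size_gb * 1e9 * 8  # GB -> bytes -> bits
    return bits / n_params

print(round(bits_per_weight(14.5, 7.25e9), 1))  # F16 -> 16.0
print(round(bits_per_weight(4.37, 7.25e9), 2))  # Q4_K_M -> 4.82
```

The Q4_K_M figure lands slightly above 4 bits because K-quants mix in higher-precision blocks for sensitive tensors.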

Downloading using huggingface-cli

```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli download juvi21/Tess-3-7B-SFT-GGUF --include "Tess-3-7B-SFT-gguf-q4_k_m.gguf" --local-dir ./
```
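If you prefer to fetch a quant from Python instead of the CLI, `huggingface_hub` exposes the same download path via `hf_hub_download`. A sketch (the `fetch_quant` wrapper is my own naming, not part of the library):

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

def fetch_quant(quant_file: str = "Tess-3-7B-SFT-gguf-q4_k_m.gguf") -> str:
    """Download one quant file from the repo and return its local path."""
    return hf_hub_download(
        repo_id="juvi21/Tess-3-7B-SFT-GGUF",
        filename=quant_file,
        local_dir="./",
    )

if __name__ == "__main__":
    print(fetch_quant())  # note: Q4_K_M is ~4.37 GB
```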
Model details

Format: GGUF
Model size: 7.25B params
Architecture: llama
