Felladrin / gguf-Lite-Mistral-150M-v2-Instruct
Likes: 0
Tags: GGUF, Inference Endpoints, imatrix, conversational
Files and versions (branch: main) · 1 contributor · History: 6 commits
Latest commit: Felladrin · "Upload folder using huggingface_hub" · 071a511 · verified · 4 months ago
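Every commit in this repository carries the message "Upload folder using huggingface_hub". For context, here is a minimal sketch of that kind of upload with the huggingface_hub Python client; the local folder path is a hypothetical placeholder, and an authenticated token (for example via `huggingface-cli login`) is assumed.

```python
# Sketch: upload a local folder of GGUF files to a model repo with huggingface_hub.
# The folder path below is hypothetical; the repo id matches this page.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`
api.upload_folder(
    folder_path="./gguf-exports",  # hypothetical local directory of converted files
    repo_id="Felladrin/gguf-Lite-Mistral-150M-v2-Instruct",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```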
All 22 files were added with the commit message "Upload folder using huggingface_hub" and were last updated 4 months ago.

File                                             Size       LFS
.gitattributes                                   3.01 kB    -
Lite-Mistral-150M-v2-Instruct.F16.IQ2_XXS.gguf   129 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.IQ3_XXS.gguf   142 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.IQ4_XS.gguf    158 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.Q3_K.gguf      152 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.Q4_K.gguf      165 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.Q5_K.gguf      177 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.Q6_K.gguf      189 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.Q8_0.gguf      214 MB     LFS
Lite-Mistral-150M-v2-Instruct.F16.gguf           314 MB     LFS
Lite-Mistral-150M-v2-Instruct.IQ1_S.gguf         48.3 MB    LFS
Lite-Mistral-150M-v2-Instruct.IQ2_XXS.gguf       54.2 MB    LFS
Lite-Mistral-150M-v2-Instruct.IQ3_XXS.gguf       69.2 MB    LFS
Lite-Mistral-150M-v2-Instruct.IQ4_XS.gguf        91.2 MB    LFS
Lite-Mistral-150M-v2-Instruct.Q3_K.gguf          83.1 MB    LFS
Lite-Mistral-150M-v2-Instruct.Q4_K.gguf          99.4 MB    LFS
Lite-Mistral-150M-v2-Instruct.Q5_K.gguf          114 MB     LFS
Lite-Mistral-150M-v2-Instruct.Q6_K.gguf          129 MB     LFS
Lite-Mistral-150M-v2-Instruct.Q8_0.gguf          167 MB     LFS
Lite-Mistral-150M-v2-Instruct.gguf               314 MB     LFS
README.md                                        175 Bytes  -
imatrix.dat                                      371 kB     -