eaddario/DeepSeek-R1-Distill-Qwen-7B-GGUF
Text Generation · GGUF · eaddario/imatrix-calibration · English · quant · experimental · Inference Endpoints · conversational
License: MIT
Files and versions · 1 contributor · History: 47 commits
Latest commit 8800615 (verified) by eaddario, 9 days ago: Experimental quantize+prune Q5_K_M
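To fetch one of the quantized GGUF files listed below, here is a minimal sketch assuming the huggingface_hub Python package is installed; the Q4_K_M file name is copied from the listing, and any other quant can be substituted:

```python
from huggingface_hub import hf_hub_download

# Download a single quant from this repo; huggingface_hub handles caching and resume.
# The filename is one of the entries in the file listing below; swap in any other quant.
path = hf_hub_download(
    repo_id="eaddario/DeepSeek-R1-Distill-Qwen-7B-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```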
imatrix · Generate Small imatrix · about 1 month ago
logits · Rename to F16 · about 1 month ago
scores · Generate perplexity and kld scores · about 1 month ago
.gitattributes · 1.65 kB · Update lfs attributes and ignores · about 1 month ago
.gitignore · 6.78 kB · Update lfs attributes and ignores · about 1 month ago
DeepSeek-R1-Distill-Qwen-7B-F16.gguf · 15.2 GB · LFS · Rename to F16 · about 1 month ago
DeepSeek-R1-Distill-Qwen-7B-IQ3_M.gguf · 3.25 GB · LFS · Experimental quantize+prune IQ3_M · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-IQ3_S.gguf · 3.18 GB · LFS · Experimental quantize+prune IQ3_S · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-IQ4_NL.gguf · 4.1 GB · LFS · Experimental quantize+prune IQ4_NL · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-Q3_K_L.gguf · 3.76 GB · LFS · Experimental quantize+prune Q3_K_L · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-Q3_K_M.gguf · 3.48 GB · LFS · Experimental quantize+prune Q3_K_M · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-Q3_K_S.gguf · 3.22 GB · LFS · Experimental quantize+prune Q3_K_S · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf · 4.34 GB · LFS · Experimental quantize+prune Q4_K_M · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-Q4_K_S.gguf · 4.12 GB · LFS · Experimental quantize+prune Q4_K_S · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-Q5_K_M.gguf · 5.04 GB · LFS · Experimental quantize+prune Q5_K_M · 9 days ago
DeepSeek-R1-Distill-Qwen-7B-Q5_K_S.gguf · 5.32 GB · LFS · Generate Q5_K_S quant · about 1 month ago
DeepSeek-R1-Distill-Qwen-7B-Q6_K.gguf · 6.25 GB · LFS · Generate Q6_K quant · about 1 month ago
DeepSeek-R1-Distill-Qwen-7B-Q8_0.gguf · 8.1 GB · LFS · Generate Q8_0 quant · about 1 month ago
README.md · 9.64 kB · Update README.md · 21 days ago
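Once a quant is downloaded, it can be loaded with any GGUF-compatible runtime. Below is a minimal sketch using llama-cpp-python as one such runtime (an assumption on my part; this repo does not prescribe a runtime, and the n_ctx and n_gpu_layers values are illustrative rather than recommendations):

```python
from llama_cpp import Llama

# Load the Q4_K_M quant downloaded earlier; adjust n_gpu_layers for your hardware.
llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",
    n_ctx=4096,        # context window; illustrative value
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# The model is tagged "conversational", so the chat completion API is a natural fit.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```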