Alcoft/DeepSeek-R1-Distill-Qwen-1.5B-GGUF
Tags: Text Generation, GGUF, Inference Endpoints, conversational
License: mit
The repository's README.md exists, but its content is empty.
Downloads last month: 36
Format: GGUF
Model size: 1.78B params
Architecture: qwen2
Available quantizations:
2-bit: Q2_K
3-bit: Q3_K_S, Q3_K_M, Q3_K_L
4-bit: Q4_K_S, Q4_K_M
5-bit: Q5_K_S, Q5_K_M
6-bit: Q6_K
8-bit: Q8_0
16-bit: BF16, F16
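
Below is a minimal sketch of downloading one of these GGUF quantizations and running it locally with huggingface_hub and llama-cpp-python. The exact .gguf filenames in this repository are not listed on this page, so the Q4_K_M filename used here is an assumption; check the repository's file listing for the real names.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the chosen quantization from the Hub (cached locally after the first run).
# The filename below is assumed; substitute the actual .gguf file from the repo.
model_path = hf_hub_download(
    repo_id="Alcoft/DeepSeek-R1-Distill-Qwen-1.5B-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf",  # assumed filename
)

# Load the model; n_ctx sets the context window, and n_gpu_layers=-1 offloads
# all layers to the GPU if llama-cpp-python was built with GPU support.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# The model is tagged "conversational", so use the chat completion API.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Lower-bit quantizations (Q2_K, Q3_K_*) trade answer quality for smaller downloads and memory footprint, while Q8_0 and the 16-bit files stay closest to the original weights.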
Inference Providers (Text Generation): This model is not currently available via any of the supported third-party Inference Providers, and the HF Inference API was unable to determine this model's library.
Model tree for Alcoft/DeepSeek-R1-Distill-Qwen-1.5B-GGUF
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
This model is one of 90 quantized versions of the base model.
Collection including Alcoft/DeepSeek-R1-Distill-Qwen-1.5B-GGUF:
TAO71-AI Quants: Reasoning (2 items, updated 3 days ago)