mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit
Maintained by the MLX Community organization (3.34k followers) · 25 likes
Tags: Text Generation · Transformers · Safetensors · MLX · qwen2 · conversational · text-generation-inference · Inference Endpoints · 4-bit precision
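Since this repository hosts a 4-bit MLX checkpoint intended for text generation, a minimal usage sketch with the mlx-lm package is shown below. It assumes `pip install mlx-lm` on an Apple Silicon machine; the prompt text and generation settings are illustrative, not part of this repository.

```python
# Sketch: load the 4-bit MLX checkpoint and generate text with mlx-lm.
# The prompt and max_tokens value are arbitrary examples.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit")

prompt = "Explain what 4-bit quantization trades off."

# DeepSeek-R1 distills are chat models, so apply the chat template if one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(response)
```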
Files on branch main (1 contributor, 2 commits).
Latest commit f429cf7 (verified) by schroneko, 13 days ago: "Upload folder using huggingface_hub (#1)".
All files are marked Safe and were added in commit "Upload folder using huggingface_hub (#1)", 13 days ago.

File                               Size        Stored via Git LFS
.gitattributes                     1.57 kB     no
README.md                          967 Bytes   no
config.json                        868 Bytes   no
model-00001-of-00004.safetensors   5.37 GB     yes
model-00002-of-00004.safetensors   5.34 GB     yes
model-00003-of-00004.safetensors   5.37 GB     yes
model-00004-of-00004.safetensors   2.36 GB     yes
model.safetensors.index.json       143 kB      no
special_tokens_map.json            485 Bytes   no
tokenizer.json                     11.4 MB     yes
tokenizer_config.json              6.75 kB     no
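Because the weights are split into four LFS-tracked safetensors shards, one way to fetch the whole folder locally is the huggingface_hub snapshot API, which the commit message already references. This is a sketch; the local directory name is an arbitrary choice.

```python
# Sketch: download the full repository (all safetensors shards plus tokenizer/config files).
# The local_dir path is an arbitrary example.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit",
    local_dir="DeepSeek-R1-Distill-Qwen-32B-4bit",
)
print(f"Files downloaded to: {local_path}")
```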