Hugging Face
nisten/quad-mixtrals-gguf
32 likes
Format: GGUF · Inference Endpoints
License: apache-2.0
Files and versions
1 contributor · History: 26 commits
Latest commit: 5b4c094 · nisten · "Rename 4mixtralq6_k.gguf to 4mixq6_k.gguf" · 10 months ago
File                 Size       LFS   Last commit message                              Age
.gitattributes       2.37 kB          Rename 4mixtralq6_k.gguf to 4mixq6_k.gguf        10 months ago
4mixq2.gguf          8.48 GB    LFS   smallest working 2k mixtral                      10 months ago
4mixq3_k_l.gguf      11 GB      LFS   Rename 4mixtralq3_k_l.gguf to 4mixq3_k_l.gguf    10 months ago
4mixq6_k.gguf        20.2 GB    LFS   Rename 4mixtralq6_k.gguf to 4mixq6_k.gguf        10 months ago
4mixtq3_k.gguf       10.9 GB    LFS   working 6k quant                                 10 months ago
4mixtrainq4_0.gguf   14 GB      LFS   trainable 4bit experiment                        10 months ago
4mixtrainq8_0.gguf   26.1 GB    LFS   meow7                                            10 months ago
4mixtralf16.gguf     48.3 GB    LFS   meow6                                            10 months ago
4mixtralq4_0.gguf    13.8 GB    LFS   Rename 4mixtq4_1.gguf to 4mixtralq4_0.gguf       10 months ago
4mixtralq4_1.gguf    15.3 GB    LFS   Rename 4mixtq4_1.gguf to 4mixtralq4_1.gguf       10 months ago
4mixtralq5_0.gguf    16.8 GB    LFS   Rename 4mixtralq5.gguf to 4mixtralq5_0.gguf      10 months ago
4mixtralq5_1.gguf    18.3 GB    LFS   meow5                                            10 months ago
README.md            659 Bytes        Update README.md                                 10 months ago
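Each file above can be fetched directly from the Hub. A minimal sketch, assuming the standard Hugging Face file URL layout (`https://huggingface.co/<repo_id>/resolve/<revision>/<filename>`, which transparently resolves Git LFS pointers to the actual blobs); the helper name `gguf_url` is illustrative, not part of any library:

```python
# Sketch: build direct-download URLs for files in this repo.
# Assumes the standard Hugging Face Hub layout:
#   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>

REPO_ID = "nisten/quad-mixtrals-gguf"

def gguf_url(filename: str, revision: str = "main") -> str:
    """Return the direct (LFS-resolving) download URL for one repo file."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

# The smallest quant in the listing above (4mixq2.gguf, 8.48 GB):
print(gguf_url("4mixq2.gguf"))
# -> https://huggingface.co/nisten/quad-mixtrals-gguf/resolve/main/4mixq2.gguf
```

In practice the `huggingface_hub` package's `hf_hub_download(repo_id=..., filename=...)` does the same resolution plus local caching, and is the usual choice over raw URLs.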