nisten/quad-mixtrals-gguf
32 likes · GGUF · Inference Endpoints · License: apache-2.0
Branch: main · 1 contributor · History: 31 commits
Latest commit: nisten · Update README.md (678c9e9) · 10 months ago
File               | Size      | LFS | Last commit message                           | Age
-------------------|-----------|-----|-----------------------------------------------|--------------
.gitattributes     | 2.57 kB   |     | custom 4k quant large                         | 10 months ago
4mixq2.gguf        | 8.48 GB   | LFS | smallest working 2k mixtral                   | 10 months ago
4mixq3_k.gguf      | 10.9 GB   | LFS | Rename 4mixtq3_k.gguf to 4mixq3_k.gguf        | 10 months ago
4mixq3_k_l.gguf    | 11 GB     | LFS | Rename 4mixtralq3_k_l.gguf to 4mixq3_k_l.gguf | 10 months ago
4mixq4kl.gguf      | 14.4 GB   | LFS | custom 4k quant large                         | 10 months ago
4mixq4km.gguf      | 14 GB     | LFS | Upload 4mixq4km.gguf                          | 10 months ago
4mixq4ks.gguf      | 13.8 GB   | LFS | optimal 4bit model for google colab t4        | 10 months ago
4mixq6_k.gguf      | 20.2 GB   | LFS | Rename 4mixtralq6_k.gguf to 4mixq6_k.gguf     | 10 months ago
4mixtrainq4_0.gguf | 14 GB     | LFS | trainable 4bit experiment                     | 10 months ago
4mixtrainq8_0.gguf | 26.1 GB   | LFS | meow7                                         | 10 months ago
4mixtralf16.gguf   | 48.3 GB   | LFS | meow6                                         | 10 months ago
4mixtralq4_0.gguf  | 13.8 GB   | LFS | Rename 4mixtq4_1.gguf to 4mixtralq4_0.gguf    | 10 months ago
4mixtralq4_1.gguf  | 15.3 GB   | LFS | Rename 4mixtq4_1.gguf to 4mixtralq4_1.gguf    | 10 months ago
4mixtralq5_0.gguf  | 16.8 GB   | LFS | Rename 4mixtralq5.gguf to 4mixtralq5_0.gguf   | 10 months ago
4mixtralq5_1.gguf  | 18.3 GB   | LFS | meow5                                         | 10 months ago
README.md          | 811 Bytes |     | Update README.md                              | 10 months ago