
# 🚀 SeaLLMs-v3-7B-Chat-Uncensored-GGUF

*Optimized quantized models for efficient inference*

## 📋 Overview

A collection of GGUF quantized models derived from BlossomsAI/SeaLLMs-v3-7B-Chat-Uncensored. Each variant trades some output quality for a smaller file size and lower memory use, so you can pick the quantization level that fits your hardware.

## 💎 Model Variants

| Variant | Use Case | Download |
|---------|----------|----------|
| Q2_K | Basic text completion tasks | 📥 |
| Q3_K_M | Memory-efficient quality operations | 📥 |
| Q4_K_S | Balanced performance and quality | 📥 |
| Q4_K_M | Balanced performance and quality | 📥 |
| Q5_K_S | Enhanced quality text generation | 📥 |
| Q5_K_M | Enhanced quality text generation | 📥 |
| Q6_K | Superior quality outputs | 📥 |
| Q8_0 | Maximum quality, production-grade results | 📥 |
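As a rough rule of thumb (not stated in this card), a quant's file size is roughly `parameters × bits-per-weight / 8`. The sketch below applies that estimate to a 7B-parameter model; `approx_gguf_size_gb` is a hypothetical helper, and actual K-quant files run somewhat larger because scales, metadata, and some tensors are stored at higher precision.

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough file-size estimate: parameters * bits / 8, in gigabytes.

    Real GGUF files are somewhat larger: K-quants keep per-block scales,
    and a few tensors (e.g. embeddings) stay at higher precision.
    """
    return n_params * bits_per_weight / 8 / 1e9


# Approximate sizes for a 7B model at each nominal bit width:
for name, bits in [("Q2_K", 2), ("Q4_K_M", 4), ("Q8_0", 8)]:
    print(f"{name}: ~{approx_gguf_size_gb(7e9, bits):.1f} GB")
```

This is only a sizing aid for choosing a variant: if a quant's estimated size (plus room for the KV cache) exceeds your available RAM or VRAM, pick a lower bit width.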

## 🤝 Contributors

Developed with ❤️ by BlossomAI

Star ⭐️ this repo if you find it valuable!