Super Warding


A model built specifically for use with swarm and RAG architectures, outshining many others in coherence. Enjoy!

fuzzy-mittenz/Super_Warding-4o4-vpr2-Qw25-Q4k_m-gguf

This model was converted to GGUF format from FourOhFour/Vapor_v2_7B.
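The card does not say which tool performed the conversion, but a typical workflow uses llama.cpp's own scripts. A minimal sketch, assuming the original Hugging Face checkpoint has been downloaded to ./Vapor_v2_7B (the paths and output filenames below are placeholders, not the ones actually used):

python convert_hf_to_gguf.py ./Vapor_v2_7B --outtype f16 --outfile vapor_v2_7b-f16.gguf
llama-quantize vapor_v2_7b-f16.gguf vapor_v2_7b-q4_k_m.gguf Q4_K_M

The first step writes a full-precision GGUF; the second re-quantizes it to the 4-bit Q4_K_M scheme indicated by the repo name.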

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI, for example as shown below.
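A minimal sketch of both invocations, assuming the repo ships a quant file named super_warding-4o4-vpr2-qw25-q4_k_m.gguf (check the repository's file listing for the exact name):

CLI:

llama-cli --hf-repo fuzzy-mittenz/Super_Warding-4o4-vpr2-Qw25-Q4k_m-gguf --hf-file super_warding-4o4-vpr2-qw25-q4_k_m.gguf -p "Hello, how are you?"

Server:

llama-server --hf-repo fuzzy-mittenz/Super_Warding-4o4-vpr2-Qw25-Q4k_m-gguf --hf-file super_warding-4o4-vpr2-qw25-q4_k_m.gguf -c 2048

Both commands fetch the file from the Hugging Face Hub on first run and cache it locally; adjust -c (context size) and sampling flags to taste.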

GGUF details

Model size: 7.62B params
Architecture: qwen2
Quantization: 4-bit

Model tree for IntelligentEstate/Super_Warding-4o4-vpr2-Qw25-Q4k_m-gguf

Base model: Qwen/Qwen2.5-7B
Quantized: this model