jxtngx/mamba-2.8b-hf-Q8_0-GGUF
This model was converted to GGUF format from state-spaces/mamba-2.8b-hf
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
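A GGUF file like this one can be run directly with llama.cpp's `llama-cli`, which can fetch the file from the Hugging Face repo via `--hf-repo`. The exact `.gguf` filename below is an assumption based on the repo's naming convention; check the repo's file list for the actual name.

```shell
# Run the Q8_0 quantization with llama.cpp's llama-cli.
# --hf-repo downloads the model from Hugging Face on first use;
# the .gguf filename is assumed from the usual GGUF-my-repo naming.
llama-cli \
  --hf-repo jxtngx/mamba-2.8b-hf-Q8_0-GGUF \
  --hf-file mamba-2.8b-hf-q8_0.gguf \
  -p "The meaning of life is"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if you prefer an HTTP endpoint instead of a one-shot prompt.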
Model tree for jxtngx/mamba-2.8b-hf-Q8_0-GGUF
- Base model: state-spaces/mamba-2.8b-hf