Origin: https://huggingface.co./NousResearch/Nous-Hermes-2-Vision-Alpha

This is the quantized GGUF version of a function-calling, fine-tuned LLaVA-type model that uses a tiny vision tower.

Sharing it because it's novel and it has been a pain to convert. Example usage with llava-cli:

```
\build\bin\Release\llava-cli.exe -m Q:\models\llava\Nous-Hermes-2-Vision\ggml-model-q5_k --mmproj Q:\models\llava\Nous-Hermes-2-Vision\mmproj-model-f16.gguf -ngl 80 -p 1025 --image path/to/image -p "Describe the image (use the proper syntax)"
```

If you wish to quantize it yourself, you currently need this PR: https://github.com/ggerganov/llama.cpp/pull/4313
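
For reference, below is a rough sketch of the usual llama.cpp LLaVA conversion and quantization flow with that PR checked out. The script names, flags, and output file names follow the generic llama.cpp LLaVA instructions of that period and are assumptions here; the exact steps for this model's tiny vision tower may differ.

```
# Assumed workflow, adapted from the generic llama.cpp LLaVA docs;
# all paths below are illustrative placeholders.

# 1. Split the multimodal projector out of the original checkpoint
python ./examples/llava/llava-surgery.py -m /path/to/Nous-Hermes-2-Vision-Alpha

# 2. Convert the vision encoder + projector into the GGUF mmproj file
python ./examples/llava/convert-image-encoder-to-gguf.py \
    -m /path/to/vision-tower \
    --llava-projector /path/to/Nous-Hermes-2-Vision-Alpha/llava.projector \
    --output-dir /path/to/Nous-Hermes-2-Vision-Alpha

# 3. Convert the language model to GGUF (f16), then quantize it
python ./convert.py /path/to/Nous-Hermes-2-Vision-Alpha
./build/bin/quantize /path/to/Nous-Hermes-2-Vision-Alpha/ggml-model-f16.gguf ./ggml-model-q5_k.gguf Q5_K
```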

Warning: the model is not very good at this point; it is shared mostly for testing purposes.
