GGUF quantisation of https://huggingface.co./Lin-Chen/ShareGPT4V-13B
You can run the model with the llama.cpp server, then visit the server's web page to upload an image. Example (Windows):
```shell
.\server.exe -m ".\models\ShareGPT4V-13B-Q4_K_M.gguf" -t 6 -c 4096 -ngl 26 --mmproj ".\models\mmproj-model-f16.gguf"
```
A low temperature is recommended.
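Besides the web UI, the llama.cpp server also exposes an HTTP API. The sketch below builds a llava-style request body for its `/completion` endpoint; the `image_data` field and `[img-N]` prompt tag follow the llama.cpp server documentation at the time of writing and may change between versions, and the helper name is my own.

```python
import base64
import json

def build_image_request(image_path: str, question: str, image_id: int = 10) -> str:
    """Build a JSON body for the llama.cpp server /completion endpoint
    (llava-style multimodal request; field names may differ in newer
    llama.cpp releases -- check the server README for your build)."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        # The [img-N] tag in the prompt refers to the image_data entry with id N.
        "prompt": f"USER: [img-{image_id}]\n{question}\nASSISTANT:",
        "image_data": [{"data": image_b64, "id": image_id}],
        "temperature": 0.1,  # low temperature, as recommended above
        "n_predict": 256,
    }
    return json.dumps(payload)
```

POST the returned string to `http://localhost:8080/completion` (the server's default port) with `Content-Type: application/json`.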