---
pipeline_tag: visual-question-answering
---
llava-v1.5-13b-GGUF
This repo contains GGUF files to run inference on llava-v1.5-13b with llama.cpp end-to-end, without any extra dependencies.
stirred by twobob
Note: The mmproj-model-f16.gguf file structure is experimental and may change. Always use the latest code in llama.cpp.
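For reference, a minimal sketch of an end-to-end run, assuming you have built the llava example in llama.cpp. The binary name varies by llama.cpp version, and the quantized model filename below is a placeholder for whichever GGUF file from this repo you downloaded:

```sh
# Binary name depends on the llama.cpp version (older trees ship ./llava,
# newer ones ./llava-cli). llava-v1.5-13b-Q4_K.gguf is a placeholder for
# the quantized model file you downloaded from this repo.
./llava-cli \
  -m llava-v1.5-13b-Q4_K.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image path/to/an/image.jpg \
  -p "Describe the image."
```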
props to @mys