Llama-3.2-11B-Vision-Instruct-GGUF
GGUF weights sourced from Ollama.
Llama 3.2-Vision is a collection of pretrained and instruction-tuned multimodal large language models (LLMs) for image reasoning, available in 11B and 90B sizes (text + images in / text out). The instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image, and outperform many available open source and closed multimodal models on common industry benchmarks.
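Since these weights are sourced from Ollama, a minimal sketch of image question answering through the Ollama Python client might look like the following. The model tag `llama3.2-vision` and the image path are assumptions for illustration, not taken from this card:

```python
# Minimal sketch: visual question answering via the Ollama Python client.
# Assumes a local Ollama server is running and the model has been pulled,
# e.g. `ollama pull llama3.2-vision` (tag is an assumption; check your registry).
import ollama

response = ollama.chat(
    model="llama3.2-vision",
    messages=[
        {
            "role": "user",
            "content": "Describe what is happening in this image.",
            # Paths (or base64 strings) of images attached to the prompt.
            "images": ["example.jpg"],  # hypothetical local file
        }
    ],
)

print(response["message"]["content"])
```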
Base model: meta-llama/Llama-3.2-11B-Vision-Instruct