Forced GGUF conversion by setting the config architecture to Qwen2VLForConditionalGeneration

It might not work.

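Below is a minimal sketch of what such a forced conversion could look like, assuming a local snapshot of the original Qwen2.5-VL-7B-Instruct checkpoint and llama.cpp's convert_hf_to_gguf.py script; the directory names and output filename are illustrative, not part of this repo.

```python
# Sketch: relabel the HF config so the checkpoint reports a llama.cpp-supported
# architecture, then run llama.cpp's GGUF converter. Paths are illustrative.
import json
import subprocess
from pathlib import Path

model_dir = Path("Qwen2.5-VL-7B-Instruct")  # local copy of the original model

# Override the architecture field (originally Qwen2_5_VLForConditionalGeneration)
# so the converter treats the model as Qwen2-VL.
config_path = model_dir / "config.json"
config = json.loads(config_path.read_text())
config["architectures"] = ["Qwen2VLForConditionalGeneration"]
config_path.write_text(json.dumps(config, indent=2))

# Run llama.cpp's converter (script location depends on your checkout).
subprocess.run(
    [
        "python",
        "llama.cpp/convert_hf_to_gguf.py",
        str(model_dir),
        "--outfile", "Qwen2.5-VL-7B-Instruct-F16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)
```

Because this only relabels the checkpoint rather than truly adapting it to the Qwen2-VL architecture, the resulting GGUF may load but still behave incorrectly, hence the caveat above.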
Model size: 7.62B params
Architecture: qwen2vl
Quantization: 4-bit
