This is the 7B Qwen2-VL vision-language model, exported to ONNX via https://github.com/pdufour/llm-export.

See also https://huggingface.co./pdufour/Qwen2-VL-2B-Instruct-ONNX-Q4-F16 for a 2B variant that is compatible with onnxruntime-webgpu.

Base model: Qwen/Qwen2-VL-7B