---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2-VL-2B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- QvQ
- Qwen
- Context-Explainer
---

# **QvQ Step Tiny - [2B]**

*QvQ-Step-Tiny* is a step-by-step context-explainer vision-language model based on the Qwen2-VL architecture, fine-tuned on the VCR datasets to produce systematic, step-by-step explanations. It is built on the `Qwen2VLForConditionalGeneration` framework, has 2.21 billion parameters, and uses BF16 (Brain Floating Point 16) precision.

# **Quickstart with Transformers**

Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/QvQ-Step-Tiny", torch_dtype="auto", device_map="auto"
)

# Load the processor that handles chat templating, image preprocessing, and tokenization
processor = AutoProcessor.from_pretrained("prithivMLmods/QvQ-Step-Tiny")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

# **Key Enhancements of QvQ-Step-Tiny**

1. **State-of-the-Art Visual Understanding**
   - QvQ-Step-Tiny inherits Qwen2-VL's state-of-the-art ability to understand images of various resolutions and aspect ratios.
   - It performs well on visual reasoning benchmarks such as **MathVista**, **DocVQA**, **RealWorldQA**, and **MTVQA**, making it a powerful tool for detailed visual content analysis and question answering.

2. **Extended Video Understanding**
   - Able to process and comprehend videos of over 20 minutes, QvQ-Step-Tiny supports video-based question answering, conversational dialog, and video content generation (see the video inference sketch after this list).
   - It produces systematic, step-by-step explanations of video content, which suits educational, entertainment, and professional applications.

3. **Integration with Devices and Systems**
   - Thanks to its reasoning and decision-making capabilities, QvQ-Step-Tiny can act as an intelligent agent for operating devices such as mobile phones, robots, and other automated systems.
   - It can process visual environments alongside textual instructions to enable automation and intelligent control of devices.

4. **Multilingual Support for Text in Images**
   - QvQ-Step-Tiny recognizes multilingual text within images, handling English, Chinese, most European languages, Japanese, Korean, Arabic, and Vietnamese.
   - This makes it suitable for global applications, from document analysis to multi-language accessibility solutions.
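# **Video Inference (Sketch)**

The quickstart above covers a single image. For the video use case described under **Extended Video Understanding**, the following is a minimal sketch based on the standard `qwen_vl_utils` video message format. It assumes the `model` and `processor` objects from the quickstart; the video path, `fps`, and `max_pixels` values are illustrative placeholders, not recommended settings.

```python
from qwen_vl_utils import process_vision_info

# Video input: qwen_vl_utils samples frames from the file before preprocessing.
# The path, fps, and max_pixels below are placeholder values - adjust them to
# your video and available GPU memory.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # hypothetical local video path
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Explain, step by step, what happens in this video."},
        ],
    }
]

# Same preparation and generation flow as the image quickstart
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```

Lowering `fps` or `max_pixels` reduces the number of visual tokens per video, which is the main lever for fitting long videos into memory.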
# **Intended Use**

1. **Step-by-Step Context Explanation**: Provides detailed, systematic explanations of images and videos, making it well suited to educational, analytical, and instructional tasks (see the example prompt at the end of this card).
2. **Visual Content Understanding**: Analyzes visual content across diverse resolutions, aspect ratios, and formats, including documents (DocVQA) and mathematical visuals (MathVista).
3. **Video-Based Reasoning**: Comprehends long-form videos (20+ minutes) for tasks such as video question answering, dialog generation, and instructional content creation.
4. **Device Integration**: Can act as an intelligent agent that automates device operations (e.g., mobile phones, robots) by understanding visual environments and processing text-based instructions.
5. **Multilingual Visual Text Support**: Recognizes and processes multilingual text within images, making it suitable for global applications such as document processing and accessibility tools.
6. **Advanced Question Answering**: Handles question-answering tasks over images, videos, and other multimodal data, serving as a robust component of interactive systems.
7. **Accessibility Enhancements**: Assists visually impaired users by explaining visual and textual content in a clear, step-by-step manner.

# **Limitations**

1. **Model Size Constraints**: At 2.21 billion parameters, it may not match larger models on highly complex or nuanced tasks.
2. **Accuracy with Low-Quality Inputs**: Performance may degrade on low-resolution images, poor lighting conditions, or noisy video/audio inputs.
3. **Specialized Training Gaps**: While strong on general benchmarks, it may struggle with niche or highly specialized domains without additional fine-tuning.
4. **Multilingual Text Variability**: Multilingual text recognition is supported, but performance may vary across less common or highly complex languages.
5. **Context Length Tradeoffs**: Very long videos (e.g., over 20 minutes) or highly dense visual data may reduce coherence or explanation accuracy.
6. **Device Integration Complexity**: Deploying the model to operate devices or robots may require significant engineering effort and robust integration pipelines.
7. **Resource-Intensive for Long Contexts**: Despite BF16 precision, extended context lengths or high-resolution inputs can demand substantial computational resources.
8. **Ambiguity in Prompts**: Ambiguously phrased or poorly structured prompts may lead to incomplete or inaccurate explanations.
9. **Static Model**: The model cannot learn dynamically from user interactions or adapt its behavior without retraining.

# **Applications**

- **Education**: Step-by-step explanations of visual and textual content in learning materials, including images and videos.
- **Automation**: Integration with robotics or smart devices to perform tasks based on visual and textual data.
- **Content Creation**: Creating or analyzing video- and image-based content, such as tutorials or product demos.
- **Accessibility**: Enhancing accessibility tools for visually impaired or multilingual users with clear explanations of image or video content.
- **Global Q&A Systems**: Cross-lingual question answering over images and videos for diverse user bases.
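# **Example: Step-by-Step Explanation Prompt (Sketch)**

As referenced under **Intended Use**, the model is meant to be prompted for structured, step-by-step explanations. Below is a minimal sketch reusing the `model` and `processor` objects from the quickstart; the prompt wording is illustrative, and the image is the same demo image used above.

```python
from qwen_vl_utils import process_vision_info

# Reuses `model` and `processor` from the quickstart. Only the text prompt changes:
# it explicitly asks for a numbered, step-by-step explanation of the image.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {
                "type": "text",
                "text": "Explain this image step by step: 1) list the main objects, "
                        "2) describe how they relate to each other, 3) summarize the overall scene.",
            },
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```

Spelling out the numbered structure in the prompt tends to make the step-by-step format of the answer more consistent than a generic "describe this image" request.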