---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
language:
- en
tags:
- context-violating
- visual-language
pretty_name: ContextualBench
---
# Challenging and Enhancing the Reasoning Capacity of Multimodal LLMs in Context-violating Images
Hongxi Li, Yuyang Chen, Yayun Qi, Xinxiao Wu

Beijing Institute of Technology
arXiv 2024
🌎 Website (Coming soon) | 🧑‍💻 Code | 📄 arXiv (Coming soon) | 🏆 Leaderboard
## Dataset Description
ContextualBench consists of 6 (categories) × 12 (instances) = 72 context instances, each containing 7 context-consistent images and 7 context-violating images. We design 4 visual reasoning tasks and collect human annotations for them.
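As a quick-start sketch, the dataset can be loaded with the Hugging Face `datasets` library; note that the repository id and split name below are assumptions, since this card does not specify them:

```python
from datasets import load_dataset

# Hypothetical Hub repository id and split -- replace with the real values once published.
ds = load_dataset("ContextualBench", split="test")

# 6 categories x 12 instances = 72 context instances; each contributes
# 7 context-consistent + 7 context-violating images, i.e. 72 x 14 = 1,008 images.
print(len(ds))
```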
## Dataset Construction
### Image Generation
We collect context instances from external databases and formulate them as key-value pairs (shown in the 'Constraint' field of the Dataset Viewer), then edit a subset of these pairs to produce context-violating images. We generate images using text-to-image models, regenerating each image iteratively until the following three conditions are met (a sketch of this loop follows the list):
- The image conforms to the given constraints.
- There are no visual illusions in the image.
- The image is free of ambiguity and potential offensiveness.
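A minimal sketch of this generate-and-check loop, assuming the three conditions are implemented as predicate functions passed in by the caller (the names below are placeholders, not from the paper):

```python
def generate_until_valid(prompt, generate, checks, max_iters=10):
    """Regenerate an image until it passes every acceptance check.

    generate -- callable mapping a prompt string to an image (any text-to-image model).
    checks   -- callables encoding the three conditions above: constraint
                conformity, no visual illusions, no ambiguity/offensiveness.
    """
    for _ in range(max_iters):
        image = generate(prompt)
        if all(check(image) for check in checks):
            return image
    return None  # regeneration budget exhausted; discard this prompt
```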
### Task Design
We design 3 visual reasoning tasks for all images in ContextualBench: visual question answering, image captioning, and image identification. An additional image explanation task is designed for context-violating images; an illustrative record combining all four tasks is sketched below.
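For illustration only, a single context-violating record might look as follows; the field names and values here are assumptions rather than the card's actual schema:

```python
# Hypothetical record layout for one context-violating image.
record = {
    "image": "<PIL.Image>",                                     # the generated image
    "constraint": {"location": "desert", "object": "snowman"},  # edited key-value pairs
    "identification": "context-violating",                      # identification label
    "caption": "A snowman standing in a sun-scorched desert.",
    "explanation": "Snowmen need freezing temperatures, which deserts lack.",
    "qa_pairs": [{"question": "Where is the snowman?", "answer": "In a desert"}],
}
```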
### Image Annotation
For image captioning, image identification, and image explanation, we collect 5 annotations from different human annotators per image. For visual question answering, we generate question-answer pairs from the image captions following the Q² pipeline, as sketched below.
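A minimal sketch of that Q²-style round trip, with the model components passed in as callables (these names are placeholders, not the original pipeline's API):

```python
def qa_pairs_from_caption(caption, extract_answers, gen_question, answer_question):
    """Q2-style round trip: derive Q-A pairs from an image caption.

    extract_answers -- picks candidate answer spans from the caption.
    gen_question    -- generates a question given the caption and an answer span.
    answer_question -- a QA model answering the question from the caption alone.
    """
    pairs = []
    for answer in extract_answers(caption):
        question = gen_question(caption, answer)
        # Keep the pair only if answering the question recovers the original span.
        if answer_question(question, caption) == answer:
            pairs.append({"question": question, "answer": answer})
    return pairs
```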
## Licensing Information
- Purpose: The dataset was primarily designed for use as a test set.
- Commercial Use: The dataset may be used commercially as a test set, but using it as a training set is prohibited.
- Rights on Images: All rights to the images within the dataset are retained by the ContextualBench authors.
## Citation Information
```bibtex
@article{hongxi2024challenging,
  title={Challenging and Enhancing the Reasoning Capacity of Multimodal LLMs in Context-violating Images},
  author={Li, Hongxi and Chen, Yuyang and Qi, Yayun and Wu, Xinxiao},
  journal={},
  year={2024}
}
```