---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
language:
- en
tags:
- context-violating
- visual-language
pretty_name: ContextualBench
---

<h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;">
  Challenging and Enhancing the Reasoning Capacity of Multimodal LLMs in Context-violating Images
</h1>      
<p align='center' style="text-align:center;font-size:1.25em;">
    <a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Hongxi&nbsp;Li</a>,&nbsp;
    <a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Yuyang&nbsp;Chen</a>,&nbsp;
    <a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Yayun&nbsp;Qi</a>,&nbsp;
    <a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Xinxiao&nbsp;Wu</a>,&nbsp;<br/>
&nbsp;Beijing Institute of Technology<br/>
<em>arXiv 2024</em><br/>
<a href="https://wuxinxiao.github.io/" title="Website" target="_blank" rel="nofollow" style="text-decoration: none;">🌎Website (Comming soon)</a> |
<a href="https://github.com/Tough-Stone/Counter-Context-Reasoning" title="Code" target="_blank" rel="nofollow" style="text-decoration: none;">🧑‍💻Code</a> |
<a href="https://wuxinxiao.github.io/" title="arXiv" target="_blank" rel="nofollow" style="text-decoration: none;">📄arXiv  (Comming soon)</a> |
<a href="https://huggingface.co./spaces/ToughStone/ContextualBench_Leaderboard" title="Leaderboard" target="_blank" rel="nofollow" style="text-decoration: none;">🏆 Leaderboard</a>
</p>


<!-- # Dataset Card for ContextualBench


- [Dataset Description](#dataset-description)
- [Dataset Construction](#dataset-construction)
  - [Image Generation](#image-generation)
  - [Task Design](#task-design)
  - [Image Annotation](#image-annotation)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information) -->

<p align='center'>
<img src="description.png" alt="dataset description" align='center' width="850" height="200">
</p>

# Dataset Description
<b>ContextualBench</b> consists of 6 categories × 12 instances = 72 context instances, each containing 7 context-consistent images and 7 context-violating images. We design 4 visual reasoning tasks and collect human annotations for all images.

# Dataset Construction
## Image Generation
We collect context instances from external databases and formulate them as key-value pairs (shown in the <i>'Constraint'</i> field of the <b>Dataset Viewer</b>), then edit a subset of these pairs to produce context-violating variants; a minimal sketch of this editing step follows the list below. We generate images using text-to-image models.
Each image is generated iteratively until the following three conditions are met:
1. The image conforms to the given constraints.
2. There are no visual illusions in the image.
3. The image is free of ambiguity and potential offensiveness.
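
To make the pair-editing step concrete, here is a minimal sketch in Python; the category, keys, and values are hypothetical illustrations, not entries from the released dataset:

```python
# A context instance represented as key-value constraint pairs.
# All keys and values here are hypothetical examples.
context_consistent = {
    "scene": "kitchen",
    "object": "refrigerator",
    "location": "indoors",
}

# Editing a subset of the pairs yields a context-violating instance.
context_violating = {**context_consistent, "location": "underwater"}

def to_prompt(constraints: dict) -> str:
    """Serialize constraint pairs into a text-to-image prompt."""
    return "a photo of a {object} in a {scene}, {location}".format(**constraints)

print(to_prompt(context_consistent))  # a photo of a refrigerator in a kitchen, indoors
print(to_prompt(context_violating))   # a photo of a refrigerator in a kitchen, underwater
```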

## Task Design
We design 3 visual reasoning tasks for all images in <b>ContextualBench</b>: visual question answering, image captioning, and image identification. An additional image explanation task is designed for context-violating images.
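
As an illustration, the four tasks could be posed to a multimodal LLM with prompts like the following; the wording here is an assumption, not the prompts used by the authors:

```python
# Hypothetical task prompts; the exact phrasing used in the benchmark may differ.
TASK_PROMPTS = {
    "visual_question_answering": "Answer the question about the image: {question}",
    "image_captioning": "Describe the content of this image in one sentence.",
    "image_identification": "Is this image context-consistent or context-violating?",
    # Asked only for context-violating images.
    "image_explanation": "Explain which elements of this image violate its usual context.",
}
```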

## Image Annotation
For image captioning, image identification, and image explanation, we collect 5 annotations per image from different human annotators. For visual question answering, we generate question-answer pairs from the image captions following the Q2 pipeline.
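
For reference, here is a minimal sketch of loading the benchmark with the 🤗 <code>datasets</code> library; the repository id, split name, and field names are assumptions based on this card, so check the Dataset Viewer for the actual schema:

```python
from datasets import load_dataset

# Repository id and split name are assumptions; verify them in the
# Dataset Viewer before use.
ds = load_dataset("ToughStone/ContextualBench", split="test")

for example in ds.select(range(3)):
    print(example["Constraint"])  # key-value context constraints
    # example["image"] is typically a PIL image under the datasets convention
```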

# Licensing Information
1. **Purpose:** The dataset is primarily designed for use as a test set.
2. **Commercial Use:** The dataset may be used commercially as a test set, but using it as a training set is prohibited.
3. **Rights on Images:** All rights to the images in the dataset are retained by the <b>ContextualBench</b> authors.

# Citation Information
    @article{hongxi2024challenging,
      title={Challenging and Enhancing the Reasoning Capacity of Multimodal LLMs in Context-violating Images},
      author={},
      journal={},
      year={2024}
    }