---
task_categories:
- question-answering
- text-classification
license: apache-2.0
language:
- en
size_categories:
- 1K<n<10K
---

The dataset used in this study is designed to evaluate and analyze multi-object hallucination by leveraging existing panoptic segmentation datasets. Specifically, it includes data from MSCOCO-Panoptic and ADE20K, ensuring access to diverse objects and their instance-level semantic annotations. For more information, please visit [Multi-Object Hallucination](https://multi-object-hallucination.github.io).

## Dataset Construction

The dataset is divided into several subsets based on the distribution of object classes within each image at test time. This division allows for a more granular analysis of how different distributions affect the hallucination behavior of large vision-language models (LVLMs).

- **Homogeneous**: All tested objects in an image belong to the same class (e.g., AAAAA).
- **Heterogeneous**: All tested objects in an image belong to different classes (e.g., ABCDE).
- **In-the-Wild**: A mixed distribution where the tested objects are randomly chosen and ordered within each image.
- **Adversarial**: A subset designed to challenge models with difficult object distributions (e.g., AAAAB, BAAAA).

## Dataset Statistics

### Training Data Statistics

| Dataset | Total | COCO | ADE |
| :---: | :---: | :---: | :---: |
| Wild | 1539 | 732 | 807 |
| Hom. | 312 | 168 | 144 |
| Het. | 400 | 200 | 200 |
| Adv. | 168 | 54 | 114 |

### Validation Data Statistics

| Dataset | Total | COCO | ADE |
| :---: | :---: | :---: | :---: |
| Wild | 1172 | 547 | 625 |
| Hom. | 490 | 289 | 201 |
| Het. | 246 | 76 | 170 |
| Adv. | 334 | 170 | 164 |

## Dataset File Structure

The `ROPE` dataset is structured into training and validation directories, each containing images divided by their object class distributions. Each image directory includes visualizations of bounding boxes (`bbox`) and raw images (`raw`), further categorized into `ADE` and `COCO` sources.
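Given this layout, paths to a specific image directory can be assembled programmatically. The helper below is an illustrative sketch, not part of the released tooling; directory names (including the `heterogenous`/`homogenous` spellings) are copied verbatim from the tree shown in this card.

```python
from pathlib import PurePosixPath

# Subset directory prefixes exactly as they appear in the ROPE tree.
SUBSETS = {"AAAAB", "BAAAA", "heterogenous", "homogenous", "mixed"}


def image_dir(split: str, subset: str, variant: str, source: str) -> PurePosixPath:
    """Build the path to one image directory in the ROPE layout.

    Hypothetical helper: validates each path component against the
    names listed in this card, then joins them in layout order.
    """
    if split not in {"train", "validation"}:
        raise ValueError(f"unknown split: {split}")
    if subset not in SUBSETS:
        raise ValueError(f"unknown subset: {subset}")
    if variant not in {"bbox", "raw"}:
        raise ValueError(f"unknown variant: {variant}")
    if source not in {"ADE", "COCO"}:
        raise ValueError(f"unknown source: {source}")
    return PurePosixPath("ROPE") / split / "image" / f"{subset}-images" / variant / source
```

For example, `image_dir("train", "AAAAB", "bbox", "COCO")` yields `ROPE/train/image/AAAAB-images/bbox/COCO`.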
The `raw` directory contains the original images, while the `bbox` directory contains the same images with bounding boxes visualized on them.

```text
ROPE/
│
├── train/
│   ├── image/
│   │   ├── AAAAB-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── BAAAA-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── heterogenous-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── homogenous-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── mixed-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   ├── AAAAB_data.json
│   ├── BAAAA_data.json
│   ├── merged_heterogenous_data.json
│   ├── merged_homogenous_data.json
│   ├── merged_mixed_data.json
├── validation/                # mirrors the train/ structure
│   ├── image/
│   │   ├── AAAAB-images/
│   │   ├── BAAAA-images/
│   │   ├── heterogenous-images/
│   │   ├── homogenous-images/
│   │   ├── mixed-images/
│   ├── AAAAB_data.json
│   ├── BAAAA_data.json
│   ├── merged_heterogenous_data.json
│   ├── merged_homogenous_data.json
│   ├── merged_mixed_data.json
├── .gitattributes
├── README.md
├── train.zip
├── validation.zip
```

## JSON File Structure

Each `*_data.json` annotation file follows the schema below:

```json
{
  "features": [
    { "name": "folder", "dtype": "string" },
    { "name": "filename", "dtype": "string" },
    {
      "name": "source",
      "dtype": "struct",
      "fields": [
        { "name": "database", "dtype": "string" },
        { "name": "image_id", "dtype": "string" },
        { "name": "coco_id", "dtype": "string" },
        { "name": "flickr_id", "dtype": "string" }
      ]
    },
    {
      "name": "size",
      "dtype": "struct",
      "fields": [
        { "name": "width", "dtype": "int32" },
        { "name": "height", "dtype": "int32" },
        { "name": "depth", "dtype": "int32" }
      ]
    },
    { "name": "segmented", "dtype": "int32" },
    {
      "name": "objects",
      "dtype": "list",
      "item": {
        "dtype": "struct",
        "fields": [
          { "name": "name", "dtype": "string" },
          { "name": "object_id", "dtype": "string" },
          { "name": "difficult", "dtype": "int32" },
          {
            "name": "bndbox",
            "dtype": "struct",
            "fields": [
              { "name": "xmin", "dtype": "int32" },
              { "name": "ymin", "dtype": "int32" },
              { "name": "xmax", "dtype": "int32" },
              { "name": "ymax", "dtype": "int32" }
            ]
          },
          { "name": "area", "dtype": "int32" },
          { "name": "bbox_number", "dtype": "int32" }
        ]
      }
    },
    { "name": "relations", "dtype": "list", "item": { "dtype": "string" } },
    { "name": "object_set", "dtype": "list", "item": { "dtype": "string" } },
    { "name": "data_source", "dtype": "string" }
  ]
}
```

## Data Annotation

The dataset used in this study is constructed following the guidelines and protocols outlined by the SLED group. Detailed information and code about the data annotation process can be found in the official repository. For more information, please visit the [dataset construction guidelines](https://github.com/sled-group/moh/tree/main/data-annotation).

## Citation

**BibTeX:**

```bibtex
@inproceedings{chen2024multiobject,
    title={Multi-Object Hallucination in Vision Language Models},
    author={Chen, Xuweiyi and Ma, Ziqiao and Zhang, Xuejun and Xu, Sihan and Qian, Shengyi and Yang, Jianing and Fouhey, David and Chai, Joyce},
    booktitle={3rd Workshop on Advances in Language and Vision Research (ALVR)},
    year={2024}
}
```