---
task_categories:
  - question-answering
  - text-classification
license: apache-2.0
language:
  - en
size_categories:
  - 1K<n<10K
---

Dataset Card for ROPE

The dataset used in this study is designed to evaluate and analyze multi-object hallucination by leveraging existing panoptic segmentation datasets. Specifically, it includes data from MSCOCO-Panoptic and ADE20K, ensuring access to diverse objects and their instance-level semantic annotations. For more information, please visit Multi-Object Hallucination.

Dataset Construction

The dataset is divided into several subsets based on the distribution of object classes within each image at test time. This division allows for a more granular analysis of how different distributions affect the hallucination behavior of large vision-language models (LVLMs).

  • Homogeneous: All tested objects in an image belong to the same class (e.g., AAAAA).
  • Heterogeneous: All tested objects in an image belong to different classes (e.g., ABCDE).
  • In-the-Wild: A mixed distribution where the tested objects are randomly chosen and ordered within each image.
  • Adversarial: A subset designed to challenge the models with difficult object distributions (e.g., AAAAB, BAAAA).
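
To make the taxonomy concrete, here is a small sketch of how a tested-object class sequence maps to these subsets. `classify_distribution` is a hypothetical helper, not part of the dataset tooling, and its adversarial check only approximates the AAAAB/BAAAA patterns listed above:

```python
def classify_distribution(classes: str) -> str:
    """Map a class sequence like "AAAAB" to its ROPE subset name.

    Illustrative only: the real subsets are fixed at dataset-construction
    time, not derived by this rule.
    """
    counts = {c: classes.count(c) for c in set(classes)}
    if len(counts) == 1:
        return "homogeneous"       # e.g. AAAAA
    if len(counts) == len(classes):
        return "heterogeneous"     # e.g. ABCDE
    if len(counts) == 2 and min(counts.values()) == 1:
        return "adversarial"       # e.g. AAAAB, BAAAA
    return "in-the-wild"           # any other mix
```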

Dataset Statistics

Training Data Statistics

| Subset | Total | COCO | ADE |
| ------ | ----- | ---- | --- |
| Wild   | 1539  | 732  | 807 |
| Hom.   | 312   | 168  | 144 |
| Het.   | 400   | 200  | 200 |
| Adv.   | 168   | 54   | 114 |

Validation Data Statistics

| Subset | Total | COCO | ADE |
| ------ | ----- | ---- | --- |
| Wild   | 1172  | 547  | 625 |
| Hom.   | 490   | 289  | 201 |
| Het.   | 246   | 76   | 170 |
| Adv.   | 334   | 170  | 164 |

Dataset File Structure

The ROPE dataset is split into training and validation directories, each containing images grouped by their object-class distribution. Each image directory has a raw/ subdirectory with the original images and a bbox/ subdirectory with the same images overlaid with their bounding boxes, each further divided by source (ADE or COCO).

ROPE/
│
├── train/
│   ├── image/
│   │   ├── AAAAB-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── BAAAA-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── heterogenous-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── homogenous-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   ├── mixed-images/
│   │   │   ├── bbox/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   │   │   ├── raw/
│   │   │   │   ├── ADE/
│   │   │   │   ├── COCO/
│   ├── AAAAB_data.json
│   ├── BAAAA_data.json
│   ├── merged_heterogenous_data.json
│   ├── merged_homogenous_data.json
│   ├── merged_mixed_data.json
│
├── validation/  # same structure as train/
│   ├── image/
│   │   ├── AAAAB-images/
│   │   ├── BAAAA-images/
│   │   ├── heterogenous-images/
│   │   ├── homogenous-images/
│   │   ├── mixed-images/
│   ├── AAAAB_data.json
│   ├── BAAAA_data.json
│   ├── merged_heterogenous_data.json
│   ├── merged_homogenous_data.json
│   ├── merged_mixed_data.json
│
├── .gitattributes
├── README.md
├── train.zip
├── validation.zip

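A minimal sketch of navigating this layout. The `subset_dir` helper is illustrative, assuming the archives are extracted to a ROPE/ root laid out exactly as shown above:

```python
from pathlib import Path

def subset_dir(root: str, split: str, subset: str, kind: str, source: str) -> Path:
    """Build the directory holding one image group, following the tree above.

    split:  "train" or "validation"
    subset: e.g. "homogenous-images" or "AAAAB-images"
    kind:   "raw" (original images) or "bbox" (box visualizations)
    source: "ADE" or "COCO"
    """
    return Path(root) / split / "image" / subset / kind / source
```

For example, `subset_dir("ROPE", "train", "AAAAB-images", "raw", "COCO").glob("*")` would enumerate the raw COCO-sourced images of the train AAAAB subset.
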
JSON File Structure

{
  "features": [
    {
      "name": "folder",
      "dtype": "string"
    },
    {
      "name": "filename",
      "dtype": "string"
    },
    {
      "name": "source",
      "dtype": "struct",
      "fields": [
        {
          "name": "database",
          "dtype": "string"
        },
        {
          "name": "image_id",
          "dtype": "string"
        },
        {
          "name": "coco_id",
          "dtype": "string"
        },
        {
          "name": "flickr_id",
          "dtype": "string"
        }
      ]
    },
    {
      "name": "size",
      "dtype": "struct",
      "fields": [
        {
          "name": "width",
          "dtype": "int32"
        },
        {
          "name": "height",
          "dtype": "int32"
        },
        {
          "name": "depth",
          "dtype": "int32"
        }
      ]
    },
    {
      "name": "segmented",
      "dtype": "int32"
    },
    {
      "name": "objects",
      "dtype": "list",
      "item": {
        "dtype": "struct",
        "fields": [
          {
            "name": "name",
            "dtype": "string"
          },
          {
            "name": "object_id",
            "dtype": "string"
          },
          {
            "name": "difficult",
            "dtype": "int32"
          },
          {
            "name": "bndbox",
            "dtype": "struct",
            "fields": [
              {
                "name": "xmin",
                "dtype": "int32"
              },
              {
                "name": "ymin",
                "dtype": "int32"
              },
              {
                "name": "xmax",
                "dtype": "int32"
              },
              {
                "name": "ymax",
                "dtype": "int32"
              }
            ]
          },
          {
            "name": "area",
            "dtype": "int32"
          },
          {
            "name": "bbox_number",
            "dtype": "int32"
          }
        ]
      }
    },
    {
      "name": "relations",
      "dtype": "list",
      "item": {
        "dtype": "string"
      }
    },
    {
      "name": "object_set",
      "dtype": "list",
      "item": {
        "dtype": "string"
      }
    },
    {
      "name": "data_source",
      "dtype": "string"
    }
  ]
}
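
Assuming each merged_*_data.json file holds a list of records matching this schema (an assumption worth verifying against the files themselves), the annotations can be read along these lines:

```python
import json

def load_records(path):
    """Load one annotation file, e.g. train/merged_homogenous_data.json."""
    with open(path) as f:
        return json.load(f)

def object_boxes(record):
    """Extract (class name, (xmin, ymin, xmax, ymax)) for each annotated object."""
    return [
        (obj["name"],
         (obj["bndbox"]["xmin"], obj["bndbox"]["ymin"],
          obj["bndbox"]["xmax"], obj["bndbox"]["ymax"]))
        for obj in record["objects"]
    ]
```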

Annotation Protocol

The dataset is constructed following the guidelines and protocols outlined by the SLED group. Detailed information and code for the data annotation process can be found in the official repository.

For more information, please visit the dataset construction guidelines.

Citation

BibTeX:

@inproceedings{chen2024multiobject,
  title={Multi-Object Hallucination in Vision Language Models},
  author={Chen, Xuweiyi and Ma, Ziqiao and Zhang, Xuejun and Xu, Sihan and Qian, Shengyi and Yang, Jianing and Fouhey, David and Chai, Joyce},
  booktitle={3rd Workshop on Advances in Language and Vision Research (ALVR)},
  year={2024}
}