
Pix2Cap COCO

Dataset Description

Pix2Cap COCO is the first pixel-level captioning dataset derived from the COCO 2017 panoptic dataset, designed to provide more precise visual descriptions than traditional region-level captioning datasets. It consists of 20,550 images, partitioned into a training set (18,212 images) and a validation set (2,338 images) that mirrors the original COCO split. The dataset includes 167,254 detailed pixel-level captions, averaging 22.94 words each. Unlike datasets such as Visual Genome, which contain significant redundancy, Pix2Cap COCO provides exactly one caption per mask, eliminating repetition and improving the clarity of object representation.

By tightly matching each caption to the pixels it describes, Pix2Cap COCO supports tasks such as visual understanding, spatial reasoning, and object interaction analysis, and its larger number of images and more detailed captions offer significant improvements over existing region-level captioning datasets.
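
The annotations appear to follow the COCO panoptic layout (top-level info, licenses, images, annotations, and categories fields), with each entry in an annotation's segments_info carrying an extra description field that holds the pixel-level caption. The snippet below is only a minimal reading sketch; the annotation file name is a placeholder and should be replaced with the file actually shipped in this repository.

    import json

    # Minimal sketch: the file name is a placeholder, not the official name.
    with open("pix2cap_coco_train.json") as f:
        data = json.load(f)

    # Map COCO panoptic category ids to readable names.
    cat_names = {c["id"]: c["name"] for c in data["categories"]}

    # Each annotation describes one image; every segment in segments_info
    # carries its own pixel-level caption in the "description" field.
    for ann in data["annotations"][:3]:
        for seg in ann["segments_info"]:
            print(cat_names[seg["category_id"]], "->", seg["description"])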

Dataset Version

1.0

Languages

English

Task(s)

  • Pixel-level Captioning: Generating detailed pixel-level captions for segmented objects in images.
  • Visual Reasoning: Analyzing object relationships and spatial interactions in scenes.

Use Case(s)

Pix2Cap COCO is designed for tasks that require detailed visual understanding and caption generation, including:

  • Object detection and segmentation with contextual captions
  • Spatial reasoning and understanding spatial relations
  • Object interaction analysis and reasoning
  • Improving visual language models by providing more detailed descriptions of visual content

Example(s)

file_name: 000000231527.png
descriptions:
1: Another glass cup filled with orange jam or marmalade but slightly smaller in size.
2: A glass cup filled with orange jam or marmalade, it has an open top and is placed to the left side on the table.
3: A wooden-handled knife rests on the table close to a sliced piece of orange.
4: Positioned next to this whole uncut orange has a bright color indicating ripeness.
5: This is a half-sliced orange with juicy pulp visible, placed on the white cloth of the dining table.
6: A juicy slice of an orange that lies flat on the table near the knife.
7: A whole uncut orange sitting next to another one, both are positioned at the top right corner of the image.
8: The dining table is covered with a white cloth, and various items are placed on it, including cups of orange jam, slices of oranges, and a knife.

file_name: 000000357081.png
descriptions:
1: The grass is lush and green, covering the ground uniformly. It appears well-maintained and provides a natural base for the other objects in the image.
2: The trees are in the background, their outlines slightly blurred but still visible. They stand tall and provide a contrasting dark green backdrop to the bright foreground.
3: This cow is larger, with a white body adorned with large black spots. It's standing upright and appears healthy and well-fed.
4: This smaller cow has similar coloring to it but is distinguished by its size and posture - it's head is down, suggesting it might be grazing.

file_name: 000000407298.png
descriptions:
1: A child is visible from the chest up, wearing a light blue shirt. The child has curly hair and a cheerful expression, with eyes looking towards something interesting.
2: The glove is tan and well-worn, with dark brown lacing. It's open and appears to be in the act of catching a ball.
3: The background consists of vibrant green grass illuminated by natural light, providing a fresh and open atmosphere.
4: A white baseball with brown stitching is partially inside the baseball glove, appearing as if it has just been caught.
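
The file names above refer to panoptic segmentation PNGs. Assuming the standard COCO panoptic id encoding (segment id = R + 256·G + 256²·B), a small helper like the sketch below can pair each segment's binary mask with its caption; it is not part of any official tooling, and paths depend on how the archives are unpacked.

    import numpy as np
    from PIL import Image

    def masks_with_captions(panoptic_png_path, segments_info):
        """Yield (binary mask, caption) pairs for one image.

        Assumes the standard COCO panoptic encoding, where a pixel's segment
        id is recovered as R + 256*G + 256**2*B from the RGB panoptic PNG.
        """
        rgb = np.asarray(Image.open(panoptic_png_path).convert("RGB"), dtype=np.uint32)
        seg_ids = rgb[..., 0] + 256 * rgb[..., 1] + (256 ** 2) * rgb[..., 2]
        for seg in segments_info:
            yield seg_ids == seg["id"], seg["description"]

    # Hypothetical usage -- directory layout and file names are placeholders:
    # for mask, caption in masks_with_captions("panoptic_train2017/000000231527.png",
    #                                          annotation["segments_info"]):
    #     print(int(mask.sum()), "pixels:", caption)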

Dataset Analysis

Data Scale

  • Total Images: 20,550
  • Training Images: 18,212
  • Validation Images: 2,338
  • Total Captions: 167,254
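
Under the same annotation layout assumed earlier, these totals can be recomputed from the split files; the sketch below uses placeholder file names.

    import json

    # Placeholder file names; adjust to the annotation files in this repository.
    splits = {"train": "pix2cap_coco_train.json", "val": "pix2cap_coco_val.json"}

    for name, path in splits.items():
        with open(path) as f:
            data = json.load(f)
        n_images = len(data["images"])
        n_captions = sum(len(a["segments_info"]) for a in data["annotations"])
        print(f"{name}: {n_images} images, {n_captions} captions")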

Caption Quality

  • Average Words per Caption: 22.94
  • Average Sentences per Caption: 2.73
  • Average Nouns per Caption: 7.08
  • Average Adjectives per Caption: 3.46
  • Average Verbs per Caption: 3.42

Pix2Cap COCO captions are significantly more detailed than those in datasets such as Visual Genome, which averages only 5.09 words per caption. These richer captions capture intricate relationships within scenes and show a balanced use of linguistic elements. Pix2Cap COCO also excels at describing complex spatial relationships, with hierarchical annotations covering both coarse relations (e.g., 'next to', 'above') and fine-grained ones (e.g., 'partially occluded by', 'vertically aligned with').
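
For reference, per-caption statistics of this kind can be approximated with an off-the-shelf part-of-speech tagger. The exact tooling used to produce the numbers above is not specified, so the sketch below (using spaCy) is an approximation rather than a reproduction.

    import spacy

    # Sketch only: the original tokenizer/tagger is not documented, so counts
    # from this snippet will differ slightly from the reported statistics.
    nlp = spacy.load("en_core_web_sm")

    def caption_stats(caption):
        doc = nlp(caption)
        return {
            "words": sum(1 for t in doc if not (t.is_punct or t.is_space)),
            "sentences": sum(1 for _ in doc.sents),
            "nouns": sum(1 for t in doc if t.pos_ == "NOUN"),
            "adjectives": sum(1 for t in doc if t.pos_ == "ADJ"),
            "verbs": sum(1 for t in doc if t.pos_ == "VERB"),
        }

    print(caption_stats("A wooden-handled knife rests on the table close to a sliced piece of orange."))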

License

This dataset is released under the Apache 2.0 License. Please ensure that you comply with its terms before using the dataset.

Citation

If you use this dataset in your work, please cite the original paper:

Acknowledgments

Pix2Cap COCO is built upon the COCO 2017 panoptic dataset, with the annotation pipeline powered by Set-of-Mark and GPT-4V.
