

VL-ICL Bench

VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning

[Webpage] [Paper] [Code]

Image-to-Text Tasks

In all image-to-text tasks, image is a list of image paths (typically with one item; in the interleaved cases there are two).

Fast Open-Ended MiniImageNet

Frozen introduces the task of fast concept binding for MiniImageNet. The benchmark has a fixed structure, so only the given support examples can be used for a given query example. We store all support images in the support directory and all query images in the query directory. We provide a support.json file with information about the support images, but it does not need to be used: because of the fixed structure of the benchmark, all needed information is stored in the query.json file. For each query example, this file includes information about the query image, the list of artificial classes that can be used to construct the task, and five examples for each class (we store the image paths and the caption that refers to all of these examples). The benchmark uses a 5-way 5-shot setting, but it is possible to take only the query example's class together with between one and four other classes; for our experiments we use a 2-way setting. For each class, up to 5 support examples can be taken. We provide 200 query examples and 5000 support examples in total, but this can be extended to up to 2500 query examples with the corresponding number of support examples.

Source of data: https://fh295.github.io/frozen.html
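As a sketch of how a 2-way 5-shot episode could be assembled from a query.json record (the field names below are illustrative assumptions; check them against the actual file):

```python
import random

# Hypothetical query.json record; the real field names may differ.
query_record = {
    "image": ["fast_mini/query/example.jpg"],
    "classes": ["dax", "blicket"],            # artificial class names
    "answer": "dax",                          # class of the query image
    "support": {                              # five examples per class
        "dax":     [{"image": ["fast_mini/support/a%d.jpg" % i],
                     "caption": "This is a dax."} for i in range(5)],
        "blicket": [{"image": ["fast_mini/support/b%d.jpg" % i],
                     "caption": "This is a blicket."} for i in range(5)],
    },
}

def build_episode(record, n_way=2, n_shot=5, seed=0):
    """Sample an n-way n-shot episode that includes the query's own class."""
    rng = random.Random(seed)
    other = [c for c in record["classes"] if c != record["answer"]]
    classes = [record["answer"]] + rng.sample(other, n_way - 1)
    support = []
    for cls in classes:
        support += rng.sample(record["support"][cls], n_shot)
    rng.shuffle(support)  # avoid presenting the classes in blocks
    return support, record["image"], record["answer"]

support, query_image, label = build_episode(query_record)
print(len(support), label)  # 10 support examples in a 2-way 5-shot episode
```

The same sampling logic extends to any number of ways by drawing more classes from the record's class list.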

CLEVR Count Induction

We repurpose the CLEVR dataset to construct tasks where the goal is to count the number of objects with a given characteristic, for example all large objects. The available attributes are shape, size, material and colour. The specified criterion is included within the question, for example size: large, and the count itself is the answer. We have 800 images in the support set and 200 in the query set.

Source of data: https://cs.stanford.edu/people/jcjohns/clevr/
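The induced rule can be illustrated with a small sketch: given a question of the form attribute: value and a scene's object list, the answer is the number of matching objects. The scene representation below is hypothetical (the dataset itself stores only the rendered image and the final count):

```python
# A hypothetical CLEVR-style scene description.
scene_objects = [
    {"shape": "cube",     "size": "large", "material": "metal",  "color": "red"},
    {"shape": "sphere",   "size": "small", "material": "rubber", "color": "blue"},
    {"shape": "cylinder", "size": "large", "material": "rubber", "color": "green"},
]

def count_objects(question, objects):
    """Count objects whose attribute matches an 'attribute: value' question."""
    attribute, value = (part.strip() for part in question.split(":"))
    return sum(1 for obj in objects if obj[attribute] == value)

print(count_objects("size: large", scene_objects))       # 2
print(count_objects("material: rubber", scene_objects))  # 2
```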

Operator Induction

The goal of this task is to predict the result of a mathematical expression. Each image contains text of the form A ? B, where A and B are digits between 0 and 9. We randomly split all available combinations into 80 support and 60 query examples. To construct a task we sample the images completely at random, sample the operation that ? represents, and then take the corresponding answer. For each support example we store a list of 3 answers, [A+B, A-B, AxB], so the result for a given operator can be accessed with the appropriate index. The question that we ask is always What is the result of the following mathematical expression?. We generated the images using the PIL library, with Arial font at size 100 on images of size 256x256. We store the operator for each query example, and there are 20 examples per operator.
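Indexing the stored answer list can be sketched as follows; the operator-to-index mapping shown here is an assumption taken from the order of the list [A+B, A-B, AxB] described above:

```python
# Assumed index order, following the stored answer list [A+B, A-B, AxB].
OPERATOR_INDEX = {"+": 0, "-": 1, "x": 2}

def answer_list(a, b):
    """The three candidate answers stored for a support example."""
    return [a + b, a - b, a * b]

def result(a, b, operator):
    """Look up the answer for the sampled operator by its list index."""
    return answer_list(a, b)[OPERATOR_INDEX[operator]]

print(result(7, 3, "+"))  # 10
print(result(7, 3, "-"))  # 4
print(result(7, 3, "x"))  # 21
```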

Interleaved Operator Induction

We also include an alternative interleaved version of operator induction where we input the two digits as separate images. The question that we ask is What is the result of the following mathematical expression?.

TextOCR

In TextOCR the goal is to recognize the text shown in the red rectangle; in our version of TextOCR there is always exactly one red rectangle per image. We set aside 800 support examples from the original training set and 200 query examples from the validation set. To simplify the task we use the largest text in each image, and we make sure to filter out all invalid cases (marked as . in the annotations) as well as rotated images. The question asked is What text is shown in the red box? and the answer is the text itself. We keep various metadata, including the image and annotation ids, the width, height, box coordinates, the points for the text, and the overall area.

Source of data: https://textvqa.org/textocr/
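The selection step (keep only valid annotations, take the largest one) can be sketched on illustrative records; the field names here follow the spirit of the TextOCR annotations (a lone . marks illegible text), but the exact schema should be checked against the source:

```python
# Hypothetical annotation records; field names are illustrative.
annotations = [
    {"image_id": "img1", "utf8_string": "OPEN", "area": 1200.0},
    {"image_id": "img1", "utf8_string": ".",    "area": 5000.0},  # illegible
    {"image_id": "img1", "utf8_string": "CAFE", "area": 3400.0},
]

def largest_valid_text(records):
    """Drop invalid annotations (marked '.') and return the largest by area."""
    valid = [r for r in records if r["utf8_string"] != "."]
    return max(valid, key=lambda r: r["area"])

print(largest_valid_text(annotations)["utf8_string"])  # CAFE
```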

MiniImageNet Matching

In this variation of MiniImageNet the task is to predict whether two examples are from the same class. We have 400 query pairs and 1600 support pairs, evenly distributed between same-class and different-class pairs. Each support item includes both a pair of examples from the same class and a pair of examples from different classes. The question is always Do the two images satisfy the induced relationship? and the answer is either Yes or No. We used our earlier Fast Open-Ended MiniImageNet to create this matching dataset.

Source of data: https://fh295.github.io/frozen.html
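The construction of same-class and different-class pairs can be sketched as below; the class-to-images mapping and helper are hypothetical, not part of the released files:

```python
import random

def make_matching_pair(class_to_images, same, rng):
    """Build one (image_a, image_b, label) pair from a class -> images mapping."""
    if same:
        cls = rng.choice(sorted(class_to_images))
        a, b = rng.sample(class_to_images[cls], 2)  # two distinct images
        return a, b, "Yes"
    cls_a, cls_b = rng.sample(sorted(class_to_images), 2)  # two distinct classes
    return rng.choice(class_to_images[cls_a]), rng.choice(class_to_images[cls_b]), "No"

images = {"dax": ["d0.jpg", "d1.jpg", "d2.jpg"], "blicket": ["b0.jpg", "b1.jpg"]}
rng = random.Random(0)
print(make_matching_pair(images, True, rng)[2])   # Yes
print(make_matching_pair(images, False, rng)[2])  # No
```

Alternating the same flag over an even number of pairs yields the balanced Yes/No distribution described above.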

Text-to-Image Tasks

Fast Open-Ended T2I MiniImageNet

We introduce a variation of Fast Open-Ended MiniImageNet where the goal is to generate an image of the imaginary class indicated by the support examples. The details are similar to our other version of Fast Open-Ended MiniImageNet, but the question is instead Generate a followed by the name of the imaginary class. We store the imaginary class in the task_label field; for query examples the answer field holds the real-world label (for support examples it holds the imaginary class). The real-world labels were obtained from the real-world version of the benchmark and can be used to assess whether the generated image represents the desired imaginary class.

Source of data: https://fh295.github.io/frozen.html

CoBSAT

We reuse the CoBSAT benchmark for few-shot image generation tasks. We have 800 support and 200 query examples, organized so that for each of the 100 scenarios (defined by the task, e.g. colour, and by the choice of the latent variable, e.g. the object value) we have 8 support and 2 query examples. When sampling support examples, we need to ensure that they share the same task and the same value of the latent variable latent, which can be either attribute or object. The question contains the value of the latent variable and defines what image should be generated, and image is the generated image. The answer is a list [value of the latent variable, value of the non-latent variable]. For each image we also store the values of object and attribute.

Source of data: https://github.com/UW-Madison-Lee-Lab/CoBSAT
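The scenario constraint (support examples must share the task and the latent variable's value) can be sketched by grouping records on that key; the field names below are assumptions loosely based on the description above, not the exact released schema:

```python
from collections import defaultdict

# Hypothetical CoBSAT-style records; field names are illustrative.
examples = [
    {"task": "color", "latent": "object", "object": "car", "attribute": "red"},
    {"task": "color", "latent": "object", "object": "car", "attribute": "blue"},
    {"task": "color", "latent": "object", "object": "hat", "attribute": "red"},
]

def scenario_key(example):
    """A scenario is defined by the task and the value of the latent variable."""
    return (example["task"], example[example["latent"]])

scenarios = defaultdict(list)
for ex in examples:
    scenarios[scenario_key(ex)].append(ex)

# Only examples in the same scenario may serve as support for each other.
print(sorted(len(group) for group in scenarios.values()))  # [1, 2]
```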

Text ICL Variations

We have also released text variations of the CLEVR, Operator Induction, and interleaved Operator Induction datasets to reproduce the comparison of multimodal and text ICL (Figure 7). You can either use the query.json in the {dataset}_text/ folder for "text support set + text query", or the query.json in the {dataset}/ folder for "text support set + multimodal query".
