index: int32
question: string
answer: string
A: string
B: string
C: string
D: string
E: string
F: string
G: string
H: string
I: string
image: images list
category: string
l2-category: string
split: string
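The rows that follow all use this schema. A minimal loading sketch in Python, assuming the dump is stored as a Parquet file (the file name val_split.parquet is hypothetical, and how unused option letters are filled is not specified here); it counts the VAL questions per task:

import pandas as pd

# Hypothetical file name; this dump does not say how the rows are stored on disk.
df = pd.read_parquet("val_split.parquet")

# `answer` holds the correct choice letter; columns A..I hold the option texts
# (questions with fewer choices leave the higher letters unused).
val = df[df["split"] == "VAL"]
print(val.groupby(["l2-category", "category"]).size())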
200
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 500 and the height is 166. QUESTION:right watch
A
[0.672, 0.012, 0.992, 0.976]
[0.672, 0.012, 0.94, 1.127]
[0.672, 0.012, 0.936, 0.97]
[0.512, 0.0, 0.832, 0.964]
referring_detection
visual_grounding
VAL
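The referring_detection prompts above and below spell out the coordinate convention: boxes are [x1, y1, x2, y2] normalized to the range 0 to 1, with the pixel width and height of each image stated in the question. A minimal sketch, assuming you want to map a normalized box back to pixel coordinates; the function name is illustrative:

def denormalize_box(box, width, height):
    """Convert a normalized [x1, y1, x2, y2] box to pixel coordinates.

    [0, 0] is the top-left corner and [1, 1] the bottom-right corner,
    as described in the referring_detection prompts.
    """
    x1, y1, x2, y2 = box
    return [x1 * width, y1 * height, x2 * width, y2 * height]

# Option A of index 200, for the 500 x 166 image mentioned in that prompt.
print(denormalize_box([0.672, 0.012, 0.992, 0.976], 500, 166))
# -> [336.0, 1.992, 496.0, 162.016]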
201
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 640 and the height is 426. QUESTION:woman facing us
B
[0.497, 0.113, 0.925, 0.277]
[0.553, 0.322, 0.752, 0.739]
[0.553, 0.322, 0.761, 0.784]
[0.553, 0.322, 0.761, 0.714]
referring_detection
visual_grounding
VAL
202
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 480 and the height is 640. QUESTION:a women was smilling and cooking
C
[0.019, 0.628, 0.435, 0.734]
[0.463, 0.186, 0.492, 0.341]
[0.204, 0.164, 0.544, 0.991]
[0.217, 0.173, 0.556, 1.0]
referring_detection
visual_grounding
VAL
203
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 500 and the height is 375. QUESTION:The freshly made cake has had a few slices taken out of it already.
A
[0.604, 0.357, 0.876, 0.584]
[0.676, 0.416, 0.948, 0.643]
[0.604, 0.357, 0.844, 0.587]
[0.604, 0.357, 0.836, 0.579]
referring_detection
visual_grounding
VAL
204
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 612 and the height is 612. QUESTION:A wooden table with a bowl on it
D
[0.502, 0.342, 1.033, 0.523]
[0.418, 0.353, 0.668, 0.632]
[0.516, 0.307, 1.0, 0.461]
[0.502, 0.342, 0.985, 0.495]
referring_detection
visual_grounding
VAL
205
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 640 and the height is 480. QUESTION:A small bench farthest to the left of similar benches.
D
[0.766, 0.64, 0.972, 0.975]
[0.109, 0.71, 0.234, 0.787]
[0.188, 0.469, 0.463, 0.938]
[0.116, 0.367, 0.391, 0.835]
referring_detection
visual_grounding
VAL
206
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 640 and the height is 425. QUESTION:A pizza being bent.
A
[0.559, 0.296, 0.93, 0.602]
[0.03, 0.139, 0.127, 0.638]
[0.559, 0.296, 0.87, 0.546]
[0.63, 0.231, 1.0, 0.536]
referring_detection
visual_grounding
VAL
207
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 500 and the height is 375. QUESTION:A man in jeans raising his Wii Remote.
A
[0.364, 0.155, 0.608, 0.989]
[0.364, 0.155, 0.61, 0.997]
[0.036, 0.48, 0.48, 0.624]
[0.486, 0.0, 0.73, 0.835]
referring_detection
visual_grounding
VAL
208
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 426 and the height is 640. QUESTION:Skateboarder wearing shoes with blue laces.
C
[0.732, 0.472, 0.967, 0.548]
[0.031, 0.091, 0.561, 0.855]
[0.031, 0.091, 0.664, 0.88]
[0.031, 0.091, 0.709, 0.952]
referring_detection
visual_grounding
VAL
209
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 500 and the height is 375. QUESTION:A lunch tray with a blue and white toy.
D
[0.628, 0.0, 0.936, 0.715]
[0.324, 0.264, 0.348, 0.507]
[0.55, 0.061, 0.928, 0.411]
[0.692, 0.16, 1.0, 0.875]
referring_detection
visual_grounding
VAL
210
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 612 and the height is 612. QUESTION:man in tan shirt riding a street bike with a boy sitting on the back
B
[0.193, 0.662, 0.379, 1.0]
[0.261, 0.621, 0.448, 0.959]
[0.261, 0.621, 0.482, 1.016]
[0.261, 0.621, 0.438, 0.995]
referring_detection
visual_grounding
VAL
211
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 612 and the height is 612. QUESTION:a chocolate donut
C
[0.358, 0.338, 0.737, 0.526]
[0.186, 0.35, 0.531, 0.508]
[0.358, 0.338, 0.703, 0.497]
[0.358, 0.338, 0.719, 0.528]
referring_detection
visual_grounding
VAL
212
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 640 and the height is 516. QUESTION:woman on left
A
[0.156, 0.153, 0.453, 0.622]
[0.117, 0.721, 0.245, 0.983]
[0.016, 0.029, 0.312, 0.498]
[0.65, 0.828, 0.697, 0.969]
referring_detection
visual_grounding
VAL
213
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 640 and the height is 432. QUESTION:BEACH UMBRELLA
D
[0.389, 0.537, 0.881, 0.917]
[0.005, 0.014, 0.963, 0.262]
[0.287, 0.495, 0.319, 0.845]
[0.005, 0.014, 0.816, 0.303]
referring_detection
visual_grounding
VAL
214
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 375 and the height is 500. QUESTION:the empty pavilion
C
[0.392, 0.422, 0.808, 0.78]
[0.003, 0.55, 1.112, 0.694]
[0.003, 0.55, 0.941, 0.688]
[0.003, 0.55, 1.032, 0.676]
referring_detection
visual_grounding
VAL
215
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 427 and the height is 640. QUESTION:Water glass on a table
C
[0.04, 0.498, 0.379, 0.692]
[0.614, 0.205, 0.916, 0.62]
[0.101, 0.178, 0.363, 0.388]
[0.101, 0.178, 0.398, 0.37]
referring_detection
visual_grounding
VAL
216
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 640 and the height is 480. QUESTION:The urinal on the left.
D
[0.138, 0.485, 0.333, 0.829]
[0.205, 0.333, 0.422, 0.627]
[0.181, 0.415, 0.644, 0.623]
[0.205, 0.333, 0.4, 0.677]
referring_detection
visual_grounding
VAL
217
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 640 and the height is 480. QUESTION:The bed nearest the window
D
[0.608, 0.481, 0.961, 0.779]
[0.484, 0.577, 0.947, 0.867]
[0.43, 0.554, 0.822, 0.85]
[0.608, 0.481, 1.0, 0.777]
referring_detection
visual_grounding
VAL
218
Please provide the bounding box coordinates for the described object or area using the format [x1, y1, x2, y2]. Here, [x1, y1] represent the top-left coordinates and [x2, y2] the bottom-right coordinates within a normalized range of 0 to 1, where [0, 0] is the top-left corner and [1, 1] is the bottom-right corner of the image. Note that the width of the input image is 500 and the height is 331. QUESTION:The longest wood skis in the scene being carried.
C
[0.634, 0.296, 0.94, 0.468]
[0.556, 0.299, 0.816, 0.489]
[0.556, 0.299, 0.862, 0.471]
[0.594, 0.381, 0.9, 0.553]
referring_detection
visual_grounding
VAL
219
Following the structural and analogical relations, which image best completes the problem matrix?
B
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
220
Following the structural and analogical relations, which image best completes the problem matrix?
G
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
221
Following the structural and analogical relations, which image best completes the problem matrix?
G
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
222
Following the structural and analogical relations, which image best completes the problem matrix?
C
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
223
Following the structural and analogical relations, which image best completes the problem matrix?
D
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
224
Following the structural and analogical relations, which image best completes the problem matrix?
H
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
225
Following the structural and analogical relations, which image best completes the problem matrix?
B
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
226
Following the structural and analogical relations, which image best completes the problem matrix?
D
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
227
Following the structural and analogical relations, which image best completes the problem matrix?
A
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
228
Following the structural and analogical relations, which image best completes the problem matrix?
B
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
229
Following the structural and analogical relations, which image best completes the problem matrix?
C
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
230
Following the structural and analogical relations, which image best completes the problem matrix?
E
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
231
Following the structural and analogical relations, which image best completes the problem matrix?
F
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
232
Following the structural and analogical relations, which image best completes the problem matrix?
E
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
233
Following the structural and analogical relations, which image best completes the problem matrix?
H
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
234
Following the structural and analogical relations, which image best completes the problem matrix?
C
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
235
Following the structural and analogical relations, which image best completes the problem matrix?
D
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
236
Following the structural and analogical relations, which image best completes the problem matrix?
E
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
237
Following the structural and analogical relations, which image best completes the problem matrix?
G
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
238
Following the structural and analogical relations, which image best completes the problem matrix?
E
Choice 0
Choice 1
Choice 2
Choice 3
Choice 4
Choice 5
Choice 6
Choice 7
ravens_progressive_matrices
intelligence_quotient_test
VAL
239
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.479, 0.921) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1444 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
B
0
176
255
217
image_matting
pixel_level_perception
VAL
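The image_matting prompts ask for an alpha value between 0 (fully transparent) and 255 (fully opaque) at a normalized pixel coordinate. A minimal lookup sketch, assuming a ground-truth matte is available as a grayscale image; the file name matte_239.png is hypothetical:

from PIL import Image

def alpha_at(matte_path, x_norm, y_norm):
    """Look up the alpha value (0-255) at a normalized (x, y) coordinate.

    0 means fully transparent, 255 fully opaque, matching the definition
    used in the image_matting prompts.
    """
    matte = Image.open(matte_path).convert("L")  # single-channel alpha matte
    w, h = matte.size
    # Clamp to the last valid pixel index so x_norm = 1.0 stays inside the image.
    px = min(int(x_norm * w), w - 1)
    py = min(int(y_norm * h), h - 1)
    return matte.getpixel((px, py))

# Hypothetical matte file for index 239; the query point comes from that prompt.
print(alpha_at("matte_239.png", 0.479, 0.921))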
240
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.157, 1.124) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1919 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
B
0
4
34
255
image_matting
pixel_level_perception
VAL
241
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (1.328, 0.342) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1620 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
C
250
204
255
0
image_matting
pixel_level_perception
VAL
242
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.771, 0.557) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1439 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
A
255
181
0
227
image_matting
pixel_level_perception
VAL
243
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.894, 0.28) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1439 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
B
0
6
102
255
image_matting
pixel_level_perception
VAL
244
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.389, 0.688) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1620 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
C
9
0
34
255
image_matting
pixel_level_perception
VAL
245
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (1.327, 0.274) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1619 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
B
255
23
131
0
image_matting
pixel_level_perception
VAL
246
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.252, 1.258) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1613 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
A
255
0
168
127
image_matting
pixel_level_perception
VAL
247
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.973, 0.428) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1630 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
B
219
254
0
255
image_matting
pixel_level_perception
VAL
248
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.379, 0.574) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1620 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
A
107
0
77
255
image_matting
pixel_level_perception
VAL
249
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (1.144, 0.43) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1440 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
C
238
101
255
0
image_matting
pixel_level_perception
VAL
250
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.581, 0.94) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1620 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
A
0
42
255
37
image_matting
pixel_level_perception
VAL
251
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.62, 1.488) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1620 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
D
198
112
255
0
image_matting
pixel_level_perception
VAL
252
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.63, 0.881) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1620 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
D
255
177
0
254
image_matting
pixel_level_perception
VAL
253
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.245, 1.096) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1920 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
D
16
170
0
255
image_matting
pixel_level_perception
VAL
254
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.622, 0.216) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1655 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
D
0
10
255
254
image_matting
pixel_level_perception
VAL
255
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.88, 0.535) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1620 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
D
255
43
181
0
image_matting
pixel_level_perception
VAL
256
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.255, 0.334) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1618 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
D
241
255
0
247
image_matting
pixel_level_perception
VAL
257
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.927, 0.408) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1080 in width and 1437 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
D
0
255
84
243
image_matting
pixel_level_perception
VAL
258
You are a professional image matting expert. What is the alpha value of the pixel point at coordinates (0.196, 0.894) in the image for image matting purposes? The alpha value represents the degree of transparency of the salient object against the background at this specific pixel. In this context, an alpha value of 0 indicates complete transparency, meaning the pixel is entirely invisible, while an alpha value of 255 represents complete opacity, meaning the pixel is fully visible. The dimensions of the input image are given as 1620 in width and 1080 in height. The coordinates of the top left corner of the image are (0, 0), and those of the bottom right corner are (1.0, 1.0).
C
255
0
254
143
image_matting
pixel_level_perception
VAL
259
What is the depth (in meters) at the coordinates (0.225, 1.125) in the figure? The camera intrinsic parameters are as follows, Focal Length: 518.8579, Principal Point: (519.46961, 325.58245), Distortion Parameters: 253.73617
C
1.403
0.091
0.783
0.101
depth_estimation
pixel_level_perception
VAL
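The depth_estimation prompts give a normalized query point together with pinhole intrinsics (focal length and principal point). A minimal sketch, assuming a dense metric depth map is available as a NumPy array; it reads the depth at the query point and back-projects it into camera coordinates with the standard pinhole model, ignoring the distortion parameter. The depth map below is random placeholder data, not an actual scene:

import numpy as np

def depth_and_point(depth_map, x_norm, y_norm, fx, fy, cx, cy):
    """Read depth (in meters) at a normalized (x, y) query and back-project it.

    Uses the pinhole model X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy;
    lens distortion is not modeled here.
    """
    h, w = depth_map.shape
    u = min(int(x_norm * w), w - 1)
    v = min(int(y_norm * h), h - 1)
    z = float(depth_map[v, u])
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return z, (x, y, z)

# Placeholder 640 x 480 depth map; intrinsics copied from the prompt above,
# with an in-range example query point.
depth_map = np.random.uniform(0.5, 10.0, size=(480, 640)).astype(np.float32)
z, xyz = depth_and_point(depth_map, 0.5, 0.5, 518.8579, 518.8579, 519.46961, 325.58245)
print(z, xyz)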
260
What is the depth (in meters) at the coordinates (0.464, 1.144) in the figure? The camera intrinsic parameters are as follows, Focal Length: 518.8579, Principal Point: (519.46961, 325.58245), Distortion Parameters: 253.73617
B
2.179
3.213
5.608
4.223
depth_estimation
pixel_level_perception
VAL
261
What is the depth (in meters) at the coordinates (0.414, 1.21) in the figure? The camera intrinsic parameters are as follows, Focal Length: 518.8579, Principal Point: (519.46961, 325.58245), Distortion Parameters: 253.73617
D
5.661
0.96
3.897
3.603
depth_estimation
pixel_level_perception
VAL
262
What is the depth (in meters) at the coordinates (0.18, 0.715) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
A
27.1640625
19.926
53.428
31.937
depth_estimation
pixel_level_perception
VAL
263
What is the depth (in meters) at the coordinates (0.225, 1.259) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
A
11.56640625
17.426
20.768
11.259
depth_estimation
pixel_level_perception
VAL
264
What is the depth (in meters) at the coordinates (0.24, 2.88) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
B
4.458
8.50390625
4.764
12.443
depth_estimation
pixel_level_perception
VAL
265
What is the depth (in meters) at the coordinates (0.199, 2.096) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
B
3.413
12.7734375
2.555
9.476
depth_estimation
pixel_level_perception
VAL
266
What is the depth (in meters) at the coordinates (0.218, 1.465) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
D
4.312
18.945
20.441
12.02734375
depth_estimation
pixel_level_perception
VAL
267
What is the depth (in meters) at the coordinates (0.279, 1.661) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
A
6.80078125
2.975
7.84
5.655
depth_estimation
pixel_level_perception
VAL
268
What is the depth (in meters) at the coordinates (0.169, 1.472) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
A
32.703125
25.846
30.798
28.812
depth_estimation
pixel_level_perception
VAL
269
What is the depth (in meters) at the coordinates (0.211, 2.043) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
D
3.677
0.368
22.269
11.78125
depth_estimation
pixel_level_perception
VAL
270
What is the depth (in meters) at the coordinates (0.224, 2.947) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
B
12.785
11.17578125
21.288
15.106
depth_estimation
pixel_level_perception
VAL
271
What is the depth (in meters) at the coordinates (0.25, 1.216) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
D
1.848
13.267
6.072
8.4765625
depth_estimation
pixel_level_perception
VAL
272
What is the depth (in meters) at the coordinates (0.62, 1.129) in the figure? The camera intrinsic parameters are as follows, Focal Length: 518.8579, Principal Point: (519.46961, 325.58245), Distortion Parameters: 253.73617
B
2.002
1.279
2.213
1.074
depth_estimation
pixel_level_perception
VAL
273
What is the depth (in meters) at the coordinates (0.2, 2.147) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
D
11.525
25.068
17.117
14.87109375
depth_estimation
pixel_level_perception
VAL
274
What is the depth (in meters) at the coordinates (0.212, 2.131) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
D
14.065
23.368
22.82
13.359375
depth_estimation
pixel_level_perception
VAL
275
What is the depth (in meters) at the coordinates (0.252, 1.936) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
C
11.501
12.198
8.7578125
0.145
depth_estimation
pixel_level_perception
VAL
276
What is the depth (in meters) at the coordinates (0.171, 0.792) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
B
12.081
31.93359375
18.879
13.233
depth_estimation
pixel_level_perception
VAL
277
What is the depth (in meters) at the coordinates (0.164, 1.315) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
D
12.6
2.697
8.668
10.3828125
depth_estimation
pixel_level_perception
VAL
278
What is the depth (in meters) at the coordinates (0.259, 1.181) in the figure? The camera intrinsic parameters are as follows, Focal Length: 721.5377, Principal Point: (721.5377, 609.5593), Distortion Parameters: 172.854
C
4.899
14.012
7.96875
3.777
depth_estimation
pixel_level_perception
VAL
279
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 640.
B
('train', '450', '217', '113', '182'): [(0.703, 0.339), (0.177, 0.284)], ('train', '450', '217', '113', '182'): [(0.247, 0.836), (0.428, 0.031)]
('train', '260', '88', '416', '364'): [(0.406, 0.138), (0.65, 0.569)], ('train', '260', '88', '416', '364'): [(0.955, 0.431), (0.369, 0.775)]
('train', '595', '165', '142', '141'): [(0.93, 0.258), (0.222, 0.22)], ('train', '595', '165', '142', '141'): [(0.766, 0.548), (0.708, 0.723)]
('train', '277', '211', '135', '62'): [(0.433, 0.33), (0.211, 0.097)], ('train', '277', '211', '135', '62'): [(0.244, 0.669), (0.087, 0.791)]
pixel_localization
pixel_level_perception
VAL
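The pixel_localization prompts request detections in the form category: ((x1, y1), (x2, y2)), where (x1, y1) lies on the detected object and (x2, y2) lies outside it. A minimal parsing sketch for a model response in that format, assuming one or more detections appear in free text; the regex and function name are illustrative:

import re

# Matches spans such as "train: ((0.406, 0.138), (0.65, 0.569))".
DETECTION_RE = re.compile(
    r"(\w[\w ]*)\s*:\s*\(\((-?\d*\.?\d+),\s*(-?\d*\.?\d+)\),\s*\((-?\d*\.?\d+),\s*(-?\d*\.?\d+)\)\)"
)

def parse_detections(text):
    """Parse 'category: ((x1, y1), (x2, y2))' pairs from a model response."""
    results = []
    for m in DETECTION_RE.finditer(text):
        category = m.group(1).strip()
        on_point = (float(m.group(2)), float(m.group(3)))   # point on the object
        off_point = (float(m.group(4)), float(m.group(5)))  # point outside the object
        results.append((category, on_point, off_point))
    return results

print(parse_detections("train: ((0.406, 0.138), (0.65, 0.569))"))
# -> [('train', (0.406, 0.138), (0.65, 0.569))]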
280
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 480.
B
('bus', '21', '285', '328', '134'): [(0.033, 0.594), (0.512, 0.279)], ('bus', '21', '285', '328', '134'): [(0.38, 0.579), (0.377, 0.585)]
('bus', '202', '369', '410', '381'): [(0.316, 0.769), (0.641, 0.794)], ('bus', '202', '369', '410', '381'): [(0.372, 0.621), (0.034, 1.085)]
('bus', '356', '420', '244', '281'): [(0.556, 0.875), (0.381, 0.585)], ('bus', '356', '420', '244', '281'): [(0.372, 0.621), (0.034, 1.085)]
('bus', '215', '424', '240', '298'): [(0.336, 0.883), (0.375, 0.621)], ('bus', '215', '424', '240', '298'): [(0.372, 0.623), (0.383, 0.61)]
pixel_localization
pixel_level_perception
VAL
281
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 480.
A
('clock', '331', '208', '454', '14'): [(0.517, 0.433), (0.709, 0.029)], ('clock', '331', '208', '454', '14'): [(0.483, 1.085), (0.669, 1.304)]
('clock', '226', '119', '161', '1'): [(0.353, 0.248), (0.252, 0.002)], ('clock', '226', '119', '161', '1'): [(0.359, 1.308), (0.586, 0.458)]
('clock', '346', '519', '94', '358'): [(0.541, 1.081), (0.147, 0.746)], ('clock', '346', '519', '94', '358'): [(0.616, 0.938), (0.453, 0.423)]
('clock', '303', '580', '408', '519'): [(0.473, 1.208), (0.637, 1.081)], ('clock', '303', '580', '408', '519'): [(0.731, 0.317), (0.527, 0.456)]
pixel_localization
pixel_level_perception
VAL
282
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 427 and the height as 640.
B
('person', '529', '201', '225', '268'): [(1.239, 0.314), (0.527, 0.419)], ('person', '529', '201', '225', '268'): [(1.192, 0.414), (0.475, 0.328)]
('person', '183', '288', '94', '71'): [(0.429, 0.45), (0.22, 0.111)], ('person', '183', '288', '94', '71'): [(0.602, 0.034), (0.096, 0.584)]
('person', '357', '78', '456', '114'): [(0.836, 0.122), (1.068, 0.178)], ('person', '357', '78', '456', '114'): [(1.159, 0.031), (0.993, 0.047)]
('person', '82', '310', '141', '213'): [(0.192, 0.484), (0.33, 0.333)], ('person', '82', '310', '141', '213'): [(0.246, 0.305), (1.227, 0.469)]
pixel_localization
pixel_level_perception
VAL
283
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 427.
D
('airplane', '207', '255', '310', '244'): [(0.323, 0.597), (0.484, 0.571)]
('airplane', '237', '87', '227', '87'): [(0.37, 0.204), (0.355, 0.204)]
('airplane', '267', '293', '228', '311'): [(0.417, 0.686), (0.356, 0.728)]
('airplane', '235', '203', '407', '370'): [(0.367, 0.475), (0.636, 0.867)]
pixel_localization
pixel_level_perception
VAL
284
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 427.
D
('person', '186', '294', '49', '106'): [(0.291, 0.689), (0.077, 0.248)], ('person', '186', '294', '49', '106'): [(0.464, 0.782), (0.481, 0.794)]
('person', '185', '321', '210', '302'): [(0.289, 0.752), (0.328, 0.707)], ('person', '185', '321', '210', '302'): [(0.005, 0.159), (0.014, 0.384)]
('person', '280', '550', '62', '570'): [(0.438, 1.288), (0.097, 1.335)], ('person', '280', '550', '62', '570'): [(0.545, 0.363), (0.614, 0.438)]
('person', '186', '294', '49', '106'): [(0.291, 0.689), (0.077, 0.248)], ('person', '186', '294', '49', '106'): [(0.478, 0.803), (0.059, 0.225)]
pixel_localization
pixel_level_perception
VAL
285
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 500 and the height as 375.
D
('cat', '327', '314', '275', '271'): [(0.654, 0.837), (0.55, 0.723)], ('cat', '327', '314', '275', '271'): [(0.476, 0.147), (0.486, 0.885)]
('cat', '147', '430', '124', '263'): [(0.294, 1.147), (0.248, 0.701)], ('cat', '147', '430', '124', '263'): [(0.008, 0.389), (0.178, 0.304)]
('cat', '240', '324', '271', '337'): [(0.48, 0.864), (0.542, 0.899)], ('cat', '240', '324', '271', '337'): [(0.28, 0.048), (0.536, 0.832)]
('cat', '244', '271', '263', '171'): [(0.488, 0.723), (0.526, 0.456)], ('cat', '244', '271', '263', '171'): [(0.252, 0.827), (0.438, 0.44)]
pixel_localization
pixel_level_perception
VAL
286
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 512.
C
('bird', '408', '173', '282', '409'): [(0.637, 0.338), (0.441, 0.799)]
('bird', '190', '434', '112', '191'): [(0.297, 0.848), (0.175, 0.373)]
('bird', '325', '464', '87', '466'): [(0.508, 0.906), (0.136, 0.91)]
('bird', '41', '360', '167', '92'): [(0.064, 0.703), (0.261, 0.18)]
pixel_localization
pixel_level_perception
VAL
287
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 427.
D
('bear', '409', '279', '168', '201'): [(0.639, 0.653), (0.263, 0.471)]
('bear', '263', '138', '373', '163'): [(0.411, 0.323), (0.583, 0.382)]
('bear', '43', '268', '208', '82'): [(0.067, 0.628), (0.325, 0.192)]
('bear', '265', '95', '57', '45'): [(0.414, 0.222), (0.089, 0.105)]
pixel_localization
pixel_level_perception
VAL
288
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 426 and the height as 640.
A
('tie', '627', '226', '97', '261'): [(1.472, 0.353), (0.228, 0.408)], ('tie', '627', '226', '97', '261'): [(0.258, 0.548), (0.009, 0.655)], ('tie', '627', '226', '97', '261'): [(1.446, 0.123), (0.869, 0.603)]
('tie', '267', '183', '249', '291'): [(0.627, 0.286), (0.585, 0.455)], ('tie', '267', '183', '249', '291'): [(0.953, 0.492), (1.408, 0.25)], ('tie', '267', '183', '249', '291'): [(1.392, 0.147), (1.46, 0.169)]
('tie', '525', '226', '549', '216'): [(1.232, 0.353), (1.289, 0.338)], ('tie', '525', '226', '549', '216'): [(0.345, 0.236), (0.042, 0.114)], ('tie', '525', '226', '549', '216'): [(1.369, 0.173), (1.472, 0.119)]
('tie', '584', '216', '504', '239'): [(1.371, 0.338), (1.183, 0.373)], ('tie', '584', '216', '504', '239'): [(0.103, 0.603), (0.101, 0.661)], ('tie', '584', '216', '504', '239'): [(1.484, 0.097), (0.915, 0.188)]
pixel_localization
pixel_level_perception
VAL
289
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 612 and the height as 612.
C
('cow', '77', '129', '422', '403'): [(0.126, 0.211), (0.69, 0.658)], ('cow', '77', '129', '422', '403'): [(0.693, 0.663), (0.699, 0.582)], ('cow', '77', '129', '422', '403'): [(0.327, 0.843), (0.498, 0.358)]
('cow', '65', '77', '157', '206'): [(0.106, 0.126), (0.257, 0.337)], ('cow', '65', '77', '157', '206'): [(0.252, 0.773), (0.404, 0.003)], ('cow', '65', '77', '157', '206'): [(0.258, 0.391), (0.374, 0.523)]
('cow', '355', '252', '541', '28'): [(0.58, 0.412), (0.884, 0.046)], ('cow', '355', '252', '541', '28'): [(0.252, 0.773), (0.404, 0.003)], ('cow', '355', '252', '541', '28'): [(0.258, 0.391), (0.374, 0.523)]
('cow', '63', '356', '161', '210'): [(0.103, 0.582), (0.263, 0.343)], ('cow', '63', '356', '161', '210'): [(0.252, 0.773), (0.404, 0.003)], ('cow', '63', '356', '161', '210'): [(0.248, 0.355), (0.243, 0.381)]
pixel_localization
pixel_level_perception
VAL
290
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 385 and the height as 640.
C
('tie', '228', '170', '248', '174'): [(0.592, 0.266), (0.644, 0.272)], ('tie', '228', '170', '248', '174'): [(1.018, 0.402), (1.501, 0.103)], ('tie', '228', '170', '248', '174'): [(1.101, 0.098), (1.358, 0.395)]
('tie', '13', '297', '423', '208'): [(0.034, 0.464), (1.099, 0.325)], ('tie', '13', '297', '423', '208'): [(0.27, 0.028), (0.735, 0.259)], ('tie', '13', '297', '423', '208'): [(1.021, 0.109), (0.766, 0.258)]
('tie', '208', '164', '2', '104'): [(0.54, 0.256), (0.005, 0.163)], ('tie', '208', '164', '2', '104'): [(1.018, 0.402), (1.501, 0.103)], ('tie', '208', '164', '2', '104'): [(1.101, 0.098), (1.358, 0.395)]
('tie', '344', '250', '213', '166'): [(0.894, 0.391), (0.553, 0.259)], ('tie', '344', '250', '213', '166'): [(0.87, 0.438), (0.621, 0.391)], ('tie', '344', '250', '213', '166'): [(0.091, 0.4), (1.512, 0.369)]
pixel_localization
pixel_level_perception
VAL
291
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 424 and the height as 640.
C
('clock', '607', '250', '620', '157'): [(1.432, 0.391), (1.462, 0.245)]
('clock', '345', '308', '242', '218'): [(0.814, 0.481), (0.571, 0.341)]
('clock', '241', '203', '201', '197'): [(0.568, 0.317), (0.474, 0.308)]
('clock', '247', '216', '238', '205'): [(0.583, 0.338), (0.561, 0.32)]
pixel_localization
pixel_level_perception
VAL
292
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 500 and the height as 375.
A
('person', '261', '101', '152', '414'): [(0.522, 0.269), (0.304, 1.104)], ('person', '261', '101', '152', '414'): [(0.314, 1.168), (0.004, 0.285)]
('person', '212', '138', '298', '343'): [(0.424, 0.368), (0.596, 0.915)], ('person', '212', '138', '298', '343'): [(0.314, 1.168), (0.004, 0.285)]
('person', '97', '122', '160', '480'): [(0.194, 0.325), (0.32, 1.28)], ('person', '97', '122', '160', '480'): [(0.58, 1.072), (0.71, 1.091)]
('person', '348', '304', '299', '228'): [(0.696, 0.811), (0.598, 0.608)], ('person', '348', '304', '299', '228'): [(0.664, 0.864), (0.64, 1.235)]
pixel_localization
pixel_level_perception
VAL
293
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 428 and the height as 640.
A
('banana', '457', '222', '13', '110'): [(1.068, 0.347), (0.03, 0.172)]
('banana', '615', '99', '611', '98'): [(1.437, 0.155), (1.428, 0.153)]
('banana', '308', '426', '85', '303'): [(0.72, 0.666), (0.199, 0.473)]
('banana', '489', '399', '378', '404'): [(1.143, 0.623), (0.883, 0.631)]
pixel_localization
pixel_level_perception
VAL
294
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 480.
C
('person', '235', '101', '271', '86'): [(0.367, 0.21), (0.423, 0.179)], ('person', '235', '101', '271', '86'): [(0.386, 0.108), (0.333, 0.173)], ('person', '235', '101', '271', '86'): [(0.498, 1.242), (0.287, 0.992)]
('person', '248', '80', '299', '94'): [(0.388, 0.167), (0.467, 0.196)], ('person', '248', '80', '299', '94'): [(0.344, 0.629), (0.333, 0.61)], ('person', '248', '80', '299', '94'): [(0.566, 1.219), (0.602, 0.071)]
('person', '215', '94', '343', '529'): [(0.336, 0.196), (0.536, 1.102)], ('person', '215', '94', '343', '529'): [(0.339, 0.617), (0.147, 0.006)], ('person', '215', '94', '343', '529'): [(0.498, 1.242), (0.287, 0.992)]
('person', '225', '74', '272', '109'): [(0.352, 0.154), (0.425, 0.227)], ('person', '225', '74', '272', '109'): [(0.336, 0.61), (0.334, 0.631)], ('person', '225', '74', '272', '109'): [(0.659, 0.577), (0.328, 0.627)]
pixel_localization
pixel_level_perception
VAL
295
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 640 and the height as 480.
B
('dog', '341', '181', '279', '247'): [(0.533, 0.377), (0.436, 0.515)], ('dog', '341', '181', '279', '247'): [(0.444, 0.287), (0.398, 0.34)], ('dog', '341', '181', '279', '247'): [(0.286, 0.456), (0.209, 1.137)]
('dog', '195', '504', '212', '113'): [(0.305, 1.05), (0.331, 0.235)], ('dog', '195', '504', '212', '113'): [(0.397, 0.315), (0.523, 1.265)], ('dog', '195', '504', '212', '113'): [(0.286, 0.456), (0.209, 1.137)]
('dog', '355', '108', '229', '274'): [(0.555, 0.225), (0.358, 0.571)], ('dog', '355', '108', '229', '274'): [(0.503, 0.6), (0.289, 0.444)], ('dog', '355', '108', '229', '274'): [(0.733, 0.106), (0.27, 0.433)]
('dog', '148', '626', '35', '558'): [(0.231, 1.304), (0.055, 1.163)], ('dog', '148', '626', '35', '558'): [(0.303, 0.254), (0.372, 0.26)], ('dog', '148', '626', '35', '558'): [(0.286, 0.456), (0.209, 1.137)]
pixel_localization
pixel_level_perception
VAL
296
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 480 and the height as 640.
D
('vase', '537', '142', '469', '258'): [(1.119, 0.222), (0.977, 0.403)], ('vase', '537', '142', '469', '258'): [(0.696, 0.042), (1.171, 0.15)]
('vase', '286', '237', '493', '84'): [(0.596, 0.37), (1.027, 0.131)], ('vase', '286', '237', '493', '84'): [(0.802, 0.359), (0.854, 0.403)]
('vase', '234', '341', '44', '145'): [(0.487, 0.533), (0.092, 0.227)], ('vase', '234', '341', '44', '145'): [(0.863, 0.256), (0.521, 0.191)]
('vase', '286', '237', '493', '84'): [(0.596, 0.37), (1.027, 0.131)], ('vase', '286', '237', '493', '84'): [(0.746, 0.38), (0.438, 0.128)]
pixel_localization
pixel_level_perception
VAL
297
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 500 and the height as 375.
B
('person', '28', '10', '158', '170'): [(0.056, 0.027), (0.316, 0.453)], ('person', '28', '10', '158', '170'): [(0.558, 0.597), (0.324, 0.352)]
('person', '56', '153', '212', '353'): [(0.112, 0.408), (0.424, 0.941)], ('person', '56', '153', '212', '353'): [(0.316, 0.405), (0.322, 0.293)]
('person', '162', '371', '112', '316'): [(0.324, 0.989), (0.224, 0.843)], ('person', '162', '371', '112', '316'): [(0.714, 0.621), (0.158, 0.373)]
('person', '56', '153', '212', '353'): [(0.112, 0.408), (0.424, 0.941)], ('person', '56', '153', '212', '353'): [(0.008, 1.256), (0.328, 0.424)]
pixel_localization
pixel_level_perception
VAL
298
Please detect all instances of the following categories in this image: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush. For each detected object, provide the output in the format: category: ((x1, y1), (x2, y2)). The point (x1, y1) represents a coordinate on the detected object, and the point (x2, y2) represents a coordinate outside the detected object. Note that the width of the input image is given as 500 and the height as 341.
C
('vase', '230', '240', '237', '265'): [(0.46, 0.704), (0.474, 0.777)], ('vase', '230', '240', '237', '265'): [(0.098, 1.085), (0.596, 0.587)]
('vase', '327', '248', '318', '204'): [(0.654, 0.727), (0.636, 0.598)], ('vase', '327', '248', '318', '204'): [(0.618, 0.211), (0.53, 0.613)]
('vase', '310', '238', '128', '26'): [(0.62, 0.698), (0.256, 0.076)], ('vase', '310', '238', '128', '26'): [(0.076, 0.584), (0.448, 0.223)]
('vase', '310', '238', '128', '26'): [(0.62, 0.698), (0.256, 0.076)], ('vase', '310', '238', '128', '26'): [(0.272, 1.062), (0.136, 0.32)]
pixel_localization
pixel_level_perception
VAL
299
What is the semantic category of the pixel point at coordinates (0.473, 0.222) in the image? Note that the width of the input image is given as 640 and the height as 427. The coordinates of the top left corner of the image are (0, 0), and the coordinates of the bottom right corner are (640, 427).
D
motorcycle
bus
car
truck
pixel_recognition
pixel_level_perception
VAL
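A brief illustrative note (not part of the dataset): the pixel_localization questions above state that point coordinates are normalized, with (0, 0) at the image's top-left corner and (1, 1) at its bottom-right, scaled by the per-record width and height. The sketch below shows that convention applied to one record; the function name and variable names are assumptions for illustration only, and the observation that the leading numbers in each option tuple correspond to the same two points in pixel units is an inference from the values, not something the dataset states.

# Minimal sketch (illustrative, not part of the dataset): mapping normalized
# (x, y) pairs from the pixel_localization options back to pixel coordinates,
# assuming the convention stated in each question text.

def denormalize(point, width, height):
    """Convert a normalized (x, y) pair to pixel coordinates for a width x height image."""
    x, y = point
    return (x * width, y * height)

# Example: index 291 (image reported as 424 x 640), correct option C.
# Its normalized pairs (0.568, 0.317) and (0.474, 0.308) scale to roughly
# (241, 203) and (201, 197), which appear to match the pixel values that
# lead that option's tuple.
on_object = denormalize((0.568, 0.317), width=424, height=640)
off_object = denormalize((0.474, 0.308), width=424, height=640)
print(on_object, off_object)  # (240.832, 202.88) (200.976, 197.12)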