Dataset schema (column · dtype · observed range, as reported by the viewer):

| column | dtype | observed range |
|---|---|---|
| commonsense_question | string | 1 class (constant) |
| spatial_types | sequence | length 0 – 2 |
| canary | string | 1 class (constant) |
| targets | sequence | length 1 – 4 |
| num_question_entities | int64 | 2 – 4 |
| num_hop | int64 | 1 – 4 |
| target_scores | sequence | length 16 |
| reasoning | string | 0 – 1.35k chars |
| target_choices | sequence | length 16 |
| question_type | string | 1 class (constant) |
| context | string | 108 – 1.27k chars |
| reasoning_types | sequence | length 0 |
| symbolic_context | string | 58 – 570 chars |
| symbolic_reasoning | string | 125 – 1.11k chars |
| symbolic_question | sequence | length 2 |
| question | string | 27 – 134 chars |
| source_data | string | 1 class (constant) |
| comments | sequence | length 7 |
| symbolic_entity_map | string | 61 – 590 chars |
| context_id | string | 9 – 21 chars |
| num_context_entities | int64 | 3 – 18 |
| question_id | int64 | 1 – 7 |

commonsense_question | spatial_types | canary | targets | num_question_entities | num_hop | target_scores | reasoning | target_choices | question_type | context | reasoning_types | symbolic_context | symbolic_reasoning | symbolic_question | question | source_data | comments | symbolic_entity_map | context_id | num_context_entities | question_id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
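The `symbolic_context` field in each row encodes the scene as a JSON map of directed edges (`"head-->tail": [relations]`), and the step-by-step `reasoning` traces repeatedly apply one rule: invert a stated relation to get its converse (e.g. `ntppi`/"contains" becomes `ntpp`/"inside", `below` becomes `above`). The sketch below, a minimal illustration rather than official dataset tooling, parses one such string and derives the inverse edges; the `INVERSE` table is assembled from the converse pairs visible in this dump's reasoning traces, with symmetric relations (`near`, `ec`, `dc`) mapped to themselves.

```python
import json

# Converse map for the RCC8/directional vocabulary used in `symbolic_context`.
# Pairs mirror the dataset's own reasoning steps (ntppi -> ntpp, below -> above,
# behind -> front, ...); symmetric relations map to themselves.
INVERSE = {
    "ntpp": "ntppi", "ntppi": "ntpp",
    "tpp": "tppi", "tppi": "tpp",
    "above": "below", "below": "above",
    "left": "right", "right": "left",
    "front": "behind", "behind": "front",
    "near": "near", "far": "far",
    "ec": "ec", "dc": "dc", "po": "po",
}

def invert_context(symbolic_context: str) -> dict:
    """Parse a `symbolic_context` JSON string and return the converse
    edge for every stated relation. Edges pointing at "-1" (the whole
    image rather than an entity) are skipped."""
    stated = json.loads(symbolic_context)
    inverted = {}
    for edge, labels in stated.items():
        head, tail = edge.split("-->")
        if tail == "-1":
            continue
        inverted[f"{tail}-->{head}"] = [INVERSE[label] for label in labels]
    return inverted

# Example taken verbatim from the `train_stepgame/700-2` rows below
# (block AAA containing a small blue circle 0x0 and a medium triangle 0x1).
ctx = ('{"0-->-1": ["ntpp"], "0x1-->0": ["ntpp"], '
       '"0x0-->0": ["ntpp"], "0x1-->0x0": ["behind"]}')
inv = invert_context(ctx)
# inv["0-->0x1"] == ["ntppi"]  -- block AAA contains the triangle,
# matching that row's gold target "contains"; likewise
# inv["0x0-->0x1"] == ["front"] matches the "in front" answer.
```

One-hop rows in this dump are exactly one application of `invert_context`; multi-hop rows chain a converse step with a transitivity step (e.g. inside-of-inside, as in the `train/1304-0` trace).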
[] | [
"inside"
] | 2 | 2 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: From the context, the box one contains the medium yellow melon.
Step 2: From step 1, we can say that the medium yellow melon is inside the box one.
Step 3: It is given that the box two contains the box one.
Step 4: From step 3, we can say that the box one is inside the box two.
Step 5: From step 2 and 4, we can infer that the medium yellow melon is inside the box two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist in the image. Box one has a medium yellow melon. To the south of the medium yellow melon is a small yellow watermelon. The small yellow watermelon is in box one. Box two with a medium orange fruit has box one. A small yellow melon is inside this box. This fruit is close to the medium orange fruit. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->0x0": ["ntppi"], "0x1-->0x0": ["below"], "0x1-->0": ["ntpp"], "1-->1x1": ["ntppi"], "1-->0": ["ntppi"], "1x0-->1": ["ntpp"], "1x0-->1x1": ["near"]} | [{"head": "0x0", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "has"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "has"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0x0", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"ntpp": {}}}] | [
"0x0",
"1"
] | What is the position of the medium yellow melon regarding box two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 74",
"seed_id: 0",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "medium yellow melon", "0x1": "small yellow watermelon", "1x0": "small yellow melon", "1x1": "medium orange fruit"} | train/1304-0 | 7 | 4 |
||
[] | [
"inside"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: It is given that the box two contains the medium orange fruit.
Step 2: From step 1, we can infer that the medium orange fruit is inside the box two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist in the image. Box one has a medium yellow melon. To the south of the medium yellow melon is a small yellow watermelon. The small yellow watermelon is in box one. Box two with a medium orange fruit has box one. A small yellow melon is inside this box. This fruit is close to the medium orange fruit. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->0x0": ["ntppi"], "0x1-->0x0": ["below"], "0x1-->0": ["ntpp"], "1-->1x1": ["ntppi"], "1-->0": ["ntppi"], "1x0-->1": ["ntpp"], "1x0-->1x1": ["near"]} | [{"head": "1x1", "tail": "1", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "with"}}, "inferred_rel": {"ntpp": {}}}] | [
"1x1",
"1"
] | What is the position of the orange thing regarding box two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 74",
"seed_id: 0",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "medium yellow melon", "0x1": "small yellow watermelon", "1x0": "small yellow melon", "1x1": "medium orange fruit"} | train/1304-0 | 7 | 5 |
||
[] | [
"above"
] | 2 | 1 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the small yellow watermelon is below the medium yellow melon.
Step 2: From step 1, it can be inferred that the medium yellow melon is above the small yellow watermelon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist in the image. Box one has a medium yellow melon. To the south of the medium yellow melon is a small yellow watermelon. The small yellow watermelon is in box one. Box two with a medium orange fruit has box one. A small yellow melon is inside this box. This fruit is close to the medium orange fruit. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->0x0": ["ntppi"], "0x1-->0x0": ["below"], "0x1-->0": ["ntpp"], "1-->1x1": ["ntppi"], "1-->0": ["ntppi"], "1x0-->1": ["ntpp"], "1x0-->1x1": ["near"]} | [{"head": "0x0", "tail": "0x1", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "to the south of"}}, "inferred_rel": {"above": {}}}] | [
"0x0",
"0x1"
] | Where is the medium yellow melon relative to the small yellow watermelon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 74",
"seed_id: 0",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "medium yellow melon", "0x1": "small yellow watermelon", "1x0": "small yellow melon", "1x1": "medium orange fruit"} | train/1304-0 | 7 | 6 |
||
[] | [
"near"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the small yellow melon is near the medium orange fruit.
Step 2: From step 1, we can infer that the medium orange fruit is near the small yellow melon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist in the image. Box one has a medium yellow melon. To the south of the medium yellow melon is a small yellow watermelon. The small yellow watermelon is in box one. Box two with a medium orange fruit has box one. A small yellow melon is inside this box. This fruit is close to the medium orange fruit. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->0x0": ["ntppi"], "0x1-->0x0": ["below"], "0x1-->0": ["ntpp"], "1-->1x1": ["ntppi"], "1-->0": ["ntppi"], "1x0-->1": ["ntpp"], "1x0-->1x1": ["near"]} | [{"head": "1x1", "tail": "1x0", "context_rel": {}, "inv_context_rel": {"near": {"phrase": "close to"}}, "inferred_rel": {"near": {}}}] | [
"1x1",
"1x0"
] | Where is the orange thing relative to the small yellow melon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 74",
"seed_id: 0",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "medium yellow melon", "0x1": "small yellow watermelon", "1x0": "small yellow melon", "1x1": "medium orange fruit"} | train/1304-0 | 7 | 7 |
||
[
"Region|RCC8|TPP"
] | [
"below",
"behind",
"outside"
] | 3 | 2 | [
0,
0,
0,
1,
1,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the box one is above, outside and in front of the box two.
Step 2: From step 1, we can infer that the box two is below, outside and behind the box one.
Step 3: From the context, the medium green apple of box one is inside and touching the box one.
Step 4: From step 3, we can say that the box one contains and touches the medium green apple of box one.
Step 5: From step 2 and 4, it can be inferred that the box two is below, behind and outside the medium green apple of box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A medium green apple is inside and touching a box named one. Above, disconnected from and in front of another box named two there is box one. A box named three exists in the image. A medium yellow apple is above a medium green apple and touches another medium yellow apple. Medium yellow apple number one is inside box three. The medium green apple is inside and touching box three. Box three has medium yellow apple number two. Over the fruit which was over the medium green apple is medium yellow apple number two. | [] | {"0x0-->0": ["tpp"], "0-->1": ["above", "dc", "front"], "2-->-1": ["ntpp"], "2x1-->2x0": ["above"], "2x1-->2x2": ["ec"], "2x1-->2": ["ntpp"], "2x0-->2": ["tpp"], "2-->2x2": ["ntppi"], "2x2-->2x1": ["above"]} | [{"head": "1", "tail": "0", "context_rel": {}, "inv_context_rel": {"above": {"phrase": "above"}, "dc": {"phrase": "disconnected from"}, "front": {"phrase": "in front of"}}, "inferred_rel": {"below": {}, "dc": {}, "behind": {}}}, {"head": "0", "tail": "0x0", "context_rel": {}, "inv_context_rel": {"tpp": {"phrase": "inside and touching"}}, "inferred_rel": {"tppi": {}}}, {"head": "1", "tail": "0x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}, "behind": {}, "dc": {}}}] | [
"1",
"0x0"
] | Where is box two relative to the medium green apple covered by box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 17",
"seed_id: 1",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "0x0": "medium green apple of box one", "2x0": "medium green apple of box three", "2x1": "medium yellow apple number one", "2x2": "medium yellow apple number two"} | train_simple/2170-1 | 8 | 4 |
||
[
"Region|RCC8|TPP"
] | [
"below"
] | 3 | 2 | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the medium yellow apple number one is above the medium green apple of box three.
Step 2: From step 1, we can say that the medium green apple of box three is below the medium yellow apple number one.
Step 3: From the context, the medium yellow apple number two is above the medium yellow apple number one.
Step 4: It is given that the medium yellow apple number one is outside and touching the medium yellow apple number two.
Step 5: From step 3 and 4, it can be inferred that the medium yellow apple number one is below, outside and touching the medium yellow apple number two.
Step 6: From step 2 and 5, we can infer that the medium green apple of box three is below the medium yellow apple number two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A medium green apple is inside and touching a box named one. Above, disconnected from and in front of another box named two there is box one. A box named three exists in the image. A medium yellow apple is above a medium green apple and touches another medium yellow apple. Medium yellow apple number one is inside box three. The medium green apple is inside and touching box three. Box three has medium yellow apple number two. Over the fruit which was over the medium green apple is medium yellow apple number two. | [] | {"0x0-->0": ["tpp"], "0-->1": ["above", "dc", "front"], "2-->-1": ["ntpp"], "2x1-->2x0": ["above"], "2x1-->2x2": ["ec"], "2x1-->2": ["ntpp"], "2x0-->2": ["tpp"], "2-->2x2": ["ntppi"], "2x2-->2x1": ["above"]} | [{"head": "2x0", "tail": "2x1", "context_rel": {}, "inv_context_rel": {"above": {"phrase": "over"}}, "inferred_rel": {"below": {}}}, {"head": "2x1", "tail": "2x2", "context_rel": {"ec": {"phrase": "touches"}}, "inv_context_rel": {"above": {"phrase": "over"}}, "inferred_rel": {"below": {}}}, {"head": "2x0", "tail": "2x2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}}}] | [
"2x0",
"2x2"
] | What is the position of the medium green apple covered by box three regarding medium yellow apple number two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 17",
"seed_id: 1",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "0x0": "medium green apple of box one", "2x0": "medium green apple of box three", "2x1": "medium yellow apple number one", "2x2": "medium yellow apple number two"} | train_simple/2170-1 | 8 | 5 |
||
[] | [
"in front"
] | 2 | 1 | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the medium triangle is behind the small blue circle.
Step 2: From step 1, we can say that the small blue circle is in front of the medium triangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | One block named AAA exists. A medium triangle and a small blue circle are within block AAA. Behind the small shape there is the medium shape. | [] | {"0-->-1": ["ntpp"], "0x1-->0": ["ntpp"], "0x0-->0": ["ntpp"], "0x1-->0x0": ["behind"]} | [{"head": "0x0", "tail": "0x1", "context_rel": {}, "inv_context_rel": {"behind": {"phrase": "behind"}}, "inferred_rel": {"front": {}}}] | [
"0x0",
"0x1"
] | What is the position of the small object regarding the triangle? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 7",
"seed_id: 2",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "small blue circle", "0x1": "medium triangle"} | train_stepgame/700-2 | 4 | 3 |
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: From the context, the medium triangle is inside the block AAA.
Step 2: From step 1, we can say that the block AAA contains the medium triangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | One block named AAA exists. A medium triangle and a small blue circle are within block AAA. Behind the small shape there is the medium shape. | [] | {"0-->-1": ["ntpp"], "0x1-->0": ["ntpp"], "0x0-->0": ["ntpp"], "0x1-->0x0": ["behind"]} | [{"head": "0", "tail": "0x1", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "within"}}, "inferred_rel": {"ntppi": {}}}] | [
"0",
"0x1"
] | Where is block AAA regarding the medium triangle? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 7",
"seed_id: 2",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "small blue circle", "0x1": "medium triangle"} | train_stepgame/700-2 | 4 | 4 |
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: It is given that the small blue circle is inside the block AAA.
Step 2: From step 1, we can say that the block AAA contains the small blue circle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | One block named AAA exists. A medium triangle and a small blue circle are within block AAA. Behind the small shape there is the medium shape. | [] | {"0-->-1": ["ntpp"], "0x1-->0": ["ntpp"], "0x0-->0": ["ntpp"], "0x1-->0x0": ["behind"]} | [{"head": "0", "tail": "0x0", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "within"}}, "inferred_rel": {"ntppi": {}}}] | [
"0",
"0x0"
] | Where is block relative to the circle? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 7",
"seed_id: 2",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "small blue circle", "0x1": "medium triangle"} | train_stepgame/700-2 | 4 | 5 |
||
[
"Region|RCC8|NTPP"
] | [
"below"
] | 3 | 2 | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the midsize white rectangle number one of box EEE is above the midsize white rectangle number two of box EEE.
Step 2: From the context, the midsize white rectangle number two of box EEE is outside and touching the midsize white rectangle number one of box EEE.
Step 3: From step 1 and 2, we can infer that the midsize white rectangle number two of box EEE is below, outside and touching the midsize white rectangle number one of box EEE.
Step 4: It is given that the midsize orange rectangle of box EEE is above the midsize white rectangle number one of box EEE.
Step 5: From step 4, we can infer that the midsize white rectangle number one of box EEE is below the midsize orange rectangle of box EEE.
Step 6: From step 3 and 5, we can infer that the midsize white rectangle number two of box EEE is below the midsize orange rectangle of box EEE. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box named DDD contain a midsize orange rectangle and a midsize green rectangle. The midsize green rectangle touches the midsize orange rectangle and is over a midsize white rectangle. Box DDD covers the midsize white rectangle. Under the midsize orange rectangle is the midsize white rectangle. There exists a box called EEE. Box EEE contains a midsize white rectangle which is above another midsize white rectangle. Midsize white rectangle number two is covered by box EEE. A midsize orange rectangle is inside box EEE. This thing is over and midsize white rectangle number two touches midsize white rectangle number one. A box called JJJ exists. | [] | {"0-->0x1": ["ntppi"], "0-->0x2": ["ntppi"], "0x2-->0x1": ["ec"], "0x2-->0x0": ["above"], "0-->0x0": ["tppi"], "0x0-->0x1": ["below"], "1-->-1": ["ntpp"], "1-->1x1": ["ntppi"], "1x1-->1x0": ["above"], "1x0-->1": ["tpp"], "1x2-->1": ["ntpp"], "1x2-->1x1": ["above"], "1x0-->1x1": ["ec"], "2-->-1": ["ntpp"]} | [{"head": "1x0", "tail": "1x1", "context_rel": {"ec": {"phrase": "touches"}}, "inv_context_rel": {"above": {"phrase": "above"}}, "inferred_rel": {"below": {}}}, {"head": "1x1", "tail": "1x2", "context_rel": {}, "inv_context_rel": {"above": {"phrase": "over"}}, "inferred_rel": {"below": {}}}, {"head": "1x0", "tail": "1x2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}}}] | [
"1x0",
"1x2"
] | What is the position of midsize white rectangle number two regarding the midsize orange rectangle in EEE? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 63",
"seed_id: 3",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "2": "box JJJ", "0x0": "midsize white rectangle of box DDD", "0x1": "midsize orange rectangle of box DDD", "0x2": "midsize green rectangle", "1x0": "midsize white rectangle number two of box EEE", "1x1": "midsize white rectangle number one of box EEE", "1x2": "midsize orange rectangle of box EEE"} | train/3811-0 | 10 | 4 |
||
[] | [
"inside"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: It is given that the box EEE contains the midsize white rectangle number one of box EEE.
Step 2: From step 1, we can say that the midsize white rectangle number one of box EEE is inside the box EEE. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box named DDD contain a midsize orange rectangle and a midsize green rectangle. The midsize green rectangle touches the midsize orange rectangle and is over a midsize white rectangle. Box DDD covers the midsize white rectangle. Under the midsize orange rectangle is the midsize white rectangle. There exists a box called EEE. Box EEE contains a midsize white rectangle which is above another midsize white rectangle. Midsize white rectangle number two is covered by box EEE. A midsize orange rectangle is inside box EEE. This thing is over and midsize white rectangle number two touches midsize white rectangle number one. A box called JJJ exists. | [] | {"0-->0x1": ["ntppi"], "0-->0x2": ["ntppi"], "0x2-->0x1": ["ec"], "0x2-->0x0": ["above"], "0-->0x0": ["tppi"], "0x0-->0x1": ["below"], "1-->-1": ["ntpp"], "1-->1x1": ["ntppi"], "1x1-->1x0": ["above"], "1x0-->1": ["tpp"], "1x2-->1": ["ntpp"], "1x2-->1x1": ["above"], "1x0-->1x1": ["ec"], "2-->-1": ["ntpp"]} | [{"head": "1x1", "tail": "1", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}] | [
"1x1",
"1"
] | Where is midsize white rectangle number one regarding EEE? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 63",
"seed_id: 3",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "2": "box JJJ", "0x0": "midsize white rectangle of box DDD", "0x1": "midsize orange rectangle of box DDD", "0x2": "midsize green rectangle", "1x0": "midsize white rectangle number two of box EEE", "1x1": "midsize white rectangle number one of box EEE", "1x2": "midsize orange rectangle of box EEE"} | train/3811-0 | 10 | 5 |
||
[
"Region|RCC8|NTPP"
] | [
"below"
] | 3 | 1 | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the midsize orange rectangle of box EEE is above the midsize white rectangle number one of box EEE.
Step 2: From step 1, we can infer that the midsize white rectangle number one of box EEE is below the midsize orange rectangle of box EEE. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box named DDD contain a midsize orange rectangle and a midsize green rectangle. The midsize green rectangle touches the midsize orange rectangle and is over a midsize white rectangle. Box DDD covers the midsize white rectangle. Under the midsize orange rectangle is the midsize white rectangle. There exists a box called EEE. Box EEE contains a midsize white rectangle which is above another midsize white rectangle. Midsize white rectangle number two is covered by box EEE. A midsize orange rectangle is inside box EEE. This thing is over and midsize white rectangle number two touches midsize white rectangle number one. A box called JJJ exists. | [] | {"0-->0x1": ["ntppi"], "0-->0x2": ["ntppi"], "0x2-->0x1": ["ec"], "0x2-->0x0": ["above"], "0-->0x0": ["tppi"], "0x0-->0x1": ["below"], "1-->-1": ["ntpp"], "1-->1x1": ["ntppi"], "1x1-->1x0": ["above"], "1x0-->1": ["tpp"], "1x2-->1": ["ntpp"], "1x2-->1x1": ["above"], "1x0-->1x1": ["ec"], "2-->-1": ["ntpp"]} | [{"head": "1x1", "tail": "1x2", "context_rel": {}, "inv_context_rel": {"above": {"phrase": "over"}}, "inferred_rel": {"below": {}}}] | [
"1x1",
"1x2"
] | Where is midsize white rectangle number one regarding the midsize orange rectangle in EEE? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 63",
"seed_id: 3",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "2": "box JJJ", "0x0": "midsize white rectangle of box DDD", "0x1": "midsize orange rectangle of box DDD", "0x2": "midsize green rectangle", "1x0": "midsize white rectangle number two of box EEE", "1x1": "midsize white rectangle number one of box EEE", "1x2": "midsize orange rectangle of box EEE"} | train/3811-0 | 10 | 6 |
||
[] | [
"below",
"in front",
"outside"
] | 2 | 2 | [
0,
0,
0,
1,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box one is outside the box two.
Step 2: It is given that the box two is below and in front of the box one.
Step 3: From step 1 and 2, we can infer that the box two is outside, below and in front of the box one.
Step 4: It is given that the box one contains the small green watermelon.
Step 5: From step 3 and 4, we can infer that the box two is below, outside and in front of the small green watermelon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist. Box two is under and in front of box one. Disconnected from this box there is box one with a small green watermelon, a big yellow watermelon and a medium green apple. An orange melon is inside box one. Below the medium green apple there is the big yellow watermelon. Behind this fruit there is the orange fruit. A big yellow watermelon is in box two. Box two contains a small yellow watermelon which is over a medium yellow melon. Box two covers the medium yellow melon. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "1-->0": ["below", "front"], "0-->0x2": ["ntppi"], "0-->0x4": ["ntppi"], "0-->0x3": ["ntppi"], "0-->1": ["dc"], "0x0-->0": ["ntpp"], "0x4-->0x3": ["below"], "0x0-->0x4": ["behind"], "1x3-->1": ["ntpp"], "1-->1x2": ["ntppi"], "1x2-->1x1": ["above"], "1-->1x1": ["tppi"]} | [{"head": "1", "tail": "0", "context_rel": {"below": {"phrase": "under"}, "front": {"phrase": "in front of"}}, "inv_context_rel": {"dc": {"phrase": "disconnected from"}}, "inferred_rel": {"dc": {}}}, {"head": "0", "tail": "0x2", "context_rel": {"ntppi": {"phrase": "with"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1", "tail": "0x2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}, "front": {}, "dc": {}}}] | [
"1",
"0x2"
] | Where is box two regarding the small green watermelon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 38",
"seed_id: 4",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "orange melon", "0x2": "small green watermelon", "0x3": "medium green apple", "0x4": "big yellow watermelon of box one", "1x1": "medium yellow melon", "1x2": "small yellow watermelon", "1x3": "big yellow watermelon of box two"} | train_simple/359-0 | 10 | 4 |
||
[
"Region|RCC8|NTPP"
] | [
"below",
"in front",
"outside"
] | 3 | 2 | [
0,
0,
0,
1,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the box one is outside the box two.
Step 2: From the context, the box two is below and in front of the box one.
Step 3: From step 1 and 2, we can infer that the box two is outside, below and in front of the box one.
Step 4: It is given that the box one contains the big yellow watermelon of box one.
Step 5: From step 3 and 4, we can say that the box two is below, outside and in front of the big yellow watermelon of box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist. Box two is under and in front of box one. Disconnected from this box there is box one with a small green watermelon, a big yellow watermelon and a medium green apple. An orange melon is inside box one. Below the medium green apple there is the big yellow watermelon. Behind this fruit there is the orange fruit. A big yellow watermelon is in box two. Box two contains a small yellow watermelon which is over a medium yellow melon. Box two covers the medium yellow melon. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "1-->0": ["below", "front"], "0-->0x2": ["ntppi"], "0-->0x4": ["ntppi"], "0-->0x3": ["ntppi"], "0-->1": ["dc"], "0x0-->0": ["ntpp"], "0x4-->0x3": ["below"], "0x0-->0x4": ["behind"], "1x3-->1": ["ntpp"], "1-->1x2": ["ntppi"], "1x2-->1x1": ["above"], "1-->1x1": ["tppi"]} | [{"head": "1", "tail": "0", "context_rel": {"below": {"phrase": "under"}, "front": {"phrase": "in front of"}}, "inv_context_rel": {"dc": {"phrase": "disconnected from"}}, "inferred_rel": {"dc": {}}}, {"head": "0", "tail": "0x4", "context_rel": {"ntppi": {"phrase": "with"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1", "tail": "0x4", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}, "front": {}, "dc": {}}}] | [
"1",
"0x4"
] | Where is box two relative to the big yellow watermelon in box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 38",
"seed_id: 4",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "orange melon", "0x2": "small green watermelon", "0x3": "medium green apple", "0x4": "big yellow watermelon of box one", "1x1": "medium yellow melon", "1x2": "small yellow watermelon", "1x3": "big yellow watermelon of box two"} | train_simple/359-0 | 10 | 5 |
||
[
"Region|RCC8|NTPP"
] | [
"above",
"behind",
"outside"
] | 4 | 3 | [
0,
0,
1,
0,
1,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box one contains the big yellow watermelon of box one.
Step 2: From step 1, we can say that the big yellow watermelon of box one is inside the box one.
Step 3: It is given that the box two is below and in front of the box one.
Step 4: From the context, the box one is outside the box two.
Step 5: From step 3 and 4, it can be inferred that the box one is above, behind and outside the box two.
Step 6: From step 2 and 5, we can infer that the big yellow watermelon of box one is above, behind and outside the box two.
Step 7: From the context, the big yellow watermelon of box two is inside the box two.
Step 8: From step 7, we can infer that the box two contains the big yellow watermelon of box two.
Step 9: From step 6 and 8, we can infer that the big yellow watermelon of box one is above, behind and outside the big yellow watermelon of box two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist. Box two is under and in front of box one. Disconnected from this box there is box one with a small green watermelon, a big yellow watermelon and a medium green apple. An orange melon is inside box one. Below the medium green apple there is the big yellow watermelon. Behind this fruit there is the orange fruit. A big yellow watermelon is in box two. Box two contains a small yellow watermelon which is over a medium yellow melon. Box two covers the medium yellow melon. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "1-->0": ["below", "front"], "0-->0x2": ["ntppi"], "0-->0x4": ["ntppi"], "0-->0x3": ["ntppi"], "0-->1": ["dc"], "0x0-->0": ["ntpp"], "0x4-->0x3": ["below"], "0x0-->0x4": ["behind"], "1x3-->1": ["ntpp"], "1-->1x2": ["ntppi"], "1x2-->1x1": ["above"], "1-->1x1": ["tppi"]} | [{"head": "0x4", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "with"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {"dc": {"phrase": "disconnected from"}}, "inv_context_rel": {"below": {"phrase": "under"}, "front": {"phrase": "in front of"}}, "inferred_rel": {"above": {}, "behind": {}}}, {"head": "0x4", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"above": {}, "behind": {}, "dc": {}}}, {"head": "1", "tail": "1x3", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "in"}}, "inferred_rel": {"ntppi": {}}}, {"head": "0x4", "tail": "1x3", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"above": {}, "behind": {}, "dc": {}}}] | [
"0x4",
"1x3"
] | What is the position of the big yellow watermelon in box one relative to the big yellow watermelon in box two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 38",
"seed_id: 4",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "orange melon", "0x2": "small green watermelon", "0x3": "medium green apple", "0x4": "big yellow watermelon of box one", "1x1": "medium yellow melon", "1x2": "small yellow watermelon", "1x3": "big yellow watermelon of box two"} | train_simple/359-0 | 10 | 6 |
||
[
"Region|RCC8|NTPP"
] | [
"above"
] | 3 | 1 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the big yellow watermelon of box one is below the medium green apple.
Step 2: From step 1, we can infer that the medium green apple is above the big yellow watermelon of box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist. Box two is under and in front of box one. Disconnected from this box there is box one with a small green watermelon, a big yellow watermelon and a medium green apple. An orange melon is inside box one. Below the medium green apple there is the big yellow watermelon. Behind this fruit there is the orange fruit. A big yellow watermelon is in box two. Box two contains a small yellow watermelon which is over a medium yellow melon. Box two covers the medium yellow melon. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "1-->0": ["below", "front"], "0-->0x2": ["ntppi"], "0-->0x4": ["ntppi"], "0-->0x3": ["ntppi"], "0-->1": ["dc"], "0x0-->0": ["ntpp"], "0x4-->0x3": ["below"], "0x0-->0x4": ["behind"], "1x3-->1": ["ntpp"], "1-->1x2": ["ntppi"], "1x2-->1x1": ["above"], "1-->1x1": ["tppi"]} | [{"head": "0x3", "tail": "0x4", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "below"}}, "inferred_rel": {"above": {}}}] | [
"0x3",
"0x4"
] | What is the position of the medium green apple relative to the big yellow watermelon in box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 38",
"seed_id: 4",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "orange melon", "0x2": "small green watermelon", "0x3": "medium green apple", "0x4": "big yellow watermelon of box one", "1x1": "medium yellow melon", "1x2": "small yellow watermelon", "1x3": "big yellow watermelon of box two"} | train_simple/359-0 | 10 | 7 |
||
[] | [
"right"
] | 2 | 1 | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the medium purple star is left of the large purple hexagon.
Step 2: From step 1, we can infer that the large purple hexagon is right of the medium purple star. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A medium purple hexagon is in a block named HHH. Block HHH contain a large grey hexagon, a large grey pentagon and a little purple star. At 3 o'clock position regarding to the large grey pentagon there is the large grey hexagon. The medium purple hexagon is at 3 o'clock position regarding to the little purple star. Another block called LLL with two large purple stars is inside block HHH. This block cover a large red pentagon and a large purple hexagon. A medium purple star is at 9:00 position regarding to and the large red pentagon is at 12 o'clock position regarding to the large purple hexagon. The medium purple star is in block LLL. Near to the large red pentagon there is the large purple hexagon. | [] | {"0x5-->0": ["ntpp"], "0-->0x1": ["ntppi"], "0-->0x3": ["ntppi"], "0-->0x4": ["ntppi"], "0x1-->0x3": ["right"], "0x5-->0x4": ["right"], "1-->1x0": ["ntppi"], "1-->1x5": ["ntppi"], "1-->0": ["ntpp"], "1-->1x4": ["tppi"], "1-->1x1": ["tppi"], "1x3-->1x1": ["left"], "1x4-->1x1": ["above"], "1x3-->1": ["ntpp"], "1x1-->1x4": ["near"]} | [{"head": "1x1", "tail": "1x3", "context_rel": {}, "inv_context_rel": {"left": {"phrase": "at 9:00 position regarding to"}}, "inferred_rel": {"right": {}}}] | [
"1x1",
"1x3"
] | Where is the large purple hexagon relative to the medium purple star? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 16",
"seed_id: 5",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block HHH", "1": "block LLL", "0x1": "large grey hexagon", "0x3": "large grey pentagon", "0x4": "little purple star", "0x5": "medium purple hexagon", "1x0": "large purple star number one", "1x1": "large purple hexagon", "1x3": "medium purple star", "1x4": "large red pentagon", "1x5": "large purple star number two"} | train_clock/642-1 | 11 | 2 |
||
[] | [
"left"
] | 2 | 2 | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the small black triangle is left of the small black circle.
Step 2: It is given that the small black circle is left of the big yellow square.
Step 3: From step 1 and 2, we can infer that the small black triangle is left of the big yellow square. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | One block called AAA exists. A big yellow square is covered by and a big blue square is inside block AAA. Block AAA cover a small black triangle and a small black circle. This block contain a big yellow triangle and a small blue circle. To the west of the small black circle is the small black triangle. To the west of the big yellow square there is the small black circle. The big yellow triangle are east of the small blue circle and the big blue square.The big blue square is north of and the small black circle is south of the small blue circle. The small blue circle and the big blue square are east of the small black triangle.Near to the big yellow triangle there is the small blue circle. | [] | {"0-->-1": ["ntpp"], "0x4-->0": ["tpp"], "0x3-->0": ["ntpp"], "0-->0x0": ["tppi"], "0-->0x5": ["tppi"], "0-->0x2": ["ntppi"], "0-->0x1": ["ntppi"], "0x0-->0x5": ["left"], "0x5-->0x4": ["left"], "0x2-->0x1": ["right"], "0x2-->0x3": ["right"], "0x3-->0x1": ["above"], "0x5-->0x1": ["below"], "0x1-->0x0": ["right"], "0x3-->0x0": ["right"], "0x1-->0x2": ["near"]} | [{"head": "0x0", "tail": "0x5", "context_rel": {"left": {"phrase": "to the west of"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x5", "tail": "0x4", "context_rel": {"left": {"phrase": "to the west of"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x0", "tail": "0x4", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"left": {}}}] | [
"0x0",
"0x4"
] | What is the position of the small black triangle regarding the big yellow square? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 58",
"seed_id: 6",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "small black triangle", "0x1": "small blue circle", "0x2": "big yellow triangle", "0x3": "big blue square", "0x4": "big yellow square", "0x5": "small black circle"} | train_clock/675-2 | 8 | 2 |
||
[] | [
"right"
] | 2 | 2 | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the big yellow triangle is right of the big blue square.
Step 2: It is given that the big blue square is right of the small black triangle.
Step 3: From step 1 and 2, it can be inferred that the big yellow triangle is right of the small black triangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | One block called AAA exists. A big yellow square is covered by and a big blue square is inside block AAA. Block AAA cover a small black triangle and a small black circle. This block contain a big yellow triangle and a small blue circle. To the west of the small black circle is the small black triangle. To the west of the big yellow square there is the small black circle. The big yellow triangle are east of the small blue circle and the big blue square.The big blue square is north of and the small black circle is south of the small blue circle. The small blue circle and the big blue square are east of the small black triangle.Near to the big yellow triangle there is the small blue circle. | [] | {"0-->-1": ["ntpp"], "0x4-->0": ["tpp"], "0x3-->0": ["ntpp"], "0-->0x0": ["tppi"], "0-->0x5": ["tppi"], "0-->0x2": ["ntppi"], "0-->0x1": ["ntppi"], "0x0-->0x5": ["left"], "0x5-->0x4": ["left"], "0x2-->0x1": ["right"], "0x2-->0x3": ["right"], "0x3-->0x1": ["above"], "0x5-->0x1": ["below"], "0x1-->0x0": ["right"], "0x3-->0x0": ["right"], "0x1-->0x2": ["near"]} | [{"head": "0x2", "tail": "0x3", "context_rel": {"right": {"phrase": "east of"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x3", "tail": "0x0", "context_rel": {"right": {"phrase": "east of"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x2", "tail": "0x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}}}] | [
"0x2",
"0x0"
] | Where is the big yellow triangle relative to the small black triangle? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 58",
"seed_id: 6",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "small black triangle", "0x1": "small blue circle", "0x2": "big yellow triangle", "0x3": "big blue square", "0x4": "big yellow square", "0x5": "small black circle"} | train_clock/675-2 | 8 | 3 |
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: It is given that the medium green watermelon is inside the box two.
Step 2: From step 1, it can be inferred that the box two contains the medium green watermelon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one exists in the image. Away from another box called two is box one. A small yellow watermelon is in front of a medium green watermelon and is above a medium yellow melon. The small yellow watermelon is within box two. The medium green watermelon is inside box two. The medium yellow melon is in box two. Box two covers a small green melon which is in front of the small yellow watermelon and is below this fruit. In front of this thing there is the medium yellow melon. A small green apple is within box two. | [] | {"0-->-1": ["ntpp"], "0-->1": ["far"], "1x4-->1x0": ["front"], "1x4-->1x2": ["above"], "1x4-->1": ["ntpp"], "1x0-->1": ["ntpp"], "1x2-->1": ["ntpp"], "1-->1x3": ["tppi"], "1x3-->1x4": ["front"], "1x3-->1x2": ["below"], "1x2-->1x3": ["front"], "1x1-->1": ["ntpp"]} | [{"head": "1", "tail": "1x0", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "inside"}}, "inferred_rel": {"ntppi": {}}}] | [
"1",
"1x0"
] | What is the position of box two relative to the medium green watermelon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 10",
"seed_id: 8",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "1x0": "medium green watermelon", "1x1": "small green apple", "1x2": "medium yellow melon", "1x3": "small green melon", "1x4": "small yellow watermelon"} | train_simple/1731-3 | 8 | 4 |
||
[] | [
"behind"
] | 2 | 3 | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the small yellow watermelon is in front of the medium green watermelon.
Step 2: From step 1, we can say that the medium green watermelon is behind the small yellow watermelon.
Step 3: From the context, the small green melon is in front of the small yellow watermelon.
Step 4: From step 3, we can infer that the small yellow watermelon is behind the small green melon.
Step 5: From step 2 and 4, we can say that the medium green watermelon is behind the small green melon.
Step 6: It is given that the medium yellow melon is in front of the small green melon.
Step 7: From the context, the small green melon is below the medium yellow melon.
Step 8: From step 6 and 7, we can say that the small green melon is behind and below the medium yellow melon.
Step 9: From step 5 and 8, we can infer that the medium green watermelon is behind the medium yellow melon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one exists in the image. Away from another box called two is box one. A small yellow watermelon is in front of a medium green watermelon and is above a medium yellow melon. The small yellow watermelon is within box two. The medium green watermelon is inside box two. The medium yellow melon is in box two. Box two covers a small green melon which is in front of the small yellow watermelon and is below this fruit. In front of this thing there is the medium yellow melon. A small green apple is within box two. | [] | {"0-->-1": ["ntpp"], "0-->1": ["far"], "1x4-->1x0": ["front"], "1x4-->1x2": ["above"], "1x4-->1": ["ntpp"], "1x0-->1": ["ntpp"], "1x2-->1": ["ntpp"], "1-->1x3": ["tppi"], "1x3-->1x4": ["front"], "1x3-->1x2": ["below"], "1x2-->1x3": ["front"], "1x1-->1": ["ntpp"]} | [{"head": "1x0", "tail": "1x4", "context_rel": {}, "inv_context_rel": {"front": {"phrase": "in front of"}}, "inferred_rel": {"behind": {}}}, {"head": "1x4", "tail": "1x3", "context_rel": {}, "inv_context_rel": {"front": {"phrase": "in front of"}}, "inferred_rel": {"behind": {}}}, {"head": "1x0", "tail": "1x3", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"behind": {}}}, {"head": "1x3", "tail": "1x2", "context_rel": {"below": {"phrase": "below"}}, "inv_context_rel": {"front": {"phrase": "in front of"}}, "inferred_rel": {"behind": {}}}, {"head": "1x0", "tail": "1x2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"behind": {}}}] | [
"1x0",
"1x2"
] | Where is the medium green watermelon regarding the medium yellow melon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 10",
"seed_id: 8",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "1x0": "medium green watermelon", "1x1": "small green apple", "1x2": "medium yellow melon", "1x3": "small green melon", "1x4": "small yellow watermelon"} | train_simple/1731-3 | 8 | 5 |
||
[] | [
"above",
"behind"
] | 2 | 2 | [
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the small yellow watermelon is above the medium yellow melon.
Step 2: It is given that the small green melon is below the medium yellow melon.
Step 3: From the context, the medium yellow melon is in front of the small green melon.
Step 4: From step 2 and 3, we can infer that the medium yellow melon is above and in front of the small green melon.
Step 5: It is given that the small green melon is in front of the small yellow watermelon.
Step 6: From step 1, 4 and 5, we can infer that the small yellow watermelon is above and behind the small green melon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one exists in the image. Away from another box called two is box one. A small yellow watermelon is in front of a medium green watermelon and is above a medium yellow melon. The small yellow watermelon is within box two. The medium green watermelon is inside box two. The medium yellow melon is in box two. Box two covers a small green melon which is in front of the small yellow watermelon and is below this fruit. In front of this thing there is the medium yellow melon. A small green apple is within box two. | [] | {"0-->-1": ["ntpp"], "0-->1": ["far"], "1x4-->1x0": ["front"], "1x4-->1x2": ["above"], "1x4-->1": ["ntpp"], "1x0-->1": ["ntpp"], "1x2-->1": ["ntpp"], "1-->1x3": ["tppi"], "1x3-->1x4": ["front"], "1x3-->1x2": ["below"], "1x2-->1x3": ["front"], "1x1-->1": ["ntpp"]} | [{"head": "1x4", "tail": "1x2", "context_rel": {"above": {"phrase": "above"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1x2", "tail": "1x3", "context_rel": {"front": {"phrase": "in front of"}}, "inv_context_rel": {"below": {"phrase": "below"}}, "inferred_rel": {"above": {}}}, {"head": "1x4", "tail": "1x3", "context_rel": {}, "inv_context_rel": {"front": {"phrase": "in front of"}}, "inferred_rel": {"above": {}, "behind": {}}}] | [
"1x4",
"1x3"
] | Where is the small yellow watermelon regarding the small green melon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 10",
"seed_id: 8",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "1x0": "medium green watermelon", "1x1": "small green apple", "1x2": "medium yellow melon", "1x3": "small green melon", "1x4": "small yellow watermelon"} | train_simple/1731-3 | 8 | 6 |
||
[] | [
"far"
] | 2 | 2 | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the box one is far from the box two.
Step 2: It is given that the small green apple is inside the box two.
Step 3: From step 2, we can infer that the box two contains the small green apple.
Step 4: From step 1 and 3, it can be inferred that the box one is far from the small green apple. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one exists in the image. Away from another box called two is box one. A small yellow watermelon is in front of a medium green watermelon and is above a medium yellow melon. The small yellow watermelon is within box two. The medium green watermelon is inside box two. The medium yellow melon is in box two. Box two covers a small green melon which is in front of the small yellow watermelon and is below this fruit. In front of this thing there is the medium yellow melon. A small green apple is within box two. | [] | {"0-->-1": ["ntpp"], "0-->1": ["far"], "1x4-->1x0": ["front"], "1x4-->1x2": ["above"], "1x4-->1": ["ntpp"], "1x0-->1": ["ntpp"], "1x2-->1": ["ntpp"], "1-->1x3": ["tppi"], "1x3-->1x4": ["front"], "1x3-->1x2": ["below"], "1x2-->1x3": ["front"], "1x1-->1": ["ntpp"]} | [{"head": "0", "tail": "1", "context_rel": {"far": {"phrase": "away from"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1", "tail": "1x1", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "within"}}, "inferred_rel": {"ntppi": {}}}, {"head": "0", "tail": "1x1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"far": {}}}] | [
"0",
"1x1"
] | Where is box one relative to the small green apple? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 10",
"seed_id: 8",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "1x0": "medium green watermelon", "1x1": "small green apple", "1x2": "medium yellow melon", "1x3": "small green melon", "1x4": "small yellow watermelon"} | train_simple/1731-3 | 8 | 7 |
||
[] | [
"right",
"below",
"outside"
] | 2 | 2 | [
0,
1,
0,
1,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the box two contains and touches the orange apple.
Step 2: From step 1, we can say that the orange apple is inside and touching the box two.
Step 3: It is given that the box three is above, outside and left of the box two.
Step 4: From step 3, we can infer that the box two is below, outside and right of the box three.
Step 5: From step 2 and 4, it can be inferred that the orange apple is below, outside and right of the box three. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Three boxes, called one, two and three exist. Box two covers an orange apple and has a medium yellow apple. The orange thing touches the medium yellow apple. Box three with a medium green apple touches box one. North of, disconnected from and to the west of box two is this box. This box covers a medium yellow apple. Another medium yellow apple is within this box. The medium green apple are north of two medium yellow apples. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "1-->1x0": ["tppi"], "1-->1x1": ["ntppi"], "1x0-->1x1": ["ec"], "2-->2x2": ["ntppi"], "2-->0": ["ec"], "2-->1": ["above", "dc", "left"], "2-->2x0": ["tppi"], "2x1-->2": ["ntpp"], "2x2-->2x1": ["above"], "2x2-->2x0": ["above"]} | [{"head": "1x0", "tail": "1", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}, {"head": "1", "tail": "2", "context_rel": {}, "inv_context_rel": {"above": {"phrase": "north of"}, "dc": {"phrase": "disconnected from"}, "left": {"phrase": "to the west of"}}, "inferred_rel": {"below": {}, "dc": {}, "right": {}}}, {"head": "1x0", "tail": "2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "dc": {}}}] | [
"1x0",
"2"
] | What is the position of the orange fruit regarding box three? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 65",
"seed_id: 9",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "1x0": "orange apple", "1x1": "medium yellow apple of box two", "2x0": "medium yellow apple number one of box three", "2x1": "medium yellow apple number two of box three", "2x2": "medium green apple"} | train/4098-2 | 9 | 4 |
||
[
"Region|RCC8|NTPP"
] | [
"outside and touching"
] | 3 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the orange apple is outside and touching the medium yellow apple of box two.
Step 2: From step 1, it can be inferred that the medium yellow apple of box two is outside and touching the orange apple. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Three boxes, called one, two and three exist. Box two covers an orange apple and has a medium yellow apple. The orange thing touches the medium yellow apple. Box three with a medium green apple touches box one. North of, disconnected from and to the west of box two is this box. This box covers a medium yellow apple. Another medium yellow apple is within this box. The medium green apple are north of two medium yellow apples. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "1-->1x0": ["tppi"], "1-->1x1": ["ntppi"], "1x0-->1x1": ["ec"], "2-->2x2": ["ntppi"], "2-->0": ["ec"], "2-->1": ["above", "dc", "left"], "2-->2x0": ["tppi"], "2x1-->2": ["ntpp"], "2x2-->2x1": ["above"], "2x2-->2x0": ["above"]} | [{"head": "1x1", "tail": "1x0", "context_rel": {}, "inv_context_rel": {"ec": {"phrase": "touches"}}, "inferred_rel": {"ec": {}}}] | [
"1x1",
"1x0"
] | Where is the medium yellow apple in box two relative to the orange apple? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 65",
"seed_id: 9",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "1x0": "orange apple", "1x1": "medium yellow apple of box two", "2x0": "medium yellow apple number one of box three", "2x1": "medium yellow apple number two of box three", "2x2": "medium green apple"} | train/4098-2 | 9 | 5 |
||
[] | [
"inside"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: From the context, the block AAA contains the medium blue square number two.
Step 2: From step 1, we can infer that the medium blue square number two is inside the block AAA. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A medium blue square is inside and touching a block named AAA. This object touches another medium blue square. Block AAA contains medium blue square number two. | [] | {"0x0-->0": ["tpp"], "0x0-->0x1": ["ec"], "0-->0x1": ["ntppi"]} | [{"head": "0x1", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}] | [
"0x1",
"0"
] | What is the position of medium blue square number two regarding AAA? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 48",
"seed_id: 10",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "medium blue square number one", "0x1": "medium blue square number two"} | train_stepgame/4014-1 | 3 | 3 |
||
[] | [
"right",
"below",
"far",
"outside"
] | 2 | 4 | [
0,
1,
0,
1,
0,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box one contains and touches the big yellow watermelon.
Step 2: From step 1, we can infer that the big yellow watermelon is inside and touching the box one.
Step 3: It is given that the box three is left of the box one.
Step 4: From the context, the box one is below, outside and far from the box three.
Step 5: From step 3 and 4, we can say that the box one is below, outside, right and far from the box three.
Step 6: From step 2 and 5, we can say that the big yellow watermelon is below, outside, right and far from the box three.
Step 7: From the context, the box three contains the box two.
Step 8: From step 6 and 7, we can say that the big yellow watermelon is below, outside, right and far from the box two.
Step 9: It is given that the medium green watermelon is inside the box two.
Step 10: From step 9, it can be inferred that the box two contains the medium green watermelon.
Step 11: From step 8 and 10, it can be inferred that the big yellow watermelon is below, outside, right and far from the medium green watermelon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There exist three boxes, named one, two and three. Box one covers a big yellow watermelon. A medium green watermelon is inside box two. Box two covers a big orange apple. Box three is on the left side of box one and has this box. Under, disconnected from and farther from this box there is box one. Box three cover a medium orange watermelon and a medium green apple. Below the medium orange watermelon is the medium green apple. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "0-->0x2": ["tppi"], "1x4-->1": ["ntpp"], "1-->1x1": ["tppi"], "2-->0": ["left"], "2-->1": ["ntppi"], "0-->2": ["below", "dc", "far"], "2-->2x1": ["tppi"], "2-->2x2": ["tppi"], "2x2-->2x1": ["below"]} | [{"head": "0x2", "tail": "0", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}, {"head": "0", "tail": "2", "context_rel": {"below": {"phrase": "under"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "farther from"}}, "inv_context_rel": {"left": {"phrase": "on the left side of"}}, "inferred_rel": {"right": {}}}, {"head": "0x2", "tail": "2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}, {"head": "2", "tail": "1", "context_rel": {"ntppi": {"phrase": "has"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x2", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}, {"head": "1", "tail": "1x4", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "inside"}}, "inferred_rel": {"ntppi": {}}}, {"head": "0x2", "tail": "1x4", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}] | [
"0x2",
"1x4"
] | Where is the big yellow watermelon relative to the medium green watermelon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 9",
"seed_id: 11",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "0x2": "big yellow watermelon", "1x1": "big orange apple", "1x4": "medium green watermelon", "2x1": "medium orange watermelon", "2x2": "medium green apple"} | train_simple/2013-3 | 9 | 4 |
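The inversion steps in the reasoning chains above (e.g. "box three is left of box one" becoming "box one is right of box three", or `ntppi` becoming `ntpp`) amount to a lookup over relation converses. The sketch below is illustrative, not part of the dataset; the relation names mirror the codes used in the `symbolic_context` fields, and the assumption that `near`/`far`/`dc` are their own converses follows the symmetric uses seen in these rows.

```python
# Inverse of each spatial relation code appearing in the symbolic_context
# fields. Directional and part-of relations swap with their converse;
# near/far/dc are symmetric and map to themselves.
INVERSE = {
    "left": "right", "right": "left",
    "above": "below", "below": "above",
    "front": "behind", "behind": "front",
    "ntpp": "ntppi", "ntppi": "ntpp",
    "tpp": "tppi", "tppi": "tpp",
    "near": "near", "far": "far", "dc": "dc",
}

def invert(relations):
    """Invert a relation list, e.g. turning a '0-->2' entry into '2-->0'."""
    return [INVERSE[r] for r in relations]
```

For instance, `invert(["below", "dc", "far"])` reproduces the "above, outside and far" flip used in the single-hop rows.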
||
[] | [
"right",
"below",
"far",
"outside"
] | 2 | 4 | [
0,
1,
0,
1,
0,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box one contains and touches the big yellow watermelon.
Step 2: From step 1, we can say that the big yellow watermelon is inside and touching the box one.
Step 3: From the context, the box three is left of the box one.
Step 4: From the context, the box one is below, outside and far from the box three.
Step 5: From step 3 and 4, we can say that the box one is below, outside, right and far from the box three.
Step 6: From step 2 and 5, we can infer that the big yellow watermelon is below, outside, right and far from the box three.
Step 7: From the context, the box three contains the box two.
Step 8: From step 6 and 7, it can be inferred that the big yellow watermelon is below, outside, right and far from the box two.
Step 9: From the context, the box two contains and touches the big orange apple.
Step 10: From step 8 and 9, we can say that the big yellow watermelon is below, outside, right and far from the big orange apple. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There exist three boxes, named one, two and three. Box one covers a big yellow watermelon. A medium green watermelon is inside box two. Box two covers a big orange apple. Box three is on the left side of box one and has this box. Under, disconnected from and farther from this box there is box one. Box three cover a medium orange watermelon and a medium green apple. Below the medium orange watermelon is the medium green apple. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "0-->0x2": ["tppi"], "1x4-->1": ["ntpp"], "1-->1x1": ["tppi"], "2-->0": ["left"], "2-->1": ["ntppi"], "0-->2": ["below", "dc", "far"], "2-->2x1": ["tppi"], "2-->2x2": ["tppi"], "2x2-->2x1": ["below"]} | [{"head": "0x2", "tail": "0", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}, {"head": "0", "tail": "2", "context_rel": {"below": {"phrase": "under"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "farther from"}}, "inv_context_rel": {"left": {"phrase": "on the left side of"}}, "inferred_rel": {"right": {}}}, {"head": "0x2", "tail": "2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}, {"head": "2", "tail": "1", "context_rel": {"ntppi": {"phrase": "has"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x2", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}, {"head": "1", "tail": "1x1", "context_rel": {"tppi": {"phrase": "covers"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x2", "tail": "1x1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}] | [
"0x2",
"1x1"
] | Where is the big yellow watermelon relative to the big orange apple? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 9",
"seed_id: 11",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "0x2": "big yellow watermelon", "1x1": "big orange apple", "1x4": "medium green watermelon", "2x1": "medium orange watermelon", "2x2": "medium green apple"} | train_simple/2013-3 | 9 | 5 |
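Several multi-hop derivations above propagate a relation from a container to its contents (steps like "the big yellow watermelon is inside box one, box one is below/outside/right/far from box three, therefore the watermelon is too"). A minimal sketch of that inference step, under the assumption — consistent with these rows but not stated as a rule in the dump — that directional, distance, and disconnection relations transfer to contained objects while part-of codes do not:

```python
# Relations assumed to transfer from a container to its contents: if X is
# inside Y and Y bears one of these relations to Z, X bears it to Z too.
# Part-of codes (tpp/ntpp and their inverses) are deliberately excluded.
INHERITABLE = {"left", "right", "above", "below", "front", "behind",
               "far", "dc"}

def propagate_to_contents(container_rels):
    """Keep only the container relations that also hold for its contents."""
    return [r for r in container_rels if r in INHERITABLE]
```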
||
[] | [
"above"
] | 2 | 1 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the medium green apple is below the medium orange watermelon.
Step 2: From step 1, we can say that the medium orange watermelon is above the medium green apple. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There exist three boxes, named one, two and three. Box one covers a big yellow watermelon. A medium green watermelon is inside box two. Box two covers a big orange apple. Box three is on the left side of box one and has this box. Under, disconnected from and farther from this box there is box one. Box three cover a medium orange watermelon and a medium green apple. Below the medium orange watermelon is the medium green apple. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "0-->0x2": ["tppi"], "1x4-->1": ["ntpp"], "1-->1x1": ["tppi"], "2-->0": ["left"], "2-->1": ["ntppi"], "0-->2": ["below", "dc", "far"], "2-->2x1": ["tppi"], "2-->2x2": ["tppi"], "2x2-->2x1": ["below"]} | [{"head": "2x1", "tail": "2x2", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "below"}}, "inferred_rel": {"above": {}}}] | [
"2x1",
"2x2"
] | Where is the medium orange watermelon regarding the medium green apple? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 9",
"seed_id: 11",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "0x2": "big yellow watermelon", "1x1": "big orange apple", "1x4": "medium green watermelon", "2x1": "medium orange watermelon", "2x2": "medium green apple"} | train_simple/2013-3 | 9 | 6 |
||
[] | [
"inside and touching"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | Step 1: From the context, the box three contains and touches the medium orange watermelon.
Step 2: From step 1, it can be inferred that the medium orange watermelon is inside and touching the box three. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There exist three boxes, named one, two and three. Box one covers a big yellow watermelon. A medium green watermelon is inside box two. Box two covers a big orange apple. Box three is on the left side of box one and has this box. Under, disconnected from and farther from this box there is box one. Box three cover a medium orange watermelon and a medium green apple. Below the medium orange watermelon is the medium green apple. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "0-->0x2": ["tppi"], "1x4-->1": ["ntpp"], "1-->1x1": ["tppi"], "2-->0": ["left"], "2-->1": ["ntppi"], "0-->2": ["below", "dc", "far"], "2-->2x1": ["tppi"], "2-->2x2": ["tppi"], "2x2-->2x1": ["below"]} | [{"head": "2x1", "tail": "2", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "cover"}}, "inferred_rel": {"tpp": {}}}] | [
"2x1",
"2"
] | Where is the medium orange watermelon regarding box three? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 9",
"seed_id: 11",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "2": "box three", "0x2": "big yellow watermelon", "1x1": "big orange apple", "1x4": "medium green watermelon", "2x1": "medium orange watermelon", "2x2": "medium green apple"} | train_simple/2013-3 | 9 | 7 |
||
[] | [
"inside"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: From the context, the box one contains the small yellow melon.
Step 2: From step 1, it can be inferred that the small yellow melon is inside the box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A yellow apple is covered by a box called one. Under and to the right-hand side of a small yellow melon there is this fruit. Box one has the small yellow melon. In another box named two there is this box. A big green watermelon is close to an orange melon. The green thing is inside box two. Box two has the orange thing. | [] | {"0x0-->0": ["tpp"], "0x0-->0x1": ["below", "right"], "0-->0x1": ["ntppi"], "0-->1": ["ntpp"], "1x1-->1x0": ["near"], "1x1-->1": ["ntpp"], "1-->1x0": ["ntppi"]} | [{"head": "0x1", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "has"}}, "inferred_rel": {"ntpp": {}}}] | [
"0x1",
"0"
] | Where is the small yellow melon relative to box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 19",
"seed_id: 12",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "yellow apple", "0x1": "small yellow melon", "1x0": "orange melon", "1x1": "big green watermelon"} | train_simple/268-1 | 6 | 4 |
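Each row's `target_scores` field is a 16-way 0/1 vector aligned with the fixed `target_choices` list, and the symbolic codes map one-to-one onto that wording (`ntpp` is "inside", `tppi` is "contains and touches", and so on). A sketch of that alignment, with the code-to-wording map inferred from the rows above rather than taken from any published schema:

```python
# The 16 answer choices, in the order used by target_choices / target_scores.
CHOICES = ["left", "right", "above", "below", "behind", "in front",
           "near", "far", "outside", "outside and touching",
           "partially overlapping", "inside and touching", "inside",
           "contains and touches", "contains", "overlapping"]

# Assumed mapping from the symbolic relation codes to choice wording.
CODE_TO_CHOICE = {
    "left": "left", "right": "right", "above": "above", "below": "below",
    "behind": "behind", "front": "in front", "near": "near", "far": "far",
    "dc": "outside", "ec": "outside and touching",
    "po": "partially overlapping", "tpp": "inside and touching",
    "ntpp": "inside", "tppi": "contains and touches",
    "ntppi": "contains",
}

def to_target_scores(codes):
    """Build the 16-way 0/1 vector from a set of inferred relation codes."""
    answers = {CODE_TO_CHOICE[c] for c in codes}
    return [1 if choice in answers else 0 for choice in CHOICES]
```

For the row above, the single inferred code `ntpp` yields a vector with a 1 only at the "inside" position, matching its `target_scores`.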
||
[] | [
"near"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the big green watermelon is near the orange melon.
Step 2: From step 1, it can be inferred that the orange melon is near the big green watermelon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A yellow apple is covered by a box called one. Under and to the right-hand side of a small yellow melon there is this fruit. Box one has the small yellow melon. In another box named two there is this box. A big green watermelon is close to an orange melon. The green thing is inside box two. Box two has the orange thing. | [] | {"0x0-->0": ["tpp"], "0x0-->0x1": ["below", "right"], "0-->0x1": ["ntppi"], "0-->1": ["ntpp"], "1x1-->1x0": ["near"], "1x1-->1": ["ntpp"], "1-->1x0": ["ntppi"]} | [{"head": "1x0", "tail": "1x1", "context_rel": {}, "inv_context_rel": {"near": {"phrase": "close to"}}, "inferred_rel": {"near": {}}}] | [
"1x0",
"1x1"
] | Where is the orange thing relative to the big thing? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 19",
"seed_id: 12",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "yellow apple", "0x1": "small yellow melon", "1x0": "orange melon", "1x1": "big green watermelon"} | train_simple/268-1 | 6 | 5 |
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: It is given that the box one is inside the box two.
Step 2: From step 1, we can say that the box two contains the box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A yellow apple is covered by a box called one. Under and to the right-hand side of a small yellow melon there is this fruit. Box one has the small yellow melon. In another box named two there is this box. A big green watermelon is close to an orange melon. The green thing is inside box two. Box two has the orange thing. | [] | {"0x0-->0": ["tpp"], "0x0-->0x1": ["below", "right"], "0-->0x1": ["ntppi"], "0-->1": ["ntpp"], "1x1-->1x0": ["near"], "1x1-->1": ["ntpp"], "1-->1x0": ["ntppi"]} | [{"head": "1", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "in"}}, "inferred_rel": {"ntppi": {}}}] | [
"1",
"0"
] | Where is box two regarding box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 19",
"seed_id: 12",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "yellow apple", "0x1": "small yellow melon", "1x0": "orange melon", "1x1": "big green watermelon"} | train_simple/268-1 | 6 | 6 |
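The `symbolic_context` field encodes the scene as a JSON object whose keys are `head-->tail` pairs and whose values are relation lists; `-1` denotes the image itself and `NxM` denotes object M inside box/block N (per the `symbolic_entity_map` fields). A small parser sketch for that encoding, assuming only what those rows exhibit:

```python
import json

def parse_symbolic_context(raw):
    """Parse a symbolic_context string such as
    '{"0x0-->0": ["tpp"], "0-->1": ["ntpp"]}'
    into (head, tail, relations) triples, preserving row order."""
    triples = []
    for key, rels in json.loads(raw).items():
        head, tail = key.split("-->")  # e.g. "0x0-->0" -> ("0x0", "0")
        triples.append((head, tail, rels))
    return triples
```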
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: From the context, the big green watermelon is inside the box two.
Step 2: From step 1, it can be inferred that the box two contains the big green watermelon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A yellow apple is covered by a box called one. Under and to the right-hand side of a small yellow melon there is this fruit. Box one has the small yellow melon. In another box named two there is this box. A big green watermelon is close to an orange melon. The green thing is inside box two. Box two has the orange thing. | [] | {"0x0-->0": ["tpp"], "0x0-->0x1": ["below", "right"], "0-->0x1": ["ntppi"], "0-->1": ["ntpp"], "1x1-->1x0": ["near"], "1x1-->1": ["ntpp"], "1-->1x0": ["ntppi"]} | [{"head": "1", "tail": "1x1", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "inside"}}, "inferred_rel": {"ntppi": {}}}] | [
"1",
"1x1"
] | What is the position of box two regarding the big fruit? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 19",
"seed_id: 12",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "yellow apple", "0x1": "small yellow melon", "1x0": "orange melon", "1x1": "big green watermelon"} | train_simple/268-1 | 6 | 7 |
||
[] | [
"left",
"above",
"far",
"outside"
] | 2 | 2 | [
1,
0,
1,
0,
0,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the block HHH is right of the block LLL.
Step 2: From the context, the block LLL is above, outside and far from the block HHH.
Step 3: From step 1 and 2, we can infer that the block LLL is above, outside, left and far from the block HHH.
Step 4: It is given that the medium grey hexagon is inside and touching the block HHH.
Step 5: From step 4, it can be inferred that the block HHH contains and touches the medium grey hexagon.
Step 6: From step 3 and 5, we can infer that the block LLL is above, outside, left and far from the medium grey hexagon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two blocks, named HHH and LLL exist. East of block LLL there is block HHH. To the north of, disconnected from and far from this block is block LLL. A medium grey hexagon is covered by and a medium grey pentagon is in block HHH. Near to the medium grey hexagon is the pentagon. A large purple hexagon is covered by block LLL. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["right"], "1-->0": ["above", "dc", "far"], "0x0-->0": ["tpp"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["near"], "1x0-->1": ["tpp"]} | [{"head": "1", "tail": "0", "context_rel": {"above": {"phrase": "to the north of"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "far from"}}, "inv_context_rel": {"right": {"phrase": "east of"}}, "inferred_rel": {"left": {}}}, {"head": "0", "tail": "0x0", "context_rel": {}, "inv_context_rel": {"tpp": {"phrase": "covered by"}}, "inferred_rel": {"tppi": {}}}, {"head": "1", "tail": "0x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"left": {}, "above": {}, "far": {}, "dc": {}}}] | [
"1",
"0x0"
] | Where is block LLL relative to the medium grey hexagon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 63",
"seed_id: 13",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block HHH", "1": "block LLL", "0x0": "medium grey hexagon", "0x1": "medium grey pentagon", "1x0": "large purple hexagon"} | train_clock/333-2 | 6 | 4 |
||
[] | [
"right",
"below",
"far",
"outside"
] | 2 | 3 | [
0,
1,
0,
1,
0,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the medium grey hexagon is inside and touching the block HHH.
Step 2: From the context, the block LLL is above, outside and far from the block HHH.
Step 3: It is given that the block HHH is right of the block LLL.
Step 4: From step 2 and 3, we can infer that the block HHH is below, outside, far and right of the block LLL.
Step 5: From step 1 and 4, it can be inferred that the medium grey hexagon is below, outside, right and far from the block LLL.
Step 6: It is given that the large purple hexagon is inside and touching the block LLL.
Step 7: From step 6, we can say that the block LLL contains and touches the large purple hexagon.
Step 8: From step 5 and 7, it can be inferred that the medium grey hexagon is below, outside, right and far from the large purple hexagon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two blocks, named HHH and LLL exist. East of block LLL there is block HHH. To the north of, disconnected from and far from this block is block LLL. A medium grey hexagon is covered by and a medium grey pentagon is in block HHH. Near to the medium grey hexagon is the pentagon. A large purple hexagon is covered by block LLL. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["right"], "1-->0": ["above", "dc", "far"], "0x0-->0": ["tpp"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["near"], "1x0-->1": ["tpp"]} | [{"head": "0x0", "tail": "0", "context_rel": {"tpp": {"phrase": "covered by"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0", "tail": "1", "context_rel": {"right": {"phrase": "east of"}}, "inv_context_rel": {"above": {"phrase": "to the north of"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "far from"}}, "inferred_rel": {"below": {}, "dc": {}, "far": {}}}, {"head": "0x0", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}, {"head": "1", "tail": "1x0", "context_rel": {}, "inv_context_rel": {"tpp": {"phrase": "covered by"}}, "inferred_rel": {"tppi": {}}}, {"head": "0x0", "tail": "1x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}] | [
"0x0",
"1x0"
] | Where is the medium grey hexagon relative to the purple object? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 63",
"seed_id: 13",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block HHH", "1": "block LLL", "0x0": "medium grey hexagon", "0x1": "medium grey pentagon", "1x0": "large purple hexagon"} | train_clock/333-2 | 6 | 5 |
||
[] | [
"right",
"below",
"far",
"outside"
] | 2 | 3 | [
0,
1,
0,
1,
0,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the medium grey pentagon is inside the block HHH.
Step 2: From the context, the block LLL is above, outside and far from the block HHH.
Step 3: From the context, the block HHH is right of the block LLL.
Step 4: From step 2 and 3, we can infer that the block HHH is below, outside, far and right of the block LLL.
Step 5: From step 1 and 4, we can say that the medium grey pentagon is below, outside, right and far from the block LLL.
Step 6: It is given that the large purple hexagon is inside and touching the block LLL.
Step 7: From step 6, we can say that the block LLL contains and touches the large purple hexagon.
Step 8: From step 5 and 7, we can say that the medium grey pentagon is below, outside, right and far from the large purple hexagon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two blocks, named HHH and LLL exist. East of block LLL there is block HHH. To the north of, disconnected from and far from this block is block LLL. A medium grey hexagon is covered by and a medium grey pentagon is in block HHH. Near to the medium grey hexagon is the pentagon. A large purple hexagon is covered by block LLL. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["right"], "1-->0": ["above", "dc", "far"], "0x0-->0": ["tpp"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["near"], "1x0-->1": ["tpp"]} | [{"head": "0x1", "tail": "0", "context_rel": {"ntpp": {"phrase": "in"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0", "tail": "1", "context_rel": {"right": {"phrase": "east of"}}, "inv_context_rel": {"above": {"phrase": "to the north of"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "far from"}}, "inferred_rel": {"below": {}, "dc": {}, "far": {}}}, {"head": "0x1", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}, {"head": "1", "tail": "1x0", "context_rel": {}, "inv_context_rel": {"tpp": {"phrase": "covered by"}}, "inferred_rel": {"tppi": {}}}, {"head": "0x1", "tail": "1x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"right": {}, "below": {}, "far": {}, "dc": {}}}] | [
"0x1",
"1x0"
] | What is the position of the pentagon relative to the purple thing? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 63",
"seed_id: 13",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block HHH", "1": "block LLL", "0x0": "medium grey hexagon", "0x1": "medium grey pentagon", "1x0": "large purple hexagon"} | train_clock/333-2 | 6 | 6 |
||
[] | [
"near"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the medium grey pentagon is near the medium grey hexagon.
Step 2: From step 1, it can be inferred that the medium grey hexagon is near the medium grey pentagon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two blocks, named HHH and LLL exist. East of block LLL there is block HHH. To the north of, disconnected from and far from this block is block LLL. A medium grey hexagon is covered by and a medium grey pentagon is in block HHH. Near to the medium grey hexagon is the pentagon. A large purple hexagon is covered by block LLL. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["right"], "1-->0": ["above", "dc", "far"], "0x0-->0": ["tpp"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["near"], "1x0-->1": ["tpp"]} | [{"head": "0x0", "tail": "0x1", "context_rel": {}, "inv_context_rel": {"near": {"phrase": "near to"}}, "inferred_rel": {"near": {}}}] | [
"0x0",
"0x1"
] | Where is the medium grey hexagon relative to the pentagon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 63",
"seed_id: 13",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block HHH", "1": "block LLL", "0x0": "medium grey hexagon", "0x1": "medium grey pentagon", "1x0": "large purple hexagon"} | train_clock/333-2 | 6 | 7 |
||
[
"Region|RCC8|NTPP"
] | [
"in front",
"far",
"outside"
] | 3 | 4 | [
0,
0,
0,
0,
0,
1,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the medium blue square is inside the block CCC.
Step 2: It is given that the block AAA contains the block CCC.
Step 3: From step 2, it can be inferred that the block CCC is inside the block AAA.
Step 4: From step 1 and 3, it can be inferred that the medium blue square is inside the block AAA.
Step 5: From the context, the block BBB is behind, outside and far from the block AAA.
Step 6: From step 5, we can infer that the block AAA is outside, in front and far from the block BBB.
Step 7: From step 4 and 6, we can infer that the medium blue square is outside, in front and far from the block BBB.
Step 8: It is given that the block BBB contains the medium yellow square of block BBB.
Step 9: From step 7 and 8, we can infer that the medium blue square is outside, in front and far from the medium yellow square of block BBB. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Three blocks, called AAA, BBB and CCC exist in the image. Behind, disconnected from and away from block AAA is block BBB with a medium yellow square. A medium black square is inside and touching this block. Block AAA contains block CCC. A medium blue square is in block CCC. Block CCC covers a medium yellow square. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "1-->1x1": ["ntppi"], "1-->0": ["behind", "dc", "far"], "1x0-->1": ["tpp"], "0-->2": ["ntppi"], "2x1-->2": ["ntpp"], "2-->2x0": ["tppi"]} | [{"head": "2x1", "tail": "2", "context_rel": {"ntpp": {"phrase": "in"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "2", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}, {"head": "2x1", "tail": "0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {}, "inv_context_rel": {"behind": {"phrase": "behind"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "away from"}}, "inferred_rel": {"front": {}, "dc": {}, "far": {}}}, {"head": "2x1", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"front": {}, "far": {}, "dc": {}}}, {"head": "1", "tail": "1x1", "context_rel": {"ntppi": {"phrase": "with"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "2x1", "tail": "1x1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"front": {}, "far": {}, "dc": {}}}] | [
"2x1",
"1x1"
] | Where is the medium blue square regarding the medium yellow square in BBB? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 57",
"seed_id: 14",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "1": "block BBB", "2": "block CCC", "1x0": "medium black square", "1x1": "medium yellow square of block BBB", "2x0": "medium yellow square of block CCC", "2x1": "medium blue square"} | train/3528-1 | 8 | 4 |
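The containment chains in rows like the one above ("the medium blue square is inside block CCC, block AAA contains block CCC, therefore the square is inside block AAA") correspond to the determinate part-of entries of the RCC8 composition table. A sketch restricted to those entries; ambiguous pairs (e.g. tpp composed with tpp, which can yield either tpp or ntpp) are deliberately omitted rather than guessed:

```python
# Determinate part-of compositions used by the containment chains above.
COMPOSE = {
    ("ntpp", "ntpp"): "ntpp",  # inside something inside a box -> inside
    ("tpp", "ntpp"): "ntpp",   # touching-inside, then strictly inside
    ("ntpp", "tpp"): "ntpp",
}

def chain(*rels):
    """Fold a chain of containment relations left to right."""
    out = rels[0]
    for r in rels[1:]:
        out = COMPOSE[(out, r)]
    return out
```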
||
[] | [
"in front",
"far",
"outside"
] | 2 | 3 | [
0,
0,
0,
0,
0,
1,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the block AAA contains the block CCC.
Step 2: From step 1, we can infer that the block CCC is inside the block AAA.
Step 3: From the context, the block BBB is behind, outside and far from the block AAA.
Step 4: From step 3, we can infer that the block AAA is outside, in front and far from the block BBB.
Step 5: From step 2 and 4, it can be inferred that the block CCC is outside, in front and far from the block BBB.
Step 6: It is given that the medium black square is inside and touching the block BBB.
Step 7: From step 6, we can infer that the block BBB contains and touches the medium black square.
Step 8: From step 5 and 7, we can say that the block CCC is outside, in front and far from the medium black square. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Three blocks, called AAA, BBB and CCC exist in the image. Behind, disconnected from and away from block AAA is block BBB with a medium yellow square. A medium black square is inside and touching this block. Block AAA contains block CCC. A medium blue square is in block CCC. Block CCC covers a medium yellow square. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "1-->1x1": ["ntppi"], "1-->0": ["behind", "dc", "far"], "1x0-->1": ["tpp"], "0-->2": ["ntppi"], "2x1-->2": ["ntpp"], "2-->2x0": ["tppi"]} | [{"head": "2", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {}, "inv_context_rel": {"behind": {"phrase": "behind"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "away from"}}, "inferred_rel": {"front": {}, "dc": {}, "far": {}}}, {"head": "2", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"front": {}, "far": {}, "dc": {}}}, {"head": "1", "tail": "1x0", "context_rel": {}, "inv_context_rel": {"tpp": {"phrase": "inside and touching"}}, "inferred_rel": {"tppi": {}}}, {"head": "2", "tail": "1x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"front": {}, "far": {}, "dc": {}}}] | [
"2",
"1x0"
] | Where is CCC relative to the black object? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 57",
"seed_id: 14",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "1": "block BBB", "2": "block CCC", "1x0": "medium black square", "1x1": "medium yellow square of block BBB", "2x0": "medium yellow square of block CCC", "2x1": "medium blue square"} | train/3528-1 | 8 | 5 |
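A consistency property that holds across the rows above: the `targets` list names exactly the choices marked 1 in `target_scores`, read in `target_choices` order. A sketch of that sanity check (the function name is illustrative, not from the dataset):

```python
def check_row(targets, target_scores, target_choices):
    """Return True iff the 0/1 target_scores vector marks exactly the
    answers listed in targets, positionally aligned with target_choices."""
    marked = [c for c, s in zip(target_choices, target_scores) if s == 1]
    return sorted(marked) == sorted(targets)
```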
||
[
"Region|RCC8|NTPP",
"Region|RCC8|TPP"
] | [
"behind",
"far",
"outside"
] | 4 | 4 | [
0,
0,
0,
0,
1,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the block BBB contains the medium yellow square of block BBB.
Step 2: From step 1, it can be inferred that the medium yellow square of block BBB is inside the block BBB.
Step 3: It is given that the block BBB is behind, outside and far from the block AAA.
Step 4: From step 2 and 3, it can be inferred that the medium yellow square of block BBB is behind, outside and far from the block AAA.
Step 5: It is given that the block AAA contains the block CCC.
Step 6: From step 4 and 5, we can infer that the medium yellow square of block BBB is behind, outside and far from the block CCC.
Step 7: From the context, the block CCC contains and touches the medium yellow square of block CCC.
Step 8: From step 6 and 7, we can say that the medium yellow square of block BBB is behind, outside and far from the medium yellow square of block CCC. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Three blocks, called AAA, BBB and CCC exist in the image. Behind, disconnected from and away from block AAA is block BBB with a medium yellow square. A medium black square is inside and touching this block. Block AAA contains block CCC. A medium blue square is in block CCC. Block CCC covers a medium yellow square. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "1-->1x1": ["ntppi"], "1-->0": ["behind", "dc", "far"], "1x0-->1": ["tpp"], "0-->2": ["ntppi"], "2x1-->2": ["ntpp"], "2-->2x0": ["tppi"]} | [{"head": "1x1", "tail": "1", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "with"}}, "inferred_rel": {"ntpp": {}}}, {"head": "1", "tail": "0", "context_rel": {"behind": {"phrase": "behind"}, "dc": {"phrase": "disconnected from"}, "far": {"phrase": "away from"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1x1", "tail": "0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"behind": {}, "far": {}, "dc": {}}}, {"head": "0", "tail": "2", "context_rel": {"ntppi": {"phrase": "contains"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1x1", "tail": "2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"behind": {}, "far": {}, "dc": {}}}, {"head": "2", "tail": "2x0", "context_rel": {"tppi": {"phrase": "covers"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1x1", "tail": "2x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"behind": {}, "far": {}, "dc": {}}}] | [
"1x1",
"2x0"
] | What is the position of the medium yellow square in block BBB regarding the medium yellow square covered by block CCC? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 57",
"seed_id: 14",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "1": "block BBB", "2": "block CCC", "1x0": "medium black square", "1x1": "medium yellow square of block BBB", "2x0": "medium yellow square of block CCC", "2x1": "medium blue square"} | train/3528-1 | 8 | 6 |
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: From the context, the tiny green rectangle is inside the box DDD.
Step 2: From step 1, it can be inferred that the box DDD contains the tiny green rectangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There exists one box named DDD. A tiny green rectangle and a tiny white rectangle are within box DDD. Under the green object there is the white object. | [] | {"0-->-1": ["ntpp"], "0x0-->0": ["ntpp"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["below"]} | [{"head": "0", "tail": "0x0", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "within"}}, "inferred_rel": {"ntppi": {}}}] | [
"0",
"0x0"
] | What is the position of DDD relative to the green thing? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 70",
"seed_id: 15",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "0x0": "tiny green rectangle", "0x1": "tiny white rectangle"} | train_clock/2014-1 | 4 | 3 |
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: It is given that the tiny white rectangle is inside the box DDD.
Step 2: From step 1, it can be inferred that the box DDD contains the tiny white rectangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There exists one box named DDD. A tiny green rectangle and a tiny white rectangle are within box DDD. Under the green object there is the white object. | [] | {"0-->-1": ["ntpp"], "0x0-->0": ["ntpp"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["below"]} | [{"head": "0", "tail": "0x1", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "within"}}, "inferred_rel": {"ntppi": {}}}] | [
"0",
"0x1"
] | What is the position of box DDD regarding the white thing? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 70",
"seed_id: 15",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "0x0": "tiny green rectangle", "0x1": "tiny white rectangle"} | train_clock/2014-1 | 4 | 4 |
||
[] | [
"above"
] | 2 | 1 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the tiny white rectangle is below the tiny green rectangle.
Step 2: From step 1, we can infer that the tiny green rectangle is above the tiny white rectangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There exists one box named DDD. A tiny green rectangle and a tiny white rectangle are within box DDD. Under the green object there is the white object. | [] | {"0-->-1": ["ntpp"], "0x0-->0": ["ntpp"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["below"]} | [{"head": "0x0", "tail": "0x1", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "under"}}, "inferred_rel": {"above": {}}}] | [
"0x0",
"0x1"
] | What is the position of the green thing regarding the white object? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 70",
"seed_id: 15",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "0x0": "tiny green rectangle", "0x1": "tiny white rectangle"} | train_clock/2014-1 | 4 | 5 |
||
[] | [
"inside"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: From the context, the block KKK contains the block HHH.
Step 2: From step 1, we can say that the block HHH is inside the block KKK. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Three blocks, named HHH, LLL and KKK exist in the image. A medium purple hexagon is in block HHH. Block HHH covers a medium grey hexagon which touches this thing. A medium purple hexagon is to the north of a medium grey hexagon. The medium purple hexagon is in block LLL. Block LLL has the medium grey hexagon. This block has another medium purple hexagon which is to the north of medium purple hexagon number one and a medium red hexagon. Block LLL covers the medium red hexagon. Block KKK with a medium red hexagon has block HHH and this block. This block covers a medium purple hexagon. Another medium purple hexagon touches the medium red hexagon. Medium purple hexagon number two is in block KKK. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "2-->-1": ["ntpp"], "0x1-->0": ["ntpp"], "0-->0x0": ["tppi"], "0x0-->0x1": ["ec"], "1x2-->1x1": ["above"], "1x2-->1": ["ntpp"], "1-->1x1": ["ntppi"], "1-->1x3": ["ntppi"], "1x3-->1x2": ["above"], "1x3-->1x0": ["above"], "1-->1x0": ["tppi"], "2-->2x2": ["ntppi"], "2-->0": ["ntppi"], "2-->1": ["ntppi"], "2-->2x0": ["tppi"], "2x1-->2x2": ["ec"], "2x1-->2": ["ntpp"]} | [{"head": "0", "tail": "2", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "has"}}, "inferred_rel": {"ntpp": {}}}] | [
"0",
"2"
] | Where is HHH regarding KKK? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 70",
"seed_id: 16",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block HHH", "1": "block LLL", "2": "block KKK", "0x0": "medium grey hexagon of block HHH", "0x1": "medium purple hexagon of block HHH", "1x0": "medium red hexagon of block LLL", "1x1": "medium grey hexagon of block LLL", "1x2": "medium purple hexagon number one of block LLL", "1x3": "medium purple hexagon number two of block LLL", "2x0": "medium purple hexagon number one of block KKK", "2x1": "medium purple hexagon number two of block KKK", "2x2": "medium red hexagon of block KKK"} | train_clock/3470-1 | 13 | 4 |
||
[] | [
"inside"
] | 2 | 2 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: It is given that the medium green apple is inside and touching the box one.
Step 2: It is given that the box one is inside the box two.
Step 3: From step 1 and 2, we can infer that the medium green apple is inside the box two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, called one and two exist in the image. Within box two there is box one. A medium green apple and a medium orange apple are covered by box one. A small orange apple and a big yellow apple are inside box one. Box one covers a big orange melon. To the left-hand side of the big orange melon there is the small orange apple. The big yellow apple is over the big orange melon. To the right of the medium orange apple is the medium green apple. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["ntpp"], "0x4-->0": ["tpp"], "0x1-->0": ["tpp"], "0x0-->0": ["ntpp"], "0x3-->0": ["ntpp"], "0-->0x5": ["tppi"], "0x0-->0x5": ["left"], "0x3-->0x5": ["above"], "0x4-->0x1": ["right"]} | [{"head": "0x4", "tail": "0", "context_rel": {"tpp": {"phrase": "covered by"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0", "tail": "1", "context_rel": {"ntpp": {"phrase": "within"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x4", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"ntpp": {}}}] | [
"0x4",
"1"
] | Where is the medium green apple relative to box two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 50",
"seed_id: 17",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "small orange apple", "0x1": "medium orange apple", "0x3": "big yellow apple", "0x4": "medium green apple", "0x5": "big orange melon"} | train_simple/668-0 | 8 | 1 |
||
[] | [
"inside"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: It is given that the box DDD contains the midsize green rectangle.
Step 2: From step 1, we can infer that the midsize green rectangle is inside the box DDD. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | We have one box named DDD. This box contains a midsize green rectangle and covers a midsize white rectangle. The green object and another midsize white rectangle are at 12 o'clock position regarding to the midsize white rectangle. The midsize green rectangle touches midsize white rectangle number two and is at 6:00 position relative to another midsize white rectangle. Midsize white rectangle number three is inside box DDD. At 12:00 position relative to the green object there is midsize white rectangle number two. Midsize white rectangle number two is inside box DDD. Midsize white rectangle number three is at 12 o'clock position regarding to this shape. | [] | {"-1-->0": ["ntppi"], "0-->0x1": ["ntppi"], "0-->0x0": ["tppi"], "0x1-->0x0": ["above"], "0x2-->0x0": ["above"], "0x1-->0x2": ["ec"], "0x1-->0x3": ["below"], "0x3-->0": ["ntpp"], "0x2-->0x1": ["above"], "0x2-->0": ["ntpp"], "0x3-->0x2": ["above"]} | [{"head": "0x1", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}] | [
"0x1",
"0"
] | Where is the green object regarding the box? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 37",
"seed_id: 18",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "0x0": "midsize white rectangle number one", "0x1": "midsize green rectangle", "0x2": "midsize white rectangle number two", "0x3": "midsize white rectangle number three"} | train_stepgame/2955-2 | 6 | 4 |
||
[] | [
"above",
"outside and touching"
] | 2 | 1 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the midsize green rectangle is outside and touching the midsize white rectangle number two.
Step 2: From the context, the midsize white rectangle number two is above the midsize green rectangle.
Step 3: From step 1 and 2, we can infer that the midsize white rectangle number two is above, outside and touching the midsize green rectangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | We have one box named DDD. This box contains a midsize green rectangle and covers a midsize white rectangle. The green object and another midsize white rectangle are at 12 o'clock position regarding to the midsize white rectangle. The midsize green rectangle touches midsize white rectangle number two and is at 6:00 position relative to another midsize white rectangle. Midsize white rectangle number three is inside box DDD. At 12:00 position relative to the green object there is midsize white rectangle number two. Midsize white rectangle number two is inside box DDD. Midsize white rectangle number three is at 12 o'clock position regarding to this shape. | [] | {"-1-->0": ["ntppi"], "0-->0x1": ["ntppi"], "0-->0x0": ["tppi"], "0x1-->0x0": ["above"], "0x2-->0x0": ["above"], "0x1-->0x2": ["ec"], "0x1-->0x3": ["below"], "0x3-->0": ["ntpp"], "0x2-->0x1": ["above"], "0x2-->0": ["ntpp"], "0x3-->0x2": ["above"]} | [{"head": "0x2", "tail": "0x1", "context_rel": {"above": {"phrase": "at 12:00 position relative to"}}, "inv_context_rel": {"ec": {"phrase": "touches"}}, "inferred_rel": {"ec": {}}}] | [
"0x2",
"0x1"
] | Where is midsize white rectangle number two regarding the green thing? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 37",
"seed_id: 18",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "0x0": "midsize white rectangle number one", "0x1": "midsize green rectangle", "0x2": "midsize white rectangle number two", "0x3": "midsize white rectangle number three"} | train_stepgame/2955-2 | 6 | 5 |
||
[] | [
"above"
] | 2 | 2 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the yellow square is above, outside and touching the medium blue square number one.
Step 2: It is given that the medium blue square number two is below the medium blue square number one.
Step 3: It is given that the medium blue square number one is outside and touching the medium blue square number two.
Step 4: From step 2 and 3, it can be inferred that the medium blue square number one is above, outside and touching the medium blue square number two.
Step 5: From step 1 and 4, we can say that the yellow square is above the medium blue square number two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There is one block named AAA. A yellow square is within block AAA. Another medium blue square is at 6:00 position regarding to a medium blue square. At 12:00 position regarding to and touches medium blue square number one is the yellow square. Block AAA contains medium blue square number one. This block covers medium blue square number two. Medium blue square number one touches medium blue square number two. | [] | {"0-->-1": ["ntpp"], "0x2-->0": ["ntpp"], "0x0-->0x1": ["below"], "0x2-->0x1": ["above", "ec"], "0-->0x1": ["ntppi"], "0-->0x0": ["tppi"], "0x1-->0x0": ["ec"]} | [{"head": "0x2", "tail": "0x1", "context_rel": {"above": {"phrase": "at 12:00 position regarding to"}, "ec": {"phrase": "touches"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x1", "tail": "0x0", "context_rel": {"ec": {"phrase": "touches"}}, "inv_context_rel": {"below": {"phrase": "at 6:00 position regarding to"}}, "inferred_rel": {"above": {}}}, {"head": "0x2", "tail": "0x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"above": {}}}] | [
"0x2",
"0x0"
] | What is the position of the yellow thing relative to medium blue square number two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 15",
"seed_id: 19",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "medium blue square number two", "0x1": "medium blue square number one", "0x2": "yellow square"} | train_clock/3255-2 | 5 | 4 |
||
[] | [
"contains"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | Step 1: From the context, the yellow square is inside the block AAA.
Step 2: From step 1, it can be inferred that the block AAA contains the yellow square. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There is one block named AAA. A yellow square is within block AAA. Another medium blue square is at 6:00 position regarding to a medium blue square. At 12:00 position regarding to and touches medium blue square number one is the yellow square. Block AAA contains medium blue square number one. This block covers medium blue square number two. Medium blue square number one touches medium blue square number two. | [] | {"0-->-1": ["ntpp"], "0x2-->0": ["ntpp"], "0x0-->0x1": ["below"], "0x2-->0x1": ["above", "ec"], "0-->0x1": ["ntppi"], "0-->0x0": ["tppi"], "0x1-->0x0": ["ec"]} | [{"head": "0", "tail": "0x2", "context_rel": {}, "inv_context_rel": {"ntpp": {"phrase": "within"}}, "inferred_rel": {"ntppi": {}}}] | [
"0",
"0x2"
] | Where is AAA relative to the yellow object? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 15",
"seed_id: 19",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "medium blue square number two", "0x1": "medium blue square number one", "0x2": "yellow square"} | train_clock/3255-2 | 5 | 5 |
||
[] | [
"below",
"outside and touching"
] | 2 | 1 | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the yellow square is above, outside and touching the medium blue square number one.
Step 2: From step 1, we can infer that the medium blue square number one is below, outside and touching the yellow square. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There is one block named AAA. A yellow square is within block AAA. Another medium blue square is at 6:00 position regarding to a medium blue square. At 12:00 position regarding to and touches medium blue square number one is the yellow square. Block AAA contains medium blue square number one. This block covers medium blue square number two. Medium blue square number one touches medium blue square number two. | [] | {"0-->-1": ["ntpp"], "0x2-->0": ["ntpp"], "0x0-->0x1": ["below"], "0x2-->0x1": ["above", "ec"], "0-->0x1": ["ntppi"], "0-->0x0": ["tppi"], "0x1-->0x0": ["ec"]} | [{"head": "0x1", "tail": "0x2", "context_rel": {}, "inv_context_rel": {"above": {"phrase": "at 12:00 position regarding to"}, "ec": {"phrase": "touches"}}, "inferred_rel": {"below": {}, "ec": {}}}] | [
"0x1",
"0x2"
] | What is the position of medium blue square number one relative to the yellow object? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 15",
"seed_id: 19",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "medium blue square number two", "0x1": "medium blue square number one", "0x2": "yellow square"} | train_clock/3255-2 | 5 | 6 |
||
[] | [
"below",
"outside and touching"
] | 2 | 1 | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the medium blue square number one is outside and touching the medium blue square number two.
Step 2: From the context, the medium blue square number two is below the medium blue square number one.
Step 3: From step 1 and 2, it can be inferred that the medium blue square number two is below, outside and touching the medium blue square number one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There is one block named AAA. A yellow square is within block AAA. Another medium blue square is at 6:00 position regarding to a medium blue square. At 12:00 position regarding to and touches medium blue square number one is the yellow square. Block AAA contains medium blue square number one. This block covers medium blue square number two. Medium blue square number one touches medium blue square number two. | [] | {"0-->-1": ["ntpp"], "0x2-->0": ["ntpp"], "0x0-->0x1": ["below"], "0x2-->0x1": ["above", "ec"], "0-->0x1": ["ntppi"], "0-->0x0": ["tppi"], "0x1-->0x0": ["ec"]} | [{"head": "0x0", "tail": "0x1", "context_rel": {"below": {"phrase": "at 6:00 position regarding to"}}, "inv_context_rel": {"ec": {"phrase": "touches"}}, "inferred_rel": {"ec": {}}}] | [
"0x0",
"0x1"
] | What is the position of medium blue square number two relative to medium blue square number one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 15",
"seed_id: 19",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "block AAA", "0x0": "medium blue square number two", "0x1": "medium blue square number one", "0x2": "yellow square"} | train_clock/3255-2 | 5 | 7 |
||
[
"Region|RCC8|NTPP"
] | [
"above",
"in front",
"far",
"outside"
] | 3 | 3 | [
0,
0,
1,
0,
0,
1,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the box one contains the small yellow apple of box one.
Step 2: From step 1, it can be inferred that the small yellow apple of box one is inside the box one.
Step 3: It is given that the box two is behind and outside the box one.
Step 4: From the context, the box one is above and far from the box two.
Step 5: From step 3 and 4, it can be inferred that the box one is outside, above, in front and far from the box two.
Step 6: From step 2 and 5, we can infer that the small yellow apple of box one is above, outside, in front and far from the box two.
Step 7: It is given that the yellow melon is inside and touching the box two.
Step 8: From step 7, we can infer that the box two contains and touches the yellow melon.
Step 9: From step 6 and 8, we can say that the small yellow apple of box one is above, outside, in front and far from the yellow melon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one contains a small yellow apple. Above and farther from another box called two is this box. Box two is behind and disconnected from this box. A yellow melon is in front of and a small yellow apple is behind a medium orange watermelon which is inside this box. The melon is inside and touching box two. The small yellow apple is inside and touching box two. Below the thing which was in front of the medium orange watermelon is this fruit. | [] | {"0-->0x0": ["ntppi"], "0-->1": ["above", "far"], "1-->0": ["behind", "dc"], "1x0-->1": ["ntpp"], "1x1-->1x0": ["front"], "1x2-->1x0": ["behind"], "1x1-->1": ["tpp"], "1x2-->1": ["tpp"], "1x2-->1x1": ["below"]} | [{"head": "0x0", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {"above": {"phrase": "above"}, "far": {"phrase": "farther from"}}, "inv_context_rel": {"behind": {"phrase": "behind"}, "dc": {"phrase": "disconnected from"}}, "inferred_rel": {"front": {}, "dc": {}}}, {"head": "0x0", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"above": {}, "front": {}, "far": {}, "dc": {}}}, {"head": "1", "tail": "1x1", "context_rel": {}, "inv_context_rel": {"tpp": {"phrase": "inside and touching"}}, "inferred_rel": {"tppi": {}}}, {"head": "0x0", "tail": "1x1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"above": {}, "front": {}, "far": {}, "dc": {}}}] | [
"0x0",
"1x1"
] | What is the position of the small yellow apple in box one relative to the melon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 25",
"seed_id: 21",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "small yellow apple of box one", "1x0": "medium orange watermelon", "1x1": "yellow melon", "1x2": "small yellow apple of box two"} | train_simple/1745-1 | 6 | 4 |
||
[] | [
"below",
"behind",
"far",
"outside"
] | 2 | 2 | [
0,
0,
0,
1,
1,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the medium orange watermelon is inside the box two.
Step 2: It is given that the box one is above and far from the box two.
Step 3: From the context, the box two is behind and outside the box one.
Step 4: From step 2 and 3, it can be inferred that the box two is below, behind, outside and far from the box one.
Step 5: From step 1 and 4, we can say that the medium orange watermelon is below, behind, outside and far from the box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one contains a small yellow apple. Above and farther from another box called two is this box. Box two is behind and disconnected from this box. A yellow melon is in front of and a small yellow apple is behind a medium orange watermelon which is inside this box. The melon is inside and touching box two. The small yellow apple is inside and touching box two. Below the thing which was in front of the medium orange watermelon is this fruit. | [] | {"0-->0x0": ["ntppi"], "0-->1": ["above", "far"], "1-->0": ["behind", "dc"], "1x0-->1": ["ntpp"], "1x1-->1x0": ["front"], "1x2-->1x0": ["behind"], "1x1-->1": ["tpp"], "1x2-->1": ["tpp"], "1x2-->1x1": ["below"]} | [{"head": "1x0", "tail": "1", "context_rel": {"ntpp": {"phrase": "inside"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "1", "tail": "0", "context_rel": {"behind": {"phrase": "behind"}, "dc": {"phrase": "disconnected from"}}, "inv_context_rel": {"above": {"phrase": "above"}, "far": {"phrase": "farther from"}}, "inferred_rel": {"below": {}, "far": {}}}, {"head": "1x0", "tail": "0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}, "behind": {}, "far": {}, "dc": {}}}] | [
"1x0",
"0"
] | Where is the orange fruit relative to box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 25",
"seed_id: 21",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "small yellow apple of box one", "1x0": "medium orange watermelon", "1x1": "yellow melon", "1x2": "small yellow apple of box two"} | train_simple/1745-1 | 6 | 5 |
||
[
"Region|RCC8|NTPP"
] | [
"above",
"in front",
"far",
"outside"
] | 3 | 2 | [
0,
0,
1,
0,
0,
1,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box one contains the small yellow apple of box one.
Step 2: From step 1, we can say that the small yellow apple of box one is inside the box one.
Step 3: From the context, the box two is behind and outside the box one.
Step 4: It is given that the box one is above and far from the box two.
Step 5: From step 3 and 4, we can infer that the box one is outside, above, in front and far from the box two.
Step 6: From step 2 and 5, we can say that the small yellow apple of box one is above, outside, in front and far from the box two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one contains a small yellow apple. Above and farther from another box called two is this box. Box two is behind and disconnected from this box. A yellow melon is in front of and a small yellow apple is behind a medium orange watermelon which is inside this box. The melon is inside and touching box two. The small yellow apple is inside and touching box two. Below the thing which was in front of the medium orange watermelon is this fruit. | [] | {"0-->0x0": ["ntppi"], "0-->1": ["above", "far"], "1-->0": ["behind", "dc"], "1x0-->1": ["ntpp"], "1x1-->1x0": ["front"], "1x2-->1x0": ["behind"], "1x1-->1": ["tpp"], "1x2-->1": ["tpp"], "1x2-->1x1": ["below"]} | [{"head": "0x0", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {"above": {"phrase": "above"}, "far": {"phrase": "farther from"}}, "inv_context_rel": {"behind": {"phrase": "behind"}, "dc": {"phrase": "disconnected from"}}, "inferred_rel": {"front": {}, "dc": {}}}, {"head": "0x0", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"above": {}, "front": {}, "far": {}, "dc": {}}}] | [
"0x0",
"1"
] | Where is the small yellow apple in box one regarding box two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 25",
"seed_id: 21",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "small yellow apple of box one", "1x0": "medium orange watermelon", "1x1": "yellow melon", "1x2": "small yellow apple of box two"} | train_simple/1745-1 | 6 | 6 |
||
[] | [
"below",
"behind",
"far",
"outside"
] | 2 | 1 | [
0,
0,
0,
1,
1,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the box one is above and far from the box two.
Step 2: It is given that the box two is behind and outside the box one.
Step 3: From step 1 and 2, it can be inferred that the box two is below, behind, outside and far from the box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one contains a small yellow apple. Above and farther from another box called two is this box. Box two is behind and disconnected from this box. A yellow melon is in front of and a small yellow apple is behind a medium orange watermelon which is inside this box. The melon is inside and touching box two. The small yellow apple is inside and touching box two. Below the thing which was in front of the medium orange watermelon is this fruit. | [] | {"0-->0x0": ["ntppi"], "0-->1": ["above", "far"], "1-->0": ["behind", "dc"], "1x0-->1": ["ntpp"], "1x1-->1x0": ["front"], "1x2-->1x0": ["behind"], "1x1-->1": ["tpp"], "1x2-->1": ["tpp"], "1x2-->1x1": ["below"]} | [{"head": "1", "tail": "0", "context_rel": {"behind": {"phrase": "behind"}, "dc": {"phrase": "disconnected from"}}, "inv_context_rel": {"above": {"phrase": "above"}, "far": {"phrase": "farther from"}}, "inferred_rel": {"below": {}, "far": {}}}] | [
"1",
"0"
] | Where is box two relative to box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 25",
"seed_id: 21",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "small yellow apple of box one", "1x0": "medium orange watermelon", "1x1": "yellow melon", "1x2": "small yellow apple of box two"} | train_simple/1745-1 | 6 | 7 |
||
[] | [
"below",
"in front"
] | 2 | 3 | [
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box EEE contains and touches the midsize green rectangle.
Step 2: From step 1, it can be inferred that the midsize green rectangle is inside and touching the box EEE.
Step 3: From the context, the box DDD is behind the box EEE.
Step 4: It is given that the box EEE is below and near the box DDD.
Step 5: From step 3 and 4, we can infer that the box EEE is below, near and in front of the box DDD.
Step 6: From step 2 and 5, it can be inferred that the midsize green rectangle is below and in front of the box DDD.
Step 7: It is given that the midsize white rectangle is inside and touching the box DDD.
Step 8: From step 7, we can say that the box DDD contains and touches the midsize white rectangle.
Step 9: From step 6 and 8, we can infer that the midsize green rectangle is below and in front of the midsize white rectangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | We have two boxes, called DDD and EEE. Behind box EEE is box DDD. Under and close to this box is box EEE. A midsize white rectangle is inside and touching box DDD. Box EEE covers a midsize green rectangle. | [] | {"-1-->0": ["ntppi"], "-1-->1": ["ntppi"], "0-->1": ["behind"], "1-->0": ["below", "near"], "0x0-->0": ["tpp"], "1-->1x0": ["tppi"]} | [{"head": "1x0", "tail": "1", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}, {"head": "1", "tail": "0", "context_rel": {"below": {"phrase": "under"}, "near": {"phrase": "close to"}}, "inv_context_rel": {"behind": {"phrase": "behind"}}, "inferred_rel": {"front": {}}}, {"head": "1x0", "tail": "0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}, "front": {}}}, {"head": "0", "tail": "0x0", "context_rel": {}, "inv_context_rel": {"tpp": {"phrase": "inside and touching"}}, "inferred_rel": {"tppi": {}}}, {"head": "1x0", "tail": "0x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"below": {}, "front": {}}}] | [
"1x0",
"0x0"
] | What is the position of the green thing relative to the white object? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 60",
"seed_id: 22",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "0x0": "midsize white rectangle", "1x0": "midsize green rectangle"} | train_clock/2151-0 | 5 | 3 |
||
[] | [
"above",
"behind"
] | 2 | 2 | [
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box EEE is below and near the box DDD.
Step 2: It is given that the box DDD is behind the box EEE.
Step 3: From step 1 and 2, we can infer that the box DDD is above, near and behind the box EEE.
Step 4: From the context, the box EEE contains and touches the midsize green rectangle.
Step 5: From step 3 and 4, we can say that the box DDD is above and behind the midsize green rectangle. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | We have two boxes, called DDD and EEE. Behind box EEE is box DDD. Under and close to this box is box EEE. A midsize white rectangle is inside and touching box DDD. Box EEE covers a midsize green rectangle. | [] | {"-1-->0": ["ntppi"], "-1-->1": ["ntppi"], "0-->1": ["behind"], "1-->0": ["below", "near"], "0x0-->0": ["tpp"], "1-->1x0": ["tppi"]} | [{"head": "0", "tail": "1", "context_rel": {"behind": {"phrase": "behind"}}, "inv_context_rel": {"below": {"phrase": "under"}, "near": {"phrase": "close to"}}, "inferred_rel": {"above": {}, "near": {}}}, {"head": "1", "tail": "1x0", "context_rel": {"tppi": {"phrase": "covers"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0", "tail": "1x0", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"above": {}, "behind": {}}}] | [
"0",
"1x0"
] | Where is DDD relative to the midsize green rectangle? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 60",
"seed_id: 22",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "0x0": "midsize white rectangle", "1x0": "midsize green rectangle"} | train_clock/2151-0 | 5 | 4 |
||
[] | [
"above",
"behind",
"near"
] | 2 | 1 | [
0,
0,
1,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the box EEE is below and near the box DDD.
Step 2: It is given that the box DDD is behind the box EEE.
Step 3: From step 1 and 2, we can say that the box DDD is above, near and behind the box EEE. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | We have two boxes, called DDD and EEE. Behind box EEE is box DDD. Under and close to this box is box EEE. A midsize white rectangle is inside and touching box DDD. Box EEE covers a midsize green rectangle. | [] | {"-1-->0": ["ntppi"], "-1-->1": ["ntppi"], "0-->1": ["behind"], "1-->0": ["below", "near"], "0x0-->0": ["tpp"], "1-->1x0": ["tppi"]} | [{"head": "0", "tail": "1", "context_rel": {"behind": {"phrase": "behind"}}, "inv_context_rel": {"below": {"phrase": "under"}, "near": {"phrase": "close to"}}, "inferred_rel": {"above": {}, "near": {}}}] | [
"0",
"1"
] | What is the position of box DDD relative to EEE? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 60",
"seed_id: 22",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "0x0": "midsize white rectangle", "1x0": "midsize green rectangle"} | train_clock/2151-0 | 5 | 5 |
||
[] | [
"inside and touching"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | Step 1: It is given that the box one contains and touches the medium green apple number one.
Step 2: From step 1, it can be inferred that the medium green apple number one is inside and touching the box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box named one covers a medium green apple. Another medium green apple is in this box. Above medium green apple number one there is this fruit. | [] | {"0-->0x0": ["tppi"], "0x1-->0": ["ntpp"], "0x1-->0x0": ["above"]} | [{"head": "0x0", "tail": "0", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}] | [
"0x0",
"0"
] | Where is medium green apple number one relative to box? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 0",
"seed_id: 23",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "0x0": "medium green apple number one", "0x1": "medium green apple number two"} | train_simple/3680-1 | 3 | 3 |
||
[] | [
"left"
] | 2 | 2 | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the big green watermelon number two is right of the small yellow melon.
Step 2: From step 1, we can say that the small yellow melon is left of the big green watermelon number two.
Step 3: It is given that the big orange apple is below and right of the big green watermelon number two.
Step 4: From step 3, we can say that the big green watermelon number two is above and left of the big orange apple.
Step 5: From step 2 and 4, we can infer that the small yellow melon is left of the big orange apple. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There are two boxes, called one and two. Covered by box two there is box one. A small yellow melon and a big orange apple are inside and touching box one. Box one covers a big green watermelon. Another big green watermelon is inside this box. To the right-hand side of the small thing there is big green watermelon number two. The apple and big green watermelon number one are below and on the right side of this fruit. Below big green watermelon number one is the orange fruit. Below big green watermelon number one there is the small thing. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["tpp"], "0x3-->0": ["tpp"], "0x2-->0": ["tpp"], "0-->0x1": ["tppi"], "0x0-->0": ["ntpp"], "0x0-->0x3": ["right"], "0x2-->0x0": ["below", "right"], "0x1-->0x0": ["below", "right"], "0x2-->0x1": ["below"], "0x3-->0x1": ["below"]} | [{"head": "0x3", "tail": "0x0", "context_rel": {}, "inv_context_rel": {"right": {"phrase": "to the right-hand side of"}}, "inferred_rel": {"left": {}}}, {"head": "0x0", "tail": "0x2", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "below"}, "right": {"phrase": "on the right side of"}}, "inferred_rel": {"above": {}, "left": {}}}, {"head": "0x3", "tail": "0x2", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"left": {}}}] | [
"0x3",
"0x2"
] | Where is the yellow thing relative to the apple? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 77",
"seed_id: 24",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "big green watermelon number two", "0x1": "big green watermelon number one", "0x2": "big orange apple", "0x3": "small yellow melon"} | train/1519-0 | 7 | 4 |
||
[] | [
"inside and touching"
] | 2 | 1 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | Step 1: It is given that the box one contains and touches the big green watermelon number one.
Step 2: From step 1, it can be inferred that the big green watermelon number one is inside and touching the box one. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There are two boxes, called one and two. Covered by box two there is box one. A small yellow melon and a big orange apple are inside and touching box one. Box one covers a big green watermelon. Another big green watermelon is inside this box. To the right-hand side of the small thing there is big green watermelon number two. The apple and big green watermelon number one are below and on the right side of this fruit. Below big green watermelon number one is the orange fruit. Below big green watermelon number one there is the small thing. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["tpp"], "0x3-->0": ["tpp"], "0x2-->0": ["tpp"], "0-->0x1": ["tppi"], "0x0-->0": ["ntpp"], "0x0-->0x3": ["right"], "0x2-->0x0": ["below", "right"], "0x1-->0x0": ["below", "right"], "0x2-->0x1": ["below"], "0x3-->0x1": ["below"]} | [{"head": "0x1", "tail": "0", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}] | [
"0x1",
"0"
] | Where is big green watermelon number one relative to box one? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 77",
"seed_id: 24",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "big green watermelon number two", "0x1": "big green watermelon number one", "0x2": "big orange apple", "0x3": "small yellow melon"} | train/1519-0 | 7 | 5 |
||
[] | [
"right",
"above"
] | 2 | 2 | [
0,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the big green watermelon number one is below and right of the big green watermelon number two.
Step 2: From step 1, we can say that the big green watermelon number two is above and left of the big green watermelon number one.
Step 3: From the context, the small yellow melon is below the big green watermelon number one.
Step 4: From step 3, we can say that the big green watermelon number one is above the small yellow melon.
Step 5: It is given that the big green watermelon number two is right of the small yellow melon.
Step 6: From step 5, we can infer that the big green watermelon number two is above and right of the small yellow melon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There are two boxes, called one and two. Covered by box two there is box one. A small yellow melon and a big orange apple are inside and touching box one. Box one covers a big green watermelon. Another big green watermelon is inside this box. To the right-hand side of the small thing there is big green watermelon number two. The apple and big green watermelon number one are below and on the right side of this fruit. Below big green watermelon number one is the orange fruit. Below big green watermelon number one there is the small thing. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["tpp"], "0x3-->0": ["tpp"], "0x2-->0": ["tpp"], "0-->0x1": ["tppi"], "0x0-->0": ["ntpp"], "0x0-->0x3": ["right"], "0x2-->0x0": ["below", "right"], "0x1-->0x0": ["below", "right"], "0x2-->0x1": ["below"], "0x3-->0x1": ["below"]} | [{"head": "0x0", "tail": "0x1", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "below"}, "right": {"phrase": "on the right side of"}}, "inferred_rel": {"above": {}, "left": {}}}, {"head": "0x1", "tail": "0x3", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "below"}}, "inferred_rel": {"above": {}}}, {"head": "0x0", "tail": "0x3", "context_rel": {"right": {"phrase": "to the right-hand side of"}}, "inv_context_rel": {}, "inferred_rel": {"above": {}}}] | [
"0x0",
"0x3"
] | Where is big green watermelon number two relative to the small thing? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 77",
"seed_id: 24",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "big green watermelon number two", "0x1": "big green watermelon number one", "0x2": "big orange apple", "0x3": "small yellow melon"} | train/1519-0 | 7 | 6 |
||
[] | [
"inside"
] | 2 | 2 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: It is given that the big green watermelon number two is inside the box one.
Step 2: From the context, the box one is inside and touching the box two.
Step 3: From step 1 and 2, we can say that the big green watermelon number two is inside the box two. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | There are two boxes, called one and two. Covered by box two there is box one. A small yellow melon and a big orange apple are inside and touching box one. Box one covers a big green watermelon. Another big green watermelon is inside this box. To the right-hand side of the small thing there is big green watermelon number two. The apple and big green watermelon number one are below and on the right side of this fruit. Below big green watermelon number one is the orange fruit. Below big green watermelon number one there is the small thing. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0-->1": ["tpp"], "0x3-->0": ["tpp"], "0x2-->0": ["tpp"], "0-->0x1": ["tppi"], "0x0-->0": ["ntpp"], "0x0-->0x3": ["right"], "0x2-->0x0": ["below", "right"], "0x1-->0x0": ["below", "right"], "0x2-->0x1": ["below"], "0x3-->0x1": ["below"]} | [{"head": "0x0", "tail": "0", "context_rel": {"ntpp": {"phrase": "inside"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0", "tail": "1", "context_rel": {"tpp": {"phrase": "covered by"}}, "inv_context_rel": {}, "inferred_rel": {}}, {"head": "0x0", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"ntpp": {}}}] | [
"0x0",
"1"
] | Where is big green watermelon number two regarding box two? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 77",
"seed_id: 24",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "1": "box two", "0x0": "big green watermelon number two", "0x1": "big green watermelon number one", "0x2": "big orange apple", "0x3": "small yellow melon"} | train/1519-0 | 7 | 7 |
||
[] | [
"above"
] | 2 | 1 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the medium orange watermelon is below the big green watermelon.
Step 2: From step 1, it can be inferred that the big green watermelon is above the medium orange watermelon. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | A box called one covers a medium yellow watermelon and a big green watermelon. A medium orange watermelon is below the big green watermelon. This thing is under and to the left of the medium yellow watermelon. The medium orange watermelon is inside and touching box one. A big yellow watermelon is covered by and a big apple is in box one. Box one contains a small green melon. The big yellow watermelon is close to the big apple and is under the medium yellow watermelon. To the right of the melon there is the big green watermelon. | [] | {"0-->0x3": ["tppi"], "0-->0x2": ["tppi"], "0x5-->0x2": ["below"], "0x5-->0x3": ["below", "left"], "0x5-->0": ["tpp"], "0x1-->0": ["tpp"], "0x4-->0": ["ntpp"], "0-->0x0": ["ntppi"], "0x1-->0x4": ["near"], "0x1-->0x3": ["below"], "0x2-->0x0": ["right"]} | [{"head": "0x2", "tail": "0x5", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "below"}}, "inferred_rel": {"above": {}}}] | [
"0x2",
"0x5"
] | Where is the big green watermelon regarding the medium orange watermelon? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 22",
"seed_id: 25",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box one", "0x0": "small green melon", "0x1": "big yellow watermelon", "0x2": "big green watermelon", "0x3": "medium yellow watermelon", "0x4": "big apple", "0x5": "medium orange watermelon"} | train_clock/1490-2 | 7 | 1 |
||
[] | [
"inside"
] | 2 | 2 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: It is given that the box DDD contains the tiny green oval.
Step 2: From step 1, we can say that the tiny green oval is inside the box DDD.
Step 3: From the context, the box EEE contains and touches the box DDD.
Step 4: From step 3, we can infer that the box DDD is inside and touching the box EEE.
Step 5: From step 2 and 4, we can say that the tiny green oval is inside the box EEE. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, named DDD and EEE exist in the image. A midsize green oval is inside and touching and a tiny green rectangle is in box DDD. Box DDD covers a tiny white dimond which is at 6 o'clock position regarding to a tiny green oval. Box DDD contains the tiny green oval. Near to the tiny green rectangle is the midsize shape. Box DDD contains a large white rectangle which is at 9:00 position regarding to the tiny white dimond. Box DDD covers a large green oval. Box EEE covers this box. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0x4-->0": ["tpp"], "0x2-->0": ["ntpp"], "0-->0x0": ["tppi"], "0x0-->0x5": ["below"], "0-->0x5": ["ntppi"], "0x4-->0x2": ["near"], "0-->0x1": ["ntppi"], "0x1-->0x0": ["left"], "0-->0x3": ["tppi"], "1-->0": ["tppi"]} | [{"head": "0x5", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}, {"head": "0x5", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"ntpp": {}}}] | [
"0x5",
"1"
] | Where is the tiny green oval relative to EEE? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 54",
"seed_id: 26",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "0x0": "tiny white dimond", "0x1": "large white rectangle", "0x2": "tiny green rectangle", "0x3": "large green oval", "0x4": "midsize green oval", "0x5": "tiny green oval"} | train_stepgame/2004-2 | 9 | 3 |
||
[] | [
"inside"
] | 2 | 2 | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | Step 1: It is given that the box DDD contains the large white rectangle.
Step 2: From step 1, it can be inferred that the large white rectangle is inside the box DDD.
Step 3: It is given that the box EEE contains and touches the box DDD.
Step 4: From step 3, we can infer that the box DDD is inside and touching the box EEE.
Step 5: From step 2 and 4, it can be inferred that the large white rectangle is inside the box EEE. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, named DDD and EEE exist in the image. A midsize green oval is inside and touching and a tiny green rectangle is in box DDD. Box DDD covers a tiny white dimond which is at 6 o'clock position regarding to a tiny green oval. Box DDD contains the tiny green oval. Near to the tiny green rectangle is the midsize shape. Box DDD contains a large white rectangle which is at 9:00 position regarding to the tiny white dimond. Box DDD covers a large green oval. Box EEE covers this box. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0x4-->0": ["tpp"], "0x2-->0": ["ntpp"], "0-->0x0": ["tppi"], "0x0-->0x5": ["below"], "0-->0x5": ["ntppi"], "0x4-->0x2": ["near"], "0-->0x1": ["ntppi"], "0x1-->0x0": ["left"], "0-->0x3": ["tppi"], "1-->0": ["tppi"]} | [{"head": "0x1", "tail": "0", "context_rel": {}, "inv_context_rel": {"ntppi": {"phrase": "contains"}}, "inferred_rel": {"ntpp": {}}}, {"head": "0", "tail": "1", "context_rel": {}, "inv_context_rel": {"tppi": {"phrase": "covers"}}, "inferred_rel": {"tpp": {}}}, {"head": "0x1", "tail": "1", "context_rel": {}, "inv_context_rel": {}, "inferred_rel": {"ntpp": {}}}] | [
"0x1",
"1"
] | Where is the large white rectangle regarding box EEE? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 54",
"seed_id: 26",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "0x0": "tiny white dimond", "0x1": "large white rectangle", "0x2": "tiny green rectangle", "0x3": "large green oval", "0x4": "midsize green oval", "0x5": "tiny green oval"} | train_stepgame/2004-2 | 9 | 4 |
||
[] | [
"above"
] | 2 | 1 | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | Step 1: From the context, the tiny white dimond is below the tiny green oval.
Step 2: From step 1, we can say that the tiny green oval is above the tiny white dimond. | [
"left",
"right",
"above",
"below",
"behind",
"in front",
"near",
"far",
"outside",
"outside and touching",
"partially overlapping",
"inside and touching",
"inside",
"contains and touches",
"contains",
"overlapping"
] | FR | Two boxes, named DDD and EEE exist in the image. A midsize green oval is inside and touching and a tiny green rectangle is in box DDD. Box DDD covers a tiny white dimond which is at 6 o'clock position regarding to a tiny green oval. Box DDD contains the tiny green oval. Near to the tiny green rectangle is the midsize shape. Box DDD contains a large white rectangle which is at 9:00 position regarding to the tiny white dimond. Box DDD covers a large green oval. Box EEE covers this box. | [] | {"0-->-1": ["ntpp"], "1-->-1": ["ntpp"], "0x4-->0": ["tpp"], "0x2-->0": ["ntpp"], "0-->0x0": ["tppi"], "0x0-->0x5": ["below"], "0-->0x5": ["ntppi"], "0x4-->0x2": ["near"], "0-->0x1": ["ntppi"], "0x1-->0x0": ["left"], "0-->0x3": ["tppi"], "1-->0": ["tppi"]} | [{"head": "0x5", "tail": "0x0", "context_rel": {}, "inv_context_rel": {"below": {"phrase": "at 6 o'clock position regarding to"}}, "inferred_rel": {"above": {}}}] | [
"0x5",
"0x0"
] | Where is the tiny green oval relative to the tiny white dimond? | SpaRTUN | [
"image_repo: https://github.com/lil-lab/nlvr/tree/master/nlvr/train/images",
"directory: 54",
"seed_id: 26",
"point_of_view_type: Fixed Orientation Point of View",
"relation_type: Relations Under-specified",
"entity_type: Extended Objects",
"quantitative_type: Quantitatively Un-specified"
] | {"0": "box DDD", "1": "box EEE", "0x0": "tiny white dimond", "0x1": "large white rectangle", "0x2": "tiny green rectangle", "0x3": "large green oval", "0x4": "midsize green oval", "0x5": "tiny green oval"} | train_stepgame/2004-2 | 9 | 5 |
||
[] | [
"right",
"below",
"outside"
] | 2 | 4 | [
0,
1,
0,
1,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | Step 1: It is given that the medium yellow apple is inside the box two.
Step 2: From the context, the box three is outside the box two.
Step 3: It is given that the box two is below and right of the box three.
Step 4: From step 2 and 3, it can be inferred that the box two is outside, below and right of the box three.
Step 5: From step 1 and 4, we can say that the medium yellow apple is below, outside and right of the box three.
Step 6: From the context, the box three contains and touches the box one.
Step 7: From step 5 and 6, it can be inferred that the medium yellow apple is below, outside and right of the box one.
Step 8: It is given that the box one contains and touches the medium green apple number one.
Dataset Card for Spatial Reasoning Path (SpaRP)
Dataset Summary
This dataset is a consolidation of the SpaRTUN and StepGame datasets, extended with additional spatial characterization and reasoning path generation. The methodology is explained in our ACL 2024 paper - SpaRC and SpaRP: Spatial Reasoning Characterization and Path Generation for Understanding Spatial Reasoning Capability of Large Language Models. The dataset format and fields are normalized across the two upstream benchmark datasets -- SpaRTUN and StepGame. These are primarily Spatial Question Answering datasets, enriched here with verbalized reasoning paths. The prominent fields of interest are:
- context: Textual description of the spatial context.
- question: A question about finding spatial relations between two entities in the context.
- targets: Answer i.e. list of spatial relations between the entities in the question.
- target_choices: List of all the spatial relations to choose from.
- target_scores: Binarized multi-label representation of targets over target_choices.
- reasoning: Verbalized reasoning path as deductively-verified CoT for training or few-shot examples.
Additionally, the fields with the metadata are:
- context_id: An identifier from the source data corresponding to the context. A context_id on its own is not unique: a single context can have multiple questions (e.g. in SpaRTUN), so the pair (context_id, question_id) uniquely identifies a dataset instance.
- question_id: An identifier from the source data corresponding to the question.
- num_hop: Ground truth number of hop required for the question.
- symbolic_context: A json string describing the symbolic context.
- symbolic_entity_map: A json string that maps symbolic entities to their complete descriptive names used.
- symbolic_question: A list containing head and tail entities of the question.
- symbolic_reasoning: A json string containing symbolic reasoning steps.
- num_context_entities: Number of entities in the context.
- num_question_entities: Number of entities in the question.
- question_type: Type of the question. Only FR (Find Relation) type questions are currently present.
- canary: A canary string present only in the test split.
- reasoning_types: The type of reasoning copied from the source data required for answering the question.
- spatial_types: The type of spatial relations copied from the source data required for answering the question.
- source_data: The upstream source of the data (either SpaRTUN or StepGame) for a given instance.
- comments: Additional comments specific to the upstream data.
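As a sketch of how these fields fit together, the snippet below decodes the `symbolic_*` fields of a single record. The record shown is hypothetical (values invented for illustration); real records come from `load_dataset` as shown further down. Note that the `symbolic_context` and `symbolic_entity_map` fields are JSON strings, not nested objects, and must be decoded before use.

```python
import json

# A hypothetical record with the fields described above (values invented
# for illustration; real records are produced by load_dataset).
record = {
    "context_id": "train/2477-1",
    "question_id": 4,
    "symbolic_context": '{"0-->-1": ["ntpp"], "1-->0": ["ntppi"]}',
    "symbolic_entity_map": '{"0": "box one", "1": "box two"}',
    "symbolic_question": ["0x0", "1"],
}

# The symbolic_* fields are JSON strings; decode them before use.
relations = json.loads(record["symbolic_context"])
entity_map = json.loads(record["symbolic_entity_map"])

# (context_id, question_id) uniquely identifies a dataset instance.
instance_key = (record["context_id"], record["question_id"])

print(instance_key)
print(entity_map["0"])  # descriptive name of symbolic entity "0"
```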
Languages
English
Additional Information
You can download the data via:
from datasets import load_dataset
dataset = load_dataset("UKPLab/sparp") # default config is "SpaRP-PS1 (SpaRTUN)"
dataset = load_dataset("UKPLab/sparp", "SpaRP-PS2 (StepGame)") # use the "SpaRP-PS2 (StepGame)" tag for the StepGame dataset
Please find more information about the code and how the data was collected on GitHub.
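Once loaded, the answer labels can be recovered from the target_scores and target_choices fields, since target_scores is a binary mask over target_choices. A minimal sketch (the helper function and the shortened choice list below are illustrative, not part of the dataset tooling):

```python
# Recover answer labels from the binarized multi-label encoding:
# target_scores[i] == 1 means target_choices[i] is part of the answer.
def decode_targets(target_choices, target_scores):
    return [c for c, s in zip(target_choices, target_scores) if s == 1]

# Shortened choice list for illustration; the dataset uses 16 relations.
choices = ["left", "right", "above", "below", "inside", "outside"]
scores = [1, 0, 1, 0, 0, 1]

print(decode_targets(choices, scores))  # ['left', 'above', 'outside']
```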
Dataset Curators
Curation is managed by our data manager at UKP.
Licensing Information
Citation Information
Please cite this data using:
@inproceedings{rizvi-2024-sparc,
title={SpaRC and SpaRP: Spatial Reasoning Characterization and Path Generation for Understanding Spatial Reasoning Capability of Large Language Models},
author={Rizvi, Md Imbesat Hassan and Zhu, Xiaodan and Gurevych, Iryna},
editor = "",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "",
doi = "",
pages = "",
}