MERA-evaluation committed on
Commit 1683204 · verified · 1 parent: 3c1d352

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -61,7 +61,7 @@ RuCLEVR is a Visual Question Answering (VQA) dataset inspired by the [CLEVR](htt
 
 RuCLEVR consists of automatically generated images of 3D objects, each characterized by attributes such as shape, size, color, and material, arranged within various scenes to form complex visual environments. The dataset includes questions based on these images, organized into specific families such as querying attributes, comparing attributes, existence, counting, and integer comparison. Each question is formulated using predefined templates to ensure consistency and variety. The set was created from scratch to prevent biases. Questions are designed to assess the models' ability to perform tasks that require accurate visual reasoning by analyzing the attributes and relationships of objects in each scene. Through this structured design, the dataset provides a controlled environment for evaluating the precise reasoning skills of models when presented with visual data.
 
-Evaluated skills: Expert domain knowledge, Problem decomposition, Mathematical reasoning, Text recognition (OCR), Scheme recognition
+Evaluated skills: Common everyday knowledge, Spatial object relationship, Object recognition, Physical property understanding, Static counting, Comparative reasoning
 
 Contributors: Ksenia Biryukova, Daria Chelnokova, Jamilya Erkenova, Artem Chervyakov, Maria Tikhonova
 
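To make the record structure described above concrete, here is a minimal sketch of what a single RuCLEVR-style example might look like. The field names and values are illustrative assumptions, not the dataset's actual schema, and nothing here is taken from this commit.

```python
# Hypothetical illustration of one RuCLEVR-style VQA record.
# Field names and values are assumptions for clarity, not the real schema.
sample = {
    "image": "scene_000042.png",       # an automatically rendered 3D scene
    "question_family": "counting",     # one of the question families listed in the README
    "question": "Сколько больших красных металлических кубов на изображении?",
    # ("How many large red metal cubes are in the image?")
    "answer": "2",
}

# The question families named in the README, for reference.
families = [
    "query attribute",
    "compare attribute",
    "existence",
    "counting",
    "integer comparison",
]
print(sample["question_family"] in families)  # True
```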