MERA-evaluation committed on
Commit 9514cc6 · verified · 1 Parent(s): f30f4e0

Update README.md

Files changed (1)
  1. README.md +95 -0
README.md CHANGED
@@ -42,3 +42,98 @@ configs:
42
  - split: test
43
  path: data/test-*
44
  ---
45
+
46
+
47
+ # ruCLEVR
48
+
49
+
50
+ ## Task description
51
+
52
+ RuCLEVR is a Visual Question Answering (VQA) dataset inspired by the [CLEVR](https://cs.stanford.edu/people/jcjohns/clevr/) methodology and adapted for the Russian language.
53
+
54
+ RuCLEVR consists of automatically generated images of 3D objects, each characterized by attributes such as shape, size, color, and material, arranged within various scenes to form complex visual environments. The dataset includes questions based on these images, organized into specific families such as querying attributes, comparing attributes, existence, counting, and integer comparison. Each question is formulated using predefined templates to ensure consistency and variety. The set was created from scratch to prevent biases. Questions are designed to assess the models' ability to perform tasks that require accurate visual reasoning by analyzing the attributes and relationships of objects in each scene. Through this structured design, the dataset provides a controlled environment for evaluating the precise reasoning skills of models when presented with visual data.
55
+
56
+ Evaluated skills: Expert domain knowledge, Problem decomposition, Mathematical reasoning, Text recognition (OCR), Scheme recognition
57
+
58
+ Contributors: Ksenia Biryukova, Daria Chelnokova, Jamilya Erkenova, Artem Chervyakov, Maria Tikhonova
59
+
60
+
61
+ ## Motivation
62
+
63
+ The RuCLEVR dataset was created to evaluate the visual reasoning capabilities of multimodal language models, specifically in the Russian language, where there is a lack of diagnostic datasets for such tasks. It aims to assess models' abilities to reason about shapes, colors, quantities, and spatial relationships in visual scenes, moving beyond simple language understanding to test compositional reasoning. This is crucial for models that are expected to analyze visual data and perform tasks requiring logical inferences about object interactions. The dataset's design, which uses structured question families, ensures that the evaluation is comprehensive and unbiased, focusing on the models' reasoning skills rather than pattern recognition.
64
+
65
+
66
+ ## Data description
67
+
68
+ ### Data fields
69
+
70
+ Each dataset question includes data in the following fields:
71
+
72
+ - `instruction` [str] — Instruction prompt template with placeholders for the question elements.
73
+ - `inputs` — Input data that forms the task for the model. May include one or several modalities: video, audio, image, or text.
74
+ - `image` [str] — Path to the image file related to the question.
75
+ - `question` [str] — Text of the question.
76
+ - `outputs` [str] — The correct answer to the question.
77
+ - `meta` — Metadata related to the test example, not used in the question (hidden from the tested model).
78
+ - `id` [int] — Identification number of the question in the dataset.
79
+ - `question_type` [str] — Question type according to possible answers: binary, colors, count, materials, shapes, size.
80
+ - `image` — Image metadata.
81
+ - `synt_source` [list] — Sources used to generate or recreate data for the question, including names of generative models.
82
+ - `type` [str] — Image type according to the image classification for MERA datasets.
83
+
84
+
85
+ ### Data formatting example
86
+
87
+ ```json
88
+ {
89
+ "instruction": "Даны вопрос и картинка, необходимая для ответа на вопрос. Посмотри на изображение и дай ответ на вопрос. Ответом является одна цифра или одно слово в начальной форме.\nИзображение:<image>\nВопрос:{question}\nОтвет:",
90
+ "inputs": {
91
+ "image": "samples/image0123.png",
92
+ "question": "Одинаков ли цвет большой металлической сферы и матового блока?"
93
+ },
94
+ "outputs": "да",
95
+ "meta": {
96
+ "id": 17,
97
+ "question_type": "binary",
98
+ "image": {
99
+ "synt_source": [
100
+ "blender"
101
+ ],
102
+ "type": "generated"
103
+ }
104
+ }
105
+ }
106
+ ```
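
For reference, the splits configuration above indicates the data can be read with the Hugging Face `datasets` library. A minimal sketch follows, assuming a repository id along the lines of `MERA-evaluation/ruCLEVR` (the exact id is not stated in this card):

```python
# Minimal sketch: load the test split and inspect the fields described above.
# The repository id below is an assumption; substitute the actual dataset id.
from datasets import load_dataset

ds = load_dataset("MERA-evaluation/ruCLEVR", split="test")  # hypothetical id

sample = ds[0]
print(sample["instruction"])            # prompt template with placeholders
print(sample["inputs"]["question"])     # question text in Russian
print(sample["outputs"])                # reference answer
print(sample["meta"]["question_type"])  # e.g. "binary", "count", "colors"
```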
107
+
108
+
109
+ ### Prompts
110
+
111
+ For the task, 10 prompts were prepared and evenly distributed among the questions on the principle of "one prompt per question". The placeholders in curly braces in each prompt are filled in with values from the `inputs` field of each question (see the sketch after the example below).
112
+
113
+ Prompt example:
114
+
115
+ ```
116
+ Даны вопрос и картинка, необходимая для ответа на вопрос. Посмотри на изображение и дай ответ на вопрос. Ответом является одна цифра или одно слово в начальной форме.
117
+ Изображение:<image>
118
+ Вопрос:{question}
119
+ Ответ:
120
+ ```
121
+
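
As a concrete illustration, this substitution can be done with plain string replacement. A minimal sketch using the field names from the formatting example above; the `<image>` token is left in place for the model-specific image injection step:

```python
# Minimal sketch: fill the curly-brace placeholders of `instruction`
# with the text fields from `inputs`. The image itself is passed to the
# model separately; the <image> token only marks its position.
def build_prompt(sample: dict) -> str:
    prompt = sample["instruction"]
    for key, value in sample["inputs"].items():
        if key != "image":
            prompt = prompt.replace("{" + key + "}", str(value))
    return prompt

sample = {
    "instruction": "Вопрос:{question}\nОтвет:",
    "inputs": {
        "image": "samples/image0123.png",
        "question": "Одинаков ли цвет большой металлической сферы и матового блока?",
    },
}
print(build_prompt(sample))
```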
122
+
123
+ ### Dataset creation
124
+
125
+ To create RuCLEVR, we used two strategies: 1) generation of new samples and 2) data augmentation with color replacement. Each technique is described in more detail below:
126
+
127
+ **Generation of the New Samples**: We generated new, unique images and corresponding questions from scratch. This involved a multi-step pipeline designed to ensure a controlled and comprehensive evaluation of visual reasoning. First, 3D images were automatically generated with Blender, featuring objects with specific attributes such as shape, size, color, and material; these objects were arranged in diverse configurations to create complex scenes. Questions with the corresponding answers were then generated from predefined templates, which structured the inquiries into families such as attribute queries and comparisons. To avoid conjunction errors, we stuck to the original format and generated the questions in English, then translated them into Russian with Google Translate. After generation, we automatically filtered out incorrectly translated questions using a [model](https://huggingface.co/RussianNLP/ruRoBERTa-large-rucola) trained for the linguistic acceptability task. In addition, we checked the dataset for duplicates.
128
+
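
A hedged sketch of such an acceptability filter with the `transformers` pipeline and the linked model; the label semantics and any score threshold used by the authors are not specified in this card, so the snippet only prints the raw predictions:

```python
# Sketch: score machine-translated questions with the linked acceptability model.
# How labels map to "acceptable"/"unacceptable" and which cut-off was used
# are not stated here; consult the model card before filtering on them.
from transformers import pipeline

clf = pipeline("text-classification", model="RussianNLP/ruRoBERTa-large-rucola")

questions = [
    "Одинаков ли цвет большой металлической сферы и матового блока?",
    "Сколько маленьких резиновых кубов на картинке?",
]

for question, pred in zip(questions, clf(questions)):
    print(pred["label"], round(pred["score"], 3), question)
```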
129
+ **Data Augmentation with Color Replacement**: We also augmented the dataset by modifying images from the validation set of the original CLEVR. Specifically, we developed a [script](https://github.com/erkenovaj/RuCLEVR/tree/main) that systematically replaces colors in questions and images according to predefined rules, thereby creating new augmented samples. This process was initially conducted in English to avoid morphological complexities. Once the questions were augmented, they were translated into Russian and verified for grammatical correctness.
130
+
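
The exact replacement rules live in the linked script; the sketch below only illustrates the idea of a rule-based color swap applied to an English question string before translation (the color mapping is purely illustrative):

```python
import re

# Illustrative color-swap rules; the real rules are defined in the authors' script,
# and the same swap must be applied consistently to the corresponding image.
COLOR_MAP = {"red": "purple", "blue": "green", "yellow": "cyan"}

def swap_colors(question: str) -> str:
    # Replace whole color words only, leaving the rest of the question intact.
    pattern = re.compile(r"\b(" + "|".join(COLOR_MAP) + r")\b")
    return pattern.sub(lambda m: COLOR_MAP[m.group(1)], question)

print(swap_colors("What color is the big red metal sphere behind the blue cube?"))
# -> "What color is the big purple metal sphere behind the green cube?"
```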
131
+
132
+ ## Evaluation
133
+
134
+
135
+ ### Metrics
136
+
137
+ Metrics for aggregated evaluation of responses:
138
+
139
+ - `Exact match`: Exact match is the average of scores over all processed cases, where a case scores 1 if the predicted string exactly matches its reference string and 0 otherwise (see the sketch below).
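
A minimal sketch of the metric as defined above (no answer normalization is assumed here):

```python
def exact_match(predictions: list[str], references: list[str]) -> float:
    """Average of per-case scores: 1 if prediction equals reference, else 0."""
    assert len(predictions) == len(references) and predictions
    return sum(p == r for p, r in zip(predictions, references)) / len(predictions)

print(exact_match(["да", "3"], ["да", "4"]))  # -> 0.5
```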