---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: inputs
    struct:
    - name: image
      dtype:
        image:
          decode: false
    - name: question
      dtype: string
  - name: outputs
    dtype: string
  - name: meta
    struct:
    - name: id
      dtype: int32
    - name: question_type
      dtype: string
    - name: image
      struct:
      - name: synt_source
        sequence: string
      - name: type
        dtype: string
  splits:
  - name: shots
    num_bytes: 1503376
    num_examples: 10
  - name: test
    num_bytes: 257029579
    num_examples: 2300
  download_size: 257313901
  dataset_size: 258532955
configs:
- config_name: default
  data_files:
  - split: shots
    path: data/shots-*
  - split: test
    path: data/test-*
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- ru
pretty_name: ruCLEVR
size_categories:
- 1K<n<10K
---

## Dataset description

### Data instance

An example instance from the dataset:

```
{
    "instruction": "Даны вопрос и картинка, необходимая для ответа на вопрос. Посмотри на изображение и дай ответ на вопрос. Ответом является одна цифра или одно слово в начальной форме.\nИзображение:<image>\nВопрос:{question}\nОтвет:",
    "inputs": {
        "image": "samples/image0123.png",
        "question": "Одинаков ли цвет большой металлической сферы и матового блока?"
    },
    "outputs": "да",
    "meta": {
        "id": 17,
        "question_type": "binary",
        "image": {
            "synt_source": ["blender"],
            "type": "generated"
        }
    }
}
```

### Prompts

For the task, 10 prompts were prepared and evenly distributed among the questions following the principle of "one prompt per question". The placeholders in curly braces in each prompt are filled from the corresponding fields of the `inputs` structure of each question (a small illustration of this filling step is sketched after the Dataset creation section below). Prompt example:

```
Даны вопрос и картинка, необходимая для ответа на вопрос. Посмотри на изображение и дай ответ на вопрос. Ответом является одна цифра или одно слово в начальной форме.
Изображение:<image>
Вопрос:{question}
Ответ:
```

### Dataset creation

To create RuCLEVR, we used two strategies: 1) generation of new samples and 2) data augmentation with color replacement. Each technique is described in more detail below.

**Generation of New Samples**: We generated new, unique images and corresponding questions from scratch, following a multi-step pipeline to ensure a controlled and comprehensive evaluation of visual reasoning. First, 3D images were automatically rendered with Blender, featuring objects with specific attributes such as shape, size, color, and material; these objects were arranged in diverse configurations to create complex scenes. Questions with the corresponding answers were then generated from predefined templates, which structure the inquiries into families such as attribute queries and comparisons. To avoid conjunction errors, we stuck to the original format and generated the questions in English, then translated them into Russian with Google Translate. After generation, we automatically filtered out incorrectly translated questions using a [model](https://huggingface.co./RussianNLP/ruRoBERTa-large-rucola) fine-tuned for the linguistic acceptability task (see the filtering sketch at the end of this section). In addition, we checked the dataset for duplicates.

**Data Augmentation with Color Replacement**: We also augmented the dataset by modifying images from the validation set of the original CLEVR. Specifically, we developed a [script](https://github.com/erkenovaj/RuCLEVR/tree/main) that systematically replaces colors in questions and images according to predefined rules, thereby creating new augmented samples. This process was initially conducted in English to avoid morphological complexities; once the questions were augmented, they were translated into Russian and verified for grammatical correctness.
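As referenced under Prompts above, here is a minimal sketch of the prompt-filling step. It is an illustration, not the benchmark's loader code: the `{question}` placeholder is filled with Python's built-in `str.format`, and how the `<image>` slot is consumed is model-specific and not shown.

```python
# Minimal sketch: filling a prompt template from the `inputs` of one question.
# Only the text placeholder is handled; the image referenced by the <image>
# slot is passed to the model separately, in a model-specific way.
instruction = (
    "Даны вопрос и картинка, необходимая для ответа на вопрос. "
    "Посмотри на изображение и дай ответ на вопрос. Ответом является "
    "одна цифра или одно слово в начальной форме."
    "\nИзображение:<image>\nВопрос:{question}\nОтвет:"
)

inputs = {
    "image": "samples/image0123.png",
    "question": "Одинаков ли цвет большой металлической сферы и матового блока?",
}

# Curly-brace placeholders are filled from the matching `inputs` fields.
prompt = instruction.format(question=inputs["question"])
print(prompt)
```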
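The acceptability filtering mentioned in the generation pipeline can be sketched as follows. This is an assumption-laden illustration rather than the authors' exact code: it assumes the linked RuCoLA model works with the standard `transformers` text-classification pipeline, and the label treated as "acceptable" should be checked against the model card.

```python
# Minimal sketch: dropping garbled machine translations with the
# linguistic-acceptability classifier linked above.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="RussianNLP/ruRoBERTa-large-rucola",
)

questions = [
    "Одинаков ли цвет большой металлической сферы и матового блока?",
    "Сколько там больших вещи слева от сфера?",  # deliberately broken translation
]

# Keep only the questions the classifier judges linguistically acceptable.
# NOTE: the label name below is an assumption; check the model's id2label mapping.
kept = [
    q
    for q, pred in zip(questions, clf(questions))
    if pred["label"] == "LABEL_1"
]
print(kept)
```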
## Evaluation

### Metrics

Metrics for aggregated evaluation of responses:

- `Exact match`: Exact match is the average of per-case scores, where a case scores 1 if the predicted string is identical to its reference string and 0 otherwise.
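For illustration, a minimal sketch of this metric is given below; the function name and signature are ours, and the benchmark's official scoring code may differ (e.g., in string normalization).

```python
# Minimal sketch of the Exact match metric described above.
def exact_match(predictions: list[str], references: list[str]) -> float:
    """Average of per-case scores: 1 if a predicted string equals its
    reference string exactly, 0 otherwise."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    scores = [int(pred == ref) for pred, ref in zip(predictions, references)]
    return sum(scores) / len(scores)


print(exact_match(["да", "3"], ["да", "4"]))  # 0.5
```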