---
dataset_info:
- config_name: train
  features:
  - name: video_path 
    dtype: string
  - name: internal_id
    dtype: string  
  - name: prompt
    dtype: string
  - name: url
    dtype: string
  - name: annotation
    struct:
    - name: alignment
      dtype: int64
      range: [1,5] 
    - name: composition  
      dtype: int64
      range: [1,3]
    - name: focus
      dtype: int64
      range: [1,3]
    - name: camera movement
      dtype: int64
      range: [1,3]
    - name: color
      dtype: int64
      range: [1,5]
    - name: lighting accurate
      dtype: int64
      range: [1,4]
    - name: lighting aes
      dtype: int64
      range: [1,5]
    - name: shape at beginning
      dtype: int64
      range: [0,3]
    - name: shape throughout
      dtype: int64
      range: [0,4]
    - name: object motion dynamic
      dtype: int64
      range: [1,5]
    - name: camera motion dynamic
      dtype: int64
      range: [1,5]
    - name: movement smoothness 
      dtype: int64
      range: [0,4]
    - name: movement reality
      dtype: int64
      range: [0,4]
    - name: clear
      dtype: int64
      range: [1,5]
    - name: image quality stability
      dtype: int64
      range: [1,5]
    - name: camera stability
      dtype: int64
      range: [1,3]
    - name: detail refinement
      dtype: int64
      range: [1,5]
    - name: letters
      dtype: int64
      range: [1,4]
    - name: physics law
      dtype: int64
      range: [1,5]
    - name: unsafe type
      dtype: int64
      range: [1,5]
    - name: safety
      dtype: int64
      range: [1,5]
  - name: meta_result
    sequence: 
      dtype: int64
  - name: meta_mask
    sequence:
      dtype: int64
  splits:
  - name: train
    num_examples: 40743

- config_name: regression
  features:
  - name: internal_id
    dtype: string
  - name: prompt
    dtype: string 
  - name: standard_answer
    dtype: string
  - name: video1_path
    dtype: string
  - name: video2_path
    dtype: string
  splits:
  - name: regression
    num_examples: 1795

- config_name: monetbench
  features:
  - name: internal_id
    dtype: string
  - name: prompt
    dtype: string 
  - name: standard_answer
    dtype: string
  - name: video1_path
    dtype: string
  - name: video2_path
    dtype: string
  splits:
  - name: monetbench
    num_examples: 1000

configs:
- config_name: train
  data_files:
  - split: train
    path: train/*.parquet
- config_name: regression
  data_files:
  - split: regression
    path: regression/*.parquet
- config_name: monetbench
  data_files:
  - split: monetbench
    path: monetbench/*.parquet

license: apache-2.0
---

# VisionRewardDB-Video

This dataset is a comprehensive collection of video evaluation data designed for multi-dimensional quality assessment of AI-generated videos. It encompasses annotations across 21 diverse aspects, including text-to-video consistency, aesthetic quality, motion dynamics, physical realism, and technical specifications. 🌟✨
[**GitHub Repository**](https://github.com/THUDM/VisionReward) 🔗

The dataset is structured to facilitate both model training and standardized evaluation:
- `Train`: A primary training set with detailed multi-dimensional annotations
- `Regression`: A regression set with paired preference data
- `MonetBench`: A benchmark test set for standardized performance evaluation

This holistic approach enables the development and validation of sophisticated video quality assessment models that can evaluate AI-generated videos across multiple critical dimensions, moving beyond simple aesthetic judgments to encompass technical accuracy, semantic consistency, and dynamic performance.
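
As a quick start, any of the configs declared in the YAML header can be loaded with the 🤗 `datasets` library. The sketch below assumes the repo id `THUDM/VisionRewardDB-Video`; adjust it to wherever this dataset is actually hosted.

```python
from datasets import load_dataset

# Config and split names follow the `configs` section of this card:
# train, regression, monetbench.
# The repo id is an assumption; replace it with this dataset's actual path.
train = load_dataset("THUDM/VisionRewardDB-Video", "train", split="train")

example = train[0]
print(example["prompt"])            # text prompt used to generate the video
print(example["annotation"])        # struct of 21 per-dimension scores
print(len(example["meta_result"]))  # binary judgments derived from those scores
```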


## Annotation Details

Each video in the dataset is annotated with the following attributes:

<table border="1" style="border-collapse: collapse; width: 100%;">
    <tr>
        <th style="padding: 8px; width: 30%;">Dimension</th>
        <th style="padding: 8px; width: 70%;">Attributes</th>
    </tr>
    <tr>
        <td style="padding: 8px;">Alignment</td>
        <td style="padding: 8px;">Alignment</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Composition</td>
        <td style="padding: 8px;">Composition</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Quality</td>
        <td style="padding: 8px;">Color; Lighting Accurate; Lighting Aes; Clear</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Fidelity</td>
        <td style="padding: 8px;">Detail Refinement; Movement Reality; Letters</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Safety</td>
        <td style="padding: 8px;">Safety</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Stability</td>
        <td style="padding: 8px;">Movement Smoothness; Image Quality Stability; Focus; Camera Movement; Camera Stability</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Preservation</td>
        <td style="padding: 8px;">Shape at Beginning; Shape throughout</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Dynamic</td>
        <td style="padding: 8px;">Object Motion dynamic; Camera Motion dynamic</td>
    </tr>
    <tr>
        <td style="padding: 8px;">Physics</td>
        <td style="padding: 8px;">Physics Law</td>
    </tr>
</table>

### Example: Camera Stability
- **3:** Very stable
- **2:** Slight shake
- **1:** Heavy shake

Note: When annotations are missing, the corresponding value will be set to **-1**.
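
Because missing annotations are encoded as **-1**, they should be masked out before aggregating scores. A minimal sketch (the `annotation` dictionary below is hypothetical example data following the struct defined in the YAML header):

```python
# Hypothetical annotation struct for one video; -1 marks a missing annotation.
annotation = {"alignment": 4, "camera stability": 3, "letters": -1, "physics law": 5}

# Keep only the dimensions that were actually annotated.
valid = {name: score for name, score in annotation.items() if score != -1}
mean_score = sum(valid.values()) / len(valid)
print(valid, mean_score)
```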

For more detailed annotation guidelines (such as the meaning of each score and the annotation rules), please refer to:

- [annotation_details](https://flame-spaghetti-eb9.notion.site/VisioinReward-Video-Annotation-Details-196a0162280e8077b1acef109b3810ff)
- [annotation_details_ch](https://flame-spaghetti-eb9.notion.site/VisionReward-Video-196a0162280e80e7806af42fc5808c99)

## Additional Feature Details
The dataset includes two special features: `annotation` and `meta_result`.

### Annotation
The `annotation` feature contains scores across 21 different dimensions of video assessment, with each dimension having its own scoring criteria as detailed above.

### Meta Result
The `meta_result` feature transforms multi-choice questions into a series of binary judgments. For example, for the `Camera Stability` dimension:

| Score | Is the camera very stable? | Is the camera not unstable? |
|-------|--------------------------|---------------------------|
| 3     | 1                        | 1                         |
| 2     | 0                        | 1                         |
| 1     | 0                        | 0                         |

- Note: When the corresponding `meta_result` value is -1 (i.e., the annotation is missing), that binary judgment should be excluded from consideration.

Each element in the binary array represents a yes/no answer to a specific aspect of the assessment. For the detailed questions corresponding to these binary judgments, please refer to the `meta_qa_en.txt` file.

### Meta Mask
The `meta_mask` feature is used for balanced sampling during model training:
- Elements with value 1 indicate that the corresponding binary judgment was used in training
- Elements with value 0 indicate that the corresponding binary judgment was ignored during training
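
Putting the two fields together, a binary judgment contributes to training only when its `meta_mask` entry is 1 and its `meta_result` entry is not -1. A minimal sketch of this filtering (the question texts and short arrays below are hypothetical; in the dataset the questions come from `meta_qa_en.txt` and the arrays have one entry per question):

```python
# Hypothetical example with 4 questions; the first two mirror the Camera Stability
# table above, the last two are made up for illustration.
questions   = ["Is the camera very stable?", "Is the camera not unstable?",
               "Is the movement smooth?", "Are the letters rendered correctly?"]
meta_result = [1, 0, -1, 1]
meta_mask   = [1, 1, 1, 0]

# A judgment is used only when it is unmasked (mask == 1) and annotated (result != -1).
training_pairs = [
    (q, bool(r))
    for q, r, m in zip(questions, meta_result, meta_mask)
    if m == 1 and r != -1
]
print(training_pairs)
# [('Is the camera very stable?', True), ('Is the camera not unstable?', False)]
```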

## Data Processing

The videos are packaged as tar.gz archives under `videos/`; unpack them first:
```bash
cd videos
tar -xvzf train.tar.gz
tar -xvzf regression.tar.gz
tar -xvzf monetbench.tar.gz
```

We provide `extract.py` for converting the `train` dataset into JSONL format. The script can optionally extract the balanced positive/negative QA pairs used in VisionReward training by processing the `meta_result` and `meta_mask` fields.

```bash
python extract.py
```
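
If you prefer to work with the parquet shards directly (for example, the `regression` preference pairs), pandas can read them without going through `extract.py`. A sketch, assuming the shards sit under the paths listed in the `configs` section of the YAML header:

```python
import glob

import pandas as pd

# Shard paths follow the `configs` section (regression/*.parquet);
# adjust if your local layout differs.
shards = sorted(glob.glob("regression/*.parquet"))
df = pd.concat((pd.read_parquet(p) for p in shards), ignore_index=True)

print(df.columns.tolist())  # internal_id, prompt, standard_answer, video1_path, video2_path
print(len(df))              # 1795 paired comparisons, per the YAML header
```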

## Citation Information
```
@misc{xu2024visionrewardfinegrainedmultidimensionalhuman,
      title={VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation}, 
      author={Jiazheng Xu and Yu Huang and Jiale Cheng and Yuanming Yang and Jiajun Xu and Yuan Wang and Wenbo Duan and Shen Yang and Qunlin Jin and Shurun Li and Jiayan Teng and Zhuoyi Yang and Wendi Zheng and Xiao Liu and Ming Ding and Xiaohan Zhang and Xiaotao Gu and Shiyu Huang and Minlie Huang and Jie Tang and Yuxiao Dong},
      year={2024},
      eprint={2412.21059},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.21059}, 
}
```