---
license: cc-by-nc-sa-4.0
---

# E.T. Bench

[arXiv](https://arxiv.org/abs/2409.18111) | [Project Page](https://polyu-chenlab.github.io/etbench) | [GitHub](https://github.com/PolyU-ChenLab/ETBench)

E.T. Bench is a large-scale, high-quality benchmark for open-ended, event-level video understanding. Organized under a 3-level task taxonomy, it comprises 7.3K samples across 12 tasks and 7K videos (251.4 hours in total) spanning 8 domains, providing a comprehensive evaluation of 4 essential capabilities for time-sensitive video understanding.

## 📦 Data Preparation

You may download the evaluation kit for E.T. Bench using the following commands.

```
git lfs install
git clone git@hf.co:datasets/PolyU-ChenLab/ETBench
```
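
If you prefer a Python workflow, the same files can be fetched with `huggingface_hub` instead of a manual clone. A minimal sketch, assuming the package is installed and using a hypothetical `ETBench` target directory:

```python
# Sketch: download the E.T. Bench evaluation kit via the Hugging Face Hub API.
# Assumes `pip install huggingface_hub`; `local_dir` is an arbitrary choice.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="PolyU-ChenLab/ETBench",
    repo_type="dataset",
    local_dir="ETBench",
)
```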

Then, enter the directory and extract the files in the `videos` folder by running:

```
cd ETBench
for path in videos/*.tar.gz; do tar -xvf $path -C videos; done
```
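
If GNU tar is not available (e.g., on Windows), a rough Python equivalent of the loop above, assuming the same `videos/` layout:

```python
# Sketch: extract every videos/*.tar.gz archive into the videos folder,
# mirroring the shell loop above.
import glob
import tarfile

for path in glob.glob("videos/*.tar.gz"):
    with tarfile.open(path, "r:gz") as tar:
        tar.extractall("videos")
```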

**[Optional]** You may also want to compress the videos (to lower FPS & resolution) for faster I/O.

```
python compress_videos.py --fps 3 --size 224
```

<details>
<summary><i>Arguments of <code>compress_videos.py</code></i></summary>

- `--src_dir` Path to the videos folder (Default: `videos`)
- `--tgt_dir` Path to the output folder (Default: `videos_compressed`)
- `--fps` The target FPS for output (Default: `3`)
- `--size` The length of the shortest side of output frames (Default: `224`)
- `--workers` Number of workers to use (Default: `None`, i.e., the number of CPU cores)

</details>

This compresses all videos to `3 FPS` with the shortest side resized to `224` pixels, and removes the audio track as well. The output videos are saved in the `videos_compressed` folder with the same directory structure as `videos`.
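
For reference, the per-file operation roughly amounts to dropping audio, resampling to the target FPS, and resizing the shortest side. The sketch below illustrates this with a direct ffmpeg call; it is only an approximation, and the actual `compress_videos.py` may use different codecs or filters.

```python
# Sketch of one file's compression: strip audio, resample to `fps`, and scale
# so the shortest side is `size` pixels. Assumes ffmpeg is on PATH; this is an
# illustration, not the actual compress_videos.py implementation.
import subprocess

def compress(src: str, dst: str, fps: int = 3, size: int = 224) -> None:
    scale = f"scale='if(lt(iw,ih),{size},-2)':'if(lt(iw,ih),-2,{size})'"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-an", "-r", str(fps), "-vf", scale, dst],
        check=True,
    )
```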

## 🚀 Getting Started

The folder for E.T. Bench is organized as follows.

```
ETBench
├─ annotations
│  ├─ txt (annotations for sub-tasks, with timestamps as text)
│  ├─ vid (annotations for sub-tasks, with timestamps as <vid> tokens)
│  ├─ etbench_txt_v1.0.json (merged annotations in `txt` folder)
│  └─ etbench_vid_v1.0.json (merged annotations in `vid` folder)
├─ evaluation
│  ├─ compute_metrics.py (script for computing metrics)
│  ├─ requirements.txt (requirements for the evaluation script)
│  └─ subset.json (IDs of the subset for evaluating commercial models)
├─ videos (raw video files)
├─ videos_compressed (compressed video files)
└─ compress_videos.py (script for compressing videos)
```

For full evaluation on all 7,289 samples, you only need to use either of the following annotation files.

- `etbench_txt_v1.0.json` - for models representing timestamps in pure text, e.g., '2.5 - 4.8 seconds'
- `etbench_vid_v1.0.json` - for models using special tokens for timestamps, e.g., \<vid\> token in E.T. Chat

Each JSON file contains a list of dicts with the following entries.

```python
{
  "version": 1.0,                       # annotation version
  "idx": 0,                             # sample index
  "task": "tvg",                        # task
  "source": "qvhighlights",             # source dataset
  "video": "qvhighlights/example.mp4",  # path to video
  "duration": 35.0,                     # video duration (seconds)
  "src": [1.2, 15.0],                   # [optional] timestamps (seconds) in model inputs
  "tgt": [[15.0, 31.0], [31.4, 34.9]],  # [optional] timestamps (seconds) in model outputs
  "p": 0,                               # [optional] index of correct answer (for RAR, ECA, RVQ, GVQ)
  "o": ["a", "b", "c", "d"],            # [optional] answer candidates (for RAR, ECA, RVQ, GVQ)
  "g": ["a cat...", "it then..."],      # [optional] ground truth captions (for DVC, SLC)
  "q": "...",                           # model input prompt
  "a": "..."                            # [to be added by the user] model response
}
```

For each sample, you can simply load the corresponding video and send it, together with the prompt in `q`, to the model. In `vid`-style annotations, all the timestamps in `q` have been replaced with `<vid>`, and their original values can be found in `src`.

After obtaining model outputs, place the raw text response of each sample into its `a` entry and dump the entire list to a new JSON file. ***Please make sure the dumped file has exactly the same structure as the annotation file, except that each sample has a new `a` entry storing the model output.***
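
A minimal sketch of this loop, using a hypothetical `run_model(video_path, prompt)` wrapper around your model's inference and a hypothetical output path:

```python
# Sketch: run a model over every sample and write the responses back into `a`.
# `run_model` is a hypothetical stand-in for your own inference call; video
# paths in the annotations are assumed to be relative to the videos folder.
import json
import os

with open("annotations/etbench_txt_v1.0.json") as f:
    samples = json.load(f)

for sample in samples:
    video_path = os.path.join("videos_compressed", sample["video"])
    sample["a"] = run_model(video_path, sample["q"])  # raw text response

with open("etbench_txt_predictions.json", "w") as f:
    json.dump(samples, f)
```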

Please refer to the [inference script](../etchat/eval/infer_etbench.py) of E.T. Chat as an example.

## 🔮 Compute Metrics

Run the following command to install the requirements for the evaluation script.

```
pip install -r evaluation/requirements.txt
```

After that, compute the metrics by running

```
python evaluation/compute_metrics.py <path-to-the-dumped-json>

# In case you want to evaluate on the subset with 470 samples (same as the commercial models in Table 1 of the paper)
# python evaluation/compute_metrics.py <path-to-the-dumped-json> --subset
```

The evaluation log and computed metrics will be saved in `metrics.log` and `metrics.json`, respectively.

## 📖 Citation

Please kindly cite our paper if you find this project helpful.

```
@inproceedings{liu2024etbench,
  title={E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding},
  author={Liu, Ye and Ma, Zongyang and Qi, Zhongang and Wu, Yang and Chen, Chang Wen and Shan, Ying},
  booktitle={Neural Information Processing Systems (NeurIPS)},
  year={2024}
}
```