---
license: cc-by-nc-sa-4.0
---

# E.T. Bench

<p align="center">
  <img width="700" src="https://github.com/PolyU-ChenLab/ETBench/blob/main/.github/benchmark.jpg">
</p>

E.T. Bench is a large-scale, high-quality benchmark for open-ended, event-level video understanding. Organized under a 3-level task taxonomy, it comprises 7.3K samples across 12 tasks, covering 7K videos (251.4 hours in total) from 8 domains, and provides a comprehensive evaluation of 4 essential capabilities for time-sensitive video understanding.

## 📚 Data Preparation

You may download the evaluation kit for E.T. Bench using the following commands.

```
git lfs install
git clone [email protected]:datasets/PolyU-ChenLab/ETBench
```

Then, enter the directory and extract the files in the `videos` folder by running:

```
cd ETBench
for path in videos/*.tar.gz; do tar -xvf $path -C videos; done
```

**[Optional]** You may also want to compress the videos (to a lower FPS and resolution) for faster I/O.

```
python compress_videos.py --fps 3 --size 224
```

<details>
<summary><i>Arguments of <code>compress_videos.py</code></i></summary>

- `--src_dir` Path to the videos folder (Default: `videos`)
- `--tgt_dir` Path to the output folder (Default: `videos_compressed`)
- `--fps` Target FPS of the output videos (Default: `3`)
- `--size` Length of the shortest side of the output frames (Default: `224`)
- `--workers` Number of workers to use (Default: `None`, i.e., the number of CPUs)

</details>

This will compress all the videos to 3 FPS with a shortest side of 224 pixels. The audio track will be removed as well. The output videos will be saved in the `videos_compressed` folder with the same structure as `videos`.
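
If you want to inspect or adapt the compression step, below is a minimal sketch of an equivalent conversion for a single video. It assumes `ffmpeg` is available on your `PATH` and is purely illustrative; it is not the bundled `compress_videos.py`, which also handles the arguments listed above (e.g., parallel `--workers`).

```python
# Illustrative sketch only -- not the bundled compress_videos.py. Requires ffmpeg on PATH.
import subprocess
from pathlib import Path

def compress(src: str, tgt: str, fps: int = 3, size: int = 224) -> None:
    """Resample to `fps`, scale the shortest side to `size` pixels, and drop the audio track."""
    Path(tgt).parent.mkdir(parents=True, exist_ok=True)
    # Keep the aspect ratio: set the shorter dimension to `size`, let ffmpeg pick the other.
    scale = f"scale='if(lt(iw,ih),{size},-2)':'if(lt(iw,ih),-2,{size})'"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", f"fps={fps},{scale}", "-an", tgt],
        check=True,
    )

# Example call with a hypothetical video path.
compress("videos/qvhighlights/example.mp4", "videos_compressed/qvhighlights/example.mp4")
```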

## 🚀 Getting Started

The folder for E.T. Bench is organized as follows.

```
ETBench
├─ annotations
│  ├─ txt (annotations for sub-tasks, with timestamps as text)
│  ├─ vid (annotations for sub-tasks, with timestamps as <vid> tokens)
│  ├─ etbench_txt_v1.0.json (merged annotations in the `txt` folder)
│  └─ etbench_vid_v1.0.json (merged annotations in the `vid` folder)
├─ evaluation
│  ├─ compute_metrics.py (script for computing metrics)
│  ├─ requirements.txt (requirements for the evaluation script)
│  └─ subset.json (IDs of the subset for evaluating commercial models)
├─ videos (raw video files)
├─ videos_compressed (compressed video files)
└─ compress_videos.py (script for compressing videos)
```

For full evaluation on all 7,289 samples, you just need either of the following annotation files.

- `etbench_txt_v1.0.json` - for models representing timestamps as plain text, e.g., '2.5 - 4.8 seconds'
- `etbench_vid_v1.0.json` - for models using special tokens for timestamps, e.g., the \<vid\> token in E.T. Chat

Each JSON file contains a list of dicts with the following entries.

```python
{
  "version": 1.0,                        # annotation version
  "idx": 0,                              # sample index
  "task": "tvg",                         # task
  "source": "qvhighlights",              # source dataset
  "video": "qvhighlights/example.mp4",   # path to video
  "duration": 35.0,                      # video duration (seconds)
  "src": [1.2, 15.0],                    # [optional] timestamps (seconds) in model inputs
  "tgt": [[15.0, 31.0], [31.4, 34.9]],   # [optional] timestamps (seconds) in model outputs
  "p": 0,                                # [optional] index of correct answer (for RAR, ECA, RVQ, GVQ)
  "o": ["a", "b", "c", "d"],             # [optional] answer candidates (for RAR, ECA, RVQ, GVQ)
  "g": ["a cat...", "it then..."],       # [optional] ground truth captions (for DVC, SLC)
  "q": "...",                            # model input prompt
  "a": "..."                             # [to be added by the user] model response
}
```
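
As a quick sanity check, the annotation file can be inspected with the standard `json` module. The snippet below is a minimal example (paths are relative to the `ETBench` root) and is not part of the official toolkit.

```python
import json
from collections import Counter

# Load the merged txt-style annotations (a list of dicts as described above).
with open("annotations/etbench_txt_v1.0.json") as f:
    samples = json.load(f)

print(len(samples))                          # total number of samples (7,289)
print(Counter(s["task"] for s in samples))   # number of samples per task
print(samples[0]["q"])                       # input prompt of the first sample
```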

For each sample, you can simply load the corresponding video and send it, together with the prompt in `q`, to the model. In `vid`-style annotations, all the timestamps in `q` have been replaced with `<vid>`, and their original values can be found in `src`.

After obtaining model outputs, place the raw text response into the `a` entry of each sample and dump the entire list to a new JSON file. ***Please make sure the dumped file has exactly the same structure as the annotation file, except that each sample has a new `a` entry storing the model output.***
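
A hypothetical sketch of this loop is shown below: it fills the `a` entry of every sample and dumps the list to a new file. The `generate` stub and the output filename are placeholders for your own model and naming; they are not provided by this repository.

```python
import copy
import json

def generate(video_path: str, prompt: str) -> str:
    """Stub: replace with your model's actual inference call."""
    raise NotImplementedError

# txt-style annotations; use etbench_vid_v1.0.json for models with <vid> tokens.
with open("annotations/etbench_txt_v1.0.json") as f:
    samples = json.load(f)

results = copy.deepcopy(samples)
for sample in results:
    video_path = f"videos_compressed/{sample['video']}"  # or 'videos/' for the raw files
    sample["a"] = generate(video_path, sample["q"])      # raw text response from the model

# Same structure as the annotation file, plus the new `a` entries.
with open("etbench_txt_predictions.json", "w") as f:
    json.dump(results, f)
```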

Please refer to the [inference script](../etchat/eval/infer_etbench.py) of E.T. Chat as an example.

## 🔮 Compute Metrics

Run the following command to install the requirements for the evaluation script.

```
pip install -r evaluation/requirements.txt
```

After that, compute the metrics by running

```
python evaluation/compute_metrics.py <path-to-the-dumped-json>

# In case you want to evaluate on the subset with 470 samples (same as the commercial models in Table 1 of the paper)
# python evaluation/compute_metrics.py <path-to-the-dumped-json> --subset
```

The evaluation log and computed metrics will be saved in `metrics.log` and `metrics.json`, respectively.

## 📖 Citation

Please kindly cite our paper if you find this project helpful.

```
@article{liu2024etbench,
  title={E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding},
  author={Liu, Ye and Ma, Zongyang and Qi, Zhongang and Wu, Yang and Chen, Chang Wen and Shan, Ying},
  journal={Tech Report},
  year={2024}
}
```