Formats: json · Languages: English · Libraries: Datasets, pandas
dcores committed · Commit 52e2c9f · verified · 1 parent: 04acc6a

Update README.md

Files changed (1): README.md (+4 −2)
README.md CHANGED
@@ -50,8 +50,10 @@ size_categories:
  </div>
 
  ### Updates
+ - <h4 style="color:darkgreen">23 December 2024: Please redownload the dataset, as the Unexpected Action labels have been updated.</h4>
+ <!--
  - <h4 style="color:darkgreen">25 October 2024: Please redownload the dataset due to the removal of duplicated samples for Action Sequence and Unexpected Action.</h4>
-
+ -->
  # TVBench
  TVBench is a new benchmark specifically created to evaluate temporal understanding in video QA. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; and (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making them an unsuitable alternative.
 
@@ -74,7 +76,7 @@ Question and answers are provided as a json file for each task.
  Videos in TVBench are sourced from Perception Test, CLEVRER, STAR, MoVQA, Charades-STA, NTU RGB+D, FunQA and CSV. All videos are included in this repository, except for those from NTU RGB+D, which can be downloaded from the official [website](https://rose1.ntu.edu.sg/dataset/actionRecognition/). It is not necessary to download the full dataset, as NTU RGB+D provides a subset specifically for TVBench with the required videos. These videos are required by the Action Antonym task and should be stored in the `video/action_antonym` folder.
 
  ## Leaderboard
- ![image](figs/sota.png)
+ https://paperswithcode.com/sota/video-question-answering-on-tvbench
 
  # Citation
  If you find this benchmark useful, please consider citing:
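Both update notices ask for a redownload. Below is a minimal sketch of a clean re-fetch with `huggingface_hub`; the `repo_id` is a placeholder, not a confirmed value:

```python
from huggingface_hub import snapshot_download

# Force a fresh download so the updated Unexpected Action labels
# replace any stale local cache.
snapshot_download(
    repo_id="<org>/TVBench",  # placeholder: substitute the actual dataset repo id
    repo_type="dataset",
    force_download=True,
)
```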
 
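Per the diff context, questions and answers ship as one JSON file per task, so a task can be inspected with pandas (one of the libraries listed for this dataset). The path below is an assumed example of the per-task layout, not a confirmed file name:

```python
import pandas as pd

# Assumed per-task annotation file, e.g. for the Action Antonym task.
qa = pd.read_json("json/action_antonym.json")

print(len(qa), "QA samples")
print(qa.columns.tolist())  # inspect the real schema instead of assuming it
```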
 
 
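Since the NTU RGB+D clips are downloaded separately and must land in `video/action_antonym`, a small existence check catches misplaced files before running any evaluation. This is a sketch only: the annotation path matches the example above, and the `video` field name is an assumption about the schema:

```python
import json
from pathlib import Path

# Assumed annotation file, matching the sketch above.
samples = json.loads(Path("json/action_antonym.json").read_text())

video_dir = Path("video/action_antonym")
# "video" is an assumed field holding each clip's file name.
missing = [s["video"] for s in samples
           if not (video_dir / s["video"]).exists()]
print(f"{len(missing)} of {len(samples)} referenced clips missing")
```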