---
license: cc-by-nc-4.0
---
This is the dataset repository for the paper "Abductive Ego-View Accident Video Understanding for Safe Driving Perception".
The GitHub repo: [Link](https://github.com/jeffreychou777/LOTVS-MM-AU)
Due to the large amount of data, the archives were split into chunks before uploading.
After downloading the data, you need to merge the chunks before extracting them:
```
# Take DADA-2000 as an example
cd DADA-2000_chunks
# Concatenate the chunks back into a single archive, then extract it
cat DADA2000.part_* > DADA2000.tar.gz
tar -xzvf DADA2000.tar.gz
```
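If you prefer to merge and extract every archive in one pass, a loop like the one below can be adapted. This is only a convenience sketch: it assumes each chunk folder follows the `<NAME>_chunks` / `<NAME>.part_*` naming of the example above and contains a single gzipped tarball.
```
#!/usr/bin/env bash
# Merge and extract every chunked archive in the current directory.
# Assumption: each folder is named <something>_chunks and holds <NAME>.part_*
# pieces of one gzipped tarball, as in the DADA-2000 example above.
for dir in *_chunks; do
    (
        cd "$dir" || exit 1
        parts=( *.part_* )              # e.g. DADA2000.part_aa, DADA2000.part_ab, ...
        base="${parts[0]%%.part_*}"     # e.g. DADA2000
        cat "${base}".part_* > "${base}.tar.gz"
        tar -xzvf "${base}.tar.gz"
    )
done
```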
After extraction, please organize the files as follows:
```
MM-AU                      # root of your MM-AU
β”œβ”€β”€ CAP-DATA
β”‚   β”œβ”€β”€ 1-10
β”‚   β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”‚   β”œβ”€β”€ 001537/images
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 000001.jpg
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ ......
β”‚   β”‚   β”œβ”€β”€ 2
β”‚   β”‚   β”œβ”€β”€ ......
β”‚   β”‚   β”œβ”€β”€ 10
β”‚   β”œβ”€β”€ 11
β”‚   β”œβ”€β”€ 12-42
β”‚   β”œβ”€β”€ 43
β”‚   β”œβ”€β”€ 44-62
β”‚   β”œβ”€β”€ cap_text_annotations.xls
β”œβ”€β”€ DADA-DATA
β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”œβ”€β”€ 001/images
β”‚   β”‚   β”‚   β”œβ”€β”€ 0001.png
β”‚   β”‚   β”‚   β”œβ”€β”€ ......
β”‚   β”œβ”€β”€ 2
β”‚   β”œβ”€β”€ ......
β”‚   β”œβ”€β”€ 61
β”‚   β”œβ”€β”€ dada_text_annotations.xlsx
```
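As a quick sanity check of the layout above, you can count the extracted clips and frames. The paths below simply follow the tree shown here; adjust them to wherever you placed the root folder.
```
# Count video folders and frames under each subset (paths follow the tree above)
find MM-AU/CAP-DATA  -type d -name images | wc -l      # number of CAP video clips
find MM-AU/CAP-DATA  -type f -name '*.jpg' | wc -l     # number of CAP frames
find MM-AU/DADA-DATA -type d -name images | wc -l      # number of DADA-2000 video clips
find MM-AU/DADA-DATA -type f -name '*.png' | wc -l     # number of DADA-2000 frames
```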
The COCO-style datasets for the object detection task have been uploaded!
Note: The object detection data used in the paper and the improved version MMAU-Detectv1 differ in file names and in the number of videos, due to different data cleaning and organization methods, but both follow the same COCO dataset style and the same dataset split strategy. The version used in the paper is provided to ensure the reproducibility of our results, while the organization of MMAU-Detectv1 allows better access to the video and image metadata when needed.
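Since both detection releases follow the standard COCO annotation layout, they can be inspected with ordinary COCO tooling. The snippet below is only a sketch using `jq`; the annotation file name `instances_train.json` is a placeholder, so substitute the actual file shipped with the detection release you downloaded.
```
# Peek at a COCO-style annotation file (file name is a placeholder)
jq '.images | length'       instances_train.json   # number of images
jq '.annotations | length'  instances_train.json   # number of bounding boxes
jq '[.categories[].name]'   instances_train.json   # category names
```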