---
license: mit
size_categories:
- 1K<n<10K
task_categories:
- object-detection
tags:
- industry
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype: string
splits:
- name: video1
num_bytes: 4345358.918
num_examples: 1261
- name: video2
num_bytes: 3923994.088
num_examples: 1221
- name: video3
num_bytes: 3923444.43
num_examples: 1221
- name: video4
num_bytes: 4944592.07
num_examples: 1481
- name: video5
num_bytes: 4574715.249
num_examples: 1301
download_size: 19498138
dataset_size: 21712104.754999995
configs:
- config_name: default
data_files:
- split: video1
path: data/video1-*
- split: video2
path: data/video2-*
- split: video3
path: data/video3-*
- split: video4
path: data/video4-*
- split: video5
path: data/video5-*
---

The IndustrialDetectionStaticCameras dataset is divided into five primary folders named `videoY`, where Y = 1, 2, 3, 4, 5. Each `videoY` folder contains the following:

- The video of the scene in `.mp4` format: `videoY.mp4`
- A folder with the images of each frame of the video: `imgs_videoY`
- A folder that contains, for each frame, a `.txt` file holding one line per labelled object with the annotation in KITTI format: `annotations_videoY`
Remark: Each label file contains a set of lines, each line representing the annotation of a single object in the corresponding image. The format of each line is as follows:

`<object_type> <truncation> <occlusion> <alpha> <left> <top> <right> <bottom> <height> <width> <length> <x> <y> <z> <rotation_y>`

where only the fields `<object_type>`, `<left>`, `<top>`, `<right>`, `<bottom>` and `<rotation_y>` are considered. The `<rotation_y>` field has been repurposed to indicate whether the labelled object is a static object in the scene: value `1` means the object is static and value `0` means it is not.
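A line in this format can be parsed by splitting on whitespace and keeping only the fields the dataset uses. The function below is a minimal sketch, not part of the dataset; it assumes well-formed 15-field KITTI lines as described above.

```python
def parse_kitti_line(line: str) -> dict:
    """Parse one KITTI-format annotation line, keeping only the fields
    this dataset uses: object type, 2D box, and the static flag stored
    in the rotation_y slot (hypothetical helper, naming is ours)."""
    f = line.split()
    return {
        "object_type": f[0],
        # fields 4-7: left, top, right, bottom of the 2D bounding box
        "bbox": (float(f[4]), float(f[5]), float(f[6]), float(f[7])),
        # field 14 (rotation_y) is repurposed: 1 = static object, 0 = not
        "is_static": float(f[14]) == 1.0,
    }
```

Fields 1-3 and 8-13 (truncation, occlusion, alpha, 3D dimensions and location) are present in each line for KITTI compatibility but are ignored here.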