---
dataset_info:
features:
- name: index
dtype: int64
- name: v_kmph
dtype: float64
- name: ax_mpss
dtype: float64
- name: ay_mpss
dtype: float64
- name: yaw_rate_radps
dtype: float64
- name: frame
dtype: image
- name: d_lanecenter_m
dtype: float64
- name: alias
dtype: string
- name: steering_rack_pos_m
dtype: float64
- name: steering_torque_N
dtype: float64
- name: lane_curvature_radpm
dtype: float64
- name: stationary
dtype: float64
- name: segment
dtype: int64
- name: split
dtype: string
- name: road_type
dtype: string
- name: driving_situation_rural
dtype: string
- name: driving_situation_federal
dtype: string
- name: driving_situation_highway
dtype: string
- name: rep_id
dtype: int64
- name: frame_nr
dtype: int64
splits:
- name: val_val
num_bytes: 9160076169.901
num_examples: 34767
- name: val_train
num_bytes: 41105223625.104
num_examples: 138572
- name: pretrain
num_bytes: 73729563090.513
num_examples: 304287
- name: pretrain_train
num_bytes: 59523614752.871
num_examples: 242887
- name: pretrain_val
num_bytes: 14759288492.4
num_examples: 61400
download_size: 193239069632
dataset_size: 198277766130.789
configs:
- config_name: default
data_files:
- split: val_val
path: data/val_val-*
- split: val_train
path: data/val_train-*
- split: pretrain
path: data/pretrain-*
- split: pretrain_train
path: data/pretrain_train-*
- split: pretrain_val
path: data/pretrain_val-*
license: cc-by-4.0
pretty_name: SADC
size_categories:
- 1M<n<10M
---
# Dataset Card for SADC
There is evidence that the driving style of an autonomous vehicle is important for increasing passenger acceptance and trust. The driving situation has been found to have a significant influence on human driving behavior. However, current driving style models only partially incorporate information about the driving environment, limiting the alignment between an agent and the given situation.
Therefore, we propose a dataset for situation-aware driving style modeling.
## Dataset Details

### Dataset Description
The dataset is composed as follows: the pretrain set D<sub>P</sub> is split into a training subset D<sub>P,T</sub> with 242,887 samples and a validation subset D<sub>P,V</sub> with 61,400 samples. Similarly, the validation set D<sub>V</sub> is split into a training subset D<sub>V,T</sub> with 138,572 samples and a validation subset D<sub>V,V</sub> with 34,767 samples. Each subset consists of 1280 × 960 images, driving behavior indicators such as the distance to the lane center, vehicle signals such as velocity and acceleration, and traffic condition and road type labels. The full feature schema can also be inspected programmatically; see the sketch below.
- **Curated by:** Johann Haselberger
- **License:** CC-BY-4.0
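A quick way to verify the feature schema described above without downloading any image data is to load only the dataset metadata; a minimal sketch using the `datasets` library:

```python
from datasets import load_dataset_builder

# Fetch only the dataset metadata; no image shards are downloaded.
builder = load_dataset_builder(
    "jHaselberger/SADC-Situation-Awareness-for-Driver-Centric-Driving-Style-Adaptation"
)

# Print every feature name with its dtype, e.g. v_kmph -> float64.
for name, feature in builder.info.features.items():
    print(f"{name}: {feature}")
```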
### Dataset Sources
We collected over 16 hours of driving data from a single test driver as pretrain data. For driving style adaptation, we collected driving behavior data from five different subjects driving on the same route for one hour, denoted as validation data.
## Usage

### Download Script
For easy use of our dataset, we provide a download script in our repository: https://github.com/jHaselberger/SADC-Situation-Awareness-for-Driver-Centric-Driving-Style-Adaptation/blob/master/utils/download_dataset.py.
```bash
python download_dataset.py --target_dir ../data --split pretrain_train
```
### List Available Split Names
```python
from datasets import get_dataset_split_names

# Query the available splits without downloading any data.
split_names = get_dataset_split_names(
    "jHaselberger/SADC-Situation-Awareness-for-Driver-Centric-Driving-Style-Adaptation"
)
print(f"Available split names: {split_names}")
```
### Inspect some Samples
```python
import pandas as pd
from datasets import load_dataset
from matplotlib import pyplot as plt

# Stream the dataset so no full download is required.
dataset = load_dataset(
    "jHaselberger/SADC-Situation-Awareness-for-Driver-Centric-Driving-Style-Adaptation",
    split="val_val",
    streaming=True,
)

# Materialize the first 50 samples into a DataFrame.
samples = dataset.take(50)
df = pd.DataFrame(list(samples))
print(df.head())
```
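Since the vehicle signals are plain numeric columns, quick sanity checks are straightforward. A minimal sketch that summarizes a few of the signals over the 50 streamed samples (the column names are taken from the feature list above):

```python
# Basic summary statistics for selected vehicle signals.
signal_cols = ["v_kmph", "ax_mpss", "ay_mpss", "yaw_rate_radps", "d_lanecenter_m"]
print(df[signal_cols].describe())
```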
### Visualize some Time-Series
```python
# Plot velocity and steering torque over the frame number on twin y-axes.
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(df["frame_nr"], df["v_kmph"], "ko-", label="velocity")
ax2.plot(df["frame_nr"], df["steering_torque_N"], "ro-", label="steering torque")
ax1.set_xlabel("Frame")
ax1.set_ylabel("Velocity in km/h", color="k")
ax2.set_ylabel("Steering Torque in N", color="r")
plt.show()
```
### Visualize the Camera Image
```python
# Display the camera frame of the last streamed sample.
plt.imshow(df["frame"].iloc[-1])
plt.axis("off")
plt.show()
```
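The same pattern extends to a small grid of frames, which gives a quick visual impression of a sequence. A minimal sketch over the first six streamed samples:

```python
# Show the first six camera frames side by side.
fig, axes = plt.subplots(1, 6, figsize=(18, 3))
for ax, frame, frame_nr in zip(axes, df["frame"], df["frame_nr"]):
    ax.imshow(frame)
    ax.set_title(f"Frame {frame_nr}")
    ax.axis("off")
plt.show()
```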
## Dataset Structure

### Dataset Splits
| Split | Number of Samples | Description |
|---|---|---|
| **Used for the Experiments in the Paper** | | |
| pretrain | 304287 | The full pretrain dataset. |
| pretrain_train | 242887 | Subset of pretrain used for training. |
| pretrain_val | 61400 | Subset of pretrain used for validation. |
| val_train | 138572 | Subset of validation used for training. |
| val_val | 34767 | Subset of validation used for validation. |
| **Additional Data** | | |
| pretrain_unfiltered | 1180252 | The full unfiltered pretrain dataset. |
| val_unfiltered | 686328 | The full unfiltered validation dataset. |
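The unfiltered splits can be streamed the same way as the filtered ones. A sketch, assuming `pretrain_unfiltered` is exposed under the default configuration:

```python
from datasets import load_dataset

# Stream the unfiltered pretrain data to avoid downloading the
# full ~1.2M-sample split up front.
unfiltered = load_dataset(
    "jHaselberger/SADC-Situation-Awareness-for-Driver-Centric-Driving-Style-Adaptation",
    split="pretrain_unfiltered",
    streaming=True,
)
print(next(iter(unfiltered))["road_type"])
```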
### Files

- The folder `driving_data` contains the vehicle signals. Downloading these files is optional and only required if you do not want to download the entire image dataset (see the sketch below).
- The folder `image_lists` contains the image lists used for training the feature encoders and NN-based behavior predictors. Downloading these files is optional.
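If you only need the vehicle signals or image lists, one option is to fetch just those folders via `huggingface_hub`. A sketch using `snapshot_download` with `allow_patterns` (the target directory `./sadc_signals` is an arbitrary example path):

```python
from huggingface_hub import snapshot_download

# Download only the vehicle-signal files and image lists,
# skipping the large image shards.
snapshot_download(
    repo_id="jHaselberger/SADC-Situation-Awareness-for-Driver-Centric-Driving-Style-Adaptation",
    repo_type="dataset",
    allow_patterns=["driving_data/*", "image_lists/*"],
    local_dir="./sadc_signals",  # example path, adjust as needed
)
```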
## Personal and Sensitive Information
To blur vehicle license plates and human faces in the camera frames, we utilize [EgoBlur](https://github.com/facebookresearch/EgoBlur).
Furthermore, all subject-related data, including socio-demographics, are anonymized.
## Bias, Risks, and Limitations
Considering the limitations of our dataset, real-world tests should be conducted with care in a safe environment. To publish the data in compliance with privacy policies, we utilized a state-of-the-art anonymization framework to blur human faces and vehicle license plates.
## Citation
**BibTeX:**

```bibtex
@misc{haselberger2024situation,
      title={Situation Awareness for Driver-Centric Driving Style Adaptation},
      author={Johann Haselberger and Bonifaz Stuhr and Bernhard Schick and Steffen Müller},
      year={2024},
      eprint={2403.19595},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
**APA:**

Haselberger, J., Stuhr, B., Schick, B., & Müller, S. (2024). Situation awareness for driver-centric driving style adaptation. arXiv:2403.19595.