---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- audio-classification
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: speaker_id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: digit
    dtype:
      class_label:
        names:
          '0': '0'
          '1': '1'
          '2': '2'
          '3': '3'
          '4': '4'
          '5': '5'
          '6': '6'
          '7': '7'
          '8': '8'
          '9': '9'
  - name: gender
    dtype:
      class_label:
        names:
          '0': male
          '1': female
  - name: accent
    dtype: string
  - name: age
    dtype: int64
  - name: native_speaker
    dtype: bool
  - name: origin
    dtype: string
  splits:
  - name: train
    num_bytes: 1493209727
    num_examples: 24000
  - name: test
    num_bytes: 360966680
    num_examples: 6000
  download_size: 1483680961
  dataset_size: 1854176407
---
# Dataset Card for "AudioMNIST"
The AudioMNIST dataset contains 50 English recordings of each spoken digit (0-9) from each of 60 speakers, for 30,000 recordings in total. Of the 60 participants, 12 are women and 48 are men, covering a diverse range of accents and countries of origin; their ages range from 22 to 61. This makes it a good dataset for exploring simple audio classification problems, such as predicting the spoken digit or the speaker's gender.
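As a minimal usage sketch, the dataset can be loaded with the Hugging Face `datasets` library. Note that the repository id below is a placeholder assumption, not a confirmed Hub path:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this card's actual "<namespace>/AudioMNIST" path.
ds = load_dataset("AudioMNIST")

sample = ds["train"][0]
print(sample["digit"])                   # spoken digit, class id 0-9
print(sample["gender"])                  # class id: 0 = male, 1 = female
print(sample["audio"]["sampling_rate"])  # 16000, per the dataset features
print(sample["audio"]["array"].shape)    # 1-D NumPy waveform
```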
## Bias, Risks, and Limitations
- The gender distribution is imbalanced: around 80% of the speakers are men (see the verification sketch below).
- The majority of the speakers, around 70%, have a German accent.
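A hedged sketch for verifying the gender proportion from the label metadata alone, reusing the `ds` object from the loading example above:

```python
from collections import Counter

genders = ds["train"]["gender"]               # list of integer class ids
names = ds["train"].features["gender"].names  # ['male', 'female']
for class_id, count in sorted(Counter(genders).items()):
    print(f"{names[class_id]}: {count / len(genders):.0%}")
```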
## Citation Information
The original creators of the dataset ask you to cite their paper if you use this data:
```bibtex
@ARTICLE{becker2018interpreting,
  author        = {Becker, S\"oren and Ackermann, Marcel and Lapuschkin, Sebastian and M\"uller, Klaus-Robert and Samek, Wojciech},
  title         = {Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals},
  journal       = {CoRR},
  volume        = {abs/1807.03418},
  year          = {2018},
  archivePrefix = {arXiv},
  eprint        = {1807.03418},
}
```