---
configs:
- config_name: default
data_files:
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: speech_tokenizer_16k
num_bytes: 80977275
num_examples: 5531
download_size: 12564590
dataset_size: 80977275
---
# Dataset Card for "iemocap_extract_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
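
The metadata above defines a single `speech_tokenizer_16k` split whose examples carry a string `id` and a `unit` feature holding sequences of int64 speech-unit sequences. A minimal loading sketch with the 🤗 `datasets` library, assuming the repository ID is `NTUVictor/iemocap_extract_unit` (inferred from the card title and uploader; adjust if the dataset lives elsewhere):

```python
from datasets import load_dataset

# Repository ID is an assumption inferred from this card's title.
ds = load_dataset("NTUVictor/iemocap_extract_unit", split="speech_tokenizer_16k")

example = ds[0]
print(example["id"])    # utterance identifier (string)
print(example["unit"])  # discrete speech units: list of int64 sequences
```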