---
viewer: false
license: cc-by-nc-4.0
task_categories:
- text-to-speech
- automatic-speech-recognition
language:
- zh
- en
- ja
- fr
- de
- ko
pretty_name: Emilia
size_categories:
- n>1T
extra_gated_prompt: >-
  Terms of Access: The researcher has requested permission to use the Emilia
  dataset and the Emilia-Pipe preprocessing pipeline. In exchange for such
  permission, the researcher hereby agrees to the following terms and
  conditions:

  1. The researcher shall use the dataset ONLY for non-commercial research and
  educational purposes.

  2. The authors make no representations or warranties regarding the dataset,
  including but not limited to warranties of non-infringement or fitness for a
  particular purpose.

  3. The researcher accepts full responsibility for their use of the dataset
  and shall defend and indemnify the authors of Emilia, including their
  employees, trustees, officers, and agents, against any and all claims arising
  from the researcher's use of the dataset, including but not limited to the
  researcher's use of any copies of copyrighted content that they may create
  from the dataset.

  4. The researcher may provide research associates and colleagues with access
  to the dataset, provided that they first agree to be bound by these terms and
  conditions.

  5. The authors reserve the right to terminate the researcher's access to the
  dataset at any time.

  6. If the researcher is employed by a for-profit, commercial entity, the
  researcher's employer shall also be bound by these terms and conditions, and
  the researcher hereby represents that they are fully authorized to enter into
  this agreement on behalf of such employer.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Position: text
  Your Supervisor/manager/director: text
  I agree to the Terms of Access: checkbox
---

# Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation

This is the official repository 👑 for the **Emilia** dataset and the source code for the **Emilia-Pipe** speech data preprocessing pipeline.
## News 🔥

- **2024/08/28**: We welcome you to join Amphion's [Discord channel](https://discord.com/invite/ZxxREr3Y) to stay connected and engage with our community!
- **2024/08/27**: *The Emilia dataset is now publicly available!* Discover the most extensive and diverse speech generation dataset with 101k hours of in-the-wild speech data now at [HuggingFace](https://huggingface.co./datasets/amphion/Emilia-Dataset) or [OpenDataLab](https://opendatalab.com/Amphion/Emilia)! 👑👑👑
- **2024/07/08**: Our preprint [paper](https://arxiv.org/abs/2407.05361) is now available! 🔥🔥🔥
- **2024/07/03**: We welcome everyone to visit our [homepage](https://emilia-dataset.github.io/Emilia-Demo-Page/) for a brief introduction to the Emilia dataset and our demos!
- **2024/07/01**: We release Emilia and Emilia-Pipe! We welcome everyone to explore them on our [GitHub](https://github.com/open-mmlab/Amphion/tree/main/preprocessors/Emilia)! 🎉🎉🎉

## Emilia Overview ⭐️

The **Emilia** dataset is a comprehensive, multilingual dataset with the following features:

- containing over *101k* hours of speech data;
- covering six different languages: *English (En), Chinese (Zh), German (De), French (Fr), Japanese (Ja), and Korean (Ko)*;
- containing diverse speech data with *various speaking styles*, collected from video platforms and podcasts on the Internet and covering content genres such as talk shows, interviews, debates, sports commentary, and audiobooks.

The table below provides the duration statistics for each language in the dataset.

| Language  | Duration (hours) |
|:---------:|:----------------:|
| English   | 46,828           |
| Chinese   | 49,922           |
| German    | 1,590            |
| French    | 1,381            |
| Japanese  | 1,715            |
| Korean    | 217              |

**Emilia-Pipe** is the first open-source preprocessing pipeline designed to transform raw, in-the-wild speech data into high-quality training data with annotations for speech generation. This pipeline can process one hour of raw audio into model-ready data in just a few minutes, requiring only the raw speech data. A detailed description of Emilia and Emilia-Pipe can be found in our [paper](https://arxiv.org/abs/2407.05361).

## Emilia Dataset Usage 📖

The Emilia dataset is now publicly available at [HuggingFace](https://huggingface.co./datasets/amphion/Emilia-Dataset)! Users in mainland China can also download Emilia from [OpenDataLab](https://opendatalab.com/Amphion/Emilia)!

- To download from HuggingFace, you must first gain access to the dataset by completing the request form and accepting the terms of access. Please note that due to HuggingFace's file size limit of 50 GB, the `EN/EN_B00008.tar.gz` file has been split into `EN/EN_B00008.tar.gz.0` and `EN/EN_B00008.tar.gz.1`. Before extracting the files, you will need to run the following command to combine the parts: `cat EN/EN_B00008.tar.gz.* > EN/EN_B00008.tar.gz` (see the download sketch at the end of this section).
- To download from OpenDataLab (i.e., OpenXLab), please follow the guidance [here](https://speechteam.feishu.cn/wiki/PC8Ew5igviqBiJkElMJcJxNonJc) to gain access.

**ENJOY USING EMILIA!!!** 🔥

If you wish to re-build Emilia from scratch, you may download the raw audio files from the [provided URL list](https://huggingface.co./datasets/amphion/Emilia) and use our open-source [Emilia-Pipe](https://github.com/open-mmlab/Amphion/tree/main/preprocessors/Emilia) preprocessing pipeline to preprocess the raw data. Additionally, users can easily use Emilia-Pipe to preprocess their own raw speech data for custom needs.
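Below is a minimal, unofficial sketch (not part of the official tooling) of downloading the English batches with the `huggingface_hub` Python library and re-assembling the split `EN/EN_B00008.tar.gz` archive. The `allow_patterns` filter and the local directory name are illustrative choices, and the snippet assumes you have been granted access to the gated dataset and are logged in (e.g. via `huggingface-cli login`).

```python
# Unofficial helper sketch: download the English batches of Emilia from
# HuggingFace and re-assemble the split EN_B00008.tar.gz archive.
# Assumes access to the gated dataset has been granted and that you are
# authenticated locally (e.g. via `huggingface-cli login`).
from pathlib import Path

from huggingface_hub import snapshot_download

# Fetch only the EN/ files into a local directory; adjust or drop
# allow_patterns to download other languages or the full dataset.
local_dir = snapshot_download(
    repo_id="amphion/Emilia-Dataset",
    repo_type="dataset",
    local_dir="Emilia-Dataset",
    allow_patterns=["EN/*"],
)

# EN_B00008.tar.gz is stored as two parts; concatenate them before extraction,
# equivalent to: cat EN/EN_B00008.tar.gz.* > EN/EN_B00008.tar.gz
en_dir = Path(local_dir) / "EN"
parts = sorted(en_dir.glob("EN_B00008.tar.gz.[0-9]"))
if parts:
    with open(en_dir / "EN_B00008.tar.gz", "wb") as merged:
        for part in parts:
            merged.write(part.read_bytes())
```

After merging, the archive can be extracted as usual, e.g. with `tar -xzf EN/EN_B00008.tar.gz`.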
By open-sourcing the Emilia-Pipe code, we aim to enable the speech community to collaborate on large-scale speech generation research.

*Please note that Emilia does not own the copyright to the audio files; the copyright remains with the original owners of the videos or audio. Users are permitted to use this dataset only for non-commercial purposes under the CC BY-NC-4.0 license.*

## Emilia Dataset Structure ⛪️

The Emilia dataset is structured as follows:

Structure example:
```
|-- openemilia_all.tar.gz (all .JSONL files are gzipped with directory structure in this file)
|-- EN (114 batches)
|   |-- EN_B00000.jsonl
|   |-- EN_B00000 (= EN_B00000.tar.gz)
|   |   |-- EN_B00000_S00000
|   |   |   `-- mp3
|   |   |       |-- EN_B00000_S00000_W000000.mp3
|   |   |       `-- EN_B00000_S00000_W000001.mp3
|   |   |-- ...
|   |-- ...
|   |-- EN_B00113.jsonl
|   `-- EN_B00113
|-- ZH (92 batches)
|-- DE (9 batches)
|-- FR (10 batches)
|-- JA (7 batches)
|-- KO (4 batches)
```

JSONL files example:
```
{"id": "EN_B00000_S00000_W000000", "wav": "EN_B00000/EN_B00000_S00000/mp3/EN_B00000_S00000_W000000.mp3", "text": " You can help my mother and you- No. You didn't leave a bad situation back home to get caught up in another one here. What happened to you, Los Angeles?", "duration": 6.264, "speaker": "EN_B00000_S00000", "language": "en", "dnsmos": 3.2927}
{"id": "EN_B00000_S00000_W000001", "wav": "EN_B00000/EN_B00000_S00000/mp3/EN_B00000_S00000_W000001.mp3", "text": " Honda's gone, 20 squads done. X is gonna split us up and put us on different squads. The team's come and go, but 20 squad, can't believe it's ending.", "duration": 8.031, "speaker": "EN_B00000_S00000", "language": "en", "dnsmos": 3.0442}
```

A short sketch showing how to read these metadata records is provided after the Reference section below.

## Reference 📖

If you use the Emilia dataset or the Emilia-Pipe pipeline, please cite the following papers:

```bibtex
@article{emilia,
  title={Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation},
  author={He, Haorui and Shang, Zengqiang and Wang, Chaoren and Li, Xuyuan and Gu, Yicheng and Hua, Hua and Liu, Liwei and Yang, Chen and Li, Jiaqi and Shi, Peiyang and Wang, Yuancheng and Chen, Kai and Zhang, Pengyuan and Wu, Zhizheng},
  journal={arXiv},
  volume={abs/2407.05361},
  year={2024}
}
```

```bibtex
@article{amphion,
  title={Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
  author={Zhang, Xueyao and Xue, Liumeng and Gu, Yicheng and Wang, Yuancheng and He, Haorui and Wang, Chaoren and Chen, Xi and Fang, Zihao and Chen, Haopeng and Zhang, Junan and Tang, Tze Ying and Zou, Lexiao and Wang, Mingxuan and Han, Jun and Chen, Kai and Li, Haizhou and Wu, Zhizheng},
  journal={arXiv},
  volume={abs/2312.09911},
  year={2024}
}
```
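As a companion to the structure and JSONL examples above, here is a minimal, unofficial sketch of reading one batch's metadata and resolving each record's audio path. The `EN/EN_B00000` locations follow the structure example; the on-disk layout (JSONL file next to the extracted batch directory) is an assumption rather than official tooling.

```python
# Unofficial sketch: iterate over one batch's JSONL metadata and resolve the
# audio path for each utterance, following the layout shown in the
# "Emilia Dataset Structure" section. Assumes EN_B00000.jsonl and the
# extracted EN_B00000/ directory both live under ./EN.
import json
from pathlib import Path

lang_dir = Path("EN")

with open(lang_dir / "EN_B00000.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "wav" is relative to the language directory, e.g.
        # EN_B00000/EN_B00000_S00000/mp3/EN_B00000_S00000_W000000.mp3
        audio_path = lang_dir / record["wav"]
        print(record["id"], record["language"], record["duration"],
              record["dnsmos"], audio_path)
```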