---
language:
- yue
license: cc0-1.0
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-generation
- feature-extraction
- audio-to-audio
- audio-classification
- text-to-audio
pretty_name: c
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- cantonese
- audio
- art
dataset_info:
features:
- name: audio
dtype: audio
- name: id
dtype: string
- name: episode_id
dtype: int64
- name: audio_duration
dtype: float64
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 38770110912.616
num_examples: 39173
download_size: 38780593424
dataset_size: 38770110912.616
---
# Zoeng Jyut Gaai "Romance of the Three Kingdoms" Storytelling Speech Dataset

This is a speech dataset of Zoeng Jyut Gaai (張悦楷) telling Romance of the Three Kingdoms (《三國演義》). Zoeng Jyut Gaai was the most famous storyteller (講古佬) / Cantonese storytelling artist in Guangzhou. He told stories on radio stations across Guangdong from the 1970s onwards, and his voice is a shared memory of many Guangzhou people. Romance of the Three Kingdoms, from which this dataset is built, is one of his best-known works.
Uses of this dataset:

- TTS (text-to-speech) training set
- ASR (automatic speech recognition) training or evaluation set
- Various kinds of linguistic and literary research
- Simply listening to and enjoying the art!
TTS demo: https://huggingface.co./spaces/laubonghaudoi/zoengjyutgaai_tts
## Notes

- All transcriptions follow the prescribed character usage described in https://jyutping.org/blog/typo/ and https://jyutping.org/blog/particles/.
- All transcriptions use full-width punctuation only; no half-width punctuation is used.
- All transcriptions are written in Chinese characters, with no Arabic numerals and no Latin letters.
- All source audio is stored in `webm/`. For convenient use as training data, the segmented audio is resampled to 44100 Hz and stored in `wav/`.
- All source subtitle SRT files are stored under `srt/`; paired with the webm files, they can be enjoyed directly as subtitled recordings. `cut.py` is the segmentation script: it cuts each wav into short sentences according to the corresponding srt and generates a transcription csv (a sketch of the idea is given after this list). `stats.py` is the statistics script: running it prints the dataset's overall statistics.
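`cut.py` in this repo is the authoritative implementation. Purely to illustrate the idea (parse the srt timestamps, slice the episode wav, write one clip per sentence plus a transcription csv), a self-contained sketch might look like the following; the file names, output layout, and `soundfile` dependency here are assumptions made for the example, not the script's actual interface:

```python
import csv
import re
from pathlib import Path

import soundfile as sf

# Matches an SRT timing line such as "00:01:02,345 --> 00:01:05,678"
SRT_TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+) --> (\d+):(\d+):(\d+)[,.](\d+)")

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

def cut_episode(wav_path, srt_path, out_dir, csv_path):
    """Slice one episode's wav into per-sentence clips according to its srt."""
    audio, sr = sf.read(wav_path)
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    rows = []
    text = Path(srt_path).read_text(encoding="utf-8").replace("\r\n", "\n")
    for i, block in enumerate(text.strip().split("\n\n")):
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        m = SRT_TIME.search(lines[1])
        if not m:
            continue
        start = to_seconds(*m.groups()[:4])
        end = to_seconds(*m.groups()[4:])
        clip = audio[int(start * sr):int(end * sr)]
        clip_name = f"{Path(wav_path).stem}_{i:04d}.wav"
        sf.write(out_dir / clip_name, clip, sr)
        rows.append({"file_name": clip_name, "transcription": "".join(lines[2:]).strip()})
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["file_name", "transcription"])
        writer.writeheader()
        writer.writerows(rows)

# Example: cut_episode("001.wav", "srt/001.srt", "wav/001", "wav/001/metadata.csv")
```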
## Download and usage

To download and use this dataset, run this directly in Python:

```python
from datasets import load_dataset

ds = load_dataset("CanCLID/zoengjyutgaai_saamgwokjinji")
```
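After loading, each example should expose the features listed in the metadata above (`audio`, `id`, `episode_id`, `audio_duration`, `transcription`). A minimal sanity check on the `train` split:

```python
sample = ds["train"][0]

print(sample["transcription"])           # sentence text in Chinese characters
print(sample["audio_duration"])          # clip length in seconds
print(sample["audio"]["sampling_rate"])  # 44100
print(sample["audio"]["array"].shape)    # decoded waveform as a NumPy array
```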
If you simply want to download everything under `wav/`, run the Python code below. Note that you need to run `pip install --upgrade huggingface_hub` first:

```python
from huggingface_hub import snapshot_download

# If you only want the subtitles or the source audio, change `wav/*` below to `srt/*` or `webm/*`
snapshot_download(repo_id="CanCLID/zoengjyutgaai_saamgwokjinji", allow_patterns="wav/*", local_dir="./", repo_type="dataset")
```
If you don't want to use Python, you can also have git clone only the `wav/` directory (or any other path) from the command line, instead of cloning the whole repo and wasting space and download time:

```bash
mkdir zoengjyutgaai_saamgwokjinji
cd zoengjyutgaai_saamgwokjinji
git init
git remote add origin https://huggingface.co./datasets/CanCLID/zoengjyutgaai_saamgwokjinji
git sparse-checkout init --cone
# Only check out the specified paths
git sparse-checkout set wav
# Start the download
git pull origin main
```
## Dataset construction process

This dataset was collected and built as follows:

- Download the source recordings from YouTube or mainland Chinese storytelling websites; each episode is usually about half an hour long, in `.webm` or `.mp3` format.
- Use a subtitling tool to add subtitles to these recordings, producing the corresponding `.srt` files.
- Convert the source recordings to `.wav` format as losslessly as possible, using the command given below.
- Run `cut.py`, which splits each episode's `.wav` into one `.wav` per sentence according to the timestamps in its `.srt`, and writes the corresponding text into the dataset's `xxx.csv`.
- Then open an IPython session and run the commands below one by one to push the data to HuggingFace.
```python
from datasets import load_dataset
from huggingface_hub import login

dataset = load_dataset('audiofolder', data_dir='./wav')
# Check that the loaded data looks right
dataset['train'][0]
# Have your access token ready to log in
login()
# Push to HuggingFace datasets
dataset.push_to_hub("CanCLID/zoengjyutgaai_saamgwokjinji")
```
### Converting `.webm` to `.wav` losslessly

First install ffmpeg, then run:

```bash
ffmpeg -i "webm/001.webm" -vn -ar 44100 -c:a pcm_s16le "001.wav"
```

If you don't want to fix the sample rate and prefer to keep the conversion as lossless as possible, remove `-ar 44100` from the command above. All wav files in this dataset have already been converted to a 44100 Hz sample rate.
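To convert a whole directory of episodes in one go, a small loop around the same ffmpeg command works. This is only a convenience sketch (the `webm/` input and `wav/` output layout are assumptions for the example), not part of the published tooling:

```python
import subprocess
from pathlib import Path

# Convert every source .webm under webm/ into a 44100 Hz 16-bit PCM .wav under wav/
Path("wav").mkdir(exist_ok=True)
for src in sorted(Path("webm").glob("*.webm")):
    dst = Path("wav") / f"{src.stem}.wav"
    subprocess.run(
        ["ffmpeg", "-i", str(src), "-vn", "-ar", "44100", "-c:a", "pcm_s16le", str(dst)],
        check=True,
    )
```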
# Zoeng Jyut Gaai story-telling Romance of the Three Kingdoms voice dataset

This is a speech dataset of Zoeng Jyut Gaai telling Romance of the Three Kingdoms. Zoeng Jyut Gaai was a famous actor, stand-up comedian, and storyteller (講古佬) in 20th-century Canton. His voice remains in the memories of thousands of Cantonese people. This dataset is built from one of his best-known storytelling pieces: Romance of the Three Kingdoms.
Use cases of this dataset:
- TTS (Text-To-Speech) training set
- ASR (Automatic Speech Recognition) training or eval set
- Various kinds of linguistic and artistic analysis
- Just listen and enjoy the art piece!
TTS demo: https://huggingface.co./spaces/laubonghaudoi/zoengjyutgaai_tts
## Introduction
- All transcriptions follow the prescribed orthography detailed in https://jyutping.org/blog/typo/ and https://jyutping.org/blog/particles/.
- All transcriptions use full-width punctuation; no half-width punctuation is used.
- All transcriptions are in Chinese characters, with no Arabic numerals or Latin letters.
- All source audio is stored in `webm/`. For the convenience of training, the segmented audio is resampled to 44.1 kHz and stored in `wav/`.
- All source subtitle SRT files are stored in `srt/`. Use them with the webm files to enjoy subtitled storytelling pieces. `cut.py` is the script that cuts the wav audio into shorter sentences based on the srt files and generates a csv file of transcriptions. `stats.py` is the script that shows statistics of this dataset (see the sketch after this list).
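`stats.py` in the repo is the canonical way to get these numbers. As an illustration only, a similar summary can be computed from the `audio_duration` column after loading the dataset:

```python
from datasets import load_dataset

ds = load_dataset("CanCLID/zoengjyutgaai_saamgwokjinji", split="train")
durations = ds["audio_duration"]
print(f"{len(durations)} clips, {sum(durations) / 3600:.1f} hours in total")
print(f"mean clip length: {sum(durations) / len(durations):.2f} s")
```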
## Usage
To use this dataset, simply run in Python:
```python
from datasets import load_dataset

ds = load_dataset("CanCLID/zoengjyutgaai_saamgwokjinji")
```
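If you feed the clips to a typical ASR model that expects 16 kHz input, you can resample on the fly with the `datasets` `Audio` feature. This is a common preprocessing step shown only as an example, not something the dataset requires:

```python
from datasets import Audio

# Decode the audio column at 16 kHz instead of the stored 44.1 kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

example = ds["train"][0]
print(example["audio"]["sampling_rate"])  # 16000
print(example["transcription"])
```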
If you only want to download a certain directory, to save the time and space of cloning the entire repo, run the Python code below. Make sure you have run `pip install --upgrade huggingface_hub` first:
```python
from huggingface_hub import snapshot_download

# If you only want to download the source audio or the subtitles, change the `wav/*` below into `srt/*` or `webm/*`
snapshot_download(repo_id="CanCLID/zoengjyutgaai_saamgwokjinji", allow_patterns="wav/*", local_dir="./", repo_type="dataset")
```
If you don't want to run Python code and would rather do this from the command line, you can selectively clone only one directory of the repo:
```bash
mkdir zoengjyutgaai_saamgwokjinji
cd zoengjyutgaai_saamgwokjinji
git init
git remote add origin https://huggingface.co./datasets/CanCLID/zoengjyutgaai_saamgwokjinji
git sparse-checkout init --cone
# Tell git which directory you want
git sparse-checkout set wav
# Pull the content
git pull origin main
```