Datasets:

parquet-converter committed
Commit 1289000
1 parent: c90c871

Update parquet files
README.md DELETED
@@ -1,227 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: VCTK
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - automatic-speech-recognition
- task_ids: []
- paperswithcode_id: vctk
- train-eval-index:
- - config: main
-   task: automatic-speech-recognition
-   task_id: speech_recognition
-   splits:
-     train_split: train
-   col_mapping:
-     file: path
-     text: text
-   metrics:
-   - type: wer
-     name: WER
-   - type: cer
-     name: CER
- dataset_info:
-   features:
-   - name: speaker_id
-     dtype: string
-   - name: audio
-     dtype:
-       audio:
-         sampling_rate: 48000
-   - name: file
-     dtype: string
-   - name: text
-     dtype: string
-   - name: text_id
-     dtype: string
-   - name: age
-     dtype: string
-   - name: gender
-     dtype: string
-   - name: accent
-     dtype: string
-   - name: region
-     dtype: string
-   - name: comment
-     dtype: string
-   config_name: main
-   splits:
-   - name: train
-     num_bytes: 40103111
-     num_examples: 88156
-   download_size: 11747302977
-   dataset_size: 40103111
- ---
-
- # Dataset Card for VCTK
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [Edinburgh DataShare](https://doi.org/10.7488/ds/2645)
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads about 400 sentences, selected from a newspaper, the rainbow passage, and an elicitation paragraph used for the speech accent archive.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- A data point comprises the path to the audio file, called `file`, and its transcription, called `text`.
-
- ```
- {
-     'speaker_id': 'p225',
-     'text_id': '001',
-     'text': 'Please call Stella.',
-     'age': '23',
-     'gender': 'F',
-     'accent': 'English',
-     'region': 'Southern England',
-     'file': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac',
-     'audio': {
-         'path': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac',
-         'array': array([0.00485229, 0.00689697, 0.00619507, ..., 0.00811768, 0.00836182, 0.00854492], dtype=float32),
-         'sampling_rate': 48000
-     },
-     'comment': ''
- }
- ```
-
- Each audio file is a single-channel FLAC with a sample rate of 48000 Hz.
-
- ### Data Fields
-
- Each row consists of the following fields:
-
- - `speaker_id`: Speaker ID
- - `audio`: Audio recording
- - `file`: Path to the audio file
- - `text`: Text transcription of the corresponding audio
- - `text_id`: Text ID
- - `age`: Speaker's age
- - `gender`: Speaker's gender
- - `accent`: Speaker's accent
- - `region`: Speaker's region, if an annotation exists
- - `comment`: Miscellaneous comments, if any
-
- ### Data Splits
-
- The dataset has no predefined splits.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- The dataset consists of people who have donated their voice online. By using it, you agree not to attempt to determine the identity of speakers in this dataset.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
-
- ### Citation Information
-
- ```bibtex
- @inproceedings{Veaux2017CSTRVC,
-     title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
-     author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
-     year = 2017
- }
- ```
-
- ### Contributions
-
- Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
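
The deleted card's data instance pairs a waveform `array` with its `sampling_rate`, so clip duration falls out as `len(array) / sampling_rate`. A minimal sketch with a synthetic stand-in row (the waveform here is made up; a real row carries a float32 array decoded from FLAC at 48 kHz):

```python
# Synthetic stand-in for one dataset row; a real 'audio.array' is a decoded
# float32 waveform, not zeros, and 'path' points at the extracted FLAC file.
sample = {
    "audio": {
        "array": [0.0] * 96000,  # two seconds of samples at 48 kHz
        "sampling_rate": 48000,
    },
}

# Duration in seconds = number of samples / samples per second.
duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
```

The same arithmetic works on any row regardless of microphone or speaker, since all recordings share the 48000 Hz rate declared in the features.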
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"main": {"description": "", "citation": "@inproceedings{Veaux2017CSTRVC,\n\ttitle = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},\n\tauthor = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},\n\tyear = 2017\n}\n", "homepage": "https://datashare.ed.ac.uk/handle/10283/3443", "license": "", "features": {"speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "audio": {"sampling_rate": 48000, "mono": true, "_storage_dtype": "string", "id": null, "_type": "Audio"}, "file": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "text_id": {"dtype": "string", "id": null, "_type": "Value"}, "age": {"dtype": "string", "id": null, "_type": "Value"}, "gender": {"dtype": "string", "id": null, "_type": "Value"}, "accent": {"dtype": "string", "id": null, "_type": "Value"}, "region": {"dtype": "string", "id": null, "_type": "Value"}, "comment": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_column": "audio", "transcription_column": "text"}], "builder_name": "vctk", "config_name": "main", "version": {"version_str": "0.9.2", "description": null, "major": 0, "minor": 9, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 40103111, "num_examples": 88156, "dataset_name": "vctk"}}, "download_checksums": {"https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip": {"num_bytes": 11747302977, "checksum": "f96258be9fdc2cbff6559541aae7ea4f59df3fcaf5cf963aae5ca647357e359c"}}, "download_size": 11747302977, "post_processing_size": null, "dataset_size": 40103111, "size_in_bytes": 11787406088}}
 
 
main/train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:832409b80208069614650af1438b242351d1ea6cc338547a39fd2617fbadef2d
+ size 471632807
main/train/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:349648997920a878fe5a82048587236b35f06b284a03eb84431e72278b219e0f
+ size 415387720
main/train/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dec94b112af35981f2046feedb7f290234ed304f3e234260cc4b9adff97b6866
+ size 441013052
main/train/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fb3428e58384acc29a29fc16741144d7a25c75e9349b016ad7377a1be2c13bb
+ size 456796730
main/train/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67a152467b3e09529d988e723979bdc04e83f37a8edd4f554a8e5afb019faf4b
+ size 447028518
main/train/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9abc09da5aaa4a389e6da1aff2073e0ac6c6867e0160f63bc384941ddd1b2577
+ size 439078904
main/train/0006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67bee0ebb4dab788337f4f023f393277c18f664c64c3cc822ed2e43f54598328
+ size 448929241
main/train/0007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c80e2e782bbb6d321b19c92ebbea7794cd11c0278891e6e3cceceb0d98aa6a6
+ size 421190487
main/train/0008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44142f42b36970a117f4b23d4925fe5700f29bb338946df3f09dd9a91253b203
+ size 430892942
main/train/0009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:456f37796630e65e718a1565d9dcba07698f980cb6344fa8dee7bffa3ecba7f7
+ size 454240101
main/train/0010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbb17b7778690f69aa3be4a4ab23efbbf4c8865632799f2874918cb3b43b9ca7
+ size 468908412
main/train/0011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2019fd0f995eb311773aaca8ab6a3383d78bef52f0259b163bced1c372f9ae3b
+ size 433448483
main/train/0012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82ae53d1361507e30bfaeac080fd4c054f9a866243dc87cb0ab65513b8ee05a2
+ size 447928238
main/train/0013.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac0323524e39bcce97b14f85f23330a1c899a4446d14fb1fb4e622b031283743
+ size 425435243
main/train/0014.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a3def40fd1ab4f993b77634c49ca804a0f8bd5ab7e10d6f38962cb35d4cef41
+ size 408578405
main/train/0015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:077b3d1f2a6db67cdd8201a55d8f406ca5d49dc56147cebfd1e6e944260c0b76
+ size 425807561
main/train/0016.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:faa8f13a2a2132818bce1c6997e016389e06b7c9539521ed470674b36a8759fc
+ size 453612283
main/train/0017.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21d3c5b5c8c0ff22d615f08046da73696b7a38904840107824b0dae2d2dea270
+ size 496703992
main/train/0018.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20a98a952dec559aa58f6133082a28be414f94d913878d2f739e35349021d098
+ size 487519230
main/train/0019.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f40e878095e1a2844a0289ec1f4347c6b4129d3c39b8a20837f5e3338aa4ea74
+ size 421850585
main/train/0020.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc444ec93574cb8f25bed83b1999f5c7245d9737b08f5303571402b9a695ad2f
+ size 416165865
main/train/0021.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bfedf41ed788ef28a1b45f6aff1b22e4559f17b819039fb3dd207a712b27543
+ size 432364498
main/train/0022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9a718c4071c6a179a4023589ef4ef51174e29d10cac8b514f12c995f4f3394e
+ size 419557215
main/train/0023.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c525cebac285d7dcac8d4428900984523e7fd866a51e238efa6b5ff8c257235
+ size 386412884
main/train/0024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c67d95a3d1d21b6ff0d927b8200090ce2784218196a5022e5265efb49497e79
+ size 408793993
main/train/0025.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91eb1570e96b4855da8198df91532aa1a2dd897d109c9cd2736a1e814f18c652
+ size 349888540
main/train/0026.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5d6a12b26d0c616d95536266d9e74832b6b74ab50aa918df56061f5bd417776
+ size 406514710
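
The `+` lines added for each Parquet shard are Git LFS pointer files: the repository stores only a short text stub (spec version, SHA-256 object id, byte size) while the actual Parquet payload lives in LFS storage. A minimal, stdlib-only sketch of reading such a pointer, using the text of the first shard's pointer above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict.

    Each non-empty line has the form 'key value'; the 'oid' value
    looks like 'sha256:<64 hex chars>'.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer text for main/train/0000.parquet, copied from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:832409b80208069614650af1438b242351d1ea6cc338547a39fd2617fbadef2d
size 471632807
"""
info = parse_lfs_pointer(pointer)
```

The `size` field is the byte count of the real object, which is why the diff view shows only three lines per ~400 MB shard.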
vctk.py DELETED
@@ -1,133 +0,0 @@
- # coding=utf-8
- # Copyright 2021 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """VCTK dataset."""
-
-
- import os
- import re
-
- import datasets
- from datasets.tasks import AutomaticSpeechRecognition
-
-
- _CITATION = """\
- @inproceedings{Veaux2017CSTRVC,
-     title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
-     author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
-     year = 2017
- }
- """
-
- _DESCRIPTION = """\
- The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents.
- """
-
- _URL = "https://datashare.ed.ac.uk/handle/10283/3443"
- _DL_URL = "https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip"
-
-
- class VCTK(datasets.GeneratorBasedBuilder):
-     """VCTK dataset."""
-
-     VERSION = datasets.Version("0.9.2")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="main", version=VERSION, description="VCTK dataset"),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "speaker_id": datasets.Value("string"),
-                     "audio": datasets.features.Audio(sampling_rate=48_000),
-                     "file": datasets.Value("string"),
-                     "text": datasets.Value("string"),
-                     "text_id": datasets.Value("string"),
-                     "age": datasets.Value("string"),
-                     "gender": datasets.Value("string"),
-                     "accent": datasets.Value("string"),
-                     "region": datasets.Value("string"),
-                     "comment": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=("file", "text"),
-             homepage=_URL,
-             citation=_CITATION,
-             task_templates=[AutomaticSpeechRecognition(audio_column="audio", transcription_column="text")],
-         )
-
-     def _split_generators(self, dl_manager):
-         root_path = dl_manager.download_and_extract(_DL_URL)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"root_path": root_path}),
-         ]
-
-     def _generate_examples(self, root_path):
-         """Generate examples from the VCTK corpus root path."""
-
-         meta_path = os.path.join(root_path, "speaker-info.txt")
-         txt_root = os.path.join(root_path, "txt")
-         wav_root = os.path.join(root_path, "wav48_silence_trimmed")
-         # NOTE: "comment" is handled separately in the logic below
-         fields = ["speaker_id", "age", "gender", "accent", "region"]
-
-         key = 0
-         with open(meta_path, encoding="utf-8") as meta_file:
-             _ = next(iter(meta_file))  # skip the header line
-             for line in meta_file:
-                 data = {}
-                 line = line.strip()
-                 search = re.search(r"\(.*\)", line)
-                 if search is None:
-                     data["comment"] = ""
-                 else:
-                     start, _ = search.span()
-                     data["comment"] = line[start:]
-                     line = line[:start]
-                 values = line.split()
-                 for i, field in enumerate(fields):
-                     if field == "region":
-                         data[field] = " ".join(values[i:])
-                     else:
-                         data[field] = values[i] if i < len(values) else ""
-                 speaker_id = data["speaker_id"]
-                 speaker_txt_path = os.path.join(txt_root, speaker_id)
-                 speaker_wav_path = os.path.join(wav_root, speaker_id)
-                 # NOTE: p315 does not have text
-                 if not os.path.exists(speaker_txt_path):
-                     continue
-                 for txt_file in sorted(os.listdir(speaker_txt_path)):
-                     filename, _ = os.path.splitext(txt_file)
-                     _, text_id = filename.split("_")
-                     for i in [1, 2]:
-                         wav_file = os.path.join(speaker_wav_path, f"{filename}_mic{i}.flac")
-                         # NOTE: p280 does not have mic2 files
-                         if not os.path.exists(wav_file):
-                             continue
-                         with open(os.path.join(speaker_txt_path, txt_file), encoding="utf-8") as text_file:
-                             text = text_file.readline().strip()
-                         more_data = {
-                             "file": wav_file,
-                             "audio": wav_file,
-                             "text": text,
-                             "text_id": text_id,
-                         }
-                         yield key, {**data, **more_data}
-                         key += 1
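
The deleted script's `speaker-info.txt` parsing, which this commit replaces with pre-built Parquet shards, can be exercised in isolation. Below is a self-contained sketch of the same logic: split off any parenthesized comment first, then map the remaining whitespace-separated tokens to fields, letting `region` absorb trailing tokens since region names may contain spaces. The sample line is hypothetical, shaped like the corpus metadata rather than copied from it:

```python
import re

# Field order in speaker-info.txt; "comment" is split off before tokenizing.
FIELDS = ["speaker_id", "age", "gender", "accent", "region"]


def parse_speaker_line(line: str) -> dict:
    """Parse one speaker-info.txt line the way the deleted vctk.py did."""
    data = {}
    line = line.strip()
    search = re.search(r"\(.*\)", line)
    if search is None:
        data["comment"] = ""
    else:
        start, _ = search.span()
        data["comment"] = line[start:]  # parenthesized remark, kept verbatim
        line = line[:start]
    values = line.split()
    for i, field in enumerate(FIELDS):
        if field == "region":
            # Region is last and may contain spaces, so it takes the rest.
            data[field] = " ".join(values[i:])
        else:
            data[field] = values[i] if i < len(values) else ""
    return data


# Hypothetical metadata line in the speaker-info.txt layout:
row = parse_speaker_line("p225  23  F    English  Southern England  (example remark)")
```

Because the field split happens after the comment is removed, a remark containing spaces or digits cannot contaminate the positional fields.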