add metadata before audio
- README.md +288 -0
- common_voice_17_0.py +198 -0
- languages.py +1 -0
- n_shards.json +10 -0
- release_stats.py +72 -0
- transcript/en/dev.tsv +0 -0
- transcript/en/invalidated.tsv +3 -0
- transcript/en/other.tsv +3 -0
- transcript/en/test.tsv +0 -0
- transcript/en/train.tsv +3 -0
- transcript/en/validated.tsv +3 -0
README.md
ADDED
@@ -0,0 +1,288 @@
---
pretty_name: Common Voice Corpus 17.0 (EN codec2)
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."
---

# Dataset Card for Common Voice Corpus 17.0

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:[email protected])

### Dataset Summary

Each entry in the Common Voice dataset consists of a unique MP3 file and a corresponding text file.
Many of the 31175 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.

The dataset currently consists of 20408 validated hours in 124 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.

You can [donate](https://commonvoice.mozilla.org/?form=common-voice) to this non-profit, donation-funded project.

### Supported Tasks and Leaderboards

The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench).

### Languages

```
Abkhaz, Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Haitian, Hakha Chin, Hausa, Hebrew, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latgalian, Latvian, Ligurian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Northern Sotho, Norwegian Nynorsk, Occitan, Odia, Ossetian, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Telugu, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Western Sierra Puebla Nahuatl, Yiddish, Yoruba, Zaza, Zulu
```
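
Each language is exposed as its own dataset config. If you want to check programmatically which language configs are defined, one possible sketch uses the standard `datasets` helper below (newer `datasets` releases may additionally require opting in to run the loading script):

```python
from datasets import get_dataset_config_names

# list the per-language configs defined by the loading script
configs = get_dataset_config_names("mozilla-foundation/common_voice_17_0")
print(len(configs), configs[:5])
```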

## How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset

cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train")
```

Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset

cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train", streaming=True)

print(next(iter(cv_17)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train")

batch_sampler = BatchSampler(RandomSampler(cv_17), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_17, batch_sampler=batch_sampler)
```

### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_17, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
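
For instance, Common Voice clips are stored at 48 kHz while many speech models expect 16 kHz input. A minimal sketch of on-the-fly resampling with the standard `Audio` feature (the 16 kHz target is an assumption, pick whatever your model needs):

```python
from datasets import Audio, load_dataset

cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train")
# re-declare the audio column so clips are decoded at 16 kHz instead of the original 48 kHz
cv_17 = cv_17.cast_column("audio", Audio(sampling_rate=16_000))
```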

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 17 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

## Dataset Structure

### Data Instances

A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.

```python
{
  'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
  'path': 'et/clips/common_voice_et_18318995.mp3',
  'audio': {
    'path': 'et/clips/common_voice_et_18318995.mp3',
    'array': array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346, 0.00091553, 0.00085449], dtype=float32),
    'sampling_rate': 48000
  },
  'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
  'up_votes': 2,
  'down_votes': 0,
  'age': 'twenties',
  'gender': 'male',
  'accent': '',
  'locale': 'et',
  'segment': ''
}
```

### Data Fields

`client_id` (`string`): An id for which client (voice) made the recording

`path` (`string`): The path to the audio file

`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
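
A quick illustration of that access order, assuming `cv_17` was loaded as in the examples above:

```python
# decodes and resamples only the single requested clip
sample = cv_17[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# avoid this: it would decode every clip in the split before indexing
# all_audio = cv_17["audio"][0]
```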

`sentence` (`string`): The sentence the user was prompted to speak

`up_votes` (`int64`): How many upvotes the audio file has received from reviewers

`down_votes` (`int64`): How many downvotes the audio file has received from reviewers

`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)

`gender` (`string`): The gender of the speaker

`accent` (`string`): Accent of the speaker

`locale` (`string`): The locale of the speaker

`segment` (`string`): Usually an empty field

### Data Splits

The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.

The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.

The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality.

The reported data is data that has been reported, for different reasons.

The other data is data that has not yet been reviewed.

The dev, test and train portions are all data that has been reviewed, deemed of high quality and split into dev, test and train.
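
Because the loading script in this repository builds one split per portion, the extra portions can be requested by name just like the standard splits; a small sketch:

```python
from datasets import load_dataset

# besides train / validation / test, the script also exposes
# "validated", "invalidated" and "other" as split names
cv_other = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="other")
```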

## Data Preprocessing Recommended by Hugging Face

The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.

Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.

In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.

```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_17_0", "en", use_auth_token=True)

def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription

    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)

### Citation Information

```
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
```
common_voice_17_0.py
ADDED
@@ -0,0 +1,198 @@
# coding=utf-8
# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Common Voice Dataset"""


import csv
import os
import json

import datasets
from datasets.utils.py_utils import size_str
from tqdm import tqdm

from .languages import LANGUAGES
from .release_stats import STATS


_CITATION = """\
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
"""

_HOMEPAGE = "https://commonvoice.mozilla.org/en/datasets"

_LICENSE = "https://creativecommons.org/publicdomain/zero/1.0/"

# TODO: change "streaming" to "main" after merge!
_BASE_URL = "https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0/resolve/main/"

_AUDIO_URL = _BASE_URL + "audio/{lang}/{split}/{lang}_{split}_{shard_idx}.tar"

_TRANSCRIPT_URL = _BASE_URL + "transcript/{lang}/{split}.tsv"

_N_SHARDS_URL = _BASE_URL + "n_shards.json"


class CommonVoiceConfig(datasets.BuilderConfig):
    """BuilderConfig for CommonVoice."""

    def __init__(self, name, version, **kwargs):
        self.language = kwargs.pop("language", None)
        self.release_date = kwargs.pop("release_date", None)
        self.num_clips = kwargs.pop("num_clips", None)
        self.num_speakers = kwargs.pop("num_speakers", None)
        self.validated_hr = kwargs.pop("validated_hr", None)
        self.total_hr = kwargs.pop("total_hr", None)
        self.size_bytes = kwargs.pop("size_bytes", None)
        self.size_human = size_str(self.size_bytes)
        description = (
            f"Common Voice speech to text dataset in {self.language} released on {self.release_date}. "
            f"The dataset comprises {self.validated_hr} hours of validated transcribed speech data "
            f"out of {self.total_hr} hours in total from {self.num_speakers} speakers. "
            f"The dataset contains {self.num_clips} audio clips and has a size of {self.size_human}."
        )
        super(CommonVoiceConfig, self).__init__(
            name=name,
            version=datasets.Version(version),
            description=description,
            **kwargs,
        )


class CommonVoice(datasets.GeneratorBasedBuilder):
    DEFAULT_WRITER_BATCH_SIZE = 1000

    BUILDER_CONFIGS = [
        CommonVoiceConfig(
            name=lang,
            version=STATS["version"],
            language=LANGUAGES[lang],
            release_date=STATS["date"],
            num_clips=lang_stats["clips"],
            num_speakers=lang_stats["users"],
            validated_hr=float(lang_stats["validHrs"]) if lang_stats["validHrs"] else None,
            total_hr=float(lang_stats["totalHrs"]) if lang_stats["totalHrs"] else None,
            size_bytes=int(lang_stats["size"]) if lang_stats["size"] else None,
        )
        for lang, lang_stats in STATS["locales"].items()
    ]

    def _info(self):
        total_languages = len(STATS["locales"])
        total_valid_hours = STATS["totalValidHrs"]
        description = (
            "Common Voice is Mozilla's initiative to help teach machines how real people speak. "
            f"The dataset currently consists of {total_valid_hours} validated hours of speech "
            f" in {total_languages} languages, but more voices and languages are always added."
        )
        features = datasets.Features(
            {
                "client_id": datasets.Value("string"),
                "path": datasets.Value("string"),
                "audio": datasets.features.Audio(sampling_rate=48_000),
                "sentence": datasets.Value("string"),
                "up_votes": datasets.Value("int64"),
                "down_votes": datasets.Value("int64"),
                "age": datasets.Value("string"),
                "gender": datasets.Value("string"),
                "accent": datasets.Value("string"),
                "locale": datasets.Value("string"),
                "segment": datasets.Value("string"),
                "variant": datasets.Value("string"),
            }
        )

        return datasets.DatasetInfo(
            description=description,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
            version=self.config.version,
        )

    def _split_generators(self, dl_manager):
        lang = self.config.name
        n_shards_path = dl_manager.download_and_extract(_N_SHARDS_URL)
        with open(n_shards_path, encoding="utf-8") as f:
            n_shards = json.load(f)

        audio_urls = {}
        splits = ("train", "dev", "test", "other", "invalidated", "validated")
        for split in splits:
            audio_urls[split] = [
                _AUDIO_URL.format(lang=lang, split=split, shard_idx=i) for i in range(n_shards[lang][split])
            ]
        archive_paths = dl_manager.download(audio_urls)
        local_extracted_archive_paths = dl_manager.extract(archive_paths) if not dl_manager.is_streaming else {}

        meta_urls = {split: _TRANSCRIPT_URL.format(lang=lang, split=split) for split in splits}
        meta_paths = dl_manager.download_and_extract(meta_urls)

        split_generators = []
        split_names = {
            "train": datasets.Split.TRAIN,
            "dev": datasets.Split.VALIDATION,
            "test": datasets.Split.TEST,
        }
        for split in splits:
            split_generators.append(
                datasets.SplitGenerator(
                    name=split_names.get(split, split),
                    gen_kwargs={
                        "local_extracted_archive_paths": local_extracted_archive_paths.get(split),
                        "archives": [dl_manager.iter_archive(path) for path in archive_paths.get(split)],
                        "meta_path": meta_paths[split],
                    },
                ),
            )

        return split_generators

    def _generate_examples(self, local_extracted_archive_paths, archives, meta_path):
        data_fields = list(self._info().features.keys())
        metadata = {}
        with open(meta_path, encoding="utf-8") as f:
            reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
            for row in tqdm(reader, desc="Reading metadata..."):
                if not row["path"].endswith(".mp3"):
                    row["path"] += ".mp3"
                # accent -> accents in CV 8.0
                if "accents" in row:
                    row["accent"] = row["accents"]
                    del row["accents"]
                # if data is incomplete, fill with empty values
                for field in data_fields:
                    if field not in row:
                        row[field] = ""
                metadata[row["path"]] = row

        for i, audio_archive in enumerate(archives):
            for path, file in audio_archive:
                _, filename = os.path.split(path)
                if filename in metadata:
                    result = dict(metadata[filename])
                    # set the audio feature and the path to the extracted file
                    path = os.path.join(local_extracted_archive_paths[i], path) if local_extracted_archive_paths else path
                    result["audio"] = {"path": path, "bytes": file.read()}
                    result["path"] = path
                    yield path, result
languages.py
ADDED
@@ -0,0 +1 @@
LANGUAGES = {"en": "English"}
n_shards.json
ADDED
@@ -0,0 +1,10 @@
{
    "en": {
        "train": 28,
        "dev": 1,
        "test": 1,
        "other": 9,
        "invalidated": 8,
        "validated": 45
    }
}
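
These shard counts are consumed by `common_voice_17_0.py` above: for each split, the script formats `_AUDIO_URL` once per shard index. A small sketch of that expansion (illustration only, not part of the loading script):

```python
import json

# same URL template as _AUDIO_URL in common_voice_17_0.py
AUDIO_URL = (
    "https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0/resolve/main/"
    "audio/{lang}/{split}/{lang}_{split}_{shard_idx}.tar"
)

with open("n_shards.json", encoding="utf-8") as f:
    n_shards = json.load(f)

# e.g. 28 English train shards -> en_train_0.tar ... en_train_27.tar
train_urls = [
    AUDIO_URL.format(lang="en", split="train", shard_idx=i)
    for i in range(n_shards["en"]["train"])
]
print(len(train_urls), train_urls[0])
```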
release_stats.py
ADDED
@@ -0,0 +1,72 @@
STATS = {
    "locales": {
        "en": {
            "buckets": {
                "validated": 1799288,
                "invalidated": 292773,
                "dev": 16393,
                "test": 16393,
                "train": 1101170,
                "other": 321347,
            },
            "duration": 12625984447,
            "reportedSentences": 7928,
            "validatedSentences": 1676433,
            "unvalidatedSentences": 2294,
            "clips": 2413408,
            "splits": {
                "accent": {},
                "age": {
                    "": 0.36,
                    "twenties": 0.25,
                    "thirties": 0.14,
                    "teens": 0.06,
                    "fourties": 0.09,
                    "fifties": 0.05,
                    "sixties": 0.04,
                    "seventies": 0.01,
                    "eighties": 0,
                    "nineties": 0,
                },
                "gender": {
                    "": 0.38,
                    "male_masculine": 0.45,
                    "female_feminine": 0.17,
                    "transgender": 0,
                    "non-binary": 0,
                    "do_not_wish_to_say": 0,
                },
                "sentence_domain": {
                    "": 2413304,
                    "agriculture": 1,
                    "automotive": 0,
                    "finance": 0,
                    "food_service_retail": 6,
                    "general": 60,
                    "healthcare": 1,
                    "history_law_government": 1,
                    "language_fundamentals": 0,
                    "media_entertainment": 8,
                    "nature_environment": 9,
                    "news_current_affairs": 2,
                    "technology_robotics": 16,
                },
            },
            "users": 92325,
            "size": 88478352967,
            "checksum": "e55889fb825803d8eea9deaddd7cae1421470464d892d75dcda670477ce2cb56",
            "avgDurationSecs": 5.232,
            "validDurationSecs": 9413154.47,
            "totalHrs": 3507.21,
            "validHrs": 2614.76,
        },
    },
    "totalDuration": 12625984447,
    "totalValidDurationSecs": 9413154.47,
    "totalHrs": 3507.21,
    "totalValidHrs": 2614.76,
    "version": "17.0.0",
    "date": "2024-03-25",
    "name": "Common Voice Corpus 17.0 (EN codec2)",
    "multilingual": True,
}
transcript/en/dev.tsv
ADDED
The diff for this file is too large to render.
transcript/en/invalidated.tsv
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a43b5d7d21c6327fa49fc01ec4ee1d64d504b9e375004d77df8f1fde5965319d
size 93743750
transcript/en/other.tsv
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd736bf7c28f62fc0cafb3ddaab0e57eb1c1523f758baa3b64e844242aabb81e
size 103222438
transcript/en/test.tsv
ADDED
The diff for this file is too large to render.
transcript/en/train.tsv
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed0cc29724b3ba2ab77f424b6e28279dc4bf98708ab8c81fddb148a2a8d52b4b
size 363282006
transcript/en/validated.tsv
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:347b0882211193da2ffbc841a95f72bf8d9b2b6e5de348abc516f07b8952abe6
size 573371447