Bharat Ramanathan committed
Commit dfb50ac · Parent: 797ac78

First version of the bengali_asr_corpus dataset.

Files changed (2):
  1. README.md (+144, -0)
  2. bengali_asr_corpus.py (+104, -0)
README.md ADDED
---
annotations_creators:
- found
language:
- bn
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Bengali ASR Corpus
size_categories:
- 100K<n<1M
source_datasets:
- extended|openslr
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---

# Dataset Card for Bengali ASR Corpus

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The corpus contains roughly 500 hours of transcribed Bengali (Bangla) speech, collected from the madasr, indictts, kathbath, openslr53, openslr37, and ai4bharat corpora and filtered to utterances between 2 and 30 seconds long. Transcripts were de-duplicated using exact-match deduplication, and the audio was resampled to 16 kHz.

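A minimal usage sketch, assuming the dataset is published on the Hugging Face Hub; the repo id `parambharat/bengali_asr_corpus` below is a hypothetical placeholder, not confirmed by this commit:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the dataset's actual Hub path.
ds = load_dataset("parambharat/bengali_asr_corpus", split="train", streaming=True)

for example in ds:
    print(example["sentence"])                # transcript
    print(example["duration"])                # clip length in seconds
    print(example["audio"]["sampling_rate"])  # 16000, per the loading script
    break
```
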
### Supported Tasks and Leaderboards

- `automatic-speech-recognition`: the dataset can be used to train and evaluate models that transcribe spoken Bengali into text.

### Languages

The audio and transcripts are in Bengali (`bn`).

## Dataset Structure

### Data Instances

Each instance corresponds to one utterance and holds the decoded audio, the path to the audio file, the transcript, and the clip duration.

### Data Fields

- `audio`: the audio clip, decoded at a 16 kHz sampling rate
- `path`: path to the audio file within the archive
- `sentence`: the transcript of the utterance
- `duration`: length of the clip in seconds (float)

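These fields mirror the `Features` schema declared in the loading script, `bengali_asr_corpus.py`:

```python
import datasets

features = datasets.Features(
    {
        "audio": datasets.Audio(sampling_rate=16_000),
        "path": datasets.Value("string"),
        "sentence": datasets.Value("string"),
        "duration": datasets.Value("float"),
    }
)
```
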
### Data Splits

The dataset ships with a single `train` split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license (see the `license` field in the card metadata).

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
bengali_asr_corpus.py ADDED
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Filtered Bengali ASR corpus collected from the madasr, indictts, kathbath, openslr53, openslr37, and ai4bharat corpora, filtered to clips between 2 and 30 seconds long."""


import json
import os

import datasets

_CITATION = """
"""

_DESCRIPTION = """\
The corpus contains roughly 500 hours of audio and transcripts in the Bangla language.
The transcripts have been de-duplicated using exact-match deduplication, and the audio has been converted to a 16,000 Hz sampling rate.
"""

_HOMEPAGE = ""

# The dataset card pins the license to cc-by-4.0, so link the full deed.
_LICENSE = "https://creativecommons.org/licenses/by/4.0/"

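# Each line of data/train.jsonl is expected to hold one JSON object per clip,
# with at least the "path", "sentence", and "duration" keys that the builder
# reads below; an illustrative (hypothetical) record:
#
#     {"path": "train/0001.wav", "sentence": "<Bengali transcript>", "duration": 4.2}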
_METADATA_URLS = {
    "train": "data/train.jsonl",
}
_URLS = {
    "train": "data/train.tar.gz",
}


class BengaliASRCorpus(datasets.GeneratorBasedBuilder):
    """Bengali ASR Corpus: transcribed speech for training ASR systems for the Bengali language."""

    VERSION = datasets.Version("1.1.0")

    def _info(self):
        features = datasets.Features(
            {
                "audio": datasets.Audio(sampling_rate=16_000),
                "path": datasets.Value("string"),
                "sentence": datasets.Value("string"),
                "duration": datasets.Value("float"),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            # The features carry no "label" column, so there is no supervised pair to declare.
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        metadata_paths = dl_manager.download(_METADATA_URLS)
        train_archive = dl_manager.download(_URLS["train"])
        # In streaming mode the archive is iterated directly, so skip extraction.
        local_extracted_train_archive = (
            dl_manager.extract(train_archive) if not dl_manager.is_streaming else None
        )
        train_dir = "train"

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "metadata_path": metadata_paths["train"],
                    "local_extracted_archive": local_extracted_train_archive,
                    "path_to_clips": train_dir,
                    "audio_files": dl_manager.iter_archive(train_archive),
                },
            ),
        ]

    def _generate_examples(self, metadata_path, local_extracted_archive, path_to_clips, audio_files):
        """Yields examples as (key, example) tuples."""
        # Index the metadata by audio path so each archive member can be matched to its transcript.
        examples = {}
        with open(metadata_path, encoding="utf-8") as f:
            for row in f:
                data = json.loads(row)
                examples[data["path"]] = data
        inside_clips_dir = False
        id_ = 0
        for path, f in audio_files:
            if path.startswith(path_to_clips):
                inside_clips_dir = True
                if path in examples:
                    result = examples[path]
                    path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
                    result["audio"] = {"path": path, "bytes": f.read()}
                    result["path"] = path
                    yield id_, result
                    id_ += 1
            elif inside_clips_dir:
                # Archive members are grouped by directory, so once past the clips directory we can stop.
                break
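
# ---------------------------------------------------------------------------
# Smoke-test sketch (an assumption, not part of the committed script): with
# data/train.jsonl and data/train.tar.gz present next to this file, the
# builder can be exercised locally through `datasets`:
#
#     from datasets import load_dataset
#
#     ds = load_dataset("bengali_asr_corpus.py", split="train")
#     print(ds[0]["path"], ds[0]["duration"])
#     print(ds[0]["sentence"])
# ---------------------------------------------------------------------------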