Jeli-ASR Dataset

This repository contains the Jeli-ASR dataset, primarily a reviewed version of Aboubacar Ouattara's Bambara-ASR dataset (itself drawn from jeli-asr and available at oza75/bambara-asr), combined with the best data retained from the former version, jeli-data-manifest. The dataset offers improved quality for automatic speech recognition (ASR) and translation tasks, with variable-length Bambara audio samples, Bambara transcriptions, and French translations.

Important Note

Please note that this dataset is currently in development and is therefore not fixed. The structure, content, and availability of the dataset may change as improvements and updates are made.


Key Changes in This Version

1. Name Change

  • The dataset name was changed from jeli-data-manifest to jeli-asr.

2. Mono Channel Conversion

  • All stereo audio files have been converted to mono to ensure consistency across the dataset.
    • This step was required only for the jeli-asr-rmai subset, as oza-bam-asr was already mono (a minimal sketch of the conversion is shown below).
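
Not the repository's convert_to_mono_channel.py itself, but a minimal sketch of the same operation using the soundfile library (file names are placeholders):

import numpy as np
import soundfile as sf

def to_mono(in_path, out_path):
    # soundfile returns a (frames, channels) array for multi-channel audio
    audio, sr = sf.read(in_path)
    if audio.ndim == 2:
        audio = audio.mean(axis=1)  # average the channels into one track
    sf.write(out_path, audio, sr)

to_mono("stereo-sample.wav", "mono-sample.wav")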

3. Removal of Misaligned Samples

  • More than 70% of the data in the previous version was misaligned, due to concatenation issues that propagated misalignment across the dataset.
  • A filtering process was applied using both manual classification and trained classifiers:
    • A subset of the data was manually classified as aligned or misaligned.
    • This subset was used to train classifiers (Logistic Regression and XGBoost) to label the remaining samples.
  • Classifier performance:
    • Best-performing model: Logistic Regression
      • Accuracy: 0.84
      • F1-score (misaligned - class 0): 0.86
      • F1-score (aligned - class 1): 0.82
  • Training details:
    • Balanced training set: positive samples (aligned) were supplemented with additional aligned samples from Oza's Bambara-ASR dataset.
    • Misaligned samples: no supplementation was needed, as they already formed the majority.
    • Embedding processing: the manually labeled data was represented as embeddings, obtained by running Wav2Vec (audio) and BERT (text) inference and concatenating the two vectors for each example; each concatenated embedding was labeled as aligned or misaligned. A sketch of this step appears below.

Misaligned samples identified during classification were removed. That subset is currently undergoing further review and may be partially reintegrated in a future version of this dataset.
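
For illustration, here is a minimal sketch of that classification step. The embeddings are stood in for by random arrays; in practice they would be the concatenated Wav2Vec and BERT vectors described above, and all variable names here are hypothetical:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
audio_emb = rng.normal(size=(200, 768))   # placeholder for Wav2Vec embeddings
text_emb = rng.normal(size=(200, 768))    # placeholder for BERT embeddings
labels = rng.integers(0, 2, size=200)     # 0 = misaligned, 1 = aligned

features = np.hstack([audio_emb, text_emb])  # one concatenated vector per example

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))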

4. Integration of Oza's Bambara-ASR Dataset

  • This version integrates a clean subset from Oza's Bambara-ASR dataset, making up about 90% of the data.

5. Lowercased Transcriptions

  • All transcriptions and translations have been converted to lowercase for consistency.

6. Silent/Empty File Filtering

  • Silent, empty, or otherwise inaudible audio files were removed.

Directory Structure

jeli-asr/
β”‚
β”œβ”€β”€ README.md
β”œβ”€β”€ metadata.jsonl
β”œβ”€β”€ manifests/
β”‚   β”œβ”€β”€ jeli-asr-rmai-test-manifest.json
β”‚   β”œβ”€β”€ jeli-asr-rmai-train-manifest.json
β”‚   β”œβ”€β”€ oza-bam-asr-test-manifest.json
β”‚   β”œβ”€β”€ oza-bam-asr-train-manifest.json
β”‚   β”œβ”€β”€ train-manifest.json # jeli-asr-rmai-train-manifest.json + oza-bam-asr-train-manifest.json
β”‚   └── test-manifest.json # jeli-asr-rmai-test-manifest.json + oza-bam-asr-test-manifest.json
β”‚
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ clean_tsv.py
β”‚   β”œβ”€β”€ convert_to_mono_channel.py
β”‚   β”œβ”€β”€ create_data_manifest.py
β”‚   β”œβ”€β”€ create_manifest_oza_bam_asr.py
β”‚   β”œβ”€β”€ filter_silent_and_inaudible.py
β”‚   └── lower_transcriptions_in_manifests.py
β”‚
β”œβ”€β”€ french-manifests/
β”‚   β”œβ”€β”€ jeli-asr-rmai-test-french-manifest.json
β”‚   β”œβ”€β”€ jeli-asr-rmai-train-french-manifest.json
β”‚   β”œβ”€β”€ oza-bam-asr-test-french-manifest.json
β”‚   └── oza-bam-asr-train-french-manifest.json
β”‚
β”œβ”€β”€ jeli-asr-rmai/
β”‚   β”œβ”€β”€ train/
β”‚   └── test/
β”‚
β”œβ”€β”€ bam-asr-oza/
β”‚   β”œβ”€β”€ train/
β”‚   └── test/

manifests/ Directory

This directory contains the manifest files used for training automatic speech recognition (ASR) and text-to-speech (TTS) models. Each line in a manifest file is a JSON object with the following structure:

{
  "audio_filepath": "jeli-asr/bam-asr-oza/train/oza75-bam-asr-14.wav", 
  "duration": 4.888, 
  "text": "n'o tΙ› n'a fΙ”ra den o den ma ko yiriba, i b'a kΙ”lΙ”si a bΙ›na kΙ› mΙ”gΙ”jΙ›mΙ”gΙ” ye don dΙ”."
}
  • audio_filepath: The relative path to the corresponding audio file.
  • duration: The duration of the audio file in seconds.
  • text: The transcription of the audio in Bambara.
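
Because each line is an independent JSON object, a manifest can be read with just the standard library; a minimal sketch:

import json

manifest_path = "jeli-asr/manifests/train-manifest.json"

with open(manifest_path, encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]  # one JSON object per line

print(samples[0]["audio_filepath"], samples[0]["duration"])
print(samples[0]["text"])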

french-manifests/ Directory

This directory contains the French equivalents of the manifest files. The structure mirrors the manifests/ directory, but the text field holds the French translation of each audio sample.
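
Assuming the French manifests use the same audio_filepath keys as their Bambara counterparts (an assumption based on the shared structure), transcription/translation pairs could be assembled along these lines:

import json

def read_manifest(path):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

bam = read_manifest("jeli-asr/manifests/jeli-asr-rmai-train-manifest.json")
fra = read_manifest("jeli-asr/french-manifests/jeli-asr-rmai-train-french-manifest.json")

# Match on audio_filepath rather than relying on identical line order
fra_by_path = {e["audio_filepath"]: e["text"] for e in fra}
pairs = [(e["text"], fra_by_path[e["audio_filepath"]])
         for e in bam if e["audio_filepath"] in fra_by_path]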


Scripts Explanation

1. convert_to_mono_channel.py

  • Converts stereo audio files to mono.
  • Ensures consistent audio channel dimensions.

2. filter_silent_and_inaudible.py

  • Filters out silent or inaudible audio files.
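
Not the script itself, but a sketch of one common approach: flag files whose overall RMS energy falls below a threshold (the threshold value here is an assumption, not the one used by the script):

import numpy as np
import soundfile as sf

def is_silent(path, rms_threshold=1e-4):
    # Treat a file as silent if its root-mean-square energy is near zero
    audio, _ = sf.read(path)
    if audio.ndim == 2:
        audio = audio.mean(axis=1)
    return np.sqrt(np.mean(np.square(audio))) < rms_threshold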

3. lower_transcriptions_in_manifests.py

  • Converts all text in the manifest files to lowercase for uniform formatting.
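
A minimal sketch of such a pass (paths are placeholders; the repository script may differ in its details):

import json

with open("train-manifest.json", encoding="utf-8") as src, \
     open("train-manifest-lower.json", "w", encoding="utf-8") as dst:
    for line in src:
        entry = json.loads(line)
        entry["text"] = entry["text"].lower()  # lowercase the transcription
        dst.write(json.dumps(entry, ensure_ascii=False) + "\n")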

4. clean_tsv.py

  • Removes some of the most common issues from the .tsv transcription files produced during the January 2023 revision of the dataset, such as unwanted characters (", <>), consecutive tabs (which made some rows inconsistent), and spacing errors (used to create jeli-data-manifest).
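
Not the script itself, but a sketch of the kind of normalization it performs (the exact rules are assumptions):

import re

def clean_line(line):
    line = re.sub(r'["<>]', "", line)   # drop unwanted characters
    line = re.sub(r"\t+", "\t", line)   # collapse consecutive tabs
    line = re.sub(r" {2,}", " ", line)  # collapse runs of spaces
    return line.strip()

print(clean_line('speaker\t\t"hello  <world>"'))  # -> 'speaker\thello world'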

5. create_data_manifest.py

  • A script used to create the training and testing manifest files. It re-samples the audio files published as the first version of the Jeli-ASR dataset and generates the corresponding JSON manifest files (used to create jeli-data-manifest).

6. create_manifest_oza_bam_asr.py

  • Creates the manifest files for the clean subset of oza75/bambara-asr.

Dataset Details

  • Total Duration: 32.48 hours
  • Number of Samples: 33,643
    • Training Set: 32,180 samples (~95%)
    • Testing Set: 1,463 samples (~5%)

Subsets:

  • Oza's Bambara-ASR: ~29 hours (clean subset).
  • Jeli-ASR-RMAI: ~3.5 hours (filtered subset).

Note that since the two subsets were drawn from the original Jeli-ASR dataset, they are simply different variations of the same data.


Usage

The manifest files are specifically created for training automatic speech recognition (ASR) models in the NVIDIA NeMo framework, but they can be used with any other framework that supports manifest-based input formats, or reformatted for other use cases.

To use the dataset, simply load the manifest files (train-manifest.json and test-manifest.json) in your training script. The file paths for the audio files and the corresponding transcriptions are already provided in these manifest files.

Downloading the Dataset:

# Option 1: clone the dataset repository, preserving the directory structure
!git clone https://huggingface.co/datasets/RobotsMali/jeli-asr

# Option 2: load the dataset as a Hugging Face Dataset object
from datasets import load_dataset

dataset = load_dataset("RobotsMali/jeli-asr")

Finetuning Example in NeMo:

from nemo.collections.asr.models import ASRModel
train_manifest = 'jeli-asr/manifests/train-manifest.json'
test_manifest = 'jeli-asr/manifests/test-manifest.json'

asr_model = ASRModel.from_pretrained("QuartzNet15x5Base-En")

# Point the model at the Jeli-ASR manifests; for Bambara, the model's character
# vocabulary should also be adapted before training (e.g. change_vocabulary on CTC models)
asr_model.setup_training_data(train_data_config={'manifest_filepath': train_manifest})
asr_model.setup_validation_data(val_data_config={'manifest_filepath': test_manifest})
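
To actually launch fine-tuning, the configured model would then be handed to a PyTorch Lightning trainer; a minimal sketch (the trainer settings are illustrative, not tuned values):

import pytorch_lightning as pl

trainer = pl.Trainer(max_epochs=50, accelerator="gpu", devices=1)
trainer.fit(asr_model)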

Known Issues

While significantly improved, this dataset may still contain a few slightly misaligned samples. It also retains most of the issues of the original dataset, such as:

  • Inconsistent transcriptions
  • Non-standardized naming conventions
  • Language and spelling issues

Citation

If you use this dataset in your research or project, please credit the creators of the original datasets.
