
A Besemah Wav2Vec2 model, created by fine-tuning the multilingual XLS-R model on Besemah speech.

This model accompanies the paper: Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation. More information is available on GitHub.
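As a quick start, here is a minimal transcription sketch using the `transformers` library. The repository ID and audio file name below are placeholders (substitute the actual model ID from this page), and the 16 kHz sampling rate is assumed because XLS-R-based models expect 16 kHz mono input.

```python
# Minimal sketch: transcribe a Besemah audio file with this model.
# MODEL_ID and "example.wav" are placeholders, not values from this card.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL_ID = "your-namespace/wav2vec2-xls-r-besemah"  # placeholder repository ID

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Load audio as 16 kHz mono, the input format XLS-R models expect.
speech, sr = librosa.load("example.wav", sr=16_000)

inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```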
