HuggingFace, IISc partner to supercharge model building on India's diverse languages

Published February 27, 2025

The Indian Institute of Science (IISc) and ARTPARK have partnered with Hugging Face to enable developers across the globe to access Vaani, India's most diverse open-source, multi-modal, multi-lingual dataset. Both organisations share a commitment to building inclusive, accessible, and state-of-the-art AI technologies that honor linguistic and cultural diversity.

Partnership

The partnership between Hugging Face and IISc/ARTPARK aims to increase the accessibility and usability of the Vaani dataset, encouraging the development of AI systems that better understand India's diverse languages and cater to the digital needs of its people.

About Vaani Dataset

Launched in 2022 by IISc/ARTPARK and Google, Project Vaani is a pioneering initiative aimed at creating an open-source multi-modal dataset that truly represents India's linguistic diversity. This dataset is unique in its geo-centric approach, allowing for the collection of dialects and languages spoken in remote regions rather than focusing solely on mainstream languages.

Vaani targets the collection of over 150,000 hours of speech, with transcriptions for 15,000 of those hours, from 1 million people across all 773 districts, ensuring diversity in language, dialects, and demographics.

The dataset is being built in phases. Phase 1, covering 80 districts, has already been open-sourced. Phase 2 is currently underway, expanding the dataset to 100 more districts and further strengthening Vaani's reach and impact across India's diverse linguistic landscape.

Key Highlights

Key highlights of the Vaani dataset open-sourced so far (as of 15-02-2025):

District-wise language distribution

The Vaani dataset shows a rich distribution of languages across India's districts, highlighting linguistic diversity at a local level. This information is valuable for researchers, AI developers, and language technology innovators looking to build speech models tailored to specific regions and dialects. To explore the detailed district-wise language distribution, visit: Vaani Dataset on HuggingFace
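To get a feel for this breakdown programmatically, here is a minimal sketch that tallies languages per district from metadata rows. The field names (`district`, `language`) and the repo id in the comment are assumptions for illustration, not the dataset's documented schema:

```python
from collections import Counter, defaultdict

def language_distribution(rows):
    """Tally how many recordings each language has, per district."""
    dist = defaultdict(Counter)
    for row in rows:
        dist[row["district"]][row["language"]] += 1
    return dist

# Hypothetical rows mimicking Vaani metadata (field names are assumptions):
rows = [
    {"district": "Bidar", "language": "Kannada"},
    {"district": "Bidar", "language": "Urdu"},
    {"district": "Araria", "language": "Maithili"},
]
dist = language_distribution(rows)

# On the real dataset, one could stream rows from the Hub instead, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("ARTPARK-IISc/Vaani", split="train", streaming=True)
#   (the repo id and split name here are assumptions; check the dataset card)
```

Streaming mode avoids downloading the full multi-terabyte corpus just to inspect its metadata.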

Transcribed subset

If you need only transcribed data and would like to skip the untranscribed audio-only recordings, a subset of the larger dataset has been open-sourced here. This subset contains 790 hours of transcribed audio from ~7L speakers, covering 70K images. This resource includes smaller, segmented audio units matched with precise transcriptions, allowing for different tasks including:

  • Speech Recognition: Training models to accurately transcribe spoken language.
  • Language Modeling: Building more refined language models.
  • Segmentation Tasks: Identifying distinct speech units for improved transcription accuracy.

This additional dataset complements the main Vaani dataset, making it possible to develop end-to-end speech recognition systems and more targeted AI solutions.
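As a rough illustration of how such segmented units pair with their transcriptions for ASR training, here is a small sketch on mock records; the field names (`audio_path`, `duration_s`, `text`) are hypothetical, not the subset's actual schema:

```python
def asr_pairs(segments):
    """Keep only segments that carry a transcription, as (audio, text) pairs."""
    return [(s["audio_path"], s["text"]) for s in segments if s.get("text")]

def total_hours(segments):
    """Total audio duration in hours across all segments."""
    return sum(s["duration_s"] for s in segments) / 3600.0

# Mock segments standing in for the transcribed subset's records:
segments = [
    {"audio_path": "a.wav", "duration_s": 5.0, "text": "namaste"},
    {"audio_path": "b.wav", "duration_s": 7.2, "text": ""},      # untranscribed
    {"audio_path": "c.wav", "duration_s": 3.6, "text": "vanakkam"},
]
pairs = asr_pairs(segments)  # only the two transcribed segments survive
```

Filtering out empty transcriptions up front keeps the training pairs clean for a speech-recognition fine-tuning run.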

Utility of Vaani in the Age of LLMs

The Vaani dataset offers several key advantages: extensive language coverage (54 languages), representation across diverse geographical regions, speakers from diverse educational and socio-economic backgrounds, very large speaker coverage, spontaneous speech data, and real-life data collection environments. These features can enable inclusive AI models for:

  • Speech-to-Text and Text-to-Speech: Fine-tuning these models for both LLM and non-LLM-based applications. Additionally, the transcription tagging enables the development of code-switching (Indic and English) ASR models.
  • Foundational Speech Models for Indic Languages: The dataset's significant linguistic and geographical coverage supports the development of robust foundational models for Indic languages.
  • Speaker Identification/Verification Models: With data from over 80,000 speakers, the dataset is well-suited for developing robust speaker identification and verification models.
  • Language Identification Models: Enables the creation of language identification models for various real-world applications.
  • Speech Enhancement Systems: The dataset's tagging system supports the development of advanced speech enhancement technologies.
  • Enhancing Multimodal LLMs: The unique data collection approach makes it valuable for building and improving multimodal capabilities in LLMs when combined with other multimodal datasets.
  • Performance Benchmarking: The dataset is an ideal choice for benchmarking speech models due to its diverse linguistic, geographical, and real-world data properties.

These AI models can power a wide range of Conversational AI applications. From educational tools to telemedicine platforms, healthcare solutions, voter helplines, media localization, and multilingual smart devices, the Vaani dataset can be a game-changer in real-world scenarios.

What's next

IISc/ARTPARK and Google have extended the partnership to Phase 2 (additional 100 districts). With this, Vaani covers all states in India! We are excited to bring this dataset to all of you.

Map of districts where data has been collected: the map highlights the districts across India where data has been collected as of Feb 5, 2025.

How You Can Contribute

The most meaningful contribution you can make is to use the Vaani dataset. Whether building new AI applications, conducting research, or exploring innovative use cases, your engagement helps improve and expand the project.

We would be delighted to hear from you if you have feedback or insights from using the dataset. Please reach out to vaanicontact@gmail.com to share your experiences or inquire about collaboration opportunities, or fill out this feedback form.

Made with ❤️ for India's linguistic diversity
