---
language: fa
pretty_name: Farsi Youtube 2024 ASR Dataset
tags:
- Farsi
- Persian
- ASR
- youtube
task_categories:
- automatic-speech-recognition
dataset_size: "N > 400k"
dataset_info:
  splits:
  - name: unvalidated
    num_examples: 425468
license: cc0-1.0
---

# Farsi Youtube 2024 ASR Dataset

This dataset consists of over **385** hours of transcribed audio extracted from various YouTube videos in the Persian language (more than 400k rows).

## Dataset Description

The dataset includes Farsi content from various types of videos, spanning from older productions up to mid-2024, including:

- Podcasts
- TV Shows
- Educational Content
- Interviews
- Documentaries

Utterances and sentences are extracted based on the timing of subtitles. The list of videos used in this dataset is stored in the `yt_ids.csv` file, one video per line (video ID, date, title):

```
13XpMM7RT2c 20231207 سرگذشت پُل پوت هیولای کامبوج و رهبر خمرهای سرخ
yU6LtnpVKLo 20231210 راز بزرگترین جاسوس عرب|بیوگرافی اشرف مروان
b9cTFkO6Q18 20231214 دقیقا چه اتفاقی افتاده؟ بالاخره توی این درگیری کی پیروز شد؟ 7 -27 نوامبر
wW76xHcxw48 20231217 حقایق شنیده نشده درباره نجات دنیا از جنگ هسته ای!
pr1dNDD6viM 20231123 افشای زندگی صدام حسین! | قسمت دوم
...
```

## Note

This dataset contains raw, unvalidated, auto-generated transcriptions. Transcriptions may contain errors, and subtitle timing may occasionally be imprecise. Considerable effort has gone into cleaning the data with various methods and tools, but users are still advised to:

- Perform their own quality assessment
- Create their own train/validation/test splits based on their specific needs
- Validate a subset of the data if needed for their use case

To validate the data, you can use [AnnoTitan](https://github.com/dhpour/annotitan), a crowdsourcing app developed for this kind of ASR data.

## Usage
Huggingface `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset('PerSets/youtube-persian-asr', trust_remote_code=True)
```