diff --git "a/train_notebook.ipynb" "b/train_notebook.ipynb"
new file mode 100644
--- /dev/null
+++ "b/train_notebook.ipynb"
@@ -0,0 +1,10533 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "LBSYoWbi-45k"
+ },
+ "source": [
+ "# **Fine-tuning Multi-Lingual Speech Model with 🤗 Transformers**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nT_QrfWtsxIz"
+ },
+ "source": [
+ "This notebook shows how to fine-tune multi-lingual pretrained speech models for Automatic Speech Recognition."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OK7AOAkwVOrx"
+ },
+ "source": [
+ "This notebook is built to run on the [Common Voice dataset](https://huggingface.co./datasets/common_voice) with any multi-lingual speech model checkpoint from the [Model Hub](https://huggingface.co./models?language=multilingual&pipeline_tag=automatic-speech-recognition&sort=downloads) as long as that model has a version with a Connectionist Temporal Classification (CTC) head. Depending on the model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those two parameters, then the rest of the notebook should run smoothly:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "i69NoS4Kvh_f"
+ },
+ "outputs": [],
+ "source": [
+    "model_checkpoint = \"facebook/wav2vec2-xls-r-300m\"\n",
+ "batch_size = 16"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "fw_GGnvwVjOl"
+ },
+ "outputs": [],
+ "source": [
+    "# alternative checkpoint: running this cell overrides the XLS-R checkpoint set above\n",
+    "model_checkpoint = \"facebook/wav2vec2-large-xlsr-53\"\n",
+ "batch_size = 16"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ZUUqzyKnVpvc"
+ },
+ "source": [
+    "For a more detailed explanation of how multi-lingual pretrained speech models work, please take a look at the [🤗 Blog](https://huggingface.co./blog/fine-tune-xlsr-wav2vec2)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "e335hPmdtASZ"
+ },
+ "source": [
+    "Before we start, let's install `datasets` and `transformers`. We also need the `torchaudio` and `librosa` packages to load audio files, and `jiwer` to evaluate our fine-tuned model using the [word error rate (WER)](https://huggingface.co./metrics/wer) metric ${}^1$."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "c8eh87Hoee5d"
+ },
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "!pip install datasets==1.18.3\n",
+ "#common_voice 7 below\n",
+ "#!pip install datasets==1.14\n",
+ "!pip install transformers==4.11.3\n",
+ "\n",
+ "!pip install torchaudio\n",
+ "!pip install librosa\n",
+ "!pip install jiwer"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0xxt_LwxDQlO"
+ },
+ "source": [
+    "Next, we strongly suggest uploading your training checkpoints directly to the [🤗 Hub](https://huggingface.co./) while training. The [🤗 Hub](https://huggingface.co./) has integrated version control, so you can be sure that no model checkpoint gets lost during training. \n",
+    "\n",
+    "To do so, you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co./join) if you haven't already!)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 359,
+ "referenced_widgets": [
+ "f806b26119884e688585835e33bd9cda",
+ "62281d26fca2464b891e4e8dced07110",
+ "e8ed03cfa00c4941a2351dde3c2e4cb7",
+ "5f40b4c187664350ad4f7bce35518d47",
+ "400d6ff9e84d476cadeaddfc57ba7f31",
+ "68ab1ecb6d724fc29777d7216081a667",
+ "4cc4957451364dfab93f760d0f8cd0b6",
+ "32c43163ae214a73871c7952f91c1b79",
+ "343e824ec8014081aa0dc4656e5c1247",
+ "168bb48911bd4b398e1bef1e57282a4a",
+ "589efd9025484b199b2a6fe6f6b06027",
+ "8dcd01c2904745a3a5f9cfbe2339c344",
+ "888e4ce603eb413dbe1affc067555376",
+ "7f838aab3d2c4d98a50652fb30493b10",
+ "958d9ceb018941378ce9e62cafcc2930",
+ "40a18d78a96f450a96d127a2504c18cf",
+ "6a3e10c948c94bf5b4f35496f22feb4d"
+ ]
+ },
+ "id": "mlMSH3T3EazV",
+ "outputId": "ad0ddaaf-b3c8-4a55-b5f2-c655b3a32ba0"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Login successful\n",
+ "Your token has been saved to /root/.huggingface/token\n",
+ "\u001b[1m\u001b[31mAuthenticated through git-credential store but this isn't the helper defined on your machine.\n",
+ "You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default\n",
+ "\n",
+ "git config --global credential.helper store\u001b[0m\n"
+ ]
+ }
+ ],
+ "source": [
+ "from huggingface_hub import notebook_login\n",
+ "\n",
+ "notebook_login()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ujdZ2TxhElk6"
+ },
+ "source": [
+ "\n",
+ "Then you need to install Git-LFS to upload your model checkpoints:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "WcR-d83OEkqb"
+ },
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "!apt install git-lfs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Mn9swf6EQ9Vd"
+ },
+ "source": [
+ "\n",
+ "\n",
+ "\n",
+ "---\n",
+ "\n",
+ "${}^1$ In the [paper](https://arxiv.org/pdf/2006.13979.pdf), the model was evaluated using the phoneme error rate (PER), but by far the most common metric in ASR is the word error rate (WER). To keep this notebook as general as possible we decided to evaluate the model using WER."
+ ]
+ },
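+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick, optional illustration of the metric (not part of the fine-tuning pipeline), the following cell computes the WER between a made-up reference and hypothesis transcription using `jiwer`, which we just installed."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative only: WER = (substitutions + insertions + deletions) / number of reference words\n",
+    "from jiwer import wer\n",
+    "\n",
+    "reference = \"the cat sat on the mat\"\n",
+    "hypothesis = \"the cat sit on mat\"\n",
+    "\n",
+    "# one substitution ('sat' -> 'sit') and one deletion ('the') over six reference words -> ~0.33\n",
+    "print(wer(reference, hypothesis))"
+   ]
+  },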
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0mW-C1Nt-j7k"
+ },
+ "source": [
+ "## Prepare Data, Tokenizer, Feature Extractor"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BeBosnY9BH3e"
+ },
+ "source": [
+    "ASR models transcribe speech to text, which means that we need both a feature extractor that processes the speech signal into the model's input format, *e.g.* a feature vector, and a tokenizer that processes the model's output format into text. \n",
+    "\n",
+    "In 🤗 Transformers, speech recognition models are thus accompanied by both a tokenizer and a feature extractor.\n",
+ "\n",
+ "Let's start by creating the tokenizer responsible for decoding the model's predictions."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sEXEWEJGQPqD"
+ },
+ "source": [
+ "### Create Tokenizer for Speech Recognition"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "idBczw8mWzgt"
+ },
+ "source": [
+ "First, let's go to [Common Voice](https://commonvoice.mozilla.org/en/datasets) and pick a language to fine-tune XLSR-Wav2Vec2 on. For this notebook, we will use Turkish. \n",
+ "\n",
+ "For each language-specific dataset, you can find a language code corresponding to your chosen language. On [Common Voice](https://commonvoice.mozilla.org/en/datasets), look for the field \"Version\". The language code then corresponds to the prefix before the underscore. For Turkish, *e.g.* the language code is `\"tr\"`.\n",
+ "\n",
+    "Great, now we can use 🤗 Datasets' simple API to download the data. We will use the `\"mozilla-foundation/common_voice_8_0\"` dataset; the config name corresponds to the language code - `\"tr\"` in our case."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bee4g9rpLxll"
+ },
+ "source": [
+ "Common Voice has many different splits including `invalidated`, which refers to data that was not rated as \"clean enough\" to be considered useful. In this notebook, we will only make use of the splits `\"train\"`, `\"validation\"` and `\"test\"`. \n",
+ "\n",
+ "Because the Turkish dataset is so small, we will merge both the validation and training data into a training dataset and simply use the test data for validation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Swi42bp-hrvF",
+ "outputId": "2de050e3-97f1-447b-c807-c4c98b88e198"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "1.18.3\n"
+ ]
+ }
+ ],
+ "source": [
+ "import datasets\n",
+ "print(datasets.__version__)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 217,
+ "referenced_widgets": [
+ "23bdf3807fe34d63a600fcea26520f78",
+ "2f73de7bd34b4377a14b90ccf962d327",
+ "cdcd548b670e4fdbb061a7a38f305fcd",
+ "bbc251ada7504759b531d9b846b06944",
+ "51bc2d6b195142149a6d224b38bb3c5e",
+ "50ab93d93cb64e19aea03166480ecbaf",
+ "8f308646f3634822ab957547ee9b9908",
+ "b52235b5022744a996059fe98fee4dd6",
+ "b1b9a92b0495411182ff7e8db879c344",
+ "0a526f64df90463383682a3aae6885d9",
+ "aa8bde4a54724abdb5f8c8afba39f898",
+ "170bfecf226142cdb87d0bf03be43842",
+ "9ec2db1ab1524dd5a5afad2e5ddb359f",
+ "f7c38c359728400e89b60af16013db3d",
+ "52401e8b880740e4ac1b43f3844c12e0",
+ "26525cfe8560466a84c0e16abb165fe5",
+ "70e5a9ef53a74510a04ced4f5f2a27f5",
+ "5da881b52302403db72ba1196dc9bc9f",
+ "2ffb88a2e26642e3b814e11b4e40abe0",
+ "1aa7a917a2cb4175adf869bd123108d5",
+ "6a0470c75f9d428b86406c502a15d934",
+ "e454ba90eb5d4cd8a2511004c1a6459c",
+ "a4416a28b7434c1196e1e1ee78a6524e",
+ "216bdc1df5074ad890005ab4c3cc430d",
+ "633c5b60966a4f54897c46f31b5aa519",
+ "00205bf74060408d937ef70d9dab37eb",
+ "3083964c340f463aad68704998106ce3",
+ "ccf257a118a44f9180dcfb6ab1a4ea3b",
+ "baf7a967c7554036a833a20dccf69b78",
+ "bcc0f0635473429c82e402b2a8e42f3e",
+ "59d3b516d58744c6b853a517585dde3d",
+ "ef9faa9bd0d14ddca4dabfc8fe344416",
+ "f3feaffb727f4eb0833b98d7f7d94687",
+ "1e97ed9ce1a342ffb1bc35dfca37ce8c",
+ "f707cca6e67f4562922c83e96d448d8f",
+ "a7673268a04c461da202565d6df0eea2",
+ "bc5ab57b856d45c8a05aea643621be29",
+ "136775a4af24494dbcbb8a26c557ff58",
+ "0183d4af17a24bc7951ad5d9c308acfd",
+ "e1da95804e38463da33aabfd34eaf038",
+ "348c3778b1984b7fb5a1f99d2dcd5d43",
+ "42668932cd7c478db86dd01620959ad6",
+ "4f2e8ff469b64b2d8aae0daba054f992",
+ "d94834ae37514f2a8fad199ebcc1b5e0",
+ "83985afb3b9b47a8a51ed826637d7ec3",
+ "8a8a52acf6314c47bd07cfcca47481e0",
+ "fdb6d241936c4bcc949101a3abdadc2a",
+ "7e7ac42e73974bc0a5080d2f68a95805",
+ "b109d9651a1f42089ff90ceed18550be",
+ "e541b0405b654a628643cbe8be11efe3",
+ "a41911630cba40e18799e467a00aa186",
+ "f2f67f2ee2d04b109e08510dbbecf7da",
+ "893ad16a62fa463982f2d302bba8ff8b",
+ "7d91952369dd49638b025241a51714a7",
+ "9bfa87491ded49fe9f528ab156019e49",
+ "becbd6e66ac84539828e278831b47711",
+ "bae3eaf9948c40438d70f852e96c39ee",
+ "a0cf5e9552fb4925a85440d8d936477d",
+ "2dc560e4c8b4411bbc1ac4745b0afe3d",
+ "2079848a4e064e23a465efc0e8b98298",
+ "f49e625b1e0545fcae35f3b38b421fe7",
+ "3336f37aa22443adb2e0dee5231625c2",
+ "b3c878fc99cc419d9a2c41c02b5363e6",
+ "904534fe556845379b0e23869b3c923a",
+ "df70cc4b5d0b4843a8bc9c9069261c19",
+ "ab93e17d9f31428fa3f1b650aaa19e3e",
+ "c161b2e6759043eb9e3cf56faaa46905",
+ "424e6fc321b14cabbe7f039373c5ff2f",
+ "75506937f12647769085ec8a6600a291",
+ "e38cab52706447b6ad4a30add3e52810",
+ "27eab27656254686a977caf7cffb9151",
+ "310142f6ad0b42c9a9d0b4a503edc5ac",
+ "f53f8de89a8b45ec84cb54192eb2a4f0",
+ "1d1c780f5f6e4166b993d4ccebceabc8",
+ "575f85f09cdf4913ad2239dbf35652e4",
+ "047b9191a9004c868eeb99a86933dcd7",
+ "378b7ab2af4048aeb9c6caca0d911ea4",
+ "1f7352d8f3724c338ec344933005834f",
+ "9caf41ec348640ac87d40c42d013296b",
+ "4966c62d4a5f4ff09256fca2d564832d",
+ "265a9350979d48bf95f821618a06e929",
+ "2b6cf7d6a8e44f3f90caad42e5ab4047",
+ "0a42f004666f4f2aa53cda3724dc974d",
+ "1ee83dcc3dcd416da37221a50c8c3aae",
+ "270657838f0a45bb92e692503d2606ec",
+ "c78b913a99d2496b9d1a00ea326bd6e2",
+ "a5b340de26464dd5aa481e3f9178db6e",
+ "2c8b77eeb22b4193b0fa343b33fceb89",
+ "6b5174e3338843408714243f6b60072f",
+ "7110a31dd43945c8a65e0f3e2f806bca",
+ "d2c676501a8842efbbd9a24620a1fe9e",
+ "4fe0d5443bbd4295b407ba807df9141b",
+ "5692a73de14b4a6e88bd59bc7bfe0a02",
+ "0e4a34472820466da3d414ba6f43db26",
+ "b0c11305ef7b4d909659a4732f0f050f",
+ "b058118bfff44c72bcd3309201fab9b1",
+ "2ede5c651c4c4a1ba36d48fcdb25d2dc",
+ "e155b7e1b96a410db90a35ffbd7e513b",
+ "83150dc5c23b4a549efdc36910618a47"
+ ]
+ },
+ "id": "YZXnJqOgJ4LS",
+ "outputId": "5f47edd6-bd71-4702-8fee-e8f5bbd54e03"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "23bdf3807fe34d63a600fcea26520f78",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/10.1k [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "170bfecf226142cdb87d0bf03be43842",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/2.98k [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "a4416a28b7434c1196e1e1ee78a6524e",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/53.1k [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Downloading and preparing dataset common_voice/tr to /root/.cache/huggingface/datasets/mozilla-foundation___common_voice/tr/8.0.0/b8bc4d453193c06a43269b46cd87f075c70f152ac963b7f28f7a2760c45ec3e8...\n"
+ ]
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "1e97ed9ce1a342ffb1bc35dfca37ce8c",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/1.55G [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "83985afb3b9b47a8a51ed826637d7ec3",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0 examples [00:00, ? examples/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "becbd6e66ac84539828e278831b47711",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0 examples [00:00, ? examples/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "c161b2e6759043eb9e3cf56faaa46905",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0 examples [00:00, ? examples/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "1f7352d8f3724c338ec344933005834f",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0 examples [00:00, ? examples/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "6b5174e3338843408714243f6b60072f",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0 examples [00:00, ? examples/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Dataset common_voice downloaded and prepared to /root/.cache/huggingface/datasets/mozilla-foundation___common_voice/tr/8.0.0/b8bc4d453193c06a43269b46cd87f075c70f152ac963b7f28f7a2760c45ec3e8. Subsequent calls will reuse this data.\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "Reusing dataset common_voice (/root/.cache/huggingface/datasets/mozilla-foundation___common_voice/tr/8.0.0/b8bc4d453193c06a43269b46cd87f075c70f152ac963b7f28f7a2760c45ec3e8)\n"
+ ]
+ }
+ ],
+ "source": [
+ "from datasets import load_dataset, load_metric, Audio\n",
+ "\n",
+ "common_voice_train = load_dataset(\"mozilla-foundation/common_voice_8_0\", \"tr\", use_auth_token=True, split=\"train+validation\")\n",
+ "common_voice_test = load_dataset(\"mozilla-foundation/common_voice_8_0\", \"tr\", use_auth_token=True, split=\"test\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 53
+ },
+ "id": "2MMXcWFFgCXU",
+ "outputId": "02d30af9-8201-4232-e2fa-c34b9f9b4000"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ },
+ "text/plain": [
+ "'\\nfrom datasets import load_dataset, load_metric, Audio\\n\\ncommon_voice_train = load_dataset(\"common_voice\", \"tr\", split=\"train+validation\")\\ncommon_voice_test = load_dataset(\"common_voice\", \"tr\", split=\"test\")\\n'"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 7
+ }
+ ],
+ "source": [
+ "'''\n",
+ "from datasets import load_dataset, load_metric, Audio\n",
+ "\n",
+ "common_voice_train = load_dataset(\"common_voice\", \"tr\", split=\"train+validation\")\n",
+ "common_voice_test = load_dataset(\"common_voice\", \"tr\", split=\"test\")\n",
+ "'''"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ri5y5N_HMANq"
+ },
+ "source": [
+    "Many ASR datasets only provide the target text (`'sentence'`) for each audio array (`'audio'`) and file (`'path'`). Common Voice actually provides much more information about each audio file, such as the `'accent'`, etc. However, we want to keep the notebook as general as possible, so we will only consider the transcribed text for fine-tuning.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "kbyq6lDgQc2a"
+ },
+ "outputs": [],
+ "source": [
+ "common_voice_train = common_voice_train.remove_columns([\"accent\", \"age\", \"client_id\", \"down_votes\", \"gender\", \"locale\", \"segment\", \"up_votes\"])\n",
+ "common_voice_test = common_voice_test.remove_columns([\"accent\", \"age\", \"client_id\", \"down_votes\", \"gender\", \"locale\", \"segment\", \"up_votes\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Go9Hq4e4NDT9"
+ },
+ "source": [
+ "Let's write a short function to display some random samples of the dataset and run it a couple of times to get a feeling for the transcriptions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "72737oog2F6U"
+ },
+ "outputs": [],
+ "source": [
+ "from datasets import ClassLabel\n",
+ "import random\n",
+ "import pandas as pd\n",
+ "from IPython.display import display, HTML\n",
+ "\n",
+ "def show_random_elements(dataset, num_examples=10):\n",
+ " assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
+ " picks = []\n",
+ " for _ in range(num_examples):\n",
+ " pick = random.randint(0, len(dataset)-1)\n",
+ " while pick in picks:\n",
+ " pick = random.randint(0, len(dataset)-1)\n",
+ " picks.append(pick)\n",
+ " \n",
+ " df = pd.DataFrame(dataset[picks])\n",
+ " display(HTML(df.to_html()))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 363
+ },
+ "id": "K_JUmf3G3b9S",
+ "outputId": "ccf74737-e6eb-4b91-93e2-a1a91acbf2a7"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/html": [
+       "<table border=\"1\" class=\"dataframe\">\n",
+       "  <thead>\n",
+       "    <tr style=\"text-align: right;\">\n",
+       "      <th></th>\n",
+       "      <th>sentence</th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr><th>0</th><td>El terazi, göz mizan.</td></tr>\n",
+       "    <tr><th>1</th><td>Daha yüz yirmi getireceksin!</td></tr>\n",
+       "    <tr><th>2</th><td>\"Adalet'in gerdanı açıktı.\"</td></tr>\n",
+       "    <tr><th>3</th><td>Haydi, al şu yirmi beşi de, bu hesabı kapayalım…</td></tr>\n",
+       "    <tr><th>4</th><td>Evrakı ve raporları savcılık kaleminde duruyor, takip eden olmadığı için sıra bekliyordu.</td></tr>\n",
+       "    <tr><th>5</th><td>Yandık!</td></tr>\n",
+       "    <tr><th>6</th><td>Anlat bakalım.</td></tr>\n",
+       "    <tr><th>7</th><td>Ak akçe kara gün içindir.</td></tr>\n",
+       "    <tr><th>8</th><td>Bindim vapura geldim. Hemen bara yerleştim. Beş on kuruş kazandım.</td></tr>\n",
+       "    <tr><th>9</th><td>Ahlak sohbetleri.</td></tr>\n",
+       "  </tbody>\n",
+       "</table>"
+ ],
+ "text/plain": [
+       "<IPython.core.display.HTML object>"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "show_random_elements(common_voice_train.remove_columns([\"path\", \"audio\"]), num_examples=10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "fowcOllGNNju"
+ },
+ "source": [
+    "Alright! The transcriptions look fairly clean. Having translated a few of the transcribed sentences, the language seems to correspond more to written-out text than to noisy dialogue. This makes sense considering that [Common Voice](https://huggingface.co./datasets/common_voice) is a crowd-sourced read speech corpus."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vq7OR50LN49m"
+ },
+ "source": [
+ "We can see that the transcriptions contain some special characters, such as `,.?!;:`. Without a language model, it is much harder to classify speech chunks to such special characters because they don't really correspond to a characteristic sound unit. *E.g.*, the letter `\"s\"` has a more or less clear sound, whereas the special character `\".\"` does not.\n",
+    "Also, to understand the meaning of a speech signal, it is usually not necessary to include special characters in the transcription.\n",
+ "\n",
+ "In addition, we normalize the text to only have lower case letters and append a word separator token at the end."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "svKzVJ_hQGK6"
+ },
+ "outputs": [],
+ "source": [
+ "import re\n",
+ "chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\\"\\“\\%\\‘\\”\\�]'\n",
+ "\n",
+ "def remove_special_characters(batch):\n",
+ " batch[\"sentence\"] = re.sub(chars_to_ignore_regex, '', batch[\"sentence\"]).lower() + \" \"\n",
+ " return batch"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 81,
+ "referenced_widgets": [
+ "da1051f673f747e4955f863369212aeb",
+ "d4d7845f9b8a49f592b61cd51e439e6d",
+ "9ec68324ba8f427eab819c06ea2e6c5f",
+ "ca95007c72854a118eb2826170a7e5ee",
+ "62c200f505f64e939ceaaea4cf5c1f1f",
+ "3df6c9b2730e4b10aaa46d59ef6db333",
+ "76c4fe0608734983a22e47799cb1c383",
+ "e951dfe6a3b845069aca8a81c7248232",
+ "4f35d24404484227801de58669037020",
+ "a1aee963e2764fbca86cf242647022e0",
+ "caa9274766ed4102988e69986c44aa0a",
+ "19ebe7b3d430420cb28da897a88c092c",
+ "d889e7cfdf9b4c758eae00b2e303a021",
+ "612da199c8dc4ef48d26b26e4fcfcd55",
+ "11cde7ee4f5b4a6796391b53bfb2b7da",
+ "d0487450690a41da805bc8b48c72271b",
+ "69d46dd2eea14bd88faf5a1ca4cfed7c",
+ "6b00d315427a4c9cb833605dc07801f6",
+ "ebdf3801c865463a9294893f62fd62f3",
+ "728ea53bb2954d4ba98c0e3898f381e5",
+ "682910d92af240b5942e46fbdf134421",
+ "4ad9488f342c4ff8bcb71def50f081ef"
+ ]
+ },
+ "id": "XIHocAuTQbBR",
+ "outputId": "2df8c280-45fd-4941-8f03-4d098e86e1a7"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "da1051f673f747e4955f863369212aeb",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0ex [00:00, ?ex/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "19ebe7b3d430420cb28da897a88c092c",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0ex [00:00, ?ex/s]"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "common_voice_train = common_voice_train.map(remove_special_characters)\n",
+ "common_voice_test = common_voice_test.map(remove_special_characters)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 363
+ },
+ "id": "RBDRAAYxRE6n",
+ "outputId": "ff1fdea5-bde6-48ff-b4f2-775cce206ccf"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/html": [
+       "<table border=\"1\" class=\"dataframe\">\n",
+       "  <thead>\n",
+       "    <tr style=\"text-align: right;\">\n",
+       "      <th></th>\n",
+       "      <th>sentence</th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr><th>0</th><td>koca adama bir şeyler oluyor</td></tr>\n",
+       "    <tr><th>1</th><td>güney istikametinde gidiyordum</td></tr>\n",
+       "    <tr><th>2</th><td>sorma</td></tr>\n",
+       "    <tr><th>3</th><td>boş ver onu</td></tr>\n",
+       "    <tr><th>4</th><td>sana ihtiyacımız var</td></tr>\n",
+       "    <tr><th>5</th><td>sonradan gelen devlet devlet değildir</td></tr>\n",
+       "    <tr><th>6</th><td>bana öyle gelmiyor</td></tr>\n",
+       "    <tr><th>7</th><td>bize de üsküdar'da toptaşı'na yakın ahşap bir ev bıraktı</td></tr>\n",
+       "    <tr><th>8</th><td>akıllıca</td></tr>\n",
+       "    <tr><th>9</th><td>diğerleri…</td></tr>\n",
+       "  </tbody>\n",
+       "</table>"
+ ],
+ "text/plain": [
+       "<IPython.core.display.HTML object>"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "show_random_elements(common_voice_train.remove_columns([\"path\",\"audio\"]))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "jwfaptH5RJwA"
+ },
+ "source": [
+ "Good! This looks better. We have removed most special characters from transcriptions and normalized them to lower-case only.\n",
+ "\n",
+ "In CTC, it is common to classify speech chunks into letters, so we will do the same here. \n",
+ "Let's extract all distinct letters of the training and test data and build our vocabulary from this set of letters.\n",
+ "\n",
+ "We write a mapping function that concatenates all transcriptions into one long transcription and then transforms the string into a set of chars. \n",
+ "It is important to pass the argument `batched=True` to the `map(...)` function so that the mapping function has access to all transcriptions at once."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "LwCshNbbeRZR"
+ },
+ "outputs": [],
+ "source": [
+ "def extract_all_chars(batch):\n",
+ " all_text = \" \".join(batch[\"sentence\"])\n",
+ " vocab = list(set(all_text))\n",
+ " return {\"vocab\": [vocab], \"all_text\": [all_text]}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 81,
+ "referenced_widgets": [
+ "04805b7e6dbd4e878543b035c6ac9ffd",
+ "e30ada8532d34e279a5c917187030a35",
+ "88ba207d0a65433ca0e5ea1298055795",
+ "573756adc4da43ebbff370e6c5a6cee1",
+ "f13c45348bc346f8b0acdaa00b4fb75b",
+ "f223dcd00b224f229ffa30efb18a21bb",
+ "444a0eabada2472a8a0293d89d3056d4",
+ "f1909a70344a49849911d5221fe2e09c",
+ "5d497cd406e6459797fc16df24d06f39",
+ "24f87018abdd4a05ae26a0b123b81d4a",
+ "37d657c09da14715826ea3da6a3c8bf1",
+ "80d72bcc95bb4a2aaf27d16bc81c013f",
+ "ab6646153a4a49e6b468b2f2f1711605",
+ "c921133532a7448f90c9add21fb9098f",
+ "2154a5d07421448aaf6fe53c5c9568a4",
+ "7ef4dfd22751446fb255f9102f3419f9",
+ "e68c242c463e4e6895dad01ca33c9326",
+ "f6129bdabec34a01af416293eeb47ad7",
+ "b1909d011ebd4129b4992347797ef0a4",
+ "36228aad1ec24e5c97fc0d046e00561e",
+ "f0b1028065cd4765b310b7af3ae26b4b",
+ "7d5c696360e347cd8e00ef545f08ef43"
+ ]
+ },
+ "id": "_m6uUjjcfbjH",
+ "outputId": "77777471-db25-402e-d0a6-0eb95544d515"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "04805b7e6dbd4e878543b035c6ac9ffd",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ " 0%| | 0/1 [00:00, ?ba/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "80d72bcc95bb4a2aaf27d16bc81c013f",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ " 0%| | 0/1 [00:00, ?ba/s]"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "vocab_train = common_voice_train.map(\n",
+ " extract_all_chars, batched=True, \n",
+ " batch_size=-1, keep_in_memory=True, \n",
+ " remove_columns=common_voice_train.column_names\n",
+ ")\n",
+ "vocab_test = common_voice_test.map(\n",
+ " extract_all_chars, batched=True, \n",
+ " batch_size=-1, keep_in_memory=True, \n",
+ " remove_columns=common_voice_test.column_names\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7oVgE8RZSJNP"
+ },
+ "source": [
+ "Now, we create the union of all distinct letters in the training dataset and test dataset and convert the resulting list into an enumerated dictionary."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "aQfneNsmlJI0"
+ },
+ "outputs": [],
+ "source": [
+ "vocab_list = list(set(vocab_train[\"vocab\"][0]) | set(vocab_test[\"vocab\"][0]))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "_0kRndSvqaKk",
+ "outputId": "f0fb07c2-4cff-4524-eeed-3f5b7b51dde2"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "{' ': 8,\n",
+ " \"'\": 6,\n",
+ " '(': 35,\n",
+ " ')': 34,\n",
+ " 'a': 2,\n",
+ " 'b': 0,\n",
+ " 'c': 23,\n",
+ " 'd': 42,\n",
+ " 'e': 12,\n",
+ " 'f': 20,\n",
+ " 'g': 5,\n",
+ " 'h': 24,\n",
+ " 'i': 22,\n",
+ " 'j': 38,\n",
+ " 'k': 26,\n",
+ " 'l': 36,\n",
+ " 'm': 40,\n",
+ " 'n': 39,\n",
+ " 'o': 18,\n",
+ " 'p': 14,\n",
+ " 'q': 19,\n",
+ " 'r': 7,\n",
+ " 's': 29,\n",
+ " 't': 10,\n",
+ " 'u': 21,\n",
+ " 'v': 32,\n",
+ " 'w': 13,\n",
+ " 'x': 4,\n",
+ " 'y': 3,\n",
+ " 'z': 30,\n",
+ " 'â': 9,\n",
+ " 'ç': 28,\n",
+ " 'é': 25,\n",
+ " 'ë': 41,\n",
+ " 'î': 16,\n",
+ " 'ö': 43,\n",
+ " 'û': 11,\n",
+ " 'ü': 27,\n",
+ " 'ğ': 31,\n",
+ " 'ı': 37,\n",
+ " 'ş': 15,\n",
+ " '̇': 1,\n",
+ " '’': 17,\n",
+ " '…': 33}"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 17
+ }
+ ],
+ "source": [
+ "vocab_dict = {v: k for k, v in enumerate(vocab_list)}\n",
+ "vocab_dict"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JOSzbvs9SXT1"
+ },
+ "source": [
+    "Cool, we see that all letters of the alphabet occur in the dataset (which is not really surprising) and we also extracted the special characters `\" \"` and `'`, as well as a few other symbols (more on those below). Note that we did not exclude the space and the apostrophe because: \n",
+    "\n",
+    "- The model has to learn to predict when a word is finished, or else the model prediction would always be one long sequence of characters, making it impossible to separate words from each other.\n",
+    "- From the transcriptions above it seems that words that include an apostrophe, such as `maktouf'un`, do exist in Turkish, so I decided to keep the apostrophe in the dataset. This might be a wrong assumption though.\n",
+    "\n",
+    "One should always keep in mind that data preprocessing is a very important step before training your model. E.g., we don't want our model to differentiate between `a` and `A` just because we forgot to normalize the data. The difference between `a` and `A` does not depend on the \"sound\" of the letter at all, but rather on grammatical convention - *e.g.* using a capitalized letter at the beginning of a sentence. So it is sensible to remove the difference between capitalized and non-capitalized letters so that the model has an easier time learning to transcribe speech. \n",
+    "\n",
+    "It is always advantageous to get help from a native speaker of the language you would like to transcribe to verify whether the assumptions you made are sensible, *e.g.* I should have made sure that keeping `'` while removing other special characters is a sensible choice for Turkish. "
+ ]
+ },
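+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an optional sanity check (a small sketch, not required for the rest of the notebook), we can list the non-letter symbols that survived the cleaning regex. In the vocabulary above, characters such as `(`, `)`, `’` and `…` slipped through, so for your own language you might decide to extend `chars_to_ignore_regex` accordingly."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional: inspect non-letter symbols that are still in the vocabulary.\n",
+    "# The space and the apostrophe are kept on purpose (see the explanation above).\n",
+    "leftover_symbols = sorted(c for c in vocab_list if not c.isalpha() and c not in {\" \", \"'\"})\n",
+    "print(leftover_symbols)"
+   ]
+  },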
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "b1fBRCn-TRaO"
+ },
+ "source": [
+ "To make it clearer to the reader that `\" \"` has its own token class, we give it a more visible character `|`. In addition, we also add an \"unknown\" token so that the model can later deal with characters not encountered in Common Voice's training set. \n",
+ "\n",
+ "Finally, we also add a padding token that corresponds to CTC's \"*blank token*\". The \"blank token\" is a core component of the CTC algorithm. For more information, please take a look at the \"Alignment\" section [here](https://distill.pub/2017/ctc/)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "npbIbBoLgaFX"
+ },
+ "outputs": [],
+ "source": [
+ "vocab_dict[\"|\"] = vocab_dict[\" \"]\n",
+ "del vocab_dict[\" \"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "znF0bNunsjbl",
+ "outputId": "c65f4b9f-1865-4362-ccef-4691a163427f"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "46"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 19
+ }
+ ],
+ "source": [
+ "vocab_dict[\"[UNK]\"] = len(vocab_dict)\n",
+ "vocab_dict[\"[PAD]\"] = len(vocab_dict)\n",
+ "len(vocab_dict)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SFPGfet8U5sL"
+ },
+ "source": [
+    "Cool, now our vocabulary is complete and consists of 46 tokens, which means that the linear layer that we will add on top of the pretrained XLSR-Wav2Vec2 checkpoint will have an output dimension of 46."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1CujRgBNVRaD"
+ },
+ "source": [
+ "Let's now save the vocabulary as a json file."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ehyUoh9vk191"
+ },
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "with open('vocab.json', 'w') as vocab_file:\n",
+ " json.dump(vocab_dict, vocab_file)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SHJDaKlIVVim"
+ },
+ "source": [
+    "In a final step, we use the JSON file to instantiate a tokenizer object with the vocabulary file we just created. The correct `tokenizer_type` can be retrieved from the model configuration. If a `tokenizer_class` is defined in the config, we can use it, else we assume the `tokenizer_type` corresponds to the `model_type`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 49,
+ "referenced_widgets": [
+ "c0e164ae6da6409983df9cd716084642",
+ "a26e8267388b4b6fb72738b8bf015fec",
+ "1b14adb2df3f4dd6bc463b388996b1ba",
+ "fa59e61e499d4d44ae6c32e89be231dd",
+ "64cffbbc242a4201a3f842bc3081da3e",
+ "3f018a8c91924b85919b8713b591fd62",
+ "6393cf9b22e04d75a12f74c9dd7ef31c",
+ "749100e315ff432294bece9cad2417b7",
+ "1a463985b65a436e8880f28001f3f9ca",
+ "12455d049390474c98fb06298c5e7958",
+ "fc47bddc681a4e5b8c352cbcf232853c"
+ ]
+ },
+ "id": "1VKVaSZm7Clh",
+ "outputId": "a2286edb-c25c-4b36-c6d2-8cd6da2418d4"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "c0e164ae6da6409983df9cd716084642",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/1.53k [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "from transformers import AutoConfig\n",
+ "\n",
+ "config = AutoConfig.from_pretrained(model_checkpoint)\n",
+ "\n",
+ "tokenizer_type = config.model_type if config.tokenizer_class is None else None\n",
+ "config = config if config.tokenizer_class is not None else None"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "P5B_bVhf7H_K"
+ },
+ "source": [
+ "Now we can instantiate a tokenizer using `AutoTokenizer`. Additionally, we set the tokenizer's special tokens."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "xriFGEWQkO4M",
+ "outputId": "d6803d15-96ec-4562-ff4b-cd38e954b367"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "file ./config.json not found\n",
+ "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n"
+ ]
+ }
+ ],
+ "source": [
+ "from transformers import AutoTokenizer\n",
+ "\n",
+ "tokenizer = AutoTokenizer.from_pretrained(\n",
+ " \"./\",\n",
+ " config=config,\n",
+ " tokenizer_type=tokenizer_type,\n",
+ " unk_token=\"[UNK]\",\n",
+ " pad_token=\"[PAD]\",\n",
+ " word_delimiter_token=\"|\",\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KvL12DrNV4cx"
+ },
+ "source": [
+    "If you want to re-use the just created tokenizer with the fine-tuned model of this notebook, it is strongly advised to upload the `tokenizer` to the [🤗 Hub](https://huggingface.co./). We derive the name of the repo to which we will upload the files from the checkpoint name, *e.g.*\n",
+    "`\"wav2vec2-xls-r-300m-tr-CV8-v1\"`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "A1XApZBAF2zr"
+ },
+ "outputs": [],
+ "source": [
+ "model_checkpoint_name = model_checkpoint.split(\"/\")[-1]\n",
+ "repo_name = f\"{model_checkpoint_name}-tr-CV8-v1\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "B1BiezWZF16d"
+ },
+ "source": [
+ "and upload the tokenizer to the [🤗 Hub](https://huggingface.co./)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 160
+ },
+ "id": "zytE1175GAKM",
+ "outputId": "a22ad4b6-a5b2-47c2-b088-120f2e2be53e"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "/usr/local/lib/python3.7/dist-packages/huggingface_hub/hf_api.py:1004: FutureWarning: `create_repo` now takes `token` as an optional positional argument. Be sure to adapt your code!\n",
+ " FutureWarning,\n",
+ "Cloning https://huggingface.co./emre/wav2vec2-xls-r-300m-tr-CV8-v1 into local empty directory.\n",
+ "To https://huggingface.co./emre/wav2vec2-xls-r-300m-tr-CV8-v1\n",
+ " e1e801f..b9dfb7e main -> main\n",
+ "\n"
+ ]
+ },
+ {
+ "output_type": "execute_result",
+ "data": {
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ },
+ "text/plain": [
+ "'https://huggingface.co./emre/wav2vec2-xls-r-300m-tr-CV8-v1/commit/b9dfb7ea3acf138a98773a2e1cea53ec73cbf18b'"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 24
+ }
+ ],
+ "source": [
+ "tokenizer.push_to_hub(repo_name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SwQM8lH_GGuP"
+ },
+ "source": [
+    "Great, you can see the just created repository under `https://huggingface.co./<your-username>/wav2vec2-xls-r-300m-tr-CV8-v1` ."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YFmShnl7RE35"
+ },
+ "source": [
+ "### Preprocess Data\n",
+ "\n",
+    "So far, we have not looked at the actual values of the speech signal but just the transcription. In addition to `sentence`, our datasets include two more columns, `path` and `audio`. `path` states the path of the audio file. Let's take a look.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 35
+ },
+ "id": "TTCS7W6XJ9BG",
+ "outputId": "64e75171-4e98-429a-b6fe-42ee19686281"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ },
+ "text/plain": [
+ "'cv-corpus-8.0-2022-01-19/tr/clips/common_voice_tr_17528071.mp3'"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 25
+ }
+ ],
+ "source": [
+ "common_voice_train[0][\"path\"]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "T6ndIjHGFp0W"
+ },
+ "source": [
+    "`XLSR-Wav2Vec2` expects the input as a 1-dimensional array sampled at 16 kHz. This means that the audio file has to be loaded and resampled.\n",
+    "\n",
+    "Thankfully, `datasets` does this automatically by calling the `audio` column. Let's try it out. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "qj_z5Zc3GAs9",
+ "outputId": "ad749bc4-29e6-40fe-e529-26eefdd4f09a"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "{'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,\n",
+ " -2.9087067e-05, -2.5093555e-05, -5.6624413e-06], dtype=float32),\n",
+ " 'path': 'cv-corpus-8.0-2022-01-19/tr/clips/common_voice_tr_17528071.mp3',\n",
+ " 'sampling_rate': 48000}"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 26
+ }
+ ],
+ "source": [
+ "common_voice_train[0][\"audio\"]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WUUTgI1bGHW-"
+ },
+ "source": [
+ "Great, we can see that the audio file has automatically been loaded. This is thanks to the new [`\"Audio\"` feature](https://huggingface.co./docs/datasets/package_reference/main_classes.html?highlight=audio#datasets.Audio) introduced in `datasets == 1.13.3`, which loads and resamples audio files on-the-fly upon calling.\n",
+ "\n",
+    "In the example above we can see that the audio data is loaded with a sampling rate of 48kHz whereas the model expects 16kHz. We can set the audio feature to the correct sampling rate by making use of [`cast_column`](https://huggingface.co./docs/datasets/package_reference/main_classes.html?highlight=cast_column#datasets.DatasetDict.cast_column):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "rrv65aj7G95i"
+ },
+ "outputs": [],
+ "source": [
+ "common_voice_train = common_voice_train.cast_column(\"audio\", Audio(sampling_rate=16_000))\n",
+ "common_voice_test = common_voice_test.cast_column(\"audio\", Audio(sampling_rate=16_000))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PcnO4x-NGBEi"
+ },
+ "source": [
+ "Let's take a look at `\"audio\"` again."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "aKtkc1o_HWHC",
+ "outputId": "0db16df8-99c0-4bef-eb9d-bf49ee04c676"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "{'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,\n",
+ " -1.4103199e-05, 4.2269539e-06, -2.5639036e-05], dtype=float32),\n",
+ " 'path': 'cv-corpus-8.0-2022-01-19/tr/clips/common_voice_tr_17528071.mp3',\n",
+ " 'sampling_rate': 16000}"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 28
+ }
+ ],
+ "source": [
+ "common_voice_train[0][\"audio\"]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SOckzFd4Mbzq"
+ },
+ "source": [
+    "This seems to have worked! Let's listen to a couple of audio files to better understand the dataset and verify that the audio was correctly loaded. \n",
+ "\n",
+ "**Note**: *You can click the following cell a couple of times to listen to different speech samples.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 93
+ },
+ "id": "dueM6U7Ev0OA",
+ "outputId": "6cc8d624-0884-454f-fe7b-d1181e395026"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "haşan yolunu kesmiş emine demiş bu dünyada gönlüne karşı gelen babayiğit çıkmamış \n"
+ ]
+ },
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/html": [
+       "<audio controls=\"controls\">\n",
+       "  <source src=\"data:audio/wav;base64,...\" type=\"audio/wav\" />\n",
+       "  Your browser does not support the audio element.\n",
+       "</audio>"
+ ],
+ "text/plain": [
+       "<IPython.lib.display.Audio object>"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 29
+ }
+ ],
+ "source": [
+ "import IPython.display as ipd\n",
+ "import numpy as np\n",
+ "import random\n",
+ "\n",
+ "rand_int = random.randint(0, len(common_voice_train)-1)\n",
+ "\n",
+ "print(common_voice_train[rand_int][\"sentence\"])\n",
+ "ipd.Audio(data=common_voice_train[rand_int][\"audio\"][\"array\"], autoplay=True, rate=16000)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gY8m3vARHYTa"
+ },
+ "source": [
+ "It seems like the data is now correctly loaded and resampled. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1MaL9J2dNVtG"
+ },
+ "source": [
+    "It can be heard that the speakers change, along with their speaking rate, accent, and background environment. Overall, the recordings sound acceptably clear though, which is to be expected from a crowd-sourced read speech corpus.\n",
+ "\n",
+ "Let's do a final check that the data is correctly prepared, by printing the shape of the speech input, its transcription, and the corresponding sampling rate.\n",
+ "\n",
+ "**Note**: *You can click the following cell a couple of times to verify multiple samples.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "1Po2g7YPuRTx",
+ "outputId": "3a09dd08-01a8-4977-fe03-7e50a8c8c020"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Target text: o hemen yerinden fırladı yeleğinin cebinden kibritini çıkardı \n",
+ "Input array shape: (89280,)\n",
+ "Sampling rate: 16000\n"
+ ]
+ }
+ ],
+ "source": [
+ "rand_int = random.randint(0, len(common_voice_train)-1)\n",
+ "\n",
+ "print(\"Target text:\", common_voice_train[rand_int][\"sentence\"])\n",
+ "print(\"Input array shape:\", common_voice_train[rand_int][\"audio\"][\"array\"].shape)\n",
+ "print(\"Sampling rate:\", common_voice_train[rand_int][\"audio\"][\"sampling_rate\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "M9teZcSwOBJ4"
+ },
+ "source": [
+ "Good! Everything looks fine - the data is a 1-dimensional array, the sampling rate always corresponds to 16kHz, and the target text is normalized.\n",
+ "\n",
+    "Next, we should process the data with the model's feature extractor. Let's load the feature extractor:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 49,
+ "referenced_widgets": [
+ "2a2cbbf93c8c4ad69052b1b0478acfea",
+ "a4c449e803d942d7a4bdc9f5c923107d",
+ "475e841d2f254301a9c4a5da9be07bfe",
+ "189dcc43f91946669a77445e3acb1143",
+ "75c01c6a6b094b13995c8c061749e127",
+ "f1d9f253969b446faeaca0167b84afb8",
+ "210d7c933d564b769964e68668491128",
+ "feccd14aa429488fb5194cf7f18ddc05",
+ "06ecae394d624765baf9bdbaddde248e",
+ "da191a839e85403c8e0f9e27144b7a69",
+ "9b45662a7c774bf8913b547705021a5f"
+ ]
+ },
+ "id": "UuA-9bgBYT4x",
+ "outputId": "420cad05-e704-4253-a8d4-8a3d728df53a"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "2a2cbbf93c8c4ad69052b1b0478acfea",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/212 [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "from transformers import AutoFeatureExtractor\n",
+ "\n",
+ "feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VgcfxUSAYaCQ"
+ },
+ "source": [
+ "and wrap it into a `Wav2Vec2Processor` together with the tokenizer."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "byshSGTjYcdu"
+ },
+ "outputs": [],
+ "source": [
+ "from transformers import Wav2Vec2Processor\n",
+ "\n",
+ "processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "GNFuvi26Yiw6"
+ },
+ "source": [
+ "Finally, we can leverage `Wav2Vec2Processor` to process the data to the format expected by the model for training. To do so let's make use of Dataset's [`map(...)`](https://huggingface.co./docs/datasets/package_reference/main_classes.html?highlight=map#datasets.DatasetDict.map) function.\n",
+ "\n",
+ "First, we load and resample the audio data, simply by calling `batch[\"audio\"]`.\n",
+ "Second, we extract the `input_values` from the loaded audio file. In our case, the `Wav2Vec2Processor` only normalizes the data. For other speech models, however, this step can include more complex feature extraction, such as [Log-Mel feature extraction](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). \n",
+ "Third, we encode the transcriptions to label ids."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "eJY7I0XAwe9p"
+ },
+ "outputs": [],
+ "source": [
+ "def prepare_dataset(batch):\n",
+ " audio = batch[\"audio\"]\n",
+ "\n",
+ " # batched output is \"un-batched\"\n",
+ " batch[\"input_values\"] = processor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_values[0]\n",
+ " batch[\"input_length\"] = len(batch[\"input_values\"])\n",
+ " \n",
+ " with processor.as_target_processor():\n",
+ " batch[\"labels\"] = processor(batch[\"sentence\"]).input_ids\n",
+ " return batch"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "q6Pg_WR3OGAP"
+ },
+ "source": [
+ "Let's apply the data preparation function to all examples."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 81,
+ "referenced_widgets": [
+ "a3606e7e00e54b3a98cf6d999519a559",
+ "9580a891a8964def85c2b4c20720bc12",
+ "d85dfe5ecb754a57ac52a593f3da5bce",
+ "6a5594fe7fdc451cb3b8c075638e046c",
+ "c674f0a97f67412b8a9ff5a3316074ac",
+ "d9132d9c79874897a7da4ffffbd374d3",
+ "d7e6852b58704eab8541e4250e229f98",
+ "8aab01eaed564597993f0095f4afeb62",
+ "56c2bd8eff2b45849191764a68a17826",
+ "be891c23f24145cbb6db1a24f66ecd16",
+ "72206d82d0ba4b1586da4598eb3b0202",
+ "f439211c04884bea98fe4afba8e7ca4a",
+ "45e0c7a8c1fa427cad3c9565cd068251",
+ "e04bd4466f384471a39f32f55d2780f6",
+ "33797b5723ac435ba7e42fcfaec4756d",
+ "486ebf97b1474c37b28107bfb23a0703",
+ "eab6aa2773994c44b4e9ee8c43eaaae5",
+ "c685d717186a493486a1a5b8afcece4e",
+ "29741efbd6e64365bcd010a066a09b34",
+ "8df4bdc8aae64be38bfaf68ddbbfb4be",
+ "87cc8498063f4fb6b1a2db3556f896ce",
+ "cadf486284ad4b1ea64ec93b3f7a938c"
+ ]
+ },
+ "id": "-np9xYK-wl8q",
+ "outputId": "81648a84-61a8-42e2-d6ae-6c79d170890d"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "a3606e7e00e54b3a98cf6d999519a559",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0ex [00:00, ?ex/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "f439211c04884bea98fe4afba8e7ca4a",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "0ex [00:00, ?ex/s]"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names)\n",
+ "common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nKcEWHvKI1by"
+ },
+ "source": [
+    "**Note**: Currently `datasets` makes use of [`torchaudio`](https://pytorch.org/audio/stable/index.html) and [`librosa`](https://librosa.org/doc/latest/index.html) for audio loading and resampling. If you wish to implement your own customized data loading/sampling, feel free to just make use of the `\"path\"` column instead and disregard the `\"audio\"` column, as sketched below."
+ ]
+ },
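+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A minimal sketch of such custom loading is shown below. It assumes `librosa` (installed at the top of this notebook) and would have to be applied **before** `prepare_dataset`, while the `\"path\"` column still exists; the function name and the `\"speech\"` column are made up for illustration."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative sketch only - the rest of the notebook keeps using the built-in \"audio\" column.\n",
+    "import librosa\n",
+    "\n",
+    "def load_audio_manually(batch):\n",
+    "    # decode the file referenced by \"path\" and resample it to 16 kHz\n",
+    "    speech_array, sampling_rate = librosa.load(batch[\"path\"], sr=16_000)\n",
+    "    batch[\"speech\"] = speech_array\n",
+    "    batch[\"sampling_rate\"] = sampling_rate\n",
+    "    return batch\n",
+    "\n",
+    "# e.g.: common_voice_train = common_voice_train.map(load_audio_manually)"
+   ]
+  },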
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "24CxHd5ewI4T"
+ },
+ "source": [
+    "Long input sequences require a lot of memory. Since speech models in `transformers` are based on `self-attention`, the memory requirement scales quadratically with the input length for long input sequences (*cf.* [this](https://www.reddit.com/r/MachineLearning/comments/genjvb/d_why_is_the_maximum_input_sequence_length_of/) reddit post). For this demo, let's filter all sequences that are longer than 5 seconds out of the training dataset."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 49,
+ "referenced_widgets": [
+ "77494b9ce086467bbac1cff55ab6e715",
+ "f305d2a7984c4b7d84e7458307ded2d2",
+ "d4cb7c13043c405fa2185f61b0b064ca",
+ "6245e8f435cb44a0a403362788f5f0df",
+ "857ed4101a09450e850f3f56072102f9",
+ "0fd358405d93489dba3efd6643120ebf",
+ "4994d7b0efe04c03a6bface4e986fd61",
+ "db7f11ef346042dc9d89e59750e0bca9",
+ "d9cfb56b15b44940be99fb5842608182",
+ "d4ce7ffc6596478d998dd614429f21d1",
+ "e18fb65bbfd247a997a22ce6f2e940de"
+ ]
+ },
+ "id": "tdHfbUJ_09iA",
+ "outputId": "a96b1a04-2de1-4211-8a3d-c5718451827b"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "77494b9ce086467bbac1cff55ab6e715",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ " 0%| | 0/26 [00:00, ?ba/s]"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "max_input_length_in_sec = 5.0\n",
+ "common_voice_train = common_voice_train.filter(lambda x: x < max_input_length_in_sec * processor.feature_extractor.sampling_rate, input_columns=[\"input_length\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1ZWDCCKqwcfS"
+ },
+ "source": [
+ "Awesome, now we are ready to start training!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gYlQkKVoRUos"
+ },
+ "source": [
+ "## Training\n",
+ "\n",
+ "The data is processed so that we are ready to start setting up the training pipeline. We will make use of 🤗's [Trainer](https://huggingface.co./transformers/master/main_classes/trainer.html?highlight=trainer) for which we essentially need to do the following:\n",
+ "\n",
+ "- Define a data collator. In contrast to most NLP models, speech models usually have a much larger input length than output length. *E.g.*, a sample of input length 50000 for XLSR-Wav2Vec2 has an output length of no more than 100. Given the large input sizes, it is much more efficient to pad the training batches dynamically meaning that all training samples should only be padded to the longest sample in their batch and not the overall longest sample. Therefore, fine-tuning speech models requires a special padding data collator, which we will define below\n",
+ "\n",
+ "- Evaluation metric. During training, the model should be evaluated on the word error rate. We should define a `compute_metrics` function accordingly\n",
+ "\n",
+ "- Load a pretrained checkpoint. We need to load a pretrained checkpoint and configure it correctly for training.\n",
+ "\n",
+ "- Define the training configuration.\n",
+ "\n",
+ "After having fine-tuned the model, we will correctly evaluate it on the test data and verify that it has indeed learned to correctly transcribe speech."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Slk403unUS91"
+ },
+ "source": [
+ "### Set-up Trainer\n",
+ "\n",
+ "Let's start by defining the data collator. The code for the data collator was copied from [this example](https://github.com/huggingface/transformers/blob/9a06b6b11bdfc42eea08fa91d0c737d1863c99e3/examples/research_projects/wav2vec2/run_asr.py#L81).\n",
+ "\n",
+    "Without going into too many details, in contrast to the common data collators, this data collator treats the `input_values` and `labels` differently and thus applies two separate padding functions to them. This is necessary because, in speech, input and output are of different modalities and should not be treated by the same padding function.\n",
+    "Analogous to the common data collators, the padding tokens in the labels are replaced with `-100` so that those tokens are **not** taken into account when computing the loss."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "tborvC9hx88e"
+ },
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "\n",
+ "from dataclasses import dataclass, field\n",
+ "from typing import Any, Dict, List, Optional, Union\n",
+ "\n",
+ "@dataclass\n",
+ "class DataCollatorCTCWithPadding:\n",
+ " \"\"\"\n",
+ " Data collator that will dynamically pad the inputs received.\n",
+ " Args:\n",
+ " processor (:class:`~transformers.Wav2Vec2Processor`)\n",
+    " The processor used for processing the data.\n",
+ " padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):\n",
+ " Select a strategy to pad the returned sequences (according to the model's padding side and padding index)\n",
+ " among:\n",
+ " * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single\n",
+ " sequence if provided).\n",
+ " * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the\n",
+ " maximum acceptable input length for the model if that argument is not provided.\n",
+ " * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of\n",
+ " different lengths).\n",
+ " max_length (:obj:`int`, `optional`):\n",
+ " Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).\n",
+ " max_length_labels (:obj:`int`, `optional`):\n",
+ " Maximum length of the ``labels`` returned list and optionally padding length (see above).\n",
+ " pad_to_multiple_of (:obj:`int`, `optional`):\n",
+ " If set will pad the sequence to a multiple of the provided value.\n",
+ " This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=\n",
+ " 7.5 (Volta).\n",
+ " \"\"\"\n",
+ "\n",
+ " processor: Wav2Vec2Processor\n",
+ " padding: Union[bool, str] = True\n",
+ " max_length: Optional[int] = None\n",
+ " max_length_labels: Optional[int] = None\n",
+ " pad_to_multiple_of: Optional[int] = None\n",
+ " pad_to_multiple_of_labels: Optional[int] = None\n",
+ "\n",
+ " def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\n",
+ " # split inputs and labels since they have to be of different lenghts and need\n",
+ " # different padding methods\n",
+ " input_features = [{\"input_values\": feature[\"input_values\"]} for feature in features]\n",
+ " label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\n",
+ "\n",
+ " batch = self.processor.pad(\n",
+ " input_features,\n",
+ " padding=self.padding,\n",
+ " max_length=self.max_length,\n",
+ " pad_to_multiple_of=self.pad_to_multiple_of,\n",
+ " return_tensors=\"pt\",\n",
+ " )\n",
+ " with self.processor.as_target_processor():\n",
+ " labels_batch = self.processor.pad(\n",
+ " label_features,\n",
+ " padding=self.padding,\n",
+ " max_length=self.max_length_labels,\n",
+ " pad_to_multiple_of=self.pad_to_multiple_of_labels,\n",
+ " return_tensors=\"pt\",\n",
+ " )\n",
+ "\n",
+ " # replace padding with -100 to ignore loss correctly\n",
+ " labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\n",
+ "\n",
+ " batch[\"labels\"] = labels\n",
+ "\n",
+ " return batch"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "lbQf5GuZyQ4_"
+ },
+ "outputs": [],
+ "source": [
+ "data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)"
+ ]
+ },
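+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To get a feeling for what the collator produces, it can be applied to a couple of already-processed samples (a small sketch that is not executed here; it assumes the processed `common_voice_train` from the preprocessing step above is still in memory). The inputs are zero-padded to the longest audio clip in the batch, while padded label positions are set to `-100`:\n",
+ "\n",
+ "```python\n",
+ "batch = data_collator([common_voice_train[0], common_voice_train[1]])\n",
+ "\n",
+ "print(batch[\"input_values\"].shape)      # (2, length of the longer audio clip in the batch)\n",
+ "print(batch[\"labels\"].shape)            # (2, length of the longer transcription)\n",
+ "print((batch[\"labels\"] == -100).sum())  # padded label positions that the loss will ignore\n",
+ "```"
+ ]
+ },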
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xO-Zdj-5cxXp"
+ },
+ "source": [
+ "Next, the evaluation metric is defined. As mentioned earlier, the \n",
+ "predominant metric in ASR is the word error rate (WER), hence we will use it in this notebook as well."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 49,
+ "referenced_widgets": [
+ "3584a24565254be4a665568c91261a40",
+ "ecc615a58d1a45cbadde4079b14b2a8c",
+ "608329d74f56402fae1bd3ba3318a0ff",
+ "3ddeca68918e44eaa65a975acba73081",
+ "71f27170a8064045826d41aa63fbf29e",
+ "0fc84d9b108c4cb38c0f18b4d1a3d27a",
+ "542355f122e248bfb7f6a8d902459e16",
+ "bf1febb1eb594bbd90c9cb94363536a1",
+ "01fa773649da4e3fa955110d97a57793",
+ "10073907af5a4e9b9fd03f41221c328c",
+ "98cbd815a3f4437bbf80f67e0964d2bb"
+ ]
+ },
+ "id": "9Xsux2gmyXso",
+ "outputId": "e9c74e6e-b2ea-4a53-8d46-ea9748ea3dc3"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "3584a24565254be4a665568c91261a40",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/1.90k [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "wer_metric = load_metric(\"wer\")"
+ ]
+ },
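+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick illustration of the metric (a toy example with made-up Turkish strings, not taken from the dataset), substituting one of three reference words yields a WER of 1/3:\n",
+ "\n",
+ "```python\n",
+ "wer_metric.compute(predictions=[\"merhaba dünya nasılsın\"],\n",
+ "                   references=[\"merhaba dünya nasılsınız\"])\n",
+ "# 0.333... -> 1 substitution / 3 reference words\n",
+ "```"
+ ]
+ },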
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "E1qZU5p-deqB"
+ },
+ "source": [
+ "The model will return a sequence of logit vectors:\n",
+ "$\\mathbf{y}_1, \\ldots, \\mathbf{y}_m$ with $\\mathbf{y}_1 = f_{\\theta}(x_1, \\ldots, x_n)[0]$ and $n >> m$.\n",
+ "\n",
+ "A logit vector $\\mathbf{y}_1$ contains the log-odds for each word in the vocabulary we defined earlier, thus $\\text{len}(\\mathbf{y}_i) =$ `config.vocab_size`. We are interested in the most likely prediction of the model and thus take the `argmax(...)` of the logits. Also, we transform the encoded labels back to the original string by replacing `-100` with the `pad_token_id` and decoding the ids while making sure that consecutive tokens are **not** grouped to the same token in CTC style ${}^1$."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "1XZ-kjweyTy_"
+ },
+ "outputs": [],
+ "source": [
+ "def compute_metrics(pred):\n",
+ " pred_logits = pred.predictions\n",
+ " pred_ids = np.argmax(pred_logits, axis=-1)\n",
+ "\n",
+ " pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id\n",
+ "\n",
+ " pred_str = processor.batch_decode(pred_ids)\n",
+ " # we do not want to group tokens when computing the metrics\n",
+ " label_str = processor.batch_decode(pred.label_ids, group_tokens=False)\n",
+ "\n",
+ " wer = wer_metric.compute(predictions=pred_str, references=label_str)\n",
+ "\n",
+ " return {\"wer\": wer}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Xmgrx4bRwLIH"
+ },
+ "source": [
+ "Now, we can load the pretrained speech checkpoint. The tokenizer's `pad_token_id` must be to define the model's `pad_token_id` or in the case of a CTC speech model also CTC's *blank token* ${}^2$.\n",
+ "\n",
+ "Because the dataset is quite small (~6h of training data) and because Common Voice is quite noisy, fine-tuning might require some hyper-parameter tuning, which is why a couple of hyperparameters are set in the following.\n",
+ "\n",
+ "**Note**: When using this notebook to train speech models on another language of Common Voice those hyper-parameter settings might not work very well. Feel free to adapt those depending on your use case. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 156,
+ "referenced_widgets": [
+ "e204b2ca0ef444779bf10e8d37943284",
+ "46ad1ddaa7814ff38877f14a4f3167fc",
+ "114d8e5de5a94e1d8a61fad25bfce8f9",
+ "ce0f4b3a12f74e9d9dbcfdf07de58c83",
+ "c9d344680b9f428a8398c4c492fe60d8",
+ "50123cd248334539882bc450f0c76315",
+ "bc3f2ee267264173ba9bc54d1ebf5d0b",
+ "f58a3b6fc7d54398a502b620e9180751",
+ "62853e9a99f3490b9b5b268d52c2102d",
+ "98d1ddd134964990b74392bfc94f6bb6",
+ "c5ad073383524924be6e8ce7c1d9a1d5"
+ ]
+ },
+ "id": "e7cqAWIayn6w",
+ "outputId": "ce06bac2-ff48-409b-c34f-216531b9cffe"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "e204b2ca0ef444779bf10e8d37943284",
+ "version_minor": 0,
+ "version_major": 2
+ },
+ "text/plain": [
+ "Downloading: 0%| | 0.00/1.18G [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "Some weights of the model checkpoint at facebook/wav2vec2-xls-r-300m were not used when initializing Wav2Vec2ForCTC: ['project_hid.bias', 'project_q.bias', 'quantizer.codevectors', 'quantizer.weight_proj.weight', 'project_q.weight', 'project_hid.weight', 'quantizer.weight_proj.bias']\n",
+ "- This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
+ "- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
+ "Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-xls-r-300m and are newly initialized: ['lm_head.bias', 'lm_head.weight']\n",
+ "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
+ ]
+ }
+ ],
+ "source": [
+ "from transformers import AutoModelForCTC\n",
+ "\n",
+ "model = AutoModelForCTC.from_pretrained(\n",
+ " model_checkpoint,\n",
+ " attention_dropout=0.094,\n",
+ " hidden_dropout=0.047,\n",
+ " feat_proj_dropout=0.04,\n",
+ " mask_time_prob=0.4,\n",
+ " layerdrop=0.041,\n",
+ " ctc_loss_reduction=\"mean\", \n",
+ " pad_token_id=processor.tokenizer.pad_token_id,\n",
+ " vocab_size=len(processor.tokenizer)\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1DwR3XLSzGDD"
+ },
+ "source": [
+ "The first component of most transformer-based speech models consists of a stack of CNN layers that are used to extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and as stated in the [paper](https://arxiv.org/pdf/2006.13979.pdf) does not need to be fine-tuned anymore. \n",
+ "Thus, we can set the `requires_grad` to `False` for all parameters of the *feature extraction* part."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "oGI8zObtZ3V0"
+ },
+ "outputs": [],
+ "source": [
+ "if hasattr(model, \"freeze_feature_extractor\"):\n",
+ " model.freeze_feature_extractor()"
+ ]
+ },
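+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To verify that the feature extractor is indeed frozen, one can compare the number of trainable parameters with the total number of parameters (a small sketch; the exact counts depend on the checkpoint):\n",
+ "\n",
+ "```python\n",
+ "total = sum(p.numel() for p in model.parameters())\n",
+ "trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
+ "print(f\"trainable parameters: {trainable} / {total}\")  # the frozen CNN feature extractor no longer counts as trainable\n",
+ "```"
+ ]
+ },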
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lD4aGhQM0K-D"
+ },
+ "source": [
+ "In a final step, we define all parameters related to training. \n",
+ "To give more explanation on some of the parameters:\n",
+ "- `group_by_length` makes training more efficient by grouping training samples of similar input length into one batch. This can significantly speed up training time by heavily reducing the overall number of useless padding tokens that are passed through the model\n",
+ "- `learning_rate` and `weight_decay` were heuristically tuned until fine-tuning has become stable. Note that those parameters strongly depend on the Common Voice dataset and might be suboptimal for other speech datasets.\n",
+ "\n",
+ "For more explanations on other parameters, one can take a look at the [docs](https://huggingface.co./transformers/master/main_classes/trainer.html?highlight=trainer#trainingarguments).\n",
+ "\n",
+ "During training, a checkpoint will be uploaded asynchronously to the hub every 400 training steps. It allows you to also play around with the demo widget even while your model is still training.\n",
+ "\n",
+ "**Note**: If one does not want to upload the model checkpoints to the hub, simply set `push_to_hub=False`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KbeKSV7uzGPP"
+ },
+ "outputs": [],
+ "source": [
+ "from transformers import TrainingArguments\n",
+ "\n",
+ "training_args = TrainingArguments(\n",
+ " output_dir=repo_name,\n",
+ " group_by_length=True,\n",
+ " per_device_train_batch_size=batch_size,\n",
+ " gradient_accumulation_steps=2,\n",
+ " evaluation_strategy=\"steps\",\n",
+ " num_train_epochs=30,\n",
+ " gradient_checkpointing=True,\n",
+ " fp16=True,\n",
+ " save_steps=500,\n",
+ " eval_steps=500,\n",
+ " logging_steps=500,\n",
+ " learning_rate=1e-4,\n",
+ " warmup_steps=300,\n",
+ " save_total_limit=1,\n",
+ " push_to_hub=True,\n",
+ ")"
+ ]
+ },
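+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a rough sanity check, the total number of optimization steps reported by the `Trainer` can be estimated from the dataset size, the effective batch size and the number of epochs (the example count of 20916 is taken from the training log further down):\n",
+ "\n",
+ "```python\n",
+ "num_examples = 20916                   # training samples, see the training log below\n",
+ "effective_batch_size = 16 * 2          # per_device_train_batch_size * gradient_accumulation_steps\n",
+ "steps_per_epoch = -(-num_examples // effective_batch_size)  # ceiling division -> 654\n",
+ "print(steps_per_epoch * 30)            # num_train_epochs -> 19620 optimization steps\n",
+ "```"
+ ]
+ },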
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 140
+ },
+ "id": "-ptmRKjAd4Vr",
+ "outputId": "66187b79-442b-4fc6-d6a2-85bdb3d77e9b"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ },
+ "text/plain": [
+ "' hyperparameters\\ntraining_args = TrainingArguments(\\n output_dir=repo_name,\\n group_by_length=True,\\n #per_device_train_batch_size=batch_size,\\n gradient_accumulation_steps=2,\\n evaluation_strategy=\"steps\",\\n num_train_epochs=5,\\n per_device_eval_batch_size=\"8\",\\n per_device_train_batch_size=\"32\",\\n gradient_checkpointing=True,\\n fp16=True,\\n save_steps=100,\\n eval_steps=100,\\n logging_steps=100,\\n learning_rate=1e-4,\\n warmup_steps=300,\\n save_total_limit=2,\\n push_to_hub=True,\\n freeze_feature_extractor=True,\\n save_total_limit=\"1\",\\n feat_proj_dropout=\"0.04\",\\n layerdrop=\"0.041\",\\n attention_dropout=\"0.094\",\\n activation_dropout=\"0.055\",\\n hidden_dropout=\"0.047\",\\n mask_time_prob=\"0.4\",\\n dataloader_num_workers=\"8\"\\n)\\n'"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 43
+ }
+ ],
+ "source": [
+ "''' hyperparameters\n",
+ "training_args = TrainingArguments(\n",
+ " output_dir=repo_name,\n",
+ " group_by_length=True,\n",
+ " #per_device_train_batch_size=batch_size,\n",
+ " gradient_accumulation_steps=2,\n",
+ " evaluation_strategy=\"steps\",\n",
+ " num_train_epochs=5,\n",
+ " per_device_eval_batch_size=\"8\",\n",
+ " per_device_train_batch_size=\"32\",\n",
+ " gradient_checkpointing=True,\n",
+ " fp16=True,\n",
+ " save_steps=100,\n",
+ " eval_steps=100,\n",
+ " logging_steps=100,\n",
+ " learning_rate=1e-4,\n",
+ " warmup_steps=300,\n",
+ " save_total_limit=2,\n",
+ " push_to_hub=True,\n",
+ " freeze_feature_extractor=True,\n",
+ " save_total_limit=\"1\",\n",
+ " feat_proj_dropout=\"0.04\",\n",
+ " layerdrop=\"0.041\",\n",
+ " attention_dropout=\"0.094\",\n",
+ " activation_dropout=\"0.055\",\n",
+ " hidden_dropout=\"0.047\",\n",
+ " mask_time_prob=\"0.4\",\n",
+ " dataloader_num_workers=\"8\"\n",
+ ")\n",
+ "'''"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OsW-WZcL1ZtN"
+ },
+ "source": [
+ "Now, all instances can be passed to Trainer and we are ready to start training!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "rY7vBmFCPFgC",
+ "outputId": "63d63141-ac40-4d03-d984-726733f8ca00"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "/content/wav2vec2-xls-r-300m-tr-CV8-v1 is already a clone of https://huggingface.co./emre/wav2vec2-xls-r-300m-tr-CV8-v1. Make sure you pull the latest changes with `repo.git_pull()`.\n",
+ "Using amp fp16 backend\n"
+ ]
+ }
+ ],
+ "source": [
+ "from transformers import Trainer\n",
+ "\n",
+ "trainer = Trainer(\n",
+ " model=model,\n",
+ " data_collator=data_collator,\n",
+ " args=training_args,\n",
+ " compute_metrics=compute_metrics,\n",
+ " train_dataset=common_voice_train,\n",
+ " eval_dataset=common_voice_test,\n",
+ " tokenizer=processor.feature_extractor,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UoXBx1JAA0DX"
+ },
+ "source": [
+ "\n",
+ "\n",
+ "---\n",
+ "\n",
+ "${}^1$ To allow models to become independent of the speaker rate, in CTC, consecutive tokens that are identical are simply grouped as a single token. However, the encoded labels should not be grouped when decoding since they don't correspond to the predicted tokens of the model, which is why the `group_tokens=False` parameter has to be passed. If we wouldn't pass this parameter a word like `\"hello\"` would incorrectly be encoded, and decoded as `\"helo\"`.\n",
+ "\n",
+ "${}^2$ The blank token allows the model to predict a word, such as `\"hello\"` by forcing it to insert the blank token between the two l's. A CTC-conform prediction of `\"hello\"` of our model would be `[PAD] [PAD] \"h\" \"e\" \"e\" \"l\" \"l\" [PAD] \"l\" \"o\" \"o\" [PAD]`."
+ ]
+ },
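+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The effect of CTC-style grouping from footnote ${}^1$ can be reproduced directly with the tokenizer (a toy sketch; it assumes the characters below are part of the vocabulary built earlier):\n",
+ "\n",
+ "```python\n",
+ "tokens = [\"h\", \"e\", \"e\", \"l\", \"l\", processor.tokenizer.pad_token, \"l\", \"o\", \"o\"]\n",
+ "ids = processor.tokenizer.convert_tokens_to_ids(tokens)\n",
+ "\n",
+ "print(processor.tokenizer.decode(ids))                      # grouped (CTC-style): \"hello\"\n",
+ "print(processor.tokenizer.decode(ids, group_tokens=False))  # ungrouped, as used for the labels: \"heellloo\"\n",
+ "```"
+ ]
+ },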
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rpvZHM1xReIW"
+ },
+ "source": [
+ "### Training"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "j-3oKSzZ1hGq"
+ },
+ "source": [
+ "Training will take multiple hours depending on the GPU allocated to this notebook. Every `save_steps`, the current checkpoint will be uploaded to the Hub."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 1000
+ },
+ "id": "d2G6z5RGAq5p",
+ "outputId": "f385f8ee-aba8-4945-db45-d1f5ef6682e9"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running training *****\n",
+ " Num examples = 20916\n",
+ " Num Epochs = 30\n",
+ " Instantaneous batch size per device = 16\n",
+ " Total train batch size (w. parallel, distributed & accumulation) = 32\n",
+ " Gradient Accumulation steps = 2\n",
+ " Total optimization steps = 19620\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py:158: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)\n",
+ " tensor = as_tensor(value)\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n"
+ ]
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/html": [
+ "\n",
+ "
\n",
+ " "
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-500/preprocessor_config.json\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/preprocessor_config.json\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-1500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-2500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-3500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-4500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-5500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-6500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-7500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-8500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-9500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-10500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-11500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-12500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13500\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13500/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13500/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13500/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13000] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n",
+ "Saving model checkpoint to wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-14000\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-14000/config.json\n",
+ "Model weights saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-14000/pytorch_model.bin\n",
+ "Configuration saved in wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-14000/preprocessor_config.json\n",
+ "Deleting older checkpoint [wav2vec2-xls-r-300m-tr-CV8-v1/checkpoint-13500] due to args.save_total_limit\n",
+ "/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:882: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
+ " return (input_length - kernel_size) // stride + 1\n",
+ "The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
+ "***** Running Evaluation *****\n",
+ " Num examples = 8339\n",
+ " Batch size = 8\n"
+ ]
+ }
+ ],
+ "source": [
+ "trainer.train()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "kPNxoqifM7R3"
+ },
+ "source": [
+ "You can now upload the final result of the training to the Hub. Just execute this instruction:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "FKgqc1FhAxAP",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 165
+ },
+ "outputId": "8b001341-0116-4451-b5fc-92bfc3beb4e2"
+ },
+ "outputs": [
+ {
+ "output_type": "error",
+ "ename": "NameError",
+ "evalue": "ignored",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
+ "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mtrainer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpush_to_hub\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
+ "\u001b[0;31mNameError\u001b[0m: name 'trainer' is not defined"
+ ]
+ }
+ ],
+ "source": [
+ "trainer.push_to_hub()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gKSOruDqNJxO"
+ },
+ "source": [
+ "You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier \"your-username/the-name-you-picked\" so for instance:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2x-GybJDNQoH"
+ },
+ "source": [
+ "```python\n",
+ "from transformers import AutoModelForCTC, Wav2Vec2Processor\n",
+ "\n",
+ "model = AutoModelForCTC.from_pretrained(\"patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab\")\n",
+ "processor = Wav2Vec2Processor.from_pretrained(\"patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab\")\n",
+ "```"
+ ]
+ },
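+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To quickly check the fine-tuned model on a single test sample, a sketch along the following lines can be used (not executed here; it assumes the processed `common_voice_test` from above is still in memory and that the checkpoint expects 16 kHz audio):\n",
+ "\n",
+ "```python\n",
+ "import torch\n",
+ "\n",
+ "input_dict = processor(common_voice_test[0][\"input_values\"], sampling_rate=16_000,\n",
+ "                       return_tensors=\"pt\", padding=True)\n",
+ "\n",
+ "with torch.no_grad():\n",
+ "    logits = model(input_dict.input_values.to(model.device)).logits\n",
+ "\n",
+ "pred_ids = torch.argmax(logits, dim=-1)[0]\n",
+ "print(\"Prediction:\", processor.decode(pred_ids))\n",
+ "```"
+ ]
+ },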
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "RHIVc44_fY2N"
+ },
+ "source": [
+ "To fine-tune larger models on larger datasets using CTC loss, one should take a look at the official speech-recognition examples [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition#connectionist-temporal-classification-without-language-model-ctc-wo-lm) 🤗."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "8Yj_7_SS2XMb"
+ },
+ "outputs": [],
+ "source": [
+ "import argparse\n",
+ "import re\n",
+ "from typing import Dict\n",
+ "\n",
+ "import torch\n",
+ "from datasets import Audio, Dataset, load_dataset, load_metric\n",
+ "\n",
+ "from transformers import AutoFeatureExtractor, pipeline\n",
+ "\n",
+ "\n",
+ "def log_results(result: Dataset, args: Dict[str, str]):\n",
+ " \"\"\"DO NOT CHANGE. This function computes and logs the result metrics.\"\"\"\n",
+ "\n",
+ " log_outputs = args.log_outputs\n",
+ " dataset_id = \"_\".join(args.dataset.split(\"/\") + [args.config, args.split])\n",
+ "\n",
+ " # load metric\n",
+ " wer = load_metric(\"wer\")\n",
+ " cer = load_metric(\"cer\")\n",
+ "\n",
+ " # compute metrics\n",
+ " wer_result = wer.compute(references=result[\"target\"], predictions=result[\"prediction\"])\n",
+ " cer_result = cer.compute(references=result[\"target\"], predictions=result[\"prediction\"])\n",
+ "\n",
+ " # print & log results\n",
+ " result_str = f\"WER: {wer_result}\\n\" f\"CER: {cer_result}\"\n",
+ " print(result_str)\n",
+ "\n",
+ " with open(f\"{dataset_id}_eval_results.txt\", \"w\") as f:\n",
+ " f.write(result_str)\n",
+ "\n",
+ " # log all results in text file. Possibly interesting for analysis\n",
+ " if log_outputs is not None:\n",
+ " pred_file = f\"log_{dataset_id}_predictions.txt\"\n",
+ " target_file = f\"log_{dataset_id}_targets.txt\"\n",
+ "\n",
+ " with open(pred_file, \"w\") as p, open(target_file, \"w\") as t:\n",
+ "\n",
+ " # mapping function to write output\n",
+ " def write_to_file(batch, i):\n",
+ " p.write(f\"{i}\" + \"\\n\")\n",
+ " p.write(batch[\"prediction\"] + \"\\n\")\n",
+ " t.write(f\"{i}\" + \"\\n\")\n",
+ " t.write(batch[\"target\"] + \"\\n\")\n",
+ "\n",
+ " result.map(write_to_file, with_indices=True)\n",
+ "\n",
+ "\n",
+ "def normalize_text(text: str) -> str:\n",
+ " \"\"\"DO ADAPT FOR YOUR USE CASE. this function normalizes the target text.\"\"\"\n",
+ "\n",
+ " chars_to_ignore_regex = '[,?.!\\-\\;\\:\"“%‘”�—’…–]' # noqa: W605 IMPORTANT: this should correspond to the chars that were ignored during training\n",
+ "\n",
+ " text = re.sub(chars_to_ignore_regex, \"\", text.lower())\n",
+ "\n",
+ "    # In addition, we normalize the target text, e.g. removing newline characters and collapsing repeated whitespace.\n",
+ "    # Note that the order is important here!\n",
+ "    token_sequences_to_ignore = [\"\\n\\n\", \"\\n\", \"   \", \"  \"]\n",
+ "\n",
+ " for t in token_sequences_to_ignore:\n",
+ " text = \" \".join(text.split(t))\n",
+ "\n",
+ " return text\n",
+ "\n",
+ "\n",
+ "def main(args):\n",
+ " # load dataset\n",
+ " dataset = load_dataset(args.dataset, args.config, split=args.split, use_auth_token=True)\n",
+ "\n",
+ "    # for testing: uncomment the next line to only process the first 10 examples\n",
+ " # dataset = dataset.select(range(10))\n",
+ "\n",
+ " # load processor\n",
+ " feature_extractor = AutoFeatureExtractor.from_pretrained(args.model_id)\n",
+ " sampling_rate = feature_extractor.sampling_rate\n",
+ "\n",
+ " # resample audio\n",
+ " dataset = dataset.cast_column(\"audio\", Audio(sampling_rate=sampling_rate))\n",
+ "\n",
+ " # load eval pipeline\n",
+ " if args.device is None:\n",
+ " args.device = 0 if torch.cuda.is_available() else -1\n",
+ " asr = pipeline(\"automatic-speech-recognition\", model=args.model_id, device=args.device)\n",
+ "\n",
+ " # map function to decode audio\n",
+ " def map_to_pred(batch):\n",
+ " prediction = asr(\n",
+ " batch[\"audio\"][\"array\"], chunk_length_s=args.chunk_length_s, stride_length_s=args.stride_length_s\n",
+ " )\n",
+ "\n",
+ " batch[\"prediction\"] = prediction[\"text\"]\n",
+ " batch[\"target\"] = normalize_text(batch[\"sentence\"])\n",
+ " return batch\n",
+ "\n",
+ " # run inference on all examples\n",
+ " result = dataset.map(map_to_pred, remove_columns=dataset.column_names)\n",
+ "\n",
+ "    # compute metrics and log the results\n",
+ "    # do not change the function below\n",
+ " log_results(result, args)\n",
+ "\n",
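+ "# The command-line entry point below is wrapped in a triple-quoted string so it\n",
+ "# does not run inside the notebook; it is kept for reference when using this\n",
+ "# code as a stand-alone evaluation script.\n",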
+ "'''\n",
+ "if __name__ == \"__main__\":\n",
+ " parser = argparse.ArgumentParser()\n",
+ "\n",
+ " parser.add_argument(\n",
+ " \"--model_id\", type=str, required=True, help=\"Model identifier. Should be loadable with 🤗 Transformers\"\n",
+ " )\n",
+ " parser.add_argument(\n",
+ " \"--dataset\",\n",
+ " type=str,\n",
+ " required=True,\n",
+ " help=\"Dataset name to evaluate the `model_id`. Should be loadable with 🤗 Datasets\",\n",
+ " )\n",
+ " parser.add_argument(\n",
+ " \"--config\", type=str, required=True, help=\"Config of the dataset. *E.g.* `'en'` for Common Voice\"\n",
+ " )\n",
+ " parser.add_argument(\"--split\", type=str, required=True, help=\"Split of the dataset. *E.g.* `'test'`\")\n",
+ " parser.add_argument(\n",
+ "        \"--chunk_length_s\", type=float, default=None, help=\"Chunk length in seconds. Defaults to None (no chunking).\"\n",
+ " )\n",
+ " parser.add_argument(\n",
+ "        \"--stride_length_s\", type=float, default=None, help=\"Stride of the audio chunks in seconds. Defaults to None.\"\n",
+ " )\n",
+ " parser.add_argument(\n",
+ " \"--log_outputs\", action=\"store_true\", help=\"If defined, write outputs to log file for analysis.\"\n",
+ " )\n",
+ " parser.add_argument(\n",
+ " \"--device\",\n",
+ " type=int,\n",
+ " default=None,\n",
+ "        help=\"The device to run the pipeline on. -1 for CPU, 0 for the first GPU and so on. Defaults to GPU 0 if available, otherwise CPU.\",\n",
+ " )\n",
+ " args = parser.parse_args()\n",
+ "\n",
+ "    main(args)\n",
+ "'''\n",
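+ "\n",
+ "# A sketch (not part of the original script): run the evaluation from inside the\n",
+ "# notebook by building the argument namespace by hand and calling `main` directly.\n",
+ "# All values below are placeholders - adapt them to your own fine-tuned model and\n",
+ "# Common Voice language before flipping RUN_EVAL to True.\n",
+ "RUN_EVAL = False  # set to True to actually launch the evaluation\n",
+ "\n",
+ "if RUN_EVAL:\n",
+ "    eval_args = argparse.Namespace(\n",
+ "        model_id=\"your-username/the-name-you-picked\",  # placeholder Hub repo id\n",
+ "        dataset=\"common_voice\",\n",
+ "        config=\"tr\",  # placeholder: the Common Voice language code\n",
+ "        split=\"test\",\n",
+ "        chunk_length_s=None,\n",
+ "        stride_length_s=None,\n",
+ "        log_outputs=True,\n",
+ "        device=None,  # auto-selects GPU 0 if available, otherwise CPU\n",
+ "    )\n",
+ "    main(eval_args)"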
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "collapsed_sections": [],
+ "machine_shape": "hm",
+ "name": "Emre-Turkish-Fine-Tune Multi-Lingual Speech Recognition Model with 🤗 Transformers using CTC.ipynb",
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "widgets": {
+ "application/vnd.jupyter.widget-state+json": {
+ "f806b26119884e688585835e33bd9cda": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_name": "VBoxModel",
+ "model_module_version": "1.5.0",
+ "state": {
+ "_view_name": "VBoxView",
+ "_dom_classes": [],
+ "_model_name": "VBoxModel",
+ "_view_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_view_count": null,
+ "_view_module_version": "1.5.0",
+ "box_style": "",
+ "layout": "IPY_MODEL_62281d26fca2464b891e4e8dced07110",
+ "_model_module": "@jupyter-widgets/controls",
+ "children": [
+ "IPY_MODEL_e8ed03cfa00c4941a2351dde3c2e4cb7",
+ "IPY_MODEL_5f40b4c187664350ad4f7bce35518d47",
+ "IPY_MODEL_400d6ff9e84d476cadeaddfc57ba7f31",
+ "IPY_MODEL_68ab1ecb6d724fc29777d7216081a667",
+ "IPY_MODEL_4cc4957451364dfab93f760d0f8cd0b6"
+ ]
+ }
+ },
+ "62281d26fca2464b891e4e8dced07110": {
+ "model_module": "@jupyter-widgets/base",
+ "model_name": "LayoutModel",
+ "model_module_version": "1.2.0",
+ "state": {
+ "_view_name": "LayoutView",
+ "grid_template_rows": null,
+ "right": null,
+ "justify_content": null,
+ "_view_module": "@jupyter-widgets/base",
+ "overflow": null,
+ "_model_module_version": "1.2.0",
+ "_view_count": null,
+ "flex_flow": "column",
+ "width": "50%",
+ "min_width": null,
+ "border": null,
+ "align_items": "center",
+ "bottom": null,
+ "_model_module": "@jupyter-widgets/base",
+ "top": null,
+ "grid_column": null,
+ "overflow_y": null,
+ "overflow_x": null,
+ "grid_auto_flow": null,
+ "grid_area": null,
+ "grid_template_columns": null,
+ "flex": null,
+ "_model_name": "LayoutModel",
+ "justify_items": null,
+ "grid_row": null,
+ "max_height": null,
+ "align_content": null,
+ "visibility": null,
+ "align_self": null,
+ "height": null,
+ "min_height": null,
+ "padding": null,
+ "grid_auto_rows": null,
+ "grid_gap": null,
+ "max_width": null,
+ "order": null,
+ "_view_module_version": "1.2.0",
+ "grid_template_areas": null,
+ "object_position": null,
+ "object_fit": null,
+ "grid_auto_columns": null,
+ "margin": null,
+ "display": "flex",
+ "left": null
+ }
+ },
+ "e8ed03cfa00c4941a2351dde3c2e4cb7": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_name": "HTMLModel",
+ "model_module_version": "1.5.0",
+ "state": {
+ "_view_name": "HTMLView",
+ "style": "IPY_MODEL_32c43163ae214a73871c7952f91c1b79",
+ "_dom_classes": [],
+ "description": "",
+ "_model_name": "HTMLModel",
+ "placeholder": "",
+ "_view_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "value": "
\n\n \nCopy a token from your Hugging Face tokens page and paste it below.\n \nImmediately click login after copying your token or it might be stored in plain text in this notebook file.\n