Whisper ASR for Kyrgyz Language is an automatic speech recognition (ASR) solution customized for the Kyrgyz language. It is based on the pre-trained Whisper model and has been fine-tuned to accurately transcribe Kyrgyz speech, taking into account its specific phonetic intricacies.

To run the model, first install the dependencies (version specifiers are quoted so the shell does not treat `>=` as a redirect):

```bash
pip install "datasets>=2.6.1"
pip install git+https://github.com/huggingface/transformers
pip install librosa
pip install "evaluate>=0.30"
pip install jiwer
pip install gradio==3.50.2
```
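The list above pulls in `evaluate` and `jiwer`, which are typically used to score transcriptions by word error rate (WER): the word-level edit distance between hypothesis and reference, divided by the number of reference words. A stdlib-only sketch of the metric, for illustration — the notebook itself would call the installed libraries:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    if not ref:
        return float(len(hyp) > 0)
    # Levenshtein distance over words, using a single rolling row.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,          # deletion
                      d[j - 1] + 1,      # insertion
                      prev + (r != h))   # substitution (free on a match)
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)
```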
Linking the notebook to the Hub is straightforward: simply enter your Hub authentication token when prompted.

```python
from huggingface_hub import notebook_login

notebook_login()
```
Now that we've fine-tuned our model, we can build a demo to show off its ASR capabilities! We'll use the 🤗 Transformers `pipeline`, which takes care of the entire ASR pipeline, from pre-processing the audio inputs to decoding the model predictions. We'll build our interactive demo with Gradio, arguably the most straightforward way of building machine learning demos; with Gradio, we can build a demo in a matter of minutes!

Running the example below will launch a Gradio demo where we can record speech through the microphone of our computer and have our fine-tuned Whisper model transcribe the corresponding text:

```python
from transformers import pipeline
import gradio as gr

iface = gr.Interface(
    # ... (pipeline setup and interface arguments omitted here)
)

iface.launch()
```
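The body of the demo between the imports and `iface.launch()` is elided above. A minimal sketch of what such a Whisper microphone demo typically looks like — the checkpoint id `openai/whisper-small` and the interface options here are assumptions, not the repository's actual values:

```python
# Hedged reconstruction of the demo body; checkpoint and options are placeholders.

def transcribe(audio, pipe):
    """Map a recorded audio filepath to text using an ASR pipeline callable."""
    return pipe(audio)["text"]

def main():
    from transformers import pipeline
    import gradio as gr

    # The notebook would pass the fine-tuned Kyrgyz checkpoint's Hub id here.
    pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small")

    iface = gr.Interface(
        fn=lambda audio: transcribe(audio, pipe),
        inputs=gr.Audio(source="microphone", type="filepath"),  # Gradio 3.x API
        outputs="text",
        title="Whisper ASR for Kyrgyz",
    )
    iface.launch()
```

Calling `main()` downloads the (placeholder) checkpoint and serves the app; `transcribe` is kept separate so it can be exercised with any callable that returns a `{"text": ...}` dict.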