Update README.md
---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
- as
- bn
- gu
- hi
- kn
- ks
- or
- ml
- mr
- ne
- pa
- sa
- sd
- ta
- te
- ur
- om
pipeline_tag: text-to-speech
inference: false
datasets:
- ai4b-hf/GLOBE-annotated
---

<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Indic Parler-TTS

<a target="_blank" href="https://huggingface.co/spaces/PHBJT/multi_parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>

**Indic Parler-TTS** is a multilingual Indic extension of [Parler-TTS Mini](https://huggingface.co/parler-tts/parler-tts-mini-v1.1).

It is a fine-tuned version, trained on an **8,385-hour** multilingual Indic and English dataset.

**Indic Parler-TTS Mini** can officially speak in 21 Indic languages and in English, making it comprehensive for regional language technologies. The **22 languages** supported are: Assamese, Bengali, Bodo, Chhattisgarhi, Dogri, English, Gujarati, Hindi, Kannada, Konkani, Maithili, Malayalam, Manipuri, Marathi, Nepali, Odia, Sanskrit, Santali, Sindhi, Tamil, Telugu, and Urdu.

Thanks to its **better prompt tokenizer**, it can easily be extended to other languages. This tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training.
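
To illustrate, here is a minimal sketch that runs the prompt tokenizer on text in a script outside the supported list, using the same checkpoint as the usage examples below. Byte fallback means out-of-vocabulary characters decompose into byte-level pieces instead of an unknown token, so the text still encodes losslessly; the exact printed pieces may differ on your version.

```py
from transformers import AutoTokenizer

# Prompt tokenizer from the checkpoint used throughout this card.
tokenizer = AutoTokenizer.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3")

text = "မင်္ဂလာပါ"  # "Hello" in Burmese, a script outside the supported list
print(tokenizer.tokenize(text))  # unseen characters surface as byte pieces such as '<0xE1>'

# Because of byte fallback, encoding is lossless and decoding round-trips.
ids = tokenizer(text).input_ids
print(tokenizer.decode(ids, skip_special_tokens=True))
```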

🚨 This work is the result of a collaboration between the **HuggingFace audio team** and the **[AI4Bharat](https://ai4bharat.iitm.ac.in/) team**. 🚨

## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🌍 Switching languages](#🌍-switching-languages)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [📐 Evaluation](#📐-evaluation)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)

## 🛠️ Usage

🚨 Unlike previous versions of Parler-TTS, here we use two tokenizers - one for the prompt and one for the description. 🚨

### 👨💻 Installation

Using Parler-TTS is as simple as "bonjour". Simply install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

### 🎲 Random voice

**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3").to(device)
tokenizer = AutoTokenizer.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Hey, how are you doing today?"
description = "A female speaker with a British accent delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

# The description and the prompt each go through their own tokenizer.
input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("indic_tts_out.wav", audio_arr, model.config.sampling_rate)
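
The same two-tokenizer pattern extends to batched generation. The sketch below follows the batching recipe from the [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md): pad both inputs, pass the attention masks, and trim each output to its reported length. The `audios_length` field and the left-padding choice come from that guide; verify both against your installed `parler-tts` version.

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3").to(device)
# Left-padding keeps the end of each prompt aligned for generation.
tokenizer = AutoTokenizer.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3", padding_side="left")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompts = ["Hey, how are you doing today?", "The weather is lovely today."]
descriptions = [
    "A female speaker delivers an expressive speech with moderate speed. Very clear audio.",
    "A male speaker with a low-pitched voice speaks slowly. Very clear audio.",
]

# Pad to a common length and keep the attention masks so padding is ignored.
inputs = description_tokenizer(descriptions, return_tensors="pt", padding=True).to(device)
prompts_tokenized = tokenizer(prompts, return_tensors="pt", padding=True).to(device)

generation = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    prompt_input_ids=prompts_tokenized.input_ids,
    prompt_attention_mask=prompts_tokenized.attention_mask,
    return_dict_in_generate=True,
)

# Outputs are padded to the longest audio in the batch; trim before saving.
for i in range(len(prompts)):
    audio = generation.sequences[i, : generation.audios_length[i]]
    sf.write(f"indic_tts_out_{i}.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```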

### 🌍 Switching languages

The model automatically adapts to the language it detects in the prompt. You don't need to specify the language you want to use. For example, to switch to Hindi, simply use a Hindi prompt:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3").to(device)
tokenizer = AutoTokenizer.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "अरे, तुम आज कैसे हो?"  # "Hey, how are you doing today?" in Hindi
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("indic_tts_out.wav", audio_arr, model.config.sampling_rate)
```

### 🎯 Using a specific speaker

To ensure speaker consistency across generations, this checkpoint was also trained on pre-determined speakers, characterized by name (e.g. Rohit, Karan, Leela, Maya, Sita, ...).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Divya's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Same Indic checkpoint and two-tokenizer setup as in the examples above.
model = ParlerTTSForConditionalGeneration.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3").to(device)
tokenizer = AutoTokenizer.from_pretrained("ai4b-hf/indic-parler-tts-pretrained-v3")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "अरे, तुम आज कैसे हो?"
description = "Divya's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("indic_tts_out.wav", audio_arr, model.config.sampling_rate)
```

Here is the list of speakers:

| Language | Male | Female |
|---------------|----------|----------|
| Assamese | Amit | Aditi |
| Bengali | Arjun | Sita |
| Bodo | Bikram | Maya |
| Chhattisgarhi | Bhanu | Champa |
| Dogri | Karan | Leela |
| English | Thoma | Mary |
| Gujarati | Yash | Neha |
| Hindi | Rohit | Divya |
| Kannada | Suresh | Anu |
| Kashmiri | Rohit | Shabnam |
| Konkani | Deepak | Sunita |
| Maithili | Anil | Divya |
| Malayalam | Harish | Anjali |
| Manipuri | Ranjit | Laishram |
| Marathi | Sanjay | Sunita |
| Nepali | Ram | Amrita |
| Odia | Manas | Debjani |
| Punjabi | Gurpreet | Divjot |
| Sanskrit | Aryan | Vasudha |
| Santali | Arjun | Pushpa |
| Sindhi | Rajesh | Rekha |
| Tamil | NA | Jaya |
| Telugu | Prakash | Lalitha |
| Urdu | Rohit | Zainab |

**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming! A minimal quick-start sketch follows this list.
* Include the term "very clear audio" to generate the highest-quality audio, and "very noisy audio" for high levels of background noise.
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt.
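
As a minimal quick-start for the first tip (the inference guide remains the reference), the sketch below loads the model with SDPA attention and optionally compiles the forward pass. Half precision is an assumption that your GPU supports it.

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# attn_implementation is the standard transformers loading flag for SDPA.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "ai4b-hf/indic-parler-tts-pretrained-v3",
    attn_implementation="sdpa",
    torch_dtype=dtype,
).to(device)

# Optional: compile the forward pass. The first generations serve as warm-up
# and are slow; subsequent calls are faster.
model.forward = torch.compile(model.forward, mode="default")
```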

## 📐 Evaluation

## Motivation

Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.

Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as future checkpoints.

## Training dataset

- **Description**:
  The model was trained on an internal **Indic-Parler-Dataset**, a large-scale multilingual speech corpus designed to train the **Indic Parler-TTS** model. It provides comprehensive coverage of 24 languages: all 22 official languages of India, along with Chhattisgarhi and English, making it an invaluable resource for speech technologies focused on the subcontinent.

- **Key Statistics**:

  | Dataset | Duration (hrs) | Languages Covered | No. of Utterances | License |
  |:---------------:|:--------------:|:-----------------:|:-----------------:|:------------:|
  | GLOBE | 535.0 | 1 | 581,725 | CC V1 |
  | IndicTTS | 382.0 | 12 | 220,606 | CC BY 4.0 |
  | IndicVoices | 2,651.0 | 22 | 1,121,104 | CC BY 4.0 |
  | IndicVoices-R | 4,067.0 | 22 | 1,749,066 | CC BY 4.0 |
  | LIMMITS | 568.0 | 7 | 246,008 | CC BY 4.0 |
  | Rasa | 288.0 | 9 | 155,734 | CC BY 4.0 |

- **Languages Covered**:
  The dataset supports the **22 official languages** of India, along with English and Chhattisgarhi, making it comprehensive for regional language technologies. These languages include Assamese, Bengali, Bodo, Chhattisgarhi, Dogri, English, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Maithili, Malayalam, Manipuri, Marathi, Nepali, Odia, Punjabi, Sanskrit, Santali, Sindhi, Tamil, Telugu, and Urdu.

- **Language-Wise Data Breakdown**:

  | Language | Duration (hrs) | No. of Utterances |
  |:---------------:|:--------------:|:-----------------:|
  | Assamese | 563.87 | 256,102 |
  | Bengali | 561.18 | 234,663 |
  | Bodo | 637.79 | 320,584 |
  | Chhattisgarhi | 80.11 | 38,148 |
  | Dogri | 263.22 | 109,348 |
  | English | 765.56 | 711,196 |
  | Gujarati | 31.68 | 11,845 |
  | Hindi | 396.75 | 162,343 |
  | Kannada | 364.26 | 154,994 |
  | Kashmiri | 216.17 | 93,343 |
  | Konkani | 205.16 | 91,804 |
  | Maithili | 473.07 | 197,886 |
  | Malayalam | 394.33 | 168,281 |
  | Manipuri | 112.47 | 57,068 |
  | Marathi | 333.32 | 142,925 |
  | Nepali | 542.69 | 244,007 |
  | Odia | 264.16 | 105,469 |
  | Punjabi | 280.48 | 109,795 |
  | Sanskrit | 143.12 | 63,908 |
  | Santali | 298.19 | 148,184 |
  | Sindhi | 66.62 | 27,578 |
  | Tamil | 561.92 | 236,293 |
  | Telugu | 560.76 | 213,858 |
  | Urdu | 268.46 | 112,932 |
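
The annotated GLOBE source listed in this card's metadata is available on the Hub. As a sketch (the split name and column layout are assumptions; check the dataset page before relying on them), you can stream a sample with 🤗 Datasets:

```py
from datasets import load_dataset

# Stream rather than download; the "train" split name is an assumption.
dataset = load_dataset("ai4b-hf/GLOBE-annotated", split="train", streaming=True)
sample = next(iter(dataset))
print(sample.keys())  # inspect the available columns before use
```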

## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
  title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author={Dan Lyth and Simon King},
  year={2024},
  eprint={2402.01912},
  archivePrefix={arXiv},
  primaryClass={cs.SD}
}
```

## License

This model is permissively licensed under the Apache 2.0 license.
|