---
language:
- ca
tags:
- matcha-tts
- acoustic modelling
- speech
- multispeaker
pipeline_tag: text-to-speech
datasets:
- projecte-aina/festcat_trimmed_denoised
- projecte-aina/openslr-slr69-ca-trimmed-denoised
license: apache-2.0
---

# 🍵 Matxa-TTS Catalan Multispeaker

## Table of Contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to get started with the model](#how-to-get-started-with-the-model)
- [Training details](#training-details)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Additional information](#additional-information)

</details>

## Model Description

🍵 **Matxa-TTS** is based on **Matcha-TTS**, an encoder-decoder architecture designed for fast acoustic modelling in TTS.
The encoder combines a text encoder with a phoneme duration predictor, which together predict averaged acoustic features.
The decoder has a U-Net backbone inspired by [Grad-TTS](https://arxiv.org/pdf/2105.06337.pdf), combined with Transformer blocks; by replacing 2D CNNs with 1D CNNs, memory consumption is reduced considerably and synthesis becomes faster.

**Matxa-TTS** is a non-autoregressive model trained with optimal-transport conditional flow matching (OT-CFM).
This yields an ODE-based decoder capable of high output quality in fewer synthesis steps than models trained with score matching.
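
To make the ODE-based sampling concrete, the sketch below shows fixed-step Euler integration of a learned vector field, which is roughly what a flow-matching decoder does at synthesis time; `toy_field` is a purely illustrative stand-in for the trained decoder network, not the actual Matxa-TTS code.

```python
import torch

def euler_ode_sample(vector_field, x0, n_steps=10):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with fixed Euler steps.

    A flow-matching decoder works this way at inference: start from noise
    and follow the learned vector field for a small number of steps,
    which is why synthesis needs few function evaluations.
    """
    x, dt = x0, 1.0 / n_steps
    t = torch.zeros(x0.shape[0], device=x0.device)
    for _ in range(n_steps):
        x = x + dt * vector_field(x, t)
        t = t + dt
    return x

# Toy vector field standing in for the trained decoder (illustrative only).
toy_field = lambda x, t: -x
mel = euler_ode_sample(toy_field, torch.randn(1, 80, 100), n_steps=10)
print(mel.shape)  # torch.Size([1, 80, 100])
```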

## Intended Uses and Limitations

This model is intended to serve as an acoustic feature generator for multispeaker text-to-speech systems in Catalan.
It has been finetuned using a Catalan phonemizer, so if it is used for other languages it may not produce intelligible samples once its output is mapped to a speech waveform.

The quality of the samples can vary depending on the speaker.
This may be due to the model's sensitivity when learning speaker-specific frequencies, as well as to the quality of the recordings available for each speaker.

## How to Get Started with the Model

### Installation

This model has been trained using the espeak-ng open-source text-to-speech software.
The espeak-ng fork containing the Catalan phonemizer can be found [here](https://github.com/projecte-aina/espeak-ng).

Create and activate a virtual environment:
```bash
python -m venv /path/to/venv
source /path/to/venv/bin/activate
```

For training and inference with Catalan Matxa-TTS you need to compile the provided espeak-ng with the Catalan phonemizer:
```bash
# Clone the espeak-ng fork with the Catalan phonemizer
git clone https://github.com/projecte-aina/espeak-ng.git

# Build and install espeak-ng into a local prefix
export PYTHON=/path/to/env/<env_name>/bin/python
cd /path/to/espeak-ng
./autogen.sh
./configure --prefix=/path/to/espeak-ng
make
make install

# Install additional Python dependencies
pip cache purge
pip install mecab-python3
pip install unidic-lite
```
Clone the repository:

```bash
git clone -b dev-cat https://github.com/langtech-bsc/Matcha-TTS.git
cd Matcha-TTS
```

Install the package from source:

```bash
pip install -e .
```


### For Inference

#### PyTorch

End-to-end speech inference can be run by pairing **Catalan Matxa-TTS** with the **alVoCat** vocoder.
Both models are loaded remotely from the Hugging Face Hub.

First, export the following environment variables to include the installed espeak-ng version:

```bash
export PYTHON=/path/to/your/venv/bin/python
export ESPEAK_DATA_PATH=/path/to/espeak-ng/espeak-ng-data
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/espeak-ng/lib
export PATH="/path/to/espeak-ng/bin:$PATH"

```
Then you can run the inference script:
```bash
cd Matcha-TTS
python3 matcha_vocos_inference.py --output_path=/output/path --text_input="Bon dia Manel, avui anem a la muntanya."

```
You can also modify the length scale (speech rate) and the temperature of the generated sample:
```bash
python3 matcha_vocos_inference.py --output_path=/output/path --text_input="Bon dia Manel, avui anem a la muntanya." --length_scale=0.8 --temperature=0.7

```
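
If you prefer to call the acoustic model from Python rather than through the script, a rough sketch along the lines of the upstream Matcha-TTS synthesis notebook is shown below. The checkpoint path, the cleaner name `"catalan_cleaners"`, the speaker index, and the exact return type of `text_to_sequence` are assumptions and may differ in the `dev-cat` fork; the resulting mel spectrogram still has to be passed through the alVoCat vocoder to obtain a waveform, as the script above does.

```python
import torch
from matcha.models.matcha_tts import MatchaTTS
from matcha.text import text_to_sequence
from matcha.utils.utils import intersperse

device = "cuda" if torch.cuda.is_available() else "cpu"
# Hypothetical local path to the released checkpoint.
model = MatchaTTS.load_from_checkpoint("matxa_multispeaker_cat.ckpt",
                                       map_location=device).eval()

text = "Bon dia Manel, avui anem a la muntanya."
seq = text_to_sequence(text, ["catalan_cleaners"])  # cleaner name assumed
seq = seq[0] if isinstance(seq, tuple) else seq     # some versions also return the cleaned text
x = torch.tensor(intersperse(seq, 0), dtype=torch.long, device=device)[None]
x_lengths = torch.tensor([x.shape[-1]], dtype=torch.long, device=device)

with torch.inference_mode():
    out = model.synthesise(
        x, x_lengths,
        n_timesteps=10,                          # number of ODE solver steps
        temperature=0.7,
        spks=torch.tensor([2], device=device),   # speaker index (assumed, 0-46)
        length_scale=0.8,                        # speech rate
    )
mel = out["mel"]  # pass this through the alVoCat vocoder to get audio
```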

#### ONNX

We also release an ONNX version of the model.
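
A minimal sketch of running that ONNX export with `onnxruntime` follows. The file name and the input names (`x`, `x_lengths`, `scales`, `spks`) follow the upstream Matcha-TTS ONNX export convention and are assumptions here, so the script first prints the actual inputs of the released graph:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical local path to the downloaded ONNX file.
session = ort.InferenceSession("matxa_multispeaker_cat.onnx",
                               providers=["CPUExecutionProvider"])

# Inspect the graph to confirm the real input names and shapes.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)

# Dummy phoneme IDs, for shape illustration only; real IDs come from the
# espeak-ng based Catalan phonemizer.
x = np.random.randint(1, 100, size=(1, 50)).astype(np.int64)
inputs = {
    "x": x,
    "x_lengths": np.array([x.shape[1]], dtype=np.int64),
    "scales": np.array([0.667, 1.0], dtype=np.float32),  # temperature, length_scale (assumed order)
    "spks": np.array([2], dtype=np.int64),                # speaker index (assumed dtype)
}
outputs = session.run(None, inputs)
print([o.shape for o in outputs])
```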

### For Training

The full training checkpoint is also released so that training can be continued or the model finetuned.
See the [repo instructions](https://github.com/langtech-bsc/Matcha-TTS/tree/dev-cat).


## Training Details

### Training data

The model was trained on two **Catalan** speech datasets:

| Dataset             | Language | Hours   | Num. Speakers   |
|---------------------|----------|---------|-----------------|
| [Festcat](https://huggingface.co./datasets/projecte-aina/festcat_trimmed_denoised)             | ca       | 22      | 11              |
| [OpenSLR69](https://huggingface.co./datasets/projecte-aina/openslr-slr69-ca-trimmed-denoised)           | ca       | 5       | 36              |
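
Both corpora are hosted on the Hugging Face Hub under the dataset IDs above, so they can be pulled directly with the `datasets` library; the split name below is an assumption, check each dataset card:

```python
from datasets import load_dataset

# Split names are assumed; see the dataset cards for the actual configuration.
festcat = load_dataset("projecte-aina/festcat_trimmed_denoised", split="train")
openslr = load_dataset("projecte-aina/openslr-slr69-ca-trimmed-denoised", split="train")

print(festcat)
print(openslr)
```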

### Training procedure

***Catalan Matxa-TTS*** was finetuned from the English multispeaker Matcha-TTS checkpoint,
which was trained on the [VCTK dataset](https://huggingface.co./datasets/vctk) and provided by the model authors.

The speaker embedding layer was re-initialized for the number of Catalan speakers (47), and the original hyperparameters were kept.
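
As an illustration of the speaker-embedding step, re-initializing the embedding table for the 47 Catalan speakers before finetuning could look roughly like the sketch below; the attribute name `spk_emb` and the embedding dimension are assumptions, not the actual names used in the Matcha-TTS code.

```python
import torch

n_catalan_speakers = 47
spk_emb_dim = 64  # assumed; use the value from the checkpoint's config

# Fresh speaker table replacing the English (VCTK) one; all other weights
# are loaded from the English checkpoint and the hyperparameters are kept.
new_spk_emb = torch.nn.Embedding(n_catalan_speakers, spk_emb_dim)
torch.nn.init.normal_(new_spk_emb.weight, mean=0.0, std=0.02)

# model.spk_emb = new_spk_emb  # attribute name assumed; attach to the loaded model
```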

### Training Hyperparameters

* batch size: 32 (x2 GPUs)
* learning rate: 1e-4
* number of speakers: 47
* n_fft: 1024
* n_feats: 80
* sample_rate: 22050
* hop_length: 256
* win_length: 1024
* f_min: 0
* f_max: 8000
* data_statistics:
  * mel_mean: -6.578195
  * mel_std: 2.538758
* number of samples: 13340
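
For reference, `mel_mean` and `mel_std` are the global mean and standard deviation of the log-mel features computed with the spectrogram settings above. A sketch of how such statistics can be computed with `torchaudio` follows; the exact log compression and corpus iteration in the Matcha-TTS data pipeline may differ.

```python
import torch
import torchaudio

mel_fn = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050, n_fft=1024, win_length=1024, hop_length=256,
    f_min=0.0, f_max=8000.0, n_mels=80, power=1.0,
    norm="slaney", mel_scale="slaney",
)

def log_mel(wav: torch.Tensor) -> torch.Tensor:
    # Clamp before the log to avoid -inf on silent frames.
    return torch.log(torch.clamp(mel_fn(wav), min=1e-5))

# Accumulate global statistics over the (22.05 kHz, mono) training audio.
total, total_sq, count = 0.0, 0.0, 0
for wav in [torch.randn(22050)]:  # stand-in for the real dataset iterator
    m = log_mel(wav)
    total += m.sum().item()
    total_sq += (m ** 2).sum().item()
    count += m.numel()

mel_mean = total / count
mel_std = (total_sq / count - mel_mean ** 2) ** 0.5
print(mel_mean, mel_std)
```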

## Evaluation

Validation losses obtained from TensorBoard at epoch 2399*:

* val_dur_loss_epoch: 0.38
* val_prior_loss_epoch: 0.97
* val_diff_loss_epoch: 2.195

(*Note that finetuning started from epoch 1864; earlier epochs correspond to the English checkpoint trained on the VCTK dataset.)

## Citation

If this code contributes to your research, please cite the work:

```
@misc{mehta2024matchatts,
      title={Matcha-TTS: A fast TTS architecture with conditional flow matching}, 
      author={Shivam Mehta and Ruibo Tu and Jonas Beskow and Éva Székely and Gustav Eje Henter},
      year={2024},
      eprint={2309.03199},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```

## Additional Information

### Author
The Language Technologies Unit from Barcelona Supercomputing Center.

### Contact
For further information, please send an email to <[email protected]>.

### Copyright
Copyright (c) 2023 by the Language Technologies Unit, Barcelona Supercomputing Center.

### License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).