Update README.md
README.md CHANGED
@@ -22,12 +22,12 @@ This model is an end-to-end deep-learning-based Kinyarwanda Text-to-Speech (TTS)
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Install Coqui's TTS library:
```
-pip install
+pip install TTS
```
Download the files from this repo, then run:

```
-tts --text "text" --model_path model.pth --
+tts --text "text" --model_path model.pth --config_path config.json --speakers_file_path speakers.pth --speaker_wav conditioning_audio.wav --out_path out.wav
```
The conditioning audio is one or more wav files used to condition a multi-speaker TTS model with a Speaker Encoder; you can give multiple file paths, in which case the d_vector is computed as their average.
# References
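Since `--speaker_wav` accepts multiple file paths, conditioning on several reference clips can look like the sketch below; the clip names are placeholders, and the remaining arguments mirror the command added in the diff.

```
# Sketch: condition on several reference clips (placeholder file names);
# the Speaker Encoder d_vector is computed as their average.
tts --text "text" \
    --model_path model.pth \
    --config_path config.json \
    --speakers_file_path speakers.pth \
    --speaker_wav clip_1.wav clip_2.wav clip_3.wav \
    --out_path out.wav
```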