Update README.md
README.md
CHANGED
@@ -1,12 +1,12 @@
 ---
 license: mit
+language:
+- en
 ---
 
-#
+# Pretrained Model of Amphion VITS
 
-We provide the
-
-A [VITS](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS) pretrained checkpoint trained on LJSpeech, which consists of 13,100 short audio clips of a single speaker and have a total length of approximately 24 hours.
+We provide the pre-trained checkpoint of [VITS](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS) trained on LJSpeech, which consists of 13,100 short audio clips of a single speaker and has a total length of approximately 24 hours.
 
 
 
@@ -45,5 +45,4 @@ sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
 --infer_output_dir ckpts/tts/vits-ljspeech/result \
 --infer_mode "single" \
 --infer_text "This is a clip of generated speech with the given text from a TTS model."
-```
-
+```
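For reference, the hunk context line and the flag lines above assemble into a single inference invocation. This is a sketch only: the script path and checkpoint directory are taken verbatim from the diff, and it assumes you run it from the root of an Amphion checkout with the `ckpts/tts/vits-ljspeech` checkpoint in place.

```shell
# Full VITS inference call assembled from the diff hunk above.
# Assumption: current directory is the Amphion repo root and the
# vits-ljspeech checkpoint has been downloaded to ckpts/tts/vits-ljspeech.
# The guard makes this a no-op outside an Amphion checkout.
if [ -f egs/tts/VITS/run.sh ]; then
  sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
      --infer_output_dir ckpts/tts/vits-ljspeech/result \
      --infer_mode "single" \
      --infer_text "This is a clip of generated speech with the given text from a TTS model."
fi
```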