Pretrained Model of Amphion FastSpeech 2
We provide the pre-trained checkpoint of FastSpeech 2 trained on LJSpeech, which consists of 13,100 short audio clips of a single speaker and has a total length of approximately 24 hours.
Quick Start
To utilize the pre-trained model, just run the following commands:
Step 1: Download the checkpoint
git lfs install
git clone https://huggingface.co./amphion/fastspeech2_ljspeech
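If git-lfs was not installed before cloning, the large checkpoint files may come down as small pointer files instead of the real binaries. The following optional check is only a sketch (the exact file names inside the repository are not listed here):
git lfs pull    # run inside fastspeech2_ljspeech; fetches the real binaries if only LFS pointers were cloned
ls -lh          # checkpoint files should be large, not ~130-byte text pointer files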
Step 2: Clone the Amphion Source Code from GitHub
git clone https://github.com/open-mmlab/Amphion.git
Step 3: Specify the checkpoint's path
Create a soft link so that Amphion can find the checkpoint downloaded in Step 1:
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../fastspeech2_ljspeech ckpts/tts/
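Optionally, you can verify that the link resolves; this assumes the checkpoint repository from Step 1 sits next to the Amphion directory, which is what the relative path above expects:
ls -l ckpts/tts/fastspeech2_ljspeech/args.json    # fails with "No such file or directory" if the link is broken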
Step 4: Inference
You can follow the inference part of this recipe to generate speech from text. For example, to synthesize a clip of speech with the text "This is a clip of generated speech with the given text from a TTS model.", just run:
sh egs/tts/FastSpeech2/run.sh --stage 3 \
--config ckpts/tts/fastspeech2_ljspeech/args.json \
--infer_expt_dir ckpts/tts/fastspeech2_ljspeech/ \
--infer_output_dir ckpts/tts/fastspeech2_ljspeech/results \
--infer_mode "single" \
--infer_text "This is a clip of generated speech with the given text from a TTS model." \
--vocoder_dir ckpts/vocoder/hifigan_ljspeech/checkpoints/
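When inference finishes, the synthesized audio is written under the directory passed to --infer_output_dir. The exact file naming is not specified here, so the following is just a quick way to locate the generated waveform:
ls -R ckpts/tts/fastspeech2_ljspeech/results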
Note: Inference with FastSpeech 2 requires a vocoder to reconstruct the waveform from the Mel spectrogram. The pre-trained Amphion HiFi-GAN vocoder that matches this Amphion FastSpeech 2 checkpoint can be downloaded here.
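As a sketch only: if the matching HiFi-GAN vocoder is also hosted on Hugging Face (the repository name below is an assumption, not confirmed by this card), it can be set up the same way as the acoustic model so that the --vocoder_dir path in Step 4 resolves:
# Run from the same parent directory used in Step 1 (assumed repository name)
git clone https://huggingface.co./amphion/hifigan_ljspeech
cd Amphion
mkdir -p ckpts/vocoder
ln -s ../../../hifigan_ljspeech ckpts/vocoder/
# The inference command expects checkpoints under ckpts/vocoder/hifigan_ljspeech/checkpoints/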