Pretrained Model of Amphion HiFi-GAN

We provide a pre-trained HiFi-GAN checkpoint trained on LJSpeech, a single-speaker dataset of 13,100 short audio clips with a total length of approximately 24 hours.

Quick Start

To use the pre-trained model, run the following commands:

Step 1: Download the checkpoint

git lfs install
git clone https://huggingface.co./amphion/hifigan_ljspeech

Step 2: Clone the Amphion source code from GitHub

git clone https://github.com/open-mmlab/Amphion.git

Step 3: Specify the checkpoint's path

Use a soft link to point Amphion to the checkpoint downloaded in Step 1 (the relative path below assumes hifigan_ljspeech and Amphion were cloned into the same parent directory):

cd Amphion
mkdir -p ckpts/vocoder
ln -s ../../../hifigan_ljspeech ckpts/vocoder/
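
To confirm the link resolves before running inference, you can dereference it; this is just a sanity check and assumes the directory layout created above:

# Fails with "No such file or directory" if the relative symlink target is wrong
ls -L ckpts/vocoder/hifigan_ljspeech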

Step 4: Inference

This HiFi-GAN vocoder is pre-trained to work with Amphion FastSpeech 2, converting the predicted Mel spectrograms into speech waveforms. You can follow the inference part of this recipe to generate speech. For example, to synthesize a clip of speech for the text "This is a clip of generated speech with the given text from a TTS model.", run:


sh egs/tts/FastSpeech2/run.sh --stage 3 \
    --config ckpts/tts/fastspeech2_ljspeech/args.json \
    --infer_expt_dir ckpts/tts/fastspeech2_ljspeech/ \
    --infer_output_dir ckpts/tts/fastspeech2_ljspeech/results \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model." \
    --vocoder_dir ckpts/vocoder/hifigan_ljspeech/checkpoints/
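
The synthesized audio should be written under the --infer_output_dir given above. The exact file naming is decided by the recipe, but assuming .wav output you can locate it with:

find ckpts/tts/fastspeech2_ljspeech/results -name "*.wav"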

Note: The pre-trained Amphion FastSpeech 2 checkpoint can be downloaded here.
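
As a sketch, the FastSpeech 2 checkpoint can be placed where the command above expects it (ckpts/tts/fastspeech2_ljspeech/) by mirroring Steps 1 and 3. The Hugging Face repository name below is an assumption; substitute the actual download from the link above:

# Run from the directory that contains Amphion/ (repository name assumed)
git clone https://huggingface.co./amphion/fastspeech2_ljspeech
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../fastspeech2_ljspeech ckpts/tts/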
