Yet another PyTorch implementation of Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. The project is largely based on these works. I made some modifications to improve the speed and performance of both training and inference.
Results generated using a WaveNet vocoder are available now!
- Pipeline for training a vocoder.
- Python >= 3.5.2
- torch >= 1.0.0
- numpy
- scipy
- pillow
- inflect
- librosa
- Unidecode
- matplotlib
- tensorboardX
Currently only the LJSpeech dataset is supported. No preprocessing is needed if you use the dataset at its original 22050 Hz sample rate.
For training with a different sample rate, you should resample the audio files yourself and modify hparams.py accordingly, as sketched below.
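A minimal resampling sketch, assuming the soundfile package is installed for writing (librosa is already a dependency); the file name and the 16000 Hz target are placeholders and should match whatever you configure in hparams.py:

```python
# Minimal resampling sketch. The file name and 16000 Hz target are
# illustrative placeholders; soundfile is an extra dependency.
import librosa
import soundfile as sf

target_sr = 16000  # must match the sample rate set in hparams.py
wav, _ = librosa.load("LJ001-0001.wav", sr=target_sr)  # load and resample
sf.write("LJ001-0001.wav", wav, target_sr)  # write the resampled audio back
```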
- For training Tacotron2, run the following command.
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models>
- For training using a pretrained model, run the following command.
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models> --ckpt_pth=<pth/to/pretrained/model>
- For using TensorBoard (optional), run the following command.
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models> --log_dir=<dir/to/logs>
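The logs written by tensorboardX can then be viewed with the standard TensorBoard CLI, pointed at the same directory:
tensorboard --logdir=<dir/to/logs>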
You can find alignment images and synthesized audio clips during training. The recording frequency and the text to synthesize can be set in hparams.py.
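For illustration only, such settings typically live as plain module-level attributes in hparams.py; the attribute names below are hypothetical, so check hparams.py for the real ones:

```python
# Hypothetical hparams.py attributes -- the names here are illustrative,
# not the actual ones used by this repo; see hparams.py for the real settings.
sample_rate = 22050                # sample rate of the training audio
iters_per_log = 1000               # how often alignments/audio are recorded
eg_text = "Tacotron2 is awesome."  # text synthesized for the logged samples
```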
- For synthesizing wav files, run the following command.
python3 inference.py --ckpt_pth=<pth/to/model> --img_pth=<pth/to/save/alignment> --wav_pth=<pth/to/save/wavs> --text=<text/to/synthesize>
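For example, a concrete invocation (all paths below are placeholders) might look like:
python3 inference.py --ckpt_pth=ckpt/ckpt_100000 --img_pth=res/align.png --wav_pth=res/output.wav --text="Hello, world."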
Pretrained models can now be downloaded from here. The hyperparameters (git commit: 301943f) used to train the pretrained models are also in the directory.
A neural vocoder is not implemented in this repository yet. For now the linear spectrogram is reconstructed directly from the mel spectrogram (via the pseudo-inverse of the mel filterbank) and Griffin-Lim is used to synthesize the waveform. A neural vocoder may be added in the future; meanwhile, you can refer to WaveNet, FFTNet, or WaveGlow.
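A minimal sketch of that pseudo-inverse + Griffin-Lim reconstruction, assuming a linear-magnitude mel spectrogram and illustrative STFT parameters (match n_fft, hop_length, and the sample rate to hparams.py):

```python
# Sketch of the pseudo-inverse + Griffin-Lim reconstruction described above.
# The STFT parameters are illustrative assumptions; use the values from
# hparams.py. If the model outputs log-mels, apply np.exp() first.
import numpy as np
import librosa

def mel_to_waveform(mel_spec, sr=22050, n_fft=1024, hop_length=256):
    # Pseudo-inverse of the mel filterbank recovers an approximate
    # linear-frequency magnitude spectrogram.
    mel_basis = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=mel_spec.shape[0])
    linear = np.maximum(1e-10, np.linalg.pinv(mel_basis) @ mel_spec)
    # Griffin-Lim iteratively estimates the phase and inverts the STFT.
    return librosa.griffinlim(linear, n_iter=60,
                              hop_length=hop_length, win_length=n_fft)
```

Newer librosa versions also provide librosa.feature.inverse.mel_to_audio, which wraps these same steps in a single call.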
You can find some samples in results or here. These samples were generated using either the pseudo-inverse + Griffin-Lim (with the provided 22 kHz pretrained model) or WaveNet (with the provided 16 kHz pretrained model).
The attention alignment is quite good now (after about 100k training steps); the following figure shows one sample.
This figure shows the mel spectrogram from the decoder without the postnet, the mel spectrogram with the postnet, and the attention alignment.
This project is largely based on the works below.