JDI-T: Jointly Trained Duration Informed Transformer for Text-To-Speech without Explicit Alignment

Dan Lim, Won Jang, Gyeonghwan O, Heayoung Park, Bongwan Kim, Jaesam Yoon


We propose the Jointly trained Duration Informed Transformer (JDI-T), a feed-forward Transformer with a duration predictor that is jointly trained without explicit alignments to generate an acoustic feature sequence from input text. Inspired by the recent success of duration-informed networks such as FastSpeech and DurIAN, we simplify their sequential, two-stage training pipeline into a single stage: instead of pre-training an autoregressive model and then using it as a phoneme duration extractor, we extract phoneme durations from the autoregressive Transformer on the fly during joint training. To the best of our knowledge, this is the first implementation that jointly trains the feed-forward Transformer without relying on a pre-trained phoneme duration extractor in a single training pipeline. We evaluate the effectiveness of the proposed model on the publicly available Korean Single speaker Speech (KSS) dataset against baseline text-to-speech (TTS) models trained with ESPnet-TTS.
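The core idea of distilling phoneme durations from an autoregressive model's attention can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy attention matrix (acoustic frames × phonemes) from a hypothetical autoregressive Transformer and counts, for each phoneme, the frames whose strongest attention falls on it.

```python
import numpy as np

# Toy attention matrix from a hypothetical autoregressive model:
# rows = acoustic frames, columns = input phonemes. Values are made up
# for illustration; a real model produces these during training.
attention = np.array([
    [0.9, 0.1, 0.0],   # frame 0 attends mostly to phoneme 0
    [0.8, 0.2, 0.0],   # frame 1 -> phoneme 0
    [0.1, 0.7, 0.2],   # frame 2 -> phoneme 1
    [0.0, 0.2, 0.8],   # frame 3 -> phoneme 2
    [0.0, 0.1, 0.9],   # frame 4 -> phoneme 2
])

# Duration of each phoneme = number of frames whose argmax attention
# lands on that phoneme; these counts supervise the duration predictor.
durations = np.bincount(attention.argmax(axis=1),
                        minlength=attention.shape[1])
print(durations.tolist())  # -> [2, 1, 2]
```

In the joint setup described in the abstract, such durations would be produced on the fly at each training step rather than by a separately pre-trained extractor.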


DOI: 10.21437/Interspeech.2020-2123

Cite as: Lim, D., Jang, W., O, G., Park, H., Kim, B., Yoon, J. (2020) JDI-T: Jointly Trained Duration Informed Transformer for Text-To-Speech without Explicit Alignment. Proc. Interspeech 2020, 4004-4008, DOI: 10.21437/Interspeech.2020-2123.


@inproceedings{Lim2020,
  author={Dan Lim and Won Jang and Gyeonghwan O and Heayoung Park and Bongwan Kim and Jaesam Yoon},
  title={{JDI-T: Jointly Trained Duration Informed Transformer for Text-To-Speech without Explicit Alignment}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4004--4008},
  doi={10.21437/Interspeech.2020-2123},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2123}
}