DurIAN: Duration Informed Attention Network for Speech Synthesis

Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, Dong Yu


In this paper, we present a robust and effective speech synthesis system that generates highly natural speech. The key component of the proposed system is the Duration Informed Attention Network (DurIAN), an autoregressive model in which the alignments between the input text and the output acoustic features are inferred from a duration model. This differs from the attention mechanism used in existing end-to-end speech synthesis systems, which is responsible for various unavoidable artifacts. To improve the audio generation efficiency of neural vocoders, we also propose a multi-band audio generation framework that exploits the sparseness characteristics of neural networks. With the proposed multi-band processing framework, the total computational complexity of the WaveRNN model can be reduced from 9.8 to 3.6 GFLOPS without any performance loss. Finally, we show that the proposed DurIAN system can generate highly natural speech on par with current state-of-the-art end-to-end systems, while remaining robust and stable.
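
To make the alignment idea concrete, the sketch below (our illustration, not the authors' released code) shows the duration-informed "state expansion" that replaces soft attention: each phoneme-level encoder state is repeated for the number of frames assigned to it by the duration model, giving a hard, monotonic text-to-frame alignment. The function name expand_states and the use of PyTorch are assumptions made for illustration.

import torch

def expand_states(encoder_states: torch.Tensor,
                  durations: torch.Tensor) -> torch.Tensor:
    # encoder_states: (num_phonemes, hidden_dim) phoneme-level states.
    # durations: (num_phonemes,) integer frame counts from the duration model.
    # Returns frame-level states of shape (sum(durations), hidden_dim),
    # which the autoregressive decoder consumes frame by frame instead of
    # attending softly over the whole input sequence.
    return torch.repeat_interleave(encoder_states, durations, dim=0)

# Example: 3 phonemes with hidden size 4, lasting 2, 3, and 1 frames.
states = torch.randn(3, 4)
frames = expand_states(states, torch.tensor([2, 3, 1]))
assert frames.shape == (6, 4)

The arithmetic behind the multi-band savings can be sketched in the same spirit (a back-of-the-envelope illustration under our own assumptions; only the 9.8 and 3.6 GFLOPS figures come from the paper): with B subbands, one recurrent step emits B samples at once, so the recurrent core advances at 1/B of the audio sample rate.

def recurrent_core_gflops(sample_rate_hz: int,
                          flops_per_step: float,
                          num_bands: int = 1) -> float:
    # One recurrent step produces num_bands output samples, so the core
    # runs sample_rate_hz / num_bands times per second of audio. Note that
    # the paper's 9.8 -> 3.6 GFLOPS reduction also exploits weight
    # sparseness, so it is not a pure 1/num_bands scaling.
    steps_per_second = sample_rate_hz / num_bands
    return steps_per_second * flops_per_step / 1e9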


DOI: 10.21437/Interspeech.2020-2968

Cite as: Yu, C., Lu, H., Hu, N., Yu, M., Weng, C., Xu, K., Liu, P., Tuo, D., Kang, S., Lei, G., Su, D., Yu, D. (2020) DurIAN: Duration Informed Attention Network for Speech Synthesis. Proc. Interspeech 2020, 2027-2031, DOI: 10.21437/Interspeech.2020-2968.


@inproceedings{Yu2020,
  author={Chengzhu Yu and Heng Lu and Na Hu and Meng Yu and Chao Weng and Kun Xu and Peng Liu and Deyi Tuo and Shiyin Kang and Guangzhi Lei and Dan Su and Dong Yu},
  title={{DurIAN: Duration Informed Attention Network for Speech Synthesis}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2027--2031},
  doi={10.21437/Interspeech.2020-2968},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2968}
}