Multi-Speaker Text-to-Speech Synthesis Using Deep Gaussian Processes

Kentaro Mitsui, Tomoki Koriyama, Hiroshi Saruwatari


Multi-speaker speech synthesis is a technique for modeling multiple speakers’ voices with a single model. Although many approaches using deep neural networks (DNNs) have been proposed, DNNs are prone to overfitting when the amount of training data is limited. We propose a framework for multi-speaker speech synthesis using deep Gaussian processes (DGPs); a DGP is a deep architecture of Bayesian kernel regressions and is thus robust to overfitting. In this framework, speaker information is fed to the duration/acoustic models using speaker codes. We also examine the use of deep Gaussian process latent variable models (DGPLVMs). In this approach, the representation of each speaker is learned simultaneously with the other model parameters, and therefore the similarity or dissimilarity of speakers is considered efficiently. We experimentally evaluated two situations to investigate the effectiveness of the proposed methods. In one situation, the amount of data from each speaker is balanced (speaker-balanced), and in the other, the data from certain speakers are limited (speaker-imbalanced). Subjective and objective evaluation results showed that both the DGP and DGPLVM synthesize multi-speaker speech more effectively than a DNN in the speaker-balanced situation. We also found that the DGPLVM significantly outperforms the DGP in the speaker-imbalanced situation.
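The abstract mentions feeding speaker information to the duration/acoustic models via speaker codes. A minimal sketch of that idea, assuming a simple one-hot speaker code concatenated to per-frame linguistic features (an illustration only, not the authors' implementation; the function names `speaker_code` and `augment_features` are hypothetical):

```python
# Illustrative only: one-hot speaker codes appended to linguistic
# features, forming the model input for a multi-speaker system.

def speaker_code(speaker_id: int, num_speakers: int) -> list[float]:
    """One-hot vector identifying the speaker."""
    code = [0.0] * num_speakers
    code[speaker_id] = 1.0
    return code

def augment_features(linguistic_features: list[float],
                     speaker_id: int,
                     num_speakers: int) -> list[float]:
    """Concatenate linguistic features with the speaker code; the
    result would be the input to a duration/acoustic model."""
    return linguistic_features + speaker_code(speaker_id, num_speakers)

x = augment_features([0.2, 0.7, 0.1], speaker_id=1, num_speakers=3)
print(x)  # [0.2, 0.7, 0.1, 0.0, 1.0, 0.0]
```

In the DGPLVM variant described above, the fixed one-hot code would instead be replaced by a latent speaker vector learned jointly with the model parameters.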


DOI: 10.21437/Interspeech.2020-3167

Cite as: Mitsui, K., Koriyama, T., Saruwatari, H. (2020) Multi-Speaker Text-to-Speech Synthesis Using Deep Gaussian Processes. Proc. Interspeech 2020, 2032-2036, DOI: 10.21437/Interspeech.2020-3167.


@inproceedings{Mitsui2020,
  author={Kentaro Mitsui and Tomoki Koriyama and Hiroshi Saruwatari},
  title={{Multi-Speaker Text-to-Speech Synthesis Using Deep Gaussian Processes}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2032--2036},
  doi={10.21437/Interspeech.2020-3167},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3167}
}