Voice Conversion Using Sequence-to-Sequence Learning of Context Posterior Probabilities

Hiroyuki Miyoshi, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari


Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although such conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality, such as phonetic properties and speaking rate, contained in the posterior probabilities, because the source posterior probabilities are used directly to predict the target speech parameters. In this work, we assume that the training data partly include parallel speech data and propose sequence-to-sequence learning between the source and target posterior probabilities. The conversion models perform a non-linear, variable-length transformation from the source probability sequence to the target one. Furthermore, we propose a joint training algorithm for the modules: whereas conventional VC separately trains the speech recognition module that estimates posterior probabilities and the speech synthesis module that predicts target speech parameters, our method trains these modules jointly along with the proposed probability conversion modules. Experimental results demonstrate that our approach outperforms conventional VC.
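The data flow the abstract describes — recognition estimates frame-wise context posteriors from source speech parameters, a conversion model maps that posterior sequence to a target-length posterior sequence, and synthesis predicts target speech parameters from it — can be sketched as below. This is an illustrative sketch only, not the authors' implementation: the class count, parameter dimensions, and the fixed-summary encoder-decoder stand-in are assumptions (the paper's actual conversion model is a trained sequence-to-sequence network), and all weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CONTEXT = 40   # number of context (e.g. phoneme) classes -- assumed value
N_MGC = 25       # speech-parameter (e.g. mel-cepstral) dimension -- assumed

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def recognize(src_params, W):
    """Frame-wise posterior estimation (stand-in for a recognition DNN)."""
    return softmax(src_params @ W)

def convert(src_post, tgt_len, We, Wd):
    """Variable-length conversion of the posterior sequence.
    A real system would use a trained sequence-to-sequence model; here we
    summarize the source into one vector and unroll it over tgt_len frames,
    purely to show that the output length can differ from the input length."""
    summary = np.tanh(src_post @ We).mean(axis=0)       # crude "encoder"
    steps = np.linspace(0.0, 1.0, tgt_len)[:, None]     # decoder positions
    return softmax((summary[None, :] + steps) @ Wd)     # (tgt_len, N_CONTEXT)

def synthesize(tgt_post, Ws):
    """Posterior-to-speech-parameter prediction (stand-in for a synthesis DNN)."""
    return tgt_post @ Ws

# Toy run: 120 source frames converted to 100 target frames.
src_params = rng.standard_normal((120, N_MGC))
W_rec = rng.standard_normal((N_MGC, N_CONTEXT))
W_enc = rng.standard_normal((N_CONTEXT, 16))
W_dec = rng.standard_normal((16, N_CONTEXT))
W_syn = rng.standard_normal((N_CONTEXT, N_MGC))

post_src = recognize(src_params, W_rec)          # (120, 40) source posteriors
post_tgt = convert(post_src, 100, W_enc, W_dec)  # (100, 40) length changed
tgt_params = synthesize(post_tgt, W_syn)         # (100, 25) target parameters
```

The point of the sketch is the shape change in `convert`: because the conversion operates on posterior sequences rather than reusing the source posteriors directly, the target can have a different number of frames, which is what allows speaking rate to change.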


DOI: 10.21437/Interspeech.2017-247

Cite as: Miyoshi, H., Saito, Y., Takamichi, S., Saruwatari, H. (2017) Voice Conversion Using Sequence-to-Sequence Learning of Context Posterior Probabilities. Proc. Interspeech 2017, 1268-1272, DOI: 10.21437/Interspeech.2017-247.


@inproceedings{Miyoshi2017,
  author={Hiroyuki Miyoshi and Yuki Saito and Shinnosuke Takamichi and Hiroshi Saruwatari},
  title={Voice Conversion Using Sequence-to-Sequence Learning of Context Posterior Probabilities},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={1268--1272},
  doi={10.21437/Interspeech.2017-247},
  url={http://dx.doi.org/10.21437/Interspeech.2017-247}
}