Multi-Speaker Emotion Conversion via Latent Variable Regularization and a Chained Encoder-Decoder-Predictor Network

Ravi Shankar, Hsi-Wei Hsieh, Nicolas Charon, Archana Venkataraman


We propose a novel method for emotion conversion in speech based on a chained encoder-decoder-predictor neural network architecture. The encoder constructs a latent embedding of the fundamental frequency (F0) contour and the spectrum, which we regularize using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) registration framework. The decoder uses this embedding to predict the modified F0 contour in a target emotional class. Finally, the predictor uses the original spectrum and the modified F0 contour to generate a corresponding target spectrum. Our joint objective function simultaneously optimizes the parameters of all three model blocks. We show that our method outperforms the existing state-of-the-art approaches in both the saliency of emotion conversion and the quality of resynthesized speech. In addition, the LDDMM regularization allows our model to convert phrases that were not present in training, thus providing evidence of out-of-sample generalization.
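The chained flow described above (encoder → decoder → predictor) can be sketched as a toy forward pass. This is a minimal illustration only, not the authors' implementation: the layer sizes, random affine layers, and function names are all hypothetical, and the LDDMM regularization and joint training objective are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine(in_dim, out_dim):
    # Toy affine layer with tanh nonlinearity; weights are random
    # placeholders, standing in for trained network blocks.
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    b = np.zeros(out_dim)
    return lambda x: np.tanh(W @ x + b)

# Hypothetical feature dimensions (not from the paper).
F0_DIM, SPEC_DIM, LATENT_DIM = 64, 128, 32

# Encoder: embeds the source F0 contour and spectrum into a latent code.
encoder = affine(F0_DIM + SPEC_DIM, LATENT_DIM)
# Decoder: maps the latent code to the target-emotion F0 contour.
decoder = affine(LATENT_DIM, F0_DIM)
# Predictor: maps the source spectrum plus converted F0 to a target spectrum.
predictor = affine(SPEC_DIM + F0_DIM, SPEC_DIM)

def convert(f0_src, spec_src):
    z = encoder(np.concatenate([f0_src, spec_src]))   # latent embedding
    f0_tgt = decoder(z)                               # converted F0 contour
    spec_tgt = predictor(np.concatenate([spec_src, f0_tgt]))
    return f0_tgt, spec_tgt

f0_src = rng.standard_normal(F0_DIM)
spec_src = rng.standard_normal(SPEC_DIM)
f0_tgt, spec_tgt = convert(f0_src, spec_src)
```

In the actual model, all three blocks are trained jointly under a single objective, with the LDDMM framework constraining the latent embedding; the sketch only shows how the outputs are chained.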


 DOI: 10.21437/Interspeech.2020-1323

Cite as: Shankar, R., Hsieh, H.-W., Charon, N., Venkataraman, A. (2020) Multi-Speaker Emotion Conversion via Latent Variable Regularization and a Chained Encoder-Decoder-Predictor Network. Proc. Interspeech 2020, 3391-3395, DOI: 10.21437/Interspeech.2020-1323.


@inproceedings{Shankar2020,
  author={Ravi Shankar and Hsi-Wei Hsieh and Nicolas Charon and Archana Venkataraman},
  title={{Multi-Speaker Emotion Conversion via Latent Variable Regularization and a Chained Encoder-Decoder-Predictor Network}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3391--3395},
  doi={10.21437/Interspeech.2020-1323},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1323}
}