Non-Parallel Emotion Conversion Using a Deep-Generative Hybrid Network and an Adversarial Pair Discriminator

Ravi Shankar, Jacob Sager, Archana Venkataraman


We introduce a novel method for emotion conversion in speech that does not require parallel training data. Our approach loosely relies on a cycle-GAN schema to minimize the reconstruction error from converting back and forth between emotion pairs. However, unlike the conventional cycle-GAN, our discriminator classifies whether a pair of real and generated input samples corresponds to the desired emotion conversion (e.g., A→B) or to its inverse (B→A). We show that this setup, which we refer to as a variational cycle-GAN (VCGAN), is equivalent to minimizing the empirical KL divergence between the source features and their cyclic counterpart. In addition, our generator combines a trainable deep network with a fixed generative block to implement a smooth and invertible transformation on the input features, in our case, the fundamental frequency (F0) contour. This hybrid architecture regularizes our adversarial training procedure. We use crowdsourcing to evaluate both the emotional saliency and the quality of the synthesized speech. Finally, we show that our model generalizes to new speakers by modifying speech produced by WaveNet.
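To make the pair-discriminator idea concrete, below is a minimal PyTorch sketch of the adversarial game described above. Everything here is an illustrative assumption rather than the paper's implementation: the class PairDiscriminator, the plain MLP generators G_ab/G_ba (standing in for the hybrid deep-network + fixed generative block), the feature size F0_DIM, and the loss weighting are all hypothetical.

```python
import torch
import torch.nn as nn

F0_DIM = 128  # assumed size of a fixed-length F0 feature vector

class PairDiscriminator(nn.Module):
    """Scores a (real, generated) feature pair: is it the desired
    conversion direction A->B (label 1) or the inverse B->A (label 0)?"""
    def __init__(self, dim=F0_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # logit for "direction is A->B"
        )

    def forward(self, real, generated):
        return self.net(torch.cat([real, generated], dim=-1))

# Plain MLPs as stand-ins for the paper's hybrid generator.
G_ab = nn.Sequential(nn.Linear(F0_DIM, 256), nn.ReLU(), nn.Linear(256, F0_DIM))
G_ba = nn.Sequential(nn.Linear(F0_DIM, 256), nn.ReLU(), nn.Linear(256, F0_DIM))
D = PairDiscriminator()
bce = nn.BCEWithLogitsLoss()

x_a = torch.randn(8, F0_DIM)  # batch of emotion-A features
x_b = torch.randn(8, F0_DIM)  # batch of emotion-B features

fake_b = G_ab(x_a)            # A -> B conversion
cyc_a = G_ba(fake_b)          # back to A (cyclic reconstruction)

ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

# Discriminator: tell the A->B pair apart from the inverse B->A pair.
d_loss = bce(D(x_a, fake_b.detach()), ones) + \
         bce(D(x_b, G_ba(x_b).detach()), zeros)

# Generator: confuse the direction classifier (flipped label) and keep
# the cyclic reconstruction close to the source features, echoing the
# KL-divergence interpretation in the abstract. The 10.0 weight is an
# arbitrary illustrative choice.
g_loss = bce(D(x_a, fake_b), zeros) + 10.0 * nn.functional.l1_loss(cyc_a, x_a)
print(float(d_loss), float(g_loss))  # optimizer steps omitted for brevity
```

The key design point the sketch highlights is that the discriminator judges conversion *direction* on real/generated pairs rather than real vs. fake on single samples, which is what ties the adversarial objective to the cyclic reconstruction.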


DOI: 10.21437/Interspeech.2020-1325

Cite as: Shankar, R., Sager, J., Venkataraman, A. (2020) Non-Parallel Emotion Conversion Using a Deep-Generative Hybrid Network and an Adversarial Pair Discriminator. Proc. Interspeech 2020, 3396-3400, DOI: 10.21437/Interspeech.2020-1325.


@inproceedings{Shankar2020,
  author={Ravi Shankar and Jacob Sager and Archana Venkataraman},
  title={{Non-Parallel Emotion Conversion Using a Deep-Generative Hybrid Network and an Adversarial Pair Discriminator}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3396--3400},
  doi={10.21437/Interspeech.2020-1325},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1325}
}