Controlling the Strength of Emotions in Speech-Like Emotional Sound Generated by WaveNet

Kento Matsumoto, Sunao Hara, Masanobu Abe


This paper proposes a method to enhance the controllability of a Speech-like Emotional Sound (SES). In our previous study, we proposed an algorithm that generates SES by employing WaveNet as a sound generator and confirmed that SES can successfully convey emotional information. The algorithm generates SES from emotional IDs alone, so the output carries no linguistic information. We call the generated sounds "speech-like" because they sound as if uttered by human beings even though they contain no linguistic information. By making the best use of WaveNet, we could synthesize natural-sounding acoustic signals that are quite different from vocoder sounds. To flexibly control the strength of emotions, this paper proposes to use voiced, unvoiced, and silence (VUS) states as auxiliary features. Three types of emotional speech, namely neutral, angry, and happy, were generated and subjectively evaluated. Experimental results reveal the following: (1) VUS can control the strength of SES by changing the durations of the VUS states, (2) VUS with a narrow F0 distribution can express stronger emotions than VUS with a wide F0 distribution, and (3) the smaller the unvoiced percentage, the stronger the emotional impression.
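To illustrate the kind of conditioning the abstract describes, here is a minimal sketch of how frame-level VUS labels might be encoded as one-hot auxiliary features and how state durations might be stretched to vary emotion strength. The function names, the label encoding, and the integer stretch factor are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of VUS (voiced/unvoiced/silence) auxiliary features.
# All names and parameters here are illustrative assumptions.

VUS = {"V": 0, "U": 1, "S": 2}

def one_hot_vus(labels):
    """Map a frame-level VUS label string to one-hot feature rows."""
    feats = []
    for lab in labels:
        row = [0.0, 0.0, 0.0]
        row[VUS[lab]] = 1.0
        feats.append(row)
    return feats

def stretch_state(labels, state, factor):
    """Lengthen runs of `state` by an integer `factor` -- one simple way
    to change VUS durations, which the paper reports controls strength."""
    out = []
    for lab in labels:
        out.extend([lab] * (factor if lab == state else 1))
    return "".join(out)

frames = "SVVVUVVS"                        # toy frame-level VUS sequence
stronger = stretch_state(frames, "V", 2)   # longer voiced segments
aux = one_hot_vus(stronger)                # per-frame conditioning input
```

In a WaveNet-style generator, such per-frame features would be upsampled to the sample rate and fed as local conditioning alongside the emotional ID.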


DOI: 10.21437/Interspeech.2020-2064

Cite as: Matsumoto, K., Hara, S., Abe, M. (2020) Controlling the Strength of Emotions in Speech-Like Emotional Sound Generated by WaveNet. Proc. Interspeech 2020, 3421-3425, DOI: 10.21437/Interspeech.2020-2064.


@inproceedings{Matsumoto2020,
  author={Kento Matsumoto and Sunao Hara and Masanobu Abe},
  title={{Controlling the Strength of Emotions in Speech-Like Emotional Sound Generated by WaveNet}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3421--3425},
  doi={10.21437/Interspeech.2020-2064},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2064}
}