Speech Driven Talking Head Generation via Attentional Landmarks Based Representation

Wentao Wang, Yan Wang, Jianqing Sun, Qingsong Liu, Jiaen Liang, Teng Li


Previous talking head generation methods mostly focus on frontal face synthesis while neglecting natural head motion. In this paper, a generative adversarial network (GAN) based method is proposed to generate talking head videos with not only high-quality facial appearance and accurate lip movement, but also natural head motion. To this end, facial landmarks are detected and used to represent lip motion and head pose, and the conversions from speech to these mid-level representations are learned separately through convolutional neural networks (CNNs) trained with wing loss. A gated recurrent unit (GRU) is adopted to regularize the sequential transitions. The representations for the different factors of the talking head are jointly fed to a GAN-based model with an attention mechanism to synthesize the talking video. Extensive experiments on a benchmark dataset as well as our own collected dataset validate that the proposed method yields talking videos with natural head motion, and its performance is superior to state-of-the-art talking face generation methods.
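The abstract states that the speech-to-landmark CNNs are trained with wing loss. For readers unfamiliar with that criterion, the following is a minimal NumPy sketch of the wing loss itself (as defined by Feng et al., 2018, for facial landmark regression); the function name and default parameters `w=10.0`, `epsilon=2.0` are illustrative defaults from that paper, not values reported in this work.

```python
import numpy as np

def wing_loss(pred, target, w=10.0, epsilon=2.0):
    """Wing loss for landmark regression (Feng et al., 2018).

    Behaves like a scaled log for small residuals |x| < w
    (amplifying gradients from small and medium errors) and
    like L1 for large residuals. Parameters here are the
    paper defaults, not values from this Interspeech paper.
    """
    x = np.abs(pred - target)
    # Constant C makes the two branches meet continuously at |x| = w.
    C = w - w * np.log(1.0 + w / epsilon)
    losses = np.where(x < w,
                      w * np.log(1.0 + x / epsilon),  # small-error branch
                      x - C)                          # L1-like branch
    return losses.mean()
```

The log branch gives small landmark errors a stronger relative penalty than L2, which is why wing loss is a common choice for precise landmark localization such as the lip and head-pose representations used here.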


DOI: 10.21437/Interspeech.2020-2304

Cite as: Wang, W., Wang, Y., Sun, J., Liu, Q., Liang, J., Li, T. (2020) Speech Driven Talking Head Generation via Attentional Landmarks Based Representation. Proc. Interspeech 2020, 1326-1330, DOI: 10.21437/Interspeech.2020-2304.


@inproceedings{Wang2020,
  author={Wentao Wang and Yan Wang and Jianqing Sun and Qingsong Liu and Jiaen Liang and Teng Li},
  title={{Speech Driven Talking Head Generation via Attentional Landmarks Based Representation}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1326--1330},
  doi={10.21437/Interspeech.2020-2304},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2304}
}