Lite Audio-Visual Speech Enhancement

Shang-Yi Chuang, Yu Tsao, Chen-Chou Lo, Hsin-Min Wang


Previous studies have confirmed the effectiveness of incorporating visual information into speech enhancement (SE) systems. Despite improved denoising performance, two problems may be encountered when implementing an audio-visual SE (AVSE) system: (1) additional processing costs are incurred to incorporate visual input, and (2) the use of face or lip images may raise privacy concerns. In this study, we propose a Lite AVSE (LAVSE) system to address these problems. The system includes two visual data compression techniques and removes the visual feature extraction network from the training model, yielding better online computation efficiency. Our experimental results indicate that the proposed LAVSE system provides notably better performance than an audio-only SE system with a similar number of model parameters. In addition, the experimental results confirm the effectiveness of the two visual data compression techniques.


DOI: 10.21437/Interspeech.2020-1617

Cite as: Chuang, S.-Y., Tsao, Y., Lo, C.-C., Wang, H.-M. (2020) Lite Audio-Visual Speech Enhancement. Proc. Interspeech 2020, 1131-1135, DOI: 10.21437/Interspeech.2020-1617.


@inproceedings{Chuang2020,
  author={Shang-Yi Chuang and Yu Tsao and Chen-Chou Lo and Hsin-Min Wang},
  title={{Lite Audio-Visual Speech Enhancement}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1131--1135},
  doi={10.21437/Interspeech.2020-1617},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1617}
}