Using visual speech information and perceptually motivated loss functions for binary mask estimation

Danny Websdale, Ben Milner


This work is concerned with using deep neural networks (DNNs) for estimating binary masks within a speech enhancement framework. We first examine the effect of supplementing the audio features used in mask estimation with visual speech information. Visual speech is known to be robust to noise although not necessarily as discriminative as audio features, particularly at higher signal-to-noise ratios (SNRs). Furthermore, most DNN approaches to mask estimation use the cross-entropy (CE) loss function, which aims to maximise classification accuracy. We therefore propose a loss function that aims to maximise the hit minus false-alarm (HIT-FA) rate of the mask, which is known to correlate more closely with speech intelligibility than classification accuracy. We then extend this to a hybrid loss function that combines the CE and HIT-FA loss functions to provide a balance between classification accuracy and the HIT-FA rate of the resulting masks. Evaluations of the perceptually motivated loss functions are carried out using the GRID and larger RM-3000 datasets and show improvements to HIT-FA rate and ESTOI across all noises and SNRs tested. Tests also found that combining audio and visual information in a single bimodal audio-visual system gave the best performance for all measures and conditions tested.
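To illustrate the idea of a hybrid objective, the sketch below combines binary cross-entropy with a soft (differentiable) HIT-FA surrogate, where the hit rate is approximated by the mean predicted probability on target-dominant units and the false-alarm rate by the mean predicted probability on noise-dominant units. This is an illustrative assumption only, not the formulation used in the paper; the function name `hybrid_mask_loss` and the weighting parameter `alpha` are hypothetical.

```python
import torch
import torch.nn.functional as F

def hybrid_mask_loss(pred, target, alpha=0.5, eps=1e-8):
    """Hypothetical hybrid loss: weighted sum of binary cross-entropy
    and a soft HIT-FA term.

    pred   -- predicted mask probabilities in [0, 1]
    target -- ideal binary mask labels {0, 1}
    alpha  -- balance between CE and the HIT-FA term (assumed parameter)
    """
    # Cross-entropy term: encourages high classification accuracy.
    ce = F.binary_cross_entropy(pred, target)

    # Soft hit rate: average probability assigned to target-dominant units.
    hit = (pred * target).sum() / (target.sum() + eps)
    # Soft false-alarm rate: average probability assigned to noise-dominant units.
    fa = (pred * (1.0 - target)).sum() / ((1.0 - target).sum() + eps)

    # Maximising HIT-FA corresponds to minimising its negation.
    hit_fa_loss = 1.0 - (hit - fa)

    return alpha * ce + (1.0 - alpha) * hit_fa_loss
```

In use, `pred` would be the sigmoid outputs of the mask-estimation DNN for one batch of time-frequency units and `target` the corresponding ideal binary mask; setting `alpha` closer to 1 recovers a CE-dominated objective, while smaller values weight the HIT-FA surrogate more heavily.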


 DOI: 10.21437/AVSP.2017-9

Cite as: Websdale, D., Milner, B. (2017) Using visual speech information and perceptually motivated loss functions for binary mask estimation. Proc. The 14th International Conference on Auditory-Visual Speech Processing, 41-46, DOI: 10.21437/AVSP.2017-9.


@inproceedings{Websdale2017,
  author={Danny Websdale and Ben Milner},
  title={Using visual speech information and perceptually motivated loss functions for binary mask estimation},
  year=2017,
  booktitle={Proc. The 14th International Conference on Auditory-Visual Speech Processing},
  pages={41--46},
  doi={10.21437/AVSP.2017-9},
  url={http://dx.doi.org/10.21437/AVSP.2017-9}
}