On Front-End Gain Invariant Modeling for Wake Word Spotting

Yixin Gao, Noah D. Stein, Chieh-Chi Kao, Yunliang Cai, Ming Sun, Tao Zhang, Shiv Naga Prasad Vitaladevuni


Wake word (WW) spotting is challenging in far-field conditions due to the complexity and variability of acoustic environments and interference in signal transmission. A suite of carefully designed and optimized audio front-end (AFE) algorithms helps mitigate these challenges and provides higher-quality audio signals to downstream modules such as the WW spotter. Since the WW model is trained on AFE-processed audio, its performance is sensitive to AFE variations, such as gain changes. In addition, when deployed to new devices, WW performance is not guaranteed because the AFE is unknown to the WW model. To address these issues, we propose a novel approach that uses a new feature, called ΔLFBE, to decouple AFE gain variations from the WW model. We modified the neural network architectures to accommodate the delta computation, leaving the feature extraction module unchanged. We evaluated our WW models on data collected in real household settings and show that models with ΔLFBE are robust to AFE gain changes. Specifically, for AFE gain changes of up to ±12 dB, the baseline CNN model lost up to 19.0% relative in false alarm rate or 34.3% relative in false reject rate, while the model with ΔLFBE showed no performance loss.
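
The gain-cancellation idea behind ΔLFBE can be illustrated with a short sketch. Assuming ΔLFBE denotes the first-order time difference of log filterbank energies (the abstract does not spell out the exact formulation, so the delta_lfbe helper and the synthetic features below are illustrative assumptions, not the authors' implementation): a static linear gain g applied by the AFE scales filterbank energies by g^2, which in the log domain adds the same constant 2·ln(g) to every frame and bin, and a frame-to-frame difference cancels that constant exactly.

import numpy as np

def delta_lfbe(lfbe):
    """First-order time difference of log filterbank energies (LFBE).

    lfbe: (num_frames, num_mel_bins) array of log-energies.
    Returns: (num_frames - 1, num_mel_bins) array of frame deltas.

    Hypothetical helper for illustration; not the paper's code.
    """
    return lfbe[1:] - lfbe[:-1]

# Synthetic stand-in for LFBE features extracted from one utterance.
rng = np.random.default_rng(0)
lfbe_ref = rng.normal(size=(100, 64))

# A static AFE gain of +12 dB scales filterbank energies by g**2,
# adding the constant 2*ln(g) in the natural-log domain:
# ln(g**2 * E) = 2*ln(g) + ln(E).
gain_db = 12.0
log_offset = 2.0 * (gain_db / 20.0) * np.log(10.0)
lfbe_gained = lfbe_ref + log_offset

# LFBE itself shifts with the gain ...
assert not np.allclose(lfbe_ref, lfbe_gained)

# ... but the frame-to-frame delta cancels the constant offset,
# so the downstream WW model sees identical features.
assert np.allclose(delta_lfbe(lfbe_ref), delta_lfbe(lfbe_gained))
print(f"ΔLFBE is invariant to a static AFE gain change of {gain_db:+.0f} dB")

Note that this invariance argument holds only for a gain that is constant over the analysis window; the paper's evaluation on real household data is what establishes robustness in practice.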


DOI: 10.21437/Interspeech.2020-1992

Cite as: Gao, Y., Stein, N.D., Kao, C.-C., Cai, Y., Sun, M., Zhang, T., Vitaladevuni, S.N.P. (2020) On Front-End Gain Invariant Modeling for Wake Word Spotting. Proc. Interspeech 2020, 991-995, DOI: 10.21437/Interspeech.2020-1992.


@inproceedings{Gao2020,
  author={Yixin Gao and Noah D. Stein and Chieh-Chi Kao and Yunliang Cai and Ming Sun and Tao Zhang and Shiv Naga Prasad Vitaladevuni},
  title={{On Front-End Gain Invariant Modeling for Wake Word Spotting}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={991--995},
  doi={10.21437/Interspeech.2020-1992},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1992}
}