MLNET: An Adaptive Multiple Receptive-Field Attention Neural Network for Voice Activity Detection

Zhenpeng Zheng, Jianzong Wang, Ning Cheng, Jian Luo, Jing Xiao


Voice activity detection (VAD) distinguishes speech from non-speech, and its performance is crucial for speech-based services. Recently, deep neural network (DNN)-based VADs have achieved better performance than conventional signal-processing methods. Existing DNN-based models typically rely on a handcrafted fixed window to exploit contextual speech information and improve VAD performance. However, a fixed context window cannot handle various unpredictable noise environments or highlight the speech information most critical to the VAD task. To solve this problem, this paper proposes an adaptive multiple receptive-field attention neural network, called MLNET, for the VAD task. MLNET leverages multiple branches to extract contextual speech information at several receptive fields and uses an effective attention block to weight the most crucial parts of the context for the final classification. Experiments in real-world scenarios demonstrate that the proposed MLNET-based model outperforms other baselines.
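The abstract's core idea (multiple context branches with different receptive fields, fused per frame by an attention block before classification) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the average-pooled context branches, the window sizes, the random attention/classifier weights, and the `mlnet_sketch` function itself are all illustrative assumptions standing in for the learned convolutional branches of the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def branch_context(feats, half_window):
    # One receptive-field "branch": average-pool each frame's features
    # over a symmetric context of (2 * half_window + 1) frames.
    T, _ = feats.shape
    padded = np.pad(feats, ((half_window, half_window), (0, 0)), mode="edge")
    return np.stack([padded[t:t + 2 * half_window + 1].mean(axis=0)
                     for t in range(T)])

def mlnet_sketch(feats, half_windows=(1, 4, 16), rng=None):
    """Toy multi-receptive-field attention producing frame-wise VAD scores.

    feats: (T, D) array of per-frame acoustic features.
    Returns (probs, att): speech probabilities (T,) and per-frame
    attention weights over branches (T, num_branches).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, D = feats.shape
    # Each branch sees a different amount of context.
    branches = np.stack([branch_context(feats, w) for w in half_windows],
                        axis=1)                                  # (T, B, D)
    # Attention block: score each branch per frame, normalise with
    # softmax so the model adaptively weights the most useful context.
    w_att = rng.standard_normal(D)                                # assumed weights
    att = softmax(branches @ w_att, axis=1)                       # (T, B)
    fused = (branches * att[:, :, None]).sum(axis=1)              # (T, D)
    # Linear classifier head -> speech / non-speech probability.
    w_clf = rng.standard_normal(D)                                # assumed weights
    probs = 1.0 / (1.0 + np.exp(-(fused @ w_clf)))
    return probs, att
```

In a trained model the branch extractors and weights would be learned jointly; here the point is only the data flow: per-frame attention weights over the branches sum to one, so each frame picks its own effective receptive field.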


DOI: 10.21437/Interspeech.2020-2392

Cite as: Zheng, Z., Wang, J., Cheng, N., Luo, J., Xiao, J. (2020) MLNET: An Adaptive Multiple Receptive-Field Attention Neural Network for Voice Activity Detection. Proc. Interspeech 2020, 3695-3699, DOI: 10.21437/Interspeech.2020-2392.


@inproceedings{Zheng2020,
  author={Zhenpeng Zheng and Jianzong Wang and Ning Cheng and Jian Luo and Jing Xiao},
  title={{MLNET: An Adaptive Multiple Receptive-Field Attention Neural Network for Voice Activity Detection}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3695--3699},
  doi={10.21437/Interspeech.2020-2392},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2392}
}