Dual Attention in Time and Frequency Domain for Voice Activity Detection

Joohyung Lee, Youngmoon Jung, Hoirin Kim


Voice activity detection (VAD) is a challenging task in low signal-to-noise ratio (SNR) environments, especially under non-stationary noise. To deal with this issue, we propose a novel attention module that can be integrated into Long Short-Term Memory (LSTM). Our proposed attention module refines each LSTM layer's hidden states, allowing the network to adaptively focus on both the time and frequency domains. Experiments are conducted under various noisy conditions using the Aurora 4 database. Our proposed method achieves a 95.58% area under the ROC curve (AUC), a 22.05% relative improvement over the baseline, with only a 2.44% increase in the number of parameters. In addition, we employ focal loss to alleviate the performance degradation caused by the imbalance between speech and non-speech segments in the training set. The results show that focal loss improves performance across various imbalance conditions compared to cross-entropy loss, the loss function commonly used in VAD.
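The focal loss referred to above is the standard formulation of Lin et al. (2017), which down-weights well-classified frames so that training concentrates on hard examples such as noisy speech/non-speech boundaries. A minimal per-frame sketch in NumPy, assuming binary speech (1) / non-speech (0) frame labels and predicted speech probabilities (the variable names are illustrative, not from the paper):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-12):
    """Binary focal loss, averaged over frames.

    probs   : predicted probability of speech per frame, in (0, 1)
    targets : ground-truth frame labels, 1 = speech, 0 = non-speech
    gamma   : focusing parameter; gamma = 0 recovers cross-entropy
    """
    # p_t: probability the model assigns to the *true* class of each frame
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    # modulating factor (1 - p_t)^gamma shrinks the loss of easy frames
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))
```

With gamma = 0 this reduces to ordinary cross-entropy; raising gamma suppresses the contribution of confidently classified frames, which is what makes the loss robust to a skewed speech/non-speech ratio in the training data.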


DOI: 10.21437/Interspeech.2020-0997

Cite as: Lee, J., Jung, Y., Kim, H. (2020) Dual Attention in Time and Frequency Domain for Voice Activity Detection. Proc. Interspeech 2020, 3670-3674, DOI: 10.21437/Interspeech.2020-0997.


@inproceedings{Lee2020,
  author={Joohyung Lee and Youngmoon Jung and Hoirin Kim},
  title={{Dual Attention in Time and Frequency Domain for Voice Activity Detection}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3670--3674},
  doi={10.21437/Interspeech.2020-0997},
  url={http://dx.doi.org/10.21437/Interspeech.2020-0997}
}