Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments

Jens Heitkaemper, Joerg Schmalenstroeer, Reinhold Haeb-Umbach


Speech activity detection (SAD), which often rests on the assumption that the noise is "more" stationary than the speech, is particularly challenging in non-stationary environments, because the time variance of the acoustic scene makes it difficult to discriminate speech from noise. We propose two approaches to SAD, one based on statistical signal processing and the other on neural networks. The former employs sophisticated signal processing to track the noise and speech energies and is meant to support the case for a resource-efficient, unsupervised signal processing approach. The latter introduces a recurrent network layer that operates on short segments of the input speech to perform temporal smoothing in the presence of non-stationary noise. The systems are tested on the Fearless Steps challenge database, which consists of the transmission data from the Apollo-11 space mission. The statistical SAD achieves detection performance comparable to that of previously proposed neural network based SADs, while the neural network based approach achieves a decision cost function of 1.07% on the evaluation set of the 2020 Fearless Steps Challenge, setting a new state of the art.
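To illustrate the general idea behind energy-tracking statistical SAD (not the exact algorithm of the paper, whose noise and speech energy trackers are more elaborate), the following is a minimal sketch: a noise-floor estimate is updated per frame with asymmetric recursive smoothing, rising slowly during speech bursts but falling quickly afterwards, and a frame is labelled speech when its energy exceeds the floor by a fixed SNR margin. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

def energy_sad(x, fs=8000, frame_len=0.025, frame_shift=0.010,
               alpha_up=0.999, alpha_down=0.90, snr_margin_db=6.0):
    """Sketch of an energy-based SAD with recursive noise-floor tracking.

    The noise floor follows energy drops quickly (alpha_down) but rises
    slowly (alpha_up), so short speech bursts do not inflate the estimate.
    A frame is labelled speech when its energy exceeds the noise floor by
    `snr_margin_db` decibels. Not the paper's algorithm, just the principle.
    """
    n = int(frame_len * fs)      # samples per frame
    hop = int(frame_shift * fs)  # samples per frame shift
    frames = [x[i:i + n] for i in range(0, len(x) - n + 1, hop)]
    energies = np.array([np.sum(f.astype(float) ** 2) + 1e-12 for f in frames])

    noise = energies[0]  # initialise from the first frame
    labels = np.zeros(len(energies), dtype=bool)
    for t, e in enumerate(energies):
        # asymmetric smoothing: slow attack upwards, fast release downwards
        alpha = alpha_up if e > noise else alpha_down
        noise = alpha * noise + (1.0 - alpha) * e
        labels[t] = 10.0 * np.log10(e / noise) > snr_margin_db
    return labels
```

The asymmetric time constants are the key design choice: a symmetric tracker would be dragged upwards by speech itself, eroding the very contrast between speech and noise that the detector relies on.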


DOI: 10.21437/Interspeech.2020-1252

Cite as: Heitkaemper, J., Schmalenstroeer, J., Haeb-Umbach, R. (2020) Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments. Proc. Interspeech 2020, 2597-2601, DOI: 10.21437/Interspeech.2020-1252.


@inproceedings{Heitkaemper2020,
  author={Jens Heitkaemper and Joerg Schmalenstroeer and Reinhold Haeb-Umbach},
  title={{Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2597--2601},
  doi={10.21437/Interspeech.2020-1252},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1252}
}