Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification

Sina Däubener, Lea Schönherr, Asja Fischer, Dorothea Kolossa


Machine learning systems, and automatic speech recognition (ASR) systems in particular, are vulnerable to adversarial attacks, in which an attacker maliciously changes the input. For ASR systems, the most interesting case is the targeted attack, where the attacker aims to force the system into recognizing a given target transcription in an arbitrary audio sample. The increasing number of sophisticated, quasi-imperceptible attacks raises the question of countermeasures.

In this paper, we focus on hybrid ASR systems and compare four acoustic models regarding their ability to indicate uncertainty under attack: a feed-forward neural network and three neural networks specifically designed for uncertainty quantification, namely a Bayesian neural network, Monte Carlo dropout, and a deep ensemble.

We employ uncertainty measures of the acoustic model to construct a simple one-class classifier that assesses whether inputs are benign or adversarial. With this approach, we are able to detect adversarial examples with an area under the receiver operating characteristic (ROC) curve of more than 0.99. The neural networks designed for uncertainty quantification simultaneously reduce vulnerability to the attack, which is reflected in a lower recognition accuracy of the malicious target text in comparison to a standard hybrid ASR system.
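The detection idea described above can be illustrated with a minimal sketch: run several stochastic forward passes through a dropout-equipped network (Monte Carlo dropout), use the predictive entropy of the averaged output as an uncertainty score, and flag inputs whose score exceeds a threshold fitted on benign data only. Note that the network weights, dimensions, dropout rate, and the 95%-quantile threshold below are illustrative assumptions, not the paper's actual acoustic model or calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in weights for a tiny "acoustic model";
# a real system would use a trained hybrid ASR network.
W1 = rng.normal(size=(20, 64))
W2 = rng.normal(size=(64, 10))


def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def mc_dropout_passes(x, T=50, p=0.5):
    """T stochastic forward passes with dropout kept active at test time."""
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
        mask = rng.random(h.shape) > p       # randomly drop hidden units
        h = h * mask / (1.0 - p)             # inverted-dropout rescaling
        probs.append(softmax(h @ W2))
    return np.stack(probs)                   # shape (T, n_classes)


def predictive_entropy(x, T=50):
    """Entropy of the MC-averaged class distribution: the uncertainty score."""
    p_mean = mc_dropout_passes(x, T).mean(axis=0)
    return -np.sum(p_mean * np.log(p_mean + 1e-12))


# One-class detection: calibrate a threshold on benign inputs only
# (95% quantile is an assumed choice for this sketch).
benign_scores = [predictive_entropy(rng.normal(size=20)) for _ in range(200)]
threshold = np.quantile(benign_scores, 0.95)


def is_adversarial(x):
    """Flag the input if its uncertainty exceeds the benign-data threshold."""
    return predictive_entropy(x) > threshold
```

A deep ensemble or a Bayesian neural network would plug into the same scheme by replacing `mc_dropout_passes` with forward passes through the ensemble members or with weight samples from the posterior.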


DOI: 10.21437/Interspeech.2020-2734

Cite as: Däubener, S., Schönherr, L., Fischer, A., Kolossa, D. (2020) Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification. Proc. Interspeech 2020, 4661-4665, DOI: 10.21437/Interspeech.2020-2734.


@inproceedings{Däubener2020,
  author={Sina Däubener and Lea Schönherr and Asja Fischer and Dorothea Kolossa},
  title={{Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4661--4665},
  doi={10.21437/Interspeech.2020-2734},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2734}
}