Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition

Qing Wang, Pengcheng Guo, Lei Xie

Speaker recognition is a popular topic in biometric authentication, and many deep learning approaches have achieved extraordinary performance. However, it has been shown in both image and speech applications that deep neural networks are vulnerable to adversarial examples. In this study, we exploit this weakness to perform targeted adversarial attacks against an x-vector based speaker recognition system. We propose to generate inaudible adversarial perturbations based on the psychoacoustic principle of frequency masking, achieving targeted white-box attacks on the speaker recognition system. Specifically, we constrain the perturbation under the masking threshold of the original audio, instead of measuring it with a common ℓp norm. Experiments on the Aishell-1 corpus show that our approach yields up to a 98.5% attack success rate against target speakers of arbitrary gender, while the perturbations remain indistinguishable to listeners. Furthermore, the proposed approach also achieves an effective speaker attack when applied to a completely irrelevant waveform, such as music.
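The core idea of the masking-threshold constraint can be sketched as follows. This is a minimal illustration, not the paper's implementation: the true psychoacoustic threshold is derived from a frequency-masking model, whereas here a crude stand-in (a scaled magnitude spectrogram of the clean signal) is used, and `stft_mag`, `masking_penalty`, and all constants are hypothetical names chosen for this sketch. The penalty is zero wherever the perturbation's spectral magnitude stays below the threshold, so minimizing it alongside an attack loss pushes the perturbation to be inaudible.

```python
import numpy as np

def stft_mag(x, n_fft=256, hop=128):
    # Magnitude spectrogram via a short-time FFT with a Hann window.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def masking_penalty(delta, theta):
    # Hinge-style penalty: perturbation energy is only penalized where
    # its spectral magnitude rises above the masking threshold theta.
    excess = stft_mag(delta) - theta
    return float(np.mean(np.maximum(excess, 0.0) ** 2))

# Toy example with a stand-in threshold (NOT a real psychoacoustic model):
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)                 # placeholder "clean" waveform
theta = 0.5 * stft_mag(x)                     # crude proxy for the threshold
quiet = 1e-4 * rng.standard_normal(4000)      # tiny perturbation, under theta
loud = 0.5 * rng.standard_normal(4000)        # large perturbation, exceeds it
assert masking_penalty(quiet, theta) < masking_penalty(loud, theta)
```

In the actual attack, a term like `masking_penalty` would be combined with the targeted classification loss and minimized over the perturbation by gradient descent, so the optimizer trades attack strength against audibility frame by frame and bin by bin.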

DOI: 10.21437/Interspeech.2020-1955

Cite as: Wang, Q., Guo, P., Xie, L. (2020) Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition. Proc. Interspeech 2020, 4228-4232, DOI: 10.21437/Interspeech.2020-1955.

@inproceedings{wang2020inaudible,
  author={Qing Wang and Pengcheng Guo and Lei Xie},
  title={{Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4228--4232},
  doi={10.21437/Interspeech.2020-1955}
}