Replay Attack Detection Using DNN for Channel Discrimination

Parav Nagarsheth, Elie Khoury, Kailash Patil, Matt Garland


Voice is projected to be the next input interface for portable devices. The increased use of audio interfaces can be mainly attributed to the success of speech and speaker recognition technologies. With these advances comes the risk of criminal threats, where attackers reportedly try to access sensitive information using diverse voice spoofing techniques. Among them, replay attacks pose a real challenge to voice biometrics. This paper addresses the problem by proposing a deep learning architecture in tandem with low-level cepstral features. We investigate the use of a deep neural network (DNN) to discriminate between the different channel conditions available in the ASVspoof 2017 dataset, namely recording, playback and session conditions. The high-level feature vectors derived from this network are used to discriminate between genuine and spoofed audio. Two kinds of low-level features are utilized: the state-of-the-art constant-Q cepstral coefficients (CQCC), and our proposed high-frequency cepstral coefficients (HFCC), derived from the high-frequency region of the audio spectrum. Fusing the two feature sets proved effective in generalizing across the diverse replay attacks seen in the evaluation set of the ASVspoof 2017 challenge, achieving an equal error rate (EER) of 11.5%, a 53% relative improvement over the baseline Gaussian Mixture Model (GMM) applied on CQCC.
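The abstract describes HFCC as cepstral coefficients computed from the high-frequency part of the spectrum, where replay-channel artifacts tend to concentrate. The sketch below illustrates that idea with plain NumPy; the cutoff frequency, FFT size, hop size and coefficient count are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def hfcc(signal, sr=16000, n_fft=512, hop=160, f_cut=3500.0, n_ceps=20):
    """Sketch of high-frequency cepstral coefficients (HFCC):
    cepstral analysis restricted to the high-frequency band.
    All parameter values here are assumed for illustration."""
    # Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop: i * hop + n_fft]
                       for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Magnitude spectrum, keeping only bins above the cutoff frequency
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    hi = spec[:, freqs >= f_cut]
    # Log compression followed by a DCT-II -> cepstral coefficients
    log_hi = np.log(hi + 1e-10)
    k = log_hi.shape[1]
    basis = np.cos(np.pi / k
                   * (np.arange(k) + 0.5)[:, None]
                   * np.arange(n_ceps)[None, :])
    return log_hi @ basis  # shape: (n_frames, n_ceps)

# Example: two seconds of noise at 16 kHz -> (197, 20) feature matrix
feats = hfcc(np.random.randn(32000))
```

In the paper's pipeline, frame-level features such as these would feed the channel-discrimination DNN, whose high-level embeddings then drive the genuine/spoofed decision.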


DOI: 10.21437/Interspeech.2017-1377

Cite as: Nagarsheth, P., Khoury, E., Patil, K., Garland, M. (2017) Replay Attack Detection Using DNN for Channel Discrimination. Proc. Interspeech 2017, 97-101, DOI: 10.21437/Interspeech.2017-1377.


@inproceedings{Nagarsheth2017,
  author={Parav Nagarsheth and Elie Khoury and Kailash Patil and Matt Garland},
  title={Replay Attack Detection Using DNN for Channel Discrimination},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={97--101},
  doi={10.21437/Interspeech.2017-1377},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1377}
}