Speaker Identification for Household Scenarios with Self-Attention and Adversarial Training

Ruirui Li, Jyun-Yu Jiang, Xian Wu, Chu-Cheng Hsieh, Andreas Stolcke


Speaker identification based on voice input is a fundamental capability in speech processing, enabling versatile downstream applications such as personalization and authentication. With the advent of deep learning, most state-of-the-art methods derive acoustic embeddings from utterances with convolutional neural networks (CNNs) and recurrent neural networks (RNNs). This paper addresses two inherent limitations of current approaches. First, voice characteristics over long time spans might not be fully captured by CNNs and RNNs, as they are designed to focus on local feature extraction and modeling adjacent dependencies, respectively. Second, complex deep learning models can be fragile with regard to subtle but intentional changes in model inputs, also known as adversarial perturbations. To distill informative global acoustic embedding representations from utterances and be robust to adversarial perturbations, we propose a Self-Attentive Adversarial Speaker-Identification method (SAASI). In experiments on the VCTK dataset, SAASI significantly outperforms four state-of-the-art baselines in identifying both known and new speakers.
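To illustrate how self-attention can aggregate information across an entire utterance rather than only local windows, here is a minimal NumPy sketch of single-head scaled dot-product self-attention over frame-level acoustic features, pooled into a fixed-size utterance embedding. The random projection matrices stand in for learned weights, and the architecture is a generic illustration, not the paper's exact SAASI model.

```python
import numpy as np

def self_attention_embedding(frames, d_k=None, seed=0):
    """Pool frame-level features (T frames x D dims) into one utterance
    embedding via scaled dot-product self-attention.

    Hypothetical sketch: random matrices replace the learned Q/K/V
    projections of a trained model.
    """
    T, D = frames.shape
    d_k = d_k or D
    rng = np.random.default_rng(seed)
    # Stand-ins for learned projection weights.
    W_q, W_k, W_v = (rng.standard_normal((D, d_k)) for _ in range(3))
    Q, K, V = frames @ W_q, frames @ W_k, frames @ W_v
    # Every frame scores its compatibility with every other frame,
    # so dependencies are global rather than limited to a local window.
    scores = Q @ K.T / np.sqrt(d_k)                      # (T, T)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    context = weights @ V                                # (T, d_k)
    return context.mean(axis=0)                          # mean-pool over frames

# Toy utterance: 50 frames of 40-dimensional acoustic features.
utt = np.random.default_rng(1).standard_normal((50, 40))
emb = self_attention_embedding(utt)
print(emb.shape)  # (40,)
```

Because each attention weight relates an arbitrary pair of frames, the pooled embedding can reflect voice characteristics spread over the whole utterance, which is the property the abstract contrasts with the local receptive fields of CNNs and the sequential dependencies of RNNs.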


DOI: 10.21437/Interspeech.2020-3025

Cite as: Li, R., Jiang, J., Wu, X., Hsieh, C., Stolcke, A. (2020) Speaker Identification for Household Scenarios with Self-Attention and Adversarial Training. Proc. Interspeech 2020, 2272-2276, DOI: 10.21437/Interspeech.2020-3025.


@inproceedings{Li2020,
  author={Ruirui Li and Jyun-Yu Jiang and Xian Wu and Chu-Cheng Hsieh and Andreas Stolcke},
  title={{Speaker Identification for Household Scenarios with Self-Attention and Adversarial Training}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2272--2276},
  doi={10.21437/Interspeech.2020-3025},
  url={http://dx.doi.org/10.21437/Interspeech.2020-3025}
}