Length- and Noise-Aware Training Techniques for Short-Utterance Speaker Recognition

Wenda Chen, Jonathan Huang, Tobias Bocklet


Speaker recognition performance has been greatly improved with the emergence of deep learning. Deep neural networks show the capacity to effectively deal with the impacts of noise and reverberation, making them attractive for far-field speaker recognition systems. The x-vector framework is a popular choice for generating speaker embeddings in recent literature due to its robust training mechanism and excellent performance on various test sets. In this paper, we start with early work on incorporating invariant representation learning (IRL) into the loss function and modify the approach with centroid alignment (CA) and length variability cost (LVC) techniques to further improve robustness in noisy, far-field applications. This work mainly focuses on improvements for short-duration test utterances (1-8 s), and we also present improved results on long-duration tasks. In addition, this work discusses a novel self-attention mechanism. On the VOiCES far-field corpus, the combination of the proposed techniques achieves relative improvements in equal error rate (EER) of 7.0% for extremely short and 8.2% for full-duration test utterances over our baseline system.
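The abstract mentions x-vector speaker embeddings and a self-attention mechanism without giving details. As background, a common way attention is used in such systems is self-attentive pooling, which collapses variable-length frame-level features into a fixed-size utterance embedding. The sketch below is a generic single-head illustration of that idea, not the paper's specific mechanism; the function and parameter names (`self_attentive_pooling`, `w`, `b`, `v`) are hypothetical.

```python
import numpy as np

def self_attentive_pooling(frames, w, b, v):
    """Generic self-attentive pooling over frame-level features.

    frames: (T, D) frame-level encoder outputs for one utterance
    w: (D, H) attention projection, b: (H,) bias, v: (H,) scoring vector
    Returns a (D,) utterance-level embedding (attention-weighted mean).
    """
    # Frame-wise attention score: score_t = v . tanh(frames_t @ w + b)
    scores = np.tanh(frames @ w + b) @ v          # shape (T,)
    # Softmax over time gives normalized attention weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # shape (T,)
    # Weighted mean pools a variable-length input to a fixed-size vector
    return weights @ frames                       # shape (D,)

# Toy example: 50 frames of 16-dim features, hidden size 8 (arbitrary values)
rng = np.random.default_rng(0)
T, D, H = 50, 16, 8
frames = rng.standard_normal((T, D))
emb = self_attentive_pooling(frames, rng.standard_normal((D, H)),
                             np.zeros(H), rng.standard_normal(H))
print(emb.shape)  # (16,)
```

Because the pooling is a weighted mean over time, the output dimension is independent of utterance length, which is what makes attention-based pooling appealing for the short-utterance (1-8 s) condition the paper targets.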


DOI: 10.21437/Interspeech.2020-2872

Cite as: Chen, W., Huang, J., Bocklet, T. (2020) Length- and Noise-Aware Training Techniques for Short-Utterance Speaker Recognition. Proc. Interspeech 2020, 3835-3839, DOI: 10.21437/Interspeech.2020-2872.


@inproceedings{Chen2020,
  author={Wenda Chen and Jonathan Huang and Tobias Bocklet},
  title={{Length- and Noise-Aware Training Techniques for Short-Utterance Speaker Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3835--3839},
  doi={10.21437/Interspeech.2020-2872},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2872}
}