In Defence of Metric Learning for Speaker Recognition

Joon Son Chung, Jaesung Huh, Seongkyu Mun, Minjae Lee, Hee-Soo Heo, Soyeon Choe, Chiheon Ham, Sunghwan Jung, Bong-Jin Lee, Icksang Han

The objective of this paper is ‘open-set’ speaker recognition of unseen speakers, where ideal embeddings should condense information into a compact utterance-level representation with small intra-speaker and large inter-speaker distances.

A popular belief in speaker recognition is that networks trained with classification objectives outperform metric learning methods. In this paper, we present an extensive evaluation of the most popular loss functions for speaker recognition on the VoxCeleb dataset. We demonstrate that the vanilla triplet loss shows competitive performance compared to classification-based losses, and that networks trained with our proposed metric learning objective outperform state-of-the-art methods.
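To make the objective concrete: the vanilla triplet loss mentioned above penalises an anchor embedding that is farther from a same-speaker (positive) utterance than from a different-speaker (negative) one, up to a margin. The sketch below is a minimal NumPy illustration of this standard formulation, not the paper's implementation; the margin value and squared-Euclidean distance are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Vanilla triplet loss on utterance-level speaker embeddings.

    Encourages the anchor-positive (same speaker) distance to be
    smaller than the anchor-negative (different speaker) distance
    by at least `margin`. Margin value here is illustrative.
    """
    d_ap = np.sum((anchor - positive) ** 2)  # intra-speaker distance
    d_an = np.sum((anchor - negative) ** 2)  # inter-speaker distance
    return max(d_ap - d_an + margin, 0.0)

# A well-separated triplet incurs zero loss:
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([0.0, 1.0])
loss = triplet_loss(anchor, positive, negative)
```

In practice the embeddings would come from a trained speaker encoder and the loss would be minimised over mined triplets; the toy vectors above only show the mechanics.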

DOI: 10.21437/Interspeech.2020-1064

Cite as: Chung, J.S., Huh, J., Mun, S., Lee, M., Heo, H., Choe, S., Ham, C., Jung, S., Lee, B., Han, I. (2020) In Defence of Metric Learning for Speaker Recognition. Proc. Interspeech 2020, 2977-2981, DOI: 10.21437/Interspeech.2020-1064.

@inproceedings{chung2020defence,
  author={Joon Son Chung and Jaesung Huh and Seongkyu Mun and Minjae Lee and Hee-Soo Heo and Soyeon Choe and Chiheon Ham and Sunghwan Jung and Bong-Jin Lee and Icksang Han},
  title={{In Defence of Metric Learning for Speaker Recognition}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={2977--2981},
  doi={10.21437/Interspeech.2020-1064}
}