Segment Aggregation for Short Utterances Speaker Verification Using Raw Waveforms

Seung-bin Kim, Jee-weon Jung, Hye-jin Shim, Ju-ho Kim, Ha-Jin Yu


Most studies on speaker verification systems focus on long-duration utterances, which contain sufficient phonetic information. However, the performance of these systems is known to degrade when short-duration utterances are input, owing to the lack of phonetic information compared with long utterances. In this paper, we propose a method that compensates for the performance degradation of speaker verification on short utterances, referred to as “segment aggregation”. The proposed method adopts an ensemble-based design to improve the stability and accuracy of speaker verification systems. It segments an input utterance into several short utterances and then aggregates the segment embeddings extracted from the segmented inputs to compose a speaker embedding. The segment embeddings and the aggregated speaker embedding are then trained simultaneously. In addition, we adapt the teacher-student learning framework to the proposed method. Experimental results on different input durations using the VoxCeleb1 test set demonstrate that the proposed technique improves speaker verification performance by a relative 45.37% compared to the baseline system under the 1-second test utterance condition.
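The sketch below illustrates the core segment-aggregation idea described in the abstract: an utterance is split into fixed-length segments, each segment is encoded by a shared raw-waveform encoder, and the segment embeddings are aggregated into a single speaker embedding so both can be trained jointly. The encoder architecture, segment length, and mean aggregation are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal PyTorch sketch of segment aggregation for raw-waveform speaker
# verification. Encoder design, 1-second segments, and mean pooling are
# illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn


class SegmentEncoder(nn.Module):
    """Toy raw-waveform encoder standing in for a RawNet-style front end."""

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=251, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, emb_dim)

    def forward(self, x):                        # x: (batch, samples)
        h = self.net(x.unsqueeze(1)).squeeze(-1)  # (batch, 64)
        return self.fc(h)                         # (batch, emb_dim)


class SegmentAggregator(nn.Module):
    """Splits an utterance into fixed-length segments, encodes each segment,
    and averages the segment embeddings into one speaker embedding."""

    def __init__(self, encoder: nn.Module, seg_len: int = 16000):
        super().__init__()
        self.encoder = encoder
        self.seg_len = seg_len                    # 1 s at 16 kHz (assumed)

    def forward(self, wave):                      # wave: (batch, samples)
        n_seg = wave.shape[1] // self.seg_len
        segs = wave[:, : n_seg * self.seg_len].reshape(-1, self.seg_len)
        seg_emb = self.encoder(segs)              # (batch * n_seg, emb_dim)
        seg_emb = seg_emb.reshape(wave.shape[0], n_seg, -1)
        spk_emb = seg_emb.mean(dim=1)             # aggregated speaker embedding
        return seg_emb, spk_emb


if __name__ == "__main__":
    model = SegmentAggregator(SegmentEncoder())
    wave = torch.randn(2, 4 * 16000)              # two 4-second utterances
    seg_emb, spk_emb = model(wave)
    # Joint training would apply a speaker loss to every segment embedding
    # and to the aggregated embedding, then combine the losses.
    print(seg_emb.shape, spk_emb.shape)           # (2, 4, 128) (2, 128)
```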


DOI: 10.21437/Interspeech.2020-1564

Cite as: Kim, S., Jung, J., Shim, H., Kim, J., Yu, H. (2020) Segment Aggregation for Short Utterances Speaker Verification Using Raw Waveforms. Proc. Interspeech 2020, 1521-1525, DOI: 10.21437/Interspeech.2020-1564.


@inproceedings{Kim2020,
  author={Seung-bin Kim and Jee-weon Jung and Hye-jin Shim and Ju-ho Kim and Ha-Jin Yu},
  title={{Segment Aggregation for Short Utterances Speaker Verification Using Raw Waveforms}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1521--1525},
  doi={10.21437/Interspeech.2020-1564},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1564}
}