Attentive Statistics Pooling for Deep Speaker Embedding

Koji Okabe, Takafumi Koshinaka, Koichi Shinoda

This paper proposes attentive statistics pooling for deep speaker embedding in text-independent speaker verification. In conventional speaker embedding, frame-level features are averaged over all the frames of a single utterance to form an utterance-level feature. Our method utilizes an attention mechanism to give different weights to different frames and generates not only weighted means but also weighted standard deviations. In this way, it can capture long-term variations in speaker characteristics more effectively. An evaluation on the NIST SRE 2012 and the VoxCeleb data sets shows that it reduces equal error rates (EERs) from the conventional method by 7.5% and 8.1%, respectively.
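The pooling described in the abstract (attention weights per frame, then a weighted mean and weighted standard deviation, concatenated) can be sketched as follows. This is a minimal illustrative NumPy version, not the authors' implementation; the attention nonlinearity (tanh), the parameter shapes `W`, `b`, `v`, and the epsilon floor are assumptions for the sketch.

```python
import numpy as np

def attentive_stats_pooling(H, W, b, v, eps=1e-9):
    """Pool frame-level features H (T x D) into a 2D-dim utterance vector.

    Sketch under assumed shapes: W is (Da x D), b and v are (Da,).
    Scores are softmax-normalized over frames, then used to form a
    weighted mean and weighted standard deviation, which are concatenated.
    """
    scores = v @ np.tanh(W @ H.T + b[:, None])   # per-frame attention scores, (T,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                         # attention weights, sum to 1
    mu = alpha @ H                               # weighted mean, (D,)
    var = alpha @ (H * H) - mu * mu              # weighted variance
    sigma = np.sqrt(np.maximum(var, eps))        # weighted standard deviation
    return np.concatenate([mu, sigma])           # utterance-level feature, (2D,)

# Usage: 50 frames of 8-dim features, 4-dim attention space (toy sizes)
rng = np.random.default_rng(0)
T, D, Da = 50, 8, 4
H = rng.standard_normal((T, D))
W, b, v = rng.standard_normal((Da, D)), rng.standard_normal(Da), rng.standard_normal(Da)
emb = attentive_stats_pooling(H, W, b, v)
print(emb.shape)  # (16,)
```

With uniform attention weights this reduces to ordinary statistics pooling (plain mean and standard deviation); the learned weights let informative frames contribute more to both statistics.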

DOI: 10.21437/Interspeech.2018-993

Cite as: Okabe, K., Koshinaka, T., Shinoda, K. (2018) Attentive Statistics Pooling for Deep Speaker Embedding. Proc. Interspeech 2018, 2252-2256, DOI: 10.21437/Interspeech.2018-993.

@inproceedings{okabe18_interspeech,
  author={Koji Okabe and Takafumi Koshinaka and Koichi Shinoda},
  title={Attentive Statistics Pooling for Deep Speaker Embedding},
  booktitle={Proc. Interspeech 2018},
  pages={2252--2256},
  doi={10.21437/Interspeech.2018-993},
  year={2018}
}