Improving Multi-Scale Aggregation Using Feature Pyramid Module for Robust Speaker Verification of Variable-Duration Utterances

Youngmoon Jung, Seong Min Kye, Yeunju Choi, Myunghun Jung, Hoirin Kim


Currently, the most widely used approach for speaker verification is deep speaker embedding learning. In this approach, a speaker embedding vector is obtained by pooling single-scale features extracted from the last layer of a speaker feature extractor. Multi-scale aggregation (MSA), which utilizes multi-scale features from different layers of the feature extractor, has recently been introduced and shows superior performance on variable-duration utterances. To increase robustness in dealing with utterances of arbitrary duration, this paper improves MSA by using a feature pyramid module. The module enhances the speaker-discriminative information of features from multiple layers via a top-down pathway and lateral connections. We extract speaker embeddings using the enhanced features, which contain rich speaker information at different time scales. Experiments on the VoxCeleb dataset show that the proposed module improves on previous MSA methods with a smaller number of parameters. It also achieves better performance than state-of-the-art approaches for both short and long utterances.
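The top-down pathway with lateral connections described in the abstract follows the general feature-pyramid design: deep, coarse features are upsampled and merged with shallower, finer ones before pooling. A minimal NumPy sketch of this idea is below; the stage shapes, the common channel width `d`, and the per-frame linear maps standing in for 1x1 convolutions are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-scale features from three extractor stages,
# shaped (channels, time): deeper stages are wider but temporally coarser.
c2 = rng.standard_normal((64, 400))   # shallow stage, fine time scale
c3 = rng.standard_normal((128, 200))
c4 = rng.standard_normal((256, 100))  # deep stage, coarse time scale

d = 128  # common channel width of the pyramid (an assumption)

# Lateral connections: 1x1 convolutions, i.e. per-frame linear projections.
W2, W3, W4 = (rng.standard_normal((d, c.shape[0])) * 0.01
              for c in (c2, c3, c4))

def upsample2(x):
    """Nearest-neighbour upsampling along the time axis by a factor of 2."""
    return np.repeat(x, 2, axis=1)

# Top-down pathway: start from the deepest feature map, then merge downward
# by upsampling and adding the laterally projected shallower features.
p4 = W4 @ c4
p3 = W3 @ c3 + upsample2(p4)
p2 = W2 @ c2 + upsample2(p3)

# Each enhanced map can then be pooled over time and the per-scale
# statistics aggregated into a single speaker embedding.
embedding = np.concatenate([p.mean(axis=1) for p in (p2, p3, p4)])
print(embedding.shape)  # (384,)
```

In practice the lateral projections and any smoothing layers would be learned jointly with the speaker classifier, and the pooled statistics fed to a final embedding layer; this sketch only illustrates the data flow of the top-down merge.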


DOI: 10.21437/Interspeech.2020-1025

Cite as: Jung, Y., Kye, S.M., Choi, Y., Jung, M., Kim, H. (2020) Improving Multi-Scale Aggregation Using Feature Pyramid Module for Robust Speaker Verification of Variable-Duration Utterances. Proc. Interspeech 2020, 1501-1505, DOI: 10.21437/Interspeech.2020-1025.


@inproceedings{Jung2020,
  author={Youngmoon Jung and Seong Min Kye and Yeunju Choi and Myunghun Jung and Hoirin Kim},
  title={{Improving Multi-Scale Aggregation Using Feature Pyramid Module for Robust Speaker Verification of Variable-Duration Utterances}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1501--1505},
  doi={10.21437/Interspeech.2020-1025},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1025}
}