Strategies for End-to-End Text-Independent Speaker Verification

Weiwei Lin, Man-Wai Mak, Jen-Tzung Chien


State-of-the-art speaker verification (SV) systems typically consist of two distinct components: a deep neural network (DNN) for creating speaker embeddings and a backend for improving the embeddings' discriminative ability. This raises the question: Can we train an SV system without a backend? We believe that the backend compensates for the fact that the network is trained entirely on short speech segments. This paper shows that with several modifications to the x-vector system, DNN embeddings can be directly used for verification. The proposed modifications include: (1) a mask-pooling layer that augments the training samples by randomly masking the frame-level activations and then computing temporal statistics, (2) a sampling scheme that produces diverse training samples by randomly splicing several speech segments from each utterance, and (3) additional convolutional layers designed to reduce the temporal resolution to save computational cost. Experiments on NIST SRE 2016 and 2018 show that our method can achieve state-of-the-art performance with simple cosine similarity, while requiring only half the computational cost of the x-vector network.
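The mask-pooling idea described above can be illustrated with a minimal NumPy sketch. This is only an interpretation of the abstract, not the authors' implementation: the function name `mask_pooling`, the `mask_prob` parameter, and the choice of dropping whole frames are all assumptions; the paper may mask activations differently.

```python
import numpy as np

def mask_pooling(frames, mask_prob=0.2, rng=None):
    """Hypothetical sketch of mask-pooling: randomly mask frame-level
    activations, then compute temporal statistics (mean and standard
    deviation) over the surviving frames, as in x-vector statistics
    pooling.

    frames: (T, D) array of frame-level activations.
    Returns a (2*D,) utterance-level statistics vector.
    """
    rng = rng or np.random.default_rng()
    T, D = frames.shape
    keep = rng.random(T) >= mask_prob      # Boolean mask over the T frames
    if not keep.any():                     # guard: keep at least one frame
        keep[rng.integers(T)] = True
    kept = frames[keep]
    mean = kept.mean(axis=0)               # per-dimension temporal mean
    std = kept.std(axis=0)                 # per-dimension temporal std
    return np.concatenate([mean, std])     # (2*D,) statistics vector

# Example: 200 frames of 512-dim activations -> 1024-dim statistics vector
stats = mask_pooling(np.random.randn(200, 512), mask_prob=0.2)
print(stats.shape)
```

Because a different random mask is drawn on every forward pass, the same utterance yields different pooled statistics across epochs, which is what makes this act as data augmentation during training.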


DOI: 10.21437/Interspeech.2020-2092

Cite as: Lin, W., Mak, M.-W., Chien, J.-T. (2020) Strategies for End-to-End Text-Independent Speaker Verification. Proc. Interspeech 2020, 4308-4312, DOI: 10.21437/Interspeech.2020-2092.


@inproceedings{Lin2020,
  author={Weiwei Lin and Man-Wai Mak and Jen-Tzung Chien},
  title={{Strategies for End-to-End Text-Independent Speaker Verification}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4308--4312},
  doi={10.21437/Interspeech.2020-2092},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2092}
}