An Adaptive X-Vector Model for Text-Independent Speaker Verification

Bin Gu, Wu Guo, Fenglin Ding, Zhen-Hua Ling, Jun Du

In this paper, adaptive mechanisms are applied in deep neural network (DNN) training for x-vector-based text-independent speaker verification. First, adaptive convolutional neural networks (ACNNs) are employed in the frame-level embedding layers, where the parameters of the convolution filters are adjusted based on the input features. Compared with conventional CNNs, ACNNs have more flexibility in capturing speaker information. Moreover, we replace conventional batch normalization (BN) with adaptive batch normalization (ABN). By dynamically generating the scaling and shifting parameters of BN, ABN adapts models to the acoustic variability arising from factors such as channel mismatch and environmental noise. Finally, we combine these two methods to further improve performance. Experiments are carried out on the Speakers in the Wild (SITW) and VOiCES databases. The results demonstrate that the proposed methods significantly outperform the original x-vector approach.
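The core idea of ABN, as described above, is that the scaling and shifting parameters are not fixed learned constants but are generated from the input itself. The following is a minimal NumPy sketch of that mechanism; the function name, the use of an utterance-level mean as the conditioning context, and the linear projection shapes are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def adaptive_batch_norm(x, w_gamma, b_gamma, w_beta, b_beta, eps=1e-5):
    """Sketch of adaptive batch normalization (ABN).

    Instead of fixed learned scale/shift parameters, gamma and beta
    are generated from an utterance-level summary of the input, so the
    normalization can adapt to channel and noise conditions.

    x        : (T, D) frame-level features for one utterance
    w_gamma,
    w_beta   : (D, D) projection matrices (hypothetical shapes)
    b_gamma,
    b_beta   : (D,)   biases (hypothetical shapes)
    """
    # Standard normalization statistics over the time axis.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)

    # An utterance-level context vector drives the affine parameters.
    context = x.mean(axis=0)
    gamma = 1.0 + np.tanh(context @ w_gamma + b_gamma)  # data-dependent scale
    beta = context @ w_beta + b_beta                    # data-dependent shift
    return gamma * x_hat + beta

# Toy usage with random features and small random projections.
rng = np.random.default_rng(0)
T, D = 200, 8
x = rng.normal(size=(T, D))
w_g = 0.01 * rng.normal(size=(D, D))
w_b = 0.01 * rng.normal(size=(D, D))
b_g, b_b = np.zeros(D), np.zeros(D)
y = adaptive_batch_norm(x, w_g, b_g, w_b, b_b)
print(y.shape)
```

The ACNN component described in the abstract follows the same pattern one level earlier: a small auxiliary network conditioned on the input modulates the convolution filter parameters rather than the normalization statistics.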

DOI: 10.21437/Interspeech.2020-1071

Cite as: Gu, B., Guo, W., Ding, F., Ling, Z., Du, J. (2020) An Adaptive X-Vector Model for Text-Independent Speaker Verification. Proc. Interspeech 2020, 1506-1510, DOI: 10.21437/Interspeech.2020-1071.

@inproceedings{gu20_interspeech,
  author={Bin Gu and Wu Guo and Fenglin Ding and Zhen-Hua Ling and Jun Du},
  title={{An Adaptive X-Vector Model for Text-Independent Speaker Verification}},
  booktitle={Proc. Interspeech 2020},
  year={2020},
  pages={1506--1510},
  doi={10.21437/Interspeech.2020-1071}
}