Unsupervised Training of Siamese Networks for Speaker Verification

Umair Khan, Javier Hernando


Speaker-labeled background data is an essential requirement for most state-of-the-art approaches in speaker recognition, e.g., x-vectors and i-vector/PLDA. However, in practice it is difficult to access large amounts of labeled data. In this work, we propose siamese networks for speaker verification that do not use speaker labels. We propose two different siamese networks, with two and three branches respectively, where each branch is a CNN encoder. Since the goal is to avoid speaker labels, we generate the training pairs in an unsupervised manner. The client samples are selected within one database according to the highest cosine scores with the anchor in i-vector space. The impostor samples are selected in the same way, but from another database. Our double-branch siamese network performs binary classification using a cross-entropy loss during training; in the testing phase, we obtain speaker verification scores directly from its output layer. Our triple-branch siamese network, in contrast, is trained to learn speaker embeddings using a triplet loss; during testing, we extract speaker embeddings from its output layer and score them using cosine scoring. The evaluation is performed on the VoxCeleb-1 database, and the results show that the proposed unsupervised systems, used solely or in fusion, approach the supervised baseline.
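The unsupervised pair selection and the triplet objective described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the candidate count `k`, the margin value, and the cosine-distance form of the triplet loss are assumptions for illustration.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two i-vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_pairs(anchor, same_db, other_db, k=2):
    # Hypothetical sketch of the unsupervised pair selection:
    # client (positive) samples are the k candidates from the SAME
    # database with the highest cosine score to the anchor; impostor
    # (negative) samples are chosen the same way from ANOTHER database.
    client_idx = np.argsort([-cosine(anchor, v) for v in same_db])[:k]
    impostor_idx = np.argsort([-cosine(anchor, v) for v in other_db])[:k]
    return client_idx, impostor_idx

def triplet_loss(anchor, positive, negative, margin=0.2):
    # One common cosine-distance formulation of the triplet loss;
    # the margin value is an assumption, not taken from the paper.
    d_pos = 1.0 - cosine(anchor, positive)
    d_neg = 1.0 - cosine(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)
```

In this sketch the cosine score plays two roles, mirroring the abstract: it ranks candidates to build training pairs without labels, and it scores the learned embeddings at test time.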


DOI: 10.21437/Interspeech.2020-1882

Cite as: Khan, U., Hernando, J. (2020) Unsupervised Training of Siamese Networks for Speaker Verification. Proc. Interspeech 2020, 3002-3006, DOI: 10.21437/Interspeech.2020-1882.


@inproceedings{Khan2020,
  author={Umair Khan and Javier Hernando},
  title={{Unsupervised Training of Siamese Networks for Speaker Verification}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3002--3006},
  doi={10.21437/Interspeech.2020-1882},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1882}
}