Cosine-Distance Virtual Adversarial Training for Semi-Supervised Speaker-Discriminative Acoustic Embeddings

Florian L. Kreyssig, Philip C. Woodland


In this paper, we propose a semi-supervised learning (SSL) technique for training deep neural networks (DNNs) to generate speaker-discriminative acoustic embeddings (speaker embeddings). Obtaining large amounts of speaker recognition training data can be difficult for desired target domains, especially under privacy constraints. The proposed technique reduces the requirement for labelled data by leveraging unlabelled data. The technique is a variant of virtual adversarial training (VAT) [1]: a loss term that measures the robustness of the speaker embedding to input perturbations, quantified by the cosine distance between the clean and perturbed embeddings. We therefore term the technique cosine-distance virtual adversarial training (CD-VAT). Unlike many existing SSL techniques, the unlabelled data does not have to come from the same set of classes (here speakers) as the labelled data. The effectiveness of CD-VAT is shown on the 2750+ hour VoxCeleb data set, where on a speaker verification task it achieves a reduction in equal error rate (EER) of 11.1% relative to a purely supervised baseline. This is 32.5% of the improvement that would be achieved from supervised training if the speaker labels for the unlabelled data were available.
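The CD-VAT idea described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: a toy one-layer tanh network stands in for the speaker-embedding DNN, the adversarial direction is found with one power-iteration step following the standard VAT recipe, and finite-difference gradients replace backpropagation. The hyperparameters `xi` and `eps` are illustrative values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the speaker-embedding DNN: a single tanh layer.
W = rng.standard_normal((16, 32))

def embed(x):
    return np.tanh(W @ x)

def cos_dist(a, b):
    """Cosine distance: 0 for identical directions, up to 2 for opposite ones."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cdvat_loss(x, xi=1e-2, eps=0.5, n_power=1):
    """CD-VAT loss for one input: cosine distance between the clean embedding
    and the embedding of the adversarially perturbed input.

    The adversarial direction is estimated VAT-style: start from a random
    perturbation of norm xi, take power-iteration steps in the direction of
    the loss gradient, then rescale to norm eps. Gradients here are
    central finite differences (a backprop substitute for this sketch)."""
    e_clean = embed(x)
    f = lambda d: cos_dist(e_clean, embed(x + d))

    d = rng.standard_normal(x.shape)
    d *= xi / np.linalg.norm(d)
    h = 1e-5
    for _ in range(n_power):
        g = np.zeros_like(d)
        for i in range(len(d)):
            dp = d.copy(); dp[i] += h
            dm = d.copy(); dm[i] -= h
            g[i] = (f(dp) - f(dm)) / (2 * h)
        d = g * (xi / (np.linalg.norm(g) + 1e-12))

    r_adv = eps * d / np.linalg.norm(d)
    return f(r_adv)
```

Because the loss needs no label, it can be evaluated on unlabelled utterances and added to the supervised speaker-classification loss; minimising it pushes the network to produce embeddings that are stable under small input perturbations.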


DOI: 10.21437/Interspeech.2020-2270

Cite as: Kreyssig, F.L., Woodland, P.C. (2020) Cosine-Distance Virtual Adversarial Training for Semi-Supervised Speaker-Discriminative Acoustic Embeddings. Proc. Interspeech 2020, 3241-3245, DOI: 10.21437/Interspeech.2020-2270.


@inproceedings{Kreyssig2020,
  author={Florian L. Kreyssig and Philip C. Woodland},
  title={{Cosine-Distance Virtual Adversarial Training for Semi-Supervised Speaker-Discriminative Acoustic Embeddings}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3241--3245},
  doi={10.21437/Interspeech.2020-2270},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2270}
}