Adversarial Domain Adaptation for Speaker Verification Using Partially Shared Network

Zhengyang Chen, Shuai Wang, Yanmin Qian


Speaker verification systems usually suffer large performance degradation when applied to a new dataset from a different domain. In this work, we study a domain adaptation strategy between datasets in different languages using domain adversarial training. We introduce a partially shared network-based domain adversarial training architecture that learns an asymmetric mapping for the source- and target-domain embedding extractors. This architecture helps the embedding extractor learn domain-invariant features without sacrificing its speaker discrimination ability. In the cross-lingual domain adaptation evaluation, the source-domain data is English from NIST SRE04-10 and Switchboard, and the target-domain data is Cantonese and Tagalog from NIST SRE16. Our results show that the conventional adversarial training mode, in which the source- and target-domain embedding extractors are fully shared, indeed harms speaker discrimination; in contrast, the newly proposed architecture solves this problem and achieves ~25.0% relative average Equal Error Rate (EER) improvement on the SRE16 Cantonese and Tagalog evaluations.
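To make the "partially shared" idea concrete, the following is a minimal PyTorch sketch (not the authors' implementation; all layer sizes and module names are hypothetical) of the general pattern the abstract describes: each domain gets its own unshared front-end (the asymmetric mapping), the upper layers producing the speaker embedding are shared, and a domain classifier is trained through a gradient reversal layer so the shared embedding becomes domain-invariant.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in domain adversarial training:
    identity in the forward pass, negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing into the shared extractor.
        return -ctx.lamb * grad_output, None


class PartiallySharedExtractor(nn.Module):
    """Hypothetical sketch of a partially shared embedding extractor:
    domain-specific private front-ends feed shared upper layers; the domain
    classifier sees reversed gradients, while a separate speaker classifier
    (not shown) would be trained on the embedding as usual."""

    def __init__(self, feat_dim=40, hidden_dim=64, emb_dim=32):
        super().__init__()
        # Unshared (private) front-ends: one per domain -> asymmetric mapping.
        self.src_private = nn.Linear(feat_dim, hidden_dim)
        self.tgt_private = nn.Linear(feat_dim, hidden_dim)
        # Shared upper layers that produce the speaker embedding.
        self.shared = nn.Sequential(nn.ReLU(), nn.Linear(hidden_dim, emb_dim))
        # Binary domain classifier (source vs. target).
        self.domain_clf = nn.Linear(emb_dim, 2)

    def forward(self, x, domain, lamb=1.0):
        h = self.src_private(x) if domain == "src" else self.tgt_private(x)
        emb = self.shared(h)
        dom_logits = self.domain_clf(GradReverse.apply(emb, lamb))
        return emb, dom_logits


if __name__ == "__main__":
    model = PartiallySharedExtractor()
    feats = torch.randn(4, 40)  # a toy batch of 4 frame-level feature vectors
    emb, dom_logits = model(feats, domain="src")
    print(emb.shape, dom_logits.shape)  # torch.Size([4, 32]) torch.Size([4, 2])
```

Because only the front-ends are unshared, the domain classifier's adversarial gradient still pushes the shared layers toward domain-invariant embeddings, while each private branch is free to absorb domain-specific variation instead of the shared speaker-discriminative layers doing so.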


DOI: 10.21437/Interspeech.2020-2226

Cite as: Chen, Z., Wang, S., Qian, Y. (2020) Adversarial Domain Adaptation for Speaker Verification Using Partially Shared Network. Proc. Interspeech 2020, 3017-3021, DOI: 10.21437/Interspeech.2020-2226.


@inproceedings{Chen2020,
  author={Zhengyang Chen and Shuai Wang and Yanmin Qian},
  title={{Adversarial Domain Adaptation for Speaker Verification Using Partially Shared Network}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3017--3021},
  doi={10.21437/Interspeech.2020-2226},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2226}
}