Domain-Invariant Speaker Vector Projection by Model-Agnostic Meta-Learning

Jiawen Kang, Ruiqi Liu, Lantian Li, Yunqi Cai, Dong Wang, Thomas Fang Zheng


Domain generalization remains a critical problem for speaker recognition, even with state-of-the-art architectures based on deep neural networks. For example, a model trained on reading speech may largely fail when applied to singing or movie scenarios. In this paper, we propose a domain-invariant projection to improve the generalizability of speaker vectors. This projection is a simple neural network trained following the Model-Agnostic Meta-Learning (MAML) principle: the objective is to classify speakers in one domain after the model has been updated with speech data from another domain. We tested the proposed method on CNCeleb, a new dataset consisting of single-speaker multi-condition (SSMC) data. The results demonstrate that the MAML-based domain-invariant projection can produce more generalizable speaker vectors and effectively improve performance in unseen domains.
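The MAML objective described above — update the model on data from one domain, then evaluate (and take the meta-gradient) on another — can be illustrated with a small sketch. The code below is not the authors' implementation; it is a toy first-order MAML loop with a linear speaker classifier on synthetic two-domain data, where the domains are hypothetical stand-ins for, e.g., reading vs. singing speech. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(W, X, y):
    """Cross-entropy loss and gradient of a linear speaker classifier W."""
    P = softmax(X @ W)
    n = len(y)
    loss = -np.log(P[np.arange(n), y] + 1e-12).mean()
    P[np.arange(n), y] -= 1.0          # dL/d(logits) for softmax cross-entropy
    return loss, X.T @ P / n

# Toy setup: the same 4 speakers observed in two "domains" that differ
# by a constant channel shift (a crude stand-in for reading vs. singing).
n_spk, dim = 4, 8
centers = rng.normal(size=(n_spk, dim))        # per-speaker "voiceprints"
shift_a, shift_b = rng.normal(size=dim), rng.normal(size=dim)

def sample_domain(shift, n=32):
    y = rng.integers(0, n_spk, size=n)
    X = centers[y] + shift + 0.1 * rng.normal(size=(n, dim))
    return X, y

W = np.zeros((dim, n_spk))
inner_lr, outer_lr = 0.5, 0.5
for step in range(200):
    # Inner step: adapt the parameters on a batch from domain A.
    Xa, ya = sample_domain(shift_a)
    _, ga = loss_and_grad(W, Xa, ya)
    W_adapted = W - inner_lr * ga
    # Outer step: evaluate the *adapted* parameters on domain B and update
    # the original W with that gradient (first-order MAML approximation).
    Xb, yb = sample_domain(shift_b)
    _, gb = loss_and_grad(W_adapted, Xb, yb)
    W -= outer_lr * gb

# After meta-training, W should classify speakers well in domain B.
Xq, yq = sample_domain(shift_b, n=200)
acc = (softmax(Xq @ W).argmax(axis=1) == yq).mean()
print(f"domain-B accuracy: {acc:.2f}")
```

Full MAML differentiates through the inner update (a second-order term); the first-order variant used here simply takes the outer gradient at the adapted parameters, which is a common and much cheaper approximation.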


DOI: 10.21437/Interspeech.2020-2562

Cite as: Kang, J., Liu, R., Li, L., Cai, Y., Wang, D., Zheng, T.F. (2020) Domain-Invariant Speaker Vector Projection by Model-Agnostic Meta-Learning. Proc. Interspeech 2020, 3825-3829, DOI: 10.21437/Interspeech.2020-2562.


@inproceedings{Kang2020,
  author={Jiawen Kang and Ruiqi Liu and Lantian Li and Yunqi Cai and Dong Wang and Thomas Fang Zheng},
  title={{Domain-Invariant Speaker Vector Projection by Model-Agnostic Meta-Learning}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3825--3829},
  doi={10.21437/Interspeech.2020-2562},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2562}
}