Intra-Class Variation Reduction of Speaker Representation in Disentanglement Framework

Yoohwan Kwon, Soo-Whan Chung, Hong-Goo Kang


In this paper, we propose an effective training strategy for extracting robust speaker representations from a speech signal. A key challenge in speaker recognition is to learn latent representations, or embeddings, that contain solely speaker-characteristic information and are therefore robust to intra-speaker variations. By modifying the network architecture to generate both speaker-related and speaker-unrelated representations, we exploit a learning criterion that minimizes the mutual information between these disentangled embeddings. We also introduce an identity change loss, which applies a reconstruction error across different utterances spoken by the same speaker. Since the proposed criteria reduce the variation in speaker characteristics caused by changes in background environment or spoken content, the resulting embeddings of each speaker become more consistent. The effectiveness of the proposed method is demonstrated in two ways: disentanglement performance, and improved speaker recognition accuracy over a baseline model on the VoxCeleb1 benchmark dataset. Ablation studies also show the impact of each criterion on overall performance.
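The two criteria described in the abstract can be sketched in code. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: the mutual-information criterion is stood in for by a simple cross-correlation penalty between the disentangled embedding batches (the paper's actual MI estimator is not specified in the abstract), and the identity change loss reconstructs one utterance's features using the speaker embedding taken from a different utterance of the same speaker. All function names, shapes, and the toy linear "decoder" are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_proxy(spk, res):
    """Stand-in for the mutual-information criterion: a squared
    cross-correlation penalty that is zero when the centered speaker
    and residual embedding batches are linearly uncorrelated."""
    spk_c = spk - spk.mean(axis=0)
    res_c = res - res.mean(axis=0)
    corr = spk_c.T @ res_c / spk.shape[0]   # (d_spk, d_res) cross-correlation
    return float((corr ** 2).sum())

def identity_change_loss(feat_a, spk_b, res_a, decoder):
    """Reconstruct utterance A's features from the speaker embedding of a
    *different* utterance B by the same speaker, plus A's residual
    embedding; the MSE penalizes speaker embeddings that drift across a
    speaker's utterances."""
    recon = decoder(np.concatenate([spk_b, res_a], axis=-1))
    return float(((recon - feat_a) ** 2).mean())

# Toy shapes: batch of 64, 16-dim speaker/residual embeddings, 32-dim features.
spk = rng.standard_normal((64, 16))
res = rng.standard_normal((64, 16))
feat_a = rng.standard_normal((64, 32))
W = rng.standard_normal((32, 32)) * 0.1     # toy linear "decoder"
decoder = lambda z: z @ W

l_mi = mi_proxy(spk, res)
# np.roll pairs each utterance with another utterance of the "same speaker".
l_id = identity_change_loss(feat_a, np.roll(spk, 1, axis=0), res, decoder)
```

In training, the two terms would be added (with weights) to the usual speaker classification loss, pushing speaker embeddings to be both independent of the residual factors and stable across a speaker's utterances.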


DOI: 10.21437/Interspeech.2020-2075

Cite as: Kwon, Y., Chung, S.-W., Kang, H.-G. (2020) Intra-Class Variation Reduction of Speaker Representation in Disentanglement Framework. Proc. Interspeech 2020, 3231-3235, DOI: 10.21437/Interspeech.2020-2075.


@inproceedings{Kwon2020,
  author={Yoohwan Kwon and Soo-Whan Chung and Hong-Goo Kang},
  title={{Intra-Class Variation Reduction of Speaker Representation in Disentanglement Framework}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3231--3235},
  doi={10.21437/Interspeech.2020-2075},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2075}
}