Recursive Whitening Transformation for Speaker Recognition on Language Mismatched Condition

Suwon Shon, Seongkyu Mun, Hanseok Ko


Recently in speaker recognition, performance degradation due to channel-domain mismatch has been actively addressed. However, mismatches arising from language are yet to be sufficiently addressed. This paper proposes an approach that employs a recursive whitening transformation to mitigate the language mismatched condition. The proposed method is based on multiple whitening transformations, which are intended to remove un-whitened residual components in the dataset associated with i-vector length normalization. The experiments were conducted on the Speaker Recognition Evaluation 2016 trials, in which the task is non-English speaker recognition using a development dataset consisting of both a large-scale out-of-domain (English) dataset and an extremely low-quantity in-domain (non-English) dataset. For performance comparison, we developed a state-of-the-art system using a deep neural network and bottleneck features, based on a phonetically aware model. The experimental results, along with those of prior studies, validate the effectiveness of the proposed method under the language mismatched condition.
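The core idea described in the abstract can be sketched as follows: whitening followed by i-vector length normalization leaves residual correlation (the normalization bends the whitened space), so whitening and normalization are applied again in alternation. This is a minimal illustrative sketch only, not the authors' implementation; the function names, the ZCA-style whitening, and the `depth` parameter are assumptions.

```python
import numpy as np

def whiten_params(X, eps=1e-10):
    # Estimate the mean and a ZCA-style whitening matrix
    # (inverse square root of the sample covariance).
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return mu, W

def length_norm(X):
    # Project each i-vector onto the unit sphere.
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def recursive_whitening(X, depth=2):
    # Alternate whitening and length normalization; each extra
    # pass removes the un-whitened residual reintroduced by the
    # preceding normalization step.
    params = []
    for _ in range(depth):
        mu, W = whiten_params(X)
        X = length_norm((X - mu) @ W)
        params.append((mu, W))
    return X, params
```

At test time, the stored `(mu, W)` pairs from the development data would be applied in the same order to each incoming i-vector before scoring.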


DOI: 10.21437/Interspeech.2017-545

Cite as: Shon, S., Mun, S., Ko, H. (2017) Recursive Whitening Transformation for Speaker Recognition on Language Mismatched Condition. Proc. Interspeech 2017, 2869-2873, DOI: 10.21437/Interspeech.2017-545.


@inproceedings{Shon2017,
  author={Suwon Shon and Seongkyu Mun and Hanseok Ko},
  title={Recursive Whitening Transformation for Speaker Recognition on Language Mismatched Condition},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2869--2873},
  doi={10.21437/Interspeech.2017-545},
  url={http://dx.doi.org/10.21437/Interspeech.2017-545}
}