Speaker Re-Identification with Speaker Dependent Speech Enhancement

Yanpei Shi, Qiang Huang, Thomas Hain


While the use of deep neural networks has significantly boosted speaker recognition performance, it remains challenging to separate speakers in poor acoustic environments, where speech enhancement methods have traditionally been used to improve performance. Recent work has shown that adapting speech enhancement to the speaker can lead to further gains. This paper introduces a novel approach that cascades speech enhancement and speaker recognition. In the first step, a speaker embedding vector is generated; in the second step, it is used to enhance the speech quality and re-identify the speakers. The models are trained in an integrated framework with joint optimisation. The proposed approach is evaluated on the VoxCeleb1 dataset, which aims to assess speaker recognition in real-world situations. In addition, three types of noise at different signal-to-noise ratios were added for this work. The results show that the proposed approach, using speaker-dependent speech enhancement, yields better speaker recognition and speech enhancement performance than two baselines under various noise conditions.
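The cascaded architecture described above can be sketched in code. The following is a minimal numpy toy, not the paper's implementation: all layer shapes, the mean-pooling embedding, the sigmoid-mask enhancement, and the loss weighting are assumptions made for illustration. It only shows the data flow of the two steps (embedding extraction, then embedding-conditioned enhancement followed by re-identification) and the joint loss used for integrated training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# frames, feature dim, embedding dim, number of speakers
T, F, D, N_SPK = 100, 40, 64, 10

# Randomly initialised weights, standing in for trained networks
W_emb = rng.standard_normal((F, D)) * 0.01        # embedding projection
W_enh = rng.standard_normal((F + D, F)) * 0.01    # enhancement layer
W_cls = rng.standard_normal((D, N_SPK)) * 0.01    # speaker classifier

def forward(noisy):
    """noisy: (T, F) acoustic features of a noisy utterance."""
    # Step 1: derive a speaker embedding from the noisy input (mean pooling)
    emb = np.tanh(noisy.mean(axis=0) @ W_emb)                     # (D,)
    # Step 2: enhancement conditioned on the embedding via a sigmoid mask
    cond = np.concatenate([noisy, np.tile(emb, (T, 1))], axis=1)  # (T, F+D)
    mask = 1.0 / (1.0 + np.exp(-(cond @ W_enh)))                  # (T, F)
    enhanced = mask * noisy
    # Re-identify the speaker from the enhanced features
    logits = np.tanh(enhanced.mean(axis=0) @ W_emb) @ W_cls       # (N_SPK,)
    return enhanced, logits

def joint_loss(enhanced, logits, clean, spk_id):
    """Joint optimisation target: enhancement MSE + speaker cross-entropy."""
    mse = np.mean((enhanced - clean) ** 2)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    ce = -np.log(p[spk_id] + 1e-12)
    return mse + ce

noisy = rng.standard_normal((T, F))
clean = rng.standard_normal((T, F))
enhanced, logits = forward(noisy)
loss = joint_loss(enhanced, logits, clean, spk_id=3)
```

In the actual system both stages would be deep networks trained end to end; the point of the sketch is that the enhancement stage receives the speaker embedding as an extra input, so the enhancement becomes speaker dependent, and both objectives are optimised jointly.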


 DOI: 10.21437/Interspeech.2020-1772

Cite as: Shi, Y., Huang, Q., Hain, T. (2020) Speaker Re-Identification with Speaker Dependent Speech Enhancement. Proc. Interspeech 2020, 1530-1534, DOI: 10.21437/Interspeech.2020-1772.


@inproceedings{Shi2020,
  author={Yanpei Shi and Qiang Huang and Thomas Hain},
  title={{Speaker Re-Identification with Speaker Dependent Speech Enhancement}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={1530--1534},
  doi={10.21437/Interspeech.2020-1772},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1772}
}