Identify Speakers in Cocktail Parties with End-to-End Attention

Junzhe Zhu, Mark Hasegawa-Johnson, Leda Sarı


In scenarios where multiple speakers talk at the same time, it is important to be able to identify the talkers accurately. This paper presents an end-to-end system that integrates speech source extraction and speaker identification, and proposes a new way to jointly optimize these two parts by max-pooling the speaker predictions along the channel dimension. Residual attention permits us to learn spectrogram masks that are optimized for the purpose of speaker identification, while residual forward connections permit dilated convolution with a sufficiently large context window to guarantee correct streaming across syllable boundaries. End-to-end training results in a system that recognizes one speaker in a two-speaker broadcast speech mixture with 99.9% accuracy and both speakers with 93.9% accuracy, and that recognizes all speakers in three-speaker scenarios with 81.2% accuracy.
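The channel-wise max-pooling described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the shapes, the sigmoid multi-label readout, and the binary cross-entropy loss are illustrative assumptions about how per-channel speaker predictions could be pooled so the loss does not depend on which extracted channel captured which talker:

```python
import numpy as np

# Illustrative setup (not from the paper): C extracted channels, S enrolled speakers.
rng = np.random.default_rng(0)
C, S = 2, 10
channel_logits = rng.normal(size=(C, S))   # per-channel speaker scores, shape (C, S)

# Max-pool along the channel dimension: each speaker's score is the best
# score any channel assigned to them, so channel ordering does not matter.
pooled = channel_logits.max(axis=0)        # shape (S,)

# Sigmoid turns pooled scores into independent per-speaker probabilities,
# suiting the multi-speaker (multi-label) setting.
probs = 1.0 / (1.0 + np.exp(-pooled))

# Hypothetical target: speakers 3 and 7 are active in the mixture.
target = np.zeros(S)
target[[3, 7]] = 1.0

# Binary cross-entropy over all speakers as a joint training signal.
bce = -(target * np.log(probs) + (1 - target) * np.log(1 - probs)).mean()
```

Because the max is taken before the loss, swapping the two output channels leaves `pooled`, `probs`, and `bce` unchanged, which is the permutation invariance the joint objective needs.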


DOI: 10.21437/Interspeech.2020-2430

Cite as: Zhu, J., Hasegawa-Johnson, M., Sarı, L. (2020) Identify Speakers in Cocktail Parties with End-to-End Attention. Proc. Interspeech 2020, 3092-3096, DOI: 10.21437/Interspeech.2020-2430.


@inproceedings{Zhu2020,
  author={Junzhe Zhu and Mark Hasegawa-Johnson and Leda Sarı},
  title={{Identify Speakers in Cocktail Parties with End-to-End Attention}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3092--3096},
  doi={10.21437/Interspeech.2020-2430},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2430}
}