Joint Speaker Counting, Speech Recognition, and Speaker Identification for Overlapped Speech of Any Number of Speakers

Naoyuki Kanda, Yashesh Gaur, Xiaofei Wang, Zhong Meng, Zhuo Chen, Tianyan Zhou, Takuya Yoshioka


We propose an end-to-end speaker-attributed automatic speech recognition model that unifies speaker counting, speech recognition, and speaker identification on monaural overlapped speech. Our model is built on serialized output training (SOT) with an attention-based encoder-decoder, a recently proposed method for recognizing overlapped speech comprising an arbitrary number of speakers. We extend SOT by introducing a speaker inventory as an auxiliary input to produce speaker labels as well as multi-speaker transcriptions. All model parameters are optimized by a speaker-attributed maximum mutual information criterion, which represents a joint probability for overlapped speech recognition and speaker identification. Experiments on the LibriSpeech corpus show that our proposed method achieves a significantly better speaker-attributed word error rate than a baseline that performs overlapped speech recognition and speaker identification separately.
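To make the SOT idea concrete, the sketch below (not the authors' code) shows how a serialized training target can be built for overlapped speech: each speaker's transcription is concatenated in first-in-first-out order by start time, with a speaker-change token inserted between speakers. The token name `<sc>` and the helper function are illustrative.

```python
# Illustrative sketch of serialized output training (SOT) target construction.
# Transcriptions of overlapping speakers are sorted by utterance start time
# and joined with a speaker-change token, so a single decoder can emit all
# speakers' words as one sequence. Token name and helper are assumptions.

def build_sot_target(utterances, sc_token="<sc>"):
    """utterances: list of (start_time_sec, transcription) tuples."""
    ordered = sorted(utterances, key=lambda u: u[0])  # first-come-first-served
    return f" {sc_token} ".join(text for _, text in ordered)

# Two overlapping speakers; the earlier-starting utterance comes first.
mix = [(1.2, "how are you"), (0.0, "hello there")]
print(build_sot_target(mix))  # -> hello there <sc> how are you
```

Because the number of `<sc>` tokens in the output directly reflects the number of speakers, the same sequence also performs speaker counting implicitly.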


DOI: 10.21437/Interspeech.2020-1085

Cite as: Kanda, N., Gaur, Y., Wang, X., Meng, Z., Chen, Z., Zhou, T., Yoshioka, T. (2020) Joint Speaker Counting, Speech Recognition, and Speaker Identification for Overlapped Speech of Any Number of Speakers. Proc. Interspeech 2020, 36-40, DOI: 10.21437/Interspeech.2020-1085.


@inproceedings{Kanda2020,
  author={Naoyuki Kanda and Yashesh Gaur and Xiaofei Wang and Zhong Meng and Zhuo Chen and Tianyan Zhou and Takuya Yoshioka},
  title={{Joint Speaker Counting, Speech Recognition, and Speaker Identification for Overlapped Speech of Any Number of Speakers}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={36--40},
  doi={10.21437/Interspeech.2020-1085},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1085}
}