End-to-End Multilingual Multi-Speaker Speech Recognition

Hiroshi Seki, Takaaki Hori, Shinji Watanabe, Jonathan Le Roux, John R. Hershey

The expressive power of end-to-end automatic speech recognition (ASR) systems enables direct estimation of a character or word label sequence from a sequence of acoustic features. Direct optimization of the whole system is advantageous because it not only eliminates the internal linkage necessary for hybrid systems, but also extends the scope of potential applications by allowing the model to be trained for various objectives. In this paper, we tackle the challenging task of multilingual multi-speaker ASR using such an all-in-one end-to-end system. Several multilingual ASR systems were recently proposed based on a monolithic neural network architecture without language-dependent modules, showing that modeling of multiple languages is well within the capabilities of an end-to-end framework. There has also been growing interest in multi-speaker speech recognition, which enables generation of multiple label sequences from single-channel mixed speech. In particular, a multi-speaker end-to-end ASR system that can directly model one-to-many mappings without additional auxiliary clues was recently proposed. The proposed model, which integrates the capabilities of these two systems, is evaluated using mixtures of two speakers generated from 10 languages, including code-switching utterances.
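The one-to-many mapping mentioned in the abstract is commonly trained with a permutation-invariant objective: since the order of the speakers in a mixture is arbitrary, the loss is taken over the best assignment of output heads to reference transcripts. The following is a minimal sketch of that idea, not the paper's exact implementation; it assumes the per-speaker losses (e.g. CTC/attention losses) have already been computed into a matrix.

```python
from itertools import permutations

def permutation_invariant_loss(loss_matrix):
    """Given loss_matrix[i][j] = loss of output head i scored against
    reference transcript j, return the minimum total loss over all
    assignments of heads to references, together with the best
    permutation (hypothetical helper illustrating permutation-invariant
    training for multi-speaker ASR)."""
    n = len(loss_matrix)
    best_total = float("inf")
    best_perm = None
    # Brute-force search is fine here: n is the number of speakers
    # (typically 2 or 3), so n! is tiny.
    for perm in permutations(range(n)):
        total = sum(loss_matrix[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

# Two-speaker example: head 0 matches reference 1 better, and vice versa.
losses = [[5.0, 1.0],
          [2.0, 6.0]]
loss, perm = permutation_invariant_loss(losses)  # loss 3.0, perm (1, 0)
```

During training, only the gradient of the winning assignment's losses is backpropagated, so each output head is free to specialize on one speaker per mixture.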

DOI: 10.21437/Interspeech.2019-3038

Cite as: Seki, H., Hori, T., Watanabe, S., Le Roux, J., Hershey, J.R. (2019) End-to-End Multilingual Multi-Speaker Speech Recognition. Proc. Interspeech 2019, 3755-3759, DOI: 10.21437/Interspeech.2019-3038.

@inproceedings{seki19_interspeech,
  author={Hiroshi Seki and Takaaki Hori and Shinji Watanabe and Jonathan Le Roux and John R. Hershey},
  title={{End-to-End Multilingual Multi-Speaker Speech Recognition}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={3755--3759},
  doi={10.21437/Interspeech.2019-3038}
}