Meeting Transcription Using Asynchronous Distant Microphones

Takuya Yoshioka, Dimitrios Dimitriadis, Andreas Stolcke, William Hinthorn, Zhuo Chen, Michael Zeng, Xuedong Huang

We describe a system that generates speaker-annotated transcripts of meetings by using multiple asynchronous distant microphones. The system is composed of continuous audio stream alignment, blind beamforming, speech recognition, speaker diarization, and system combination. While the idea of improving meeting transcription accuracy by leveraging multiple recordings has been investigated in specific technology areas such as beamforming, our objective is to assess the feasibility of a complete system with a set of mobile devices and conduct a detailed analysis. With seven input audio streams, our system achieves a word error rate (WER) of 22.3% and a speaker-attributed WER (SAWER) of 26.7%, and comes within 3% of the close-talking microphone WER on non-overlapping speech. The relative gains in SAWER over a single-device system are 14.8%, 20.3%, and 22.4% for three, five, and seven microphones, respectively. The full system achieves a 13.6% diarization error rate, 10% of which is due to overlapped speech.
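The abstract's first pipeline stage, continuous audio stream alignment, must estimate the unknown time offset between recordings from unsynchronized devices. As a minimal illustrative sketch (the abstract does not detail the paper's actual alignment algorithm), the offset between two streams can be estimated by maximizing their cross-correlation; the function name below is hypothetical:

```python
def cross_correlation_offset(ref, other, max_lag):
    """Return the lag (in samples) that best aligns `other` to `ref`,
    chosen by exhaustively scoring the cross-correlation over lags."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, x in enumerate(ref):
            j = i + lag
            if 0 <= j < len(other):
                score += x * other[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy example: the same pulse captured by two devices, the second
# delayed by 3 samples relative to the first.
ref   = [0, 0, 1.0, 0.5, 0, 0, 0, 0, 0, 0]
other = [0, 0, 0, 0, 0, 1.0, 0.5, 0, 0, 0]
print(cross_correlation_offset(ref, other, max_lag=5))  # → 3
```

A production system would work on framed, long-duration streams and would also need to track clock drift, but the core idea of scoring candidate lags is the same.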

DOI: 10.21437/Interspeech.2019-3088

Cite as: Yoshioka, T., Dimitriadis, D., Stolcke, A., Hinthorn, W., Chen, Z., Zeng, M., Huang, X. (2019) Meeting Transcription Using Asynchronous Distant Microphones. Proc. Interspeech 2019, 2968-2972, DOI: 10.21437/Interspeech.2019-3088.

@inproceedings{yoshioka19_interspeech,
  author={Takuya Yoshioka and Dimitrios Dimitriadis and Andreas Stolcke and William Hinthorn and Zhuo Chen and Michael Zeng and Xuedong Huang},
  title={{Meeting Transcription Using Asynchronous Distant Microphones}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={2968--2972},
  doi={10.21437/Interspeech.2019-3088}
}