Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

The Segmentation of Multi-Channel Meeting Recordings for Automatic Speech Recognition

John Dines (1), Jithendra Vepa (1), Thomas Hain (2)

(1) IDIAP Research Institute, Switzerland; (2) University of Sheffield, UK

A major research challenge in the analysis of meeting room data is the automatic transcription of what is spoken during meetings, a task which has gained considerable attention within the ASR research community through the NIST rich transcription evaluations conducted over the last three years. One of the principal difficulties in carrying out automatic speech recognition (ASR) on this data is the challenging recording environment, which has prompted the development of novel audio pre-processing approaches. In this paper we present a system for the automatic segmentation of multiple-channel individual headset microphone (IHM) meeting recordings for automatic speech recognition. The system relies on an MLP classifier, trained on several meeting room corpora, to identify speech/non-speech segments of the recordings. We give a detailed analysis of segmentation performance for a number of system configurations; our best system achieves ASR performance on automatically generated segments within 1.3% absolute (3.7% relative) of a manual segmentation of the data.
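The pipeline the abstract describes, an MLP emitting per-frame speech probabilities which are then converted into segments for the recogniser, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the network weights, feature dimensions, decision threshold, and minimum-duration constraint are all hypothetical placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SpeechNonSpeechMLP:
    """One-hidden-layer MLP emitting a per-frame speech probability.

    Weights would be trained on meeting room corpora; here they are
    simply supplied by the caller (hypothetical, for illustration).
    """
    def __init__(self, w1, b1, w2, b2):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

    def predict(self, frames):
        # frames: (n_frames, n_features) acoustic feature matrix
        h = np.tanh(frames @ self.w1 + self.b1)
        return sigmoid(h @ self.w2 + self.b2)  # (n_frames,) speech probability

def probabilities_to_segments(probs, threshold=0.5, min_len=3):
    """Threshold per-frame probabilities into (start, end) frame segments,
    discarding speech runs shorter than min_len frames."""
    labels = probs > threshold
    segments, start = [], None
    for i, is_speech in enumerate(labels):
        if is_speech and start is None:
            start = i
        elif not is_speech and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(labels) - start >= min_len:
        segments.append((start, len(labels)))
    return segments
```

In practice a system of this kind would also smooth the frame posteriors and pad segment boundaries before passing the audio to the recogniser; those refinements are omitted here for brevity.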


Bibliographic reference. Dines, John / Vepa, Jithendra / Hain, Thomas (2006): "The segmentation of multi-channel meeting recordings for automatic speech recognition", in INTERSPEECH-2006, paper 1548-Tue3A1O.4.