INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Robust Speech Enhancement Techniques for ASR in Non-Stationary Noise and Dynamic Environments

Gang Liu (1), Dimitrios Dimitriadis (2), Enrico Bocchieri (2)

(1) University of Texas at Dallas, USA
(2) AT&T Labs Research, USA

In current ASR systems, the presence of competing speakers greatly degrades recognition performance. This degradation becomes even more prominent in hands-free, far-field ASR applications such as "Smart-TV" systems, where reverberation and non-stationary noise pose additional challenges. Furthermore, speakers most often do not stand still while speaking. To address these issues, we propose a cascaded system that includes Time Difference of Arrival (TDOA) estimation, multi-channel Wiener filtering, non-negative matrix factorization (NMF), multi-condition training, and robust feature extraction, where each component additively improves the overall performance. The final cascaded system achieves average relative improvements in ASR word accuracy of 50% and 45% on the CHiME 2011 (non-stationary noise) and CHiME 2012 (non-stationary noise plus speaker head movement) tasks, respectively.
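As an illustration of the first stage of the cascade, the sketch below estimates the time difference of arrival between two microphone channels with a generalized cross-correlation and phase transform (GCC-PHAT) weighting. It is a minimal NumPy-only example; the function name, its parameters, and the choice of GCC-PHAT itself are assumptions for illustration, since the abstract does not specify the exact TDOA estimator used in the paper.

import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=1):
    # Illustrative GCC-PHAT TDOA estimator (not the paper's implementation).
    # Returns the estimated delay of `sig` relative to `ref`, in seconds.
    n = sig.shape[0] + ref.shape[0]

    # Cross-power spectrum with phase-transform (PHAT) weighting:
    # keep only the phase so the peak is sharp under reverberation.
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12

    # Back to the lag domain; optional interpolation refines the peak.
    cc = np.fft.irfft(R, n=interp * n)

    # Restrict the search to physically plausible lags.
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))

    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

# Usage example: a 1 ms delay between two channels sampled at 16 kHz.
fs = 16000
rng = np.random.default_rng(0)
ref = rng.standard_normal(fs)
sig = np.concatenate((np.zeros(16), ref))[:fs]   # sig lags ref by 16 samples
print(gcc_phat(sig, ref, fs, max_tau=0.005))     # approximately 0.001 s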


Bibliographic reference: Liu, Gang / Dimitriadis, Dimitrios / Bocchieri, Enrico (2013): "Robust speech enhancement techniques for ASR in non-stationary noise and dynamic environments", in INTERSPEECH-2013, 3017-3021.