End-to-End Far-Field Speech Recognition with Unified Dereverberation and Beamforming

Wangyou Zhang, Aswin Shanmugam Subramanian, Xuankai Chang, Shinji Watanabe, Yanmin Qian


Despite successful applications of end-to-end approaches in multi-channel speech recognition, the performance still degrades severely when the speech is corrupted by reverberation. In this paper, we integrate a dereverberation module into the end-to-end multi-channel speech recognition system and explore two different frontend architectures. First, a multi-source mask-based weighted prediction error (WPE) module is incorporated in the frontend for dereverberation. Second, another novel frontend architecture is proposed, which extends the weighted power minimization distortionless response (WPD) convolutional beamformer to perform simultaneous separation and dereverberation. We derive a new formulation from the original WPD, which can handle multi-source input, and replace the eigenvalue decomposition with a matrix inverse operation to make the back-propagation algorithm more stable. Both architectures are optimized in a fully end-to-end manner, using only the speech recognition criterion. Experiments on both the spatialized wsj1-2mix corpus and REVERB show that our proposed model outperforms the conventional methods in reverberant scenarios.
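The WPD convolutional beamformer mentioned above combines dereverberation and beamforming in a single filter: the current frame is stacked with delayed frames, and a distortionless filter is computed over the stacked observations via a matrix inverse rather than an eigenvalue decomposition. Below is a minimal NumPy sketch for a single frequency bin, not the paper's implementation: the function name, the delay/taps defaults, and the crude steering-vector approximation (first column of the mask-weighted speech covariance instead of its principal eigenvector) are all illustrative assumptions.

```python
import numpy as np

def wpd_filter(Y, mask, delay=3, taps=5, eps=1e-8):
    """Sketch of a WPD-style convolutional beamformer for one frequency bin.

    Y:    (C, T) complex STFT observations (C channels, T frames)
    mask: (T,) estimated speech mask for this bin
    Returns the enhanced (T,) single-channel output.
    """
    C, T = Y.shape

    # Stack the current frame with `taps` delayed frames -> (C*(taps+1), T).
    frames = [Y]
    for d in range(delay, delay + taps):
        shifted = np.zeros_like(Y)
        shifted[:, d:] = Y[:, : T - d]
        frames.append(shifted)
    Ybar = np.concatenate(frames, axis=0)

    # Time-varying power of the desired signal, weighted by the mask.
    power = np.maximum(np.mean(np.abs(Y) ** 2, axis=0) * mask, eps)

    # Power-normalized covariance of the stacked observations.
    R = (Ybar / power) @ Ybar.conj().T / T

    # Steering vector crudely approximated from the mask-weighted speech
    # covariance (the paper instead derives a matrix-inverse formulation
    # that avoids an explicit eigenvalue decomposition).
    Phi = (Y * mask) @ Y.conj().T / T
    v = Phi[:, 0] / (np.linalg.norm(Phi[:, 0]) + eps)
    vbar = np.concatenate([v, np.zeros(C * taps)])  # zero-padded to stacked size

    # Distortionless solution: w = R^{-1} vbar / (vbar^H R^{-1} vbar),
    # computed with a linear solve (no eigendecomposition involved).
    Rinv_v = np.linalg.solve(R + eps * np.eye(R.shape[0]), vbar)
    w = Rinv_v / (vbar.conj() @ Rinv_v)
    return w.conj() @ Ybar
```

In the end-to-end system, the mask would come from a neural network and every step here is differentiable, which is why the linear solve (rather than an eigendecomposition) makes back-propagation more stable.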


DOI: 10.21437/Interspeech.2020-2432

Cite as: Zhang, W., Subramanian, A.S., Chang, X., Watanabe, S., Qian, Y. (2020) End-to-End Far-Field Speech Recognition with Unified Dereverberation and Beamforming. Proc. Interspeech 2020, 324-328, DOI: 10.21437/Interspeech.2020-2432.


@inproceedings{Zhang2020,
  author={Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and Shinji Watanabe and Yanmin Qian},
  title={{End-to-End Far-Field Speech Recognition with Unified Dereverberation and Beamforming}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={324--328},
  doi={10.21437/Interspeech.2020-2432},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2432}
}