Unsupervised Training of Neural Mask-Based Beamforming

Lukas Drude, Jahn Heymann, Reinhold Haeb-Umbach

We present an unsupervised training approach for a neural network-based mask estimator in an acoustic beamforming application. The network is trained to maximize a likelihood criterion derived from a spatial mixture model of the observations. It is trained from scratch without requiring any parallel data consisting of degraded input and clean training targets. Thus, training can be carried out on real recordings of noisy speech rather than simulated ones. In contrast to previous work on unsupervised training of neural mask estimators, our approach avoids the need for a possibly pre-trained teacher model entirely. We demonstrate the effectiveness of our approach by speech recognition experiments on two different datasets: one mainly deteriorated by noise (CHiME 4) and one by reverberation (REVERB). The results show that the performance of the proposed system is on par with a supervised system using oracle target masks for training and with a system trained using a model-based teacher.
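The training criterion described above (maximizing the likelihood of the multichannel observations under a spatial mixture model whose class affiliations come from the network) can be sketched in code. The following is a minimal, hypothetical illustration only: it substitutes a simplified zero-mean complex-Gaussian class model for the paper's actual spatial mixture model, and all function and variable names (`spatial_mm_neg_log_likelihood`, `obs`, `masks`) are invented for this sketch, not taken from the authors' implementation.

```python
import numpy as np

def spatial_mm_neg_log_likelihood(obs, masks, eps=1e-10):
    """Unsupervised loss sketch: negative log-likelihood of multichannel
    STFT observations under a K-class spatial mixture model, where the
    per-class priors are the masks predicted by the neural network.

    obs:   (F, T, D) complex observations (F freq bins, T frames, D mics)
    masks: (K, F, T) class masks/priors from the network (sum to 1 over K)

    Note: uses a zero-mean complex Gaussian per class as a simplified
    stand-in for the spatial model used in the paper.
    """
    F, T, D = obs.shape
    K = masks.shape[0]
    nll = 0.0
    for f in range(F):
        Y = obs[f]                              # (T, D)
        log_pdfs = []
        for k in range(K):
            w = masks[k, f]                     # (T,)
            # mask-weighted spatial covariance estimate for class k
            R = (w[:, None, None] * (Y[:, :, None] * Y[:, None, :].conj())).sum(0)
            R /= w.sum() + eps
            R += eps * np.eye(D)                # regularize for invertibility
            Rinv = np.linalg.inv(R)
            _, logdet = np.linalg.slogdet(R)
            # zero-mean complex Gaussian log-density (up to a constant)
            quad = np.einsum('td,de,te->t', Y.conj(), Rinv, Y).real
            log_pdfs.append(-logdet - quad)
        log_pdfs = np.stack(log_pdfs)           # (K, T)
        log_mix = np.log(masks[:, f] + eps) + log_pdfs
        # log-sum-exp over classes, accumulated over time and frequency
        m = log_mix.max(axis=0)
        nll -= (m + np.log(np.exp(log_mix - m).sum(axis=0))).sum()
    return nll
```

In a full system, this scalar would serve as the loss that is backpropagated into the mask estimator, so no clean targets or parallel data are needed; only the (real or simulated) noisy multichannel recordings themselves enter the computation.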

DOI: 10.21437/Interspeech.2019-2549

Cite as: Drude, L., Heymann, J., Haeb-Umbach, R. (2019) Unsupervised Training of Neural Mask-Based Beamforming. Proc. Interspeech 2019, 1253-1257, DOI: 10.21437/Interspeech.2019-2549.

@inproceedings{drude19_interspeech,
  author={Lukas Drude and Jahn Heymann and Reinhold Haeb-Umbach},
  title={{Unsupervised Training of Neural Mask-Based Beamforming}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={1253--1257},
  doi={10.21437/Interspeech.2019-2549}
}