Improved Speech Enhancement Using a Time-Domain GAN with Mask Learning

Ju Lin, Sufeng Niu, Adriaan J. van Wijngaarden, Jerome L. McClendon, Melissa C. Smith, Kuang-Ching Wang


Speech enhancement is an essential component of robust automatic speech recognition (ASR) systems. Most current speech enhancement methods are based on neural networks that use feature mapping or mask learning. This paper proposes a novel speech enhancement method that integrates time-domain feature mapping and mask learning into a unified framework using a Generative Adversarial Network (GAN). The proposed framework processes the received waveform and decouples the speech and noise signals, which are fed into two 1-D convolution layers that implement the short-time Fourier transform (STFT) and map the waveforms to spectrograms in the complex domain. These speech and noise spectrograms are then used to compute the speech mask loss. The proposed method is evaluated on the TIMIT data set under both seen and unseen signal-to-noise ratio conditions. It is shown that the proposed method outperforms Deep Neural Network (DNN) based speech enhancement and the Speech Enhancement Generative Adversarial Network (SEGAN).
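The abstract mentions 1-D convolution layers whose fixed kernels compute the STFT, together with a mask loss on the resulting spectrograms. The sketch below is a minimal NumPy illustration of those two ideas, not the authors' implementation: the kernel construction, hop size, and the ideal-ratio-mask L1 loss (`irm_mask_loss`) are common stand-in choices, and the paper's exact mask-loss definition may differ.

```python
import numpy as np

def stft_conv_kernels(n_fft=512, window=None):
    """Fixed 1-D convolution kernels whose responses are the real and
    imaginary parts of the STFT (one kernel per frequency bin)."""
    if window is None:
        window = np.hanning(n_fft)  # assumed analysis window
    n_bins = n_fft // 2 + 1
    t = np.arange(n_fft)
    k = np.arange(n_bins)[:, None]
    # Windowed cosine/sine basis: exp(-2*pi*i*k*t/N) split into Re/Im.
    real_k = np.cos(-2.0 * np.pi * k * t / n_fft) * window
    imag_k = np.sin(-2.0 * np.pi * k * t / n_fft) * window
    return real_k, imag_k

def conv_stft(x, n_fft=512, hop=128):
    """Apply the fixed kernels as a strided 1-D convolution over x,
    returning (real, imag) spectrograms of shape (n_bins, n_frames)."""
    real_k, imag_k = stft_conv_kernels(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft]
                       for i in range(n_frames)], axis=1)
    return real_k @ frames, imag_k @ frames

def irm_mask_loss(speech_mag, noise_mag, predicted_mask):
    """L1 loss between a predicted mask and the ideal ratio mask
    computed from speech/noise magnitude spectrograms (a hypothetical
    stand-in for the paper's speech mask loss)."""
    irm = speech_mag / (speech_mag + noise_mag + 1e-8)
    return np.mean(np.abs(predicted_mask - irm))
```

Because the kernels are just a windowed DFT basis, `conv_stft` agrees frame-by-frame with `np.fft.rfft` applied to the windowed signal; in a learned model the same kernels can optionally be fine-tuned like any other convolution weights.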


DOI: 10.21437/Interspeech.2020-1946

Cite as: Lin, J., Niu, S., Wijngaarden, A.J.V., McClendon, J.L., Smith, M.C., Wang, K. (2020) Improved Speech Enhancement Using a Time-Domain GAN with Mask Learning. Proc. Interspeech 2020, 3286-3290, DOI: 10.21437/Interspeech.2020-1946.


@inproceedings{Lin2020,
  author={Ju Lin and Sufeng Niu and Adriaan J. van Wijngaarden and Jerome L. McClendon and Melissa C. Smith and Kuang-Ching Wang},
  title={{Improved Speech Enhancement Using a Time-Domain GAN with Mask Learning}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3286--3290},
  doi={10.21437/Interspeech.2020-1946},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1946}
}