On Loss Functions and Recurrency Training for GAN-Based Speech Enhancement Systems

Zhuohuang Zhang, Chengyun Deng, Yi Shen, Donald S. Williamson, Yongtao Sha, Yi Zhang, Hui Song, Xiangang Li


Recent work has shown that it is feasible to use generative adversarial networks (GANs) for speech enhancement; however, these approaches have not been compared to state-of-the-art (SOTA) non-GAN-based approaches. Additionally, many loss functions have been proposed for GAN-based approaches, but they have not been adequately compared. In this study, we propose novel convolutional recurrent GAN (CRGAN) architectures for speech enhancement. Multiple loss functions are adopted to enable direct comparisons to other GAN-based systems. The benefits of including recurrent layers are also explored. Our results show that the proposed CRGAN model outperforms SOTA GAN-based models that use the same loss functions, and it outperforms other non-GAN-based systems, indicating the benefits of using a GAN for speech enhancement. Overall, the CRGAN model that combines an objective-metric loss function with the mean squared error (MSE) loss provides the best performance over comparison approaches across many evaluation metrics.
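The best-performing configuration described above combines an objective-metric loss with an MSE term. As a rough illustration only (not the paper's actual formulation), such a combined generator loss is often written as a weighted sum of a term that pushes a predicted quality score toward its maximum and a spectral MSE term; the function names, the weighting parameter `lam`, and the target score below are all hypothetical:

```python
import numpy as np

def mse_loss(enhanced, clean):
    """Mean squared error between enhanced and clean spectra (or waveforms)."""
    return np.mean((enhanced - clean) ** 2)

def combined_loss(enhanced, clean, predicted_score, target_score=1.0, lam=0.5):
    """Hypothetical combined loss: a metric term plus lam-weighted MSE.

    predicted_score would come from a learned metric estimator (e.g., a
    discriminator trained to approximate a normalized quality score);
    the metric term penalizes the gap to the best achievable score.
    """
    metric_term = (predicted_score - target_score) ** 2
    return metric_term + lam * mse_loss(enhanced, clean)

# When the enhanced signal matches the clean one and the predicted
# score reaches the target, the combined loss is zero.
loss = combined_loss(np.zeros(8), np.zeros(8), predicted_score=1.0)
```

This is a generic sketch of the metric-plus-MSE idea under stated assumptions, not the specific loss used in the paper.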


DOI: 10.21437/Interspeech.2020-1169

Cite as: Zhang, Z., Deng, C., Shen, Y., Williamson, D.S., Sha, Y., Zhang, Y., Song, H., Li, X. (2020) On Loss Functions and Recurrency Training for GAN-Based Speech Enhancement Systems. Proc. Interspeech 2020, 3266-3270, DOI: 10.21437/Interspeech.2020-1169.


@inproceedings{Zhang2020,
  author={Zhuohuang Zhang and Chengyun Deng and Yi Shen and Donald S. Williamson and Yongtao Sha and Yi Zhang and Hui Song and Xiangang Li},
  title={{On Loss Functions and Recurrency Training for GAN-Based Speech Enhancement Systems}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3266--3270},
  doi={10.21437/Interspeech.2020-1169},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1169}
}