SERIL: Noise Adaptive Speech Enhancement Using Regularization-Based Incremental Learning

Chi-Chang Lee, Yu-Chen Lin, Hsuan-Tien Lin, Hsin-Min Wang, Yu Tsao


Numerous noise adaptation techniques have been proposed to fine-tune deep-learning-based speech enhancement (SE) models for mismatched noise environments. Nevertheless, adapting to a new environment may cause catastrophic forgetting of previously learned environments. Catastrophic forgetting degrades SE performance on real-world embedded devices, which often revisit earlier noise environments, and the limited storage of such devices makes it impractical to keep all pre-trained models or earlier training data. In this paper, we propose a regularization-based incremental learning SE (SERIL) strategy that complements existing noise adaptation strategies without requiring additional storage. With a regularization constraint, the parameters are updated toward the new noise environment while retaining knowledge of the previous noise environments. The experimental results show that, when faced with a new noise domain, the SERIL model outperforms the unadapted SE model. Meanwhile, compared with the conventional fine-tuning-based adaptation technique, the SERIL model reduces forgetting of previous noise environments by 52%. These results verify that SERIL can effectively adapt to new noise environments while overcoming catastrophic forgetting, making it a favorable choice for real-world SE applications in which the noise environment changes frequently.
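To make the idea of a "regularization constraint that retains knowledge of previous noise environments" concrete, the following is a minimal sketch of how a regularization-based incremental adaptation step could be implemented. It uses an EWC-style quadratic penalty with a diagonal (squared-gradient) importance estimate; the model, data loaders, MSE enhancement loss, and the weighting factor lam are illustrative assumptions, not the exact SERIL formulation described in the paper.

import torch
import torch.nn.functional as F

def estimate_importance(model, old_loader, loss_fn):
    """Approximate per-parameter importance on the old noise environment
    (diagonal Fisher-style: mean squared gradient of the enhancement loss)."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for noisy, clean in old_loader:
        model.zero_grad()
        loss_fn(model(noisy), clean).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach() ** 2
    for n in importance:
        importance[n] /= max(len(old_loader), 1)
    return importance

def adapt(model, new_loader, importance, lam=100.0, lr=1e-4, epochs=5):
    """Fine-tune on the new noise environment while penalizing drift on
    parameters that were important for the previous environment."""
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for noisy, clean in new_loader:
            se_loss = F.mse_loss(model(noisy), clean)        # loss on the new noise domain
            reg = sum((importance[n] * (p - anchor[n]) ** 2).sum()
                      for n, p in model.named_parameters())  # forgetting penalty
            loss = se_loss + lam * reg
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

In this sketch, plain fine-tuning corresponds to lam = 0; increasing lam trades adaptation speed on the new noise type for retention of performance on previously seen noise types.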


 DOI: 10.21437/Interspeech.2020-2213

Cite as: Lee, C.-C., Lin, Y.-C., Lin, H.-T., Wang, H.-M., Tsao, Y. (2020) SERIL: Noise Adaptive Speech Enhancement Using Regularization-Based Incremental Learning. Proc. Interspeech 2020, 2432-2436, DOI: 10.21437/Interspeech.2020-2213.


@inproceedings{Lee2020,
  author={Chi-Chang Lee and Yu-Chen Lin and Hsuan-Tien Lin and Hsin-Min Wang and Yu Tsao},
  title={{SERIL: Noise Adaptive Speech Enhancement Using Regularization-Based Incremental Learning}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2432--2436},
  doi={10.21437/Interspeech.2020-2213},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2213}
}