Single-Channel Speech Enhancement by Subspace Affinity Minimization

Dung N. Tran, Kazuhito Koishida


In data-driven speech enhancement frameworks, learning informative representations is crucial for obtaining a high-quality estimate of the target speech. State-of-the-art speech enhancement methods based on deep neural networks (DNNs) commonly learn a single embedding from the noisy input to predict clean speech. This compressed representation inevitably contains both noise and speech information, leading to speech distortion and poor noise reduction performance. To alleviate this issue, we proposed to learn separate speech and noise embeddings from the noisy input and introduced a subspace affinity loss function to prevent information leakage between the two representations. We rigorously proved that minimizing this loss function yields maximally uncorrelated speech and noise representations, which blocks such leakage. We empirically showed that our proposed framework outperforms traditional and state-of-the-art speech enhancement methods in various unseen nonstationary noise environments. Our results suggest that learning uncorrelated speech and noise embeddings can improve noise reduction and reduce speech distortion in speech enhancement applications.
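The abstract describes the subspace affinity loss only at a high level, without giving its formula. As an illustration of the general idea, the sketch below shows one common way to penalize correlation between two embedding branches: the squared Frobenius norm of the cross-correlation between batch-normalized speech and noise embeddings. The function name, shapes, and exact formulation are assumptions for illustration and are not taken from the paper.

```python
import torch


def subspace_affinity_loss(speech_emb: torch.Tensor,
                           noise_emb: torch.Tensor) -> torch.Tensor:
    """Illustrative decorrelation penalty between two embedding batches.

    speech_emb, noise_emb: (batch, dim) tensors produced by the speech and
    noise encoder branches. This is a plausible stand-in for the paper's
    subspace affinity loss, not its exact definition.
    """
    # Center each embedding dimension across the batch.
    s = speech_emb - speech_emb.mean(dim=0, keepdim=True)
    n = noise_emb - noise_emb.mean(dim=0, keepdim=True)

    # Normalize each dimension so the penalty is scale-invariant.
    s = s / (s.norm(dim=0, keepdim=True) + 1e-8)
    n = n / (n.norm(dim=0, keepdim=True) + 1e-8)

    # Cross-correlation between the two embedding spaces: (dim, dim).
    cross_corr = s.t() @ n

    # Squared Frobenius norm: zero iff the two representations are
    # uncorrelated across the batch.
    return (cross_corr ** 2).sum()
```

In a training loop, such a term would typically be added to the enhancement objective (e.g., a spectral reconstruction loss) with a weighting factor, encouraging the two encoder branches to carry non-overlapping information.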


 DOI: 10.21437/Interspeech.2020-2982

Cite as: Tran, D.N., Koishida, K. (2020) Single-Channel Speech Enhancement by Subspace Affinity Minimization. Proc. Interspeech 2020, 2447-2451, DOI: 10.21437/Interspeech.2020-2982.


@inproceedings{Tran2020,
  author={Dung N. Tran and Kazuhito Koishida},
  title={{Single-Channel Speech Enhancement by Subspace Affinity Minimization}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2447--2451},
  doi={10.21437/Interspeech.2020-2982},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2982}
}