Monoaural Audio Source Separation Using Variational Autoencoders

Laxmi Pandey, Anurendra Kumar, Vinay Namboodiri

We introduce a monaural audio source separation framework using a latent generative model. Traditionally, source separation has been addressed with discriminative training of deep neural networks or with non-negative matrix factorization. In this paper, we propose a principled generative approach using variational autoencoders (VAEs). A VAE performs efficient approximate Bayesian inference, which yields a continuous latent representation of the input data (spectrogram). It consists of a probabilistic encoder, which maps input data to the latent space, and a probabilistic decoder, which maps points in the latent space back to the input space. This allows us to learn a robust latent representation of sources corrupted by noise and by other sources; the latent representation is then fed to the decoder to yield the separated source. Both the encoder and decoder are implemented as multilayer perceptrons (MLPs). In contrast to prevalent techniques, we argue that the VAE is a more principled approach to source separation. Experimentally, we find that the proposed framework yields consistent improvements over baseline methods from the literature, i.e. DNNs and RNNs with different masking functions, and autoencoders. Our method outperforms the best of these methods with an improvement of ∼2 dB in the source-to-distortion ratio (SDR).
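The encoder/decoder pipeline described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the layer sizes (513 frequency bins per spectrogram frame, one hidden layer of 128 units, 20 latent units) are hypothetical, training is omitted, and only the forward pass (encode, reparameterize, decode) plus the KL regularizer of the VAE objective are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_layer(x, w, b, activation=np.tanh):
    """One fully connected layer; pass activation=None for a linear output."""
    z = x @ w + b
    return z if activation is None else activation(z)

class ToyVAE:
    """Minimal MLP encoder/decoder VAE over magnitude-spectrogram frames.

    All dimensions are illustrative assumptions, not taken from the paper.
    """

    def __init__(self, n_freq=513, n_hidden=128, n_latent=20):
        s = 0.01  # small random init for a forward-pass demo
        self.enc_w1 = rng.normal(0, s, (n_freq, n_hidden))
        self.enc_b1 = np.zeros(n_hidden)
        self.w_mu = rng.normal(0, s, (n_hidden, n_latent))
        self.b_mu = np.zeros(n_latent)
        self.w_logvar = rng.normal(0, s, (n_hidden, n_latent))
        self.b_logvar = np.zeros(n_latent)
        self.dec_w1 = rng.normal(0, s, (n_latent, n_hidden))
        self.dec_b1 = np.zeros(n_hidden)
        self.dec_w2 = rng.normal(0, s, (n_hidden, n_freq))
        self.dec_b2 = np.zeros(n_freq)

    def encode(self, x):
        """Probabilistic encoder: spectrogram frame -> q(z|x) = N(mu, diag(var))."""
        h = mlp_layer(x, self.enc_w1, self.enc_b1)
        mu = mlp_layer(h, self.w_mu, self.b_mu, activation=None)
        logvar = mlp_layer(h, self.w_logvar, self.b_logvar, activation=None)
        return mu, logvar

    def reparameterize(self, mu, logvar):
        """Sample z = mu + sigma * eps so gradients can flow through mu, sigma."""
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

    def decode(self, z):
        """Probabilistic decoder: latent code -> reconstructed source frame."""
        h = mlp_layer(z, self.dec_w1, self.dec_b1)
        # softplus keeps the reconstructed magnitude spectrogram non-negative
        return np.log1p(np.exp(mlp_layer(h, self.dec_w2, self.dec_b2,
                                         activation=None)))

    def separate(self, mixture_frames):
        """Encode mixture frames and decode an estimate of the target source."""
        mu, logvar = self.encode(mixture_frames)
        z = self.reparameterize(mu, logvar)
        return self.decode(z)

def kl_divergence(mu, logvar):
    """Per-frame KL(q(z|x) || N(0, I)), the regularizer in the VAE objective."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1)

# Four toy mixture frames of magnitude spectra
frames = np.abs(rng.standard_normal((4, 513)))
out = ToyVAE().separate(frames)
print(out.shape)  # -> (4, 513)
```

In a trained model, the encoder would have learned to project noisy mixture frames near the latent codes of the clean target source, so decoding yields the separated source; here the untrained weights only demonstrate the shapes and data flow.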

 DOI: 10.21437/Interspeech.2018-1140

Cite as: Pandey, L., Kumar, A., Namboodiri, V. (2018) Monoaural Audio Source Separation Using Variational Autoencoders. Proc. Interspeech 2018, 3489-3493, DOI: 10.21437/Interspeech.2018-1140.

@inproceedings{pandey2018monoaural,
  author={Laxmi Pandey and Anurendra Kumar and Vinay Namboodiri},
  title={Monoaural Audio Source Separation Using Variational Autoencoders},
  booktitle={Proc. Interspeech 2018},
  year={2018},
  pages={3489--3493},
  doi={10.21437/Interspeech.2018-1140}
}