Unsupervised Audio Source Separation Using Generative Priors

Vivek Narayanaswamy, Jayaraman J. Thiagarajan, Rushil Anirudh, Andreas Spanias

State-of-the-art under-determined audio source separation systems rely on supervised end-to-end training of carefully tailored neural network architectures operating in either the time or the spectral domain. However, these methods require access to expensive source-level labeled data and are specific to a given set of sources and a given mixing process, demanding complete re-training whenever those assumptions change. This strongly motivates unsupervised methods that can leverage recent advances in data-driven modeling and compensate for the lack of labeled data through meaningful priors. To this end, we propose a novel approach for audio source separation based on generative priors trained on the individual sources. Using projected gradient descent optimization, our approach searches the source-specific latent spaces simultaneously to effectively recover the constituent sources. Though the generative priors can be defined directly in the time domain, e.g. WaveGAN, we find that using spectral-domain loss functions in our optimization leads to good-quality source estimates. Our empirical studies on standard spoken-digit and instrument datasets clearly demonstrate the effectiveness of our approach over classical as well as state-of-the-art unsupervised baselines.
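The core idea — jointly optimizing the latent codes of per-source generative priors so that the sum of their outputs matches the observed mixture — can be illustrated with a deliberately simplified sketch. The code below is not the authors' implementation: it replaces the WaveGAN priors with fixed random linear "generators", uses a plain time-domain least-squares loss rather than the paper's spectral losses, and projects the latents onto an l2 ball of an assumed radius as the projection step of projected gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pretrained source generators (the paper uses WaveGANs);
# here each "generator" is a fixed random linear map from latent to signal.
T, d = 256, 16                      # signal length, latent dimension (assumed)
G1 = rng.standard_normal((T, d))    # hypothetical prior for source 1
G2 = rng.standard_normal((T, d))    # hypothetical prior for source 2

# Ground-truth latents and the observed two-source mixture x = s1 + s2
z1_true, z2_true = rng.standard_normal(d), rng.standard_normal(d)
x = G1 @ z1_true + G2 @ z2_true

def project(z, radius=10.0):
    """Projection step of PGD: pull the latent back onto an l2 ball."""
    n = np.linalg.norm(z)
    return z if n <= radius else z * (radius / n)

# Simultaneously search both latent spaces with projected gradient descent
z1, z2 = np.zeros(d), np.zeros(d)
lr = 1e-3                           # step size (assumed for this toy setup)
for _ in range(500):
    r = x - (G1 @ z1 + G2 @ z2)             # residual of the mixture estimate
    z1 = project(z1 + lr * (G1.T @ r))      # gradient step on 0.5 * ||r||^2
    z2 = project(z2 + lr * (G2.T @ r))

s1_hat, s2_hat = G1 @ z1, G2 @ z2           # recovered source estimates
loss = np.linalg.norm(x - (s1_hat + s2_hat))
```

With nonlinear deep generators the gradients would come from automatic differentiation rather than the closed-form `G.T @ r` used here, but the alternation of gradient step and projection is the same.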

DOI: 10.21437/Interspeech.2020-3115

Cite as: Narayanaswamy, V., Thiagarajan, J.J., Anirudh, R., Spanias, A. (2020) Unsupervised Audio Source Separation Using Generative Priors. Proc. Interspeech 2020, 2657-2661, DOI: 10.21437/Interspeech.2020-3115.

@inproceedings{narayanaswamy20_interspeech,
  author={Vivek Narayanaswamy and Jayaraman J. Thiagarajan and Rushil Anirudh and Andreas Spanias},
  title={{Unsupervised Audio Source Separation Using Generative Priors}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2657--2661},
  doi={10.21437/Interspeech.2020-3115}
}