Separating Varying Numbers of Sources with Auxiliary Autoencoding Loss

Yi Luo, Nima Mesgarani


Many recent source separation systems are designed to separate a fixed number of sources from a mixture. In cases where the source activation patterns are unknown, such systems have to either adjust the number of outputs or identify invalid outputs among the valid ones. Iterative separation methods have gained much attention in the community as they can flexibly decide the number of outputs; however, (1) they typically rely on long-term information to determine the stopping time for the iterations, which makes them hard to operate in a causal setting, and (2) they lack a "fault tolerance" mechanism when the estimated number of sources differs from the actual number. In this paper, we propose a simple training method, auxiliary autoencoding permutation invariant training (A2PIT), to alleviate these two issues. A2PIT assumes a fixed number of outputs and uses an auxiliary autoencoding loss to force the invalid outputs to be copies of the input mixture, and it detects invalid outputs in a fully unsupervised way during the inference phase. Experimental results show that A2PIT is able to improve the separation performance across various numbers of speakers and effectively detect the number of speakers in a mixture.
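The core idea in the abstract — pad the target set with copies of the mixture so that a fixed-output model can handle fewer active sources — can be sketched as a permutation-invariant loss. The sketch below is an illustration, not the paper's implementation: the function name `a2pit_loss` is hypothetical, and a plain MSE is used in place of the SI-SDR-style objectives typically used in separation systems, purely to keep the example self-contained.

```python
import itertools
import numpy as np

def a2pit_loss(estimates, sources, mixture):
    """Illustrative permutation-invariant loss with autoencoding targets.

    estimates: (N, T) array, the model's fixed N output channels.
    sources:   (C, T) array of ground-truth sources, C <= N.
    mixture:   (T,)   input mixture waveform.

    The N - C "invalid" outputs are trained to reconstruct the mixture
    itself (the auxiliary autoencoding targets); the loss is the best
    match over all output-to-target permutations, as in standard PIT.
    MSE stands in here for the SI-SDR-based objectives used in practice.
    """
    n_out = estimates.shape[0]
    n_src = sources.shape[0]
    # Pad the target set with mixture copies for the invalid outputs.
    pads = np.tile(mixture, (n_out - n_src, 1))
    targets = np.concatenate([sources, pads], axis=0)  # (N, T)
    # Permutation-invariant training: keep the best assignment.
    return min(
        np.mean((estimates[list(p)] - targets) ** 2)
        for p in itertools.permutations(range(n_out))
    )
```

At inference time, the same construction suggests an unsupervised validity check: an output that is (near-)identical to the input mixture can be flagged as invalid, so the number of remaining outputs estimates the number of active sources.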


DOI: 10.21437/Interspeech.2020-0034

Cite as: Luo, Y., Mesgarani, N. (2020) Separating Varying Numbers of Sources with Auxiliary Autoencoding Loss. Proc. Interspeech 2020, 2622-2626, DOI: 10.21437/Interspeech.2020-0034.


@inproceedings{Luo2020,
  author={Yi Luo and Nima Mesgarani},
  title={{Separating Varying Numbers of Sources with Auxiliary Autoencoding Loss}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2622--2626},
  doi={10.21437/Interspeech.2020-0034},
  url={http://dx.doi.org/10.21437/Interspeech.2020-0034}
}