Speech Separation Based on Multi-Stage Elaborated Dual-Path Deep BiLSTM with Auxiliary Identity Loss

Ziqiang Shi, Rujie Liu, Jiqing Han


Deep neural networks with dual-path bi-directional long short-term memory (BiLSTM) blocks have proven very effective in sequence modeling, especially in speech separation. This work investigates how to extend the dual-path BiLSTM into a new state-of-the-art approach, called TasTas, for multi-talker monaural speech separation (a.k.a. the cocktail party problem). TasTas introduces two simple but effective improvements to boost the performance of dual-path BiLSTM-based networks: an iterative multi-stage refinement scheme, and a speaker-identity consistency loss between the separated speech and the original speech that corrects imperfectly separated speech. TasTas takes the mixed utterance of two speakers and maps it to two separated utterances, each containing only one speaker’s voice. Our experiments on the notable WSJ0-2mix benchmark corpus achieve a 20.55 dB SDR improvement, a 20.35 dB SI-SDR improvement, a PESQ of 3.69, and an ESTOI of 94.86%, demonstrating that the proposed network yields a substantial performance improvement on the speaker separation task. We have open-sourced our reimplementation of DPRNN-TasNet, on which TasTas is built; we believe the results in this paper can be reproduced with ease.
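As a minimal sketch of the identity-consistency idea, the penalty can be written as a cosine distance between speaker embeddings of the separated and clean utterances. This assumes, hypothetically, a pretrained speaker-embedding network that maps an utterance to a fixed-length vector; the paper's exact loss formulation may differ.

```python
import numpy as np

def identity_consistency_loss(emb_sep: np.ndarray, emb_ref: np.ndarray) -> float:
    """One minus cosine similarity between the speaker embedding of a
    separated utterance and that of the corresponding clean utterance.
    (Illustrative formulation; not taken verbatim from the paper.)"""
    cos = float(np.dot(emb_sep, emb_ref) /
                (np.linalg.norm(emb_sep) * np.linalg.norm(emb_ref) + 1e-8))
    return 1.0 - cos

# Toy embeddings: a well-separated utterance should embed close to the clean
# speaker, yielding a small loss; leakage from the other talker yields a
# larger one.
rng = np.random.default_rng(0)
clean = rng.standard_normal(128)
good = clean + 0.01 * rng.standard_normal(128)  # near-perfect separation
bad = rng.standard_normal(128)                  # unrelated embedding

print(identity_consistency_loss(good, clean) < identity_consistency_loss(bad, clean))
```

In training, such a term would be added to the usual SI-SDR separation objective so that the refined output stays consistent with the target speaker's identity.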


 DOI: 10.21437/Interspeech.2020-1537

Cite as: Shi, Z., Liu, R., Han, J. (2020) Speech Separation Based on Multi-Stage Elaborated Dual-Path Deep BiLSTM with Auxiliary Identity Loss. Proc. Interspeech 2020, 2682-2686, DOI: 10.21437/Interspeech.2020-1537.


@inproceedings{Shi2020,
  author={Ziqiang Shi and Rujie Liu and Jiqing Han},
  title={{Speech Separation Based on Multi-Stage Elaborated Dual-Path Deep BiLSTM with Auxiliary Identity Loss}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2682--2686},
  doi={10.21437/Interspeech.2020-1537},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1537}
}