Neural Speech Separation Using Spatially Distributed Microphones

Dongmei Wang, Zhuo Chen, Takuya Yoshioka


This paper proposes a neural network based speech separation method using spatially distributed microphones. Unlike in traditional microphone array settings, neither the number of microphones nor their spatial arrangement is known in advance, which precludes the use of conventional multi-channel speech separation neural networks that assume a fixed-size input. To overcome this limitation, a novel network architecture is proposed that interleaves inter-channel processing layers and temporal processing layers. The inter-channel processing layers apply a self-attention mechanism along the channel dimension to exploit the information obtained with a varying number of microphones. The temporal processing layers are based on a bidirectional long short-term memory (BLSTM) model and are applied to each channel independently. By stacking these two kinds of layers alternately, the proposed network leverages information across both time and space. The network estimates time-frequency (TF) masks for each speaker, which are then used to produce enhanced speech signals via either TF masking or beamforming. Speech recognition experiments show that the proposed method significantly outperforms baseline multi-channel speech separation systems.
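To make the interleaved architecture concrete, the following is a minimal PyTorch sketch of the core idea: a self-attention layer that operates along the channel (microphone) axis, so the model accepts any number of channels, alternated with a BLSTM that runs over time on each channel independently, followed by a per-speaker TF-mask head. This is an illustrative reconstruction, not the authors' implementation; the feature size, hidden dimension, number of blocks, and mask head are placeholder choices.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Self-attention across the channel axis; works for any number of microphones."""
    def __init__(self, d):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)

    def forward(self, x):              # x: (channels, frames, d)
        y = x.transpose(0, 1)          # (frames, channels, d): channels become the sequence
        y, _ = self.attn(y, y, y)      # attend over channels at every time frame
        return y.transpose(0, 1)       # back to (channels, frames, d)

class TemporalBLSTM(nn.Module):
    """BLSTM over time, applied to each channel independently (channels act as batch)."""
    def __init__(self, d):
        super().__init__()
        self.blstm = nn.LSTM(d, d // 2, bidirectional=True, batch_first=True)

    def forward(self, x):              # x: (channels, frames, d)
        y, _ = self.blstm(x)
        return y

class SeparationNet(nn.Module):
    """Alternates inter-channel and temporal layers, then emits per-speaker TF masks.
    n_freq, d, n_blocks, n_spk are illustrative hyperparameters, not the paper's."""
    def __init__(self, n_freq=257, d=128, n_blocks=2, n_spk=2):
        super().__init__()
        self.proj = nn.Linear(n_freq, d)
        self.blocks = nn.ModuleList(
            nn.ModuleList([ChannelAttention(d), TemporalBLSTM(d)])
            for _ in range(n_blocks))
        self.mask = nn.Linear(d, n_freq * n_spk)
        self.n_spk, self.n_freq = n_spk, n_freq

    def forward(self, mag):            # mag: (channels, frames, n_freq) STFT magnitudes
        h = self.proj(mag)
        for channel_layer, temporal_layer in self.blocks:
            h = temporal_layer(channel_layer(h))   # interleave spatial and temporal processing
        m = torch.sigmoid(self.mask(h))            # per-speaker TF masks in [0, 1]
        return m.view(h.shape[0], h.shape[1], self.n_spk, self.n_freq)
```

Because the attention is computed along the channel dimension and the BLSTM treats channels as independent batch items, the same trained weights apply whether the recording has two microphones or ten; the estimated masks can then be applied directly or used to derive beamformer statistics.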


DOI: 10.21437/Interspeech.2020-1089

Cite as: Wang, D., Chen, Z., Yoshioka, T. (2020) Neural Speech Separation Using Spatially Distributed Microphones. Proc. Interspeech 2020, 339-343, DOI: 10.21437/Interspeech.2020-1089.


@inproceedings{Wang2020,
  author={Dongmei Wang and Zhuo Chen and Takuya Yoshioka},
  title={{Neural Speech Separation Using Spatially Distributed Microphones}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={339--343},
  doi={10.21437/Interspeech.2020-1089},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1089}
}