Gated Recurrent Fusion of Spatial and Spectral Features for Multi-Channel Speech Separation with Deep Embedding Representations

Cunhang Fan, Jianhua Tao, Bin Liu, Jiangyan Yi, Zhengqi Wen


Multi-channel deep clustering (MDC) has achieved good performance for speech separation. However, MDC treats the spatial features only as additional input and does not fuse them with the spectral features effectively, so it is difficult to learn the mutual relationship between the two. Moreover, the training objective of MDC is defined on the embedding vectors rather than on the real separated sources, which may degrade separation performance. In this work, we treat the spatial and spectral features as two different modalities and propose a gated recurrent fusion (GRF) method that adaptively selects and fuses the relevant information from them via gate and memory modules. In addition, to address the training-objective problem of MDC, the real separated sources are used as the training targets. Specifically, we apply a deep clustering network to extract deep embedding features; instead of using unsupervised K-means clustering to estimate binary masks, a second, supervised network learns soft masks from these deep embedding features. Our experiments are conducted on a spatialized, reverberant version of the WSJ0-2mix dataset. Experimental results show that the proposed method outperforms the MDC baseline and even the oracle ideal binary mask (IBM).
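The core of the fusion idea is a learned gate that decides, per time-frequency unit, how much spectral versus spatial information to keep. Below is a minimal NumPy sketch of that gating step only; it omits the recurrent memory module of the full GRF, and all weights, dimensions, and feature choices here are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
T, D = 100, 129  # frames x feature dimension (illustrative sizes)

# Two modalities per time-frequency frame:
spectral = rng.standard_normal((T, D))  # e.g. log-magnitude spectra
spatial = rng.standard_normal((T, D))   # e.g. inter-channel phase differences

# Hypothetical projection; in the paper these weights would be learned.
W = rng.standard_normal((2 * D, D)) * 0.1
b = np.zeros(D)

# Gate in (0, 1) computed from both modalities jointly.
gate = sigmoid(np.concatenate([spectral, spatial], axis=-1) @ W + b)

# Convex combination: the gate adaptively weights each modality.
fused = gate * spectral + (1.0 - gate) * spatial

print(fused.shape)  # (100, 129)
```

The fused representation keeps the original feature dimensionality, so it can feed directly into the downstream embedding network in place of a plain concatenation.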


 DOI: 10.21437/Interspeech.2020-1548

Cite as: Fan, C., Tao, J., Liu, B., Yi, J., Wen, Z. (2020) Gated Recurrent Fusion of Spatial and Spectral Features for Multi-Channel Speech Separation with Deep Embedding Representations. Proc. Interspeech 2020, 3321-3325, DOI: 10.21437/Interspeech.2020-1548.


@inproceedings{Fan2020,
  author={Cunhang Fan and Jianhua Tao and Bin Liu and Jiangyan Yi and Zhengqi Wen},
  title={{Gated Recurrent Fusion of Spatial and Spectral Features for Multi-Channel Speech Separation with Deep Embedding Representations}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3321--3325},
  doi={10.21437/Interspeech.2020-1548},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1548}
}