Computationally Efficient and Versatile Framework for Joint Optimization of Blind Speech Separation and Dereverberation

Tomohiro Nakatani, Rintaro Ikeshita, Keisuke Kinoshita, Hiroshi Sawada, Shoko Araki


This paper proposes new blind signal processing techniques for optimizing a multi-input multi-output (MIMO) convolutional beamformer (CBF) in a computationally efficient way to simultaneously perform dereverberation and source separation. For effective CBF optimization, a conventional technique factorizes it into a multiple-target weighted prediction error (WPE) based dereverberation filter and a separation matrix. However, this technique requires calculating a huge spatio-temporal covariance matrix that reflects the statistics of all the sources, which makes the computational cost very high. For computationally efficient optimization, this paper introduces two techniques: one that decomposes the huge covariance matrix into smaller ones for the individual sources, and another that decomposes the CBF into sub-filters, each estimating an individual source. Both techniques substantially reduce the size of the covariance matrices that must be calculated, allowing us to greatly reduce the computational cost without loss of optimality.
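To make the computational bottleneck concrete, the sketch below implements conventional multichannel WPE dereverberation for a single frequency bin; it is an illustrative baseline, not the paper's joint CBF optimization, and the function name and parameters are hypothetical. Note the weighted spatio-temporal covariance `R` of size (taps × mics)², whose construction and inversion dominate the cost that the paper's per-source decompositions are designed to shrink.

```python
import numpy as np

def wpe_one_bin(Y, taps=5, delay=2, iters=3, eps=1e-6):
    """Multichannel WPE dereverberation for one STFT frequency bin.

    Y: complex array of shape (mics, frames).
    Returns the dereverberated signal X with the same shape.
    """
    M, T = Y.shape
    X = Y.copy()
    # Stack delayed past frames: column t of Ytilde holds taps*M past samples,
    # starting `delay` frames back to avoid cancelling the direct sound.
    Ytilde = np.zeros((taps * M, T), dtype=Y.dtype)
    for k in range(taps):
        shift = delay + k
        Ytilde[k * M:(k + 1) * M, shift:] = Y[:, :T - shift]
    for _ in range(iters):
        # Time-varying source power estimate, averaged over microphones.
        lam = np.maximum(np.mean(np.abs(X) ** 2, axis=0), eps)
        # Power-weighted spatio-temporal covariance (the "huge" matrix:
        # its size grows with taps * mics) and cross-correlation.
        R = (Ytilde / lam) @ Ytilde.conj().T
        P = (Ytilde / lam) @ Y.conj().T
        # Multichannel linear-prediction filter; solve instead of inverting.
        G = np.linalg.solve(R + eps * np.eye(taps * M), P)
        # Subtract the predicted late reverberation from the observation.
        X = Y - G.conj().T @ Ytilde
    return X
```

In a full system this runs per frequency bin, so the O((taps·mics)³) solve is paid in every bin and every iteration; decomposing the covariance per source, as the paper proposes, replaces this single large system with several smaller ones.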


DOI: 10.21437/Interspeech.2020-2138

Cite as: Nakatani, T., Ikeshita, R., Kinoshita, K., Sawada, H., Araki, S. (2020) Computationally Efficient and Versatile Framework for Joint Optimization of Blind Speech Separation and Dereverberation. Proc. Interspeech 2020, 91-95, DOI: 10.21437/Interspeech.2020-2138.


@inproceedings{Nakatani2020,
  author={Tomohiro Nakatani and Rintaro Ikeshita and Keisuke Kinoshita and Hiroshi Sawada and Shoko Araki},
  title={{Computationally Efficient and Versatile Framework for Joint Optimization of Blind Speech Separation and Dereverberation}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={91--95},
  doi={10.21437/Interspeech.2020-2138},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2138}
}