MIRNet: Learning Multiple Identities Representations in Overlapped Speech

Hyewon Han, Soo-Whan Chung, Hong-Goo Kang


Many approaches can derive information about a single speaker’s identity from speech by learning to recognize consistent characteristics of acoustic parameters. However, it is challenging to determine identity information when multiple speakers are active concurrently in a given signal. In this paper, we propose a novel deep speaker representation strategy that can reliably extract multiple speaker identities from overlapped speech. We design a network that extracts a high-level embedding containing information about each speaker’s identity from a given mixture. Unlike conventional approaches that need reference acoustic features for training, our proposed algorithm only requires the speaker identity labels of the overlapped speech segments. We demonstrate the effectiveness and usefulness of our algorithm in a speaker verification task and in a speech separation system conditioned on target speaker embeddings obtained through the proposed method.
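The key supervision signal described above can be illustrated with a toy sketch: an encoder maps a two-speaker mixture to a single embedding, and a multi-label classifier head is trained only against a multi-hot vector marking which speakers are present, with no reference acoustic features. This is a minimal NumPy illustration of that training setup, not the authors' MIRNet architecture; the linear encoder, synthetic speaker prototypes, and all dimensions are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_speakers, feat_dim, emb_dim = 4, 16, 8

# Hypothetical per-speaker acoustic prototypes; a "mixture" is the noisy
# sum of two distinct speakers' prototypes (stand-in for overlapped speech).
protos = rng.normal(size=(n_speakers, feat_dim))

def make_batch(n=64):
    X, Y = [], []
    for _ in range(n):
        a, b = rng.choice(n_speakers, size=2, replace=False)
        X.append(protos[a] + protos[b] + 0.1 * rng.normal(size=feat_dim))
        y = np.zeros(n_speakers)
        y[a] = y[b] = 1.0          # multi-hot speaker-identity label only
        Y.append(y)
    return np.stack(X), np.stack(Y)

W_emb = 0.1 * rng.normal(size=(feat_dim, emb_dim))    # encoder: mixture -> embedding
W_cls = 0.1 * rng.normal(size=(emb_dim, n_speakers))  # multi-label classifier head

def forward(X):
    E = X @ W_emb                                 # mixture embedding
    P = 1.0 / (1.0 + np.exp(-(E @ W_cls)))        # per-speaker presence probability
    return E, P

def bce(P, Y):
    eps = 1e-9
    return -np.mean(Y * np.log(P + eps) + (1 - Y) * np.log(1 - P + eps))

X, Y = make_batch()
_, P = forward(X)
loss_before = bce(P, Y)

lr = 0.2
for _ in range(300):
    E, P = forward(X)
    G = (P - Y) / len(X)                          # gradient of BCE w.r.t. logits
    grad_cls = E.T @ G
    grad_emb = X.T @ (G @ W_cls.T)
    W_cls -= lr * grad_cls
    W_emb -= lr * grad_emb

_, P = forward(X)
loss_after = bce(P, Y)
print(f"BCE before: {loss_before:.3f}, after: {loss_after:.3f}")
```

The point of the sketch is that the per-speaker presence loss alone drives the encoder to produce embeddings that separate the identities in the mixture; the paper's contribution is a network and training strategy that make such embeddings reliable for verification and for conditioning a separation system.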


DOI: 10.21437/Interspeech.2020-2076

Cite as: Han, H., Chung, S.-W., Kang, H.-G. (2020) MIRNet: Learning Multiple Identities Representations in Overlapped Speech. Proc. Interspeech 2020, 4303-4307, DOI: 10.21437/Interspeech.2020-2076.


@inproceedings{Han2020,
  author={Hyewon Han and Soo-Whan Chung and Hong-Goo Kang},
  title={{MIRNet: Learning Multiple Identities Representations in Overlapped Speech}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4303--4307},
  doi={10.21437/Interspeech.2020-2076},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2076}
}