Speaker Attribution with Voice Profiles by Graph-Based Semi-Supervised Learning

Jixuan Wang, Xiong Xiao, Jian Wu, Ranjani Ramamurthy, Frank Rudzicz, Michael Brudno


Speaker attribution is required in many real-world applications, such as meeting transcription, where a speaker identity is assigned to each utterance according to speaker voice profiles. In this paper, we propose to solve the speaker attribution problem using graph-based semi-supervised learning methods. A graph of speech segments is built for each session, in which segments from voice profiles are represented by labeled nodes and segments from test utterances by unlabeled nodes. Edge weights are computed from the similarities between pretrained speaker embeddings of the speech segments. Speaker attribution then becomes a semi-supervised learning problem on graphs, to which two graph-based methods are applied: label propagation (LP) and graph neural networks (GNNs). The proposed approaches are able to exploit the structural information of the graph to improve speaker attribution performance. Experimental results on real meeting data show that the graph-based approaches reduce speaker attribution error by up to 68% compared to a baseline speaker identification approach that processes each utterance independently.
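The label propagation setup described above can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding dimensions, the cosine-similarity edge weighting, and the `label_propagation` helper are illustrative assumptions, following the classic iterative propagate-and-clamp scheme on a similarity graph where profile segments are labeled nodes and test segments are unlabeled.

```python
import numpy as np

def label_propagation(embeddings, labels, n_classes, n_iter=50):
    """Iterative label propagation on a similarity graph (sketch).

    embeddings: (n, d) speaker embeddings, one row per speech segment.
    labels: (n,) int array; labels[i] >= 0 marks a labeled profile node
            with its speaker index, labels[i] == -1 marks a test node.
    Returns a predicted speaker index for every node.
    """
    # Edge weights: cosine similarity between embeddings, shifted to [0, 1].
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = (e @ e.T + 1.0) / 2.0
    np.fill_diagonal(w, 0.0)
    # Row-normalize to get a transition matrix over the graph.
    p = w / w.sum(axis=1, keepdims=True)

    labeled = labels >= 0
    f = np.zeros((len(embeddings), n_classes))
    f[labeled, labels[labeled]] = 1.0
    for _ in range(n_iter):
        # Propagate soft labels to neighbors along weighted edges.
        f = p @ f
        # Clamp profile nodes to their known speaker identities.
        f[labeled] = 0.0
        f[labeled, labels[labeled]] = 1.0
    return f.argmax(axis=1)

# Toy usage: two well-separated speaker clusters, one labeled profile
# segment per speaker; the remaining segments are attributed by propagation.
rng = np.random.default_rng(0)
spk_a = rng.normal([5.0, 0.0, 0.0], 0.1, size=(4, 3))
spk_b = rng.normal([0.0, 5.0, 0.0], 0.1, size=(4, 3))
emb = np.vstack([spk_a, spk_b])
labels = np.array([0, -1, -1, -1, 1, -1, -1, -1])
pred = label_propagation(emb, labels, n_classes=2)
```

Unlike the baseline that scores each utterance against the profiles independently, every test node here also receives label mass from other test nodes, so the graph structure itself contributes to the decision.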


DOI: 10.21437/Interspeech.2020-1950

Cite as: Wang, J., Xiao, X., Wu, J., Ramamurthy, R., Rudzicz, F., Brudno, M. (2020) Speaker Attribution with Voice Profiles by Graph-Based Semi-Supervised Learning. Proc. Interspeech 2020, 289-293, DOI: 10.21437/Interspeech.2020-1950.


@inproceedings{Wang2020,
  author={Jixuan Wang and Xiong Xiao and Jian Wu and Ranjani Ramamurthy and Frank Rudzicz and Michael Brudno},
  title={{Speaker Attribution with Voice Profiles by Graph-Based Semi-Supervised Learning}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={289--293},
  doi={10.21437/Interspeech.2020-1950},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1950}
}