Spot the Conversation: Speaker Diarisation in the Wild

Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman


The goal of this paper is speaker diarisation of videos collected ‘in the wild’.

We make three key contributions. First, we propose an automatic audio-visual diarisation method for YouTube videos. Our method combines audio-visual active speaker detection with speaker verification based on self-enrolled speaker models. Second, we integrate our method into a semi-automatic dataset creation pipeline which significantly reduces the number of hours required to annotate videos with diarisation labels. Finally, we use this pipeline to create a large-scale diarisation dataset called VoxConverse, collected from ‘in the wild’ videos, which we will release publicly to the research community. Our dataset contains overlapping speech, a large and diverse speaker pool, and challenging background conditions.
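To illustrate the self-enrolment idea described above, the following is a minimal toy sketch (not the authors' actual system): segments are assigned greedily to an enrolled speaker model when a verification score exceeds a threshold, and a new speaker model is enrolled otherwise. The embeddings, the cosine-similarity verification score, and the threshold value are all illustrative assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def diarise(embeddings, threshold=0.8):
    """Toy greedy diarisation: verify each segment embedding against
    the self-enrolled speaker models; enrol a new speaker when no
    model scores above the threshold. Returns a speaker label per
    segment. (Illustrative sketch only, not the paper's method.)"""
    models = []   # running mean embedding per enrolled speaker
    counts = []   # number of segments assigned to each speaker
    labels = []
    for emb in embeddings:
        sims = [cosine(emb, m) for m in models]
        if sims and max(sims) >= threshold:
            # Verification succeeded: assign to the best-matching
            # speaker and update that speaker's running mean model.
            k = sims.index(max(sims))
            counts[k] += 1
            models[k] = [(m * (counts[k] - 1) + e) / counts[k]
                         for m, e in zip(models[k], emb)]
        else:
            # No model verified: self-enrol a new speaker.
            models.append(list(emb))
            counts.append(1)
            k = len(models) - 1
        labels.append(k)
    return labels

# Toy 2-D "embeddings": two similar segments and one distinct one.
print(diarise([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]))  # → [0, 0, 1]
```

In practice the embeddings would come from a speaker-embedding network applied to speech segments that an audio-visual active speaker detector has already attributed to an on-screen speaker; the toy clustering above stands in for the verification step.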


DOI: 10.21437/Interspeech.2020-2337

Cite as: Chung, J.S., Huh, J., Nagrani, A., Afouras, T., Zisserman, A. (2020) Spot the Conversation: Speaker Diarisation in the Wild. Proc. Interspeech 2020, 299-303, DOI: 10.21437/Interspeech.2020-2337.


@inproceedings{Chung2020,
  author={Joon Son Chung and Jaesung Huh and Arsha Nagrani and Triantafyllos Afouras and Andrew Zisserman},
  title={{Spot the Conversation: Speaker Diarisation in the Wild}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={299--303},
  doi={10.21437/Interspeech.2020-2337},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2337}
}