Audio-Visual Speaker Recognition with a Cross-Modal Discriminative Network

Ruijie Tao, Rohan Kumar Das, Haizhou Li


Audio-visual speaker recognition was one of the tasks in the recent 2019 NIST speaker recognition evaluation (SRE). Studies in both neuroscience and computer science point to the fact that visual and auditory neural signals interact during cognitive processing. This motivates us to study a cross-modal network, namely the voice-face discriminative network (VFNet), which establishes a general relation between human voice and face. Experiments show that VFNet provides additional speaker discriminative information. With VFNet, we achieve a 16.54% relative reduction in equal error rate over the score-level fusion audio-visual baseline on the evaluation set of the 2019 NIST SRE.
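To make the idea concrete, below is a minimal PyTorch sketch of the two components the abstract describes: a cross-modal discriminator that scores whether a voice embedding and a face embedding belong to the same person, and a score-level fusion of the audio, visual, and cross-modal scores. This is an illustration of the general technique only, not the authors' implementation; the class name, layer sizes, and fusion weights are assumptions, and the embeddings are assumed to come from pretrained speaker and face encoders.

import torch
import torch.nn as nn

class VFNetSketch(nn.Module):
    """Illustrative voice-face discriminative network (not the paper's exact architecture).

    Given a voice embedding and a face embedding, outputs a logit
    indicating whether the two belong to the same identity.
    """

    def __init__(self, voice_dim=512, face_dim=512, hidden_dim=256):
        super().__init__()
        # Project each modality into a shared space (dimensions are illustrative).
        self.voice_proj = nn.Sequential(nn.Linear(voice_dim, hidden_dim), nn.ReLU())
        self.face_proj = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        # Binary classifier over the concatenated pair.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, voice_emb, face_emb):
        v = self.voice_proj(voice_emb)
        f = self.face_proj(face_emb)
        # Higher logit -> more likely the same person.
        return self.classifier(torch.cat([v, f], dim=-1)).squeeze(-1)


def fuse_scores(audio_score, face_score, vfnet_score, weights=(0.45, 0.45, 0.10)):
    """Score-level fusion of the three trial scores.

    The weights here are hypothetical; in practice they would be
    tuned on a development set.
    """
    wa, wf, wv = weights
    return wa * audio_score + wf * face_score + wv * vfnet_score

In a trial, the audio system scores a test utterance against an enrolled speaker, the visual system scores a test face against an enrolled face, and the cross-modal score checks voice-face consistency; the fused score is then thresholded at the operating point.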


DOI: 10.21437/Interspeech.2020-1814

Cite as: Tao, R., Das, R.K., Li, H. (2020) Audio-Visual Speaker Recognition with a Cross-Modal Discriminative Network. Proc. Interspeech 2020, 2242-2246, DOI: 10.21437/Interspeech.2020-1814.


@inproceedings{Tao2020,
  author={Ruijie Tao and Rohan Kumar Das and Haizhou Li},
  title={{Audio-Visual Speaker Recognition with a Cross-Modal Discriminative Network}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2242--2246},
  doi={10.21437/Interspeech.2020-1814},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1814}
}