Speaker-Utterance Dual Attention for Speaker and Utterance Verification

Tianchi Liu, Rohan Kumar Das, Maulik Madhavi, Shengmei Shen, Haizhou Li


In this paper, we study a novel technique that exploits the interaction between speaker traits and linguistic content to improve both speaker verification and utterance verification performance. We implement the idea of speaker-utterance dual attention (SUDA) in a unified neural network, where the dual attention refers to an attention mechanism serving the two tasks of speaker and utterance verification. The proposed SUDA features an attention mask mechanism that learns the interaction between the speaker and utterance information streams. This helps each task focus only on the information it requires by masking the irrelevant counterparts. Studies conducted on the RSR2015 corpus confirm that the proposed SUDA outperforms a framework without the attention mask as well as several competitive systems for both speaker and utterance verification.
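The cross-stream masking idea can be illustrated with a minimal NumPy sketch: each stream derives a soft (sigmoid) mask from the *other* stream and applies it elementwise, suppressing dimensions irrelevant to its own task. Note that the projection matrices `W_su` and `W_us`, the gating form, and the dimensionality below are illustrative assumptions for exposition, not the paper's actual SUDA architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention_mask(spk, utt, W_su, W_us):
    """Hypothetical dual attention mask: each stream gates the other.

    spk, utt : (batch, d) embeddings for the speaker and utterance streams.
    W_su, W_us : (d, d) illustrative projection matrices (assumed here).
    """
    # Utterance stream produces a soft mask over speaker features,
    # and vice versa; sigmoid keeps mask values in (0, 1).
    mask_for_spk = sigmoid(utt @ W_us)
    mask_for_utt = sigmoid(spk @ W_su)
    # Elementwise masking suppresses task-irrelevant dimensions.
    return spk * mask_for_spk, utt * mask_for_utt

d = 8
spk = rng.standard_normal((1, d))
utt = rng.standard_normal((1, d))
W_su = rng.standard_normal((d, d)) * 0.1
W_us = rng.standard_normal((d, d)) * 0.1

spk_att, utt_att = dual_attention_mask(spk, utt, W_su, W_us)
print(spk_att.shape, utt_att.shape)
```

Because the masks lie in (0, 1), the masked features can only shrink each dimension, never amplify it; in the full model such masks would be learned jointly with both verification objectives.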


DOI: 10.21437/Interspeech.2020-1818

Cite as: Liu, T., Das, R.K., Madhavi, M., Shen, S., Li, H. (2020) Speaker-Utterance Dual Attention for Speaker and Utterance Verification. Proc. Interspeech 2020, 4293-4297, DOI: 10.21437/Interspeech.2020-1818.


@inproceedings{Liu2020,
  author={Tianchi Liu and Rohan Kumar Das and Maulik Madhavi and Shengmei Shen and Haizhou Li},
  title={{Speaker-Utterance Dual Attention for Speaker and Utterance Verification}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={4293--4297},
  doi={10.21437/Interspeech.2020-1818},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1818}
}