Multi-Modal Fusion with Gating Using Audio, Lexical and Disfluency Features for Alzheimer’s Dementia Recognition from Spontaneous Speech

Morteza Rohanian, Julian Hough, Matthew Purver


This paper is a submission to the Alzheimer’s Dementia Recognition through Spontaneous Speech (ADReSS) challenge, which aims to develop methods for the automated prediction of the severity of Alzheimer’s Disease from speech data. We focus on acoustic and natural language features for detecting cognitive impairment in spontaneous speech, in the context of Alzheimer’s Disease diagnosis and Mini-Mental State Examination (MMSE) score prediction. We propose a model that obtains unimodal decisions from separate LSTMs, one for each modality (text and audio), and then combines them using a gating mechanism for the final prediction. We focus on sequential modelling of text and audio and investigate whether the disfluencies present in individuals’ speech relate to the extent of their cognitive impairment. The proposed classification and regression schemes obtain very promising results on both the development and test sets, suggesting that Alzheimer’s Disease can be detected successfully through sequence modelling of speech data from medical sessions.
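The gated fusion described above can be illustrated with a minimal NumPy sketch. Note that the variable names, dimensions, and weights below are purely hypothetical stand-ins: the paper's learned parameters and exact gating formulation are not reproduced here; this shows only the general pattern of combining two unimodal representations with a learned sigmoid gate.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: random stand-ins for the outputs of the two
# unimodal LSTMs (in practice these would be learned representations).
rng = np.random.default_rng(0)
d = 4                            # assumed size of each modality's output
h_audio = rng.normal(size=d)     # stand-in for the audio LSTM output
h_text = rng.normal(size=d)      # stand-in for the lexical/disfluency LSTM output

# Gate computed from both modalities; W_g and b_g would be learned.
W_g = rng.normal(size=(d, 2 * d))
b_g = np.zeros(d)
g = sigmoid(W_g @ np.concatenate([h_audio, h_text]) + b_g)

# Element-wise gated combination: the gate decides, per dimension,
# how much each modality contributes to the fused representation.
fused = g * h_audio + (1.0 - g) * h_text
```

The final classification or regression head would then operate on `fused`; the key design point is that the gate lets the model weight each modality's contribution dynamically per input.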


 DOI: 10.21437/Interspeech.2020-2721

Cite as: Rohanian, M., Hough, J., Purver, M. (2020) Multi-Modal Fusion with Gating Using Audio, Lexical and Disfluency Features for Alzheimer’s Dementia Recognition from Spontaneous Speech. Proc. Interspeech 2020, 2187-2191, DOI: 10.21437/Interspeech.2020-2721.


@inproceedings{Rohanian2020,
  author={Morteza Rohanian and Julian Hough and Matthew Purver},
  title={{Multi-Modal Fusion with Gating Using Audio, Lexical and Disfluency Features for Alzheimer’s Dementia Recognition from Spontaneous Speech}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2187--2191},
  doi={10.21437/Interspeech.2020-2721},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2721}
}