The INESC-ID Multi-Modal System for the ADReSS 2020 Challenge

Anna Pompili, Thomas Rolland, Alberto Abad


This paper describes a multi-modal approach to the automatic detection of Alzheimer's disease, proposed in the context of the INESC-ID Human Language Technology Laboratory's participation in the ADReSS 2020 challenge. Our classification framework takes advantage of both acoustic and textual feature embeddings, which are extracted independently and later combined. Speech signals are encoded into acoustic features using DNN speaker embeddings extracted from pre-trained models. For textual input, contextual embedding vectors are first extracted using an English BERT model and then used either to directly compute sentence embeddings or to feed a bidirectional LSTM-RNN with attention. Finally, an SVM classifier with a linear kernel is used for the individual evaluation of the three systems. Our best system, based on the combination of linguistic and acoustic information, attained a classification accuracy of 81.25%. Results show the importance of linguistic features in the classification of Alzheimer's disease: they outperform the acoustic ones in terms of accuracy. Early-stage feature fusion did not provide additional improvements, confirming that in this case the discriminative information conveyed by speech is smoothed out by the linguistic data.
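The back-end described in the abstract, fixed-dimensional embeddings fed to a linear-kernel SVM, can be sketched as follows. This is a minimal illustration only: the random vectors below are stand-ins for the paper's actual BERT sentence embeddings and x-vector-style speaker embeddings, and the class sizes, dimensionality, and mean shift are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
DIM = 768  # hidden size of BERT-base; the paper's true feature dimension may differ

# Placeholder embeddings for two classes (AD vs. healthy controls),
# given a small artificial mean shift so the toy problem is learnable.
X_ad = rng.normal(0.2, 1.0, size=(40, DIM))
X_cc = rng.normal(-0.2, 1.0, size=(40, DIM))
X = np.vstack([X_ad, X_cc])
y = np.array([1] * 40 + [0] * 40)  # 1 = AD, 0 = control

# Linear-kernel SVM classifier, as named in the abstract.
clf = SVC(kernel="linear").fit(X, y)
acc = clf.score(X, y)  # training accuracy on the toy data
```

In practice each system (sentence embeddings, attention BiLSTM outputs, acoustic speaker embeddings) would supply its own feature matrix `X`, trained and evaluated separately before any fusion.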


 DOI: 10.21437/Interspeech.2020-2833

Cite as: Pompili, A., Rolland, T., Abad, A. (2020) The INESC-ID Multi-Modal System for the ADReSS 2020 Challenge. Proc. Interspeech 2020, 2202-2206, DOI: 10.21437/Interspeech.2020-2833.


@inproceedings{Pompili2020,
  author={Anna Pompili and Thomas Rolland and Alberto Abad},
  title={{The INESC-ID Multi-Modal System for the ADReSS 2020 Challenge}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2202--2206},
  doi={10.21437/Interspeech.2020-2833},
  url={http://dx.doi.org/10.21437/Interspeech.2020-2833}
}