Exploiting Multi-Modal Features from Pre-Trained Networks for Alzheimer’s Dementia Recognition

Junghyun Koo, Jie Hwan Lee, Jaewoo Pyo, Yujin Jo, Kyogu Lee

Collecting and accessing a large amount of medical data is very time-consuming and laborious, not only because it is difficult to find specific patients but also because the confidentiality of a patient’s medical records must be resolved. On the other hand, there are deep learning models, trained on easily collectible, large-scale datasets such as YouTube or Wikipedia, that offer useful representations. It can therefore be very advantageous to utilize the features from these pre-trained networks when handling a small amount of data at hand. In this work, we exploit various multi-modal features extracted from pre-trained networks to recognize Alzheimer’s Dementia using a neural network, with a small dataset provided by the ADReSS Challenge at INTERSPEECH 2020. The challenge consists of discerning patients suspected of Alzheimer’s Dementia from provided acoustic and textual data. With the multi-modal features, we modify a Convolutional Recurrent Neural Network-based structure that performs classification and regression tasks simultaneously and is capable of processing conversations of variable length. Our test results surpass the baseline accuracy by 18.75%, and our validation result for the regression task shows the possibility of classifying 4 classes of cognitive impairment with an accuracy of 78.70%.
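The joint classification/regression setup over variable-length conversations described above can be sketched as follows. This is a minimal, hedged illustration in NumPy, not the authors' actual implementation: the layer sizes, weight names, and the use of the final RNN hidden state as a fixed-size summary are all assumptions made for the example.

```python
# Illustrative sketch of a CRNN-style model with two output heads
# (AD/non-AD classification and an MMSE-like regression score).
# All names and dimensions are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution along time: x is (T, C_in), w is (k, C_in, C_out)."""
    k, c_in, c_out = w.shape
    T = x.shape[0] - k + 1
    out = np.zeros((T, c_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU

def rnn_last_state(x, w_xh, w_hh):
    """Simple tanh RNN; the final hidden state maps a sequence of any
    length to a fixed-size vector, which is what lets the model handle
    conversations of variable length."""
    h = np.zeros(w_hh.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ w_xh + h @ w_hh)
    return h

def crnn_forward(x, params):
    """Shared CRNN trunk feeding two heads trained simultaneously."""
    h = rnn_last_state(conv1d(x, params["conv"]),
                       params["w_xh"], params["w_hh"])
    logits = h @ params["w_cls"]        # classification head (2 classes)
    score = float(h @ params["w_reg"])  # regression head (scalar score)
    return logits, score

feat, hidden = 8, 16
params = {
    "conv": rng.normal(size=(3, feat, feat)) * 0.1,
    "w_xh": rng.normal(size=(feat, hidden)) * 0.1,
    "w_hh": rng.normal(size=(hidden, hidden)) * 0.1,
    "w_cls": rng.normal(size=(hidden, 2)) * 0.1,
    "w_reg": rng.normal(size=(hidden, 1)) * 0.1,
}

# Two conversations of different lengths yield outputs of identical shape.
for T in (20, 57):
    logits, score = crnn_forward(rng.normal(size=(T, feat)), params)
    print(logits.shape, score)
```

Sharing one trunk between the two heads is one plausible way to realize "classification and regression simultaneously"; in practice both losses would be combined and backpropagated through the shared layers.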

 DOI: 10.21437/Interspeech.2020-3153

Cite as: Koo, J., Lee, J.H., Pyo, J., Jo, Y., Lee, K. (2020) Exploiting Multi-Modal Features from Pre-Trained Networks for Alzheimer’s Dementia Recognition. Proc. Interspeech 2020, 2217-2221, DOI: 10.21437/Interspeech.2020-3153.

@inproceedings{koo20_interspeech,
  author={Junghyun Koo and Jie Hwan Lee and Jaewoo Pyo and Yujin Jo and Kyogu Lee},
  title={{Exploiting Multi-Modal Features from Pre-Trained Networks for Alzheimer’s Dementia Recognition}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={2217--2221},
  doi={10.21437/Interspeech.2020-3153}
}