Multimodal attention-based deep learning for Alzheimer's disease diagnosis.
Golovanevsky, Michal; Eickhoff, Carsten; Singh, Ritambhara.
Affiliations
  • Golovanevsky M; Department of Computer Science, Brown University, Providence, Rhode Island, USA.
  • Eickhoff C; Department of Computer Science, Brown University, Providence, Rhode Island, USA.
  • Singh R; Center for Biomedical Informatics, Brown University, Providence, Rhode Island, USA.
J Am Med Inform Assoc; 29(12): 2014-2022, 2022 Nov 14.
Article en En | MEDLINE | ID: mdl-36149257
ABSTRACT

OBJECTIVE:

Alzheimer's disease (AD) is the most common neurodegenerative disorder with one of the most complex pathogeneses, making effective and clinically actionable decision support difficult. The objective of this study was to develop a novel multimodal deep learning framework to aid medical professionals in AD diagnosis.

MATERIALS AND METHODS:

We present a Multimodal Alzheimer's Disease Diagnosis framework (MADDi) to accurately detect the presence of AD and mild cognitive impairment (MCI) from imaging, genetic, and clinical data. MADDi is novel in that we use cross-modal attention, which captures interactions between modalities, a method not previously explored in this domain. We perform multi-class classification, a challenging task considering the strong similarities between MCI and AD. We compare with previous state-of-the-art models, evaluate the importance of attention, and examine the contribution of each modality to the model's performance.
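To make the cross-modal attention idea concrete, the following is a minimal sketch in PyTorch of one modality's embedding attending to another's (for example, imaging queries over clinical keys and values). The class name, dimensions, and the use of nn.MultiheadAttention are illustrative assumptions and are not taken from the MADDi implementation.

```python
# Hypothetical sketch of cross-modal attention between two modalities.
# Dimensions, names, and module choices are illustrative assumptions,
# not the authors' MADDi architecture.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Queries come from modality A; keys and values come from modality B,
    so A's representation is updated with information from B."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=dim, num_heads=num_heads, batch_first=True
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a: (batch, seq_a, dim) queries; b: (batch, seq_b, dim) keys/values
        out, _ = self.attn(query=a, key=b, value=b)
        return out


if __name__ == "__main__":
    batch, dim = 8, 64
    imaging = torch.randn(batch, 1, dim)   # one embedding vector per modality
    clinical = torch.randn(batch, 1, dim)
    xattn = CrossModalAttention(dim)
    fused = xattn(imaging, clinical)       # imaging attends to clinical features
    print(fused.shape)                     # torch.Size([8, 1, 64])
```

In a three-modality setting such as imaging, genetic, and clinical data, a block like this could be applied to each ordered pair of modalities and combined with per-modality self-attention before a classification head; that pairing scheme is an assumption here, not a description of the published model.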

RESULTS:

MADDi classifies MCI, AD, and controls with 96.88% accuracy on a held-out test set. When examining the contribution of different attention schemes, we found that the combination of cross-modal attention with self-attention performed best, while the model with no attention layers performed worst, with a 7.9% difference in F1-scores.

DISCUSSION:

Our experiments underlined the importance of structured clinical data to help machine learning models contextualize and interpret the remaining modalities. Extensive ablation studies showed that any multimodal mixture of input features without access to structured clinical information suffered marked performance losses.

CONCLUSION:

This study demonstrates the merit of combining multiple input modalities via cross-modal attention to deliver highly accurate AD diagnostic decision support.

Full text: 1 Database: MEDLINE Main subject: Alzheimer Disease / Cognitive Dysfunction / Deep Learning Study type: Diagnostic_studies / Prognostic_studies / Qualitative_research Limits: Humans Language: En Journal: J Am Med Inform Assoc Journal subject: MEDICAL INFORMATICS Year: 2022 Document type: Article Country of affiliation: United States