Development of a deep learning network for Alzheimer's disease classification with evaluation of imaging modality and longitudinal data.
Deatsch, Alison; Perovnik, Matej; Namías, Mauro; Trost, Maja; Jeraj, Robert.
Affiliation
  • Deatsch A; University of Wisconsin-Madison; 1111 Highland Ave, Madison, WI 53705, United States of America.
  • Perovnik M; University Medical Centre Ljubljana; Zaloska cesta 2, 1000 Ljubljana, Slovenia.
  • Namías M; Fundación Centro Diagnóstico Nuclear; Av Nazca 3449, Buenos Aires C1417CVE, Argentina.
  • Trost M; University Medical Centre Ljubljana; Zaloska cesta 2, 1000 Ljubljana, Slovenia.
  • Jeraj R; University of Wisconsin-Madison; 1111 Highland Ave, Madison, WI 53705, United States of America.
Phys Med Biol; 67(19), 2022 Sep 30.
Article in En | MEDLINE | ID: mdl-36055243
Objective. Neuroimaging uncovers important information about disease in the brain. Yet in Alzheimer's disease (AD), there remains a clear clinical need for reliable tools to extract diagnoses from neuroimages. Significant work has been done to develop deep learning (DL) networks using neuroimaging for AD diagnosis, but no particular model has emerged as optimal. Due to a lack of direct comparisons and evaluations on independent data, there is no consensus on which modality is best for diagnostic models or whether longitudinal information enhances performance. The purpose of this work was (1) to develop a generalizable DL model to distinguish neuroimaging scans of AD patients from controls and (2) to evaluate the influence of imaging modality and longitudinal data on performance.

Approach. We trained a 2-class convolutional neural network (CNN) with and without a cascaded recurrent neural network (RNN). We used datasets of 772 (N_AD = 364, N_control = 408) 3D 18F-FDG PET scans and 780 (N_AD = 280, N_control = 500) T1-weighted volumetric 3D MR images (containing 131 and 144 patients with multiple timepoints, respectively) from the Alzheimer's Disease Neuroimaging Initiative, plus an independent set of 104 (N_AD = 63, N_NC = 41) 18F-FDG PET scans (one per patient) for validation.

Main results. ROC analysis showed that PET-trained models outperformed MRI-trained models, achieving a maximum AUC with the CNN + RNN model of 0.93 ± 0.08, with an accuracy of 82.5 ± 8.9%. Adding longitudinal information offered significant improvement to performance on 18F-FDG PET, but not on T1-MRI. CNN model validation with the independent 18F-FDG PET dataset achieved an AUC of 0.99. Layer-wise relevance propagation heatmaps added CNN interpretability.

Significance. The development of a high-performing tool for AD diagnosis, with direct evaluation of key influences, reveals the advantage of using 18F-FDG PET and longitudinal data over MRI and single-timepoint analysis. This has significant implications for the potential of neuroimaging in future research on AD diagnosis and in the clinical management of suspected AD patients.
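The abstract names the cascaded CNN + RNN design but does not specify its layers. Purely as an illustration of the data flow (not the authors' implementation), the sketch below stands in for the 3D CNN with a fixed linear feature map and feeds one feature vector per longitudinal timepoint into a simple Elman-style RNN, whose final hidden state yields the 2-class (control vs. AD) output. All sizes, weights, and function names here are hypothetical; the network is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the per-timepoint CNN: in the real model this would be
# a 3D CNN over a PET/MRI volume; here it is a fixed random linear map
# from a flattened 64-voxel "volume" to an 8-dim feature vector.
W_cnn = rng.standard_normal((8, 64))

def cnn_features(volume_flat):
    return np.tanh(W_cnn @ volume_flat)  # (8,) features for one scan

# Elman-style RNN cascaded after the CNN: one hidden-state update per
# longitudinal timepoint, then a 2-class softmax readout at the end.
W_h = rng.standard_normal((8, 8))
W_x = rng.standard_normal((8, 8))
W_out = rng.standard_normal((2, 8))

def classify_patient(scans):
    """scans: list of flattened volumes, ordered by acquisition time."""
    h = np.zeros(8)
    for v in scans:
        h = np.tanh(W_h @ h + W_x @ cnn_features(v))
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()  # (P(control), P(AD)) -- untrained, illustrative

# Hypothetical patient with three longitudinal timepoints.
probs = classify_patient([rng.standard_normal(64) for _ in range(3)])
print(probs)
```

A single-timepoint model corresponds to passing a one-element list, which is one way the paper's CNN-only versus CNN + RNN comparison can be framed.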

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Alzheimer Disease / Deep Learning Limits: Humans Language: En Journal: Phys Med Biol Publication year: 2022 Document type: Article Affiliation country: United States