Alzheimer's disease diagnosis from multi-modal data via feature inductive learning and dual multilevel graph neural network.
Lei, Baiying; Li, Yafeng; Fu, Wanyi; Yang, Peng; Chen, Shaobin; Wang, Tianfu; Xiao, Xiaohua; Niu, Tianye; Fu, Yu; Wang, Shuqiang; Han, Hongbin; Qin, Jing.
Affiliation
  • Lei B; National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
  • Li Y; National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
  • Fu W; Department of Electronic Engineering, Tsinghua University, Beijing Key Laboratory of Magnetic Resonance Imaging Devices and Technology, China.
  • Yang P; National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
  • Chen S; National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
  • Wang T; National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
  • Xiao X; The First Affiliated Hospital of Shenzhen University, Shenzhen University Medical School, Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, 530031, China.
  • Niu T; Shenzhen Bay Laboratory, Shenzhen, 518067, China.
  • Fu Y; Department of Neurology, Peking University Third Hospital, No. 49, North Garden Rd., Haidian District, Beijing, 100191, China. Electronic address: lilac_fu@126.com.
  • Wang S; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China. Electronic address: sq.wang@siat.ac.cn.
  • Han H; Institute of Medical Technology, Peking University Health Science Center, Department of Radiology, Peking University Third Hospital, Beijing Key Laboratory of Magnetic Resonance Imaging Devices and Technology, Beijing, 100191, China; The Second Hospital of Dalian Medical University, Research and deve
  • Qin J; Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China.
Med Image Anal; 97: 103213, 2024 May 28.
Article in En | MEDLINE | ID: mdl-38850625
ABSTRACT
Multi-modal data can provide complementary information about Alzheimer's disease (AD) and its progression from different perspectives. Such information is closely related to the diagnosis, prevention, and treatment of AD, and hence it is necessary and critical to study AD through multi-modal data. Existing learning methods, however, usually ignore the influence of feature heterogeneity and directly fuse features only at a late stage. Furthermore, most of these methods focus on either local or global fusion features alone, neglecting the complementariness of features at different levels and thus failing to fully exploit the information embedded in multi-modal data. To overcome these shortcomings, we propose a novel framework for AD diagnosis that fuses gene, imaging, protein, and clinical data. Our framework learns feature representations in a common feature space across modalities through a feature induction learning (FIL) module, thereby alleviating the impact of feature heterogeneity. Furthermore, local and global salient multi-modal feature interaction information at different levels is extracted through a novel dual multilevel graph neural network (DMGNN). We extensively validate the proposed method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and experimental results demonstrate that our method consistently outperforms other state-of-the-art multi-modal fusion methods. The code is publicly available at https://github.com/xiankantingqianxue/MIA-code.git.
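To make the two ideas in the abstract concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released code; all module and variable names are illustrative): each modality is first projected into a shared feature space to reduce heterogeneity, and the projected modality nodes are then fused by one attention-weighted message-passing step on a fully connected modality graph, with both a local (per-node) and a global (pooled) readout feeding the classifier.

```python
# Hypothetical sketch of shared-space projection + graph-based multi-modal fusion.
# Dimensions, layer sizes, and the single message-passing step are assumptions
# for illustration only; they do not reproduce the FIL/DMGNN modules of the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedSpaceProjector(nn.Module):
    """Map each modality's raw feature vector into a common d-dimensional space."""
    def __init__(self, modality_dims, d_shared=64):
        super().__init__()
        self.proj = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, d_shared), nn.ReLU())
            for dim in modality_dims
        )

    def forward(self, feats):  # feats: list of (batch, dim_m) tensors, one per modality
        # Stack projected modalities as nodes of a "modality graph": (batch, M, d)
        return torch.stack([p(x) for p, x in zip(self.proj, feats)], dim=1)


class ModalityGraphFusion(nn.Module):
    """One attention-weighted message-passing step across modality nodes,
    followed by local (per-node max) and global (mean) readouts."""
    def __init__(self, d_shared=64, n_classes=2):
        super().__init__()
        self.att = nn.Linear(2 * d_shared, 1)
        self.update = nn.Linear(d_shared, d_shared)
        self.cls = nn.Linear(2 * d_shared, n_classes)

    def forward(self, nodes):  # nodes: (batch, M, d)
        B, M, d = nodes.shape
        # Pairwise attention scores over the fully connected modality graph.
        src = nodes.unsqueeze(2).expand(B, M, M, d)
        dst = nodes.unsqueeze(1).expand(B, M, M, d)
        scores = self.att(torch.cat([src, dst], dim=-1)).squeeze(-1)  # (B, M, M)
        alpha = F.softmax(scores, dim=-1)
        messages = torch.einsum('bij,bjd->bid', alpha, self.update(nodes))
        nodes = F.relu(nodes + messages)      # residual update of modality nodes
        local = nodes.max(dim=1).values       # salient per-modality (local) signal
        global_ = nodes.mean(dim=1)           # pooled (global) signal
        return self.cls(torch.cat([local, global_], dim=-1))


if __name__ == "__main__":
    # Toy usage with made-up feature sizes for gene, imaging, protein, clinical data.
    dims = [2000, 90, 30, 10]
    x = [torch.randn(8, d) for d in dims]
    logits = ModalityGraphFusion()(SharedSpaceProjector(dims)(x))
    print(logits.shape)  # torch.Size([8, 2])
```

The design choice illustrated here is that heterogeneity is handled before fusion (per-modality projection into one space) while complementary local and global evidence is combined after message passing, mirroring the abstract's separation of the FIL and DMGNN roles.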
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Med Image Anal Journal subject: Diagnostic Imaging Year of publication: 2024 Document type: Article Country of affiliation: China
