CI-GNN: A Granger causality-inspired graph neural network for interpretable brain network-based psychiatric diagnosis.
Zheng, Kaizhong; Yu, Shujian; Chen, Badong.
Affiliation
  • Zheng K; National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China. Electronic address: kzzheng@stu.xjtu.edu.cn.
  • Yu S; Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Machine Learning Group, UiT - Arctic University of Norway, Tromsø, Norway. Electronic address: yusj9011@gmail.com.
  • Chen B; National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China. Electronic address: chenbd@mail.xjtu.edu.cn.
Neural Netw ; 172: 106147, 2024 Apr.
Article in En | MEDLINE | ID: mdl-38306785
ABSTRACT
There is a recent trend to leverage the power of graph neural networks (GNNs) for brain-network-based psychiatric diagnosis, which, in turn, creates an urgent need for psychiatrists to fully understand the decision behavior of the GNNs used. However, most existing GNN explainers are either post hoc, in which a separate interpretive model must be created to explain a well-trained GNN, or do not consider the causal relationship between the extracted explanation and the decision, so that the explanation itself contains spurious correlations and suffers from weak faithfulness. In this work, we propose a Granger causality-inspired graph neural network (CI-GNN), a built-in interpretable model that identifies the most influential subgraph (i.e., functional connectivity within brain regions) causally related to the decision (e.g., major depressive disorder patients versus healthy controls), without training an auxiliary interpretive network. CI-GNN learns disentangled subgraph-level representations α and β that encode, respectively, the causal and non-causal aspects of the original graph under a graph variational autoencoder framework, regularized by a conditional mutual information (CMI) constraint. We theoretically justify the validity of the CMI regularization in capturing the causal relationship. We also empirically evaluate the performance of CI-GNN against three baseline GNNs and four state-of-the-art GNN explainers on synthetic data and three large-scale brain disease datasets. We observe that CI-GNN achieves the best performance across a wide range of metrics and provides more reliable and concise explanations that are supported by clinical evidence. The source code and implementation details of CI-GNN are freely available at the GitHub repository (https://github.com/ZKZ-Brain/CI-GNN/).
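The abstract's core intuition follows Granger causality: a variable is considered causal for a target if its past improves prediction of the target beyond what the target's own past provides. A minimal sketch of that prediction-based test (illustrative only; the variable names, model setup, and data are hypothetical and not from the paper, which applies the idea at the subgraph level):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # y depends on its own past and on the past of x, so x "Granger-causes" y
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def residual_var(target, predictors):
    """Ordinary least-squares fit; return the variance of the residuals."""
    X = np.column_stack(predictors + [np.ones(len(target))])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# Restricted model: predict y[t] from y[t-1] alone
v_restricted = residual_var(y[1:], [y[:-1]])
# Full model: also include x[t-1] as a predictor
v_full = residual_var(y[1:], [y[:-1], x[:-1]])

# x Granger-causes y iff adding x's past shrinks the prediction error
print(v_full < v_restricted)  # True
```

CI-GNN lifts this "does it improve prediction of the outcome?" criterion from time series to graphs, using the CMI constraint to ensure the causal representation α (and not β) carries the label-relevant information.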
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Depressive Disorder, Major / Mental Disorders Study type: Diagnostic_studies / Etiology_studies / Prognostic_studies Limit: Humans Language: En Publication year: 2024 Document type: Article