Chromatic fusion: generative multimodal neuroimaging data fusion provides multi-informed insights into schizophrenia.
Geenjaar, Eloy P T; Lewis, Noah L; Fedorov, Alex; Wu, Lei; Ford, Judith M; Preda, Adrian; Plis, Sergey M; Calhoun, Vince D.
Affiliation
  • Geenjaar EPT; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
  • Lewis NL; Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA.
  • Fedorov A; Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA.
  • Wu L; School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
  • Ford JM; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
  • Preda A; Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA.
  • Plis SM; Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA.
  • Calhoun VD; San Francisco Veterans Affairs Medical Center, San Francisco, CA, USA.
medRxiv; 2023 May 26.
Article in English | MEDLINE | ID: mdl-37292973
This work proposes a novel generative multimodal approach that jointly analyzes multimodal data while linking the multimodal information to colors. By linking colors to private and shared information from the modalities, we introduce chromatic fusion, a framework for intuitively interpreting multimodal data. We test the framework on structural, functional, and diffusion modality pairs. Within this framework, we use a multimodal variational autoencoder to learn separate latent subspaces: a private space for each modality and a shared space between both modalities. These subspaces are then used to cluster subjects, and the clusters are colored based on their distance from the variational prior, yielding meta-chromatic patterns (MCPs). Each subspace corresponds to a different color: red is the private space of the first modality, green is the shared space, and blue is the private space of the second modality. We further analyze the most schizophrenia-enriched MCPs for each modality pair and find that distinct schizophrenia subgroups are captured by the schizophrenia-enriched MCPs of different modality pairs, emphasizing schizophrenia's heterogeneity. For the FA-sFNC, sMRI-ICA, and sMRI-sFNC MCPs, we generally find decreased fractional anisotropy in the corpus callosum and decreased spatial ICA map and voxel-based morphometry strength in the superior frontal lobe for schizophrenia patients. To further highlight the importance of the shared space between modalities, we perform a robustness analysis of the latent dimensions in the shared space across folds. These robust latent dimensions are subsequently correlated with schizophrenia, revealing that, for each modality pair, multiple shared latent dimensions correlate strongly with schizophrenia. In particular, for the FA-sFNC and sMRI-sFNC shared latent dimensions, we respectively observe a reduction in the modularity of functional connectivity and a decrease in visual-sensorimotor connectivity in schizophrenia patients. The reduction in modularity is coupled with increased fractional anisotropy in the dorsal left cerebellum; the reduction in visual-sensorimotor connectivity is coupled with a general decrease in voxel-based morphometry, but an increase in the dorsal cerebellum. Since the modalities are trained jointly, we can also use the shared space to reconstruct one modality from the other. We show that cross-reconstruction is possible with our network and generally performs much better than relying on the variational prior alone. In sum, we introduce a powerful new multimodal neuroimaging framework designed to provide a rich and intuitive understanding of the data, one that we hope challenges the reader to think differently about how modalities interact.
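The abstract does not specify the architecture, so the following PyTorch sketch only illustrates the general private/shared-subspace idea: each modality's encoder emits a private Gaussian posterior plus a shared one, the shared posteriors are fused (here with a product-of-experts, a common multimodal-VAE choice that may differ from the paper's), and each decoder reconstructs its modality from its private code concatenated with the shared code. All class names, dimensions, and the Gaussian-likelihood (MSE) reconstruction term are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)


def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)


class Encoder(nn.Module):
    """Maps one modality to private and shared Gaussian posterior parameters."""

    def __init__(self, in_dim, private_dim, shared_dim, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({
            "private_mu": nn.Linear(hidden, private_dim),
            "private_logvar": nn.Linear(hidden, private_dim),
            "shared_mu": nn.Linear(hidden, shared_dim),
            "shared_logvar": nn.Linear(hidden, shared_dim),
        })

    def forward(self, x):
        h = self.backbone(x)
        return {name: head(h) for name, head in self.heads.items()}


class TwoModalityVAE(nn.Module):
    """One private subspace per modality plus one shared subspace."""

    def __init__(self, dim_a, dim_b, private_dim=16, shared_dim=16, hidden=256):
        super().__init__()
        self.enc_a = Encoder(dim_a, private_dim, shared_dim, hidden)
        self.enc_b = Encoder(dim_b, private_dim, shared_dim, hidden)
        z_dim = private_dim + shared_dim
        self.dec_a = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim_b))

    @staticmethod
    def fuse_shared(qa, qb):
        # Product-of-experts fusion of the two shared posteriors with the
        # N(0, I) prior expert (precision 1); one common multimodal-VAE choice.
        prec_a = qa["shared_logvar"].neg().exp()
        prec_b = qb["shared_logvar"].neg().exp()
        prec = prec_a + prec_b + 1.0
        mu = (qa["shared_mu"] * prec_a + qb["shared_mu"] * prec_b) / prec
        return mu, (1.0 / prec).log()

    def forward(self, x_a, x_b):
        qa, qb = self.enc_a(x_a), self.enc_b(x_b)
        mu_s, logvar_s = self.fuse_shared(qa, qb)
        z_s = reparameterize(mu_s, logvar_s)
        z_a = reparameterize(qa["private_mu"], qa["private_logvar"])
        z_b = reparameterize(qb["private_mu"], qb["private_logvar"])
        # Each decoder sees its own private code plus the shared code.
        recon_a = self.dec_a(torch.cat([z_a, z_s], dim=-1))
        recon_b = self.dec_b(torch.cat([z_b, z_s], dim=-1))
        kl = (kl_to_standard_normal(mu_s, logvar_s)
              + kl_to_standard_normal(qa["private_mu"], qa["private_logvar"])
              + kl_to_standard_normal(qb["private_mu"], qb["private_logvar"]))
        recon = (F.mse_loss(recon_a, x_a, reduction="none").sum(-1)
                 + F.mse_loss(recon_b, x_b, reduction="none").sum(-1))
        return (recon + kl).mean(), (z_a, z_s, z_b)

Training the two modalities jointly in this way is what later makes the shared space usable for clustering, diagnosis correlations, and cross-reconstruction.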
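The chromatic mapping itself can be sketched as one RGB channel per subspace, with intensity given by a normalized distance of the subject's posterior from the variational prior. The abstract specifies only the red/green/blue-to-subspace assignment; the Euclidean distance measure and min-max normalization below are assumptions.

import numpy as np


def meta_chromatic_pattern(z_private_a, z_shared, z_private_b):
    """Map per-subject subspace activity to RGB triples (illustrative).

    Each argument is an (n_subjects, dim) array of latent posterior means.
    Red encodes the first modality's private space, green the shared space,
    and blue the second modality's private space.
    """
    def channel(z):
        dist = np.linalg.norm(z, axis=1)       # distance from the N(0, I) prior mean
        lo, hi = dist.min(), dist.max()
        return (dist - lo) / (hi - lo + 1e-8)  # min-max normalize to [0, 1]

    return np.stack([channel(z_private_a), channel(z_shared),
                     channel(z_private_b)], axis=1)  # (n_subjects, 3)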
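The step that correlates robust shared latent dimensions with diagnosis could look like the following, assuming per-subject latent means and a binary diagnosis label; point-biserial correlation is a standard choice for a continuous variable against a binary label, though the paper's exact statistic is not stated in the abstract.

from scipy.stats import pointbiserialr


def correlate_latents_with_diagnosis(z_shared, diagnosis):
    """Correlate each shared latent dimension with a binary diagnosis label.

    z_shared: (n_subjects, shared_dim) array of latent means; diagnosis:
    (n_subjects,) array of 0/1 labels. Returns one (r, p) pair per dimension.
    """
    return [pointbiserialr(diagnosis, z_shared[:, k])
            for k in range(z_shared.shape[1])]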
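Cross-reconstruction follows from the shared space: encode the observed modality, fuse its shared posterior with the prior expert, and sample the missing modality's private code from the prior. A sketch, continuing the hypothetical TwoModalityVAE above:

@torch.no_grad()
def cross_reconstruct_b_from_a(model, x_a, private_dim=16):
    """Reconstruct modality B given only modality A (illustrative).

    The shared code comes from A's encoder fused with the prior expert; B's
    private code is unobserved, so it is sampled from the prior. This should
    outperform decoding from the prior alone whenever the shared space
    carries cross-modal information.
    """
    qa = model.enc_a(x_a)
    prec = qa["shared_logvar"].neg().exp() + 1.0   # A's expert + N(0, I) prior
    mu_s = qa["shared_mu"] * qa["shared_logvar"].neg().exp() / prec
    z_s = reparameterize(mu_s, (1.0 / prec).log())
    z_b = torch.randn(x_a.shape[0], private_dim)   # prior sample for B's private code
    return model.dec_b(torch.cat([z_b, z_s], dim=-1))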

Full text: 1 Collections: 01-international Database: MEDLINE Language: English Journal: medRxiv Year of publication: 2023 Document type: Article Country of affiliation: United States
