Integrating Multimodal and Longitudinal Neuroimaging Data with Multi-Source Network Representation Learning.
Zhang, Wen; Braden, B Blair; Miranda, Gustavo; Shu, Kai; Wang, Suhang; Liu, Huan; Wang, Yalin.
Affiliation
  • Zhang W; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, P.O. Box 878809, Tempe, AZ, 85287, USA.
  • Braden BB; College of Health Solutions, Arizona State University, Tempe, AZ, USA.
  • Miranda G; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, P.O. Box 878809, Tempe, AZ, 85287, USA.
  • Shu K; Department of Computer Science, Illinois Institute of Technology, 10 W. 31st Street Room 226D, Chicago, IL, 60616, USA.
  • Wang S; College of Information Sciences and Technology, Penn State University, E397 Westgate Building, University Park, PA, 16802, USA.
  • Liu H; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, P.O. Box 878809, Tempe, AZ, 85287, USA.
  • Wang Y; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, P.O. Box 878809, Tempe, AZ, 85287, USA. ylwang@asu.edu.
Neuroinformatics; 20(2): 301-316, 2022 Apr.
Article in En | MEDLINE | ID: mdl-33978926
ABSTRACT
Uncovering the complex network of the brain is of great interest to the field of neuroimaging. By mining these rich datasets, scientists seek to unveil the fundamental biological mechanisms of the human brain. However, the neuroimaging data needed to construct brain networks is generally costly to collect, so extracting useful information from a limited sample of brain networks is demanding. Two current trends in neuroimaging data collection can be exploited to gain more information: 1) multimodal data and 2) longitudinal data. These two types of data have been shown to provide complementary information. Nonetheless, it is challenging to learn brain network representations that simultaneously capture network properties from multimodal as well as longitudinal datasets. Here we propose a general fusion framework for multi-source learning of brain networks: multimodal brain network fusion with longitudinal coupling (MMLC). Our framework considers three layers of information: cross-sectional similarity, multimodal coupling, and longitudinal consistency. Specifically, we jointly factorize multimodal networks and construct a rotation-based constraint to couple network variance across time. We also adopt the consensus factorization as the group-consistent pattern. Using two publicly available brain imaging datasets, we demonstrate that MMLC may better predict psychometric scores than several state-of-the-art brain network representation learning algorithms. Additionally, the significant brain regions it discovers are consistent with the previous literature. By integrating longitudinal and multimodal neuroimaging data, our new approach may boost statistical power and shed new light on neuroimaging network biomarkers for future psychometric prediction research.
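The abstract only outlines the method, so the sketch below is a toy illustration of the three ingredients it names: joint factorization of multimodal networks, a rotation-based longitudinal constraint, and a consensus factor. Every concrete choice here is an assumption, not the authors' published formulation: the symmetric-NMF multiplicative updates, the orthogonal-Procrustes rotation, the 0.9/0.1 mixing weight, and all variable names stand in for the actual MMLC objective and optimization, which this record does not give.

```python
# Toy sketch of multimodal network fusion with longitudinal coupling,
# loosely following the abstract's description of MMLC. All modeling
# choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def symmetric_nmf_step(A, H, eps=1e-9):
    """One multiplicative update for symmetric NMF, A ~= H @ H.T."""
    numer = A @ H
    denom = H @ (H.T @ H) + eps
    return H * np.sqrt(numer / denom)

def procrustes_rotation(H_from, H_to):
    """Orthogonal rotation R minimizing ||H_from @ R - H_to||_F."""
    U, _, Vt = np.linalg.svd(H_from.T @ H_to)
    return U @ Vt

# Toy data: 2 modalities x 3 timepoints of 60-node symmetric networks.
n_nodes, rank, n_time, n_modal = 60, 5, 3, 2
nets = {}
for m in range(n_modal):
    for t in range(n_time):
        A = np.abs(rng.standard_normal((n_nodes, n_nodes)))
        nets[m, t] = (A + A.T) / 2  # symmetrize

# One shared node embedding H[t] per timepoint (multimodal coupling:
# both modalities at time t are factorized against the same H[t]).
H = [np.abs(rng.standard_normal((n_nodes, rank))) for _ in range(n_time)]

for _ in range(200):
    for t in range(n_time):
        for m in range(n_modal):
            H[t] = symmetric_nmf_step(nets[m, t], H[t])
    # Longitudinal consistency: rotate H[t] toward H[t-1] and softly
    # pull it to the rotated target (the 0.1 weight is arbitrary).
    for t in range(1, n_time):
        R = procrustes_rotation(H[t], H[t - 1])
        H[t] = np.abs(0.9 * H[t] + 0.1 * H[t] @ R)

# Consensus factorization: the group-consistent pattern across time.
H_consensus = np.mean(H, axis=0)
print("consensus embedding shape:", H_consensus.shape)
```

The rotation step mirrors the abstract's "rotation-based constraint to couple network variance across time": an orthogonal map aligns consecutive embeddings so that longitudinal change is measured up to rotation, while the averaged consensus factor plays the role of the group-consistent pattern used for downstream prediction.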

Full text: 1 Database: MEDLINE Main subject: Algorithms / Neuroimaging Study type: Observational_studies / Prevalence_studies / Prognostic_studies / Risk_factors_studies Limit: Humans Language: En Publication year: 2022 Document type: Article
