Results 1 - 2 of 2
1.
Hum Brain Mapp; 44(17): 5828-5845, 2023 Dec 1.
Article in English | MEDLINE | ID: mdl-37753705

ABSTRACT

This work proposes a novel generative multimodal approach to jointly analyze multimodal data while linking the multimodal information to colors. We apply our proposed framework, which disentangles multimodal data into private and shared sets of features, to pairs of structural (sMRI), functional (sFNC and ICA), and diffusion MRI data (FA maps). With our approach, we find that heterogeneity in schizophrenia is potentially a function of modality pairs. Results show that (1) schizophrenia is highly multimodal and includes changes in specific networks, (2) non-linear relationships with schizophrenia are observed when interpolating among shared latent dimensions, and (3) schizophrenia patients show a decrease in the modularity of functional connectivity and decreased visual-sensorimotor connectivity for the FA-sFNC and sMRI-sFNC modality pairs, respectively. Additionally, our results generally indicate decreased fractional anisotropy in the corpus callosum and decreased spatial ICA map and voxel-based morphometry strength in the superior frontal lobe, as found in the FA-sFNC, sMRI-FA, and sMRI-ICA modality-pair clusters. In sum, we introduce a powerful new multimodal neuroimaging framework designed to provide a rich and intuitive understanding of the data, which we hope challenges the reader to think differently about how modalities interact.
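The private/shared disentanglement described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's architecture: the input dimensions, the linear encoders, and the averaging fusion of the shared latents are all assumptions standing in for a trained multimodal variational autoencoder.

```python
# Toy sketch of a private/shared latent split for two modalities.
# All shapes and the linear "encoders" are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

D_X, D_Y = 64, 48        # input feature dims for the two modalities (illustrative)
D_PRIV, D_SHARED = 4, 4  # sizes of the private and shared subspaces (illustrative)

def encode(x, w):
    """Linear encoder returning (private, shared) latents; variances omitted for brevity."""
    h = x @ w
    return h[:, :D_PRIV], h[:, D_PRIV:D_PRIV + D_SHARED]

W_x = rng.normal(size=(D_X, D_PRIV + D_SHARED)) * 0.1
W_y = rng.normal(size=(D_Y, D_PRIV + D_SHARED)) * 0.1

x = rng.normal(size=(10, D_X))  # 10 subjects, modality 1 (e.g. FA features)
y = rng.normal(size=(10, D_Y))  # 10 subjects, modality 2 (e.g. sFNC features)

priv_x, shared_x = encode(x, W_x)
priv_y, shared_y = encode(y, W_y)
shared = 0.5 * (shared_x + shared_y)  # naive fusion of the two shared estimates

print(priv_x.shape, shared.shape, priv_y.shape)  # (10, 4) (10, 4) (10, 4)
```

In the actual framework the shared space is learned jointly across modalities, so the fused latents capture cross-modal covariation while each private space absorbs modality-specific variance.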


Subject(s)
Schizophrenia, Humans, Schizophrenia/diagnostic imaging, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Neuroimaging, Diffusion Magnetic Resonance Imaging
2.
medRxiv ; 2023 May 26.
Article in English | MEDLINE | ID: mdl-37292973

ABSTRACT

This work proposes a novel generative multimodal approach to jointly analyze multimodal data while linking the multimodal information to colors. By linking colors to private and shared information from modalities, we introduce chromatic fusion, a framework that allows for intuitively interpreting multimodal data. We test our framework on structural, functional, and diffusion modality pairs. In this framework, we use a multimodal variational autoencoder to learn separate latent subspaces: a private space for each modality, and a shared space between both modalities. These subspaces are then used to cluster subjects, and colored based on their distance from the variational prior, to obtain meta-chromatic patterns (MCPs). Each subspace corresponds to a different color: red is the private space of the first modality, green the shared space, and blue the private space of the second modality. We further analyze the most schizophrenia-enriched MCPs for each modality pair and find that distinct schizophrenia subgroups are captured by schizophrenia-enriched MCPs for different modality pairs, emphasizing schizophrenia's heterogeneity. For the FA-sFNC, sMRI-FA, and sMRI-ICA MCPs, we generally find decreased fractional anisotropy in the corpus callosum and decreased spatial ICA map and voxel-based morphometry strength in the superior frontal lobe for schizophrenia patients. To additionally highlight the importance of the shared space between modalities, we perform a robustness analysis of the latent dimensions in the shared space across folds. These robust latent dimensions are subsequently correlated with schizophrenia to reveal that, for each modality pair, multiple shared latent dimensions strongly correlate with schizophrenia. In particular, for FA-sFNC and sMRI-sFNC shared latent dimensions, we respectively observe a reduction in the modularity of the functional connectivity and a decrease in visual-sensorimotor connectivity for schizophrenia patients.
The reduction in modularity couples with increased fractional anisotropy dorsally in the left cerebellum. The reduction in visual-sensorimotor connectivity couples with a general reduction in voxel-based morphometry, but with increased dorsal cerebellum voxel-based morphometry. Since the modalities are trained jointly, we can also use the shared space to reconstruct one modality from the other. We show that cross-reconstruction is possible with our network and is generally much better than relying on the variational prior. In sum, we introduce a powerful new multimodal neuroimaging framework designed to provide a rich and intuitive understanding of the data, which we hope challenges the reader to think differently about how modalities interact.
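The color assignment behind the meta-chromatic patterns can be illustrated with random latents: each subject's distance from the N(0, I) variational prior in the three subspaces (private-1, shared, private-2) is mapped to the (R, G, B) channels. The Euclidean distance metric and the min-max rescaling here are assumptions for illustration; the paper's exact distance and normalization may differ.

```python
# Hedged sketch of chromatic-fusion coloring: prior-distances per subspace -> RGB.
# Latents are random stand-ins for trained multimodal-VAE outputs.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, d = 10, 4

z_priv1 = rng.normal(size=(n_subjects, d))   # private space, modality 1 -> red
z_shared = rng.normal(size=(n_subjects, d))  # shared space              -> green
z_priv2 = rng.normal(size=(n_subjects, d))   # private space, modality 2 -> blue

def prior_distance(z):
    """Euclidean distance of each subject's latent from the N(0, I) prior mean."""
    return np.linalg.norm(z, axis=1)

def to_unit(v):
    """Min-max rescale distances to [0, 1] so they can serve as a color channel."""
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

rgb = np.stack(
    [to_unit(prior_distance(z)) for z in (z_priv1, z_shared, z_priv2)], axis=1
)
print(rgb.shape)  # (10, 3): one RGB color per subject
```

A subject far from the prior in the shared space but close in both private spaces would appear mostly green, signaling that cross-modal (rather than modality-specific) variation distinguishes that subject.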
