Results 1 - 7 of 7
1.
Neuroimage; 285: 120485, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38110045

ABSTRACT

In recent years, deep learning approaches have gained significant attention in predicting brain disorders using neuroimaging data. However, conventional methods often rely on single-modality data and supervised models, which provide only a limited perspective of the intricacies of the highly complex brain. Moreover, the scarcity of accurate diagnostic labels in clinical settings hinders the applicability of supervised models. To address these limitations, we propose a novel self-supervised framework for extracting multiple representations from multimodal neuroimaging data to enhance group inferences and enable analysis without resorting to labeled data during pre-training. Our approach leverages Deep InfoMax (DIM), a self-supervised methodology renowned for its efficacy in learning representations by estimating mutual information without the need for explicit labels. While DIM has shown promise in predicting brain disorders from single-modality MRI data, its potential for multimodal data remains untapped. This work extends DIM to multimodal neuroimaging data, allowing us to identify disorder-relevant brain regions and explore multimodal links. We present compelling evidence of the efficacy of our multimodal DIM analysis in uncovering disorder-relevant brain regions, including the hippocampus, caudate, and insula, as well as multimodal links with the thalamus, precuneus, subthalamus, and hypothalamus. Our self-supervised representations demonstrate promising capabilities in predicting the presence of brain disorders across a spectrum of Alzheimer's phenotypes. Comparative evaluations against state-of-the-art unsupervised methods based on autoencoders, canonical correlation analysis, and supervised models highlight the superiority of our proposed method in classification performance, capturing joint information, and interpretability. The computational efficiency of the decoder-free strategy enhances its practical utility, saving compute resources without compromising performance. This work offers a significant step forward in addressing the challenge of understanding multimodal links in complex brain disorders, with potential applications in neuroimaging research and clinical diagnosis.
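To make the cross-modal objective concrete, the sketch below shows one common way to maximize mutual information between two modality encoders using an InfoNCE bound. This is an illustrative stand-in rather than the authors' method: Deep InfoMax itself typically uses a Jensen-Shannon-based estimator with local feature maps, and the encoder depths, toy 2D inputs, and hyperparameters here are assumptions.

```python
# Minimal sketch (not the authors' code) of a cross-modal DIM-style objective:
# one encoder per modality, trained to maximize an InfoNCE lower bound on the
# mutual information between their representations. Shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_ch: int = 1, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

def infonce(z_a, z_b, temperature: float = 0.1):
    """InfoNCE bound: same-subject pairs are positives, all other pairs in the
    batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature      # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0))       # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: sMRI-like and fMRI-derived 2D slices from the same subjects.
enc_smri, enc_fmri = Encoder(), Encoder()
opt = torch.optim.Adam(list(enc_smri.parameters()) + list(enc_fmri.parameters()), lr=1e-4)
x_smri, x_fmri = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
loss = infonce(enc_smri(x_smri), enc_fmri(x_fmri))
opt.zero_grad(); loss.backward(); opt.step()
```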


Subject(s)
Brain Diseases, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Brain/diagnostic imaging, Multimodal Imaging/methods
2.
Hum Brain Mapp; 44(17): 5828-5845, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37753705

ABSTRACT

This work proposes a novel generative multimodal approach to jointly analyze multimodal data while linking the multimodal information to colors. We apply our proposed framework, which disentangles multimodal data into private and shared sets of features from pairs of structural (sMRI), functional (sFNC and ICA), and diffusion MRI data (FA maps). With our approach, we find that heterogeneity in schizophrenia is potentially a function of modality pairs. Results show (1) schizophrenia is highly multimodal and includes changes in specific networks, (2) non-linear relationships with schizophrenia are observed when interpolating among shared latent dimensions, and (3) we observe a decrease in the modularity of functional connectivity and decreased visual-sensorimotor connectivity for schizophrenia patients for the FA-sFNC and sMRI-sFNC modality pairs, respectively. Additionally, our results generally indicate decreased fractional corpus callosum anisotropy, and decreased spatial ICA map and voxel-based morphometry strength in the superior frontal lobe as found in the FA-sFNC, sMRI-FA, and sMRI-ICA modality pair clusters. In sum, we introduce a powerful new multimodal neuroimaging framework designed to provide a rich and intuitive understanding of the data which we hope challenges the reader to think differently about how modalities interact.
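The private/shared disentanglement described above can be sketched roughly as follows: each modality gets its own variational encoder and decoder, part of its latent code is kept private, and the remaining dimensions are fused into a shared space. This is a simplified assumption about the setup (linear encoders, mean-fused shared codes), not the published architecture.

```python
# Minimal sketch (assumptions, not the published model) of a multimodal VAE
# that splits each modality's latent code into a private part and a part
# aligned into a shared subspace across modalities.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityVAE(nn.Module):
    def __init__(self, in_dim: int, priv: int = 8, shared: int = 8):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * (priv + shared))   # means and log-variances
        self.dec = nn.Linear(priv + shared, in_dim)
        self.priv, self.shared = priv, shared

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z[:, :self.priv], z[:, self.priv:], mu, logvar    # private, shared

def step(vae_a, vae_b, x_a, x_b, beta: float = 1.0):
    pa, sa, mu_a, lv_a = vae_a.encode(x_a)
    pb, sb, mu_b, lv_b = vae_b.encode(x_b)
    s = 0.5 * (sa + sb)                                    # fuse shared codes
    rec = F.mse_loss(vae_a.dec(torch.cat([pa, s], -1)), x_a) \
        + F.mse_loss(vae_b.dec(torch.cat([pb, s], -1)), x_b)
    kl = sum(-0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
             for mu, lv in [(mu_a, lv_a), (mu_b, lv_b)])
    return rec + beta * kl
```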


Subject(s)
Schizophrenia, Humans, Schizophrenia/diagnostic imaging, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Neuroimaging, Diffusion Magnetic Resonance Imaging
3.
Res Sq; 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38168287

ABSTRACT

Alzheimer's disease (AD) is a prevalent neurodegenerative disorder requiring accurate and early diagnosis for effective treatment. Resting-state functional magnetic resonance imaging (rs-fMRI) and gray matter volume analysis from structural MRI have emerged as valuable tools for investigating AD-related brain alterations. However, the potential benefits of integrating these modalities using deep learning techniques remain unexplored. In this study, we propose a novel framework that fuses composite images of multiple rs-fMRI networks (called voxelwise intensity projection) and gray matter segmentation images through a deep learning approach for improved AD classification. We demonstrate the superiority of fMRI networks over commonly used metrics such as amplitude of low-frequency fluctuations (ALFF) and fractional ALFF in capturing spatial maps critical for AD classification. We use a multi-channel convolutional neural network incorporating the AlexNet dropout architecture to effectively model spatial and temporal dependencies in the integrated data. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset of AD patients and cognitively normal (CN) participants validate the efficacy of our approach, showing improved classification performance of 94.12% test accuracy and an area under the curve (AUC) score of 97.79 compared to existing methods. Our results show that the fusion results generally outperform the unimodal results. The saliency visualizations also show significant differences in the hippocampus, amygdala, putamen, caudate nucleus, and regions of the basal ganglia, which is in line with the previous neurobiological literature. Our research offers a novel method to enhance our understanding of AD pathology. By integrating data from various functional networks with structural MRI insights, we significantly improve diagnostic accuracy, which is further boosted by the effective visualization of this combined information. This lays the groundwork for further studies focused on providing a more accurate and personalized approach to AD diagnosis. The proposed framework and insights gained from fMRI networks provide a promising avenue for future research in deep multimodal fusion and neuroimaging analysis.
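As a rough illustration of the fusion strategy, a multi-channel CNN can take the voxelwise fMRI network projection and the gray matter segmentation as parallel input channels that the first convolution fuses. The sketch below is a much smaller stand-in for the AlexNet-style dropout architecture the study uses; the layer sizes and input dimensions are assumptions.

```python
# Minimal sketch (assumed layout, not the authors' exact architecture) of a
# two-channel 3D CNN: channel 0 is the rs-fMRI voxelwise intensity projection,
# channel 1 is the gray matter segmentation; dropout follows in the head.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(4), nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Linear(32 * 4 ** 3, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, n_classes),
        )

    def forward(self, fmri_proj, gm_seg):
        x = torch.stack([fmri_proj, gm_seg], dim=1)   # (N, 2, D, H, W)
        return self.classifier(self.features(x))

# Toy usage with assumed 32^3 volumes.
logits = FusionCNN()(torch.randn(2, 32, 32, 32), torch.randn(2, 32, 32, 32))
```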

4.
medRxiv; 2023 May 26.
Article in English | MEDLINE | ID: mdl-37292973

ABSTRACT

This work proposes a novel generative multimodal approach to jointly analyze multimodal data while linking the multimodal information to colors. By linking colors to private and shared information from modalities, we introduce chromatic fusion, a framework that allows for intuitively interpreting multimodal data. We test our framework on structural, functional, and diffusion modality pairs. In this framework, we use a multimodal variational autoencoder to learn separate latent subspaces: a private space for each modality and a shared space between both modalities. These subspaces are then used to cluster subjects, and the clusters are colored based on their distance from the variational prior to obtain meta-chromatic patterns (MCPs). Each subspace corresponds to a different color: red is the private space of the first modality, green is the shared space, and blue is the private space of the second modality. We further analyze the most schizophrenia-enriched MCPs for each modality pair and find that distinct schizophrenia subgroups are captured by schizophrenia-enriched MCPs for different modality pairs, emphasizing schizophrenia's heterogeneity. For the FA-sFNC, sMRI-FA, and sMRI-ICA MCPs, we generally find decreased fractional corpus callosum anisotropy and decreased spatial ICA map and voxel-based morphometry strength in the superior frontal lobe for schizophrenia patients. To additionally highlight the importance of the shared space between modalities, we perform a robustness analysis of the latent dimensions in the shared space across folds. These robust latent dimensions are subsequently correlated with schizophrenia to reveal that, for each modality pair, multiple shared latent dimensions strongly correlate with schizophrenia. In particular, for FA-sFNC and sMRI-sFNC shared latent dimensions, we respectively observe a reduction in the modularity of the functional connectivity and a decrease in visual-sensorimotor connectivity for schizophrenia patients. The reduction in modularity couples with increased fractional anisotropy dorsally in the left cerebellum. The reduction in visual-sensorimotor connectivity couples with a general reduction in voxel-based morphometry but increased dorsal cerebellar voxel-based morphometry. Since the modalities are trained jointly, we can also use the shared space to reconstruct one modality from the other. We show that cross-reconstruction is possible with our network and is generally much better than relying on the variational prior. In sum, we introduce a powerful new multimodal neuroimaging framework designed to provide a rich and intuitive understanding of the data that we hope challenges the reader to think differently about how modalities interact.
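The red/green/blue assignment can be made concrete with a small sketch: compute each subject's distance from the variational prior in the first modality's private space, the shared space, and the second modality's private space, and scale those three distances into RGB channels. The distance measure and scaling used here are assumptions for illustration, not the exact procedure in the paper.

```python
# Minimal sketch (illustrative assumption, not the published pipeline) of
# turning the three latent subspaces into a meta-chromatic pattern: distance
# from the standard-normal prior maps to red (private, modality 1), green
# (shared), and blue (private, modality 2).
import numpy as np

def prior_distance(z: np.ndarray) -> np.ndarray:
    """Euclidean distance of each subject's latent code from the N(0, I) mean."""
    return np.linalg.norm(z, axis=1)

def meta_chromatic_pattern(z_priv_a, z_shared, z_priv_b):
    channels = [prior_distance(z) for z in (z_priv_a, z_shared, z_priv_b)]
    rgb = np.stack(channels, axis=1)                          # (subjects, 3)
    return rgb / (rgb.max(axis=0, keepdims=True) + 1e-8)      # scale to [0, 1]

# Toy usage: 100 subjects, 8-dimensional subspaces.
rng = np.random.default_rng(0)
mcp = meta_chromatic_pattern(rng.normal(size=(100, 8)),
                             rng.normal(size=(100, 8)),
                             rng.normal(size=(100, 8)))
```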

5.
Sci Rep; 12(1): 12023, 2022 07 21.
Article in English | MEDLINE | ID: mdl-35864279

ABSTRACT

Brain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing this data to low-dimensional features and focusing on the most predictive features comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Yet, the difficulty of reliable training on high-dimensional low sample size datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this work, we introduce a deep learning framework to learn from high-dimensional dynamical data while maintaining stable, ecologically valid interpretations. Results successfully demonstrate that the proposed framework enables learning the dynamics of resting-state fMRI directly from small data and capturing compact, stable interpretations of features predictive of function and dysfunction.
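As a hedged illustration of learning directly from rs-fMRI dynamics while keeping interpretation at the level of individual time points and spatial components, the sketch below shows the general shape of such a model: a small recurrent classifier over ICA component time courses, with input-gradient saliency indicating which time points and components drive a prediction. The dimensions and architecture are assumptions and are much simpler than the published framework.

```python
# Minimal sketch (assumed inputs and architecture, not the published model):
# an LSTM classifier over rs-fMRI component time courses with input-gradient
# saliency as a simple interpretability signal.
import torch
import torch.nn as nn

class DynamicsClassifier(nn.Module):
    def __init__(self, n_components: int = 53, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.rnn = nn.LSTM(n_components, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (N, time, components)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])           # classify from the last hidden state

model = DynamicsClassifier()
x = torch.randn(4, 140, 53, requires_grad=True)    # toy ICA time courses
logits = model(x)
logits[:, 1].sum().backward()                      # gradient w.r.t. the input
saliency = x.grad.abs()                            # (N, time, components) importance map
```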


Subject(s)
Brain, Magnetic Resonance Imaging, Brain/diagnostic imaging, Brain Mapping/methods, Functional Neuroimaging, Magnetic Resonance Imaging/methods
6.
J Neurosci Methods; 339: 108701, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32275915

ABSTRACT

BACKGROUND: The unparalleled performance of deep learning approaches in generic image processing has motivated their extension to neuroimaging data. These approaches learn abstract neuroanatomical and functional brain alterations that could enable exceptional performance in classifying brain disorders, predicting disease progression, and localizing brain abnormalities. NEW METHOD: This work investigates the suitability of a modified form of deep residual neural networks (ResNet) for studying neuroimaging data in the specific application of predicting progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Prediction was conducted first by training the deep models using MCI individuals only, followed by a domain transfer learning version that additionally trained on AD and controls. We also demonstrate a network occlusion based method to localize abnormalities. RESULTS: The implemented framework captured non-linear features that successfully predicted AD progression and also conformed to the spectrum of various clinical scores. In a repeated cross-validated setup, the learnt predictive models showed highly similar peak activations that corresponded to previous AD reports. COMPARISON WITH EXISTING METHODS: The implemented architecture achieved a significant performance improvement over the classical support vector machine and stacked autoencoder frameworks (p < 0.005), performed numerically better than the state of the art using sMRI data alone (> 7% higher than the second-best performing method), and came within 1% of the state-of-the-art performance obtained using multiple neuroimaging modalities. CONCLUSIONS: The explored frameworks reflect the high potential of deep learning architectures for learning subtle predictive features and their utility in critical applications such as predicting and understanding disease progression.
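The network-occlusion localization can be sketched as follows: slide a zeroed patch across the input volume and record how much the predicted progression probability drops when each region is hidden. Patch size, stride, and the assumed single-channel (D, H, W) input layout are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch (illustrative, not the paper's implementation) of occlusion
# sensitivity mapping for a 3D classifier; larger probability drops indicate
# regions the model relies on.
import torch

def occlusion_map(model, volume, patch: int = 8, stride: int = 8, target: int = 1):
    """volume: (D, H, W) tensor; model expects (N, 1, D, H, W) input."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(volume[None, None]), dim=1)[0, target]
        heat = torch.zeros_like(volume)
        d, h, w = volume.shape
        for i in range(0, d - patch + 1, stride):
            for j in range(0, h - patch + 1, stride):
                for k in range(0, w - patch + 1, stride):
                    occluded = volume.clone()
                    occluded[i:i+patch, j:j+patch, k:k+patch] = 0
                    p = torch.softmax(model(occluded[None, None]), dim=1)[0, target]
                    heat[i:i+patch, j:j+patch, k:k+patch] = base - p   # drop = importance
    return heat
```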


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Cognitive Dysfunction/diagnostic imaging, Disease Progression, Humans, Magnetic Resonance Imaging, Neuroimaging
7.
Neuroimage Clin; 22: 101747, 2019.
Article in English | MEDLINE | ID: mdl-30921608

ABSTRACT

Brain functional networks identified from fMRI data can provide potential biomarkers for brain disorders. Group independent component analysis (GICA) is popular for extracting brain functional networks from multiple subjects. In GICA, different strategies exist for reconstructing subject-specific networks from the group-level networks. However, it is unknown whether these strategies have different sensitivities to group differences and different abilities to distinguish patients. Among GICA strategies, spatio-temporal regression (STR) and spatially constrained ICA approaches such as group information guided ICA (GIG-ICA) can be used to propagate components (indicating networks) to a new subject not included in the original subjects. In this study, based on the same a priori network maps, we reconstructed subject-specific networks using these two methods separately from resting-state fMRI data of 151 schizophrenia patients (SZs) and 163 healthy controls (HCs). We investigated group differences in the estimated functional networks and the functional network connectivity (FNC) obtained by each method. The networks were also used as features in a cross-validated support vector machine (SVM) for classifying SZs and HCs. We selected features using different strategies to provide a comprehensive comparison between the two methods. GIG-ICA generally showed greater sensitivity in statistical analysis and better classification performance (accuracy 76.45 ± 8.9%, sensitivity 0.74 ± 0.11, specificity 0.79 ± 0.11) than STR (accuracy 67.45 ± 8.13%, sensitivity 0.65 ± 0.11, specificity 0.71 ± 0.11). Importantly, results were also consistent when applied to an independent dataset including 82 HCs and 82 SZs. Our work suggests that the functional networks estimated by GIG-ICA are more sensitive to group differences, and GIG-ICA is promising for identifying image-derived biomarkers of brain disease.
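For context, the STR back-reconstruction step is the standard two-stage (dual) regression shown below: group spatial maps are regressed against each subject's data to obtain subject time courses, and those time courses are regressed back to obtain subject-specific spatial maps. This is a generic sketch of STR under assumed array shapes; GIG-ICA instead optimizes subject maps under spatial constraints from the group maps and is not shown.

```python
# Minimal sketch of spatio-temporal (dual) regression, used here as an
# assumed illustration of the STR back-reconstruction step.
import numpy as np

def spatio_temporal_regression(Y: np.ndarray, G: np.ndarray):
    """Y: subject data (time x voxels); G: group maps (components x voxels).
    Step 1 (spatial regression): group maps -> subject time courses.
    Step 2 (temporal regression): time courses -> subject spatial maps."""
    tcs, *_ = np.linalg.lstsq(G.T, Y.T, rcond=None)     # (components x time)
    maps, *_ = np.linalg.lstsq(tcs.T, Y, rcond=None)    # (components x voxels)
    return tcs.T, maps                                  # (time x components), maps

# Toy usage: 150 time points, 5000 voxels, 20 group components.
rng = np.random.default_rng(0)
Y = rng.normal(size=(150, 5000))
G = rng.normal(size=(20, 5000))
subject_tcs, subject_maps = spatio_temporal_regression(Y, G)
```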


Subject(s)
Brain/diagnostic imaging, Databases, Factual/classification, Magnetic Resonance Imaging/methods, Nerve Net/diagnostic imaging, Schizophrenia/classification, Schizophrenia/diagnostic imaging, Adult, Female, Humans, Male, Principal Component Analysis/classification