Results 1 - 2 of 2
1.
Heliyon ; 9(7): e18086, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37519689

ABSTRACT

Deep neural networks (DNNs) have been widely adopted as classifiers for functional magnetic resonance imaging (fMRI) data, advancing beyond traditional machine learning models. Consequently, transfer learning from a pre-trained DNN becomes crucial to enhance DNN classification performance, specifically by alleviating the overfitting that occurs when a substantial number of DNN parameters are fitted to a relatively small number of fMRI samples. In this study, we first systematically compared the two most widely used unsupervised pretraining models for resting-state fMRI (rfMRI) volume data, namely the autoencoder (AE) and the restricted Boltzmann machine (RBM), to pre-train the DNNs. The group in-brain mask used when training the AE and RBM displayed a sizable overlap ratio with Yeo's seven functional brain networks (FNs). The parcellated FNs obtained from the RBM were finer-grained than those from the AE. The pre-trained AE and RBM weights served as the weight parameters of the first of the DNN's two hidden layers, and the DNN acted as a task classifier for task fMRI (tfMRI) data from the Human Connectome Project (HCP). We tested two transfer learning schemes: (1) fixing and (2) fine-tuning the DNN's pre-trained AE or RBM weights. The DNN with transfer learning was compared to a baseline DNN trained from random initial weights. Overall, DNN classification performance with transfer learning was superior when the pre-trained RBM weights were fixed and when the pre-trained AE weights were fine-tuned (average error rates: 14.8% for the fixed RBM, 15.1% for the fine-tuned AE, and 15.5% for the baseline model), compared to the alternative transfer learning schemes. Moreover, the optimal scheme between the fixed RBM and the fine-tuned AE varied across the seven task conditions in the HCP. Nonetheless, the computational load was substantially lower for fixed-weight-based transfer learning than for fine-tuning-based transfer learning (e.g., the number of trainable weight parameters for the fixed-weight-based DNN model was reduced to 1.9% of that of a baseline/fine-tuned DNN model). Our findings suggest that initializing the DNN's first layer with RBM-based pre-trained weights is the most promising approach when the whole-brain fMRI volume supports the associated task classification. We believe that our proposed scheme, which uses AE/RBM-based pre-trained weights rather than random initial weights for DNN training, could be applied to a variety of task conditions to improve classification performance and to use computational resources efficiently.
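
A minimal sketch of the two transfer-learning schemes described above, not the authors' implementation: the first hidden layer of a two-hidden-layer DNN classifier is initialized from pre-trained AE or RBM weights and is either frozen (scheme 1) or left trainable for fine-tuning (scheme 2). The layer sizes, the weight file, and the seven-class output are illustrative assumptions.

import torch
import torch.nn as nn

class TransferDNN(nn.Module):
    """Two-hidden-layer DNN whose first layer comes from AE/RBM pretraining."""
    def __init__(self, pretrained_w1, n_hidden2=500, n_classes=7, freeze_first=True):
        super().__init__()
        n_voxels, n_hidden1 = pretrained_w1.shape   # (input voxels, first-hidden units)
        self.fc1 = nn.Linear(n_voxels, n_hidden1)   # first hidden layer: pre-trained weights
        self.fc2 = nn.Linear(n_hidden1, n_hidden2)  # second hidden layer: random init
        self.out = nn.Linear(n_hidden2, n_classes)  # task-condition classifier
        with torch.no_grad():
            self.fc1.weight.copy_(torch.as_tensor(pretrained_w1, dtype=torch.float32).T)
        if freeze_first:                             # scheme (1): fix; scheme (2): fine-tune
            for p in self.fc1.parameters():
                p.requires_grad = False

    def forward(self, x):                            # x: (batch, n_voxels) masked fMRI volume
        h = torch.relu(self.fc1(x))
        h = torch.relu(self.fc2(h))
        return self.out(h)

# Example usage (hypothetical weight file):
# rbm_w = np.load("rbm_weights.npy")              # shape (n_voxels, n_hidden1)
# model = TransferDNN(rbm_w, freeze_first=True)   # fixed-RBM scheme
# opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)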

2.
J Psychiatr Res ; 158: 114-125, 2023 02.
Article in English | MEDLINE | ID: mdl-36580867

ABSTRACT

The general psychopathology factor (p-factor) represents shared variance across mental disorders based on psychopathologic symptoms. The Adolescent Brain Cognitive Development (ABCD) Study offers an unprecedented opportunity to investigate functional networks (FNs) from functional magnetic resonance imaging (fMRI) associated with the psychopathology of an adolescent cohort (n > 10,000). However, the heterogeneity arising from the use of multiple sites and multiple scanners in the ABCD Study needs to be overcome to improve the prediction of the p-factor from fMRI. We proposed a scanner-generalization neural network (SGNN) to predict the individual p-factor by systematically reducing the scanner effect in resting-state functional connectivity (RSFC). We included 6905 adolescents from 18 sites whose fMRI data were collected using either Siemens or GE scanners. The p-factor was estimated from the Child Behavior Checklist (CBCL) scores available in the ABCD Study using exploratory factor analysis. We evaluated Pearson's correlation coefficients (CCs) for p-factor prediction via leave-one/two-site-out cross-validation (LOSOCV/LTSOCV) and identified important FNs from the weight features (WFs) of the SGNN. The CCs were higher for the SGNN than for alternative models under both LOSOCV (0.1631 ± 0.0673 for the SGNN vs. 0.1497 ± 0.0710 for kernel ridge regression [KRR]; p < 0.05 from a two-tailed paired t-test) and LTSOCV (0.1469 ± 0.0381 for the SGNN vs. 0.1394 ± 0.0359 for KRR; p = 0.01). We found that (a) the default-mode and dorsal attention FNs were important for p-factor prediction, and (b) the intra-visual FN was important for scanner generalization. We demonstrated the efficacy of our novel SGNN model for p-factor prediction while simultaneously eliminating scanner-related confounding effects in RSFC.
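
As a rough illustration of the evaluation protocol rather than the SGNN itself, the sketch below runs leave-one-site-out cross-validation with the kernel ridge regression baseline and scores each held-out site with Pearson's CC; the RSFC features, p-factor scores, and site labels are placeholder inputs, not the ABCD pipeline.

import numpy as np
from scipy.stats import pearsonr
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import LeaveOneGroupOut

def losocv_krr(rsfc, p_factor, site_ids, alpha=1.0):
    """rsfc: (n_subjects, n_edges) RSFC features; p_factor: (n_subjects,) scores;
    site_ids: (n_subjects,) site labels used as cross-validation groups."""
    ccs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(rsfc, p_factor, groups=site_ids):
        model = KernelRidge(alpha=alpha, kernel="linear")
        model.fit(rsfc[train_idx], p_factor[train_idx])
        pred = model.predict(rsfc[test_idx])
        ccs.append(pearsonr(pred, p_factor[test_idx])[0])  # CC for the held-out site
    return np.mean(ccs), np.std(ccs)

# Example with synthetic data, just to exercise the loop:
# rng = np.random.default_rng(0)
# rsfc = rng.standard_normal((180, 400)); p = rng.standard_normal(180)
# sites = np.repeat(np.arange(18), 10)                     # 18 sites, 10 subjects each
# mean_cc, sd_cc = losocv_krr(rsfc, p, sites)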


Subjects
Brain; Mental Disorders; Adolescent; Child; Humans; Brain Mapping/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Neural Pathways/diagnostic imaging