Results 1 - 3 of 3
1.
Comput Biol Med; 155: 106664, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36803794

ABSTRACT

Deep belief networks (DBNs) have been widely used in medical image analysis. However, the high-dimensional but small-sample-size character of medical image data makes the model prone to the curse of dimensionality and to overfitting. Meanwhile, the traditional DBN is driven by performance and ignores explainability, which is important for medical image analysis. In this paper, a sparse, non-convex-regularized explainable deep belief network is proposed by combining the DBN with non-convex sparsity learning. For sparsity, a non-convex regularization and a Kullback-Leibler divergence penalty are embedded into the DBN to obtain sparse connections and a sparse response representation of the network. This effectively reduces model complexity and improves generalization. For explainability, the features crucial for decision-making are selected through feature back-selection based on the row norm of each layer's weight matrix after network training. We apply the model to schizophrenia data and demonstrate that it achieves the best performance among several typical feature selection models. It reveals 28 functional connections highly correlated with schizophrenia, providing an effective foundation for the treatment and prevention of schizophrenia and a methodological reference for similar brain disorders.
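The row-norm feature back-selection the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the single-layer view, and the data shapes are all assumptions. After training, input features are ranked by the L2 norm of the corresponding row of the first layer's weight matrix, on the premise that sparsity learning has driven the rows of irrelevant features toward zero.

```python
import numpy as np

def select_features(W, k):
    """Hypothetical back-selection step: W is an (n_features, n_hidden)
    trained weight matrix; return the indices of the k features whose
    weight rows have the largest L2 norm."""
    row_norms = np.linalg.norm(W, axis=1)      # one norm per input feature
    return np.argsort(row_norms)[::-1][:k]     # descending, keep top k

# Toy check: make one feature's weights dominate and recover it.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))
W[3] *= 10.0                                   # feature 3 gets large weights
top = select_features(W, 3)
```

In the paper's setting the surviving indices would correspond to functional connections, yielding the 28 schizophrenia-related connections reported above.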


Subjects
Brain Diseases, Schizophrenia, Humans, Algorithms, Learning, Brain
2.
Neural Netw; 159: 185-197, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36580711

ABSTRACT

Multi-paradigm deep learning models show great potential for dynamic functional connectivity (dFC) analysis by integrating complementary information. However, many of them cannot use information from different paradigms effectively and have poor explainability, that is, the ability to identify the significant features that contribute to decision-making. In this paper, we propose a multi-paradigm fusion-based explainable deep sparse autoencoder (MF-EDSAE) to address these issues. For explainability, the MF-EDSAE is constructed on a deep sparse autoencoder (DSAE). For effective information integration, the MF-EDSAE contains a nonlinear fusion layer and a multi-paradigm hypergraph regularization. We apply the model to the Philadelphia Neurodevelopmental Cohort and demonstrate that it outperforms the single-paradigm DSAE in detecting dFC patterns that differ significantly during brain development. The experimental results show that children have more dispersed dFC patterns than adults. Brain function transitions from undifferentiated systems to specialized networks during development. Meanwhile, for a given task, adults show stronger connectivity between task-related functional networks than children. As the brain develops, global dFC patterns change more quickly when stimulated by a task.
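The sparsity mechanism behind a deep sparse autoencoder such as the one the abstract builds on is commonly a KL-divergence penalty on hidden activations. The sketch below is a generic illustration of that standard penalty, not the MF-EDSAE itself; the function name and the target sparsity value are assumptions.

```python
import numpy as np

def kl_sparsity_penalty(hidden_activations, rho=0.05):
    """Standard sparse-autoencoder penalty: sum over hidden units of
    KL(rho || rho_hat), where rho_hat is each unit's mean activation
    over the batch and rho is a small target sparsity level."""
    rho_hat = np.clip(np.mean(hidden_activations, axis=0), 1e-8, 1 - 1e-8)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))
```

The penalty is zero when every hidden unit's mean activation equals the target rho and grows as units become more active, which is what pushes the learned representation toward sparse, and hence more interpretable, codes.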


Subjects
Brain Mapping, Magnetic Resonance Imaging, Adult, Child, Humans, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Neural Pathways/diagnostic imaging, Brain/diagnostic imaging
...