Results 1 - 2 of 2
1.
Adv Sci (Weinh) ; : e2400929, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38900070

ABSTRACT

To elucidate the brain-wide information interactions that vary across individuals and contribute to individual differences in schizophrenia (SCZ), an information-resolved method is employed to construct individual synergistic and redundant interaction matrices from regional pairwise BOLD time-series of 538 SCZ patients and 540 normal controls (NC). This analysis reveals a stable pattern of regionally specific synergy dysfunction in SCZ. A hierarchical Bayesian model is then applied to decompose the pattern of whole-brain synergy dysfunction into three latent factors that explain symptom heterogeneity in SCZ. Factor 1 correlates significantly and positively with Positive and Negative Syndrome Scale (PANSS) positive scores, while factor 3 correlates significantly and negatively with PANSS negative and general scores. By integrating the neuroimaging data with normative gene expression information, the study shows that each of the three factors corresponds to a subset of the SCZ risk gene set. Finally, by combining data from NeuroSynth and open molecular imaging sources with a spatially heterogeneous mean-field model, the study delineates three SCZ synergy factors that correspond to distinct symptom profiles and implicate unique cognitive, neurodynamic, and neurobiological mechanisms.
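The abstract's "synergistic interaction" quantifies information about a target that two sources carry only jointly. As a loose illustration of that concept (not the authors' exact information-resolved method, which operates on time-lagged BOLD dynamics), the sketch below computes synergy for jointly Gaussian variables under the minimal-mutual-information (MMI) partial information decomposition: Syn = I((X1,X2);Y) - max(I(X1;Y), I(X2;Y)). All variable names and the toy data are illustrative.

```python
import numpy as np

def gaussian_mi(x, y):
    """Mutual information (in nats) between two jointly Gaussian variable
    blocks, estimated from sample covariances:
    I = 0.5 * (log det Cx + log det Cy - log det Cxy)."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)       # shape (dims, samples)
    cx = np.atleast_2d(np.cov(x))                   # np.cov squeezes 1x1 -> wrap back
    cy = np.atleast_2d(np.cov(y))
    cxy = np.atleast_2d(np.cov(np.vstack([x, y])))
    return 0.5 * (np.linalg.slogdet(cx)[1]
                  + np.linalg.slogdet(cy)[1]
                  - np.linalg.slogdet(cxy)[1])

def mmi_pid_synergy(x1, x2, y):
    """Synergy under the MMI redundancy function; non-negative because the
    joint MI dominates each marginal MI."""
    joint = gaussian_mi(np.vstack([np.atleast_2d(x1), np.atleast_2d(x2)]), y)
    return joint - max(gaussian_mi(x1, y), gaussian_mi(x2, y))

# Toy demo: y depends on the *sum* of two independent sources, so a large
# part of the information about y is carried synergistically.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(5000)
x2 = rng.standard_normal(5000)
y = x1 + x2 + 0.5 * rng.standard_normal(5000)
syn = mmi_pid_synergy(x1, x2, y)   # analytically about 0.8 nats here
```

Computing this quantity for every region pair of a parcellated BOLD recording would yield one synergy matrix per subject, analogous in spirit to the interaction matrices described above.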

2.
IEEE Trans Image Process ; 31: 3825-3837, 2022.
Article in English | MEDLINE | ID: mdl-35609094

ABSTRACT

Recently, owing to their superior performance, knowledge distillation-based (kd-based) methods with exemplar rehearsal have been widely applied in class incremental learning (CIL). However, we discover that they suffer from a feature uncalibration problem, caused by directly transferring knowledge from the old model to the new model when learning a new task. Because the old model confuses the feature representations of the learned and new classes, the kd loss and the classification loss used in kd-based methods are heterogeneous, so learning the existing knowledge directly from the old model, as typical kd-based methods do, is detrimental. To tackle this problem, we propose a feature calibration network (FCN) that calibrates the existing knowledge to alleviate the feature representation confusion of the old model. In addition, to relieve the task-recency bias of FCN caused by the limited storage memory in CIL, we propose a novel image-feature hybrid sample rehearsal strategy that trains FCN by splitting the memory budget to store both image and feature exemplars of previous tasks. Since feature embeddings of images have much lower dimensionality, this allows us to store more samples for training FCN. Based on these two improvements, we propose the Cascaded Knowledge Distillation Framework (CKDF), which has three main stages. The first stage trains FCN to calibrate the existing knowledge of the old model. Then, the new model is trained by simultaneously transferring knowledge from the calibrated teacher model through knowledge distillation and learning the new classes. Finally, after the new task is learned, the feature exemplars of previous tasks are updated. Importantly, we demonstrate that CKDF is a general framework that can be applied to various kd-based methods. Experimental results show that our method achieves state-of-the-art performance on several CIL benchmarks.
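The second training stage combines a classification loss on new classes with a distillation loss against the (calibrated) teacher. Below is a minimal NumPy sketch of a generic temperature-scaled distillation objective of the kind such kd-based methods build on; the FCN calibration step itself is omitted, and all names, the temperature `t`, and the mixing weight `alpha` are illustrative, not the paper's code.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the class axis."""
    z = z / t
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, t=2.0, alpha=0.5):
    """Generic distillation objective:
    (1 - alpha) * CE(student, labels) + alpha * t^2 * KL(teacher_t || student_t),
    where the t^2 factor keeps gradient scale comparable across temperatures."""
    p_s = softmax(student_logits)
    ce = -np.mean(np.log(p_s[np.arange(len(labels)), labels] + 1e-12))
    p_t = softmax(teacher_logits, t)       # softened teacher distribution
    p_st = softmax(student_logits, t)      # softened student distribution
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_st + 1e-12)),
                        axis=1))
    return (1 - alpha) * ce + alpha * (t ** 2) * kl

# Toy demo with random logits for a batch of 4 samples and 10 classes.
rng = np.random.default_rng(0)
s = rng.standard_normal((4, 10))
labels = np.array([0, 1, 2, 3])
loss = kd_loss(rng.standard_normal((4, 10)), s, labels)
```

In CKDF's setting the teacher logits would come from the old model after passing through the calibration network, so the KL term and the cross-entropy term pull the student in consistent directions.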
