Results 1 - 6 of 6
1.
Nat Hum Behav ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085406

ABSTRACT

Brain-phenotype predictive models seek to identify reproducible and generalizable brain-phenotype associations. External validation, or the evaluation of a model in external datasets, is the gold standard in evaluating the generalizability of models in neuroimaging. Unlike typical studies, external validation involves two sample sizes: the training and the external sample sizes. Thus, traditional power calculations may not be appropriate. Here we ran over 900 million resampling-based simulations in functional and structural connectivity data to investigate the relationship between training sample size, external sample size, phenotype effect size, theoretical power and simulated power. Our analysis included a wide range of datasets: the Healthy Brain Network, the Adolescent Brain Cognitive Development Study, the Human Connectome Project (Development and Young Adult), the Philadelphia Neurodevelopmental Cohort, the Queensland Twin Adolescent Brain Project, and the Chinese Human Connectome Project; and phenotypes: age, body mass index, matrix reasoning, working memory, attention problems, anxiety/depression symptoms and relational processing. High effect size predictions achieved adequate power with training and external sample sizes of a few hundred individuals, whereas low and medium effect size predictions required hundreds to thousands of training and external samples. In addition, most previous external validation studies used sample sizes prone to low power, and theoretical power curves should be adjusted for the training sample size. Furthermore, model performance in internal validation often informed subsequent external validation performance (Pearson's r difference <0.2), particularly for well-harmonized datasets. These results could help decide how to power future external validation studies.
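The resampling-based approach described above can be illustrated with a minimal sketch: repeatedly draw samples of a given size from a population with a known brain-phenotype effect size and count how often the association is detected. The function name, effect sizes, and simulation counts here are illustrative, not those of the study's pipeline.

```python
import numpy as np
from scipy import stats

def simulated_power(r, n, n_sims=1000, alpha=0.05, seed=0):
    """Fraction of simulated samples of size n in which a true
    correlation of r is detected at the given alpha level."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, r], [r, 1.0]])
    hits = 0
    for _ in range(n_sims):
        x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        _, p = stats.pearsonr(x[:, 0], x[:, 1])
        hits += p < alpha
    return hits / n_sims

# Large effects reach adequate power with a few hundred
# participants, whereas small effects at the same n do not.
print(simulated_power(0.5, 200))
print(simulated_power(0.1, 200))
```

In an external-validation setting, the same loop would be run twice, once over the training sample size and once over the external sample size, which is what makes the two-sample-size power question distinctive.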

2.
bioRxiv ; 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38645002

ABSTRACT

High-amplitude co-activation patterns are sparsely present during resting-state fMRI but drive functional connectivity [1-5]. Further, they resemble task activation patterns and are well studied [3,5-10]. However, little research has characterized the remaining majority of the resting-state signal. In this work, we introduced caricaturing, a method to project resting-state data to a subspace orthogonal to a manifold of co-activation patterns estimated from the task fMRI data. Projecting to this subspace removes linear combinations of these co-activation patterns from the resting-state data to create caricatured connectomes. We used rich task data from the Human Connectome Project (HCP) [11] and the UCLA Consortium for Neuropsychiatric Phenomics [12] to construct a manifold of task co-activation patterns. Caricatured connectomes were created by projecting resting-state data from the HCP and the Yale Test-Retest [13] datasets away from this manifold. Like caricatures, these connectomes emphasized individual differences by reducing between-individual similarity and increasing individual identification [14]. They also improved predictive modeling of brain-phenotype associations. As caricaturing removes group-relevant task variance, it is an initial attempt to remove task-like co-activations from rest. Therefore, our results suggest that there is a useful signal beyond the dominating co-activations that drive resting-state functional connectivity, which may better characterize the brain's intrinsic functional architecture.
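The core projection step can be sketched as follows: given a set of co-activation patterns, remove their linear combinations from resting-state data by projecting each frame onto the orthogonal complement of their span. The variable names and dimensions are illustrative; the actual method estimates the pattern manifold from task fMRI.

```python
import numpy as np

def caricature(rest_ts, coactivation_patterns):
    """Project each resting-state frame (rows of rest_ts, shape
    (time, nodes)) away from the subspace spanned by the columns
    of coactivation_patterns (shape (nodes, k))."""
    Q, _ = np.linalg.qr(coactivation_patterns)   # orthonormal basis for the span
    P = np.eye(Q.shape[0]) - Q @ Q.T             # projector onto the complement
    return rest_ts @ P

rng = np.random.default_rng(0)
patterns = rng.standard_normal((50, 3))   # 3 task patterns over 50 nodes
rest = rng.standard_normal((100, 50))     # 100 frames of rest data
cleaned = caricature(rest, patterns)

# After projection, every frame is orthogonal to every pattern.
print(np.allclose(cleaned @ patterns, 0.0))   # prints True
```

Connectomes built from the projected time series then reflect whatever structure survives after the task-like co-activations are removed.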

3.
Article in English | MEDLINE | ID: mdl-37734478

ABSTRACT

BACKGROUND: The test-retest reliability of functional magnetic resonance imaging is critical to identifying reproducible biomarkers for psychiatric illness. Recent work has shown how reliability limits the observable effect size of brain-behavior associations, hindering detection of these effects. However, while a fast-growing literature has explored both univariate and multivariate reliability in healthy individuals, relatively few studies have explored reliability in populations with psychiatric illnesses or how this interacts with age. METHODS: Here, we investigated functional connectivity reliability over the course of 1 year in a longitudinal cohort of 88 adolescents (age at baseline = 15.63 ± 1.29 years; 64 female) with major depressive disorder (MDD) and without MDD (healthy volunteers [HVs]). We compared a univariate metric, the intraclass correlation coefficient (ICC), and 2 multivariate metrics, fingerprinting and discriminability. RESULTS: Adolescents with MDD had a marginally higher mean ICC (µ_MDD = 0.34, 95% CI 0.12-0.54; µ_HV = 0.27, 95% CI 0.05-0.52), but average ICCs were poor (<0.4) in both groups. The fingerprinting index (FI) was greater than chance and did not differ between groups (FI_MDD = 0.75; FI_HV = 0.91; Poisson tests p < .001). Discriminability indicated high multivariate reliability in both groups (discriminability_MDD = 0.80; discriminability_HV = 0.82; permutation tests p < .01). Neither univariate nor multivariate reliability was associated with symptom severity or the edge-level effect size of group differences. CONCLUSIONS: Overall, we found little evidence for a relationship between depression and the reliability of functional connectivity during adolescence. These findings suggest that biomarker identification in depression is not limited by reliability relative to healthy samples and support the shift toward multivariate analyses for improved power and reliability.
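Of the multivariate metrics named above, the fingerprinting index is the simplest to sketch: the fraction of subjects whose session-1 connectome is most similar (by Pearson correlation) to their own session-2 connectome rather than anyone else's. The data here are synthetic, with a subject-specific signal plus session noise; the function name and noise levels are illustrative.

```python
import numpy as np

def fingerprinting_index(sess1, sess2):
    """sess1, sess2: (n_subjects, n_edges) vectorized connectomes.
    Returns the fraction of subjects correctly identified across
    sessions by maximal Pearson correlation."""
    n = len(sess1)
    r = np.corrcoef(sess1, sess2)[:n, n:]   # cross-session similarity matrix
    return np.mean(np.argmax(r, axis=1) == np.arange(n))

rng = np.random.default_rng(0)
stable = rng.standard_normal((20, 300))             # subject-specific signal
s1 = stable + 0.5 * rng.standard_normal((20, 300))  # session 1 = signal + noise
s2 = stable + 0.5 * rng.standard_normal((20, 300))  # session 2 = signal + noise
print(fingerprinting_index(s1, s2))
```

With a strong stable component relative to session noise, as in this toy example, identification is near perfect; in real data the index falls as within-subject signal weakens.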


Subjects
Major Depressive Disorder, Humans, Female, Adolescent, Depression, Reproducibility of Results, Brain, Brain Mapping
4.
bioRxiv ; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37961654

ABSTRACT

Identifying reproducible and generalizable brain-phenotype associations is a central goal of neuroimaging. Consistent with this goal, prediction frameworks evaluate brain-phenotype models in unseen data. Most prediction studies train and evaluate a model in the same dataset. However, external validation, or the evaluation of a model in an external dataset, provides a better assessment of robustness and generalizability. Despite the promise of external validation and calls for its usage, the statistical power of such studies has yet to be investigated. In this work, we ran over 60 million simulations across several datasets, phenotypes, and sample sizes to better understand how the sizes of the training and external datasets affect statistical power. We found that prior external validation studies used sample sizes prone to low power, which may lead to false negatives and effect size inflation. Furthermore, increases in the external sample size led to increased simulated power directly following theoretical power curves, whereas changes in the training dataset size offset the simulated power curves. Finally, we compared the performance of a model within a dataset to the external performance. The within-dataset performance was typically within r=0.2 of the cross-dataset performance, which could help decide how to power future external validation studies. Overall, our results illustrate the importance of considering the sample sizes of both the training and external datasets when performing external validation.
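The theoretical power curves that the simulated results are compared against can be sketched with the standard Fisher z approximation for a Pearson correlation; this is the textbook formula, not the training-size-adjusted curve the study derives.

```python
import math
from scipy.stats import norm

def theoretical_power(r, n, alpha=0.05):
    """Approximate power to detect a true correlation r with n
    observations, via the Fisher z transform."""
    z_r = math.atanh(r)                   # Fisher z of the effect size
    se = 1.0 / math.sqrt(n - 3)           # standard error of z
    z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value
    return norm.sf(z_crit - z_r / se) + norm.cdf(-z_crit - z_r / se)

# Power rises with the external sample size; small effects
# (e.g., r = 0.1) require thousands of participants.
for n in (100, 500, 2000):
    print(n, round(theoretical_power(0.1, n), 2))
```

The study's observation is that such curves shift when the training dataset size changes, so the external sample size alone does not determine power.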

5.
Med Image Anal ; 88: 102864, 2023 08.
Article in English | MEDLINE | ID: mdl-37352650

ABSTRACT

Open-source, publicly available neuroimaging datasets - whether from large-scale data collection efforts or pooled from multiple smaller studies - offer unprecedented sample sizes and promote generalization efforts. Releasing data can democratize science, increase the replicability of findings, and lead to discoveries. Partly due to patient privacy, computational, and data storage concerns, researchers typically release preprocessed data with the voxelwise time series parcellated into a map of predefined regions, known as an atlas. However, releasing preprocessed data also limits the choices available to the end-user. This is especially true for connectomics, as connectomes created from different atlases are not directly comparable. Since there exist several atlases with no gold standards, it is unrealistic to have processed, open-source data available from all atlases. Together, these limitations directly inhibit the potential benefits of open-source neuroimaging data. To address these limitations, we introduce Cross Atlas Remapping via Optimal Transport (CAROT) to find a mapping between two atlases. This approach allows data processed from one atlas to be directly transformed into a connectome based on another atlas without the need for raw data access. To validate CAROT, we compare reconstructed connectomes against their original counterparts (i.e., connectomes generated directly from an atlas), demonstrate the utility of transformed connectomes in downstream analyses, and show how a connectome-based predictive model can generalize to publicly available data that was processed with different atlases. Overall, CAROT can reconstruct connectomes from an extensive set of atlases - without needing the raw data - allowing already processed connectomes to be easily reused in a wide range of analyses while eliminating redundant processing efforts. We share this tool as both source code and as a stand-alone web application (http://carotproject.com/).
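The idea of remapping between atlases can be sketched with a toy optimal-transport example: build a cost matrix between regions of two atlases (here, distances between illustrative 1-D region centroids), compute an entropy-regularized transport plan with Sinkhorn iterations, and use it to remap parcellated time series. This simplification is ours; the actual CAROT method learns its mappings from training data.

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport plan with uniform
    marginals, via Sinkhorn-Knopp iterations."""
    K = np.exp(-cost / reg)
    a = np.ones(cost.shape[0]) / cost.shape[0]   # source marginal
    b = np.ones(cost.shape[1]) / cost.shape[1]   # target marginal
    u, v = a.copy(), b.copy()
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

src_centroids = np.linspace(0, 1, 6)   # 6-region source atlas
tgt_centroids = np.linspace(0, 1, 4)   # 4-region target atlas
cost = np.abs(src_centroids[:, None] - tgt_centroids[None, :])
T = sinkhorn_plan(cost)

ts_src = np.random.default_rng(0).standard_normal((100, 6))  # (time, regions)
ts_tgt = ts_src @ (T / T.sum(axis=0, keepdims=True))         # remapped series
print(ts_tgt.shape)   # prints (100, 4)
```

Each target region's time series is a transport-weighted average of source regions, so a connectome can then be computed in the target atlas without ever touching the raw voxelwise data.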


Subjects
Connectome, Humans, Connectome/methods, Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, Software
6.
Biol Psychiatry ; 93(10): 893-904, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36759257

ABSTRACT

Predictive models in neuroimaging are increasingly designed with the intent to improve risk stratification and support interventional efforts in psychiatry. Many of these models have been developed in samples of school-aged or older children. Nevertheless, despite growing evidence that altered brain maturation during the fetal, infant, and toddler (FIT) period modulates risk for poor mental health outcomes in childhood, these models are rarely implemented in FIT samples. Applications of predictive modeling in children of these ages provide an opportunity to develop powerful tools for improved characterization of the neural mechanisms underlying development. To facilitate the broader use of predictive models in FIT neuroimaging, we present a brief primer and systematic review on the methods used in current predictive modeling FIT studies. Reflecting on current practices in more than 100 studies conducted over the past decade, we provide an overview of topics, modalities, and methods commonly used in the field, as well as under-researched areas. We then outline ethical and future considerations for neuroimaging researchers interested in predicting health outcomes in early life, including researchers who may be relatively new to either advanced machine learning methods or using FIT data. Altogether, the last decade of FIT research in machine learning has provided a foundation for accelerating the prediction of early-life trajectories across the full spectrum of illness and health.


Subjects
Machine Learning, Neuroimaging, Child, Preschool Child, Humans, Infant, Neuroimaging/methods