ABSTRACT
Neuroscience is advancing standardization and tool development to support rigor and transparency. Consequently, data pipeline complexity has increased, hindering FAIR (findable, accessible, interoperable and reusable) access. brainlife.io was developed to democratize neuroimaging research. The platform provides data standardization, management, visualization and processing and automatically tracks the provenance history of thousands of data objects. Here, brainlife.io is described and evaluated for validity, reliability, reproducibility, replicability and scientific utility using four data modalities and 3,200 participants.
Subjects
Cloud Computing, Neurosciences, Neurosciences/methods, Humans, Neuroimaging/methods, Reproducibility of Results, Software, Brain/physiology, Brain/diagnostic imaging
ABSTRACT
Inference in neuroimaging typically occurs at the level of focal brain areas or circuits. Yet, increasingly, well-powered studies paint a much richer picture of broad-scale effects distributed throughout the brain, suggesting that many focal reports may only reflect the tip of the iceberg of underlying effects. How focal versus broad-scale perspectives influence the inferences we make has not yet been comprehensively evaluated using real data. Here, we compare sensitivity and specificity across procedures representing multiple levels of inference using an empirical benchmarking procedure that resamples task-based connectomes from the Human Connectome Project dataset (~1,000 subjects, 7 tasks, 3 resampling group sizes, 7 inferential procedures). Only broad-scale (network and whole brain) procedures obtained the traditional 80% statistical power level to detect an average effect, reflecting >20% more statistical power than focal (edge and cluster) procedures. Power also increased substantially for false discovery rate- compared with familywise error rate-controlling procedures. The downsides are fairly limited; the loss in specificity for broad-scale and FDR procedures was relatively modest compared to the gains in power. Furthermore, the broad-scale methods we introduce are simple, fast, and easy to use, providing a straightforward starting point for researchers. This also points to the promise of more sophisticated broad-scale methods for not only functional connectivity but also related fields, including task-based activation. Altogether, this work demonstrates that shifting the scale of inference and choosing FDR control are both immediately attainable and can help remedy the issues with statistical power plaguing typical studies in the field.
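To make the scale-of-inference contrast concrete, here is a toy R sketch comparing edge-level inference under familywise error (Bonferroni) control with network-level inference under FDR (Benjamini-Hochberg) control. The simulated effect sizes, the edge-to-network assignment, and the Stouffer-style pooling are illustrative assumptions, not the authors' resampling benchmark.

```r
# Toy contrast of focal (edge-level, FWER) vs. broad-scale (network-level, FDR) inference.
set.seed(1)
n_edge  <- 4950                                   # edges of a 100-node connectome
network <- sample(1:10, n_edge, replace = TRUE)   # hypothetical edge-to-network labels
delta   <- rnorm(n_edge, mean = 0.2, sd = 0.1)    # weak effects spread over many edges
z_edge  <- rnorm(n_edge, mean = delta)            # observed edge-level z-statistics
p_edge  <- 2 * pnorm(-abs(z_edge))

# Edge-level inference with familywise error (Bonferroni) control:
sum(p.adjust(p_edge, "bonferroni") < 0.05)        # few or no edges survive

# Network-level inference: pool edges within each network (Stouffer combination),
# then control the false discovery rate across the 10 networks:
z_net <- tapply(z_edge, network, function(z) sum(z) / sqrt(length(z)))
p_net <- 2 * pnorm(-abs(z_net))
sum(p.adjust(p_net, "BH") < 0.05)                 # most or all networks are detected
```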
Subjects
Connectome, Magnetic Resonance Imaging, Brain/physiology, Connectome/methods, Humans, Magnetic Resonance Imaging/methods
ABSTRACT
INTRODUCTION: Manual motor problems have been reported in mild cognitive impairment (MCI) and Alzheimer's disease (AD), but the specific aspects that are affected, their neuropathology, and potential value for classification modeling is unknown. The current study examined if multiple measures of motor strength, dexterity, and speed are affected in MCI and AD, related to AD biomarkers, and are able to classify MCI or AD. METHODS: Fifty-three cognitively normal (CN), 33 amnestic MCI, and 28 AD subjects completed five manual motor measures: grip force, Trail Making Test A, spiral tracing, finger tapping, and a simulated feeding task. Analyses included (1) group differences in manual performance; (2) associations between manual function and AD biomarkers (PET amyloid β, hippocampal volume, and APOE ε4 alleles); and (3) group classification accuracy of manual motor function using machine learning. RESULTS: Amnestic MCI and AD subjects exhibited slower psychomotor speed and AD subjects had weaker dominant hand grip strength than CN subjects. Performance on these measures was related to amyloid β deposition (both) and hippocampal volume (psychomotor speed only). Support vector classification well-discriminated control and AD subjects (area under the curve of 0.73 and 0.77, respectively) but poorly discriminated MCI from controls or AD. CONCLUSION: Grip strength and spiral tracing appear preserved, while psychomotor speed is affected in amnestic MCI and AD. The association of motor performance with amyloid β deposition and atrophy could indicate that this is due to amyloid deposition in and atrophy of motor brain regions, which generally occurs later in the disease process. The promising discriminatory abilities of manual motor measures for AD emphasize their value alongside other cognitive and motor assessment outcomes in classification and prediction models, as well as potential enrichment of outcome variables in AD clinical trials.
Subjects
Alzheimer Disease, Cognitive Dysfunction, Humans, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/classification, Cognitive Dysfunction/physiopathology, Alzheimer Disease/classification, Alzheimer Disease/diagnosis, Alzheimer Disease/physiopathology, Female, Male, Aged, Hand Strength/physiology, Aged, 80 and over, Psychomotor Performance/physiology, Amyloid beta-Peptides/metabolism, Hippocampus/pathology, Middle Aged, Positron-Emission Tomography/methods, Neuropsychological Tests
ABSTRACT
Functional MRI (fMRI) data may be contaminated by artifacts arising from a myriad of sources, including subject head motion, respiration, heartbeat, scanner drift, and thermal noise. These artifacts cause deviations from common distributional assumptions, introduce spatial and temporal outliers, and reduce the signal-to-noise ratio of the data-all of which can have negative consequences for the accuracy and power of downstream statistical analysis. Scrubbing is a technique for excluding fMRI volumes thought to be contaminated by artifacts and generally comes in two flavors. Motion scrubbing based on subject head motion-derived measures is popular but suffers from a number of drawbacks, among them the need to choose a threshold, a lack of generalizability to multiband acquisitions, and high rates of censoring of individual volumes and entire subjects. Alternatively, data-driven scrubbing methods like DVARS are based on observed noise in the processed fMRI timeseries and may avoid some of these issues. Here we propose "projection scrubbing", a novel data-driven scrubbing method based on a statistical outlier detection framework and strategic dimension reduction, including independent component analysis (ICA), to isolate artifactual variation. We undertake a comprehensive comparison of motion scrubbing with data-driven projection scrubbing and DVARS. We argue that an appropriate metric for the success of scrubbing is maximal data retention subject to reasonable performance on typical benchmarks such as the validity, reliability, and identifiability of functional connectivity. We find that stringent motion scrubbing yields worsened validity, worsened reliability, and produced small improvements to fingerprinting. Meanwhile, data-driven scrubbing methods tend to yield greater improvements to fingerprinting while not generally worsening validity or reliability. Importantly, however, data-driven scrubbing excludes a fraction of the number of volumes or entire sessions compared to motion scrubbing. The ability of data-driven fMRI scrubbing to improve data retention without negatively impacting the quality of downstream analysis has major implications for sample sizes in population neuroscience research.
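For reference, the R sketch below illustrates DVARS, the simplest of the data-driven measures mentioned above, as the root mean square of the volume-to-volume signal change. It is a simplified toy version (standard implementations also standardize the values) and is not the projection scrubbing method itself.

```r
# Minimal DVARS sketch: RMS of successive volume differences across voxels.
dvars <- function(X) {            # X: T x V matrix (time points by voxels)
  d <- diff(X)                    # differences between consecutive volumes
  c(NA, sqrt(rowMeans(d^2)))      # one value per volume; the first is undefined
}

set.seed(1)
X <- matrix(rnorm(200 * 5000), nrow = 200)   # toy fMRI run
X[120, ] <- X[120, ] + 3                     # inject a spike-like artifact
which(dvars(X) > 2)                          # flags volumes 120 and 121 (spike and return to baseline)
```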
Subjects
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Reproducibility of Results, Image Processing, Computer-Assisted/methods, Artifacts, Motion, Brain/diagnostic imaging, Brain Mapping/methods
ABSTRACT
Most neuroimaging studies display results that represent only a tiny fraction of the collected data. While it is conventional to present "only the significant results" to the reader, here we suggest that this practice has several negative consequences for both reproducibility and understanding. This practice hides away most of the results of the dataset and leads to problems of selection bias and irreproducibility, both of which have been recognized as major issues in neuroimaging studies recently. Opaque, all-or-nothing thresholding, even if well-intentioned, places undue influence on arbitrary filter values, hinders clear communication of scientific results, wastes data, is antithetical to good scientific practice, and leads to conceptual inconsistencies. It is also inconsistent with the properties of the acquired data and the underlying biology being studied. Instead of presenting only a few statistically significant locations and hiding away the remaining results, studies should "highlight" the former while also showing as much as possible of the rest. This is distinct from but complementary to utilizing data sharing repositories: the initial presentation of results has an enormous impact on the interpretation of a study. We present practical examples and extensions of this approach for voxelwise, regionwise and cross-study analyses using publicly available data that was analyzed previously by 70 teams (NARPS; Botvinik-Nezer et al., 2020), showing that it is possible to balance the goals of displaying a full set of results with providing the reader reasonably concise and "digestible" findings. In particular, the highlighting approach sheds useful light on the kind of variability present among the NARPS teams' results, which is primarily a varied strength of agreement rather than disagreement. Using a meta-analysis built on the informative "highlighting" approach shows this relative agreement, while one using the standard "hiding" approach does not. We describe how this simple but powerful change in practice (focusing on highlighting results rather than hiding all but the strongest ones) can help address many large concerns within the field, or at least provide more complete information about them. We include a list of practical suggestions for results reporting to improve reproducibility, cross-study comparisons and meta-analyses.
Subjects
Neuroimaging, Humans, Reproducibility of Results, Bias, Selection Bias
ABSTRACT
Science is undergoing rapid change with the movement to improve science focused largely on reproducibility/replicability and open science practices. This moment of change, in which science turns inward to examine its methods and practices, provides an opportunity to address its historic lack of diversity and noninclusive culture. Through network modeling and semantic analysis, we provide an initial exploration of the structure, cultural frames, and women's participation in the open science and reproducibility literatures (n = 2,926 articles and conference proceedings). Network analyses suggest that the open science and reproducibility literatures are emerging relatively independently of each other, sharing few common papers or authors. We next examine whether the literatures differentially incorporate collaborative, prosocial ideals that are known to engage members of underrepresented groups more than independent, winner-takes-all approaches. We find that open science has a more connected, collaborative structure than does reproducibility. Semantic analyses of paper abstracts reveal that these literatures have adopted different cultural frames: open science includes more explicitly communal and prosocial language than does reproducibility. Finally, consistent with literature suggesting the diversity benefits of communal and prosocial purposes, we find that women publish more frequently in high-status author positions (first or last) within open science (vs. reproducibility). This finding is further patterned by team size and time. Women are more represented in larger teams within reproducibility, and women's participation is increasing in open science over time and decreasing in reproducibility. We conclude with actionable suggestions for cultivating a more prosocial and diverse culture of science.
Subjects
Reproducibility of Results, Science/trends, Women, Authorship, Humans, Information Dissemination, Open Access Publishing
ABSTRACT
There is significant interest in adopting surface- and grayordinate-based analysis of MR data for a number of reasons, including improved whole-cortex visualization, the ability to perform surface smoothing to avoid issues associated with volumetric smoothing, improved inter-subject alignment, and reduced dimensionality. The CIFTI grayordinate file format introduced by the Human Connectome Project further advances grayordinate-based analysis by combining gray matter data from the left and right cortical hemispheres with gray matter data from the subcortex and cerebellum into a single file. Analyses performed in grayordinate space are well-suited to leverage information shared across the brain and across subjects through both traditional analysis techniques and more advanced statistical methods, including Bayesian methods. The R statistical environment facilitates use of advanced statistical techniques, yet little support for grayordinates analysis has been previously available in R. Indeed, few comprehensive programmatic tools for working with CIFTI files have been available in any language. Here, we present the ciftiTools R package, which provides a unified environment for reading, writing, visualizing, and manipulating CIFTI files and related data formats. We illustrate ciftiTools' convenient and user-friendly suite of tools for working with grayordinates and surface geometry data in R, and we describe how ciftiTools is being utilized to advance the statistical analysis of grayordinate-based functional MRI data.
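A minimal sketch of a typical ciftiTools workflow follows. The function names and the structure of the "xifti" object reflect the package as described here, but exact arguments and file paths are placeholders and should be checked against the current ciftiTools documentation.

```r
# Sketch of a basic ciftiTools workflow (paths and file names are placeholders).
library(ciftiTools)

# ciftiTools wraps the Connectome Workbench; point it to a local installation.
ciftiTools.setOption("wb_path", "/path/to/workbench")

# Read a dense scalar CIFTI file into an R "xifti" object, which keeps the
# left/right cortical and subcortical gray matter data in separate components.
xii <- read_cifti("example.dscalar.nii")

# Standard R operations apply directly to the numeric data, e.g. thresholding:
xii$data$cortex_left[xii$data$cortex_left < 1] <- 0

# Visualize on the cortical surface and write the modified file back out.
view_xifti_surface(xii)
write_cifti(xii, "example_thresholded.dscalar.nii")
```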
Subjects
Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Neuroimaging, Connectome, Data Interpretation, Statistical, Humans, Software
ABSTRACT
The general linear model (GLM) is a widely popular and convenient tool for estimating the functional brain response and identifying areas of significant activation during a task or stimulus. However, the classical GLM is based on a massive univariate approach that does not explicitly leverage the similarity of activation patterns among neighboring brain locations. As a result, it tends to produce noisy estimates and be underpowered to detect significant activations, particularly in individual subjects and small groups. A recently proposed alternative, a cortical surface-based spatial Bayesian GLM, leverages spatial dependencies among neighboring cortical vertices to produce more accurate estimates and areas of functional activation. The spatial Bayesian GLM can be applied to individual and group-level analysis. In this study, we assess the reliability and power of individual and group-average measures of task activation produced via the surface-based spatial Bayesian GLM. We analyze motor task data from 45 subjects in the Human Connectome Project (HCP) and HCP Retest datasets. We also extend the model to multi-run analysis and employ subject-specific cortical surfaces rather than surfaces inflated to a sphere for more accurate distance-based modeling. Results show that the surface-based spatial Bayesian GLM produces highly reliable activations in individual subjects and is powerful enough to detect trait-like functional topologies. Additionally, spatial Bayesian modeling enhances reliability of group-level analysis even in moderately sized samples (n=45). Notably, the power of the spatial Bayesian GLM to detect activations above a scientifically meaningful effect size is nearly invariant to sample size, exhibiting high power even in small samples (n=10). The spatial Bayesian GLM is computationally efficient in individuals and groups and is convenient to implement with the open-source BayesfMRI R package.
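For context, the classical massive-univariate GLM referred to above amounts to fitting an independent regression at every vertex, as in the toy R sketch below (simulated data; this illustrates the baseline, not the BayesfMRI workflow or the spatial Bayesian model itself).

```r
# Toy illustration of the classical "massive univariate" GLM: one independent
# regression per vertex, ignoring spatial dependence among neighboring vertices.
set.seed(1)
n_time <- 200; n_vertex <- 500
task      <- rep(0:1, length.out = n_time)           # toy task regressor
beta_true <- rnorm(n_vertex, 0, 0.5)                  # true activation amplitudes
bold      <- outer(task, beta_true) + matrix(rnorm(n_time * n_vertex), n_time)

# Unbiased but noisy vertex-wise estimates; the noisiness is what motivates
# borrowing strength across neighboring vertices via spatial priors.
beta_hat <- apply(bold, 2, function(y) coef(lm(y ~ task))[2])
cor(beta_true, beta_hat)
```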
Subjects
Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiology, Connectome/standards, Magnetic Resonance Imaging/standards, Models, Theoretical, Task Performance and Analysis, Adult, Bayes Theorem, Connectome/methods, Humans, Linear Models, Magnetic Resonance Imaging/methods, Reproducibility of Results
ABSTRACT
Longitudinal fMRI studies hold great promise for the study of neurodegenerative diseases, development and aging, but realizing their full potential depends on extracting accurate fMRI-based measures of brain function and organization in individual subjects over time. This is especially true for studies of rare, heterogeneous and/or rapidly progressing neurodegenerative diseases. These often involve small samples with heterogeneous functional features, making traditional group-difference analyses of limited utility. One such disease is amyotrophic lateral sclerosis (ALS), a severe disease resulting in extreme loss of motor function and eventual death. Here, we use an advanced individualized task fMRI analysis approach to analyze a rich longitudinal dataset containing 190 hand clench fMRI scans from 16 ALS patients (78 scans) and 22 age-matched healthy controls (112 scans). Specifically, we adopt our cortical surface-based spatial Bayesian general linear model (GLM), which has high power and precision to detect activations in individual subjects, and propose a novel longitudinal extension to leverage information shared across visits. We perform all analyses in native surface space to preserve individual anatomical and functional features. Using mixed-effects models to subsequently study the relationship between size of activation and ALS disease progression, we observe for the first time an inverted U-shaped trajectory of motor activations: at relatively mild motor disability we observe enlarging activations, while at higher levels of motor disability we observe severely diminished activation, reflecting progression toward complete loss of motor function. We further observe distinct trajectories depending on clinical progression rate, with faster progressors exhibiting more extreme changes at an earlier stage of disability. These differential trajectories suggest that initial hyper-activation is likely attributable to loss of inhibitory neurons, rather than functional compensation as earlier assumed. These findings substantially advance scientific understanding of the ALS disease process. This study also provides the first real-world example of how surface-based spatial Bayesian analysis of task fMRI can further scientific understanding of neurodegenerative disease and other phenomena. The surface-based spatial Bayesian GLM is implemented in the BayesfMRI R package.
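The kind of trend analysis described here can be sketched with a simple mixed-effects model (using the lme4 package) relating activation size to motor disability. The simulated data and variable names below are illustrative, not the study's actual model specification.

```r
# Hedged sketch: an inverted-U in activation size as a function of disability,
# with subject-specific random intercepts (toy data, illustrative names).
library(lme4)
set.seed(1)
dat <- data.frame(
  subject    = factor(rep(1:16, each = 5)),
  disability = runif(80, 0, 10)
)
subj_effect <- rnorm(16, sd = 1)
dat$activation <- 2 + 1.2 * dat$disability - 0.15 * dat$disability^2 +
  subj_effect[as.integer(dat$subject)] + rnorm(80, sd = 0.5)

fit <- lmer(activation ~ disability + I(disability^2) + (1 | subject), data = dat)
summary(fit)   # a negative quadratic coefficient captures the inverted-U trajectory
```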
Subjects
Amyotrophic Lateral Sclerosis, Disabled Persons, Motor Disorders, Neurodegenerative Diseases, Amyotrophic Lateral Sclerosis/diagnostic imaging, Bayes Theorem, Disease Progression, Humans, Linear Models, Magnetic Resonance Imaging, Neurodegenerative Diseases/diagnostic imaging
ABSTRACT
BACKGROUND: Classic psychedelics, such as psilocybin and LSD, and other serotonin 2A receptor (5-HT2AR) agonists evoke acute alterations in perception and cognition. Altered thalamocortical connectivity has been hypothesized to underlie these effects, which is supported by some functional MRI (fMRI) studies. These studies have treated the thalamus as a unitary structure, despite known differential 5-HT2AR expression and functional specificity of different intrathalamic nuclei. Independent Component Analysis (ICA) has been previously used to identify reliable group-level functional subdivisions of the thalamus from resting-state fMRI (rsfMRI) data. We build on these efforts with a novel data-maximizing ICA-based approach to examine psilocybin-induced changes in intrathalamic functional organization and thalamocortical connectivity in individual participants. METHODS: Baseline rsfMRI data (n=38) from healthy individuals with a long-term meditation practice was utilized to generate a statistical template of thalamic functional subdivisions. This template was then applied in a novel ICA-based analysis of the acute effects of psilocybin on intra- and extra-thalamic functional organization and connectivity in follow-up scans from a subset of the same individuals (n=18). We examined correlations with subjective reports of drug effect and compared with a previously reported analytic approach (treating the thalamus as a single functional unit). RESULTS: Several intrathalamic components showed significant psilocybin-induced alterations in spatial organization, with effects of psilocybin largely localized to the mediodorsal and pulvinar nuclei. The magnitude of changes in individual participants correlated with reported subjective effects. These components demonstrated predominant decreases in thalamocortical connectivity, largely with visual and default mode networks. Analysis in which the thalamus is treated as a singular unitary structure showed an overall numerical increase in thalamocortical connectivity, consistent with previous literature using this approach, but this increase did not reach statistical significance. CONCLUSIONS: We utilized a novel analytic approach to discover psilocybin-induced changes in intra- and extra-thalamic functional organization and connectivity of intrathalamic nuclei and cortical networks known to express the 5-HT2AR. These changes were not observed using whole-thalamus analyses, suggesting that psilocybin may cause widespread but modest increases in thalamocortical connectivity that are offset by strong focal decreases in functionally relevant intrathalamic nuclei.
Subjects
Psilocybin, Serotonin, Cerebral Cortex/physiology, Humans, Magnetic Resonance Imaging, Neural Pathways/physiology, Psilocybin/pharmacology, Rest, Thalamus/physiology
ABSTRACT
I applaud the authors on their innovative generalized independent component analysis (ICA) framework for neuroimaging data. Although ICA has enjoyed great popularity for the analysis of functional magnetic resonance imaging (fMRI) data, its applicability to other modalities has been limited because standard ICA algorithms may not be directly applicable to a diversity of data representations. This is particularly true for single-subject structural neuroimaging, where only a single measurement is collected at each location in the brain. The ingenious idea of Wu et al. (2021) is to transform the data to a vector of probabilities via a mixture distribution with K components, which (following a simple transformation to $\mathbb{R}^{K-1}$) can be directly analyzed with standard ICA algorithms, such as infomax (Bell and Sejnowski, 1995) or fastICA (Hyvarinen, 1999). The underlying distribution forming the basis of the mixture is customized to the particular modality being analyzed. This framework, termed distributional ICA (DICA), is applicable in theory to nearly any neuroimaging modality. This has substantial implications for ICA as a general tool for neuroimaging analysis, with particular promise for structural modalities and multimodal studies. This invited commentary focuses on the applicability and potential of DICA for different neuroimaging modalities, questions around details of implementation and performance, and limitations of the validation study presented in the paper.
Subjects
Algorithms, Magnetic Resonance Imaging, Brain/diagnostic imaging, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Neuroimaging, Principal Component Analysis
ABSTRACT
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations.
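The core idea, shrinking a noisy subject-level connection toward the group average, can be conveyed with a simple weighted average as in the R sketch below. The single shrinkage weight is a simplification of the measurement error model proposed here, and the variance values are illustrative.

```r
# Conceptual sketch of empirical Bayes shrinkage of subject-level FC toward the
# group mean: the noisier the subject estimate, the stronger the shrinkage.
shrink_fc <- function(subj_fc, group_fc, var_noise, var_subject) {
  lambda <- var_noise / (var_noise + var_subject)   # shrinkage weight in [0, 1]
  lambda * group_fc + (1 - lambda) * subj_fc
}

# Toy example for a single connection:
shrink_fc(subj_fc = 0.55, group_fc = 0.30, var_noise = 0.02, var_subject = 0.01)
# returns ~0.38, pulled from the noisy 0.55 toward the group value of 0.30
```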
Subjects
Brain/physiology, Connectome/methods, Image Processing, Computer-Assisted/methods, Nerve Net/physiology, Bayes Theorem, Brain/anatomy & histology, Humans, Magnetic Resonance Imaging/methods, Nerve Net/anatomy & histology
ABSTRACT
Outlier detection for high-dimensional (HD) data is a popular topic in modern statistical research. However, one source of HD data that has received relatively little attention is functional magnetic resonance images (fMRI), which consist of hundreds of thousands of measurements sampled at hundreds of time points. At a time when the availability of fMRI data is rapidly growing (primarily through large, publicly available grassroots datasets), automated quality control and outlier detection methods are greatly needed. We propose principal components analysis (PCA) leverage and demonstrate how it can be used to identify outlying time points in an fMRI run. Furthermore, PCA leverage is a measure of the influence of each observation on the estimation of principal components, which are often of interest in fMRI data. We also propose an alternative measure, PCA robust distance, which is less sensitive to outliers and has controllable statistical properties. The proposed methods are validated through simulation studies and are shown to be highly accurate. We also conduct a reliability study using resting-state fMRI data from the Autism Brain Imaging Data Exchange and find that removal of outliers using the proposed methods results in more reliable estimation of subject-level resting-state networks using independent components analysis.
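The R sketch below shows the basic mechanics of PCA leverage on a toy dataset with one injected artifactual volume; the number of components and the injected artifact are illustrative choices, not the proposed method's defaults.

```r
# Sketch of PCA leverage for flagging outlying fMRI volumes (illustrative only).
pca_leverage <- function(X, K = 5) {
  A <- prcomp(X, rank. = K)$x                  # T x K principal component scores
  diag(A %*% solve(crossprod(A)) %*% t(A))     # leverage = diagonal of the hat matrix
}

set.seed(1)
X <- matrix(rnorm(200 * 2000), nrow = 200)     # toy run: 200 volumes, 2000 voxels
X[57, ] <- X[57, ] + 4                         # inject an artifactual volume
lev <- pca_leverage(X)
mean(lev)                                      # equals K / T = 0.025, the typical leverage
which.max(lev)                                 # the injected volume (57) has by far the largest leverage
```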
Subjects
Magnetic Resonance Imaging, Principal Component Analysis, Algorithms, Autistic Disorder/diagnostic imaging, Brain/diagnostic imaging, Humans, Reproducibility of Results
ABSTRACT
Quantitative T1 maps estimate T1 relaxation times and can be used to assess diffuse tissue abnormalities within normal-appearing tissue. T1 maps are popular for studying the progression and treatment of multiple sclerosis (MS). However, their inclusion in standard imaging protocols remains limited due to the additional scanning time and expert calibration required, as well as susceptibility to bias and noise. Here, we propose a new method of estimating T1 maps using four conventional MR images, which are intensity-normalized using cerebellar gray matter as a reference tissue and related to T1 using a smooth regression model. Using cross-validation, we generate statistical T1 maps for 61 subjects with MS. The statistical maps are less noisy than the acquired maps and show similar reproducibility. Tests of group differences in normal-appearing white matter across MS subtypes give similar results using both methods.
Subjects
Algorithms, Brain/diagnostic imaging, Diffusion Tensor Imaging/methods, Image Interpretation, Computer-Assisted/methods, Models, Statistical, Multiple Sclerosis/diagnostic imaging, White Matter/diagnostic imaging, Adult, Brain/pathology, Computer Simulation, Data Interpretation, Statistical, Female, Humans, Image Enhancement/methods, Male, Middle Aged, Multiple Sclerosis/pathology, Regression Analysis, Reproducibility of Results, Sensitivity and Specificity, White Matter/pathology
ABSTRACT
A recent interest in resting state functional magnetic resonance imaging (rsfMRI) lies in subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often a necessary step for defining the network nodes used in connectivity studies. While inference has traditionally been performed on group-level data, there is a growing interest in parcellating single subject data. However, this is difficult due to the inherent low signal-to-noise ratio of rsfMRI data, combined with typically short scan lengths. A large number of brain parcellation approaches employ clustering, which begins with a measure of similarity or distance between voxels. The goal of this work is to improve the reproducibility of single-subject parcellation using shrinkage-based estimators of such measures, allowing the noisy subject-specific estimator to "borrow strength" in a principled manner from a larger population of subjects. We present several empirical Bayes shrinkage estimators and outline methods for shrinkage when multiple scans are not available for each subject. We perform shrinkage on raw inter-voxel correlation estimates and use both raw and shrinkage estimates to produce parcellations by performing clustering on the voxels. While we employ a standard spectral clustering approach, our proposed method is agnostic to the choice of clustering method and can be used as a pre-processing step for any clustering algorithm. Using two datasets - a simulated dataset where the true parcellation is known and is subject-specific, and a test-retest dataset consisting of two 7-minute resting-state fMRI scans from 20 subjects - we show that parcellations produced from shrinkage correlation estimates have higher reliability and validity than those produced from raw correlation estimates. Application to test-retest data shows that using shrinkage estimators increases the reproducibility of subject-specific parcellations of the motor cortex by up to 30%.
Subjects
Brain/physiology, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Rest/physiology, Algorithms, Bayes Theorem, Brain/anatomy & histology, Brain Mapping, Cluster Analysis, Humans, Image Processing, Computer-Assisted/statistics & numerical data, Models, Statistical, Motor Cortex/anatomy & histology, Motor Cortex/physiology, Neural Pathways/physiology, Reproducibility of Results, Signal-To-Noise Ratio
ABSTRACT
Resting-state functional connectivity is a widely used approach to study the functional brain network organization during early brain development. However, the estimation of functional connectivity networks in individual infants has been rather elusive due to the unique challenges involved with functional magnetic resonance imaging (fMRI) data from young populations. Here, we use fMRI data from the developing Human Connectome Project (dHCP) database to characterize individual variability in a large cohort of term-born infants (N = 289) using a novel data-driven Bayesian framework. To enhance alignment across individuals, the analysis was conducted exclusively on the cortical surface, employing surface-based registration guided by age-matched neonatal atlases. Using 10 minutes of resting-state fMRI data, we successfully estimated subject-level maps for fourteen brain networks/subnetworks along with individual functional parcellation maps that revealed differences between subjects. We also found a significant relationship between age and mean connectivity strength in all brain regions, including previously unreported findings in higher-order networks. These results illustrate the advantages of surface-based methods and Bayesian statistical approaches in uncovering individual variability within very young populations.
ABSTRACT
Independent component analysis is commonly applied to functional magnetic resonance imaging (fMRI) data to extract independent components (ICs) representing functional brain networks. While ICA produces reliable group-level estimates, single-subject ICA often produces noisy results. Template ICA is a hierarchical ICA model using empirical population priors to produce more reliable subject-level estimates. However, this and other hierarchical ICA models assume unrealistically that subject effects are spatially independent. Here, we propose spatial template ICA (stICA), which incorporates spatial priors into the template ICA framework for greater estimation efficiency. Additionally, the joint posterior distribution can be used to identify brain regions engaged in each network using an excursions set approach. By leveraging spatial dependencies and avoiding massive multiple comparisons, stICA has high power to detect true effects. We derive an efficient expectation-maximization algorithm to obtain maximum likelihood estimates of the model parameters and posterior moments of the latent fields. Based on analysis of simulated data and fMRI data from the Human Connectome Project, we find that stICA produces estimates that are more accurate and reliable than benchmark approaches, and identifies larger and more reliable areas of engagement. The algorithm is computationally tractable, achieving convergence within 12 hours for whole-cortex fMRI analysis.
ABSTRACT
BACKGROUND: Despite reports of gross motor problems in mild cognitive impairment (MCI) and Alzheimer's disease (AD), fine motor function has been relatively understudied. OBJECTIVE: We examined if finger tapping is affected in AD, related to AD biomarkers, and able to classify MCI or AD. METHODS: Forty-seven cognitively normal, 27 amnestic MCI, and 26 AD subjects completed unimanual and bimanual computerized tapping tests. We tested 1) group differences in tapping with permutation models; 2) associations between tapping and biomarkers (PET amyloid-β, hippocampal volume, and APOE ε4 alleles) with linear regression; and 3) the predictive value of tapping for group classification using machine learning. RESULTS: AD subjects had slower reaction time and larger speed variability than controls during all tapping conditions, except for dual tapping. MCI subjects performed worse than controls on reaction time and speed variability for dual and non-dominant hand tapping. Tapping speed and variability were related to hippocampal volume, but not to amyloid-β deposition or APOE ε4 alleles. Random forest classification (overall accuracy = 70%) discriminated control and AD subjects, but poorly discriminated MCI from controls or AD. CONCLUSIONS: MCI and AD are linked to more variable finger tapping with slower reaction time. Associations between finger tapping and hippocampal volume, but not amyloidosis, suggest that tapping deficits are related to neuropathology that presents later during the disease. Considering that tapping performance is able to differentiate between control and AD subjects, it can offer a cost-efficient tool for augmenting existing AD biomarkers.