Results 1 - 4 of 4
1.
Neuroimage; 212: 116601, 2020 May 15.
Article in English | MEDLINE | ID: mdl-32036019

ABSTRACT

Replicating results (i.e. obtaining consistent results using a new independent dataset) is an essential part of good science. As replicability has consequences for theories derived from empirical studies, it is of utmost importance to better understand the underlying mechanisms influencing it. A popular tool for non-invasive neuroimaging studies is functional magnetic resonance imaging (fMRI). While the effect of underpowered studies is well documented, the empirical assessment of the interplay between sample size and replicability of results for task-based fMRI studies remains limited. In this work, we extend existing work on this assessment in two ways. Firstly, we use a large database of 1400 subjects performing four types of tasks from the IMAGEN project to subsample a series of independent samples of increasing size. Secondly, replicability is evaluated using a multi-dimensional framework consisting of 3 different measures: (un)conditional test-retest reliability, coherence and stability. We demonstrate not only a positive effect of sample size, but also a trade-off between spatial resolution and replicability. When replicability is assessed voxelwise or when observing small areas of activation, a larger sample size than typically used in fMRI is required to replicate results. On the other hand, when focussing on clusters of voxels, we observe a higher replicability. In addition, we observe variability in the size of clusters of activation between experimental paradigms or contrasts of parameter estimates within these.
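The subsampling logic described in this abstract can be illustrated with a short sketch: draw disjoint subsamples of increasing size from a pooled dataset, run a naive group analysis on each, and compare the thresholded maps with a Dice overlap as one simple voxelwise replicability measure. This is a minimal illustration on simulated data; the array shapes, the 3.1 threshold, and the use of Dice overlap are assumptions for demonstration, not the authors' actual IMAGEN pipeline or their (un)conditional reliability, coherence and stability measures.

```python
# Minimal sketch (simulated data, not the IMAGEN pipeline): draw disjoint
# subsamples of increasing size, run a naive one-sample group analysis on
# each, and compare the thresholded maps with a Dice overlap as a simple
# voxelwise replicability measure.
import numpy as np

rng = np.random.default_rng(0)

def dice(map_a, map_b):
    """Dice overlap between two binary activation maps."""
    denom = map_a.sum() + map_b.sum()
    return 2.0 * np.logical_and(map_a, map_b).sum() / denom if denom else 0.0

def group_map(contrast_maps, thresh=3.1):
    """Naive group analysis: one-sample t-like statistic, thresholded."""
    n = contrast_maps.shape[0]
    t = contrast_maps.mean(axis=0) / (contrast_maps.std(axis=0, ddof=1) / np.sqrt(n))
    return t > thresh

# Simulated subject-level contrast maps: 1400 subjects x 10,000 voxels,
# with a small "active" region of 500 voxels.
all_subjects = rng.normal(0.0, 1.0, size=(1400, 10_000))
all_subjects[:, :500] += 0.5

for n in (20, 50, 100, 200):
    idx = rng.permutation(all_subjects.shape[0])
    sample_a = group_map(all_subjects[idx[:n]])
    sample_b = group_map(all_subjects[idx[n:2 * n]])
    print(f"n={n:4d}  Dice overlap = {dice(sample_a, sample_b):.2f}")
```

In this toy setup the overlap between the two independent subsamples grows with n, mirroring the positive effect of sample size on replicability reported above.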


Subject(s)
Brain Mapping/standards, Magnetic Resonance Imaging/methods, Magnetic Resonance Imaging/standards, Sample Size, Brain Mapping/methods, Humans, Reproducibility of Results
2.
Neuroinformatics; 22(1): 5-22, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37924428

ABSTRACT

Decisions made during the analysis or reporting of an fMRI study influence the eligibility of that study to be entered into a meta-analysis. In a meta-analysis, the results of different studies on the same topic are combined; to combine them, all studies must provide equivalent pieces of information. However, task-based fMRI studies show a large variety of reporting styles. Several meta-analysis methods have been developed to deal with the reporting practices found in task-based fMRI studies, and each therefore requires a specific type of input. In this manuscript we provide an overview of these meta-analysis methods and the specific input they require. Subsequently, we discuss how decisions made during a study influence its eligibility for a meta-analysis, and finally we formulate recommendations on how to report an fMRI study so that it is compatible with as many meta-analysis methods as possible.
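To make the link between reporting style and eligibility concrete, the sketch below pairs each family of meta-analysis methods with the kind of minimal input it typically needs, so a study's reported information can be checked against them. The method groupings and field names are illustrative assumptions based on common coordinate-based and image-based meta-analysis practice, not a list taken from the paper.

```python
# Rough sketch (not from the paper): the minimal input each family of
# meta-analysis methods typically needs. Method groupings and field names
# are illustrative assumptions based on common practice.
REQUIRED_INPUT = {
    "ALE / MKDA (coordinate-based)":      {"peak_coordinates", "sample_size"},
    "Seed-based d mapping (effect-size)": {"peak_coordinates", "peak_statistics", "sample_size"},
    "Image-based meta-analysis (IBMA)":   {"full_statistic_maps", "sample_size"},
}

def eligible_methods(reported):
    """Return the meta-analysis methods that a study's reported information supports."""
    return [method for method, needed in REQUIRED_INPUT.items() if needed <= reported]

# Example: a study reporting peak coordinates and peak t-values, but no full maps.
print(eligible_methods({"peak_coordinates", "peak_statistics", "sample_size"}))
```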


Subject(s)
Magnetic Resonance Imaging
3.
Neuroinformatics; 21(1): 221-242, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36199009

ABSTRACT

What are the standards for reporting the methods and results of fMRI studies, and how have they evolved over the years? To answer this question, we reviewed 160 papers published between 2004 and 2019. Reporting styles for the methods and results of fMRI studies differ greatly between published studies, yet adequate reporting is essential for the comprehension, replication and reuse of a study (for instance in a meta-analysis). To aid authors in reporting the methods and results of their task-based fMRI study, the COBIDAS report was published in 2016; it provides researchers with clear guidelines on how to report the design, acquisition, preprocessing, statistical analysis and results (including data sharing) of fMRI studies (Nichols et al. in Best Practices in Data Analysis and Sharing in Neuroimaging using fMRI, 2016). Earlier reviews evaluated how fMRI methods are reported against the 2008 guidelines, but they did not focus on how task-based fMRI results are reported. This review updates the assessment of how fMRI methods are reported and adds an extra focus on how fMRI results are reported. We discuss reporting practices concerning the design stage, specific participant characteristics, scanner characteristics, data processing methods, data analysis methods and reported results.


Subject(s)
Magnetic Resonance Imaging, Neuroimaging, Humans, Magnetic Resonance Imaging/methods, Research Design
4.
Front Neurosci; 11: 745, 2017.
Article in English | MEDLINE | ID: mdl-29403344

ABSTRACT

Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses aggregate studies using the locations of statistically significant local maxima, possibly together with the associated effect sizes. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More specifically, we consider the influence of the group-level model chosen at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which uses only peak locations, versus fixed effects and random effects meta-analyses, which take into account both peak location and height] and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses; the evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in the individual studies combined with a random effects meta-analysis. Moreover, performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and type II errors; however, it requires more studies than the other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.
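As a concrete illustration of the random-effects combination of peak heights discussed above, the sketch below implements the standard DerSimonian-Laird estimator for a single peak or voxel. The effect sizes and variances are made-up numbers; this does not reproduce the authors' resampling pipeline or the ALE procedure, which uses peak locations only.

```python
# Minimal sketch of a random-effects combination of peak effect sizes using
# the standard DerSimonian-Laird estimator for a single peak/voxel. The
# effect sizes and variances below are made-up numbers, not results from the
# paper.
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird pooled estimate, standard error and z-value."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effects weights
    k = len(y)
    pooled_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - pooled_fe) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, pooled / se

# Ten hypothetical studies, each reporting an effect size and its variance at one peak.
effects = [0.42, 0.30, 0.55, 0.25, 0.48, 0.35, 0.60, 0.28, 0.40, 0.33]
variances = [0.04, 0.06, 0.05, 0.08, 0.03, 0.05, 0.07, 0.06, 0.04, 0.05]
print(random_effects_meta(effects, variances))
```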
