Results 1 - 3 of 3
1.
JBI Evid Synth; 22(3): 413-433, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38475899

ABSTRACT

Individual participant data meta-analysis is a commonly used alternative to the traditional aggregate data meta-analysis. It is popular because it avoids relying on published results and enables direct adjustment for relevant covariates. However, a practical challenge is that the studies being combined often vary in terms of the potential confounders that were measured. Furthermore, it will inevitably be the case that some individuals have missing values for some of those covariates. In this paper, we demonstrate how these challenges can be resolved using a propensity score approach, combined with multiple imputation, as a strategy to adjust for covariates in the context of individual participant data meta-analysis. To illustrate, we analyze data from the Bill and Melinda Gates Foundation-funded Healthy Birth, Growth, and Development Knowledge Integration project to investigate the relationship between physical growth rate in the first year of life and cognition measured later during childhood. We found that the overall effect of average growth velocity on cognitive outcome is slightly, but significantly, positive with an estimated effect size of 0.36 (95% CI 0.18, 0.55).
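To make the adjustment strategy concrete, the following is a minimal sketch (not the authors' code) of the pipeline the abstract describes: a partially missing confounder is multiply imputed, a propensity-type score for the exposure is estimated within each imputed data set, the outcome model adjusts for that score, and results are pooled with Rubin's rules. The synthetic data, the variable names (growth, cognition, z), and the simple regression-based score for a continuous exposure are illustrative assumptions, not details taken from the paper.

```python
# Sketch: multiple imputation + propensity-type adjustment, pooled by Rubin's rules.
# All data and variable names are synthetic/illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

z = rng.normal(size=n)                          # confounder, partly missing below
growth = 0.5 * z + rng.normal(size=n)           # exposure (growth velocity)
cognition = 0.36 * growth + 0.8 * z + rng.normal(size=n)
z_obs = np.where(rng.random(n) < 0.7, z, np.nan)  # ~30% missing confounder values

M = 20
estimates, variances = [], []
for _ in range(M):
    # 1) Stochastic regression imputation of the missing confounder
    obs = ~np.isnan(z_obs)
    X_imp = sm.add_constant(np.column_stack([growth, cognition]))
    imp_fit = sm.OLS(z_obs[obs], X_imp[obs]).fit()
    sigma = np.sqrt(imp_fit.scale)
    z_m = z_obs.copy()
    z_m[~obs] = imp_fit.predict(X_imp[~obs]) + rng.normal(scale=sigma, size=(~obs).sum())

    # 2) Propensity-type score for a continuous exposure: fitted E[growth | z]
    ps = sm.OLS(growth, sm.add_constant(z_m)).fit().fittedvalues

    # 3) Outcome model adjusting for the score
    out = sm.OLS(cognition, sm.add_constant(np.column_stack([growth, ps]))).fit()
    estimates.append(out.params[1])
    variances.append(out.bse[1] ** 2)

# 4) Rubin's rules: pool estimates and variances across imputations
q_bar = np.mean(estimates)
total_var = np.mean(variances) + (1 + 1 / M) * np.var(estimates, ddof=1)
print(f"pooled effect of growth: {q_bar:.3f} (SE {np.sqrt(total_var):.3f})")
```

In an individual participant data setting, the imputation and score models would typically be fitted within or stratified by study; the single-sample version above is only meant to show the mechanics of combining the two adjustments.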

2.
BMC Med Res Methodol; 22(1): 208, 2022 Jul 27.
Article in English | MEDLINE | ID: mdl-35896966

ABSTRACT

BACKGROUND: Estimates of causal effects from observational data are subject to various sources of bias. One method for adjusting for residual biases in the estimation of treatment effects is the use of negative control outcomes, which are outcomes not believed to be affected by the treatment of interest. The empirical calibration procedure is a technique that uses negative control outcomes to calibrate p-values. An extension of this technique calibrates the coverage of the 95% confidence interval of a treatment effect estimate by using negative control outcomes as well as positive control outcomes, which are outcomes for which the treatment of interest has known effects. Although empirical calibration has been used in several large observational studies, there has been no systematic examination of its effect under different bias scenarios.

METHODS: The effect of empirical calibration of confidence intervals was analyzed using simulated datasets with known treatment effects. The simulations consisted of a binary treatment and a binary outcome, with biases resulting from unmeasured confounding, model misspecification, measurement error, and lack of positivity. The performance of empirical calibration was evaluated by the change in the coverage of the confidence interval and the bias in the treatment effect estimate.

RESULTS: Empirical calibration increased coverage of the 95% confidence interval of the treatment effect estimate under most bias scenarios but was inconsistent in adjusting the bias in the treatment effect estimate. Empirical calibration of confidence intervals was most effective when adjusting for unmeasured confounding bias. Suitable negative controls had a large impact on the adjustment made by empirical calibration, but small improvements in the coverage of the outcome of interest were also observed when using unsuitable negative controls.

CONCLUSIONS: This work adds evidence for the efficacy of empirical calibration of confidence intervals in observational studies. Calibration of confidence intervals is most effective where biases are due to unmeasured confounding. Further research is needed on the selection of suitable negative controls.
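The core calibration idea can be sketched in a few lines. The snippet below is not the OHDSI EmpiricalCalibration package and does not reproduce the study's simulations; it is a simplified illustration, assuming made-up negative-control estimates and using moment estimates where the published method fits the systematic-error distribution by maximum likelihood.

```python
# Sketch: empirical calibration of a p-value using negative-control estimates.
# All numbers are made up for illustration.
import numpy as np
from scipy import stats

# Log effect estimates and standard errors from negative control outcomes
# (true effect assumed to be 0 on the log scale).
nc_est = np.array([0.10, -0.05, 0.20, 0.15, 0.02, 0.12, 0.08, -0.01])
nc_se = np.array([0.10, 0.12, 0.15, 0.09, 0.11, 0.10, 0.13, 0.08])

# 1) Systematic-error distribution: negative-control estimates are modeled as
#    N(mu, tau^2 + se_i^2). Simple moment estimates are used here.
mu = nc_est.mean()
tau2 = max(nc_est.var(ddof=1) - (nc_se ** 2).mean(), 0.0)

# 2) Calibrate the p-value for the outcome of interest by widening its null
#    distribution with the estimated systematic error.
est, se = 0.45, 0.12                          # hypothetical log effect and SE
z = (est - mu) / np.sqrt(tau2 + se ** 2)
p_calibrated = 2 * stats.norm.sf(abs(z))

p_naive = 2 * stats.norm.sf(abs(est / se))
print(f"naive p = {p_naive:.4f}, calibrated p = {p_calibrated:.4f}")
```

Calibrating the confidence interval, as studied in this paper, works analogously but additionally uses positive controls to learn how the systematic error scales with the true effect size.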


Subject(s)
Research Design, Bias, Calibration, Causality, Humans
3.
Biom J; 62(2): 270-281, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31515855

ABSTRACT

The advent of the big data age has changed the landscape for statisticians. Public and private organizations alike are now interested in capturing and analyzing complex customer data in order to improve their services and drive efficiency gains. However, the large volume of data involved often means that standard statistical methods fail and new ways of thinking are needed. Although great gains can be obtained through the use of more advanced computing environments or by developing sophisticated new statistical algorithms that handle data more efficiently, there are also many simpler things that can be done to handle large data sets in an efficient and intuitive manner. These include distributed analysis methodologies, clever subsampling, data coarsening, and data reductions that exploit concepts such as sufficiency. These kinds of strategies represent exciting opportunities for statisticians to remain front and center in the data science world.
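As one concrete illustration of a sufficiency-based reduction of the kind the abstract mentions, the sketch below fits least squares to data processed in chunks while keeping only the accumulated X'X and X'y. The chunked synthetic data stand in for files or database partitions; this is an illustrative example, not code from the paper.

```python
# Sketch: sufficiency-based data reduction for least squares on chunked data.
# Only the sufficient statistics X'X and X'y are retained.
import numpy as np

rng = np.random.default_rng(1)
p = 5
beta_true = rng.normal(size=p)

xtx = np.zeros((p, p))   # running sum of X'X
xty = np.zeros(p)        # running sum of X'y

# Stream the data one chunk (e.g. one file or partition) at a time.
for _ in range(100):
    X = rng.normal(size=(10_000, p))
    y = X @ beta_true + rng.normal(size=10_000)
    xtx += X.T @ X
    xty += X.T @ y

# The estimate from the reduced statistics equals the estimate that would be
# obtained from fitting the full one-million-row data set at once.
beta_hat = np.linalg.solve(xtx, xty)
print(np.round(beta_hat - beta_true, 3))
```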


Subject(s)
Biometry/methods, Algorithms, Software, Time Factors