Results 1 - 2 of 2
1.
Eval Rev. 2023 Oct;47(5):895-931.
Article in English | MEDLINE | ID: mdl-37072684

ABSTRACT

Modern policies are commonly evaluated not with randomized experiments but with repeated measures designs like difference-in-differences (DID) and comparative interrupted time series (CITS). The key benefit of these designs is that they control for unobserved confounders that are fixed over time. However, DID and CITS designs only yield unbiased impact estimates when the model assumptions are consistent with the data at hand. In this paper, we empirically test whether the assumptions of repeated measures designs are met in field settings. Using a within-study comparison design, we compare experimental estimates of the impact of patient-directed care on medical expenditures to non-experimental DID and CITS estimates for the same target population and outcome. Our data come from a multi-site experiment that includes participants receiving Medicaid in Arkansas, Florida, and New Jersey. We present summary measures of repeated measures bias across three states, four comparison groups, two model specifications, and two outcomes. We find that, on average, bias from repeated measures designs is very close to zero (less than 0.01 standard deviations; SDs). Further, we find that comparison groups whose pre-treatment trends are visibly parallel to the treatment group's produce less bias than those with visibly divergent trends. However, CITS models that control for baseline trends produced slightly more bias and were less precise than DID models that control only for baseline means. Overall, we offer optimistic evidence in favor of repeated measures designs when randomization is not feasible.
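As a rough illustration of the modeling contrast this abstract describes, the sketch below fits a DID specification (group and period effects only, i.e., baseline means) and a CITS specification (adding group-specific baseline trends). Column names (spend, treat, post, year, person_id) and the input file are hypothetical; this is a generic sketch of the two techniques, not the authors' analysis code.

```python
# Hypothetical sketch: DID vs. CITS on a person-year panel.
# Assumes a complete panel with no missing rows; all column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # columns: spend, treat (0/1), post (0/1), year, person_id

# DID: adjusts for baseline *means* via group and period terms;
# the impact estimate is the coefficient on treat:post.
did = smf.ols("spend ~ treat + post + treat:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["person_id"]}
)

# CITS: additionally adjusts for baseline *trends* by giving each group its own
# pre-period slope in year, plus post-period deviations in level and slope.
cits = smf.ols(
    "spend ~ treat + year + treat:year + post + treat:post + post:year + treat:post:year",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["person_id"]})

print("DID impact estimate: ", did.params["treat:post"])
print("CITS level-shift estimate:", cits.params["treat:post"])
```

In this stylized setup, centering year at the intervention point makes the CITS treat:post coefficient the level shift at adoption; the extra trend terms are what the abstract means by "controlling for baseline trends."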


Subject(s)
Research Design, United States, Humans, Interrupted Time Series Analysis, Causality, Arkansas, Florida
2.
Eval Rev. 2018 Apr;42(2):147-175.
Article in English | MEDLINE | ID: mdl-30317886

ABSTRACT

Given the widespread use of nonexperimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the same target population. The goal is to assess whether the stringent assumptions required for NE methods are likely to be met in practice. This essay provides an overview of recent efforts to empirically evaluate NE method performance in field settings. We discuss a brief history of the design, highlighting methodological innovations along the way. We also describe papers that are included in this two-volume special issue on WSC approaches and suggest future areas for consideration in the design, implementation, and analysis of WSCs.
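The core within-study comparison the essay describes can be summarized in a few lines: benchmark a nonexperimental estimate against the experimental estimate for the same target population and outcome. The numbers below are placeholders, not results from the special issue.

```python
# Hypothetical within-study comparison (WSC) bias check.
# In practice, each estimate would come from fitting the randomized benchmark
# and the nonexperimental (NE) design on the same target population and outcome.
experimental_estimate = 0.12     # impact estimate from the randomized experiment (placeholder)
nonexperimental_estimate = 0.15  # impact estimate from the NE design, e.g., DID (placeholder)

bias = nonexperimental_estimate - experimental_estimate
print(f"Estimated NE bias: {bias:+.3f} outcome units")
```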


Subject(s)
Empirical Research, Learning, Research Design, Program Evaluation