Results 1 - 2 of 2
1.
Am J Eval; 42(4): 586-601, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34966242

ABSTRACT

This article shares lessons learned in applying system evaluation theory (SET) to evaluate a Clinical and Translational Research Center (CTR) funded by the National Institutes of Health. After describing how CTR support cores are intended to work interdependently as a system, the case is made for SET as the best fit for evaluating this evaluand. The article then details how the evaluation was also tasked with facilitating a CTR culture shift, helping support cores move from working autonomously to working together and to understand how the cores' individual operating processes affect each other. This was achieved by incorporating the Homeland Security Exercise and Evaluation Program (HSEEP) building-block approach into the implementation of SET. Each of the seven HSEEP building blocks is examined for its alignment with SET's three steps and its ability to systematically support the goal of moving CTR cores toward working interdependently. The implications of using HSEEP to support SET implementation in future evaluations are discussed.

2.
Can J Program Eval; 37(1): 142-154, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35979063

ABSTRACT

The article proposes three evaluation utility metrics to help evaluators assess the quality of their own evaluation practice. After an overview of reflective practice in evaluation, the different ways in which evaluators can hold themselves accountable are discussed. It is argued that reflective practice requires evaluators to go beyond evaluation quality (i.e., technical quality and methodological rigor) when assessing evaluation practice and to also evaluate evaluation utility (i.e., the specific actions taken in response to evaluation recommendations). Three Evaluation Utility Metrics (EUMs) are proposed: whether recommendations are considered (EUMc), whether they are adopted (EUMa), and, if adopted, their level of influence (EUMli). The authors then reflect on their experience using the EUMs, noting the importance of managing expectations through negotiation to ensure EUM data are collected, and the need to consider contextual nuances (e.g., the adoption and influence of recommendations depend on multiple factors beyond the evaluators' control). Recommendations for increasing EUM rates by attending to the frequency and timing of recommendations are also shared. Results of implementing these EUMs in a real-world evaluation provide evidence of their potential value: following the practice tips yielded an EUMc of 100% and an EUMa above 80%. Methods for considering and applying all three EUMs together to facilitate practice improvement are also discussed.
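
The abstract reports the EUMs as rates (EUMc = 100%, EUMa > 80%) but does not spell out the formulas. As a rough illustration only, the Python sketch below assumes EUMc is the share of all recommendations considered, EUMa the share of considered recommendations that were adopted, and EUMli the mean influence rating of adopted recommendations; the Recommendation record and eum_rates function are hypothetical constructions, not taken from the article.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """One evaluation recommendation and its fate (hypothetical schema)."""
    considered: bool           # was the recommendation considered by the client?
    adopted: bool              # was it adopted (only meaningful if considered)?
    influence: Optional[int]   # assumed ordinal 1-3 influence rating; None if not adopted

def eum_rates(recs):
    """Compute the three EUMs as proportions (assumed formulas, see note above)."""
    total = len(recs)
    considered = [r for r in recs if r.considered]
    adopted = [r for r in considered if r.adopted]
    ratings = [r.influence for r in adopted if r.influence is not None]
    return {
        "EUMc": len(considered) / total if total else None,
        "EUMa": len(adopted) / len(considered) if considered else None,
        "EUMli": sum(ratings) / len(ratings) if ratings else None,
    }

recs = [
    Recommendation(True, True, 3),
    Recommendation(True, True, 2),
    Recommendation(True, False, None),
]
print(eum_rates(recs))  # {'EUMc': 1.0, 'EUMa': 0.666..., 'EUMli': 2.5}

Under these assumptions, tracking each recommendation's fate as a simple record makes the rates trivial to recompute as new client decisions come in, which fits the article's emphasis on negotiating data collection up front.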
