ABSTRACT
While evaluations play a critical role in accounting for and learning from context, it is unclear how evaluations can take account of climate change. Our objective was to explore how climate change and its interaction with other contextual factors influenced One Health food safety programs. To do so, we integrated questions about climate change into a qualitative evaluation study of an ongoing, multi-sectoral program aiming to improve pork safety in Vietnam called SafePORK. We conducted remote interviews with program researchers (n = 7) and program participants (n = 23). Based on our analysis, researchers believed climate change had potential impacts on the program but noted evidence was lacking, while program participants (slaughterhouse workers and retailers) shared how they were experiencing and adapting to the impacts of climate change. Climate change also interacted with other contextual factors to introduce additional complexities. Our study underscored the importance of assessing climate factors in evaluation and building adaptive capacity in programming.
ABSTRACT
Evaluation capacity building (ECB) continues to attract attention. Over the past two decades, a broad literature has emerged covering the dimensions, contexts, and practices of ECB. This article presents findings from a bibliometric analysis of ECB articles published in six evaluation journals from 2000 to 2019. The findings shed light on the communities of scholars that contribute to the ECB knowledge base, the connections between these communities, and the themes they cover. Informed by the findings, the article discusses future directions for ECB scholarship and how bibliometric analysis can supplement more established approaches to literature reviews.
Subjects
Capacity Building, Publications, Humans, Program Evaluation, Bibliometrics
ABSTRACT
Evaluation capacity building (ECB) continues to attract the attention and interest of scholars and practitioners. Over the years, models, frameworks, strategies, and practices related to ECB have been developed and implemented. Although ECB is highly contextual, the evolution of knowledge in this area depends on learning from past efforts through a structured approach. The purpose of the present article is to integrate the ECB literature in evaluation journals. More specifically, the article aims to answer three questions: What types of articles and themes comprise the current literature on ECB? How are current practices of ECB described in the literature? And what is the current status of research on ECB? Informed by the findings of the review, the article concludes with suggestions for future ECB practice and scholarship.
ABSTRACT
The purpose of this exploratory research-on-evaluation study was to examine how modified and unmodified scales measuring critical thinking and interest in science careers would affect evaluation conclusions. Surveys measuring outcomes used in program evaluation are frequently modified from their original versions in response to the unique context of programs. Modifying existing published surveys by removing or adding items can affect the psychometric properties of the original scale and may produce differing results. The comparisons showed that unmodified and modified surveys had similar reliabilities; however, one of the scales produced contradictory evaluation findings. Lessons learned from this study suggest that scales can be modified in evaluation, but great care is needed to address the potential strengths and limitations of the modified scale and to balance technical needs with responsiveness to program context.
Subjects
Thinking, Humans, Program Evaluation, Surveys and Questionnaires, Psychometrics, Reproducibility of Results
ABSTRACT
Objective. To examine the scope of health communication media campaign process evaluation methods, findings, and dissemination practices. Data Source. A systematic review of peer-reviewed literature was conducted using database searches. Study Inclusion and Exclusion Criteria. Published studies on process and implementation evaluation of health campaigns with a media component were included. Studies were excluded if they were not health-related, were non-empirical, lacked a media campaign, or focused on other evaluation types. Data Extraction. Articles were assessed for general campaign information, theory use, and details about process evaluation plans and procedures. Data Synthesis. A coding scheme based on 9 process evaluation best-practice elements (e.g., fidelity and context) was applied. Process evaluation methods, measures, and reporting themes were synthesized. Results. Among 691 unique records, 46 articles were included. Process evaluation was the main focus for 71.7% of articles, yet only 39.1% reported how process evaluation informed campaign implementation strategy. Articles reported 4.39 elements on average (SD = 1.99; range 1-9), with reach (87.0%) and recruitment (73.9%) described most frequently, yet reporting was inconsistent. Further, the level of detail in reporting methods, theory, and analysis varied. Conclusions. Process evaluation provides insight about mechanisms and intervening variables that could meaningfully affect interpretations of outcome evaluations; however, process evaluations remain underrepresented in the literature. Recommendations for evidence-based process evaluation components to guide evaluation are discussed.
Subjects
Health Communication, Health Promotion/methods, Humans, Mass Media
ABSTRACT
BACKGROUND: Stakeholders are often involved in evaluation, such as in the selection of specific research questions and the interpretation of results. Except for the topic of whether stakeholder involvement increases use, a paucity of research exists to guide practice regarding stakeholders. OBJECTIVES: We address two questions: (1) If a third-party observer knows stakeholders were involved in an evaluation, does that affect the perceived credibility, fairness, and relevance of the evaluation? (2) Among individuals with a possible stake in an evaluation, which stakeholder group(s) do they want to see participate; in particular, do they prefer that multiple stakeholder groups, rather than a single group, participate? RESEARCH DESIGN: Six studies are reported. All studies address the former question, while Studies 3 to 5 also focus on the latter question. To study effects of stakeholder involvement on third-party views, participants read summaries of ostensible evaluations, with stakeholder involvement noted or not. To examine a priori preferences among potential stakeholders, participants completed a survey about alternative stakeholder group involvement in an evaluation in which they would likely have an interest. RESULTS AND CONCLUSIONS: Across studies, effects of reported stakeholder participation on third parties' views were not robust; however, small effects on perceived fairness sometimes, but not always, occurred after stakeholder involvement and its rationales had been made salient. All surveys showed a large preference for the involvement of multiple, rather than single, stakeholder groups. We discuss implications for research and practice regarding stakeholder involvement, and for research on evaluation more generally.
Subjects
Stakeholder Participation, Humans
ABSTRACT
Evaluation has been described as a political act. Programs and policies are generated from a political process, and the decision to evaluate and how to use the evaluation are manifestations of that political dynamic. This exploratory study was conducted with practicing evaluators to understand what they view as political situations in the evaluation process and how they respond to these situations. Findings suggest that, across the evaluation phases in which respondents had been involved, evaluations are most susceptible to politics when stakeholders are first being identified and when evaluation findings are reported. Evaluators have developed multiple strategies for dealing with these situations, including finding allies for the evaluation and working to explain the evaluation process and its implications. We hope that this study will help inform novice and expert evaluators about the various political situations they may encounter in their practice.
Subjects
Policies, Politics, Humans, Program Evaluation
ABSTRACT
Many in the data visualization and evaluation communities recommend conveying the message or takeaway of a visualization in its title. This study tested that recommendation by examining how informative or generic titles affect a visualization's visual efficiency, aesthetics, and credibility, and the perceived effectiveness of the hypothetical program examined. The study also tested how simple or complex graphs and positive, negative, or mixed results (i.e., the valence of the results) affected outcomes. Participants were randomly assigned to one of 12 conditions in a 2 (graph: simple or complex) × 2 (title: generic or informative) × 3 (valence: positive, negative, or mixed) between-subjects design. The results indicated that informative titles required less mental effort and were viewed as more aesthetically pleasing, but otherwise did not lead to greater accuracy, credibility, or perceived effectiveness. Titles did not interact with graph type or the valence of the findings. While the results suggest it is worthwhile to consider adding an informative title to data visualizations, since it can reduce mental effort for the viewer, the intended goal of the visualization should be taken into consideration: that goal can be a deciding factor in the type of graph and title that will best serve the visualization's purposes. Overall, this suggests that data visualization recommendations that shape evaluation reporting practices should be scrutinized more closely through research.
Subjects
Data Visualization, Humans, Program Evaluation
ABSTRACT
In realist evaluation, where researchers aim to make program theories explicit, they can encounter competing explanations as to how programs work. Managing explanatory tensions from different sources of evidence in multi-stakeholder projects can challenge external evaluators, especially when access to pertinent data, like client records, is mediated by program stakeholders. In this article, we consider two central questions: how can program stakeholder motives shape a realist evaluation project; and how might realist evaluators respond to stakeholders' belief-motive explanations, including those about program effectiveness, based on factors such as supererogatory commitment or trying together in good faith? Drawing on our realist evaluation of a service reform initiative involving multiple agencies, we describe stakeholder motives at key phases, highlighting a need for tactics and skills that help to manage explanatory tensions. In conclusion, the relevance of stakeholders' belief-motive explanations ('we believe the program works') in realist evaluation is clarified and discussed.
ABSTRACT
In recent years, articles in Evaluation and Program Planning have noted the importance of evaluating programs' unintended consequences and the need to increase our knowledge in that area. To that end, this paper considers the information that can be obtained about the unintended consequences of foreign assistance programs through an automated textual analysis and review of publicly available monitoring reports and evaluations. Automated full-text searches for terms synonymous with 'unintended consequences' were conducted across more than 1,300 monitoring reports and evaluations downloaded from a publicly available database of foreign assistance programs. The reports identified by the automated searches were screened and analyzed to determine which had considered and/or reported such consequences. Positive and negative consequences were identified, as were the assistance sectors and recipient countries involved. While this study makes available more information on the unintended consequences of foreign assistance programs, it also emphasizes the need for further research in this area and outlines how a future research project of this nature might obtain more data.
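A minimal sketch of the kind of automated full-text screening the study describes; the term list, file layout, and function names here are illustrative assumptions, not the authors' actual pipeline:

```python
import pathlib
import re

# Illustrative synonyms; the study's actual search-term list is not reproduced here.
TERMS = ["unintended consequence", "unanticipated effect",
         "unexpected outcome", "unintended effect", "spillover"]
PATTERN = re.compile("|".join(re.escape(t) for t in TERMS), re.IGNORECASE)

def screen_reports(report_dir: str) -> dict[str, int]:
    """Return the number of candidate term matches found in each report."""
    hits = {}
    for path in pathlib.Path(report_dir).glob("*.txt"):
        text = path.read_text(errors="ignore")
        matches = PATTERN.findall(text)
        if matches:
            hits[path.name] = len(matches)
    return hits

if __name__ == "__main__":
    # Flagged reports would then be screened manually, as in the study.
    for name, count in sorted(screen_reports("reports").items()):
        print(f"{name}: {count} candidate mention(s)")
```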
Subjects
Developing Countries, International Cooperation, Program Evaluation/methods, United States Agency for International Development, Documentation, Humans, Learning, United States
ABSTRACT
Surveys of two independent random samples of American Evaluation Association (AEA) members were conducted to investigate application of the logic of evaluation in their evaluation practice. This logic consists of four parts: (1) establish criteria, (2) set standards, (3) measure performance on criteria and compare to standards, and (4) synthesize into a value judgment. Nearly three-fourths (71.84% ± 5.98%) of AEA members are unfamiliar with this logic, yet a majority also indicate its importance and utility for evaluation practice. Moreover, and despite unfamiliarity with the four steps of the logic of evaluation, many AEA members identify evaluative criteria (82.41% ± 3.34%), set performance standards (60.55% ± 7.39%), compare performance to standards (62.14% ± 5.98%), and synthesize into an evaluative conclusion (75.00% ± 5.80%) in their evaluation practice. Much like the working logic of evaluation, however, application of the general logic varies widely.
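A plausible reading of the ± figures, assuming they are normal-approximation margins of error for sample proportions (the abstract states neither the formula nor the sample sizes, so this is a sketch, not the authors' reported method):

```latex
% Margin of error for a sample proportion \hat{p} with sample size n,
% at confidence level 1 - \alpha (z_{0.025} \approx 1.96 for 95%):
\[
\mathrm{ME} \;=\; z_{\alpha/2}\,\sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
\]
% For example, \hat{p} = 0.7184 with a hypothetical n \approx 217 gives
% ME \approx 0.0598, matching the reported 71.84% \pm 5.98% under these assumptions.
```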
Subjects
Professional Competence/statistics & numerical data, Program Evaluation/methods, Cross-Sectional Studies, Female, Humans, Male, Societies, Surveys and Questionnaires
ABSTRACT
A rubric is a tool that can support evaluators in a core function of their practice: combining evidence with values to determine merit, worth, or significance. However, little rubric guidance specific to evaluation exists. This study examined, through semi-structured interviews, how a rare group of nine seasoned, rubric-using evaluators from across the globe use rubrics in their program evaluation practice, and how they learned to use them. Key findings revealed that rubrics were a critical component of these evaluators' practice, used not only to make evaluative determinations but also as frameworks to sharpen an evaluation's focus. Additionally, findings support the notion that there is a paucity of formal channels for learning about rubrics and indicate that these early adopters are instead honing their skills through informal channels, such as trial and error and tapping into a community of practice. Future directions for training and research should include expanding understanding, application, and acceptance of rubric use.
Subjects
Program Evaluation/methods, Books, Humans, Interprofessional Relations, Interviews as Topic, Learning, Problem Solving
ABSTRACT
Published articles from the Evaluation and Program Planning journal were examined over a six-year period from 2010 to 2016. We investigated the focus of the journal, evaluation type (formative vs. summative), number of articles published, place of authorship, number of authors, research domain of articles, research topics, and data collection method used. Results indicated that (a) the public health, evaluation, and adolescent/child research domains were most prevalent; (b) most authors were from North America; (c) most articles had three or more authors; and (d) document review was the most prevalent data collection method. We suggest that more articles with a multicultural background be published, and that more articles be solicited from other countries to fulfill the international mission of the journal.
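A minimal sketch of the kind of tallying such a journal review involves; the file name and coded field names are hypothetical, not the authors' actual coding sheet:

```python
from collections import Counter
import csv

def tally(articles_csv: str, field: str) -> Counter:
    """Count how often each value of a coded field (e.g., 'domain',
    'region', 'method') appears across the coded article records."""
    with open(articles_csv, newline="") as f:
        return Counter(row[field] for row in csv.DictReader(f))

if __name__ == "__main__":
    # Report the three most common values for each coded field.
    for field in ("domain", "region", "method"):
        print(field, tally("coded_articles.csv", field).most_common(3))
```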
Subjects
Bibliometrics, Program Development/methods, Program Evaluation/methods, Humans
ABSTRACT
The role of politics has often been discussed in evaluation theory and practice. The political context of a situation can have major effects on an evaluation's design, approach, and methods. Politics also has the potential to influence the decisions made from evaluation findings. The current study focuses on the influence of the political context on stakeholder decision making. Using a simulation scenario, this study compares stakeholder decision making in high-stakes and low-stakes evaluation contexts. Findings suggest that high-stakes political environments are more likely than low-stakes environments to lead to reduced reliance on technically appropriate measures and increased dependence on measures that better reflect the broader political environment.
Subjects
Decision Making, Evaluation Studies as Topic, Politics, Adult, Analysis of Variance, Female, Health Policy, Humans, Male, Middle Aged, United States
ABSTRACT
Stakeholder participation and evaluation use have attracted considerable attention from practitioners, theorists, and researchers. A common hypothesis is that participation is positively associated with evaluation use. Although the number of empirical studies conducted on this topic is impressive, quantitative research has held a minority position within this scientific production. This study mobilizes systematic review methods to 'map' the empirical literature that has quantitatively studied participation and use. The goal is to take stock of this literature and assess its strength of evidence (but not to synthesize the findings) and, based on this assessment, to provide directions for future research.