ABSTRACT
Youth development researchers and practitioners share an interest in ensuring that youth development programs contribute positively to youth outcomes. Engaging in evaluation and data-informed decision making (DIDM) can empower practitioners to improve and adapt programs, thereby improving youth outcomes. Yet not all practitioners are comfortable engaging in evaluation and/or DIDM. Forming partnerships and drawing on internal supporters of evaluation (or "champions") have been identified as potential strategies to build evaluation capacity and strengthen DIDM within an organization, but little research explores how to do so. This study engaged evaluation champions in four states to examine their experiences as they partnered with practitioners within their organizations. Results suggest that peer groups can promote evaluation capacity, especially when they draw on peers who already have an interest in evaluation. Practitioner engagement can also be developed by using less academic jargon in communication, highlighting the practical value of evaluation, and building capacity slowly.
Subjects
Adolescent Development, Capacity Building, Decision Making, Program Evaluation, Humans, Program Evaluation/methods, Capacity Building/organization & administration, Adolescent, Peer Group, Communication, Male, Female, Cooperative Behavior
ABSTRACT
Evaluation capacity building (ECB) continues to attract attention. Over the past two decades, a broad literature has emerged covering the dimensions, contexts, and practices of ECB. This article presents findings from a bibliometric analysis of ECB articles published in six evaluation journals from 2000 to 2019. The findings shed light on the communities of scholars that contribute to the ECB knowledge base, the connections between these communities, and the themes they cover. Informed by the findings, the article discusses future directions for ECB scholarship and how bibliometric analysis can supplement more established approaches to literature reviews.
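As an illustrative aside (not the authors' method), the sketch below shows one common bibliometric step behind findings like these: building a co-authorship network and extracting communities of scholars. The records, author names, and the choice of greedy modularity community detection in networkx are assumptions for demonstration only.

```python
# Illustrative co-authorship network for a bibliometric analysis.
# The records and author names below are hypothetical placeholders;
# a real analysis would load author lists exported from the journals' databases.
from itertools import combinations

import networkx as nx

records = [
    {"title": "ECB in nonprofits", "authors": ["Author A", "Author B"]},
    {"title": "Building evaluation culture", "authors": ["Author B", "Author C"]},
    {"title": "ECB frameworks revisited", "authors": ["Author A", "Author C", "Author D"]},
]

G = nx.Graph()
for rec in records:
    # Every pair of co-authors on a record adds (or strengthens) an edge.
    for a, b in combinations(sorted(set(rec["authors"])), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Greedy modularity maximization groups authors into communities,
# a rough proxy for the "communities of scholars" such an analysis maps.
communities = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities, start=1):
    print(f"Community {i}: {sorted(community)}")
```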
Subjects
Capacity Building, Publications, Humans, Program Evaluation, Bibliometrics
ABSTRACT
Evaluation capacity building (ECB) continues to attract the attention and interest of scholars and practitioners. Over the years, models, frameworks, strategies, and practices related to ECB have been developed and implemented. Although ECB is highly contextual, the evolution of knowledge in this area depends on learning from past efforts in a structured way. The purpose of the present article is to integrate the ECB literature published in evaluation journals. More specifically, the article aims to answer three questions: What types of articles and themes comprise the current literature on ECB? How are current practices of ECB described in the literature? And what is the current status of research on ECB? Informed by the findings of the review, the article concludes with suggestions for future ECB practice and scholarship.
ABSTRACT
Increasing demand for evidence generated through program evaluation has led many community-based organizations (CBOs) to seek external support for evaluation capacity building (ECB). However, studies have yet to explore the essential competencies required by evaluation capacity builders working in the community sector. Our qualitative study aimed to examine the perceptions of ECB practitioners (n = 12) regarding essential competencies for building evaluation capacity in this sector. Our findings reveal that ECB practice requires competencies not found in known evaluation competency frameworks, such as instructional design, knowledge of organizational change models, motivating stakeholders, and understanding of the community sector. Our findings provide valuable information to help guide future education and training related to building the evaluation capacity of community organizations.
Subjects
Capacity Building, Humans, Program Evaluation, Qualitative Research
ABSTRACT
Background: The program evaluation standards (PES) can be considered established criteria for high-quality evaluations. We emphasize the PES Utility standards and evaluation capacity building as we strive for meaningful application of our work in the real world. Purpose: We focused our methodology on understanding how stakeholders discussed utility and how their perceptions of our evaluation work aligned with the Utility domain of the PES. Setting: The West Virginia Clinical and Translational Science Institute (WVCTSI), a statewide multi-institutional entity for which we have conducted tracking and evaluation since 2012. Intervention: Sustained collaborative engagement of evaluation stakeholders with the goal of increasing their utilization of evaluation products and evaluative thinking. Research Design: Case study. Data Collection and Analysis: We interviewed five key stakeholders. We used themes developed from coding of interview data to inform document analyses. We used the interview and document analyses to develop additional themes and illustrative examples, as well as to develop and describe a five-level evaluation uptake scale. Findings: We describe shifts in the initiation, use, and internalization of evaluative thinking by non-evaluation personnel. These shifts prompted our development and application of an evaluation uptake scale to capture increased evaluation capacity among stakeholders over time. We discuss how a focus on the PES Utility standards and evaluation capacity building facilitated these shifts, and their implications for maximizing the utility of evaluation activity in large, complex programmatic evaluations.
ABSTRACT
The capacity to self-assess, learn from experience, make information-based decisions, and adapt over time is an essential driver of success for any project aiming at healthcare system change. Yet many such projects are managed by teams of healthcare providers with little evaluation capacity. In this article, we describe the support mission delivered by an interdisciplinary scientific team to 12 integrated care pilot projects in Belgium, mobilizing a set of tools and methods: a dashboard gathering population health indicators, a significant event reporting method, an annual report, and the development of a sustainable "learning community." The article provides a reflexive return on the design and implementation of such interventions aimed at building organizational evaluation capacity. Several lessons were drawn from our experience, in comparison with the broader evaluation literature: the support provided should be adapted to the varied needs and contexts of the beneficiary organizations, it has to foster experience-based learning, and it requires all stakeholders to adopt a learning posture. A long-term, secure perspective should be provided for organizations, and the availability of data and other resources is an essential precondition for successful work.
Subjects
Integrated Delivery of Health Care, Population Health, Humans, Belgium, Health Personnel, Patient Care Team
ABSTRACT
Performance-based funding and calls for publicly funded science to demonstrate societal impact are encouraging public research organisations to evaluate impact, the so-called impact agenda. This paper explores the evaluation methods of four fully or partially public-funded agricultural research organisations and how they are building evaluative capacity to respond to the impact agenda. Drawing on a cross-organisational comparison of each organisation's readiness to implement evaluation, the implications for improving evaluative capacity building (ECB) are discussed. This study extends the current literature on ECB, very little of which has focused on research organisations in general and agricultural research in particular. Driven by the impact agenda, the organisations are beginning to emphasise summative evaluation. Organisational leaders valuing the demonstration of impact and commitment to building evaluation capacity are important precursors to other aspects of organisational readiness to implement evaluation. However, organisational emphasis remains on using evaluation for accountability and to improve efficiency and the allocation of funding. The organisations have yet to systematically embed evaluation processes and capabilities for learning at programme and organisation levels. There is, therefore, an opportunity to develop organisation- and programme-level evaluation processes that inform each other and the pathways to impact from science. To realise this opportunity, organisations could strengthen internal and external networks of evaluation practitioners and academics to bridge the gap between the theory and practice of monitoring and evaluation for learning (MEL) and to begin to reshape organisational culture by using evaluation methods that are grounded in co-production and integrate scientific and societal values.
Subjects
Capacity Building, Humans, Ireland, New Zealand, Program Evaluation, Spain, Uruguay
ABSTRACT
The COVID-19 pandemic has affected every country across continents, whether a developed or developing economy. It has led to a dramatic loss of human life worldwide and presents an unprecedented challenge to public health, food systems, and the world of work. Conducting evaluation during the COVID-19 pandemic was even more challenging than evaluation in conflict areas. Sudden lockdowns and sustained restrictions were unexpected and disrupted evaluators' plans of action for ongoing as well as forthcoming evaluation activities. Not only primary data collection but also secondary research was hampered, as access to knowledge resource centres and libraries stopped when these centres closed. As far as primary data collection is concerned, not only did data collection stop, but even for evaluations where data collection had been completed, the electronic entry of filled-in survey schedules stalled for a while. The paper discusses the critical components of evaluation that are affected in pandemic-like situations, such as the use of participatory evaluation techniques, missed evidence-based policy decisions, threats to external and internal validity, and compromised ethical norms. To overcome such situations, the evaluation community should be ready with solutions such as the use of artificial intelligence, computer-assisted interviews, capacity building of community members for participatory evaluation, and mandatory ethical review of evaluation protocols.
Subjects
COVID-19, Artificial Intelligence, Communicable Disease Control, Humans, Pandemics, Program Evaluation, SARS-CoV-2
ABSTRACT
The Maternal, Infant, and Early Childhood Home Visiting (MIECHV) program, administered by the Health Resources and Services Administration in collaboration with the Administration for Children and Families, provides evidence-based home visiting services across 50 states, the District of Columbia, and five U.S. territories. MIECHV invests in comprehensive technical assistance (TA) to support and build the capacity of awardees to conduct rigorous evaluations of their programs. Throughout the evaluation process, awardees received TA from the Design Options for Home Visiting Evaluation (DOHVE) project. Between 2011 and 2020, over 173 state-led evaluations were conducted. Individual TA modalities included conference calls, emails, interactive and individualized webinars, developing and sharing resources, and involvement of content experts. When issues and challenges were identified across multiple awardees, DOHVE delivered targeted group TA to awardees with common needs that could benefit from peer-to-peer learning. When cross-cutting issues and challenges were identified, DOHVE used universal approaches such as webinars and guidance documents made available to all awardees. Through the MIECHV program, efforts have been made to promote awardee capacity by targeting all phases of the evaluation process, including planning, implementing, and disseminating findings, and by providing TA that is responsive and tailored to awardee-specific needs. This approach enabled DOHVE to support MIECHV awardees in expanding knowledge of their programs and the evidence base on home visiting. Lessons learned from TA provision highlight the importance of developing feasible plans and providing ongoing support during implementation.
Subjects
Capacity Building, House Calls, Child, Preschool Child, Family, Female, Humans, Postnatal Care, Pregnancy, Process Assessment (Health Care), Program Evaluation
ABSTRACT
OBJECTIVES: Building evaluation capacity for chronic disease prevention (CDP) is a critical step in ensuring the effectiveness of CDP programming over time. In this article, we highlight the findings of the qualitative arm of a mixed-methods needs assessment designed to assess the gaps and areas of strength within Ontario's public health system with respect to CDP evaluation. METHODS: We conducted 29 interviews and focus groups with representatives from 25 public health units (PHUs) and analyzed the data using thematic analysis. We sought to understand what gaps and challenges exist in the Ontario public health system around CDP evaluation. RESULTS: Challenges facing Ontario's PHUs in CDP evaluation include variation and centralization of capacity to evaluate, as well as competing priorities limiting the development of evaluative thinking. Participating PHUs identified the need for evaluation capacity building (ECB) strategies grounded in an understanding of the unique contexts in which they work and a desire for guidance in conducting a complex and thoughtful evaluation. Moving forward, PHUs noted a desire for a strong system of knowledge sharing and consultation across the public health system, including through strengthening existing partnerships with community collaborators. CONCLUSION: These results support the case for ECB strategies that are adaptive and context-sensitive and equip PHUs with the skills required to evaluate complex CDP programming.
Subjects
Chronic Disease, Public Health, Chronic Disease/prevention & control, Humans, Needs Assessment, Ontario, Qualitative Research
ABSTRACT
Policymakers' demand for increased accountability has compelled organizations to pay more attention to internal evaluation capacity building (ECB). The existing literature on ECB has focused on capacity building experiences and organizational research, with limited attention to the challenges that internal evaluation specialists face in building organizational evaluative capacity. To address this knowledge gap, we conducted a Delphi study with evaluation specialists in the United States' Cooperative Extension Service and developed consensus on the most pervasive ECB challenges as well as the most useful strategies for overcoming them. Challenges identified in this study include limited time and resources, limited understanding of the value of evaluation, evaluation being treated as an afterthought, and limited support and buy-in from administrators. Strategies identified include a shift toward an organizational culture in which evaluation is appreciated, buy-in and support from administration, clarifying the importance of quality over quantity of evaluations, and a strategic approach to ECB. The challenges identified in this study have persisted for decades, meaning administrators must recognize the persistence of these issues and make an earnest investment (financial and human resources) to make noticeable progress. The Delphi approach could be used more often to prioritize ECB efforts.
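As a hedged illustration of how a Delphi panel's ratings might be aggregated into consensus, the sketch below applies a commonly used rule (at least 70% agreement plus an interquartile range of 1 or less); the ratings, item labels, and thresholds are assumptions, not the study's actual data or procedure.

```python
# Hypothetical Delphi aggregation: panelists rate candidate ECB challenges on a
# 1-5 Likert scale; an item is flagged as reaching consensus when at least 70%
# of panelists rate it 4 or 5 and the interquartile range (IQR) is <= 1.
import pandas as pd

ratings = pd.DataFrame(
    {
        "limited time and resources": [5, 5, 4, 5, 4, 5, 4],
        "evaluation as an afterthought": [4, 5, 4, 4, 3, 5, 4],
        "limited administrator buy-in": [3, 4, 2, 5, 3, 4, 3],
    }
)  # rows = panelists, columns = candidate challenges

summary = pd.DataFrame(
    {
        "median": ratings.median(),
        "iqr": ratings.quantile(0.75) - ratings.quantile(0.25),
        "pct_agree": (ratings >= 4).mean() * 100,  # share of ratings of 4 or 5
    }
)
summary["consensus"] = (summary["pct_agree"] >= 70) & (summary["iqr"] <= 1)
print(summary)
```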
Subjects
Capacity Building/organization & administration, Program Evaluation/methods, Universities/organization & administration, Capacity Building/standards, Delphi Technique, Humans, Leadership, Organizational Culture, Time Factors, United States, Universities/standards, Work Engagement
ABSTRACT
This paper describes some of the main challenges of evaluating complex interventions, as well as the implications of those challenges for evaluation capacity building. It discusses lessons learned from a case study of an evaluation of Dancing with Parkinson's, an organization that provides dance classes to people with Parkinson's disease in Toronto, Canada. The implications are developed through a realist evaluation lens. Key lessons include the need to develop skills to understand program mechanisms and contexts, recognize multiple models of causality, apply mixed-methods designs, and ensure the successful scaling up and spread of an intervention.
Subjects
Capacity Building, Parkinson Disease, Canada, Humans, Program Evaluation, Research Design
ABSTRACT
This paper discusses the Ontario Brain Institute's theory of change for its Evaluation Support Program, a program designed to enhance the role of community organizations in providing care and services for people living with a brain disorder. This is done by helping community organizations build evaluation capacity and foster the use of evidence to inform their activities and services. Helping organizations build the capacity to track the "key ingredients" of their successes helps ensure that successes are replicated and that services can be improved to maximize the benefit people receive from them. This paper describes the hypothesized outcomes and early impacts of the Evaluation Support Program, as well as how the program will contribute to the field of evaluation capacity building.
Subjects
Capacity Building, Nervous System Diseases, Ecology, Humans, Organizations, Program Evaluation
ABSTRACT
In 2015, Dancing With Parkinson's (DWP), a Toronto-based community organization, participated in the Ontario Brain Institute's (OBI) newly launched Evaluation Support Program. This paper reflects on that experience. In particular, we identify the key lessons derived from the OBI initiative, discuss how these lessons have informed DWP practice going forward, and highlight what we consider to be the most valuable aspects of the Evaluation Support Program. While we now recognize the need to establish an evaluation culture within DWP, we find that there are significant challenges associated with both building and sustaining evaluation capacity in the context of a small community-based organization. Although DWP has built considerable strengths in informal evaluation capacity, such capacity on its own is insufficient to, for example, demonstrate DWP's impact to outside audiences or successfully scale up the program.
Subjects
Parkinson Disease, Humans, Ontario, Organizations, Program Evaluation
ABSTRACT
This paper is the introductory paper for a forum on evaluation capacity building for enhancing the impacts of research on brain disorders. It describes the challenges and opportunities of building evaluation capacity among community-based organizations in Ontario involved in enhancing brain health and supporting people living with a brain disorder. Using the example of a capacity building program called the "Evaluation Support Program," which is run by the Ontario Brain Institute, this forum discusses multiple themes, including evaluation capacity building, evaluation culture, and evaluation methodologies appropriate for evaluating complex community interventions. The goal of the Evaluation Support Program is to help community-based organizations build the capacity to demonstrate the value that they offer in order to improve, sustain, and spread their programs and activities. One feature of this forum is that perspectives on the Evaluation Support Program are provided by multiple stakeholders, including the community-based organizations, evaluation team members involved in capacity building, thought leaders in the fields of evaluation capacity building and evaluation culture, and the funders.
Subjects
Brain Diseases, Capacity Building, Humans, Ontario, Organizations, Program Evaluation
ABSTRACT
This paper discusses what was learned about evaluation capacity building with community organizations that deliver services to individuals with neurological disorders. Evaluation specialists engaged by the Ontario Brain Institute Evaluation Support Program were paired with community organizations such as Dancing With Parkinson's. Key lessons included the following: relationship building is central to this model of capacity building; community organizations have often had negative experiences with evaluation, and the idea that evaluations can be friendly tools for implementing meaningful programs is one key mechanism by which such an initiative can work; community organizations often need evaluation most in order to demonstrate their value; a strength of this initiative was that its focus was not just on creating products but mostly on developing a learning process in which capacities would remain; and the evaluation tools and skills that organizations found useful included developing a theory of change and the concept of heterogeneous mechanisms (informed by a realist evaluation lens).
Subjects
Capacity Building, Parkinson Disease, Humans, Organizations, Parkinson Disease/diagnosis, Parkinson Disease/therapy, Problem-Based Learning, Program Evaluation
ABSTRACT
This article, in three parts, reflects on the content of the six articles included in the forum. It begins with a description of the Evaluation Support Program, emphasizing its key attributes. Next, it raises two points regarding ECB theory: (1) the need to become clearer about the concepts and terms used to describe and study this phenomenon, and (2) the potential value of social science theory for understanding ECB and improving its practice. The article concludes with practical ideas for improving ECB: (1) framing it as an educative act, which assigns the evaluator the critical role of evaluation teacher/coach; and (2) never assuming that an ECB effort begins in uncharted territory, but rather that it builds on people's knowledge, skills, attitudes, and previous experiences.
Subjects
Brain, Capacity Building, Humans, Ontario, Program Evaluation
ABSTRACT
Evaluation capacity building (ECB) is a practice that can help organizations conduct and use evaluations; however, there is little research on the sustainable impact of ECB interventions. This study provides an empirical inquiry into how ECB develops sustained evaluation practice. Interviews were conducted with 15 organizational leaders from non-profits, higher education institutions, and foundations that had "bought in" to ECB and were at least six months removed from an ECB contract. The results of this work highlight how sustained evaluation practice developed over time and what these practices looked like in real-world settings. A developmental, iterative cycle for how ECB led organizations to sustain evaluation practice emerged around key components of sustainability. First, leadership supported ECB work and resources were dedicated to evaluation. Staff then began to conduct and use evaluation, which led to understanding the benefits of evaluation and promoted value and buy-in among staff. Common barriers and emerging sustainability supports not previously identified in the ECB literature, namely the "personal" factor and ongoing ECB practitioner contact, are described. Practical tips for ECB practitioners to promote sustainability are also detailed.
Subjects
Capacity Building/organization & administration, Program Evaluation/methods, Communication, Humans, Interviews as Topic, Leadership, Mentoring/organization & administration, Organizational Culture
ABSTRACT
The demand for improved quality of health promotion evaluation and greater capacity to undertake evaluation is growing, yet evidence of the challenges and facilitators of evaluation practice within the health promotion field is lacking. A limited number of evaluation capacity measurement instruments have been validated in government or non-government organisations (NGOs); however, no instrument has been designed for health promotion organisations. This study aimed to develop and validate an Evaluation Practice Analysis Survey (EPAS) to examine evaluation practices in health promotion organisations. Qualitative interviews, existing frameworks, and existing instruments informed the survey development. Health promotion practitioners from government agencies and NGOs completed the survey (n = 169). Principal components analysis was used to determine the scale structure, and Cronbach's α was used to estimate internal reliability. Logistic regression was conducted to assess the predictive validity of selected EPAS scales. The final survey instrument included 25 scales (125 items). The EPAS demonstrated good internal reliability (α > 0.7) for 23 scales. Dedicated resources and time for evaluation, leadership, organisational culture, and internal support for evaluation showed promising predictive validity. The EPAS can be used to describe elements of evaluation capacity at the individual, organisational, and system levels and to guide initiatives to improve evaluation practice in health promotion organisations.
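For readers unfamiliar with the reported psychometric steps, the sketch below computes Cronbach's α for one scale and runs a principal components analysis on simulated Likert-type items; the simulated data, scale size, and code structure are assumptions for illustration and do not reproduce the EPAS analysis.

```python
# Minimal psychometric sketch: Cronbach's alpha for one survey scale and a PCA
# over item responses, using simulated 5-point Likert data (not the EPAS data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_respondents, n_items = 169, 5

# Simulate correlated Likert-style items: a shared latent score plus item noise.
latent = rng.normal(size=(n_respondents, 1))
items = np.clip(np.rint(3 + latent + rng.normal(scale=0.7, size=(n_respondents, n_items))), 1, 5)

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# PCA on standardized items; the explained-variance ratios indicate whether the
# items behave as a single scale (one dominant component).
standardized = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
pca = PCA().fit(standardized)
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
```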
Subjects
Health Promotion/organization & administration, Program Evaluation/methods, Surveys and Questionnaires/standards, Health Promotion/economics, Health Promotion/standards, Humans, Leadership, Organizational Models, Policy, Principal Component Analysis, Professional Competence, Psychometrics, Qualitative Research, Reproducibility of Results
ABSTRACT
As the need for rigorous evidence of program efficacy increases, integrating evaluation activities into program implementation is becoming crucial. As a result, external evaluators are placing increased focus on evaluation capacity building as a practice. However, empirical evidence of how to foster evaluation capacity in different contexts remains limited. This study presents findings from an evaluation capacity survey conducted within a multisite Empowerment Evaluation initiative, in which an external evaluator worked with 20 project teams at diverse community agencies implementing HIV prevention projects. Survey results revealed that representatives from project teams (n = 33) reported significantly higher overall evaluation capacity after engaging with the external evaluator on planning and implementing their evaluations. Improvements differed by organization type, intervention type, staff position, and reported engagement in various activities over the course of the evaluation. Results indicated that empowerment evaluation and other stakeholder-focused evaluation approaches are broadly applicable when evaluation capacity building is a desired outcome, particularly when project staff can be engaged in planning the evaluation and in delivering technical assistance services. Accordingly, program funders, staff, and evaluators should encourage active engagement starting in the early stages of program and evaluation planning.
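As a minimal sketch of the kind of pre/post comparison described above, assuming paired self-report scores and a paired t-test (the scores, scale, and test choice are illustrative assumptions, not the study's data or necessarily its analytic approach):

```python
# Hypothetical paired comparison of self-reported evaluation capacity before and
# after working with the external evaluator (1-5 scale, n = 33 respondents).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = np.clip(rng.normal(loc=3.0, scale=0.6, size=33), 1, 5)
post = np.clip(pre + rng.normal(loc=0.4, scale=0.3, size=33), 1, 5)

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```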