Results 1 - 20 of 29
1.
Front Public Health ; 12: 1246897, 2024.
Article in English | MEDLINE | ID: mdl-38525334

ABSTRACT

Introduction: Evidence-based policies are a powerful tool for impacting health and addressing obesity. Effectively communicating evidence to policymakers is critical to ensure evidence is incorporated into policies. While all public health is local, limited knowledge exists regarding effective approaches for improving local policymakers' uptake of evidence-based policies. Methods: Local policymakers were randomized to view one of four versions of a policy brief (usual care, narrative, risk-framing, and narrative/risk-framing combination). They then answered a brief survey including questions about their impressions of the brief, their likelihood of using it, and how they determine legislative priorities. Results: Responses from 331 participants indicated that a majority rated local data (92%), constituent needs/opinions (92%), and cost-effectiveness data (89%) as important or very important in determining what issues they work on. The majority of respondents agreed or strongly agreed that briefs were understandable (87%), believable (77%), and held their attention (74%), with no brief version rated significantly higher than the others. Across the four types of briefs, 42% indicated they were likely to use the brief. Logistic regression models showed that those indicating that local data were important in determining what they work on were over seven times more likely to use the policy brief than those indicating that local data were less important in determining what they work on (aOR = 7.39, 95% CI = 1.86, 52.57). Discussion: Among local policymakers in this study there was no dominant format or type of policy brief; all brief types were rated similarly highly. This highlights the importance of carefully crafting clear, succinct, credible, and understandable policy briefs, using different formats depending on communication objectives. Participants indicated a strong preference for receiving materials incorporating local data. To ensure maximum effect, every effort should be made to include data relevant to a policymaker's local area in policy communications.
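
As a minimal illustrative sketch of how an adjusted odds ratio such as the aOR = 7.39 above is typically obtained from a logistic regression: the data frame, column names, and simulated values below are assumptions for illustration only, not the study's data or analysis code.

```python
# Illustrative sketch: deriving an adjusted odds ratio (aOR) and 95% CI
# from a logistic regression. All column names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 331
df = pd.DataFrame({
    "use_brief": rng.integers(0, 2, n),             # 1 = likely to use the brief
    "local_data_important": rng.integers(0, 2, n),  # 1 = rated local data important
    "brief_version": rng.integers(0, 4, n),         # 0-3 = usual care, narrative, risk, combo
})

# Fit the logistic model; C() treats brief version as a categorical covariate.
model = smf.logit("use_brief ~ local_data_important + C(brief_version)",
                  data=df).fit(disp=False)

# Exponentiate the coefficient and its confidence limits to get the aOR and 95% CI.
aor = np.exp(model.params["local_data_important"])
ci_low, ci_high = np.exp(model.conf_int().loc["local_data_important"])
print(f"aOR = {aor:.2f}, 95% CI = {ci_low:.2f}, {ci_high:.2f}")
```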


Subject(s)
Communication , Health Policy , Humans , Public Health , Obesity/prevention & control , Surveys and Questionnaires
2.
Implement Sci Commun ; 4(1): 106, 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37644495

ABSTRACT

BACKGROUND: Logic models map the short-term and long-term outcomes that are expected to occur with a program, and thus are an essential tool for evaluation. Funding agencies, especially in the United States (US), have encouraged the use of logic models among their grantees. They also use logic models to clarify expectations for their own funding initiatives. It is increasingly recognized that logic models should be developed through a participatory approach which allows input from those who carry out the program being evaluated. While there are many positive examples of participatory logic modeling, funders have generally not engaged grantees in developing the logic model associated with their own initiatives. This article describes an instance where a US funder of a multi-site initiative fully engaged the funded organizations in developing the initiative logic model. The focus of the case study is Implementation Science Centers in Cancer Control (ISC3), a multi-year initiative funded by the National Cancer Institute. METHODS: The reflective case study was collectively constructed by representatives of the seven centers funded under ISC3. Members of the Cross-Center Evaluation (CCE) Work Group jointly articulated the process through which the logic model was developed and refined. Individual Work Group members contributed descriptions of how their respective centers reviewed and used the logic model. Cross-cutting themes and lessons emerged through CCE Work Group meetings and the writing process. RESULTS: The initial logic model for ISC3 changed in significant ways as a result of the input of the funded groups. Authentic participation in the development of the logic model led to strong buy-in among the centers, as evidenced by their use of it. The centers shifted both their evaluation design and their programmatic strategy to better accommodate the expectations reflected in the initiative logic model. CONCLUSIONS: The ISC3 case study demonstrates how participatory logic modeling can be mutually beneficial to funders, grantees, and evaluators of multi-site initiatives. Funded groups have important insights about what is feasible and what will be required to achieve the initiative's stated objectives. They can also help identify the contextual factors that either inhibit or facilitate success, which can then be incorporated into both the logic model and the evaluation design. In addition, when grantees co-develop the logic model, they have a better understanding and appreciation of the funder's expectations and thus are better positioned to meet those expectations.

3.
J Public Health Manag Pract ; 29(5): 691-700, 2023.
Article in English | MEDLINE | ID: mdl-37290132

ABSTRACT

CONTEXT: Understanding the extent to which equity-focused work is occurring in public health departments (eg, in chronic disease programs) can identify areas of success and what is needed to move the needle on health equity. OBJECTIVE: The study objective was to characterize the patterns and correlates of equity-related practices in US state and territorial public health practice. DESIGN: The design was a multimethod (quantitative and qualitative), cross-sectional study. SETTING: The setting included US state and territorial public health departments. PARTICIPANTS: Chronic disease prevention practitioners (N = 600) completed self-report surveys in July 2022 through August 2022 (analyzed in September 2022 through December 2022). MAIN OUTCOME MEASURES: Health equity data were obtained across 4 domains: (1) staff skills, (2) work unit practices, (3) organizational priorities and values, and (4) partnerships and networks. RESULTS: There was a wide range in self-reported performance across the health equity variables. The highest values (those agreeing and strongly agreeing) were related to staff skills (eg, the ability to describe the causes of inequities [82%]). Low agreement was reported for multiple items, indicating the lack of systems for tracking progress on health equity (32%), the lack of hiring of staff members who represent disadvantaged communities (33%), and limited use of principles for community engagement (eg, sharing decision-making authority with partners [34%]). Qualitative data provided tangible examples showing how practitioners and their agencies are turning an array of health equity concepts into actions. CONCLUSIONS: There is urgency in addressing health equity and our data suggest considerable room for enhancing health equity practices in state and territorial public health. To support these activities, our findings provide some of the first information on areas of progress, gaps in practice, and where to target technical assistance, capacity building efforts, and accreditation planning.


Subject(s)
Health Equity , United States , Humans , Cross-Sectional Studies , Public Health Practice , Public Health/methods , Self Report , Chronic Disease
4.
Res Sq ; 2023 May 18.
Article in English | MEDLINE | ID: mdl-37292912

ABSTRACT

Background: It is increasingly being recognized that logic models should be developed through a participatory approach which allows input from those who carry out the program being evaluated. While there are many positive examples of participatory logic modeling, funders have generally not used this approach in the context of multi-site initiatives. This article describes an instance where the funder and evaluator of a multi-site initiative fully engaged the funded organizations in developing the initiative logic model. The focus of the case study is Implementation Science Centers in Cancer Control (ISC3), a multi-year initiative funded by the National Cancer Institute (NCI). Methods: The case study was collectively constructed by representatives of the seven centers funded under ISC3. Members of the Cross-Center Evaluation (CCE) Work Group jointly articulated the process through which the logic model was developed and refined. Individual Work Group members contributed descriptions of how their respective centers reviewed and used the logic model. Cross-cutting themes and lessons emerged through CCE Work Group meetings and the writing process. Results: The initial logic model for ISC3 changed in significant ways as a result of the input of the funded groups. Authentic participation in the development of the logic model led to strong buy-in among the centers, as evidenced by their use of it. The centers shifted both their evaluation design and their programmatic strategy to better accommodate the expectations reflected in the initiative logic model. Conclusions: The ISC3 case study provides a positive example of how participatory logic modeling can be mutually beneficial to funders, grantees, and evaluators of multi-site initiatives. Funded groups have important insights about what is feasible and what will be required to achieve the initiative's stated objectives. They can also help identify the contextual factors that either inhibit or facilitate success, which can then be incorporated into both the logic model and the evaluation design. In addition, when grantees co-develop the logic model, they have a better understanding and appreciation of the funder's expectations, and thus are better positioned to meet those expectations.

5.
J Public Health Manag Pract ; 29(2): 213-225, 2023.
Article in English | MEDLINE | ID: mdl-36240510

ABSTRACT

OBJECTIVES: Evidence-based decision making (EBDM) capacity in local public health departments is foundational to meeting both organizational and individual competencies and fulfilling expanded roles. In addition to on-the-job training, organizational supports are needed to prepare staff; yet, less is known in this area. This qualitative study explores supportive management practices instituted as part of a training and technical assistance intervention. DESIGN: This qualitative study used a semistructured interview guide to elicit participants' descriptions and perceptions via key informant interviews. Verbatim transcripts were coded and thematic analyses were conducted. SETTING: Local public health departments in a US Midwestern state participated in the project. PARTICIPANTS: Seventeen middle managers and staff from 4 local health departments participated in remote, audio-recorded interviews. INTERVENTION: Following delivery of a 3½-day in-person training, the study team met with health department leadership teams so that each department could select supportive agency policies and procedures to revise or newly create. Periodic remote meetings included collaborative problem-solving, sharing of informational resources, and encouragement. MAIN OUTCOME MEASURES: Management practices instituted to support EBDM and their impact on day-to-day work, as described by the interview participants. RESULTS: Leadership and middle management practices deemed most helpful included dedicating staff; creating specific guidelines; setting expectations; and providing trainings, resources, and guidance. Health departments with a preexisting supportive organizational culture and climate were able to move more quickly and fully to integrate supportive management practices. Workforce development included creation of locally tailored overviews for all staff members and onboarding of new staff. Staff wanted additional hands-on skill-building trainings. Several worked with partners to incorporate evidence-based processes into community health improvement plans. CONCLUSIONS: Ongoing on-the-job experiential learning is needed to integrate EBDM principles into day-to-day public health practice. Management practices established by leadership teams and middle managers can create supportive work environments for EBDM integration.


Subject(s)
Evidence-Based Practice , Public Health , Humans , Public Health/methods , Evidence-Based Practice/methods , Public Health Practice , Qualitative Research , Decision Making
6.
Front Public Health ; 10: 853791, 2022.
Article in English | MEDLINE | ID: mdl-35570955

ABSTRACT

Background: Local health departments (LHDs) in the United States are charged with preventing disease and promoting health in their respective communities. Understanding and addressing the supports LHDs need to foster a climate and culture supportive of evidence-based decision making (EBDM) processes can enhance delivery of effective practices and services. Methods: We employed a stepped-wedge trial design to test staggered delivery of implementation supports in 12 LHDs (Missouri, USA) to expand capacity for EBDM processes. The intervention was an in-person training in EBDM and continued support by the research team over 24 months (March 2018-February 2020). We used a mixed-methods approach to evaluate: (1) individuals' EBDM skills, (2) organizational supports for EBDM, and (3) administered evidence-based interventions. LHD staff completed a quantitative survey at 4 time points measuring their EBDM skills, organizational supports, and evidence-based interventions. We selected 4 LHDs with high contact and engagement during the intervention period to interview staff (n = 17) about facilitators and barriers to EBDM. We used mixed-effects linear regression to examine quantitative survey outcomes. Interviews were transcribed verbatim and coded through a dual independent process. Results: Overall, 519 LHD staff were eligible and invited to complete quantitative surveys during control periods and 593 during intervention (365 unique individuals). A total of 434 completed during control and 492 during intervention (83.6 and 83.0% response, respectively). In both trial modes, half the participants had at least a master's degree (49.7-51.7%) and most were female (82.1-83.8%). No significant intervention effects were found in EBDM skills or in implementing evidence-based interventions. Two organizational support scores decreased in intervention vs. control periods: awareness (-0.14, 95% CI -0.26 to -0.01, p < 0.05) and climate cultivation (-0.14, 95% CI -0.27 to -0.02, p < 0.05), but both improved over time among all participants. Interviewees noted staff turnover and limited time, resources, and momentum as challenges to continuing EBDM work. Setting expectations, programmatic reviews, and pre-existing practices were seen as facilitators. Conclusions: Challenges (e.g., turnover, resources) may disrupt LHDs' abilities to fully embed organizational processes which support EBDM. This study and related literature provide insight into how best to support LHDs in building capacity to use and sustain evidence-based practices.
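
As a rough illustration of the kind of mixed-effects linear regression a stepped-wedge design calls for, the sketch below fits a model with a fixed intervention term, a fixed effect of survey wave, and a random intercept per LHD. All data, column names, and effect sizes are invented assumptions and do not reproduce the study's analysis.

```python
# Illustrative sketch of a stepped-wedge-style analysis: linear mixed model
# with fixed intervention and time effects and a random intercept per cluster.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for lhd in range(12):                                # 12 local health departments
    for wave in range(4):                            # 4 survey time points
        for _ in range(10):                          # ~10 staff responses per LHD per wave
            treated = int(wave >= (lhd % 3) + 1)     # staggered crossover to intervention
            score = 3.0 + 0.2 * treated + 0.05 * wave + rng.normal(0, 0.5)
            rows.append({"lhd": lhd, "wave": wave, "treated": treated, "score": score})
df = pd.DataFrame(rows)

# The random intercept for LHD accounts for clustering of staff within departments.
result = smf.mixedlm("score ~ treated + wave", data=df, groups=df["lhd"]).fit()
print(result.summary())
```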


Subject(s)
Evidence-Based Practice , Local Government , Evidence-Based Practice/methods , Female , Humans , Male , Surveys and Questionnaires , United States
7.
Implement Sci Commun ; 3(1): 41, 2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35418309

ABSTRACT

BACKGROUND: Multi-center research initiatives offer opportunities to develop and strengthen connections among researchers. These initiatives often have goals of increased scientific collaboration which can be examined using social network analysis. METHODS: The National Cancer Institute (NCI)-funded Implementation Science Centers in Cancer Control (ISC3) initiative conducted an online social network survey in its first year of funding (2020) to (1) establish baseline network measures, including the extent of cross-center collaboration, and (2) assess factors associated with a network member's access to the network, such as one's implementation science (IS) expertise. Members of the seven funded centers and NCI program staff identified collaborations in planning/conducting research, capacity building, product development, scientific dissemination, and practice/policy dissemination. RESULTS: Of the 192 invitees, 182 network members completed the survey (95%). The most prevalent roles were faculty (60%) and research staff (24%). Almost one-quarter (23%) of members reported advanced expertise in IS, 42% intermediate, and 35% beginner. Most members were female (69%) and white (79%). One-third (33%) of collaboration ties were among members from different centers. Across all collaboration activities, the network had a density of 14%, suggesting moderate cohesion. Degree centralization (0.33) and betweenness centralization (0.07) measures suggest a fairly dispersed network (no single or few central member(s) holding all connections). The most prevalent and densely connected collaboration was in planning/conducting research (1470 ties; 8% density). Practice/policy dissemination had the fewest collaborations and the lowest density (284 ties; 3% density), and the largest number of non-connected members (n=43). Access to the ISC3 network varied significantly depending on members' level of IS expertise, role within the network, and racial/ethnic background. Across all collaboration activities, the most connected members included those with advanced IS expertise, faculty and NCI staff, and Hispanic or Latino and white members. CONCLUSIONS: Results establish a baseline for assessing the growth of cross-center collaborations, highlighting specific areas in need of particular growth in network collaborations, such as increasing engagement of racial and ethnic minorities and trainees or those with less expertise in IS.
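
The density and centralization measures cited above can be illustrated on a toy network. The graph below is not the ISC3 network, and the Freeman centralization helper is a generic formulation (observed concentration divided by that of a star graph of the same size, which maximizes both degree and betweenness centralization), not the authors' code.

```python
# Illustrative sketch of network density and Freeman centralization measures
# on a made-up collaboration network (not the ISC3 survey data).
import networkx as nx

G = nx.gnp_random_graph(30, 0.14, seed=2)   # toy network with roughly 14% density

def freeman_centralization(graph, centrality_fn):
    """Freeman centralization: sum of (max - c_i) over nodes, divided by the
    same sum computed on a star graph of equal size (the maximizing case)."""
    c = centrality_fn(graph)
    observed = sum(max(c.values()) - v for v in c.values())
    star = nx.star_graph(graph.number_of_nodes() - 1)
    cs = centrality_fn(star)
    theoretical_max = sum(max(cs.values()) - v for v in cs.values())
    return observed / theoretical_max

print("density:", round(nx.density(G), 3))
print("degree centralization:", round(freeman_centralization(G, nx.degree_centrality), 3))
print("betweenness centralization:", round(freeman_centralization(G, nx.betweenness_centrality), 3))
```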

8.
Am J Prev Med ; 61(2): 299-307, 2021 08.
Article in English | MEDLINE | ID: mdl-34020850

ABSTRACT

The evidence-based public health course equips public health professionals with skills and tools for applying evidence-based frameworks and processes in public health practice. To date, training has included participants from all 50 U.S. states, 2 U.S. territories, and multiple other countries. This study pooled follow-up efforts (5 surveys, with 723 course participants, 2005-2019) to explore the benefits, application, and barriers to applying the evidence-based public health course content. All analyses were completed in 2020. The most common benefits (reported by >80% of all participants) were identifying ways to apply knowledge in their work, acquiring new knowledge, and becoming a better leader who promotes evidence-based approaches. Participants most frequently applied course content to searching the scientific literature (72.9%) and least frequently to writing grants (42.7%). Lack of funds for continued training (35.3%), not having enough time to implement evidence-based public health approaches (33.8%), and not having coworkers trained in evidence-based public health (33.1%) were common barriers to applying the content from the course. Mean scores were calculated for benefits, application, and barriers to explore subgroup differences. European participants generally reported higher benefits from the course (mean difference=0.12, 95% CI=0.00, 0.23) and higher frequency of application of the course content to their job (mean difference=0.17, 95% CI=0.06, 0.28) than U.S. participants. Participants from later cohorts (2012-2019) reported more overall barriers to applying course content in their work (mean difference=0.15, 95% CI=0.05, 0.24). The evidence-based public health course represents an important strategy for increasing the capacity (individual skills) for evidence-based processes within public health practice. Organization-level methods are also needed to scale up and sustain capacity-building efforts.
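
A minimal sketch of the form of subgroup comparison reported above: a between-group mean difference with a 95% confidence interval. The score arrays, group sizes, and scale below are simulated assumptions, not the course evaluation data.

```python
# Illustrative sketch: between-group mean difference with a 95% CI.
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW, CompareMeans

rng = np.random.default_rng(3)
europe_scores = rng.normal(3.6, 0.5, 120)   # hypothetical mean benefit scores (1-4 scale)
us_scores = rng.normal(3.5, 0.5, 500)

cm = CompareMeans(DescrStatsW(europe_scores), DescrStatsW(us_scores))
diff = europe_scores.mean() - us_scores.mean()
low, high = cm.tconfint_diff(usevar="unequal")   # Welch-style CI for the difference
print(f"mean difference = {diff:.2f}, 95% CI = {low:.2f}, {high:.2f}")
```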


Subject(s)
Capacity Building , Public Health , Europe , Health Personnel , Humans , Surveys and Questionnaires
9.
Public Health Rep ; 136(6): 710-718, 2021.
Article in English | MEDLINE | ID: mdl-33593131

ABSTRACT

OBJECTIVES: Evidence-based decision making (EBDM) allows public health practitioners to implement effective programs and policies fitting the preferences of their communities. To engage in EBDM, practitioners must have skills themselves, their agencies must engage in administrative evidence-based practices (A-EBPs), and leaders must encourage the use of EBDM. We conducted this longitudinal study to quantify perceptions of individual EBDM skills and A-EBPs, as well as the longitudinal associations between the 2. METHODS: An online survey completed among US state health department practitioners in 2016 and 2018 assessed perceptions of respondents' skills in EBDM and A-EBPs. We used χ2 tests, t tests, and linear regressions to quantify changes over time, differences by demographic characteristics, and longitudinal associations between individual skills and A-EBPs among respondents who completed both surveys (N = 336). RESULTS: Means of most individual EBDM skills and A-EBPs did not change significantly from 2016 to 2018. We found significant positive associations between changes in A-EBPs and changes in EBDM skill gaps: for example, a 1-point increase in the relationships and partnerships score was associated with a narrowing of the EBDM skill gap (β estimate = 0.38; 95% CI, 0.15-0.61). At both time points, perceived skills and A-EBPs related to financial practices were low. CONCLUSIONS: Findings from this study can guide the development and dissemination of initiatives designed to simultaneously improve individual and organizational capacity for EBDM in public health settings. Future studies should focus on types of strategies most effective to build capacity in particular types of agencies and practitioners, to ultimately improve public health practice.
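
As an illustrative sketch of a change-score regression like the one underlying the β = 0.38 association above, the snippet below regresses change in the EBDM skill gap on change in one A-EBP score. All values and variable names are simulated placeholders, not the survey data.

```python
# Illustrative sketch of a longitudinal change-score regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 336
rel_change = rng.normal(0, 1, n)                          # change in A-EBP score, 2016 to 2018
gap_change = 0.38 * rel_change + rng.normal(0, 1, n)      # change in EBDM skill gap
df = pd.DataFrame({"gap_change": gap_change, "rel_change": rel_change})

fit = smf.ols("gap_change ~ rel_change", data=df).fit()
beta = fit.params["rel_change"]
low, high = fit.conf_int().loc["rel_change"]
print(f"beta = {beta:.2f}, 95% CI = {low:.2f}, {high:.2f}")
```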


Subject(s)
Health Personnel/psychology , Perception , Adult , Decision Making , Evidence-Based Practice/methods , Female , Health Personnel/statistics & numerical data , Humans , Leadership , Male , Middle Aged , Program Evaluation/standards , State Health Planning and Development Agencies/organization & administration , State Health Planning and Development Agencies/statistics & numerical data , Surveys and Questionnaires , United States
10.
Acad Med ; 96(1): 86-92, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32941251

ABSTRACT

PROBLEM: Dissemination and implementation (D&I) science provides the tools needed to close the gap between known intervention strategies and their effective application. The authors report on the Mentored Training for Dissemination and Implementation Research in Cancer (MT-DIRC) program, a D&I training program for postdoctoral or early-career cancer prevention and control scholars. APPROACH: MT-DIRC was a 2-year training institute in which fellows attended 2 annual Summer Institutes and other conferences and received didactic, group, and individual instruction; individualized mentoring; and other supports (e.g., pilot funding). A quasi-experimental design compared changes in 3 areas: mentoring, skills, and network composition. To evaluate mentoring and D&I skills, data from fellows on their mentors' mentoring competencies, their perspectives on the importance of and satisfaction with mentoring priority areas, and their self-rated skills in D&I competency domains were collected. Network composition data were collected from faculty and fellows for 3 core social network domains: contact, mentoring, and collaboration. Paired t tests (mentoring), linear mixed models (skills), and descriptive analyses (network composition) were performed. OUTCOMES: Mentors were rated as highly competent across all mentoring competencies, and each mentoring priority area showed reductions in gaps between satisfaction and importance from 6 to 18 months after the first Summer Institute. Fellows' self-rated skills in D&I competencies improved significantly in all domains over time (range: 42.5%-52.9% increase from baseline to 18 months post-first Summer Institute). Mentorship and collaboration networks grew over time, with the highest number of collaboration network ties for scholarly manuscripts (n = 199) in 2018 and for research projects (n = 160) in 2019. NEXT STEPS: Building on study findings and existing literature, mentored training of scholars is an important approach for building D&I skills and networks, and thus for better applying the vast amount of available intervention evidence to benefit cancer control.
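
A minimal sketch of the paired t test used for the mentoring importance-versus-satisfaction gaps: the ratings below are simulated and the rating scale is an assumption, not taken from the study.

```python
# Illustrative sketch: paired t test comparing importance vs. satisfaction
# ratings for one mentoring priority area, rated by the same fellows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_fellows = 55
importance = rng.normal(4.5, 0.4, n_fellows)                  # hypothetical 1-5 scale
satisfaction = importance - rng.normal(0.3, 0.3, n_fellows)   # modest gap below importance

t_stat, p_value = stats.ttest_rel(importance, satisfaction)
gap = (importance - satisfaction).mean()
print(f"mean importance-satisfaction gap = {gap:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```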


Subject(s)
Biomedical Research/organization & administration , Delivery of Health Care/organization & administration , Information Dissemination/methods , Mentoring/organization & administration , Neoplasms/prevention & control , Research Personnel/education , Translational Research, Biomedical/education , Adult , Curriculum , Education, Medical, Continuing/organization & administration , Female , Humans , Male , Mentors , Middle Aged , Translational Research, Biomedical/organization & administration
11.
BMC Med Educ ; 20(1): 237, 2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32723326

ABSTRACT

BACKGROUND: Mentored training approaches help build capacity for research through mentoring networks and skill building activities. Capacity for dissemination and implementation (D&I) research in cancer is needed and mentored training programs have been developed. Evaluation of mentored training programs through quantitative approaches often provides us with information on "what" improved for participants. Qualitative approaches provide a deeper understanding of "how" programs work best. METHODS: Qualitative interviews were conducted with 21 fellows of the National Cancer Institute-funded Mentored Training for Dissemination and Implementation in Cancer to gain understanding of their experiences with mentoring received during the program. Fellows were selected from all 55 trained participants based upon their gain in D&I research skills (highest and lowest) and number of collaborative connections in the program network (highest and lowest) reported in previous quantitative surveys. Phone interviews were recorded with permission, transcribed verbatim, and de-identified for analysis. Codes were developed a priori to reflect interview guide concepts followed by further development and iterative coding of three common themes that emerged: 1) program and mentoring structure, 2) importance of mentor attributes, and 3) enhanced capacity: credentials, confidence, credibility and connections. RESULTS: Interviews provided valuable information about program components that worked best and impacts attributed to participation in the program. Fellows reported that regular monthly check-in calls with mentors helped to keep their research moving forward and that group mentoring structures aided in their learning of basic D&I research concepts and their application. Accessible, responsive, and knowledgeable mentors were commonly mentioned by fellows as a key to their success in the program. Fellows mentioned various forms of impact that they attributed to their participation in the program including gaining credibility in the field, a network of peers and experts, and career developments (e.g., collaborative publications and grant funding). CONCLUSIONS: These findings suggest that mentored training works best when mentoring is structured and coupled with applied learning and when respected and dedicated mentors are on board. Increased scientific collaborations and credibility within a recognized network are important trainee experiences that should be considered when designing, implementing, and sustaining mentored training programs.


Subject(s)
Mentoring , Neoplasms , Delivery of Health Care , Humans , Mentors , Program Evaluation , Qualitative Research
12.
Implement Sci ; 15(1): 30, 2020 05 11.
Article in English | MEDLINE | ID: mdl-32393285

ABSTRACT

BACKGROUND: There is a continued need to evaluate training programs in dissemination and implementation (D&I) research. Scientific products yielded from trainees are an important and objective measure to understand the capacity growth within the D&I field. This study evaluates our mentored training program in terms of scientific productivity among applicants. METHODS: Post-doctoral and early-career cancer researchers were recruited and applied to the R25 Mentored Training for Dissemination and Implementation Research in Cancer (MT-DIRC) between 2014 and 2017. Using application details and publicly available bibliometric and funding data, we compared selected fellows with unsuccessful applicants (nonfellows). We extracted Scopus citations and US federal grant funding records for all applicants (N = 102). Funding and publication abstracts were de-identified and coded for D&I focus and aggregated to the applicant level for analysis. Logistic regression models were explored separately for the odds of (1) a D&I publication and (2) US federal grant funding post year of application among fellows (N = 55) and nonfellows (N = 47). Additional models were constructed to include independent variables that attenuated the program's association by 5% or more. Only US-based applicants (N = 87) were included in the grant funding analysis. RESULTS: Fellows and nonfellows were similar across several demographic characteristics. Fellows were more than 3 times more likely than nonfellows to have grant funding after MT-DIRC application year (OR 3.2; 95% CI 1.1-11.0) while controlling for time since application year; the association estimate was 3.1 (95% CI 0.98-11.0) after adjusting for both cancer research area and previous grant funding. For publications, fellows were almost 4 times more likely to publish D&I-focused work adjusting for time (OR 3.8; 95% CI 1.7-9.0). This association lessened after adjusting for previous D&I publication and years since undergraduate degree (OR 2.9; 95% CI 1.2-7.5). CONCLUSIONS: We document the association of a mentored training approach with built-in networks of peers to yield productive D&I researchers. Future evaluation efforts could be expanded to include other forms of longer-term productivity such as policy or practice change as additional objective measures. D&I research trainings in the USA and internationally should consider common evaluation measures.
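
For orientation, an unadjusted odds ratio of the fellow-versus-nonfellow kind discussed above can be computed from a 2x2 table as sketched below. The cell counts are hypothetical and do not match the study's data; the ORs reported in the abstract are adjusted model estimates.

```python
# Illustrative sketch: crude odds ratio and 95% CI from a 2x2 table
# (fellows vs. nonfellows by post-application grant funding).
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

#                 funded   not funded
table = np.array([[25,      22],        # fellows (hypothetical counts)
                  [ 8,      32]])       # nonfellows (hypothetical counts)

t22 = Table2x2(table)
low, high = t22.oddsratio_confint()
print(f"OR = {t22.oddsratio:.1f}, 95% CI = {low:.1f}, {high:.1f}")
```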


Subject(s)
Biomedical Research/organization & administration , Implementation Science , Information Dissemination/methods , Medical Oncology/organization & administration , Mentors/education , Female , Humans , Male , Peer Group , Research Support as Topic/statistics & numerical data
13.
BMC Health Serv Res ; 20(1): 258, 2020 Mar 30.
Article in English | MEDLINE | ID: mdl-32228688

ABSTRACT

BACKGROUND: Public health resources are limited and best used for effective programs. This study explores associations of mis-implementation in public health (ending effective programs or continuing ineffective programs) with organizational supports for evidence-based decision making among U.S. local health departments. METHODS: The national U.S. sample for this cross-sectional study was stratified by local health department jurisdiction population size. One person was invited from each randomly selected local health department: the leader in chronic disease, or the director. Of 600 selected, 579 had valid email addresses; 376 completed the survey (64.9% response). Survey items assessed frequency of and reasons for mis-implementation. Participants indicated agreement with statements on organizational supports for evidence-based decision making (7-point Likert). RESULTS: Thirty percent (30.0%) reported that programs that should have continued often or always ended (inappropriate termination); organizational supports for evidence-based decision making were not associated with the frequency of programs ending. The main reason given for inappropriate termination was that grant funding ended (86.0%). Fewer (16.4%) reported that programs that should have ended often or always continued (inappropriate continuation). Higher perceived organizational supports for evidence-based decision making were associated with less frequent inappropriate continuation (odds ratio = 0.86, 95% confidence interval 0.79, 0.94). All organizational support factors were negatively associated with inappropriate continuation. Top reasons were sustained funding (55.6%) and support from policymakers (34.0%). CONCLUSIONS: Organizational supports for evidence-based decision making may help local health departments avoid continuing programs that should end. Creative mechanisms of support are needed to avoid inappropriate termination. Understanding what influences mis-implementation can help identify supports for de-implementation of ineffective programs so resources can go towards evidence-based programs.


Subject(s)
Evidence-Based Practice , Program Evaluation , Public Health Administration , Chronic Disease , Cross-Sectional Studies , Decision Making , Female , Humans , Leadership , Local Government , Male , Odds Ratio , Resource Allocation , Surveys and Questionnaires , United States
14.
J Public Health Manag Pract ; 25(4): 373-381, 2019.
Article in English | MEDLINE | ID: mdl-31136511

ABSTRACT

OBJECTIVE: Use of research evidence in public health decision making can be affected by organizational supports. Study objectives are to identify patterns of organizational supports and explore associations with research evidence use for job tasks among public health practitioners. DESIGN: In this longitudinal study, we used latent class analysis to identify organizational support patterns, followed by mixed logistic regression analysis to quantify associations with research evidence use. SETTING: The setting included 12 state public health department chronic disease prevention units and their external partnering organizations involved in chronic disease prevention. PARTICIPANTS: Chronic disease prevention staff from 12 US state public health departments and partnering organizations completed self-report surveys at 2 time points, in 2014 and 2016 (N = 872). MAIN OUTCOME MEASURES: Latent class analysis was employed to identify subgroups of survey participants with distinct patterns of perceived organizational supports. Two classify-analyze approaches (maximum probability assignment and multiple pseudo-class draws) were used in 2017 to investigate the association between latent class membership and research evidence use. RESULTS: The optimal model identified 4 latent classes, labeled as "unsupportive workplace," "low agency leadership support," "high agency leadership support," and "supportive workplace." With maximum probability assignment, participants in "high agency leadership support" (odds ratio = 2.08; 95% CI, 1.35-3.23) and "supportive workplace" (odds ratio = 1.74; 95% CI, 1.10-2.74) were more likely to use research evidence in job tasks than "unsupportive workplace." The multiple pseudo-class draws produced comparable results with odds ratio = 2.09 (95% CI, 1.31-3.30) for "high agency leadership support" and odds ratio = 1.74 (95% CI, 1.07-2.82) for "supportive workplace." CONCLUSIONS: Findings suggest that leadership support may be a crucial element of organizational supports to encourage research evidence use. Organizational supports such as supervisory expectations, access to evidence, and participatory decision making may need leadership support as well to improve research evidence use in public health job tasks.
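
The two classify-analyze approaches named above can be sketched from a matrix of posterior class-membership probabilities produced by a latent class model. The probabilities and dimensions below are invented for illustration, and the downstream mixed logistic regression step is only indicated in a comment.

```python
# Illustrative sketch of the two classify-analyze approaches: maximum
# probability assignment and multiple pseudo-class draws. Posterior class
# probabilities here are made up, not latent class model output.
import numpy as np

rng = np.random.default_rng(6)
# 5 participants x 4 latent classes of perceived organizational support
posterior = np.array([[0.70, 0.20, 0.05, 0.05],
                      [0.10, 0.60, 0.20, 0.10],
                      [0.25, 0.25, 0.30, 0.20],
                      [0.05, 0.05, 0.10, 0.80],
                      [0.40, 0.35, 0.15, 0.10]])

# 1) Maximum probability assignment: each participant gets the most likely class.
max_prob_class = posterior.argmax(axis=1)

# 2) Multiple pseudo-class draws: class membership is drawn repeatedly from the
#    posterior; the outcome model would be refit per draw and the results pooled.
n_draws = 20
pseudo_classes = np.array([
    [rng.choice(posterior.shape[1], p=row) for row in posterior]
    for _ in range(n_draws)
])

print("maximum probability assignment:", max_prob_class)
print("first pseudo-class draw:       ", pseudo_classes[0])
```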


Subject(s)
Public Health Practice/standards , Research/standards , State Government , Adult , Chronic Disease/prevention & control , Female , Humans , Latent Class Analysis , Longitudinal Studies , Male , Middle Aged , Odds Ratio , Public Health Practice/statistics & numerical data , Research/statistics & numerical data
15.
Front Public Health ; 6: 257, 2018.
Article in English | MEDLINE | ID: mdl-30271767

ABSTRACT

Background: Evidence-based decision making (EBDM) in health programs and policies can reduce population disease burden. Training in EBDM for the public health workforce is necessary to continue capacity building efforts. While in-person training for EBDM is established and effective, gaps in skills for practicing EBDM remain. Distance and blended learning (a combination of distance and in-person) have the potential to increase reach and reduce costs for training in EBDM. However, evaluations to date have focused primarily on in-person training. Here we examine effectiveness of in-person trainings compared to distance and blended learning. Methods: A quasi-experimental pre-post design was used to compare gaps in skills for EBDM among public health practitioners who received in-person training, distance and blended learning, and controls. Nine training sites agreed to replicate a course in EBDM with public health professionals in their state or region. Courses were conducted either in-person (n = 6) or via distance or blended learning (n = 3). All training participants, along with controls, were asked to complete a survey before the training and 6 months post-training. Paired surveys were used in linear mixed models to compare effectiveness of training compared to controls. Results: Response rates for pre- and post-surveys were 63.9 and 48.8% for controls and 81.6 and 62.0% for training groups. Participants who completed both pre- and post-surveys (n = 272; 84 in-person, 67 distance or blended, and 121 controls) were mostly female (89.0%) and about two-thirds (65.3%) were from local health departments. In comparison to controls, overall gaps in skills for EBDM were reduced for participants of both in-person training (β = -0.55, SE = 0.27, p = 0.041) and distance or blended training (β = -0.64, SE = 0.29, p = 0.026). Conclusions: This study highlights the importance of using diverse methods of learning (including distance or blended in-person approaches) for scaling up capacity building in EBDM. Further exploration into effective implementation strategies for EBDM trainings specific to course delivery type and understanding delivery preferences are important next steps.

16.
BMJ Open Diabetes Res Care ; 6(1): e000558, 2018.
Article in English | MEDLINE | ID: mdl-30233805

ABSTRACT

OBJECTIVE: The nearly 3000 local health departments (LHDs) nationwide are the front line of public health and are positioned to implement evidence-based interventions (EBIs) for diabetes control. Yet little is currently known about use of diabetes-related EBIs among LHDs. This study used a national online survey to determine the patterns and correlates of the Centers for Disease Control and Prevention Community Guide-recommended EBIs for diabetes control in LHDs. RESEARCH DESIGN AND METHODS: A cross-sectional study was conducted to survey a stratified random sample of LHDs regarding department characteristics, respondent characteristics, evidence-based decision making within the LHD, and delivery of EBIs (directly or in collaboration) within five categories (diabetes-related, nutrition, physical activity, obesity, and tobacco). Associations between delivering EBIs and respondent and LHD characteristics and evidence-based decision making were explored using logistic regression models. RESULTS: Among 240 LHDs there was considerable variation among the EBIs delivered. Diabetes prevalence in the state was positively associated with offering the Diabetes Prevention Program (OR=1.28 (95% CI 1.02 to 1.62)), diabetes self-management education (OR=1.32 (95% CI 1.04 to 1.67)), and identifying patients and determining treatment (OR=1.27 (95% CI 1.05 to 1.54)). Although all organizational supports for evidence-based decision making factors were related in a positive direction, the only significant association was between evaluation capacity and identifying patients with diabetes and determining effective treatment (OR=1.54 (95% CI 1.08 to 2.19)). CONCLUSION: Supporting evidence-based decision making and increasing the implementation of these EBIs by more LHDs can help control diabetes nationwide.

17.
Prev Chronic Dis ; 15: E92, 2018 07 12.
Article in English | MEDLINE | ID: mdl-30004862

ABSTRACT

BACKGROUND: Research shows that training can improve skills needed for evidence-based decision making, but less is known about instituting organizational supports to build capacity for evidence-based chronic disease prevention. COMMUNITY CONTEXT: The objectives of this case study were to assess facilitators and challenges of applying management practices to support evidence-based decision making in chronic disease prevention programs in the public health system in Georgia through key informant interviews and quantitatively test for changes in perceived management practices and skills through a pre-post survey. METHODS: Leadership of the chronic disease prevention section hosted a multiday training, provided regular supplemental training, restructured the section and staff meetings, led and oversaw technical assistance with partners, instituted transparent performance-based contracting, and made other changes. A 65-item online survey measured perceived importance of skills and the availability of skilled staff, organizational supports, and use of research evidence at baseline (2014) and in 2016 (after training). A structured interview guide asked about management practices, context, internal and external facilitators and barriers, and recommendations. CAPACITY-BUILDING ACTIVITIES AND SURVEY FINDINGS: Seventy-four staff members and partners completed both surveys (70.5% response). Eleven participants also completed a 1-hour telephone interview. Interview participants deemed leadership support and implementation of multiple concurrent management practices key facilitators to increase capacity. Main challenges included competing priorities, lack of political will, and receipt of requests counter to evidence-based approaches. At posttest, health department staff had significantly reduced gaps in skills overall (10-item sum) and in 4 of 10 individual skills, and increased use of research evidence to justify interventions. Use of research evidence for evaluation, but not skills, increased among partners. INTERPRETATION: The commitment of leaders with authority to establish multiple management practices to help staff members learn and apply evidence-based decision-making processes is key to increased use of evidence-based chronic disease prevention to improve population health.


Subject(s)
Chronic Disease/prevention & control , Decision Making , Delivery of Health Care , Evidence-Based Practice/methods , Public Health/standards , Administrative Personnel , Female , Georgia , Health Promotion , Health Services Needs and Demand , Humans , Leadership , Local Government , Male , Organizational Case Studies , Organizational Culture , Public Health Administration
18.
J Community Health ; 43(5): 856-863, 2018 10.
Article in English | MEDLINE | ID: mdl-29500725

ABSTRACT

Evidence-based public health (EBPH) practice, also called evidence-informed public health, can improve population health and reduce disease burden in populations. Organizational structures and processes can facilitate capacity-building for EBPH in public health agencies. This study involved 51 structured interviews with leaders and program managers in 12 state health department chronic disease prevention units to identify factors that facilitate the implementation of EBPH. Verbatim transcripts of the de-identified interviews were consensus coded in NVIVO qualitative software. Content analyses of coded texts were used to identify themes and illustrative quotes. Facilitator themes included leadership support within the chronic disease prevention unit and division, unit processes to enhance information sharing across program areas and recruitment and retention of qualified personnel, training and technical assistance to build skills, and the ability to provide support to external partners. Chronic disease prevention leaders' role modeling of EBPH processes and expectations for staff to justify proposed plans and approaches were key aspects of leadership support. Leaders protected staff time in order to identify and digest evidence to address the common barrier of lack of time for EBPH. Funding uncertainties or budget cuts, lack of political will for EBPH, and staff turnover remained challenges. In conclusion, leadership support is a key facilitator of EBPH capacity building and practice. Section and division leaders in public health agencies with authority and skills can institute management practices to help staff learn and apply EBPH processes and spread EBPH with partners.


Subject(s)
Capacity Building/standards , Evidence-Based Practice , Leadership , Public Health Administration/standards , Public Health/standards , Chronic Disease , Humans , Information Dissemination , Public Health/education , Public Health Administration/education
19.
Implement Sci ; 13(1): 18, 2018 01 22.
Article in English | MEDLINE | ID: mdl-29357876

ABSTRACT

BACKGROUND: As the field of D&I (dissemination and implementation) science grows to meet the need for more effective and timely applications of research findings in routine practice, the demand for formalized training programs has increased concurrently. The Mentored Training for Dissemination and Implementation Research in Cancer (MT-DIRC) Program aims to build capacity in the cancer control D&I research workforce, especially among early career researchers. This paper outlines the various components of the program and reports results of systematic evaluations to ascertain its effectiveness. METHODS: Essential features of the program include selection of early career fellows or more experienced investigators with a focus relevant to cancer control transitioning to a D&I research focus, a 5-day intensive training institute, ongoing peer and senior mentoring, mentored planning and work on a D&I research proposal or project, limited pilot funding, and training and ongoing improvement activities for mentors. The core faculty and staff members of the MT-DIRC program gathered baseline and ongoing evaluation data regarding D&I skill acquisition and mentoring competency through participant surveys and analyzed them through iterative collective reflection. RESULTS: A majority (79%) of fellows are female, assistant professors (55%); 59% are in allied health disciplines, and 48% focus on cancer prevention research. Forty-three D&I research competencies were assessed; all improved from baseline to 6 and 18 months. These effects were apparent across beginner, intermediate, and advanced initial D&I competency levels and across the competency domains. Mentoring competency was rated very highly by the fellows, higher than the mentors rated themselves. The importance of different mentoring activities, as rated by the fellows, was generally congruent with their satisfaction with the activities, with the exception of relatively greater satisfaction with the degree of emotional support and relatively lower satisfaction for skill building and opportunity initially. CONCLUSIONS: These first years of MT-DIRC demonstrated the program's ability to attract, engage, and improve fellows' competencies and skills and implement a multicomponent mentoring program that was well received. This account of the program can serve as a basis for potential replication and evolution of this model in training future D&I science researchers.


Subject(s)
Biomedical Research/methods , Capacity Building/methods , Health Services Research/methods , Information Dissemination/methods , Mentoring , Mentors , Neoplasms/prevention & control , Research Personnel/education , Translational Research, Biomedical/methods , Biomedical Research/organization & administration , Delivery of Health Care , Female , Health Services Research/organization & administration , Humans , Male , Pilot Projects , Research Personnel/psychology , Universities
20.
Prev Chronic Dis ; 14: E121, 2017 11 30.
Article in English | MEDLINE | ID: mdl-29191262

ABSTRACT

INTRODUCTION: Although practitioners in state health departments are ideally positioned to implement evidence-based interventions, few studies have examined how to build their capacity to do so. The objective of this study was to explore how to increase the use of evidence-based decision-making processes at both the individual and organization levels. METHODS: We conducted a 2-arm, group-randomized trial with baseline data collection and follow-up at 18 to 24 months. Twelve state health departments were paired and randomly assigned to intervention or control condition. In the 6 intervention states, a multiday training on evidence-based decision making was conducted from March 2014 through March 2015 along with a set of supplemental capacity-building activities. Individual-level outcomes were evidence-based decision making skills of public health practitioners; organization-level outcomes were access to research evidence and participatory decision making. Mixed analysis of covariance models were used to evaluate the intervention effect by accounting for the cluster randomized trial design. Analysis was performed from March through May 2017. RESULTS: Participation 18 to 24 months after initial training was 73.5%. In mixed models adjusted for participant and state characteristics, the intervention group improved significantly in the overall skill gap (P = .01) and in 6 skill areas. Among the 4 organizational variables, only access to evidence and skilled staff showed an intervention effect (P = .04). CONCLUSION: Tailored and active strategies are needed to build capacity at the individual and organization levels for evidence-based decision making. Our study suggests several dissemination interventions for consideration by leaders seeking to improve public health practice.
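
As an illustrative sketch of a mixed analysis-of-covariance model for a group-randomized trial, the snippet below models the follow-up skill gap on study condition and the baseline value, with a random intercept for state to respect cluster randomization. All data and column names are hypothetical, not the trial's analysis code.

```python
# Illustrative sketch: ANCOVA-style mixed model for a group-randomized trial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for state in range(12):
    condition = state % 2                       # 6 intervention, 6 control states
    for _ in range(25):                         # ~25 practitioners per state
        baseline = rng.normal(2.0, 0.6)
        followup = baseline - 0.3 * condition + rng.normal(0, 0.5)
        rows.append({"state": state, "condition": condition,
                     "baseline_gap": baseline, "followup_gap": followup})
df = pd.DataFrame(rows)

# Baseline adjustment plus a random intercept per state (the randomized cluster).
fit = smf.mixedlm("followup_gap ~ condition + baseline_gap",
                  data=df, groups=df["state"]).fit()
print(fit.summary())
```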


Subject(s)
Chronic Disease/prevention & control , Decision Making , Evidence-Based Practice , Public Health Practice , Female , Humans , Male , United States