Results 1 - 12 of 12
1.
Adm Policy Ment Health ; 46(3): 391-410, 2019 05.
Article in English | MEDLINE | ID: mdl-30710173

ABSTRACT

There is strong enthusiasm for using implementation science to support the implementation of evidence-based programs in children's community mental health, but work remains to improve the process. Despite the proliferation of implementation frameworks, the literature offers few case examples of overcoming implementation barriers. This article examines whether the use of three implementation strategies (a structured training and coaching program, professional development portfolios for coaching, and a progress monitoring data system) helps to overcome barriers to implementation by facilitating four implementation drivers at a community mental health agency. Results suggest that implementation is a process of recognizing and adapting to both predictable and unpredictable barriers, and that these implementation strategies are important in improving implementation outcomes.


Subject(s)
Community Mental Health Services/organization & administration; Evidence-Based Practice/organization & administration; Child; Clinical Competence; Community Mental Health Services/standards; Evidence-Based Practice/standards; Humans; Leadership; Mentors; Organizational Case Studies; Staff Development/organization & administration
2.
BMC Health Serv Res ; 18(1): 882, 2018 Nov 22.
Article in English | MEDLINE | ID: mdl-30466422

ABSTRACT

CONTEXT: Implementation science measures are rarely used by stakeholders to inform and enhance clinical program change. Little is known about what makes implementation measures pragmatic (i.e., practical) for use in community settings; thus, the present study's objective was to generate a clinical stakeholder-driven operationalization of a pragmatic measures construct. EVIDENCE ACQUISITION: The pragmatic measures construct was defined using: 1) a systematic literature review to identify dimensions of the construct using PsycINFO and PubMed databases, and 2) interviews with an international stakeholder panel (N = 7) who were asked about their perspectives of pragmatic measures. EVIDENCE SYNTHESIS: Combined results from the systematic literature review and stakeholder interviews revealed a final list of 47 short statements (e.g., feasible, low cost, brief) describing pragmatic measures, which will allow for the development of a rigorous, stakeholder-driven conceptualization of the pragmatic measures construct. CONCLUSIONS: Results revealed significant overlap between terms related to the pragmatic construct in the existing literature and stakeholder interviews. However, a number of terms were unique to each methodology. This underscores the importance of understanding stakeholder perspectives of criteria measuring the pragmatic construct. These results will be used to inform future phases of the project where stakeholders will determine the relative importance and clarity of each dimension of the pragmatic construct, as well as their priorities for the pragmatic dimensions. Taken together, these results will be incorporated into a pragmatic rating system for existing implementation science measures to support implementation science and practice.


Subject(s)
Feedback; Implementation Science; Communication; Female; Humans; Male; Middle Aged; Research Design
3.
Adm Policy Ment Health ; 45(1): 48-61, 2018 01.
Article in English | MEDLINE | ID: mdl-27631610

ABSTRACT

Numerous trials demonstrate that monitoring client progress and using feedback for clinical decision-making enhances treatment outcomes, but available data suggest these practices are rare in clinical settings, and no psychometrically validated measures exist for assessing attitudinal barriers to them. This national survey of 504 clinicians collected data on attitudes toward, and use of, monitoring and feedback. Two new measures were developed and subjected to factor analysis: the Monitoring and Feedback Attitudes Scale (MFA), measuring general attitudes toward monitoring and feedback, and the Attitudes toward Standardized Assessment Scales-Monitoring and Feedback (ASA-MF), measuring attitudes toward standardized progress tools. Both measures showed good fit to their final factor solutions, with excellent internal consistency for all subscales. Scores on the MFA subscales (Benefit, Harm) indicated that clinicians hold generally positive attitudes toward monitoring and feedback, but scores on the ASA-MF subscales (Clinical Utility, Treatment Planning, Practicality) were relatively neutral. Providers with cognitive-behavioral theoretical orientations held more positive attitudes. Only 13.9% of clinicians reported using standardized progress measures at least monthly, and 61.5% never used them. Providers with more positive attitudes reported higher use, providing initial support for the predictive validity of the ASA-MF and MFA. Thus, while clinicians report generally positive attitudes toward monitoring and feedback, routine collection of standardized progress measures remains uncommon. Implications for the dissemination and implementation of monitoring and feedback systems are discussed.
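As an illustration of the internal-consistency analysis reported above, the following is a minimal sketch in Python. The item responses, subscale name, and item count are simulated placeholders; the abstract does not reproduce the actual MFA or ASA-MF items.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for item responses (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated 5-point Likert responses from 504 clinicians on a hypothetical
# 4-item "Benefit" subscale (the real item set is not given in the abstract).
rng = np.random.default_rng(0)
benefit = pd.DataFrame(rng.integers(1, 6, size=(504, 4)),
                       columns=[f"benefit_{i}" for i in range(1, 5)])
print(f"alpha = {cronbach_alpha(benefit):.2f}")
```

Random, uncorrelated responses like these would yield a low alpha; the point here is the computation, not the value.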


Subject(s)
Attitude of Health Personnel; Clinical Decision-Making; Feedback; Mental Disorders/therapy; Practice Patterns, Physicians'; Psychotherapy; Adult; Aged; Aged, 80 and over; Evidence-Based Practice; Female; Humans; Male; Middle Aged; Treatment Outcome
4.
Implement Res Pract ; 3: 26334895221115216, 2022.
Article in English | MEDLINE | ID: mdl-37091107

ABSTRACT

Background: Achieving high-quality outcomes in a community context requires the strategic coordination of many activities in a service system, involving families, clinicians, supervisors, and administrators. In modern implementation trials, the therapy itself is guided by a treatment manual; however, structured supports for other parts of the service system may remain less well articulated (e.g., supervision, administrative policies for planning and review, information/feedback flow, resource availability). This implementation trial investigated how a psychosocial intervention performed when those non-therapy supports were not structured by a research team but were instead provided as part of a scalable industrial implementation, testing whether outcomes would meet benchmarks from published research trials. Method: In this single-arm observational benchmarking study, 59 community clinicians were trained in the Modular Approach to Therapy for Children (MATCH) treatment program. These clinicians delivered MATCH to 166 youths ages 6 to 17 naturally presenting for psychotherapy services. Clinicians received substantially fewer supports from the treatment developers or research team than in the original MATCH trials and instead relied on explicit process management tools to facilitate implementation. Prior RCTs of MATCH were used to benchmark the results of the current initiative. Client improvement was assessed using the Top Problems Assessment and the Brief Problem Monitor. Results: Analysis of client symptom change indicated that youths improved as much as or more than those in the experimental condition of published research trials. Similarly, caregiver-reported outcomes were generally comparable to those in published trials. Conclusions: Although the results must be interpreted cautiously, they support the feasibility of using process management tools to facilitate the successful implementation of MATCH outside the context of a formal research or funded implementation trial. Further, these results illustrate the value of benchmarking as a method for evaluating industrial implementation efforts. Plain Language Summary: Randomized effectiveness trials include clinicians and cases routinely encountered in community-based settings but continue to rely on the research team for both clinical and administrative guidance. As a result, the field still struggles to understand what might be needed to support sustainable implementation and how interventions will perform when brought to scale in community settings without those clinical trial supports. Alternative approaches are needed to delineate and provide the clinical and operational support needed for implementation and to efficiently evaluate how evidence-based treatments perform. Benchmarking findings in the community against findings of more rigorous clinical trials is one such approach. This paper offers two main contributions to the literature. First, it provides an example of how benchmarking can be used to evaluate how the Modular Approach to Therapy for Children (MATCH) treatment program performed outside the context of a research trial. Second, it demonstrates that MATCH produced symptom improvements comparable to those seen in the original research trials and describes the implementation strategies associated with this success. In particular, although clinicians in this study received less rigorous expert clinical supervision than in the original trials, they were provided with process management tools to support implementation. This study highlights the importance of evaluating the performance of intervention programs when brought to scale in community-based settings and provides support for the use of process management tools to assist providers in effective implementation.
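To make the benchmarking logic concrete, here is a minimal sketch under stated assumptions: the pre/post scores are simulated, and the benchmark value is a placeholder rather than an effect size taken from an actual MATCH trial.

```python
import numpy as np

def standardized_mean_gain(pre: np.ndarray, post: np.ndarray) -> float:
    """Pre-post effect size: mean improvement divided by the SD of pre-treatment scores."""
    return (pre.mean() - post.mean()) / pre.std(ddof=1)

rng = np.random.default_rng(1)
pre = rng.normal(65.0, 10.0, size=166)        # hypothetical baseline symptom scores
post = pre - rng.normal(8.0, 6.0, size=166)   # hypothetical end-of-treatment scores

d = standardized_mean_gain(pre, post)
benchmark = 0.80  # placeholder benchmark effect size from a published trial
print(f"observed d = {d:.2f}; meets benchmark: {d >= benchmark}")
```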

5.
Implement Res Pract ; 2: 26334895211037391, 2021.
Article in English | MEDLINE | ID: mdl-37089994

ABSTRACT

To rigorously measure the implementation of evidence-based interventions, implementation science requires measures that have evidence of reliability and validity across different contexts and populations. Measures that can detect change over time and impact on outcomes of interest are most useful to implementers. Moreover, measures that fit the practical needs of implementers could be used to guide implementation outside of the research context. To address this need, our team developed a rating scale for implementation science measures that considers their psychometric and pragmatic properties and the evidence available. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) can be used in systematic reviews of measures, in measure development, and to select measures. PAPERS may move the field toward measures that inform robust research evaluations and practical implementation efforts.

6.
Implement Res Pract ; 2: 26334895211018862, 2021.
Article in English | MEDLINE | ID: mdl-37090009

ABSTRACT

Background: Organizational culture, organizational climate, and implementation climate are key organizational constructs that influence the implementation of evidence-based practices. However, there has been little systematic investigation of the availability of psychometrically strong measures that can be used to assess these constructs in behavioral health. This systematic review identified and assessed the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs as defined by the Consolidated Framework for Implementation Research (CFIR) and Ehrhart and colleagues. Methods: Data collection involved search string generation, title and abstract screening, full-text review, construct assignment, and citation searches for all known empirical uses. Data relevant to nine psychometric criteria from the Psychometric and Pragmatic Evidence Rating Scale (PAPERS) were extracted: internal consistency, convergent validity, discriminant validity, known-groups validity, predictive validity, concurrent validity, structural validity, responsiveness, and norms. Extracted data for each criterion were rated on a scale from -1 ("poor") to 4 ("excellent"), and each measure was assigned a total score (highest possible score = 36) that formed the basis for head-to-head comparisons of measures for each focal construct. Results: We identified full measures or relevant subscales of broader measures for organizational culture (n = 21), organizational climate (n = 36), implementation climate (n = 2), tension for change (n = 2), compatibility (n = 6), relative priority (n = 2), organizational incentives and rewards (n = 3), goals and feedback (n = 3), and learning climate (n = 2). Psychometric evidence was most frequently available for internal consistency and norms. Information about other psychometric properties was less available. Median ratings for psychometric properties across categories of measures ranged from "poor" to "good." There was limited evidence of responsiveness or predictive validity. Conclusion: While several promising measures were identified, the overall state of measurement related to these constructs is poor. To enhance understanding of how these constructs influence implementation research and practice, measures that are sensitive to change and predictive of key implementation and clinical outcomes are required. There is a need for further testing of the most promising measures, and ample opportunity to develop additional psychometrically strong measures of these important constructs. Plain Language Summary: Organizational culture, organizational climate, and implementation climate can play a critical role in facilitating or impeding the successful implementation and sustainment of evidence-based practices. Advancing our understanding of how these contextual factors independently or collectively influence implementation and clinical outcomes requires measures that are reliable and valid. Previous systematic reviews identified measures of organizational factors that influence implementation, but none focused explicitly on behavioral health; focused solely on organizational culture, organizational climate, and implementation climate; or assessed the evidence base of all known uses of a measure within a given area, such as behavioral health-focused implementation efforts. 
The purpose of this study was to identify and assess the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs that have been used in behavioral health-focused implementation research. We identified 21 measures of organizational culture, 36 measures of organizational climate, 2 measures of implementation climate, 2 measures of tension for change, 6 measures of compatibility, 2 measures of relative priority, 3 measures of organizational incentives and rewards, 3 measures of goals and feedback, and 2 measures of learning climate. Some promising measures were identified; however, the overall state of measurement across these constructs is poor. This review highlights specific areas for improvement and suggests the need to rigorously evaluate existing measures and develop new measures.
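A minimal sketch of the head-to-head scoring described above: each of the nine PAPERS psychometric criteria is rated from -1 ("poor") to 4 ("excellent"), and the ratings are summed into a total with a maximum of 36. The measure names and ratings below are hypothetical.

```python
import pandas as pd

CRITERIA = ["internal_consistency", "convergent_validity", "discriminant_validity",
            "known_groups_validity", "predictive_validity", "concurrent_validity",
            "structural_validity", "responsiveness", "norms"]

# Hypothetical ratings for two measures on the nine criteria (-1 to 4 each).
ratings = pd.DataFrame({"Measure A": [3, 2, 1, 0, -1, 1, 2, 0, 3],
                        "Measure B": [4, 3, 2, 1, 0, 2, 3, 1, 4]},
                       index=CRITERIA).T

ratings["total"] = ratings[CRITERIA].sum(axis=1)  # maximum possible total = 36
print(ratings["total"].sort_values(ascending=False))
```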

7.
Transl Behav Med ; 11(1): 11-20, 2021 02 11.
Article in English | MEDLINE | ID: mdl-31747021

ABSTRACT

The use of reliable, valid measures in implementation practice will remain limited without pragmatic measures. Previous research identified the need for pragmatic measures, though the characteristics were identified using only expert opinion and literature review. Our team completed four studies to develop stakeholder-driven pragmatic rating criteria for implementation measures. We published Studies 1 (identifying dimensions of the pragmatic construct) and 2 (clarifying the internal structure), which engaged stakeholders (participants in mental health provider and implementation settings) to identify 17 terms/phrases across four categories: Useful, Compatible, Acceptable, and Easy. This paper presents Studies 3 and 4: a Delphi to ascertain stakeholder-prioritized dimensions within a mental health context, and a pilot study applying the rating criteria. Stakeholders (N = 26) participated in the Delphi and rated the relevance of the 17 terms/phrases to the pragmatic construct. The investigator team further defined and shortened the list, which was then piloted with 60 implementation measures. The Delphi confirmed the importance of all pragmatic criteria but provided little guidance on their relative importance. The investigators removed or combined terms/phrases to obtain 11 criteria. The 6-point rating system assigned to each criterion demonstrated sufficient variability across items. The grey literature did not add critical information. This work produced the first stakeholder-driven rating criteria for assessing whether measures are pragmatic. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) combines these pragmatic criteria with psychometric rating criteria from previous work. PAPERS can be used to inform the development of new implementation measures and to assess the quality of existing ones.
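To illustrate the kind of variability check reported for the pilot, here is a minimal sketch: simulated 6-point ratings (scored 0-5 here) for 11 pragmatic criteria across 60 measures, with per-criterion spread summarized. The criterion names and all values are placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
criteria = [f"criterion_{i}" for i in range(1, 12)]        # 11 pragmatic criteria
ratings = pd.DataFrame(rng.integers(0, 6, size=(60, 11)),  # 60 measures, 0-5 scale
                       columns=criteria)

# Sufficient spread per criterion suggests the scale discriminates among measures.
print(ratings.agg(["mean", "std", "min", "max"]).round(2))
```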


Subject(s)
Psychometrics; Humans; Pilot Projects; Reproducibility of Results
8.
Implement Sci ; 15(1): 3, 2020 Jan 03.
Article in English | MEDLINE | ID: mdl-31900162

ABSTRACT

Following publication of the original article [1], the authors reported that an important acknowledgement had been mistakenly omitted from the 'Acknowledgements' section. The full acknowledgement is included in this Correction article.

9.
J Behav Health Serv Res ; 46(4): 607-624, 2019 10.
Article in English | MEDLINE | ID: mdl-31037479

ABSTRACT

Existing measures of attitudes toward evidence-based practices (EBPs) assess attitudes toward manualized or research-based treatments. Providers of youth behavioral health services (N = 282) completed the Valued Practices Inventory (VPI), a new measure of provider attitudes toward specific practices for youth that avoids mention of EBPs by listing specific therapeutic techniques, some drawn from EBPs (e.g., problem solving) and some not included in EBPs (e.g., dream interpretation). Exploratory factor analysis revealed two factors: practices derived from the evidence base (PDEB) and alternative techniques (AT). The PDEB scale was significantly correlated with scales on the Evidence-Based Practice Attitude Scale-50 (Aarons et al. in Administration and Policy in Mental Health and Mental Health Services Research, 39(5): 331-340, 2012), whereas the AT scale was not. Attitudes toward PDEB and AT were also related to provider characteristics such as years of experience and work setting. The VPI offers a complementary approach to existing attitude measures because it avoids mention of EBPs, which may help prevent response biases.
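As an illustration of the exploratory factor analysis described above, a minimal sketch follows. The item data are simulated to carry a two-factor structure; the actual VPI items and loadings are not given in the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_providers, n_items = 282, 10

# Simulate two latent factors (stand-ins for PDEB and AT), each loading on half the items.
latent = rng.normal(size=(n_providers, 2))
true_loadings = np.zeros((2, n_items))
true_loadings[0, :5] = 0.8   # hypothetical PDEB items
true_loadings[1, 5:] = 0.8   # hypothetical AT items
responses = latent @ true_loadings + rng.normal(scale=0.5, size=(n_providers, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)
print(np.round(fa.components_.T, 2))  # estimated item loadings on the two factors
```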


Subject(s)
Attitude of Health Personnel; Health Personnel/psychology; Psychology, Adolescent/methods; Self Report/standards; Adult; Aged; Evidence-Based Practice; Factor Analysis, Statistical; Female; Health Behavior; Humans; Male; Middle Aged; Psychometrics; Reproducibility of Results; Young Adult
10.
Implement Sci ; 12(1): 118, 2017 10 03.
Article in English | MEDLINE | ID: mdl-28974248

ABSTRACT

BACKGROUND: Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria's clarity and importance. METHODS: Twenty-four stakeholders with expertise in implementation practice completed a concept mapping activity wherein they organized the initial list of 47 criteria into conceptually distinct categories and rated their clarity and importance. Multidimensional scaling, hierarchical cluster analysis, and descriptive statistics were used to analyze the data. FINDINGS: The 47 criteria were meaningfully grouped into four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful. Average ratings of clarity and importance are presented at the category and individual-criterion level. CONCLUSIONS: This study advances the field of implementation science and practice by providing clear and conceptually distinct domains of the pragmatic measure construct. Next steps will include a Delphi process to develop consensus on the most important criteria and the development of quantifiable pragmatic rating criteria that can be used to assess measures.
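A minimal sketch of the concept-mapping analysis described in the Methods: card-sort groupings are converted to an item-by-item dissimilarity matrix, embedded in two dimensions with multidimensional scaling, and partitioned by hierarchical clustering. The sort data here are simulated; the study's actual sorts are not reproduced in the abstract.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
n_items, n_sorters = 47, 24

# Each sorter assigns every criterion to one of a handful of piles (simulated).
sorts = rng.integers(0, 5, size=(n_sorters, n_items))

# Dissimilarity = proportion of sorters who placed a pair of items in different piles.
co = np.zeros((n_items, n_items))
for s in sorts:
    co += (s[:, None] == s[None, :])
dissim = 1.0 - co / n_sorters

# Two-dimensional point map of the criteria.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Partition the criteria into four clusters (cf. acceptable/compatible/easy/useful).
Z = linkage(squareform(dissim, checks=False), method="average")
labels = fcluster(Z, t=4, criterion="maxclust")
print(coords.shape, labels)
```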


Subject(s)
Health Plan Implementation/methods; Health Services Research/methods; Stakeholder Participation; Cluster Analysis; Humans; Interviews as Topic
11.
Implement Sci ; 10: 2, 2015 Jan 08.
Article in English | MEDLINE | ID: mdl-25567126

ABSTRACT

BACKGROUND: Identification of psychometrically strong instruments for the field of implementation science is a high priority, as underscored in a recent National Institutes of Health working meeting (October 2013). Existing instrument reviews are limited in scope, methods, and findings. The Society for Implementation Research Collaboration Instrument Review Project addresses these limitations by applying a unique methodology to conduct a systematic and comprehensive review of quantitative instruments assessing constructs delineated in two of the field's most widely used frameworks, adopting a systematic search process (using standard search strings), and engaging an international team of experts to assess the full range of psychometric criteria (reliability, construct and criterion validity). Although this work focuses on implementation of psychosocial interventions in mental health and health-care settings, the methodology and results will likely be useful across a broad spectrum of settings. This effort has culminated in a centralized online open-access repository of instruments depicting graphical head-to-head comparisons of their psychometric properties. This article describes the methodology and preliminary outcomes. METHODS: The seven stages of the review, synthesis, and evaluation methodology are: (1) setting the scope of the review, (2) identifying frameworks to organize and complete the review, (3) generating a search protocol for the literature review of constructs, (4) conducting the literature review of specific instruments, (5) developing evidence-based assessment rating criteria, (6) extracting data and rating instrument quality by a task force of implementation experts to inform knowledge synthesis, and (7) creating a website repository. RESULTS: To date, this multi-faceted and collaborative search and synthesis methodology has identified over 420 instruments related to 34 constructs (48 in total, including subconstructs) that are relevant to implementation science. Although numerous constructs have more than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold-standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied. CONCLUSIONS: The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and by identifying where further instrument development is needed.
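To make stage (3) concrete, here is a hypothetical sketch of standardized search-string generation for a construct review. The construct names, synonym lists, and measurement terms are illustrative placeholders, not the project's actual protocol.

```python
# Hypothetical synonym registry; the project's real search strings are not given here.
CONSTRUCT_SYNONYMS = {
    "implementation climate": ["implementation climate", "climate for implementation"],
    "organizational readiness": ["organizational readiness", "readiness for change"],
}

MEASUREMENT_TERMS = ["measure", "instrument", "scale", "questionnaire", "survey"]

def build_search_string(construct: str) -> str:
    """Combine construct synonyms with measurement terms into one Boolean query."""
    synonyms = " OR ".join(f'"{term}"' for term in CONSTRUCT_SYNONYMS[construct])
    measures = " OR ".join(f'"{term}"' for term in MEASUREMENT_TERMS)
    return f"({synonyms}) AND ({measures})"

print(build_search_string("implementation climate"))
```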


Subject(s)
Cooperative Behavior; Program Evaluation/methods; Translational Research, Biomedical/methods; Evidence-Based Medicine/methods; Humans; Psychometrics; Research Design/standards; Translational Research, Biomedical/standards