Results 1 - 20 of 24
1.
Implement Res Pract ; 3: 26334895221115216, 2022.
Article in English | MEDLINE | ID: mdl-37091107

ABSTRACT

Background: Achieving high-quality outcomes in a community context requires the strategic coordination of many activities in a service system, involving families, clinicians, supervisors, and administrators. In modern implementation trials, the therapy itself is guided by a treatment manual; however, structured supports for other parts of the service system may remain less well-articulated (e.g., supervision, administrative policies for planning and review, information/feedback flow, resource availability). This implementation trial investigated how a psychosocial intervention performed when those non-therapy supports were not structured by a research team, but were instead provided as part of a scalable industrial implementation, testing whether outcomes achieved would meet benchmarks from published research trials. Method: In this single-arm observational benchmarking study, a total of 59 community clinicians were trained in the Modular Approach to Therapy for Children (MATCH) treatment program. These clinicians delivered MATCH treatment to 166 youth ages 6 to 17 naturally presenting for psychotherapy services. Clinicians received substantially fewer supports from the treatment developers or research team than in the original MATCH trials and instead relied on explicit process management tools to facilitate implementation. Prior RCTs of MATCH were used to benchmark the results of the current initiative. Client improvement was assessed using the Top Problems Assessment and Brief Problem Monitor. Results: Analysis of client symptom change indicated that youth experienced improvement equal to or better than that observed in the experimental conditions of published research trials. Similarly, caregiver-reported outcomes were generally comparable to those in published trials.
Conclusions: Although results must be interpreted cautiously, they support the feasibility of using process management tools to facilitate the successful implementation of MATCH outside the context of a formal research or funded implementation trial. Further, these results illustrate the value of benchmarking as a method to evaluate industrial implementation efforts. Plain Language Summary: Randomized effectiveness trials are inclusive of clinicians and cases that are routinely encountered in community-based settings, while continuing to rely on the research team for both clinical and administrative guidance. As a result, the field still struggles to understand what might be needed to support sustainable implementation and how interventions will perform when brought to scale in community settings without those clinical trial supports. Alternative approaches are needed to delineate and provide the clinical and operational support needed for implementation and to efficiently evaluate how evidence-based treatments perform. Benchmarking findings in the community against findings of more rigorous clinical trials is one such approach. This paper offers two main contributions to the literature. First, it provides an example of how benchmarking is used to evaluate how the Modular Approach to Therapy for Children (MATCH) treatment program performed outside the context of a research trial. Second, this study demonstrates that MATCH produced symptom improvements comparable to those seen in the original research trials and describes the implementation strategies associated with this success. In particular, although clinicians in this study received less rigorous expert clinical supervision than in the original trials, they were provided with process management tools to support implementation. This study highlights the importance of evaluating the performance of intervention programs when brought to scale in community-based settings.
This study also provides support for the use of process management tools to assist providers in effective implementation.

2.
J Community Psychol ; 50(1): 541-552, 2022 01.
Article in English | MEDLINE | ID: mdl-34096626

ABSTRACT

This study examined the accessibility of community resources (e.g., welfare programs and afterschool programs) for underserved youth and families with mental health needs. Mental health professionals (n = 52) from a large community mental health and welfare agency serving predominantly low-income, Latinx families completed a semistructured interview that asked about the accessibility of community resources. Participant responses were coded using an inductive thematic analysis. Results showed that 71% of participants endorsed availability barriers (e.g., limited local programs), 37% endorsed logistical barriers (e.g., waitlists), 27% endorsed attitudinal barriers (e.g., stigmatized beliefs about help-seeking), and 23% endorsed knowledge barriers (e.g., lacking awareness about local programs). Professionals' perceived availability barriers were mostly consistent with the actual availability of community resources. Findings highlight the compounding challenges that underserved communities face and point to opportunities for promoting enhanced well-being and functioning for youth and families with mental health needs.


Subject(s)
Community Resources , Mental Health Services , Adolescent , Humans , Mental Health , Poverty , Qualitative Research
3.
Implement Res Pract ; 2: 26334895211037391, 2021.
Article in English | MEDLINE | ID: mdl-37089994

ABSTRACT

To rigorously measure the implementation of evidence-based interventions, implementation science requires measures that have evidence of reliability and validity across different contexts and populations. Measures that can detect change over time and impact on outcomes of interest are most useful to implementers. Moreover, measures that fit the practical needs of implementers could be used to guide implementation outside of the research context. To address this need, our team developed a rating scale for implementation science measures that considers their psychometric and pragmatic properties and the evidence available. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) can be used in systematic reviews of measures, in measure development, and to select measures. PAPERS may move the field toward measures that inform robust research evaluations and practical implementation efforts.

4.
Implement Res Pract ; 2: 26334895211018862, 2021.
Article in English | MEDLINE | ID: mdl-37090009

ABSTRACT

Background: Organizational culture, organizational climate, and implementation climate are key organizational constructs that influence the implementation of evidence-based practices. However, there has been little systematic investigation of the availability of psychometrically strong measures that can be used to assess these constructs in behavioral health. This systematic review identified and assessed the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs as defined by the Consolidated Framework for Implementation Research (CFIR) and Ehrhart and colleagues. Methods: Data collection involved search string generation, title and abstract screening, full-text review, construct assignment, and citation searches for all known empirical uses. Data relevant to nine psychometric criteria from the Psychometric and Pragmatic Evidence Rating Scale (PAPERS) were extracted: internal consistency, convergent validity, discriminant validity, known-groups validity, predictive validity, concurrent validity, structural validity, responsiveness, and norms. Extracted data for each criterion were rated on a scale from -1 ("poor") to 4 ("excellent"), and each measure was assigned a total score (highest possible score = 36) that formed the basis for head-to-head comparisons of measures for each focal construct. Results: We identified full measures or relevant subscales of broader measures for organizational culture (n = 21), organizational climate (n = 36), implementation climate (n = 2), tension for change (n = 2), compatibility (n = 6), relative priority (n = 2), organizational incentives and rewards (n = 3), goals and feedback (n = 3), and learning climate (n = 2). Psychometric evidence was most frequently available for internal consistency and norms. Information about other psychometric properties was less available. 
Median ratings for psychometric properties across categories of measures ranged from "poor" to "good." There was limited evidence of responsiveness or predictive validity. Conclusion: While several promising measures were identified, the overall state of measurement related to these constructs is poor. To enhance understanding of how these constructs influence implementation research and practice, measures that are sensitive to change and predictive of key implementation and clinical outcomes are required. There is a need for further testing of the most promising measures, and ample opportunity to develop additional psychometrically strong measures of these important constructs. Plain Language Summary: Organizational culture, organizational climate, and implementation climate can play a critical role in facilitating or impeding the successful implementation and sustainment of evidence-based practices. Advancing our understanding of how these contextual factors independently or collectively influence implementation and clinical outcomes requires measures that are reliable and valid. Previous systematic reviews identified measures of organizational factors that influence implementation, but none focused explicitly on behavioral health; focused solely on organizational culture, organizational climate, and implementation climate; or assessed the evidence base of all known uses of a measure within a given area, such as behavioral health-focused implementation efforts. The purpose of this study was to identify and assess the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs that have been used in behavioral health-focused implementation research. 
We identified 21 measures of organizational culture, 36 measures of organizational climate, 2 measures of implementation climate, 2 measures of tension for change, 6 measures of compatibility, 2 measures of relative priority, 3 measures of organizational incentives and rewards, 3 measures of goals and feedback, and 2 measures of learning climate. Some promising measures were identified; however, the overall state of measurement across these constructs is poor. This review highlights specific areas for improvement and suggests the need to rigorously evaluate existing measures and develop new measures.
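
The PAPERS scoring described in entries 3 and 4 can be sketched as follows. This is an illustrative reconstruction, not code from the papers: nine psychometric criteria are each rated from -1 ("poor") to 4 ("excellent") and summed into a total score out of 36. Treating a criterion with no reported evidence as contributing 0 is an assumption made here for the sketch.

```python
# Illustrative sketch of the PAPERS head-to-head total score:
# nine psychometric criteria, each rated -1 ("poor") to 4 ("excellent"),
# summed to a maximum possible score of 36 (9 x 4).

PAPERS_CRITERIA = [
    "internal_consistency", "convergent_validity", "discriminant_validity",
    "known_groups_validity", "predictive_validity", "concurrent_validity",
    "structural_validity", "responsiveness", "norms",
]

def papers_total(ratings: dict) -> int:
    """Sum ratings across the nine criteria for one measure.

    Criteria absent from `ratings` contribute 0 (an assumption for
    unreported evidence, not a rule stated in the papers).
    """
    total = 0
    for criterion in PAPERS_CRITERIA:
        rating = ratings.get(criterion, 0)
        if not -1 <= rating <= 4:
            raise ValueError(f"rating for {criterion} out of range: {rating}")
        total += rating
    return total

# A hypothetical measure with evidence reported only for internal
# consistency ("good" = 3) and norms (2), as is common per the review:
print(papers_total({"internal_consistency": 3, "norms": 2}))  # → 5
```

The head-to-head comparisons in the review then reduce to comparing these totals across all measures assigned to the same construct.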

5.
Implement Res Pract ; 2: 26334895211000458, 2021.
Article in English | MEDLINE | ID: mdl-37090010

ABSTRACT

Background: Identification of psychometrically strong implementation measures could (1) advance researchers' understanding of how individual characteristics impact implementation processes and outcomes, and (2) promote the success of real-world implementation efforts. The current study advances the work that our team published in 2015 by providing an updated and enhanced systematic review that identifies and evaluates the psychometric properties of implementation measures that assess individual characteristics. Methods: A full description of our systematic review methodology, which included three phases, is described in a previously published protocol paper. Phase I focused on data collection and involved search string generation, title and abstract screening, full-text review, construct assignment, and measure forward searches. During Phase II, we completed data extraction (i.e., coding psychometric information). Phase III involved data analysis, where two trained specialists independently rated each measurement tool using our psychometric rating criteria. Results: Our team identified 124 measures of individual characteristics used in mental or behavioral health research, and 123 of those measures were deemed suitable for rating using the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). We identified measures of knowledge and beliefs about the intervention (n = 76), self-efficacy (n = 24), individual stage of change (n = 2), individual identification with organization (n = 7), and other personal attributes (n = 15). While psychometric information was unavailable and/or unreported for many measures, information about internal consistency and norms was the most commonly identified psychometric data across all individual characteristics constructs. Ratings for all psychometric properties predominantly ranged from "poor" to "good." 
Conclusion: The majority of research that develops, uses, or examines implementation measures that evaluate individual characteristics does not include the psychometric properties of those measures. The development and use of psychometric reporting standards could advance the use of valid and reliable tools within implementation research and practice, thereby enhancing the successful implementation and sustainment of evidence-based practice in community care. Plain Language Summary: Measurement is the foundation for advancing practice in health care and other industries. In the field of implementation science, the state of measurement is only recently being targeted as an area for improvement, given that high-quality measures need to be identified and utilized in implementation work to avoid developing another research-to-practice gap. For the current study, we utilized the Consolidated Framework for Implementation Research to identify measures related to individual characteristics constructs, such as knowledge and beliefs about the intervention, self-efficacy, individual identification with the organization, individual stage of change, and other personal attributes. Our review showed that many measures exist for certain constructs (e.g., measures related to assessing providers' attitudes and perceptions about evidence-based practice interventions), while others have very few (e.g., an individual's stage of change). Also, we rated measures for their psychometric strength utilizing an anchored rating system and found that most measures assessing individual characteristics are in need of more research to establish their evidence of quality. It was also clear from our results that frequency of use/citations does not equate to high quality or psychometric strength. Ultimately, the state of the literature has demonstrated that assessing individual characteristics of implementation stakeholders is an area of strong interest in implementation work. 
It will be important for future research to focus on clearly delineating the psychometric properties of existing measures for saturated constructs, while for the others the emphasis should be on developing new, high-quality measures and making them available to stakeholders.

6.
Transl Behav Med ; 11(1): 11-20, 2021 02 11.
Article in English | MEDLINE | ID: mdl-31747021

ABSTRACT

The use of reliable, valid measures in implementation practice will remain limited without pragmatic measures. Previous research identified the need for pragmatic measures, though the characteristic identification used only expert opinion and literature review. Our team completed four studies to develop stakeholder-driven pragmatic rating criteria for implementation measures. We published Studies 1 (identifying dimensions of the pragmatic construct) and 2 (clarifying the internal structure), which engaged stakeholders (participants in mental health provider and implementation settings) to identify 17 terms/phrases across four categories: Useful, Compatible, Acceptable, and Easy. This paper presents Studies 3 and 4: a Delphi to ascertain stakeholder-prioritized dimensions within a mental health context, and a pilot study applying the rating criteria. Stakeholders (N = 26) participated in a Delphi and rated the relevance of 17 terms/phrases to the pragmatic construct. The investigator team further defined and shortened the list, which was then piloted with 60 implementation measures. The Delphi confirmed the importance of all pragmatic criteria but provided little guidance on relative importance. The investigators removed or combined terms/phrases to obtain 11 criteria. The 6-point rating system assigned to each criterion demonstrated sufficient variability across items. The grey literature did not add critical information. This work produced the first stakeholder-driven rating criteria to assess whether measures are pragmatic. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) combines the pragmatic criteria with psychometric rating criteria from previous work. Use of PAPERS can inform the development of implementation measures and can be used to assess the quality of existing measures.


Subject(s)
Psychometrics , Humans , Pilot Projects , Reproducibility of Results
7.
Adm Policy Ment Health ; 48(2): 250-265, 2021 03.
Article in English | MEDLINE | ID: mdl-32656631

ABSTRACT

Mental health clinicians and administrators are increasingly asked to collect and report treatment outcome data despite numerous challenges to select and use instruments in routine practice. Measurement-based care (MBC) is an evidence-based practice for improving patient care. We propose that data collected from MBC processes with patients can be strategically leveraged by agencies to also support clinicians and respond to accountability requirements. MBC data elements are outlined using the Precision Mental Health Framework (Bickman et al. in Adm Policy Mental Health Mental Health Serv Res 43:271-276, 2016), practical guidance is provided for agency administrators, and conceptual examples illustrate strategic applications of one or more instruments to meet various needs throughout the organization.


Subject(s)
Mental Health Services , Mental Health , Humans , Organizational Objectives , Patient Care
9.
Implement Sci ; 15(1): 3, 2020 Jan 03.
Article in English | MEDLINE | ID: mdl-31900162

ABSTRACT

Following publication of the original article [1] the authors reported an important acknowledgement was mistakenly omitted from the 'Acknowledgements' section. The full acknowledgement is included in this Correction article.

10.
Implement Res Pract ; 1: 2633489520933896, 2020.
Article in English | MEDLINE | ID: mdl-37089124

ABSTRACT

Background: Systematic measure reviews can facilitate advances in implementation research and practice by locating reliable, valid, pragmatic measures; identifying promising measures needing refinement and testing; and highlighting measurement gaps. This review identifies and evaluates the psychometric and pragmatic properties of measures of readiness for implementation and its sub-constructs as delineated in the Consolidated Framework for Implementation Research: leadership engagement, available resources, and access to knowledge and information. Methods: The systematic review methodology is described fully elsewhere. The review, which focused on measures used in mental or behavioral health, proceeded in three phases. Phase I, data collection, involved search string generation, title and abstract screening, full text review, construct assignment, and cited citation searches. Phase II, data extraction, involved coding relevant psychometric and pragmatic information. Phase III, data analysis, involved two trained specialists independently rating each measure using Psychometric and Pragmatic Evidence Rating Scales (PAPERS). Frequencies and central tendencies summarized information availability and PAPERS ratings. Results: Searches identified 9 measures of readiness for implementation, 24 measures of leadership engagement, 17 measures of available resources, and 6 measures of access to knowledge and information. Information about internal consistency was available for most measures. Information about other psychometric properties was often not available. Ratings for internal consistency were "adequate" or "good." Ratings for other psychometric properties were less than "adequate." Information on pragmatic properties was most often available regarding cost, language readability, and brevity. Information was less often available regarding training burden and interpretation burden. 
Cost and language readability generally exhibited "good" or "excellent" ratings, interpretation burden generally exhibited "minimal" ratings, and training burden and brevity exhibited mixed ratings across measures. Conclusion: Measures of readiness for implementation and its sub-constructs used in mental health and behavioral health care are unevenly distributed, exhibit unknown or low psychometric quality, and demonstrate mixed pragmatic properties. This review identified a few promising measures, but targeted efforts are needed to systematically develop and test measures that are useful for both research and practice. Plain language abstract: Successful implementation of effective mental health or behavioral health treatments in service delivery settings depends in part on the readiness of the service providers and administrators to implement the treatment; the engagement of organizational leaders in the implementation effort; the resources available to support implementation, such as time, money, space, and training; and the accessibility of knowledge and information among service providers about the treatment and how it works. It is important that the methods for measuring these factors are dependable, accurate, and practical; otherwise, we cannot assess their presence or strength with confidence or know whether efforts to increase their presence or strength have worked. This systematic review of published studies sought to identify and evaluate the quality of questionnaires (referred to as measures) that assess readiness for implementation, leadership engagement, available resources, and access to knowledge and information. We identified 56 measures of these factors and rated their quality in terms of how dependable, accurate, and practical they are. Our findings indicate there is much work to be done to improve the quality of available measures; we offer several recommendations for doing so.

11.
Adm Policy Ment Health ; 47(3): 366-379, 2020 05.
Article in English | MEDLINE | ID: mdl-31721005

ABSTRACT

This study explored mental health professionals' perceptions about barriers and facilitators to engaging underserved populations. Responses were coded using an iterative thematic analysis based on grounded theory. Results revealed that many professionals endorsed barriers to engaging ethnic minorities and families receiving social services. Client-provider racial and linguistic matching, therapy processes and procedures (e.g., nonjudgmental stance), and implementation supports (e.g., supervision) were commonly nominated as engagement facilitators. Many professionals felt that an organizational culture focused on productivity is detrimental to client engagement. Findings shed light on professionals' perceived barriers to delivering high-quality care to underserved communities and illuminate potential engagement strategies.


Subject(s)
Attitude of Health Personnel , Community Mental Health Services , Health Personnel/psychology , Medically Underserved Area , Vulnerable Populations , Adult , Female , Health Status Disparities , Humans , Interviews as Topic , Male , Middle Aged , Qualitative Research
12.
J Behav Health Serv Res ; 46(4): 607-624, 2019 10.
Article in English | MEDLINE | ID: mdl-31037479

ABSTRACT

Existing measures of attitudes toward evidence-based practices (EBPs) assess attitudes toward manualized or research-based treatments. Providers of youth behavioral health (N = 282) completed the Valued Practices Inventory (VPI), a new measure of provider attitudes toward specific practices for youth that avoids mention of EBPs by listing specific therapies-some of which are drawn from EBPs (e.g., problem solving) and some of which are not included in EBPs (e.g., dream interpretation). Exploratory factor analysis revealed two factors: practices derived from the evidence base (PDEB) and alternative techniques (AT). The PDEB scale was significantly correlated with scales on the Evidence-Based Practice Attitude Scale-50 (Aarons et al. in Administration and Policy in Mental Health and Mental Health Services Research, 39(5): 331-340, 2012), whereas the AT scale was not. Attitudes toward PDEB and AT were also related to provider characteristics such as years of experience and work setting. The VPI offers a complementary approach to existing measures of attitudes because it avoids mention of EBPs, which may help prevent biases in responses.


Subject(s)
Attitude of Health Personnel , Health Personnel/psychology , Psychology, Adolescent/methods , Self Report/standards , Adult , Aged , Evidence-Based Practice , Factor Analysis, Statistical , Female , Health Behavior , Humans , Male , Middle Aged , Psychometrics , Reproducibility of Results , Young Adult
13.
Adm Policy Ment Health ; 46(3): 391-410, 2019 05.
Article in English | MEDLINE | ID: mdl-30710173

ABSTRACT

There is strong enthusiasm for utilizing implementation science in the implementation of evidence-based programs in children's community mental health, but there remains work to be done to improve the process. Despite the proliferation of implementation frameworks, there is limited literature providing case examples of overcoming implementation barriers. This article examines whether the use of three implementation strategies (a structured training and coaching program, professional development portfolios for coaching, and a progress monitoring data system) helps to overcome barriers to implementation by facilitating four implementation drivers at a community mental health agency. Results suggest that implementation is a process of recognizing and adapting to both predictable and unpredictable barriers. Furthermore, the use of these implementation strategies is important in improving implementation outcomes.


Subject(s)
Community Mental Health Services/organization & administration , Evidence-Based Practice/organization & administration , Child , Clinical Competence , Community Mental Health Services/standards , Evidence-Based Practice/standards , Humans , Leadership , Mentors , Organizational Case Studies , Staff Development/organization & administration
14.
BMC Health Serv Res ; 18(1): 882, 2018 Nov 22.
Article in English | MEDLINE | ID: mdl-30466422

ABSTRACT

CONTEXT: Implementation science measures are rarely used by stakeholders to inform and enhance clinical program change. Little is known about what makes implementation measures pragmatic (i.e., practical) for use in community settings; thus, the present study's objective was to generate a clinical stakeholder-driven operationalization of a pragmatic measures construct. EVIDENCE ACQUISITION: The pragmatic measures construct was defined using: 1) a systematic literature review to identify dimensions of the construct using PsycINFO and PubMed databases, and 2) interviews with an international stakeholder panel (N = 7) who were asked about their perspectives of pragmatic measures. EVIDENCE SYNTHESIS: Combined results from the systematic literature review and stakeholder interviews revealed a final list of 47 short statements (e.g., feasible, low cost, brief) describing pragmatic measures, which will allow for the development of a rigorous, stakeholder-driven conceptualization of the pragmatic measures construct. CONCLUSIONS: Results revealed significant overlap between terms related to the pragmatic construct in the existing literature and stakeholder interviews. However, a number of terms were unique to each methodology. This underscores the importance of understanding stakeholder perspectives of criteria measuring the pragmatic construct. These results will be used to inform future phases of the project where stakeholders will determine the relative importance and clarity of each dimension of the pragmatic construct, as well as their priorities for the pragmatic dimensions. Taken together, these results will be incorporated into a pragmatic rating system for existing implementation science measures to support implementation science and practice.


Subject(s)
Feedback , Implementation Science , Communication , Female , Humans , Male , Middle Aged , Research Design
15.
Syst Rev ; 7(1): 66, 2018 04 25.
Article in English | MEDLINE | ID: mdl-29695295

ABSTRACT

BACKGROUND: Implementation science is the study of strategies used to integrate evidence-based practices into real-world settings (Eccles and Mittman, Implement Sci. 1(1):1, 2006). Central to the identification of replicable, feasible, and effective implementation strategies is the ability to assess the impact of contextual constructs and intervention characteristics that may influence implementation, but several measurement issues make this work quite difficult. For instance, it is unclear which constructs have no measures and which measures have any evidence of psychometric properties like reliability and validity. As part of a larger set of studies to advance implementation science measurement (Lewis et al., Implement Sci. 10:102, 2015), we will complete systematic reviews of measures that map onto the Consolidated Framework for Implementation Research (Damschroder et al., Implement Sci. 4:50, 2009) and the Implementation Outcomes Framework (Proctor et al., Adm Policy Ment Health. 38(2):65-76, 2011), the protocol for which is described in this manuscript. METHODS: Our primary databases will be PubMed and Embase. Our search strings will comprise five levels: (1) the outcome or construct term; (2) terms for measure; (3) terms for evidence-based practice; (4) terms for implementation; and (5) terms for mental health. Two trained research specialists will independently review all titles and abstracts followed by full-text review for inclusion. The research specialists will then conduct measure-forward searches using the "cited by" function to identify all published empirical studies using each measure. The measure and associated publications will be compiled in a packet for data extraction. Data relevant to our Psychometric and Pragmatic Evidence Rating Scale (PAPERS) will be independently extracted and then rated using a worst score counts methodology reflecting "poor" to "excellent" evidence. 
DISCUSSION: We will build a centralized, accessible, searchable repository through which researchers, practitioners, and other stakeholders can identify psychometrically and pragmatically strong measures of implementation contexts, processes, and outcomes. By facilitating the employment of psychometrically and pragmatically strong measures identified through this systematic review, the repository would enhance the cumulativeness, reproducibility, and applicability of research findings in the rapidly growing field of implementation science.
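
The "worst score counts" rule named in the methods above can be sketched as follows. This is an illustrative reading of the protocol, not code from it: when a criterion has been rated across several known empirical uses of a measure, the lowest observed rating is the one credited to the measure.

```python
# Illustrative sketch of a "worst score counts" aggregation rule:
# across all known empirical uses of a measure, the minimum rating
# observed for a criterion is the rating that counts.

def worst_score_counts(ratings_per_use: list) -> int:
    """Return the rating credited to a criterion: the minimum across
    all empirical uses that reported evidence for it."""
    if not ratings_per_use:
        raise ValueError("no ratings available for this criterion")
    return min(ratings_per_use)

# A hypothetical measure whose internal consistency was rated 4, 2,
# and 3 across three studies is credited with a 2 under this rule:
print(worst_score_counts([4, 2, 3]))  # → 2
```

The conservatism of this rule means a single weak empirical use caps a measure's credited evidence, which is consistent with the review's emphasis on identifying measures whose quality holds up across contexts.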


Subject(s)
Evidence-Based Practice , Health Plan Implementation , Systematic Reviews as Topic , Humans , Health Plan Implementation/methods
17.
Adm Policy Ment Health ; 45(1): 48-61, 2018 01.
Article in English | MEDLINE | ID: mdl-27631610

ABSTRACT

Numerous trials demonstrate that monitoring client progress and using feedback for clinical decision-making enhances treatment outcomes, but available data suggest these practices are rare in clinical settings and no psychometrically validated measures exist for assessing attitudinal barriers to these practices. This national survey of 504 clinicians collected data on attitudes toward and use of monitoring and feedback. Two new measures were developed and subjected to factor analysis: The monitoring and feedback attitudes scale (MFA), measuring general attitudes toward monitoring and feedback, and the attitudes toward standardized assessment scales-monitoring and feedback (ASA-MF), measuring attitudes toward standardized progress tools. Both measures showed good fit to their final factor solutions, with excellent internal consistency for all subscales. Scores on the MFA subscales (Benefit, Harm) indicated that clinicians hold generally positive attitudes toward monitoring and feedback, but scores on the ASA-MF subscales (Clinical Utility, Treatment Planning, Practicality) were relatively neutral. Providers with cognitive-behavioral theoretical orientations held more positive attitudes. Only 13.9% of clinicians reported using standardized progress measures at least monthly and 61.5% never used them. Providers with more positive attitudes reported higher use, providing initial support for the predictive validity of the ASA-MF and MFA. Thus, while clinicians report generally positive attitudes toward monitoring and feedback, routine collection of standardized progress measures remains uncommon. Implications for the dissemination and implementation of monitoring and feedback systems are discussed.


Subject(s)
Attitude of Health Personnel , Clinical Decision-Making , Feedback , Mental Disorders/therapy , Practice Patterns, Physicians' , Psychotherapy , Adult , Aged , Aged, 80 and over , Evidence-Based Practice , Female , Humans , Male , Middle Aged , Treatment Outcome
18.
Assessment ; 25(1): 126-138, 2018 Jan.
Article in English | MEDLINE | ID: mdl-26969687

ABSTRACT

OBJECTIVE: The objective of this study was to create the Korean version of the Modified Practice Attitudes Scale (K-MPAS) to measure clinicians' attitudes toward evidence-based treatments (EBTs) in the Korean mental health system. METHOD: Using 189 U.S. therapists and 283 members from the Korean mental health system, we examined the reliability and validity of the MPAS scores. We also conducted the first exploratory and confirmatory factor analysis on the MPAS and compared EBT attitudes across U.S. and Korean therapists. RESULTS: Results revealed that the inclusion of both "reversed-worded" and "non-reversed-worded" items introduced significant method effects that compromised the integrity of the one-factor MPAS model. Problems with the one-factor structure were resolved by eliminating the "non-reversed-worded" items. Reliability and validity were adequate among both Korean and U.S. therapists. Korean therapists also reported significantly more negative attitudes toward EBTs on the MPAS than U.S. therapists. CONCLUSIONS: The K-MPAS is the first questionnaire designed to measure Korean service providers' attitudes toward EBTs to help advance the dissemination of EBTs in Korea. The current study also demonstrated the negative impacts that can be introduced by incorporating oppositely worded items into a scale, particularly with respect to factor structure and detecting significant group differences.
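The method effects described above arise because reverse-worded items must be recoded before scoring, and the recoding does not remove the distinct response style they elicit. A sketch of the standard recoding step for a Likert item (scale endpoints here are assumed for illustration, not taken from the K-MPAS):

```python
def reverse_score(raw: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Recode a reverse-worded Likert item so that higher = more positive attitude."""
    if not scale_min <= raw <= scale_max:
        raise ValueError(f"raw score {raw} outside [{scale_min}, {scale_max}]")
    return scale_max + scale_min - raw

# On a 1-5 scale: strong agreement (5) with a negatively worded item becomes 1
recoded = [reverse_score(r) for r in (1, 3, 5)]  # -> [5, 3, 1]
```

Even after this recoding, oppositely worded items can load on a spurious "method" factor, which is why the study found the one-factor model only held after dropping one wording direction.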


Subject(s)
Attitude of Health Personnel , Evidence-Based Practice , Health Personnel/psychology , Mental Disorders/psychology , Surveys and Questionnaires/standards , Adult , Cross-Cultural Comparison , Factor Analysis, Statistical , Female , Humans , Male , Mental Disorders/therapy , Mental Health Services , Middle Aged , Montana , Psychometrics , Reproducibility of Results , Republic of Korea , Young Adult
19.
Implement Sci ; 12(1): 118, 2017 10 03.
Article in English | MEDLINE | ID: mdl-28974248

ABSTRACT

BACKGROUND: Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria's clarity and importance. METHODS: Twenty-four stakeholders with expertise in implementation practice completed a concept mapping activity wherein they organized the initial list of 47 criteria into conceptually distinct categories and rated their clarity and importance. Multidimensional scaling, hierarchical cluster analysis, and descriptive statistics were used to analyze the data. FINDINGS: The 47 criteria were meaningfully grouped into four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful. Average ratings of clarity and importance at the category and individual criteria level will be presented. CONCLUSIONS: This study advances the field of implementation science and practice by providing clear and conceptually distinct domains of the pragmatic measure construct. Next steps will include a Delphi process to develop consensus on the most important criteria and the development of quantifiable pragmatic rating criteria that can be used to assess measures.
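The concept mapping analysis above groups criteria by how often stakeholders sorted them into the same pile, then applies multidimensional scaling and hierarchical cluster analysis. A minimal sketch of the clustering step, using an invented co-sorting matrix for six criteria (the real study used 47 criteria and 24 stakeholders):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy co-sorting matrix: entry [i, j] is the fraction of stakeholders
# who placed criteria i and j in the same pile (1.0 on the diagonal).
co_sort = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.2, 0.1],
    [0.9, 1.0, 0.7, 0.2, 0.1, 0.2],
    [0.8, 0.7, 1.0, 0.1, 0.2, 0.1],
    [0.1, 0.2, 0.1, 1.0, 0.8, 0.9],
    [0.2, 0.1, 0.2, 0.8, 1.0, 0.7],
    [0.1, 0.2, 0.1, 0.9, 0.7, 1.0],
])
distance = 1.0 - co_sort                      # similarity -> dissimilarity
condensed = squareform(distance, checks=False)
Z = linkage(condensed, method="average")      # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # cut tree into 2 categories
```

Criteria frequently sorted together end up with the same cluster label; in the study, this procedure yielded the four categories (acceptable, compatible, easy, useful).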


Subject(s)
Health Plan Implementation/methods , Health Services Research/methods , Stakeholder Participation , Cluster Analysis , Humans , Interviews as Topic
20.
Implement Sci ; 12(1): 108, 2017 08 29.
Article in English | MEDLINE | ID: mdl-28851459

ABSTRACT

BACKGROUND: Implementation outcome measures are essential for monitoring and evaluating the success of implementation efforts. Yet, currently available measures lack conceptual clarity and have largely unknown reliability and validity. This study developed and psychometrically assessed three new measures: the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM). METHODS: Thirty-six implementation scientists and 27 mental health professionals assigned 31 items to the constructs and rated their confidence in their assignments. The Wilcoxon one-sample signed rank test was used to assess substantive and discriminant content validity. Exploratory and confirmatory factor analysis (EFA and CFA) and Cronbach alphas were used to assess the validity of the conceptual model. Three hundred twenty-six mental health counselors read one of six randomly assigned vignettes depicting a therapist contemplating adopting an evidence-based practice (EBP). Participants used 15 items to rate the therapist's perceptions of the acceptability, appropriateness, and feasibility of adopting the EBP. CFA and Cronbach alphas were used to refine the scales, assess structural validity, and assess reliability. Analysis of variance (ANOVA) was used to assess known-groups validity. Finally, half of the counselors were randomly assigned to receive the same vignette and the other half the opposite vignette; and all were asked to re-rate acceptability, appropriateness, and feasibility. Pearson correlation coefficients were used to assess test-retest reliability and linear regression to assess sensitivity to change. RESULTS: All but five items exhibited substantive and discriminant content validity. A trimmed CFA with five items per construct exhibited acceptable model fit (CFI = 0.98, RMSEA = 0.08) and high factor loadings (0.79 to 0.94). The alphas for 5-item scales were between 0.87 and 0.89. 
Scale refinement based on measure-specific CFAs and Cronbach alphas using vignette data produced 4-item scales (α's from 0.85 to 0.91). A three-factor CFA exhibited acceptable fit (CFI = 0.96, RMSEA = 0.08) and high factor loadings (0.75 to 0.89), indicating structural validity. ANOVA showed significant main effects, indicating known-groups validity. Test-retest reliability coefficients ranged from 0.73 to 0.88. Regression analysis indicated each measure was sensitive to change in both directions. CONCLUSIONS: The AIM, IAM, and FIM demonstrate promising psychometric properties. Predictive validity assessment is planned.
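The test-retest reliability coefficients above (0.73 to 0.88) are Pearson correlations between scale scores at two administrations. A brief sketch with invented scores for eight hypothetical respondents (not data from the AIM/IAM/FIM study):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical 4-item scale means for 8 respondents at two time points
time1 = np.array([3.2, 4.1, 2.8, 4.6, 3.9, 2.5, 4.0, 3.4])
time2 = np.array([3.4, 4.0, 3.0, 4.5, 3.7, 2.7, 4.2, 3.3])

r, p_value = pearsonr(time1, time2)  # r near 1 indicates stable scores over time
```

A coefficient in the study's reported range (0.73 to 0.88) indicates that respondents' scores were reasonably stable across the two administrations.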


Subject(s)
Health Plan Implementation/methods , Health Plan Implementation/statistics & numerical data , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/statistics & numerical data , Surveys and Questionnaires , Factor Analysis, Statistical , Feasibility Studies , Female , Humans , Male , Psychometrics , Reproducibility of Results