ABSTRACT
BACKGROUND: Initial training and ongoing post-training consultation (i.e., ongoing support following training, provided by an expert) are among the most common implementation strategies used to change clinician practice. However, extant research has not experimentally investigated the optimal dosages of consultation necessary to produce desired outcomes. Moreover, the degree to which training and consultation engage theoretical implementation mechanisms, such as provider knowledge, skills, and attitudes, is not well understood. This study examined the effects of a brief online training and varying dosages of post-training consultation (BOLT+PTC) on implementation mechanisms and outcomes for measurement-based care (MBC) practices delivered in the context of education sector mental health services.

METHODS: A national sample of 75 clinicians who provide mental health interventions to children and adolescents in schools were randomly assigned to BOLT+PTC or control (services as usual). Those in BOLT+PTC were further randomized to 2-, 4-, or 8-week consultation conditions. Self-reported MBC knowledge, skills, attitudes, and use (including standardized assessment, individualized assessment, and assessment-informed treatment modification) were collected for 32 weeks. Multilevel models were used to examine main effects of BOLT+PTC versus control on MBC use at the end of consultation and over time, as well as comparisons among PTC dosage conditions and theorized mechanisms (skills, attitudes, knowledge).

RESULTS: There was a significant linear effect of BOLT+PTC over time on standardized assessment use (b = .02, p < .01) and a significant quadratic effect of BOLT+PTC over time on individualized assessment use (b = .04, p < .001), but no significant effect on treatment modification. BOLT plus any level of PTC resulted in higher MBC knowledge and larger growth in MBC skill over the intervention period compared to control. PTC dosage levels were inconsistently predictive of outcomes, providing no clear evidence for added benefit of higher PTC dosage.

CONCLUSIONS: Online training and consultation in MBC affected standardized and individualized assessment use among clinicians compared to services as usual, with no consistent benefit detected for increased consultation dosage. Continued research investigating optimal dosages and mechanisms of these established implementation strategies is needed to ensure training and consultation resources are deployed efficiently to impact clinician practices.

TRIAL REGISTRATION: ClinicalTrials.gov NCT05041517. Retrospectively registered on 10 September 2021.
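The abstract does not reproduce the modeling code or variable names. As a minimal sketch of the analytic approach described (a multilevel growth model with linear and quadratic time terms and a condition-by-time interaction), the example below uses statsmodels with hypothetical column names (clinician_id, week, condition, std_assessment_use) and a hypothetical data file; it is an illustration, not the authors' analysis.

```python
# Illustrative sketch (not the authors' code): a multilevel growth model of
# weekly standardized-assessment use, with clinicians as the grouping factor.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mbc_weekly_observations.csv")  # hypothetical repeated-measures file

# Center week and add a quadratic term so linear and quadratic growth
# can be estimated separately, as in the reported analyses.
df["week_c"] = df["week"] - df["week"].mean()
df["week_c2"] = df["week_c"] ** 2

# Random intercept and linear slope for each clinician; fixed effects for
# condition (BOLT+PTC vs. control) and its interactions with time.
model = smf.mixedlm(
    "std_assessment_use ~ condition * (week_c + week_c2)",
    data=df,
    groups=df["clinician_id"],
    re_formula="~week_c",
)
result = model.fit()
print(result.summary())
```

The condition-by-time interaction terms in this sketch correspond to the reported linear and quadratic effects of BOLT+PTC over time.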
ABSTRACT
Clinician bias is a contributor to health care inequities, but research on racial-ethnic bias among mental health professionals, especially toward minoritized youths, is limited. This column describes two studies involving mental health clinicians in schools, where most youths access mental health services. Study 1 used a mixed-methods approach to identify stereotypes about Black and Latinx youths salient to clinicians (e.g., academic failure; anger and aggression). In study 2, the authors developed four Implicit Association Tests to assess clinicians' implicit prejudice and stereotyping of Black and Latinx youths and found pro-White and anti-Black/Latinx bias at levels similar to those of other health care providers and the general population.
Subject(s)
Attitude of Health Personnel, Racism, Humans, Adolescent, Healthcare Disparities, Implicit Bias, Mental Health, Racism/psychology, Health Personnel/psychology, Schools

ABSTRACT
Clinician bias has been identified as a potential contributor to persistent healthcare disparities across many medical specialties and service settings. Few studies have examined strategies to reduce clinician bias, especially in mental healthcare, despite decades of research evidencing service and outcome disparities in adult and pediatric populations. This manuscript describes an intervention development study and a pilot feasibility trial of the Virtual Implicit Bias Reduction and Neutralization Training (VIBRANT) for mental health clinicians in schools, where most youth in the U.S. access mental healthcare. Clinicians (N = 12) in the feasibility study, a non-randomized open trial, rated VIBRANT as highly usable, appropriate, acceptable, and feasible for their school-based practice. Preliminarily, clinicians appeared to demonstrate improvements in implicit bias knowledge, use of bias-management strategies, and implicit biases (as measured by the Implicit Association Test [IAT]) post-training. Moreover, putative mediators (e.g., clinicians' use of VIBRANT strategies, IAT D scores) and outcome variables (e.g., clinician-rated quality of rapport) generally demonstrated correlations in the expected directions. These pilot results suggest that brief, highly scalable online interventions such as VIBRANT are feasible and promising for addressing implicit bias among healthcare providers (e.g., mental health clinicians) and may have downstream impacts on minoritized youths' care experiences.
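The abstract cites IAT D scores as a putative mediator without reproducing the scoring procedure. The sketch below is a simplified illustration only: it assumes the core logic of D scoring (the latency difference between incompatible and compatible blocks scaled by their pooled standard deviation) and omits the error-penalty and latency-exclusion rules of the full improved scoring algorithm; the data are invented.

```python
# Simplified, illustrative D-score computation for a single participant.
# NOT the full improved IAT scoring algorithm (no error penalties or
# latency trimming); it only shows the core idea of the effect size.
import numpy as np

def iat_d_score(compatible_rt_ms: np.ndarray, incompatible_rt_ms: np.ndarray) -> float:
    """Return a D-like effect size; positive values indicate slower
    responding in the incompatible block (interpreted as implicit bias)."""
    pooled_sd = np.std(np.concatenate([compatible_rt_ms, incompatible_rt_ms]), ddof=1)
    return float((incompatible_rt_ms.mean() - compatible_rt_ms.mean()) / pooled_sd)

# Hypothetical response latencies (milliseconds) for the two critical blocks.
compatible = np.array([620, 580, 655, 700, 610, 590, 640])
incompatible = np.array([720, 690, 760, 810, 700, 675, 745])
print(round(iat_d_score(compatible, incompatible), 2))  # roughly 1.5 with this toy data
```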
Subject(s)
Implicit Bias, Internet-Based Intervention, Adolescent, Adult, Attitude of Health Personnel, Child, Feasibility Studies, Healthcare Disparities, Humans, Mental Health, Pilot Projects

ABSTRACT
BACKGROUND: Implementation strategies have flourished in an effort to increase integration of research evidence into clinical practice. Most strategies are complex, socially mediated processes. Many are complicated, expensive, and ultimately impractical to deliver in real-world settings. The field lacks methods to assess the extent to which strategies are usable and aligned with the needs and constraints of the individuals and contexts that will deliver or receive them. Drawn from the field of human-centered design, cognitive walkthroughs are an efficient assessment method with potential to identify aspects of strategies that may inhibit their usability and, ultimately, effectiveness. This article presents a novel walkthrough methodology for evaluating strategy usability, as well as an example application to a post-training consultation strategy to support school mental health clinicians in adopting measurement-based care.

METHOD: The Cognitive Walkthrough for Implementation Strategies (CWIS) is a pragmatic, mixed-methods approach for evaluating complex, socially mediated implementation strategies. CWIS includes six steps: (1) determine preconditions; (2) hierarchical task analysis; (3) task prioritization; (4) convert tasks to scenarios; (5) pragmatic group testing; and (6) usability issue identification, classification, and prioritization. A facilitator conducted two group testing sessions with clinician users (N = 10), guiding participants through 6 scenarios and 11 associated subtasks. Clinicians reported their anticipated likelihood of completing each subtask and provided qualitative justifications during group discussion. Following the walkthrough sessions, users completed an adapted quantitative assessment of strategy usability.

RESULTS: Average anticipated success ratings indicated substantial variability across participants and subtasks. Usability ratings (scale 0-100) of the consultation protocol averaged 71.3 (SD = 10.6). Twenty-one usability problems were identified via qualitative content analysis with consensus coding and classified by severity and problem type. High-severity problems included potential misalignment between consultation and clinical service timelines, as well as digressions during consultation processes.

CONCLUSIONS: CWIS quantitative usability ratings indicated that the consultation protocol was at the low end of the "acceptable" range (based on norms from the unadapted scale). Collectively, the 21 resulting usability issues explained the quantitative usability data and provided specific direction for usability enhancements. The current study provides preliminary evidence for the utility of CWIS to assess strategy usability and generate a blueprint for redesign.
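The unadapted 0-100 instrument is not specified in the abstract. Assuming a SUS-style measure as the basis of the adapted usability assessment (an assumption, not a detail from the source), the sketch below converts ten 1-5 item responses to a 0-100 score and checks the sample mean against a commonly cited "acceptable" benchmark of roughly 70; all ratings shown are invented.

```python
# Illustrative SUS-style scoring (assumed basis of the adapted usability
# measure; the actual instrument may differ). Ten items rated 1-5;
# odd-numbered items are positively worded, even-numbered items negatively worded.
from statistics import mean, stdev

def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 Likert responses to a 0-100 usability score."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical ratings from a few clinician participants.
ratings = [
    sus_score([4, 2, 4, 2, 3, 2, 4, 3, 4, 2]),
    sus_score([3, 3, 4, 2, 4, 2, 3, 2, 4, 3]),
    sus_score([4, 2, 3, 3, 4, 1, 4, 2, 5, 2]),
]
m, sd = mean(ratings), stdev(ratings)
# Roughly 70+ is often treated as "acceptable" on the unadapted scale (assumed benchmark).
print(f"M = {m:.1f}, SD = {sd:.1f}, acceptable: {m >= 70}")
```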