Results 1 - 5 of 5
1.
BMC Public Health; 23(1): 183, 2023 Jan 27.
Article in English | MEDLINE | ID: mdl-36707792

ABSTRACT

An Emergency Use Authorization (EUA), granted by the U.S. Food and Drug Administration, can be used only after a declaration that a specific set of circumstances exists to justify the authorization. In 2020, the COVID-19 pandemic demanded rapid communication strategies to promote treatment options available through EUA. Despite the authorizations of available monoclonal antibody (mAb) treatments in November 2020, their rate of adoption among health care providers in the U.S. remained low well into 2021. This study examines the accelerators of and barriers to provider adoption of COVID-19 treatment so that adoption of treatments in future emerging public health emergencies may be better communicated and hastened. We established a framework informed by adoption accelerators and barriers identified by Diffusion of Innovations (DoI) Theory and conducted the study during the rapidly evolving COVID-19 public health emergency. Most DoI public health research focuses on chronic health issues, and the theory has yet to be applied to provider adoption of new treatments under EUA. Through a series of guided interviews with health care providers, primarily physicians and nurse practitioners who were responsible for referring COVID-19 patients, we extracted tools, processes, and other mechanisms (accelerators) as well as barriers to validate against our DoI framework and fill the gap regarding emergency situations. Our research found that providers supported by large health systems were more inclined to adopt the treatments, owing to contributing factors such as the availability of collaborative support and of information. Communicating evidence-based summaries of treatment options and related processes was also critical to adoption.


Subject(s)
COVID-19; Public Health; Humans; COVID-19/epidemiology; Pandemics; COVID-19 Drug Treatment; Health Personnel
2.
J Gen Intern Med; 35(Suppl 2): 788-795, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32875505

ABSTRACT

BACKGROUND: Clinical decision support (CDS) is a promising tool for reducing antibiotic prescribing for acute respiratory infections (ARIs). OBJECTIVE: To assess the impact of previously effective CDS on antibiotic-prescribing rates for ARIs when adapted and implemented in diverse primary care settings. DESIGN: Cluster randomized clinical trial (RCT) implementing a CDS tool designed to guide evidence-based evaluation and treatment of streptococcal pharyngitis and pneumonia. SETTING: Two large academic health system primary care networks with a mix of providers. PARTICIPANTS: All primary care practices within each health system were invited; all providers within participating clinics were considered participants. Practices were randomly assigned to a control or an intervention group. INTERVENTIONS: Intervention practice providers had access to an integrated clinical prediction rule (iCPR) system designed to determine the risk of bacterial infection from a reason for visit of sore throat, cough, or upper respiratory infection and to guide evidence-based evaluation and treatment. MAIN OUTCOME(S): Change in overall antibiotic prescription rates. MEASURE(S): Frequency, rates, and type of antibiotics prescribed in the intervention and control groups. RESULTS: In total, 33 primary care practices participated, with 541 providers and 100,573 patient visits. Intervention providers completed the tool in 6.9% of eligible visits. Antibiotics were prescribed in 35% and 36% of intervention and control visits, respectively, a difference that was not statistically significant. There were also no differences between groups in rates of orders for rapid streptococcal tests (RR, 0.94; P = 0.11) or chest X-rays (RR, 1.01; P = 0.999). CONCLUSIONS: The iCPR tool was not effective in reducing antibiotic prescription rates for upper respiratory infections in diverse primary care settings, which has implications for the generalizability of CDS tools as they are adapted to heterogeneous clinical contexts. TRIAL REGISTRATION: ClinicalTrials.gov (NCT02534987). Registered August 26, 2015, at https://clinicaltrials.gov.


Subject(s)
Decision Support Systems, Clinical; Respiratory Tract Infections; Anti-Bacterial Agents/therapeutic use; Humans; Practice Patterns, Physicians'; Primary Health Care; Respiratory Tract Infections/diagnosis; Respiratory Tract Infections/drug therapy; Respiratory Tract Infections/epidemiology
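
The iCPR trial above reports its test-ordering comparisons as relative risks (RR) between intervention and control groups. As a minimal illustration of that comparison, the Python sketch below computes a relative risk from aggregate counts; the counts and names are hypothetical placeholders chosen only to mirror the reported RR of 0.94, not data from the trial.

```python
# Hypothetical illustration of the relative-risk comparison reported above.
# The visit counts below are placeholders, not data from the iCPR trial.

def relative_risk(events_a: int, total_a: int, events_b: int, total_b: int) -> float:
    """Risk of an event in group A divided by the risk in group B."""
    return (events_a / total_a) / (events_b / total_b)

# Example: rapid streptococcal test orders, intervention vs. control visits
# (placeholder counts chosen only to reproduce an RR of 0.94).
rr = relative_risk(events_a=4_700, total_a=50_000, events_b=5_000, total_b=50_000)
print(f"Relative risk: {rr:.2f}")  # -> Relative risk: 0.94
```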
3.
JMIR Hum Factors; 6(2): e12471, 2019 Apr 15.
Article in English | MEDLINE | ID: mdl-30985283

ABSTRACT

BACKGROUND: The potential of electronic health records (EHRs) and clinical decision support (CDS) systems to improve the practice of medicine has been tempered by poor design and the resulting burden they place on providers. CDS is rarely tested in the real clinical environment. As a result, many tools are hard to use, placing strain on providers and resulting in low adoption rates. The existing CDS usability literature relies primarily on expert opinion and provider feedback via survey. This is the first study to evaluate CDS usability and the provider-computer-patient interaction with complex CDS in the real clinical environment. OBJECTIVE: This study aimed to further understand the barriers to and facilitators of meaningful CDS usage within a real clinical context. METHODS: This qualitative observational study was conducted with 3 primary care providers during 6 patient care sessions. In patients with a chief complaint of sore throat, a CDS tool built with the Centor Score was used to stratify the risk of group A Streptococcus pharyngitis. In patients with a chief complaint of cough or upper respiratory tract infection, a CDS tool built with the Heckerling Rule was used to stratify the risk of pneumonia. During usability testing, all human-computer interactions, including audio and continuous screen capture, were recorded using Camtasia software. Participants' comments and interactions with the tool during clinical sessions, along with their comments during a brief postsession interview, were placed into coding categories and analyzed for generalizable themes. RESULTS: In the 6 encounters observed, primary care providers toggled between addressing the computer and the patient during the visit. Minimal time was spent listening to the patient without engaging the EHR. Participants mostly used the CDS tool with the patient, asking questions to populate the calculator and discussing the results of the risk assessment; they reported the ability to do this as the major benefit of the tool. All providers were interrupted during their use of the CDS tool by the need to refer to other sections of the chart. In half of the visits, patients' clinical symptoms challenged the applicability of the tool for calculating the risk of bacterial infection. Primary care providers rarely used the incorporated incentives for CDS usage, including progress notes and patient instructions. CONCLUSIONS: Live usability testing of these CDS tools generated insights about their role in the patient-provider interaction. CDS may contribute to the interaction by being viewed simultaneously by the provider and patient. CDS can improve usability and lessen the strain it places on providers by being short, flexible, and customizable to individual provider workflows. CDS is most useful when it is as widely applicable as possible and when its functions represent the fastest way to perform a particular task.
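
The sore-throat tool described above is built on the Centor Score, which assigns one point for each of four clinical findings. The Python sketch below shows how such a score might be computed; the field names and the suggested-action thresholds are illustrative assumptions, not the study's CDS implementation.

```python
# A minimal sketch of a Centor-style score for group A Streptococcus pharyngitis risk.
# The suggested-action text is illustrative and does not reproduce the study's CDS logic.

from dataclasses import dataclass

@dataclass
class SoreThroatFindings:
    tonsillar_exudates: bool
    tender_anterior_cervical_nodes: bool
    history_of_fever: bool   # temperature above 38 degrees C
    absence_of_cough: bool

def centor_score(f: SoreThroatFindings) -> int:
    """One point per positive criterion, giving a score from 0 to 4."""
    return sum([f.tonsillar_exudates,
                f.tender_anterior_cervical_nodes,
                f.history_of_fever,
                f.absence_of_cough])

def suggested_action(score: int) -> str:
    # Illustrative thresholds; a real tool pairs the score with local guidance.
    if score <= 1:
        return "low risk: testing generally not indicated"
    if score <= 3:
        return "intermediate risk: consider rapid streptococcal testing"
    return "higher risk: consider testing and further evaluation"

findings = SoreThroatFindings(True, True, True, False)
score = centor_score(findings)
print(f"Centor score {score}: {suggested_action(score)}")
```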

4.
Digit Health; 5: 2055207619827716, 2019.
Article in English | MEDLINE | ID: mdl-30792877

ABSTRACT

OBJECTIVE: In our prior integrated clinical prediction rule study, we employed an agile, user-centered approach to the design of a clinical decision support tool and achieved high adoption rates. To understand whether applying this user-centered process to adapt clinical decision support tools is effective in improving the use of clinical prediction rules, we examined utilization rates of a tool adapted from the original integrated clinical prediction rule study tool and assessed whether they approached those of the original study. MATERIALS & METHODS: We conducted pre-deployment usability testing and semi-structured group interviews at 6 months post-deployment with 75 providers at 14 intervention clinics across the two sites to collect user feedback. Qualitative data analysis was bifurcated into immediate and delayed stages; we report immediate-stage findings from real-time field notes, which were used to generate a set of rapid, pragmatic recommendations for iterative refinement. Monthly utilization rates were calculated and examined over 12 months. RESULTS: We hypothesized that a well-validated, user-centered clinical decision support tool would lead to relatively high adoption rates. At 6 months post-deployment, however, utilization rates were substantially lower than the rate anticipated from the original integrated clinical prediction rule study trial (68%), at 17% (Health System A) and 5% (Health System B). User feedback at 6 months resulted in recommendations for tool refinement, which were incorporated into the tool design when possible; however, utilization rates at 12 months post-deployment remained low, at 14% and 4%, respectively. DISCUSSION: Although valuable, these findings demonstrate the limitations of a user-centered approach given the complexity of clinical decision support. CONCLUSION: Strategies for addressing persistent external factors that impact clinical decision support adoption should be considered in addition to the user-centered design and implementation of clinical decision support.
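
The study above tracks monthly utilization rates over 12 months, that is, the share of eligible visits in which the adapted tool was completed. A minimal sketch of that calculation is shown below; the visit-log field names and example months are assumptions for illustration, not the study's data model.

```python
# Hypothetical sketch of a monthly utilization-rate calculation:
# tool completions divided by eligible visits, grouped by calendar month.
# Field names (month, tool_completed) are illustrative assumptions.

from collections import defaultdict

eligible_visits = [
    {"month": "2017-01", "tool_completed": True},
    {"month": "2017-01", "tool_completed": False},
    {"month": "2017-02", "tool_completed": False},
    {"month": "2017-02", "tool_completed": False},
]

eligible = defaultdict(int)
completed = defaultdict(int)
for visit in eligible_visits:
    eligible[visit["month"]] += 1
    completed[visit["month"]] += int(visit["tool_completed"])

for month in sorted(eligible):
    rate = completed[month] / eligible[month]
    print(f"{month}: {rate:.0%} utilization")
```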

5.
Int J Med Inform; 106: 1-8, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28870378

ABSTRACT

OBJECTIVES: Low provider adoption continues to be a significant barrier to realizing the potential of clinical decision support. "Think Aloud" and "Near Live" usability testing were conducted on two clinical decision support tools. Each was composed of an alert, a clinical prediction rule that estimated the risk of either group A Streptococcus pharyngitis or pneumonia, and an automatic order set based on that risk. The objective of this study was to further our understanding of the facilitators of usability and to evaluate the types of additional information gained from proceeding to "Near Live" testing after completing "Think Aloud" testing. METHODS: This was a qualitative observational study conducted at a large academic health care system with 12 primary care providers. During "Think Aloud" testing, participants were provided with written clinical scenarios and asked to verbalize their thought process while interacting with the tool. During "Near Live" testing, participants interacted with a mock patient. Morae usability software was used to record full screen capture and audio during every session. Participant comments were placed into coding categories and analyzed for generalizable themes. Themes were compared across usability methods. RESULTS: "Think Aloud" and "Near Live" usability testing generated similar themes under the coding categories visibility, workflow, content, understandability, and navigation. However, they generated significantly different themes under the coding categories usability, practical usefulness, and medical usefulness. During both types of testing, participants found the tool easier to use when important text was distinct in its appearance, alerts were passive and appropriately timed, content was up to date, language was clear and simple, and each component of the tool included obvious indicators of next steps. Participant comments reflected higher expectations for usability and usefulness during "Near Live" testing. For example, visit aids such as automatically generated order sets were felt to be less useful during "Near Live" testing because they would not be all-inclusive for the visit. CONCLUSIONS: These complementary types of usability testing generated unique and generalizable insights. Feedback during "Think Aloud" testing primarily helped to improve the tools' ease of use. The additional feedback from "Near Live" testing, which mimics a real clinical encounter, was helpful for eliciting key barriers and facilitators to provider workflow and adoption.


Subject(s)
Decision Support Systems, Clinical/statistics & numerical data; Electronic Health Records/statistics & numerical data; Pharyngitis/diagnosis; Pneumonia/diagnosis; Software; Evidence-Based Medicine; Female; Health Personnel; Humans; Male; Middle Aged; Pharyngitis/etiology; Pneumonia/etiology; User-Computer Interface