ABSTRACT
STUDY OBJECTIVE: Early notification of admissions from the emergency department (ED) may allow hospitals to plan for inpatient bed demand. This study aimed to assess Epic's ED Likelihood to Occupy an Inpatient Bed predictive model and its application in improving hospital bed planning workflows. METHODS: All ED adult (18 years and older) visits from September 2021 to August 2022 at a large regional health care system were included. The primary outcome was inpatient admission. The predictive model is a random forest algorithm that uses demographic and clinical features. The model was implemented prospectively, with scores generated every 15 minutes. The areas under the receiver operating characteristic curve (AUROC) and precision-recall curve (AUPRC) were calculated using the maximum score prior to the outcome and for each prediction independently. Test characteristics and lead time were calculated over a range of model score thresholds. RESULTS: Over 11 months, 329,194 encounters were evaluated, with an incidence of inpatient admission of 25.4%. The encounter-level AUROC was 0.849 (95% confidence interval [CI], 0.848 to 0.851), and the AUPRC was 0.643 (95% CI, 0.640 to 0.647). With a prediction horizon of 6 hours, the AUROC was 0.758 (95% CI, 0.758 to 0.759) and the AUPRC was 0.470 (95% CI, 0.469 to 0.471). At a predictive model threshold of 40, the sensitivity was 0.49, the positive predictive value was 0.65, and the median lead-time warning was 127 minutes before the inpatient bed request. CONCLUSION: The Epic ED Likelihood to Occupy an Inpatient Bed model may improve hospital bed planning workflows. Further study is needed to determine its operational effect.
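The evaluation approach described above (maximum score per encounter, threshold-based test characteristics, and lead time before the bed request) can be illustrated with a short sketch. This is a hedged, minimal example on simulated data; the column names, score scale, and simulated incidence are assumptions, not the study's data or code.

```python
# Illustrative sketch (not the study code): roll per-prediction scores up to the
# encounter level, then report discrimination, test characteristics, and lead time.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_enc = 2_000
encounters = pd.DataFrame({"encounter_id": np.arange(n_enc),
                           "admitted": rng.binomial(1, 0.25, n_enc)})  # ~25% incidence

# Several 15-minute predictions per encounter on a hypothetical 0-100 score scale.
preds = encounters.loc[encounters.index.repeat(rng.integers(2, 10, n_enc))].copy()
preds["score"] = np.clip(rng.normal(25 + 30 * preds["admitted"], 20), 0, 100)
preds["minutes_before_bed_request"] = rng.uniform(0, 360, len(preds))

# Encounter-level metrics use the maximum score prior to the outcome.
enc = preds.groupby("encounter_id").agg(score=("score", "max"),
                                        admitted=("admitted", "first"))
print("AUROC", roc_auc_score(enc["admitted"], enc["score"]))
print("AUPRC", average_precision_score(enc["admitted"], enc["score"]))

# Test characteristics and lead time at one example threshold (40).
threshold = 40
flagged = enc["score"] >= threshold
tp = (flagged & (enc["admitted"] == 1)).sum()
print("sensitivity", tp / (enc["admitted"] == 1).sum())
print("PPV", tp / flagged.sum())

# Median warning time: for admitted encounters, minutes between the first prediction
# that crossed the threshold and the inpatient bed request.
crossed = preds[(preds["score"] >= threshold) & (preds["admitted"] == 1)]
print("median lead time (min)",
      crossed.groupby("encounter_id")["minutes_before_bed_request"].max().median())
```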
Subject(s)
Inpatients , Patient Admission , Adult , Humans , Prospective Studies , Hospitalization , Emergency Service, Hospital , Retrospective Studies
ABSTRACT
STUDY OBJECTIVE: Delays in the second dose of antibiotics in the emergency department (ED) are associated with increased morbidity and mortality in patients with serious infections. We analyzed the effect of clinical decision support on preventing delays in second doses of broad-spectrum antibiotics in the ED. METHODS: We allocated adult patients who received cefepime or piperacillin/tazobactam in 9 EDs within an integrated health care system to an electronic alert that reminded ED clinicians to reorder antibiotics at the appropriate interval vs usual care. The primary outcome was the median delay in antibiotic administration. Secondary outcomes were rates of intensive care unit (ICU) admission, hospital mortality, and hospital length of stay. We included a post hoc secondary outcome of the frequency of major delay (>25% of the expected interval for the second antibiotic dose). RESULTS: A total of 1,113 ED patients treated with cefepime or piperacillin/tazobactam were enrolled in the study, of whom 420 remained under ED care when their second dose was due and were included in the final analysis. The clinical decision support tool was associated with reduced antibiotic delays (median difference 35 minutes, 95% confidence interval [CI], 5 to 65). There were no differences in ICU transfers, inpatient mortality, or hospital length of stay. The clinical decision support tool was associated with a decreased probability of major delay (absolute risk reduction 13%, 95% CI, 6 to 20). CONCLUSIONS: The implementation of a clinical decision support alert reminding clinicians to reorder second doses of antibiotics was associated with a reduction in the length and frequency of antibiotic delays in the ED. There was no effect on the rates of ICU transfers, inpatient mortality, or hospital length of stay.
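Under the assumption that a "major delay" means the second dose was given more than 25% of the dosing interval past its due time, the delay definitions can be sketched as follows. The drug intervals and function shape are illustrative, not the study's actual logic.

```python
# Minimal sketch of the delay definitions: delay relative to the expected due time,
# and a "major delay" flag when the delay exceeds 25% of the dosing interval.
from datetime import datetime, timedelta

# Illustrative dosing intervals (assumed, not taken from the study protocol).
DOSING_INTERVAL = {"cefepime": timedelta(hours=8),
                   "piperacillin/tazobactam": timedelta(hours=6)}

def second_dose_delay(drug: str, first_dose: datetime, second_dose: datetime):
    """Return (delay, major_delay) for the second antibiotic dose."""
    interval = DOSING_INTERVAL[drug]
    due_time = first_dose + interval
    delay = max(second_dose - due_time, timedelta(0))
    major_delay = delay > 0.25 * interval
    return delay, major_delay

delay, major = second_dose_delay(
    "cefepime",
    first_dose=datetime(2024, 1, 1, 10, 0),
    second_dose=datetime(2024, 1, 1, 21, 30),
)
print(delay, major)  # 3:30:00 True (3.5 h late vs a 2 h "major delay" cutoff)
```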
Subject(s)
Anti-Bacterial Agents , Hospitalization , Adult , Humans , Anti-Bacterial Agents/therapeutic use , Cefepime , Piperacillin, Tazobactam Drug Combination , Emergency Service, Hospital , Length of Stay , Retrospective Studies
ABSTRACT
BACKGROUND: Isolation of hospitalized persons under investigation (PUIs) for coronavirus disease 2019 (COVID-19) reduces nosocomial transmission risk. Efficient evaluation of PUIs is needed to preserve scarce healthcare resources. We describe the development, implementation, and outcomes of an inpatient diagnostic algorithm and clinical decision support system (CDSS) to evaluate PUIs. METHODS: We conducted a pre-post study of CORAL (COvid Risk cALculator), a CDSS that guides frontline clinicians through a risk-stratified COVID-19 diagnostic workup, removes transmission-based precautions when workup is complete and negative, and triages complex cases to infectious diseases (ID) physician review. Before CORAL, ID physicians reviewed all PUI records to guide workup and precautions. After CORAL, frontline clinicians evaluated PUIs directly using CORAL. We compared pre- and post-CORAL frequency of repeated severe acute respiratory syndrome coronavirus 2 nucleic acid amplification tests (NAATs), time from NAAT result to PUI status discontinuation, total duration of PUI status, and ID physician work hours, using linear and logistic regression, adjusted for COVID-19 incidence. RESULTS: Fewer PUIs underwent repeated testing after an initial negative NAAT after CORAL than before CORAL (54% vs 67%, respectively; adjusted odds ratio, 0.53 [95% confidence interval, .44-.63]; P < .01). CORAL significantly reduced average time to PUI status discontinuation (adjusted difference [standard error], -7.4 [0.8] hours per patient), total duration of PUI status (-19.5 [1.9] hours per patient), and average ID physician work-hours (-57.4 [2.0] hours per day) (all P < .01). No patients had a positive NAAT result within 7 days after discontinuation of precautions via CORAL. CONCLUSIONS: CORAL is an efficient and effective CDSS to guide frontline clinicians through the diagnostic evaluation of PUIs and safe discontinuation of precautions.
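The adjusted pre-post comparison can be sketched as a logistic regression of repeat testing on study period with COVID-19 incidence as a covariate. A minimal, hedged sketch on simulated data follows; the column names and simulated values are assumptions, not the study dataset.

```python
# Hedged sketch of the adjusted analysis: repeat NAAT testing regressed on the
# CORAL period, adjusted for local COVID-19 incidence.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "repeat_naat": rng.integers(0, 2, n),          # repeat testing after negative NAAT
    "post_coral": rng.integers(0, 2, n),           # 0 = pre-CORAL, 1 = post-CORAL
    "covid_incidence": rng.uniform(5, 50, n),      # illustrative cases per 100k
})

model = smf.logit("repeat_naat ~ post_coral + covid_incidence", data=df).fit(disp=0)
odds_ratio = np.exp(model.params["post_coral"])
ci_low, ci_high = np.exp(model.conf_int().loc["post_coral"])
print(f"adjusted OR {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```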
Subject(s)
Anthozoa , COVID-19 , Animals , Humans , Nucleic Acid Amplification Techniques , Odds Ratio , SARS-CoV-2
ABSTRACT
STUDY OBJECTIVE: Tetanus vaccine is the most common vaccination given in the emergency department; yet, administration of tetanus vaccine boosters in the ED may not comply with the US Centers for Disease Control and Prevention's recommended vaccination schedule. We implemented a clinical decision support alert in the electronic health record that warned providers ordering a tetanus vaccine if a prior one had been given within 10 years and studied its efficacy in reducing potentially unnecessary vaccines in the ED. METHODS: This was a retrospective, quasi-experimental, 1-group, pretest-posttest study in 3 hospital EDs in Boston, MA. We studied adult patients for whom tetanus vaccines were ordered despite a history of vaccination within the prior 10 years. We compared the number of potentially unnecessary tetanus vaccine administrations in a baseline phase (when the clinical decision support alert was not visible) versus an intervention phase. RESULTS: Of eligible patients, 22.1% (95% confidence interval [CI] 21.8% to 22.4%) had prior tetanus vaccines within 5 years, 12.8% (95% CI 12.5% to 13.0%) within 5 to 10 years, 3.8% (95% CI 3.6% to 3.9%) more than 10 years ago, and 61.3% (95% CI 60.9% to 61.7%) had no prior tetanus vaccination documentation. Of 60,983 encounters, 337 met the inclusion criteria. A tetanus vaccination was administered in 91% (95% CI 87% to 96%) of encounters in the baseline phase, compared to 55% (95% CI 47% to 62%) during the intervention. The absolute risk reduction was 36.7% (95% CI 28.0% to 45.4%), and the number of encounters needed to alert to avoid 1 potentially unnecessary tetanus vaccine (number needed to treat) was 2.7 (95% CI 2.2 to 3.6). For patients with tetanus vaccines within the prior 5 years, the absolute risk reduction was 47.9% (95% CI 35.5% to 60.3%) and the number needed to treat was 2.1 (95% CI 1.7 to 2.8). CONCLUSION: A clinical decision support alert that warns ED clinicians that a patient may have an up-to-date tetanus vaccination status reduces potentially unnecessary vaccinations.
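The headline effect sizes follow from simple arithmetic on the reported vaccination rates, as in this short worked example (rounding of the crude rates explains the small difference from the published, adjusted figures).

```python
# Worked arithmetic for the result above: absolute risk reduction (ARR) and the
# number of encounters needed to alert (reported as number needed to treat, 1 / ARR).
baseline_rate = 0.91      # vaccinated despite recent tetanus vaccine, baseline phase
intervention_rate = 0.55  # same outcome during the intervention phase

arr = baseline_rate - intervention_rate   # 0.36, close to the reported 36.7%
nnt = 1 / arr                             # ~2.8 encounters alerted per vaccine avoided
print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")
```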
Subject(s)
Decision Support Systems, Clinical/standards , Immunization Schedule , Tetanus Toxoid/administration & dosage , Vaccination/statistics & numerical data , Adolescent , Adult , Aged , Aged, 80 and over , Emergency Service, Hospital/statistics & numerical data , Female , Humans , Male , Middle Aged , Non-Randomized Controlled Trials as Topic , Quality Improvement , Retrospective Studies , Tetanus Toxoid/adverse effects , Tetanus Toxoid/immunology , Unnecessary Procedures , Young Adult
ABSTRACT
RATIONALE: Current methods for assessing clinical risk due to exercise intolerance in patients with cardiopulmonary disease rely on a small subset of traditional variables. Alternative strategies incorporating the spectrum of factors underlying prognosis in at-risk patients may be useful clinically, but are lacking. OBJECTIVE: To use unbiased analyses to identify variables that correspond to clinical risk in patients with exercise intolerance. METHODS AND RESULTS: Data from 738 consecutive patients referred for invasive cardiopulmonary exercise testing at a single center (2011-2015) were analyzed retrospectively (derivation cohort). A correlation network of invasive cardiopulmonary exercise testing parameters was assembled using |r|>0.5. From an exercise network of 39 variables (ie, nodes) and 98 correlations (ie, edges) corresponding to P<9.5e-46 for each correlation, we focused on a subnetwork containing peak volume of oxygen consumption (pVo2) and 9 linked nodes. K-means clustering based on these 10 variables identified 4 novel patient clusters characterized by significant differences in 44 of 45 exercise measurements (P<0.01). Compared with a probabilistic model, including 23 independent predictors of pVo2 and pVo2 itself, the network model was less redundant and identified clusters that were more distinct. Cluster assignment from the network model was predictive of subsequent clinical events. For example, 4.3-fold (P<0.0001; 95% CI, 2.2-8.1) and 2.8-fold (P=0.0018; 95% CI, 1.5-5.2) increases in hazard for age- and pVo2-adjusted all-cause 3-year hospitalization, respectively, were observed between the highest and lowest risk clusters. Using these data, we developed the first risk-stratification calculator for patients with exercise intolerance. When applying the risk calculator to patients in 2 independent invasive cardiopulmonary exercise testing cohorts (Boston and Graz, Austria), we observed a clinical risk profile that paralleled the derivation cohort. CONCLUSIONS: Network analyses were used to identify novel exercise groups and develop a point-of-care risk calculator. These data expand the range of useful clinical variables beyond pVo2 that predict hospitalization in patients with exercise intolerance.
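A hedged sketch of the workflow described above (correlation network with |r| > 0.5, a subnetwork around peak VO2, and k-means clustering of patients) follows. The simulated data, variable names, and use of networkx/scikit-learn are assumptions for illustration, not the authors' code.

```python
# Sketch of the unbiased workflow: build a correlation network of exercise
# variables, keep peak VO2 and its directly linked nodes, and cluster patients.
import numpy as np
import pandas as pd
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
variables = [f"var_{i}" for i in range(38)] + ["pVO2"]
data = pd.DataFrame(rng.normal(size=(738, len(variables))), columns=variables)

# Correlation network: nodes are variables, edges where |r| > 0.5.
corr = data.corr()
G = nx.Graph()
G.add_nodes_from(variables)
for i, a in enumerate(variables):
    for b in variables[i + 1:]:
        if abs(corr.loc[a, b]) > 0.5:
            G.add_edge(a, b)

# Subnetwork: peak VO2 plus its directly linked nodes.
subnetwork = ["pVO2"] + list(G.neighbors("pVO2"))

# K-means clustering of patients on the subnetwork variables (4 clusters, as in the study).
X = StandardScaler().fit_transform(data[subnetwork])
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(pd.Series(clusters).value_counts())
```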
Subject(s)
Cardiovascular Diseases/epidemiology , Exercise Tolerance , Aged , Exercise Test/statistics & numerical data , Female , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged
ABSTRACT
BACKGROUND: Providers should estimate a patient's chance of surviving an in-hospital cardiac arrest with good neurologic outcome at the time of admission, in order to participate in shared decision making with patients about their code status. OBJECTIVE: To examine the utility of the "Good Outcome Following Attempted Resuscitation" (GO-FAR) score in predicting prognosis after in-hospital cardiac arrest in a US trauma center. DESIGN: Retrospective observational study. SETTING: Level 1 trauma and academic hospital in Minneapolis, MN, USA. PARTICIPANTS: All cases of pulseless in-hospital cardiac arrest occurring in adults (18 years or older) admitted to the hospital between January 2009 and September 2018 were included. For patients with more than one arrest, only the first was included in this analysis. MAIN MEASURES: For each patient with verified in-hospital cardiac arrest, we calculated a GO-FAR score based on variables present in the electronic health record at the time of admission. Predetermined outcomes included survival to discharge and survival to discharge with good neurologic outcome. KEY RESULTS: From 2009 to 2018, 403 adults suffered in-hospital cardiac arrest. A majority (65.5%) were male, with a mean age of 60.3 years. Overall survival to discharge was 33.0%; survival to discharge with good neurologic outcome was 17.4%. The GO-FAR score calculated at the time of admission correlated with survival to discharge with good neurologic outcome (AUC 0.68), which occurred in 5.3% of patients with below-average survival likelihood by GO-FAR score, 22.5% with average survival likelihood, and 34.1% with above-average survival likelihood. CONCLUSIONS: The GO-FAR score can estimate, at the time of admission to the hospital, the probability that a patient will survive to discharge with good neurologic outcome after an in-hospital cardiac arrest. This prognostic information can help providers frame discussions with patients on admission regarding whether to attempt cardiopulmonary resuscitation in the event of cardiac arrest.
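As a rough illustration of the validation analysis, the sketch below computes discrimination (AUROC) for a precomputed GO-FAR score and outcome rates within score-based risk categories on simulated data. The score distribution, outcome rates, and category cutoffs are placeholders, not the published GO-FAR weights or boundaries.

```python
# Sketch of the validation step: given each patient's GO-FAR score (computed
# elsewhere) and outcome, estimate discrimination and per-category outcome rates.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 403
df = pd.DataFrame({
    "go_far_score": rng.integers(-15, 40, n),
    "good_neuro_survival": rng.binomial(1, 0.17, n),
})

# Higher GO-FAR scores predict a *worse* outcome, so negate the score for AUROC.
auc = roc_auc_score(df["good_neuro_survival"], -df["go_far_score"])

# Outcome rate within risk categories (cutoffs here are placeholders, not the
# published GO-FAR category boundaries).
df["risk_category"] = pd.cut(df["go_far_score"], bins=[-100, -6, 13, 100],
                             labels=["above average", "average", "below average"])
print(auc)
print(df.groupby("risk_category", observed=True)["good_neuro_survival"].mean())
```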
Subject(s)
Cardiopulmonary Resuscitation/statistics & numerical data , Decision Support Techniques , Heart Arrest/mortality , Aged , Female , Heart Arrest/therapy , Humans , Male , Middle Aged , Registries , Retrospective Studies , United States/epidemiology
Subject(s)
Transgender Persons , Transsexualism , Electronic Health Records , Gender Identity , Humans
ABSTRACT
Importance: Emergency department (ED) visits by older adults with life-limiting illnesses are a critical opportunity to establish patients' end-of-life care preferences, but little is known about the optimal screening criteria for resource-constrained EDs. Objectives: To externally validate the Geriatric End-of-Life Screening Tool (GEST) in an independent population and compare it with commonly used serious illness diagnostic criteria. Design, Setting, and Participants: This prognostic study assessed a cohort of patients aged 65 years and older who were treated in a tertiary care ED in Boston, Massachusetts, from 2017 to 2021. Patients arriving in cardiac arrest or who died within 1 day of ED arrival were excluded. Data analysis was performed from August 1, 2023, to March 27, 2024. Exposure: GEST, a logistic regression algorithm that uses commonly available electronic health record (EHR) datapoints and was developed and validated across 9 EDs, was compared with serious illness diagnoses as documented in the EHR. Serious illnesses included stroke/transient ischemic attack, liver disease, cancer, lung disease, and age greater than 80 years, among others. Main Outcomes and Measures: The primary outcome was 6-month mortality following an ED encounter. Statistical analyses included area under the receiver operating characteristic curve, calibration analyses, Kaplan-Meier survival curves, and decision curves. Results: This external validation included 82,371 ED encounters by 40,505 unique individuals (mean [SD] age, 76.8 [8.4] years; 54.3% women; 6-month mortality rate, 13.8%). GEST had an external validation area under the receiver operating characteristic curve of 0.79 (95% CI, 0.78-0.79) that was stable across years and demographic subgroups. Of included encounters, 53.4% had a serious illness, with a sensitivity of 77.4% (95% CI, 76.6%-78.2%) and specificity of 50.5% (95% CI, 50.1%-50.8%). Varying GEST cutoffs from 5% to 30% increased specificity (5%: 49.1% [95% CI, 48.7%-49.5%]; 30%: 92.2% [95% CI, 92.0%-92.4%]) at the cost of sensitivity (5%: 89.3% [95% CI, 88.8-89.9]; 30%: 36.2% [95% CI, 35.3-37.1]). In a decision curve analysis, GEST outperformed serious illness criteria across all tested thresholds. When comparing patients referred to intervention by GEST versus serious illness criteria, GEST reclassified 45.1% of patients with serious illness as low risk for mortality (observed mortality rate, 8.1%) and 2.6% of patients without serious illness as high risk (observed mortality rate, 34.3%), for a total reclassification rate of 25.3%. Conclusions and Relevance: The findings of this study suggest that both serious illness criteria and GEST identified older ED patients at risk for 6-month mortality, but GEST offered more useful screening characteristics. Future trials of serious illness interventions for high mortality risk in older adults may consider transitioning from diagnosis code criteria to GEST, an automatable EHR-based algorithm.
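The decision curve analysis can be illustrated with a short net-benefit sketch: net benefit = TP/n - (FP/n) x (pt / (1 - pt)) at each threshold probability pt, where treating means referring patients whose predicted risk exceeds pt. The simulated risks and outcome rates below are assumptions for illustration only.

```python
# Hedged sketch of a decision curve calculation over several threshold probabilities.
import numpy as np

rng = np.random.default_rng(4)
n = 82_371
y = rng.binomial(1, 0.138, n)   # simulated 6-month mortality (13.8% rate)
risk = np.clip(0.138 + 0.2 * (y - 0.138) + rng.normal(0, 0.1, n), 0, 1)

def net_benefit(y_true, predicted_risk, pt):
    """Net benefit of intervening on everyone whose predicted risk exceeds pt."""
    treat = predicted_risk >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    total = len(y_true)
    return tp / total - (fp / total) * (pt / (1 - pt))

for pt in (0.05, 0.10, 0.20, 0.30):
    print(f"pt={pt:.2f}  net benefit={net_benefit(y, risk, pt):.4f}")
```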
Subject(s)
Emergency Service, Hospital , Terminal Care , Humans , Aged , Female , Male , Aged, 80 and over , Terminal Care/statistics & numerical data , Emergency Service, Hospital/statistics & numerical data , Geriatric Assessment/methods , Geriatric Assessment/statistics & numerical data , Boston/epidemiology , Prognosis , Mortality
ABSTRACT
OBJECTIVES: Despite federally mandated collection of sex and gender demographics in the electronic health record (EHR), longitudinal assessments are lacking. We assessed sex and gender demographic field utilization using EHR metadata. MATERIALS AND METHODS: Patients ≥18 years of age in the Mass General Brigham health system with a first Legal Sex entry (a registration requirement) between January 8, 2018 and January 1, 2022 were included in this retrospective study. Metadata for all sex and gender fields (Legal Sex, Sex Assigned at Birth [SAAB], Gender Identity) were quantified by completion rates, user types, and longitudinal change. A nested qualitative study of providers from specialties with high and low field use identified themes related to utilization. RESULTS: 1,576,120 patients met inclusion criteria: 100% had a Legal Sex, 20% a Gender Identity, and 19% a SAAB; 321,185 patients had field changes other than the initial Legal Sex entry. About 2% of patients had a subsequent Legal Sex change, and 25% of those had ≥2 changes; 20% of patients had ≥1 update to Gender Identity and 19% to SAAB. Excluding the first Legal Sex entry, administrators made most changes (67%) across all fields, followed by patients (25%), providers (7.2%), and automated Health Level-7 (HL7) interface messages (0.7%). Provider utilization varied by subspecialty; themes related to systems barriers and personal perceptions were identified. DISCUSSION: Sex and gender demographic fields are primarily used by administrators, which raises concern about data accuracy; provider use is heterogeneous and limited. Limited provider awareness of field availability and variable workflows may impede use. CONCLUSION: EHR metadata highlights areas for improvement of sex and gender field utilization.
Subject(s)
Gender Identity , Transgender Persons , Infant, Newborn , Humans , Male , Female , Electronic Health Records , Metadata , Retrospective Studies , Demography
ABSTRACT
Background: Heparin-induced thrombocytopenia (HIT) is difficult to diagnose and to treat. Delays in identification and appropriate treatment can lead to increased morbidity and mortality. Objectives: To use electronic health record alert interventions to improve provider diagnosis and management of HIT through guideline-based, accurate care delivery. Methods: This quality improvement initiative developed 3 electronic health record-based interventions at our 750-bed academic medical center to improve the initial management of suspected HIT between 2018 and 2021: (1) an interruptive alert recommending discontinuation of active heparin products when signing a heparin-platelet factor 4 (PF4) test order, (2) integrated 4T score calculation in the heparin-PF4 test order, and (3) an interruptive alert suggesting not to order heparin-PF4 tests when the 4T score is <4. Changes in practice were assessed over defined time periods before and after each intervention. Results: Intervention 1 increased heparin discontinuation, with heparin continued in 65% of cases (191 heparin orders/293 heparin-PF4 enzyme-linked immunosorbent assay tests) before the alert and 54% (127 heparin orders/235 heparin-PF4 enzyme-linked immunosorbent assay tests) after (95% CI 2.3-19.9; P = .015). Intervention 2 increased appropriate heparin-PF4 test ordering from 40.4% (110/272) preintervention to 79.1% (246/311) postintervention (95% CI 30.9-46.4; P < .00001), with inappropriate PF4 ordering defined as testing when the 4T score was <4. Intervention 3 did not lead to a reduction in heparin-PF4 testing in the randomized alert group (56 inappropriate orders/298 total orders; 19%) compared with the control group (96 inappropriate orders/402 total orders; 24%) (95% CI -1.2 to 11.5; P = .13). Conclusion: Implementation of these electronic health record interventions, spanning both diagnosis and management, led to improved guideline-based, accurate care delivery, with 4T score calculation and cessation of heparin for patients with suspected HIT.
ABSTRACT
BACKGROUND: Computerized clinical decision support (CDS) used in electronic health record systems (EHRs) has led to positive outcomes as well as unintended consequences, such as alert fatigue. Characteristics of the EHR session can be used to restrict CDS tools and increase their relevance, but the implications of this approach have not been rigorously studied. OBJECTIVES: To assess the utility of using the "login location" of EHR users (that is, the location they chose on the login screen) as a variable in CDS logic. METHODS: We measured concordance between users' login location and the location of the patients they placed orders for and conducted stratified analyses by user group. We also estimated how often login location data may be stale or inaccurate. RESULTS: One in five CDS alerts incorporated the EHR user's login location into its logic. Analysis of nearly 2 million orders placed by nearly 8,000 users showed that concordance between login location and patient location was high for nurses, nurse practitioners, and physician assistants (all >95%), but lower for fellows (77%) and residents (55%). When providers switched between patients in the EHR, they usually did not update their login location accordingly. CONCLUSION: CDS alerts commonly incorporate the user's login location into their logic. The user's login location is often the same as the location of the patient the user is caring for, but substantial discordance was observed for certain user groups, particularly when users did not change their login location across sessions. While login location may provide additional information useful to CDS logic, those who design CDS alerts should consider a data-driven approach to evaluate its appropriateness for each use case.
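A minimal sketch of the concordance analysis (does the login location match the patient's location at order time, summarized by user group) is shown below; the column names and example rows are illustrative, not the study data.

```python
# Sketch of per-user-group concordance between login location and patient location.
import pandas as pd

orders = pd.DataFrame({
    "user_group": ["nurse", "nurse", "resident", "resident", "fellow"],
    "login_location": ["ICU", "ICU", "ICU", "ED", "ED"],
    "patient_location": ["ICU", "ICU", "Med/Surg", "ED", "Med/Surg"],
})

orders["concordant"] = orders["login_location"] == orders["patient_location"]
concordance_by_group = orders.groupby("user_group")["concordant"].mean()
print(concordance_by_group)  # e.g., nurses near 100%, trainees lower
```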
Subject(s)
Decision Support Systems, Clinical , Physicians , Electronic Health Records , Humans
ABSTRACT
OBJECTIVE: To identify common medication route-related causes of clinical decision support (CDS) malfunctions and best practices for avoiding them. MATERIALS AND METHODS: Case series of medication route-related CDS malfunctions from diverse healthcare provider organizations. RESULTS: Nine cases were identified and described, including both false-positive and false-negative alert scenarios. A common cause was the inclusion of nonsystemically available medication routes in value sets (eg, eye drops, ear drops, or topical preparations) when only systemically available routes were appropriate. DISCUSSION: These value set errors are common, occur across healthcare provider organizations and electronic health record (EHR) systems, affect many different types of medications, and can impact the accuracy of CDS interventions. New knowledge management tools and processes for auditing existing value sets and supporting the creation of new value sets can mitigate many of these issues. Furthermore, value set issues can adversely affect other aspects of the EHR, such as quality reporting and population health management. CONCLUSION: Value set issues related to medication routes are widespread and can lead to CDS malfunctions. Organizations should make appropriate investments in knowledge management tools and strategies, such as those outlined in our recommendations.
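One of the recommended knowledge management practices, auditing value sets for non-systemically available routes, can be sketched as a simple filter. The route vocabulary and value set entries below are illustrative, not an actual organizational value set.

```python
# Sketch of a value set audit: flag entries whose route is not systemically
# available (e.g., ophthalmic, otic, topical) when systemic exposure is intended.
SYSTEMIC_ROUTES = {"oral", "intravenous", "intramuscular", "subcutaneous", "rectal"}

value_set = [
    {"medication": "gentamicin 0.3% ophthalmic solution", "route": "ophthalmic"},
    {"medication": "gentamicin 40 mg/mL injection", "route": "intravenous"},
    {"medication": "neomycin/polymyxin otic suspension", "route": "otic"},
]

flagged = [entry for entry in value_set if entry["route"] not in SYSTEMIC_ROUTES]
for entry in flagged:
    print(f"Review: {entry['medication']} ({entry['route']}) is a non-systemic route "
          f"in a value set intended for systemic exposure")
```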
Subject(s)
Decision Support Systems, Clinical , Medical Order Entry Systems , Electronic Health Records , Ophthalmic Solutions , Research , Software
ABSTRACT
OBJECTIVE: Surviving Sepsis guidelines recommend blood cultures before administration of intravenous (IV) antibiotics for patients with sepsis or moderate to high risk of bacteremia. Clinical decision support (CDS) that reminds emergency department (ED) providers to obtain blood cultures when ordering IV antibiotics may improve this process measure. METHODS: This was a multicenter causal impact analysis comparing timely blood culture collection prior to IV antibiotics for adult ED patients 1 year before and after implementation of a CDS intervention in the electronic health record. A Bayesian structural time-series model compared daily timely blood culture collections to a forecasted synthetic control. Mixed effects models evaluated the impact of the intervention, controlling for confounders. RESULTS: The analysis included 54,538 patients over 2 years. In the baseline phase, 46.1% had blood cultures prior to IV antibiotics, compared to 58.8% after the intervention. Causal impact analysis determined an absolute increase of 13.1% (95% CI 10.4-15.7%) in timely blood culture collections overall, although the difference in patients with a sepsis diagnosis or who met CDC Adult Sepsis Event criteria was not significant (absolute difference 8.0%; 95% CI -0.2 to 15.8). Blood culture positivity increased in the intervention phase, and contamination rates were similar in both study phases. DISCUSSION: CDS improved blood culture collection before IV antibiotics in the ED without increasing overutilization. CONCLUSION: A simple CDS alert increased timely blood culture collections in ED patients for whom concern for infection was high enough to warrant IV antibiotics.
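A hedged sketch of the causal impact approach follows: fit a Bayesian structural time-series model to the pre-intervention daily series, forecast a synthetic control for the post period, and compare observed versus forecast values. The pycausalimpact package, the covariate, and the simulated series are stand-ins; the study's actual tooling and data are not specified here.

```python
# Minimal sketch of a causal impact analysis on a simulated daily series.
import numpy as np
import pandas as pd
from causalimpact import CausalImpact  # pip install pycausalimpact (assumed tooling)

rng = np.random.default_rng(5)
n_days = 730                                   # one pre-intervention year + one post
ed_volume = 150 + rng.normal(0, 10, n_days)    # control covariate (daily ED volume)
timely_rate = 0.46 + 0.0005 * ed_volume + rng.normal(0, 0.02, n_days)
timely_rate[365:] += 0.13                      # simulated post-CDS step change

data = pd.DataFrame({"timely_rate": timely_rate, "ed_volume": ed_volume})
impact = CausalImpact(data, pre_period=[0, 364], post_period=[365, n_days - 1])
print(impact.summary())  # observed vs forecast synthetic control, absolute effect
```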
Subject(s)
Decision Support Systems, Clinical , Sepsis , Adult , Anti-Bacterial Agents/therapeutic use , Bayes Theorem , Blood Culture , Emergency Service, Hospital , Humans , Retrospective Studies , Sepsis/diagnosis , Sepsis/drug therapy
ABSTRACT
OBJECTIVES: To improve clinical decision support (CDS) by allowing users to provide real-time feedback when they interact with CDS tools and by creating processes for responding to and acting on this feedback. METHODS: Two organizations implemented similar real-time feedback tools and processes in their electronic health records and gathered data over a 30-month period. At both sites, users could provide feedback through Likert feedback links embedded in all end-user-facing alerts (with results stored outside the electronic health record) or as a comment when they overrode an alert. Both systems were monitored daily by clinical informatics teams. RESULTS: The two sites received 2,639 Likert feedback comments and 623,270 override comments over the 30-month period. Through four case studies, we describe our use of end-user feedback to respond rapidly to build errors and to identify inaccurate knowledge management, user-interface issues, and unique workflows. CONCLUSION: Feedback on CDS tools can be solicited in multiple ways, and it contains valuable and actionable suggestions to improve CDS alerts. Additionally, end users appreciate knowing their feedback is being received and may also make other suggestions to improve the electronic health record. Incorporating end-user feedback into CDS monitoring, evaluation, and remediation is a way to improve CDS.
Subject(s)
Decision Support Systems, Clinical , Feedback , Electronic Health Records , Workflow
ABSTRACT
Monkeypox virus was historically rare outside of West and Central Africa until the current 2022 global outbreak, which has required clinicians to be alert to identify individuals with possible monkeypox, institute isolation, and take appropriate next steps in evaluation and management. Clinical decision support systems (CDSS), which have been shown to improve adherence to clinical guidelines, can support frontline clinicians in applying the most current evaluation and management guidance in the setting of an emerging infectious disease outbreak when those guidelines are evolving over time. Here, we describe the rapid development and implementation of a CDSS tool embedded in the electronic health record to guide frontline clinicians in the diagnostic evaluation of monkeypox infection and triage patients with potential monkeypox infection to individualized infectious disease physician review. We also present data on the initial performance of this tool in a large integrated healthcare system.
Subject(s)
Decision Support Systems, Clinical , Mpox (monkeypox) , Physicians , Humans , Mpox (monkeypox)/epidemiology , Disease Outbreaks , Electronic Health Records
ABSTRACT
The early phase of the coronavirus disease 2019 (COVID-19) pandemic and ongoing efforts for mitigation underscore the importance of universal travel and symptom screening. We analyzed adherence to documentation of travel and symptom screening through a travel navigator tool with clinical decision support to identify patients at risk for Middle East Respiratory Syndrome.
Subject(s)
COVID-19 , Communicable Disease Control , Communicable Diseases, Emerging , Coronavirus Infections , Mass Screening/methods , Travel Medicine , COVID-19/epidemiology , COVID-19/prevention & control , Communicable Disease Control/methods , Communicable Disease Control/organization & administration , Communicable Diseases, Emerging/epidemiology , Communicable Diseases, Emerging/prevention & control , Coronavirus Infections/epidemiology , Coronavirus Infections/prevention & control , Decision Support Techniques , Guideline Adherence/statistics & numerical data , Humans , Massachusetts/epidemiology , Records , Risk Assessment/methods , SARS-CoV-2 , Travel/trends , Travel Medicine/methods , Travel Medicine/trends , Travel-Related Illness
ABSTRACT
OBJECTIVE: To investigate the effects of adjusting the default order set settings on telemetry usage. MATERIALS AND METHODS: We performed a retrospective, controlled, before-after study of patients admitted to a house staff medicine service at an academic medical center, examining the effect of changing whether the admission telemetry order was pre-selected. Telemetry orders on admission and subsequent orders for telemetry were monitored pre- and post-change. Two other order sets that had no change in their default settings were used as controls. RESULTS: Between January 1, 2017 and May 1, 2018, 1,163 patients were admitted using the residency-customized version of the admission order set, which initially had telemetry pre-selected. In this group, telemetry ordering decreased significantly after the change: 79.1% of patients were ordered to have telemetry in the 8.5 months before versus 21.3% in the 7.5 months after (χ2 = 382; P < .001). There was no significant change in telemetry usage among patients admitted using the two control order sets. DISCUSSION: Default settings have been shown to affect clinician ordering behavior in multiple domains. Consistent with prior findings, our study shows that changing the order set settings can significantly affect ordering practices. Our study was limited in that we were unable to determine if the change in ordering behavior had a significant impact on patient care or safety. CONCLUSION: Decisions about default selections in electronic health record order sets can have significant consequences on ordering behavior.
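The headline comparison can be reproduced approximately from the reported percentages with a chi-squared test on a reconstructed 2x2 table. The pre/post split of the 1,163 admissions below is an assumption (proportional to the 8.5- and 7.5-month phases), so the statistic is close to, not identical to, the reported value.

```python
# Worked example of the before-after comparison from the reported percentages.
from scipy.stats import chi2_contingency

n_pre, n_post = 618, 545                 # assumed split of the 1,163 admissions
pre_telemetry = round(0.791 * n_pre)     # 79.1% ordered telemetry before the change
post_telemetry = round(0.213 * n_post)   # 21.3% ordered telemetry after the change

table = [[pre_telemetry, n_pre - pre_telemetry],
         [post_telemetry, n_post - post_telemetry]]
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.0f}, p = {p:.2g}")  # roughly matches the reported chi2 = 382
```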
Subject(s)
Medical Order Entry Systems , Practice Patterns, Physicians' , Telemetry , Academic Medical Centers , Humans , Internship and Residency , Medical Staff, Hospital , Retrospective Studies
ABSTRACT
Clinical decision support systems (CDSS) are widely used to improve patient care and guide workflow. End users can be valuable contributors to monitoring for CDSS malfunctions. However, they often have little means of providing direct feedback on the design and build of such systems. In this study, we describe an electronic survey tool deployed from within the electronic health record and coupled with a conversation with Clinical Informaticians as a method to manage CDSS design and lifecycle.
Subject(s)
Decision Support Systems, Clinical , Electronic Health Records , Surveys and Questionnaires , Workflow
ABSTRACT
Clinical decision support (CDS) systems are prevalent in electronic health records and drive many safety advantages. However, CDS systems can also cause unintended consequences. Monitoring programs focused on alert firing rates are important to detect anomalies and ensure systems are working as intended. Monitoring efforts do not generally include system load and the time to generate decision support, which is becoming increasingly important as more CDS systems rely on external, web-based content and algorithms. We report a case in which a web-based service caused a significant increase in the time to generate decision support, in turn leading to marked delays in electronic health record system responsiveness that could have led to patient safety events. Given this, it is critical to consider adding decision support generation time to ongoing CDS system monitoring programs.
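A minimal sketch of the monitoring idea suggested above, tracking decision-support generation time alongside firing counts and flagging alerts whose latency breaches a threshold, is shown below; the log format and threshold are illustrative assumptions.

```python
# Sketch of latency monitoring for CDS alerts, including a web-service-backed alert.
import pandas as pd

ALERT_LATENCY_THRESHOLD_MS = 500  # illustrative cutoff for acceptable generation time

log = pd.DataFrame({
    "alert_id": ["sepsis_bpa"] * 6 + ["web_service_cds"] * 6,
    "generation_ms": [40, 55, 38, 61, 45, 50, 120, 900, 1500, 2200, 1800, 2500],
})

median_latency = log.groupby("alert_id")["generation_ms"].median()
slow = median_latency[median_latency > ALERT_LATENCY_THRESHOLD_MS]
print(slow)  # alerts whose median generation time breached the threshold
```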