Results 1 - 20 of 24
1.
Crit Care Med ; 50(9): 1339-1347, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores greater than or equal to the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause in-hospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of which 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191, intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.26% vs 0.49%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high- (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, a decrease in time to ICU transfer, and an increase in vital sign reassessment within 2 hours. CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced in-hospital mortality, likely driven by earlier and more frequent ICU transfer.
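
The tiered workflow maps a score's percentile in the ward population to an escalation pathway. A minimal Python sketch, assuming an empirical score distribution and hypothetical variable names (this is illustrative, not the eCART implementation):

import numpy as np

def risk_tier(score: float, population_scores: np.ndarray) -> str:
    """Map a patient's score to the study's workflow tiers via its percentile."""
    pct = (population_scores < score).mean() * 100  # empirical percentile
    if pct >= 95:
        return "high"          # physician assessment for ICU admission
    if pct >= 89:
        return "intermediate"  # vitals every 2 h + physician review of plan
    return "average"           # routine care

population = np.random.default_rng(0).lognormal(size=10_000)  # stand-in scores
print(risk_tier(population[0], population))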


Subject(s)
Early Warning Score , Heart Arrest , Adult , Heart Arrest/diagnosis , Heart Arrest/therapy , Hospital Mortality , Humans , Intensive Care Units , Machine Learning , Vital Signs
2.
JAMA ; 328(16): 1595-1603, 2022 10 25.
Article in English | MEDLINE | ID: mdl-36269852

ABSTRACT

Importance: The effectiveness of ivermectin to shorten symptom duration or prevent hospitalization among outpatients in the US with mild to moderate symptomatic COVID-19 is unknown. Objective: To evaluate the efficacy of ivermectin, 400 µg/kg, daily for 3 days compared with placebo for the treatment of early mild to moderate COVID-19. Design, Setting, and Participants: ACTIV-6, an ongoing, decentralized, double-blind, randomized, placebo-controlled platform trial, was designed to evaluate repurposed therapies in outpatients with mild to moderate COVID-19. A total of 1591 participants aged 30 years and older with confirmed COVID-19, experiencing 2 or more symptoms of acute infection for 7 days or less, were enrolled from June 23, 2021, through February 4, 2022, with follow-up data through May 31, 2022, at 93 sites in the US. Interventions: Participants were randomized to receive ivermectin, 400 µg/kg (n = 817), daily for 3 days or placebo (n = 774). Main Outcomes and Measures: Time to sustained recovery, defined as at least 3 consecutive days without symptoms. There were 7 secondary outcomes, including a composite of hospitalization or death by day 28. Results: Among 1800 participants who were randomized (mean [SD] age, 48 [12] years; 932 women [58.6%]; 753 [47.3%] reported receiving at least 2 doses of a SARS-CoV-2 vaccine), 1591 completed the trial. The hazard ratio (HR) for improvement in time to recovery was 1.07 (95% credible interval [CrI], 0.96-1.17; posterior P value [HR >1] = .91). The median time to recovery was 12 days (IQR, 11-13) in the ivermectin group and 13 days (IQR, 12-14) in the placebo group. There were 10 hospitalizations or deaths in the ivermectin group and 9 in the placebo group (1.2% vs 1.2%; HR, 1.1 [95% CrI, 0.4-2.6]). The most common serious adverse events were COVID-19 pneumonia (ivermectin [n = 5]; placebo [n = 7]) and venous thromboembolism (ivermectin [n = 1]; placebo [n = 5]). Conclusions and Relevance: Among outpatients with mild to moderate COVID-19, treatment with ivermectin, compared with placebo, did not significantly improve time to recovery. These findings do not support the use of ivermectin in patients with mild to moderate COVID-19. Trial Registration: ClinicalTrials.gov Identifier: NCT04885530.


Subject(s)
Anti-Infective Agents , COVID-19 Drug Treatment , COVID-19 , Hospitalization , Ivermectin , Female , Humans , Middle Aged , COVID-19/mortality , COVID-19/prevention & control , COVID-19 Vaccines/therapeutic use , Double-Blind Method , Ivermectin/adverse effects , Ivermectin/therapeutic use , SARS-CoV-2 , Treatment Outcome , Anti-Infective Agents/adverse effects , Anti-Infective Agents/therapeutic use , Ambulatory Care , Drug Repositioning , Time Factors , Recovery of Function , Male , Adult
3.
Clin Infect Dis ; 73(11): e4166-e4174, 2021 12 06.
Article in English | MEDLINE | ID: mdl-32706859

ABSTRACT

BACKGROUND: We compared the efficacy of the antiviral agent remdesivir versus standard-of-care treatment in adults with severe coronavirus disease 2019 (COVID-19) using data from a phase 3 remdesivir trial and a retrospective cohort of patients with severe COVID-19 treated with standard of care. METHODS: GS-US-540-5773 is an ongoing phase 3, randomized, open-label trial comparing two courses of remdesivir (remdesivir-cohort). GS-US-540-5807 is an ongoing real-world, retrospective cohort study of clinical outcomes in patients receiving standard-of-care treatment (non-remdesivir-cohort). Inclusion criteria were similar between studies: patients had confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, were hospitalized, had oxygen saturation ≤94% on room air or required supplemental oxygen, and had pulmonary infiltrates. Stabilized inverse probability of treatment weighted multivariable logistic regression was used to estimate the treatment effect of remdesivir versus standard of care. The primary endpoint was the proportion of patients with recovery on day 14, dichotomized from a 7-point clinical status ordinal scale. A key secondary endpoint was mortality. RESULTS: After the inverse probability of treatment weighting procedure, 312 and 818 patients were counted in the remdesivir- and non-remdesivir-cohorts, respectively. At day 14, 74.4% of patients in the remdesivir-cohort had recovered versus 59.0% in the non-remdesivir-cohort (adjusted odds ratio [aOR], 2.03; 95% confidence interval [CI], 1.34-3.08; P < .001). At day 14, 7.6% of patients in the remdesivir-cohort had died versus 12.5% in the non-remdesivir-cohort (aOR, 0.38; 95% CI, 0.22-0.68; P = .001). CONCLUSIONS: In this comparative analysis, by day 14, remdesivir was associated with significantly greater recovery and 62% reduced odds of death versus standard-of-care treatment in patients with severe COVID-19. CLINICAL TRIALS REGISTRATION: NCT04292899 and EUPAS34303.
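
Stabilized inverse probability of treatment weighting fits a propensity model, forms weights whose numerator is the marginal treatment probability, and then fits a weighted outcome model. A minimal Python sketch under assumed column names (treated, recovered) and an assumed covariate list; passing the weights as frequency weights is one common convention, not necessarily the authors':

import pandas as pd
import statsmodels.api as sm

def siptw_outcome_model(df: pd.DataFrame, covariates: list[str]):
    X = sm.add_constant(df[covariates])
    ps = sm.Logit(df["treated"], X).fit(disp=0).predict(X)  # propensity scores
    p = df["treated"].mean()  # marginal treatment probability (stabilizer)
    w = df["treated"] * p / ps + (1 - df["treated"]) * (1 - p) / (1 - ps)
    outcome_X = sm.add_constant(df[["treated"] + covariates])
    return sm.GLM(df["recovered"], outcome_X,
                  family=sm.families.Binomial(), freq_weights=w).fit()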


Subject(s)
COVID-19 Drug Treatment , Adenosine Monophosphate/analogs & derivatives , Adult , Alanine/analogs & derivatives , Antiviral Agents/therapeutic use , Cohort Studies , Humans , Oxygen Saturation , Retrospective Studies , SARS-CoV-2 , Standard of Care , Treatment Outcome
4.
Crit Care Med ; 49(10): 1694-1705, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists that could be targeted for more timely therapy.
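
The decomposition itself is simple: the order delay runs from infection onset to the antibiotic order, and the delivery delay from order to first administration. A short Python sketch with hypothetical timestamp column names:

import pandas as pd

def delay_components(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Order delay: clinical onset of infection -> antibiotic order
    out["order_delay_h"] = (out["abx_order_time"] -
                            out["infection_onset_time"]).dt.total_seconds() / 3600
    # Delivery delay: antibiotic order -> first administration
    out["delivery_delay_h"] = (out["abx_admin_time"] -
                               out["abx_order_time"]).dt.total_seconds() / 3600
    return out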


Subject(s)
Anti-Bacterial Agents/administration & dosage , Phenotype , Sepsis/genetics , Time-to-Treatment/statistics & numerical data , Aged , Aged, 80 and over , Anti-Bacterial Agents/therapeutic use , Emergency Service, Hospital/organization & administration , Emergency Service, Hospital/statistics & numerical data , Female , Hospitalization/statistics & numerical data , Humans , Illinois/epidemiology , Male , Middle Aged , Prospective Studies , Retrospective Studies , Sepsis/drug therapy , Sepsis/physiopathology , Time Factors
5.
Crit Care Med ; 49(7): e673-e682, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
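
Benchmarking each criterion against the chart-review gold standard reduces to a confusion-matrix calculation. A minimal Python sketch with illustrative (made-up) label arrays:

import numpy as np

def sens_spec(gold: np.ndarray, pred: np.ndarray) -> tuple[float, float]:
    tp = ((gold == 1) & (pred == 1)).sum()
    fn = ((gold == 1) & (pred == 0)).sum()
    tn = ((gold == 0) & (pred == 0)).sum()
    fp = ((gold == 0) & (pred == 1)).sum()
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

gold = np.array([1, 1, 0, 0, 1, 0])      # chart-review label
sepsis3 = np.array([1, 1, 0, 1, 0, 0])   # criteria-based label
print(sens_spec(gold, sepsis3))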


Subject(s)
Data Accuracy , Electronic Health Records/standards , Infections/epidemiology , Information Storage and Retrieval/methods , Adult , Aged , Anti-Bacterial Agents/therapeutic use , Antibiotic Prophylaxis/statistics & numerical data , Blood Culture , Chicago/epidemiology , False Positive Reactions , Female , Humans , Infections/diagnosis , International Classification of Diseases , Male , Middle Aged , Organ Dysfunction Scores , Patient Admission/statistics & numerical data , Prevalence , Retrospective Studies , Sensitivity and Specificity , Sepsis/diagnosis
6.
Am J Emerg Med ; 47: 239-243, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33945978

ABSTRACT

BACKGROUND: The global healthcare burden of COVID-19 continues to rise. There is currently limited information regarding the disease progression and the need for hospitalizations in patients who present to the Emergency Department (ED) with minimal or no symptoms. OBJECTIVES: This study identifies bounceback rates and timeframes for patients who return to the ED due to COVID-19 after initial discharge on the date of testing. METHODS: Using the NorthShore University HealthSystem's (NSUHS) Enterprise Data Warehouse (EDW), we conducted a retrospective cohort analysis of patients who tested positive for COVID-19 and were discharged home on the date of testing. A one-month follow-up period was included to ensure the capture of disease progression. RESULTS: Of 1883 positive cases with initially mild symptoms, 14.6% returned to the ED for complaints related to COVID-19. Of these mildly symptomatic bounceback patients, 56.9% were discharged on the return visit, while 39.5% were admitted to the floor and 3.6% to the ICU. Of the 1120 positive cases with no initial symptoms, only four returned to the ED (0.26%) and only one patient was admitted. Median initial testing occurred on day 3 (2-5.6) of illness, and median ED bounceback occurred on day 9 (6.3-12.7). Our statistical model was unable to identify risk factors for ED bouncebacks. CONCLUSION: COVID-19 patients diagnosed with mild symptoms on initial presentation have a 14.6% rate of bounceback due to progression of illness.


Subject(s)
COVID-19/epidemiology , Emergency Service, Hospital/statistics & numerical data , Patient Readmission/statistics & numerical data , Adult , Aged , Female , Health Services Accessibility , Humans , Illinois/epidemiology , Logistic Models , Male , Middle Aged , Retrospective Studies , Risk Assessment , Risk Factors , SARS-CoV-2 , Severity of Illness Index
7.
Article in English | MEDLINE | ID: mdl-32312778

ABSTRACT

Empiric antibiotic prescribing can be supported by guidelines and/or local antibiograms, but these have limitations. We sought to apply statistical learning to data from a comprehensive electronic health record to develop predictive models for individual antibiotics that incorporate patient- and hospital-specific factors. This paper reports on the development and validation of these models with a large retrospective cohort. This was a retrospective cohort study including hospitalized patients with positive urine cultures in the first 48 h of hospitalization at a 1,500-bed tertiary-care hospital over a 4.5-year period. All first urine cultures with susceptibilities were included. Statistical learning techniques, including penalized logistic regression, were used to create predictive models for cefazolin, ceftriaxone, ciprofloxacin, cefepime, and piperacillin-tazobactam. These were validated on a held-out cohort. The final data set used for analysis included 6,366 patients. Final model covariates included demographics, comorbidity score, recent antibiotic use, recent antimicrobial resistance, and antibiotic allergies. Models had acceptable to good discrimination in the training data set and acceptable performance in the validation data set, with a point estimate for area under the receiver operating characteristic curve (AUC) that ranged from 0.65 for ceftriaxone to 0.69 for cefazolin. All models had excellent calibration. We used electronic health record data to create predictive models to estimate antibiotic susceptibilities for urinary tract infections in hospitalized patients. Our models had acceptable performance in a held-out validation cohort.
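
The described approach amounts to one penalized logistic regression per antibiotic, scored on a held-out cohort. A hedged Python sketch; the feature matrix, labels, and penalty strength are assumptions, not the paper's specification:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_susceptibility_model(X, y):
    """Fit a penalized LR for one antibiotic (e.g., cefazolin) and report AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
    model.fit(X_tr, y_tr)
    return model, roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])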


Subject(s)
Urinary Tract Infections , Anti-Bacterial Agents/therapeutic use , Hospitals , Humans , Microbial Sensitivity Tests , Retrospective Studies , Urinary Tract Infections/drug therapy
8.
Crit Care Med ; 48(11): e1020-e1028, 2020 11.
Article in English | MEDLINE | ID: mdl-32796184

ABSTRACT

OBJECTIVES: Bacteremia and fungemia can cause life-threatening illness with high mortality rates, which increase with delays in antimicrobial therapy. The objective of this study is to develop machine learning models to predict blood culture results at the time of the blood culture order using routine data in the electronic health record. DESIGN: Retrospective analysis of a large, multicenter inpatient dataset. SETTING: Two academic tertiary medical centers between the years 2007 and 2018. SUBJECTS: All hospitalized patients who received a blood culture during hospitalization. INTERVENTIONS: The dataset was partitioned temporally into development and validation cohorts: the logistic regression and gradient boosting machine models were trained on the earliest 80% of hospital admissions and validated on the most recent 20%. MEASUREMENTS AND MAIN RESULTS: There were 252,569 blood culture days, defined as nonoverlapping 24-hour periods in which one or more blood cultures were ordered. In the validation cohort, there were 50,514 blood culture days, with 3,762 cases of bacteremia (7.5%) and 370 cases of fungemia (0.7%). The gradient boosting machine model for bacteremia had a significantly higher area under the receiver operating characteristic curve (0.78 [95% CI 0.77-0.78]) than the logistic regression model (0.73 [0.72-0.74]) (p < 0.001). The model identified a high-risk group with an occurrence rate of bacteremia over 30 times that of the low-risk group (27.4% vs 0.9%; p < 0.001). Using the low-risk cut-off, the model identified bacteremia with 98.7% sensitivity. The gradient boosting machine model for fungemia had high discrimination (area under the receiver operating characteristic curve 0.88 [95% CI 0.86-0.90]). The high-risk fungemia group had 252 fungemic cultures compared with one fungemic culture in the low-risk group (5.0% vs 0.02%; p < 0.001). Further, the high-risk group had a mortality rate 60 times higher than the low-risk group (28.2% vs 0.4%; p < 0.001). CONCLUSIONS: Our novel models identified patients at low and high risk for bacteremia and fungemia using routinely collected electronic health record data. Further research is needed to evaluate the cost-effectiveness and impact of model implementation in clinical practice.
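
The temporal partition and model comparison are straightforward to reproduce in outline. A Python sketch assuming an admissions table with hypothetical column names; scikit-learn's gradient boosting stands in for the study's GBM tooling:

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def temporal_compare(df: pd.DataFrame, features: list[str]) -> dict:
    df = df.sort_values("admit_time")
    cut = int(len(df) * 0.8)              # earliest 80% develops the models
    train, test = df.iloc[:cut], df.iloc[cut:]
    aucs = {}
    for name, model in [("gbm", GradientBoostingClassifier()),
                        ("lr", LogisticRegression(max_iter=1000))]:
        model.fit(train[features], train["bacteremia"])
        p = model.predict_proba(test[features])[:, 1]
        aucs[name] = roc_auc_score(test["bacteremia"], p)
    return aucs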


Subject(s)
Bacteremia/diagnosis , Electronic Health Records/statistics & numerical data , Fungemia/diagnosis , Machine Learning , Aged , Bacteremia/blood , Bacteremia/etiology , Bacteremia/microbiology , Blood Culture , Female , Fungemia/blood , Fungemia/etiology , Fungemia/microbiology , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged , Models, Statistical , Reproducibility of Results , Retrospective Studies , Risk Factors
9.
AIDS Behav ; 18(2): 335-45, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24337699

ABSTRACT

Younger Black men who have sex with men (BMSM) ages 16-29 have the highest rates of HIV in the United States. Despite increased attention to social and sexual networks as a framework for biomedical intervention, the role of measured network positions, such as bridging, in HIV risk has received limited attention. A network sample (N = 620) of BMSM respondents (N = 154) and their MSM and transgendered person network members (N = 466) was generated through respondent-driven sampling of BMSM and elicitation of their personal networks. Bridging status of each network member was determined by a constraint measure and was used to assess the relationship between bridging and unprotected anal intercourse (UAI), sex-drug use (SDU), group sex (GS), and HIV status within the network in South Chicago. Low, moderate, and high bridging were observed in 411 (66.8%), 81 (13.2%), and 123 (20.0%) of the network members. In addition to age and having sex with men only, moderate and high levels of bridging were associated with HIV status (aOR 3.19; 95% CI 1.58-6.45 and aOR 3.83; 95% CI 1.23-11.95, respectively). The observed risk behaviors, including UAI, GS, and SDU, were not associated with HIV status; however, they clustered together in their associations with one another. Bridging network position, but not risk behavior, was associated with HIV status in this network sample of younger BMSM. Socio-structural features such as position within the network may be important when implementing effective HIV prevention interventions in younger BMSM populations.
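
Bridging here is operationalized with Burt's constraint, where low constraint indicates a node spanning otherwise disconnected parts of the network. A small Python sketch using networkx on a toy graph; the graph and the sign flip used to rank "bridging" are illustrative only:

import networkx as nx

G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("D", "E")])
constraint = nx.constraint(G)  # Burt's constraint; lower = more bridging
bridging = {node: -c for node, c in constraint.items()}  # invert for ranking
print(sorted(bridging, key=bridging.get, reverse=True))  # most-bridging first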


Subject(s)
Black or African American/statistics & numerical data , HIV Infections/ethnology , Homosexuality, Male/ethnology , Sexual Behavior , Social Networking , Adolescent , Adult , Chicago/epidemiology , Cross-Sectional Studies , HIV Infections/prevention & control , Homosexuality, Male/statistics & numerical data , Humans , Logistic Models , Male , Prevalence , Risk-Taking , Socioeconomic Factors , Unsafe Sex , Young Adult
10.
Appl Clin Inform ; 15(2): 313-319, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38657955

ABSTRACT

BACKGROUND: Inefficient electronic health record (EHR) usage increases the documentation burden on physicians and other providers, which increases cognitive load and contributes to provider burnout. Studies show that EHR efficiency sessions ("optimization sprints") reduce burnout but rely on a resource-intensive five-person team. We implemented sprint-inspired one-on-one post-go-live efficiency training sessions (mini-sprints) as a more economical training option directed at providers. OBJECTIVES: We evaluated a post-go-live mini-sprint intervention to assess provider satisfaction and efficiency. METHODS: NorthShore University HealthSystem implemented one-on-one provider-to-provider mini-sprint sessions to optimize provider workflow within the EHR platform. The physician informaticist completed a 9-point checklist of efficiency tips with physician trainees covering schedule organization, chart review, speed buttons, billing, note personalization/optimization, preference lists, quick actions, and quick tips. We collected postsession survey data assessing net promoter score (NPS) and open-ended feedback. We conducted a financial analysis of pre- and post-mini-sprint efficiency levels and billing data. RESULTS: Seventy-six sessions were conducted with 32 primary care physicians, 28 specialty physicians, and 16 nonphysician providers within primary care and other areas. Thirty-seven physicians completed the postsession survey. The average NPS for the completed mini-sprint sessions was 97. The proficiency score had a median of 6.12 (interquartile range [IQR], 4.71-7.64) before training and a median of 7.10 (IQR, 6.25-8.49) after training. Financial data analysis indicates that higher-level billing codes were used at a greater frequency post-mini-sprint. The revenue increase 12 months post-mini-sprint was $213,234, leading to a return of $75,559.50 for 40 providers, or $1,888.98 per provider in a 12-month period. CONCLUSION: Our data show that mini-sprint sessions were effective in optimizing efficiency within the EHR platform. Financial analysis demonstrates that this type of training program is sustainable and pays for itself. There was high satisfaction with the mini-sprint training modality, and feedback indicated an interest in further mini-sprint training sessions for physicians and nonphysician staff.
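
NPS is computed from 0-10 ratings as the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal Python sketch with a made-up response list:

def nps(scores: list[int]) -> float:
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(round(nps([10, 10, 9, 10, 9, 10, 8]), 1))  # -> 85.7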


Subject(s)
Electronic Health Records , Humans , Personal Satisfaction , Physicians
11.
JAMIA Open ; 7(2): ooae025, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38617994

ABSTRACT

Objectives: A data commons is a software platform for managing, curating, analyzing, and sharing data with a community. The Pandemic Response Commons (PRC) is a data commons designed to provide a data platform for researchers studying an epidemic or pandemic. Methods: The PRC was developed using the open-source Gen3 data platform and is based upon consortium, data, and platform agreements developed by the not-for-profit Open Commons Consortium. A formal consortium of Chicagoland area organizations was formed to develop and operate the PRC. Results: The consortium developed a general PRC and an instance of it for the Chicagoland region called the Chicagoland COVID-19 Commons. A Gen3 data platform was set up and operated with policies, procedures, and controls for a NIST SP 800-53 revision 4 Moderate system. A consensus data model for the commons was developed, and a variety of datasets were curated, harmonized, and ingested, including statistical summary data about COVID-19 cases, patient-level clinical data, and SARS-CoV-2 viral variant data. Discussion and conclusions: Given the various legal and data agreements required to operate a data commons, a PRC is designed to be in place and operating at a low level prior to the occurrence of an epidemic, with the activities increasing as required during an epidemic. A regional instance of a PRC can also be part of a broader data ecosystem or data mesh consisting of multiple regional commons supporting pandemic response through sharing regional data.

12.
medRxiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562803

ABSTRACT

Rationale: Early detection of clinical deterioration using early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, only underwent retrospective internal validation, and were not tested in important patient subgroups. Objectives: To develop a gradient boosted machine model (eCARTv5) for identifying clinical deterioration and then validate it externally, test it prospectively, and evaluate it across patient subgroups. Methods: All adult patients hospitalized on the wards in seven hospitals from 2008-2022 were used to develop eCARTv5, with demographics, vital signs, clinician documentation, and laboratory values utilized to predict intensive care unit transfer or death in the next 24 hours. The model was externally validated retrospectively in 21 hospitals from 2009-2023 and prospectively in 10 hospitals from February to May 2023. eCARTv5 was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). Measurements and Main Results: The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 46,330 admissions. In retrospective validation, eCART had the highest AUROC (0.835; 95% CI, 0.834-0.835), followed by NEWS (0.766; 95% CI, 0.766-0.767) and MEWS (0.704; 95% CI, 0.703-0.704). eCART's performance remained high (AUROC ≥0.80) across a range of patient demographics and clinical conditions and during prospective validation. Conclusions: We developed eCARTv5, which accurately identifies early clinical deterioration in hospitalized ward patients. Our model performed better than the NEWS and MEWS retrospectively, prospectively, and across a range of subgroups.
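
The headline comparison is an AUROC computed per score against the same 24-hour deterioration label. A Python sketch on synthetic stand-in data; the arrays are illustrative only and do not reproduce the study's values:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 1000)                      # deterioration within 24 h
scores = {"eCART": y + rng.normal(0, 0.8, 1000),  # synthetic scores, not real
          "NEWS":  y + rng.normal(0, 1.2, 1000),
          "MEWS":  y + rng.normal(0, 1.6, 1000)}
for name, s in scores.items():
    print(name, round(roc_auc_score(y, s), 3))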

13.
medRxiv ; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006-2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred for each patient. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.

14.
medRxiv ; 2024 May 18.
Article in English | MEDLINE | ID: mdl-38798524

ABSTRACT

Importance: The effect of montelukast in reducing symptom duration among outpatients with mild to moderate coronavirus disease 2019 (COVID-19) is uncertain. Objective: To assess the effectiveness of montelukast compared with placebo in treating outpatients with mild to moderate COVID-19. Design, Setting, and Participants: The ACTIV-6 platform randomized clinical trial aims to evaluate the effectiveness of repurposed medications in treating mild to moderate COVID-19. Between January 27, 2023, and June 23, 2023, 1250 participants ≥30 years of age with confirmed SARS-CoV-2 infection and ≥2 acute COVID-19 symptoms for ≤7 days were included across 104 US sites to evaluate the use of montelukast. Interventions: Participants were randomized to receive montelukast 10 mg once daily or matched placebo for 14 days. Main Outcomes and Measures: The primary outcome was time to sustained recovery (defined as at least 3 consecutive days without symptoms). Secondary outcomes included time to death; time to hospitalization or death; a composite of hospitalization, urgent care visit, emergency department visit, or death; COVID clinical progression scale; and difference in mean time unwell. Results: Among participants who were randomized and received study drug, the median age was 53 years (IQR 42-62), 60.2% were female, 64.6% identified as Hispanic/Latino, and 56.3% reported ≥2 doses of a SARS-CoV-2 vaccine. Among 628 participants who received montelukast and 622 who received placebo, differences in time to sustained recovery were not observed (adjusted hazard ratio [HR] 1.02; 95% credible interval [CrI] 0.92-1.12; P(efficacy) = 0.63). Unadjusted median time to sustained recovery was 10 days (95% confidence interval 10-11) in both groups. No deaths were reported, and 2 hospitalizations were reported in each group; 36 participants reported healthcare utilization events (defined a priori as death, hospitalization, or an emergency department/urgent care visit): 18 in the montelukast group and 18 in the placebo group (HR 1.01; 95% CrI 0.45-1.84; P(efficacy) = 0.48). Five participants experienced serious adverse events (3 with montelukast and 2 with placebo). Conclusions and Relevance: Among outpatients with mild to moderate COVID-19, treatment with montelukast does not reduce the duration of COVID-19 symptoms. Trial Registration: ClinicalTrials.gov Identifier: NCT04885530.

15.
J Am Med Inform Assoc ; 29(10): 1696-1704, 2022 09 12.
Article in English | MEDLINE | ID: mdl-35869954

ABSTRACT

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from nonchart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432,965 admissions, of which 2,724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3000 chart-reviewed patients, semi-supervised and transfer learning models showed similar performance for model discrimination as baseline XGBoost, while transfer learning improved calibration.
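
One of the semi-supervised approaches named here, self-learning, fits a model on gold-standard labels, pseudo-labels the confident silver-standard cases, and refits on the union. A minimal Python sketch under assumed array inputs; scikit-learn's gradient boosting stands in for the baseline XGBoost:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def self_train(X_gold, y_gold, X_silver, threshold=0.9):
    base = GradientBoostingClassifier().fit(X_gold, y_gold)
    p = base.predict_proba(X_silver)[:, 1]
    keep = (p >= threshold) | (p <= 1 - threshold)  # confident cases only
    X_aug = np.vstack([X_gold, X_silver[keep]])
    y_aug = np.concatenate([y_gold, (p[keep] >= 0.5).astype(int)])
    return GradientBoostingClassifier().fit(X_aug, y_aug)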


Subject(s)
Machine Learning , Sepsis , Humans , ROC Curve , Retrospective Studies , Sepsis/diagnosis
16.
JMIR Res Protoc ; 11(8): e36741, 2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36006689

ABSTRACT

BACKGROUND: Heart failure (HF) is a prevalent chronic disease and is associated with increases in mortality and morbidity. HF is a leading cause of hospitalizations and readmissions in the United States. A potentially promising area for preventing HF readmissions is continuous remote patient monitoring (CRPM). OBJECTIVE: The primary aim of this study is to determine the feasibility and preliminary efficacy of a CRPM solution in patients with HF at NorthShore University HealthSystem. METHODS: This feasibility study uses a wearable biosensor to continuously and remotely monitor patients with HF for 30 days after discharge. Eligible patients admitted with an HF exacerbation at NorthShore University HealthSystem are being recruited, and the wearable biosensor is placed before discharge. The biosensor collects physiological ambulatory data, which are analyzed for signs of patient deterioration. Participants are also completing a daily survey through a dedicated study smartphone. If prespecified criteria from the physiological data and survey results are met, a notification is triggered, and a predetermined electronic health record-based pathway of telephonic management is completed. In phase 1, which has already been completed, 5 patients were enrolled and monitored for 30 days after discharge. The results of phase 1 were analyzed, and modifications to the program were made to optimize it. Following these adjustments, 15 patients are being enrolled for phase 2, which is a calibration and testing period to enable further adjustments to be made. After phase 2, we will enroll 45 patients for phase 3. The combined results of phases 1, 2, and 3 will be analyzed to determine the feasibility of a CRPM program in patients with HF. Semistructured interviews are being conducted with key stakeholders, including patients, and these results will be analyzed using the affective adaptation of the technology acceptance model. RESULTS: During phase 1, 2 of the 5 patients (40%) were readmitted during the study period. The study completion rate for phase 1 was 80% (4/5), and the study attrition rate was 20% (1/5). There were 57 protocol deviations out of 150 patient days in phase 1 of the study. The results of phase 1 were analyzed, and the study protocol was adjusted to optimize it for phases 2 and 3. Phase 2 and phase 3 results will be available by the end of 2022. CONCLUSIONS: A CRPM program may offer a low-risk solution to improve care of patients with HF after hospital discharge and may help to decrease readmission of patients with HF to the hospital. This protocol may also lay the groundwork for the use of CRPM solutions in other groups of patients considered to be at high risk. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/36741.

17.
Acad Pathol ; 8: 23742895211010257, 2021.
Article in English | MEDLINE | ID: mdl-33959677

ABSTRACT

In March 2020, NorthShore University HealthSystem laboratories mobilized to develop and validate polymerase chain reaction-based testing for detection of SARS-CoV-2. Using laboratory data, NorthShore University HealthSystem created the Data Coronavirus Analytics Research Team to track activities affected by SARS-CoV-2 across the organization. Operational leaders used data insights and predictions from the Data Coronavirus Analytics Research Team to redeploy critical care resources across the hospital system, and real-time data were used daily to make adjustments to staffing and supply decisions. Geographical data were used to triage patients to other hospitals in our system when COVID-19 pavilions were at capacity. Additionally, one of the consequences of COVID-19 was patients' inability to receive elective care, leading to extended periods of pain and uncertainty about a disease or treatment. After shutting down elective surgeries beginning in March of 2020, NorthShore University HealthSystem set a recovery goal to achieve 80% of our historical volumes by October 1, 2020. Using the Data Coronavirus Analytics Research Team, our operational and clinical teams were able to achieve 89% of our historical volumes a month ahead of schedule, allowing rapid recovery of surgical volume and financial stability. The Data Coronavirus Analytics Research Team was also used to demonstrate that the accelerated recovery period had no negative impact with regard to iatrogenic COVID-19 infection and did not result in increased deep vein thromboses, pulmonary embolisms, or cerebrovascular accidents. These achievements demonstrate how a coordinated and transparent data-driven effort built upon a robust laboratory testing capability was essential to the operational response and recovery from the COVID-19 crisis.

18.
Appl Clin Inform ; 12(5): 1161-1173, 2021 10.
Article in English | MEDLINE | ID: mdl-34965606

ABSTRACT

OBJECTIVE: We report on our experience of deploying a continuous remote patient monitoring (CRPM) study soft launch with structured cascading and escalation pathways for heart failure (HF) patients post-discharge. The lessons learned from the soft launch are used to modify and fine-tune the workflow process and study protocol. METHODS: This soft launch was conducted at NorthShore University HealthSystem's Evanston Hospital from December 2020 to March 2021. Patients were provided with non-invasive wearable biosensors that continuously collect ambulatory physiological data, and a study phone that collects patient-reported outcomes. The physiological data are analyzed by machine learning algorithms, potentially identifying physiological perturbations in HF patients. Alerts from this algorithm may be cascaded with other patient status data to inform home health nurses' (HHNs') management via a structured protocol. HHNs review the monitoring platform daily. If the patient's status meets specific criteria, HHNs perform assessments and escalate patient cases to the HF team for further guidance on early intervention. RESULTS: We enrolled five patients into the soft launch. Four participants adhered to study activities. Two out of five patients were readmitted, one due to HF and one due to infection. Observed miscommunication and protocol gaps were noted for protocol amendment. The study team adopted an organizational development method from change management theory to reconfigure the study protocol. CONCLUSION: We sought to automate the monitoring aspects of post-discharge care by aligning a new technology that generates streaming data from a wearable device with a complex, multi-provider workflow into a novel protocol using iterative design, implementation, and evaluation methods to monitor post-discharge HF patients. CRPM with a structured escalation and telemonitoring protocol shows potential to maintain patients in their home environment and reduce HF-related readmissions. Our results suggest that further education to engage and empower frontline workers using advanced technology is essential to scale up the approach.


Subject(s)
Aftercare , Heart Failure , Heart Failure/diagnosis , Home Environment , Humans , Monitoring, Physiologic , Patient Discharge , Prospective Studies
19.
Am J Clin Pathol ; 154(1): 115-123, 2020 06 08.
Article in English | MEDLINE | ID: mdl-32249294

ABSTRACT

OBJECTIVES: Tuberculosis (TB) is a significant global health problem. In low-prevalence areas and settings of low clinical suspicion, nucleic acid amplification tests (NAAT) for direct detection of Mycobacterium tuberculosis complex (MTBC) can speed therapy initiation and infection control. An NAAT assay (TBPCR) targeting MTBC IS6110 is used for detecting MTBC in our low-prevalence population. METHODS: A fifteen-year review of patient records identified 146 patients with culture-positive pulmonary tuberculosis (PTB) or extrapulmonary tuberculosis (EPTB). The laboratory-developed TBPCR assay was retrospectively compared with standard stain and cultures for PTB and EPTB diagnoses. RESULTS: The TBPCR assay was used in 57% of patients with PTB and 33% of patients with EPTB. TBPCR detected 88.4% of all TB (smear-positive, 97%; smear-negative, 79%) with 100% specificity. Low bacterial load was indicated in TBPCR-negative PTB (P = .002) and EPTB (P < .008). CONCLUSIONS: TBPCR performed well but was significantly underused. Guidelines are proposed for mandated use of TBPCR that capture patients with clinically suspected PTB. Focused TBPCR use in low-prevalence populations will benefit patient care, infection prevention, and public health.


Subject(s)
DNA, Bacterial/analysis , Polymerase Chain Reaction/methods , Tuberculosis/diagnosis , Adolescent , Adult , Aged , Aged, 80 and over , Child , Female , Humans , Male , Middle Aged , Young Adult
20.
JAMA Netw Open ; 3(5): e205191, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32427324

ABSTRACT

Importance: Risk scores used in early warning systems exist for general inpatients and patients with suspected infection outside the intensive care unit (ICU), but their relative performance is incompletely characterized. Objective: To compare the performance of tools used to determine points-based risk scores among all hospitalized patients, including those with and without suspected infection, for identifying those at risk for death and/or ICU transfer. Design, Setting, and Participants: In a cohort design, a retrospective analysis of prospectively collected data was conducted in 21 California and 7 Illinois hospitals between 2006 and 2018 among adult inpatients outside the ICU using points-based scores from 5 commonly used tools: National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), Between the Flags (BTF), Quick Sequential Sepsis-Related Organ Failure Assessment (qSOFA), and Systemic Inflammatory Response Syndrome (SIRS). Data analysis was conducted from February 2019 to January 2020. Main Outcomes and Measures: Risk model discrimination was assessed in each state for predicting in-hospital mortality and the combined outcome of ICU transfer or mortality with area under the receiver operating characteristic curves (AUCs). Stratified analyses were also conducted based on suspected infection. Results: The study included 773,477 hospitalized patients in California (mean [SD] age, 65.1 [17.6] years; 416,605 women [53.9%]) and 713,786 hospitalized patients in Illinois (mean [SD] age, 61.3 [19.9] years; 384,830 women [53.9%]). The NEWS exhibited the highest discrimination for mortality (AUC, 0.87; 95% CI, 0.87-0.87 in California vs AUC, 0.86; 95% CI, 0.85-0.86 in Illinois), followed by the MEWS (AUC, 0.83; 95% CI, 0.83-0.84 in California vs AUC, 0.84; 95% CI, 0.84-0.85 in Illinois), qSOFA (AUC, 0.78; 95% CI, 0.78-0.79 in California vs AUC, 0.78; 95% CI, 0.77-0.78 in Illinois), SIRS (AUC, 0.76; 95% CI, 0.76-0.76 in California vs AUC, 0.76; 95% CI, 0.75-0.76 in Illinois), and BTF (AUC, 0.73; 95% CI, 0.73-0.73 in California vs AUC, 0.74; 95% CI, 0.73-0.74 in Illinois). At specific decision thresholds, the NEWS outperformed the SIRS and qSOFA at all 28 hospitals, either by reducing the percentage of at-risk patients who need to be screened by 5% to 20% or by increasing the percentage of adverse outcomes identified by 3% to 25%. Conclusions and Relevance: In all hospitalized patients evaluated in this study, including those meeting criteria for suspected infection, the NEWS appeared to display the highest discrimination. Our results suggest that, among commonly used points-based scoring systems, determining the NEWS for inpatient risk stratification could identify patients with and without infection at high risk of mortality.
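
All five tools are simple points tabulations over bedside observations. As one concrete example, qSOFA assigns 1 point each for respiratory rate ≥22/min, systolic blood pressure ≤100 mm Hg, and altered mentation (GCS <15); a minimal Python sketch (the other scores are tabulated analogously from their published cutoffs):

def qsofa(resp_rate: float, sbp: float, gcs: int) -> int:
    """Quick SOFA: one point per criterion, range 0-3."""
    return int(resp_rate >= 22) + int(sbp <= 100) + int(gcs < 15)

print(qsofa(resp_rate=24, sbp=95, gcs=14))  # -> 3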


Subject(s)
Early Warning Score , Hospital Mortality , Hospitalization/statistics & numerical data , Infections/mortality , Intensive Care Units/statistics & numerical data , Patient Transfer/statistics & numerical data , Aged , California/epidemiology , Female , Humans , Illinois/epidemiology , Infections/diagnosis , Infections/epidemiology , Length of Stay/statistics & numerical data , Male , Middle Aged , Retrospective Studies , Risk Assessment , Risk Factors , Sensitivity and Specificity