Results 1 - 6 of 6
1.
medRxiv; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562803

ABSTRACT

Rationale: Early detection of clinical deterioration using early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, underwent only retrospective internal validation, and were not tested in important patient subgroups. Objectives: To develop a gradient boosted machine model (eCARTv5) for identifying clinical deterioration and then validate it externally, test it prospectively, and evaluate it across patient subgroups. Methods: All adult patients hospitalized on the wards in seven hospitals from 2008-2022 were used to develop eCARTv5, with demographics, vital signs, clinician documentation, and laboratory values used to predict intensive care unit transfer or death in the next 24 hours. The model was externally validated retrospectively in 21 hospitals from 2009-2023 and prospectively in 10 hospitals from February to May 2023. eCARTv5 was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). Measurements and Main Results: The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 46,330 admissions. In retrospective validation, eCARTv5 had the highest AUROC (0.835; 95% CI 0.834, 0.835), followed by NEWS (0.766; 95% CI 0.766, 0.767) and MEWS (0.704; 95% CI 0.703, 0.704). eCARTv5's performance remained high (AUROC ≥ 0.80) across a range of patient demographics and clinical conditions, and during prospective validation. Conclusions: We developed eCARTv5, which accurately identifies early clinical deterioration in hospitalized ward patients. Our model performed better than NEWS and MEWS retrospectively, prospectively, and across a range of subgroups.
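
As a rough illustration of the modeling approach described above, the sketch below trains a gradient boosted classifier on simulated ward data and scores it with AUROC. Everything here is a hypothetical stand-in: the feature names, data, and model settings are invented, and the actual eCARTv5 inputs and pipeline are not specified in this listing.

    # Hypothetical sketch: gradient boosted classifier for 24-hour
    # deterioration risk, evaluated with AUROC. Simulated data only.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "heart_rate": rng.normal(85, 15, 5000),
        "resp_rate": rng.normal(18, 4, 5000),
        "sbp": rng.normal(120, 20, 5000),
        "wbc": rng.normal(9, 3, 5000),
    })
    # Simulated label: ICU transfer or death within the next 24 hours.
    y = (rng.random(5000) < 1 / (1 + np.exp(-(X["resp_rate"] - 22)))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0, stratify=y)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))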

2.
medRxiv; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006-2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: For each patient, clinical deterioration was either confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred. For true deterioration events, the condition causing deterioration, the relevant diagnostic tests ordered, and the treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In a manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.
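
The cohort definition above hinges on one computational step: flagging admissions whose early warning score reaches the 95th percentile. A minimal sketch of that step on simulated scores (the distribution and values are invented, not eCART's):

    # Hypothetical sketch: define a "clinical deterioration" cohort as
    # admissions whose peak risk score reaches the 95th percentile.
    import numpy as np

    rng = np.random.default_rng(1)
    peak_scores = rng.lognormal(2.0, 0.6, 10_000)  # one simulated score per admission

    threshold = np.percentile(peak_scores, 95)     # 95th-percentile cutoff
    flagged = peak_scores >= threshold             # candidate deterioration events
    print(f"threshold={threshold:.1f}, flagged={flagged.sum()} admissions")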

3.
J Am Med Inform Assoc; 29(10): 1696-1704, 2022 Sep 12.
Article in English | MEDLINE | ID: mdl-35869954

ABSTRACT

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from non-chart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432,965 admissions, of which 2,724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by the self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3,000 chart-reviewed patients, semi-supervised and transfer learning models showed similar performance for model discrimination as baseline XGBoost, while transfer learning improved calibration.
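
For readers unfamiliar with the self-learning setup evaluated above, the sketch below shows one common variant: self-training a gradient boosted machine on pseudo-labels, then computing the Brier score used as a calibration metric. The data and labeled fraction are simulated; the study's actual gold-standard (chart review) and silver-standard (Sepsis-3) labels are not reproduced here.

    # Hypothetical sketch: self-training with a small labeled subset,
    # mirroring a gold-standard/unlabeled split, plus Brier score.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.metrics import brier_score_loss

    rng = np.random.default_rng(2)
    X = rng.normal(size=(3000, 5))
    y_true = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=3000) > 0).astype(int)

    # Only a small "gold-standard" subset is labeled; -1 marks the
    # unlabeled rest, standing in for non-chart-reviewed admissions.
    y_partial = np.full(3000, -1)
    labeled = rng.choice(3000, size=300, replace=False)
    y_partial[labeled] = y_true[labeled]

    model = SelfTrainingClassifier(GradientBoostingClassifier(random_state=0))
    model.fit(X, y_partial)

    probs = model.predict_proba(X)[:, 1]
    print("Brier score:", brier_score_loss(y_true, probs))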


Subjects
Machine Learning, Sepsis, Humans, ROC Curve, Retrospective Studies, Sepsis/diagnosis
4.
Crit Care Med; 50(9): 1339-1347, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores at or above the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause in-hospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of whom 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191, intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.26% vs 0.49%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high-risk (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, a decrease in time to ICU transfer, and an increase in vital sign reassessment within 2 hours. CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced in-hospital mortality, likely driven by earlier and more frequent ICU transfer.
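
The intervention protocol above amounts to mapping a score onto percentile-based tiers with different escalation actions. A minimal sketch under simulated data: the 89th/95th percentile cutoffs come from the abstract, while the score distribution and tier wording are hypothetical stand-ins.

    # Hypothetical sketch: percentile-based risk tiers driving
    # different ward workflows, as in the study's intervention arm.
    import numpy as np

    rng = np.random.default_rng(3)
    reference_scores = rng.lognormal(2.0, 0.6, 100_000)  # simulated historical scores
    p89, p95 = np.percentile(reference_scores, [89, 95])

    def triage_tier(score: float) -> str:
        # Map a current eCART-style score to its workflow tier.
        if score >= p95:
            return "high risk: physician assessment for ICU admission"
        if score >= p89:
            return "intermediate risk: q2h vitals + physician plan review"
        return "routine monitoring"

    for s in (3.0, p89 + 0.1, p95 + 5.0):
        print(f"score={s:.1f} -> {triage_tier(s)}")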


Subjects
Early Warning Score, Heart Arrest, Adult, Heart Arrest/diagnosis, Heart Arrest/therapy, Hospital Mortality, Humans, Intensive Care Units, Machine Learning, Vital Signs
5.
Crit Care Med; 49(7): e673-e682, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included: 2,874 encounters analyzed via chart review, plus a proportional 2,341 who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
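
The accuracy comparison above reduces to computing each electronic criterion's sensitivity and specificity against chart review as the gold standard. A minimal sketch with simulated labels (the agreement rate is invented, not taken from the study):

    # Hypothetical sketch: sensitivity/specificity of a candidate
    # infection definition versus a chart-review gold standard.
    import numpy as np

    rng = np.random.default_rng(4)
    gold = rng.random(5000) < 0.275            # simulated chart-review status
    # Simulated electronic criterion that mostly agrees with chart review:
    criterion = np.where(rng.random(5000) < 0.85, gold, ~gold)

    tp = np.sum(criterion & gold)
    tn = np.sum(~criterion & ~gold)
    fp = np.sum(criterion & ~gold)
    fn = np.sum(~criterion & gold)

    print(f"sensitivity = {tp / (tp + fn):.2f}")   # TP / (TP + FN)
    print(f"specificity = {tn / (tn + fp):.2f}")   # TN / (TN + FP)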


Subjects
Data Accuracy, Electronic Health Records/standards, Infections/epidemiology, Information Storage and Retrieval/methods, Adult, Aged, Anti-Bacterial Agents/therapeutic use, Antibiotic Prophylaxis/statistics & numerical data, Blood Culture, Chicago/epidemiology, False Positive Reactions, Female, Humans, Infections/diagnosis, International Classification of Diseases, Male, Middle Aged, Organ Dysfunction Scores, Patient Admission/statistics & numerical data, Prevalence, Retrospective Studies, Sensitivity and Specificity, Sepsis/diagnosis
6.
Carbohydr Polym; 182: 149-158, 2018 Feb 15.
Article in English | MEDLINE | ID: mdl-29279109

ABSTRACT

The efficacy of rifapentine, an oral antibiotic used to treat tuberculosis, may be reduced due to degradation at gastric pH and low solubility at intestinal pH. We hypothesized that delivery properties would be improved in vitro by incorporating rifapentine into pH-responsive amorphous solid dispersions (ASDs) with cellulose derivatives including hydroxypropylmethylcellulose acetate succinate (HPMCAS), cellulose acetate suberate (CASub), and 5-carboxypentyl hydroxypropyl cellulose (CHC). ASDs generally reduced rifapentine release at gastric pH, with CASub affording a >31-fold decrease in area under the curve (AUC) compared to rifapentine alone. Critically, reduced gastric dissolution was accompanied by reduced degradation to 3-formylrifamycin. Certain ASDs also enhanced apparent solubility and stabilized supersaturated solutions at intestinal pH, with HPMCAS providing a nearly 4-fold increase in total AUC vs. rifapentine alone. These results suggest that rifapentine delivery via ASDs with these cellulosic polymers may improve bioavailability in vivo.
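
The dissolution comparisons above are reported as areas under concentration-time curves. A minimal sketch of that calculation using the trapezoidal rule, with invented time points and concentrations (not the study's data):

    # Hypothetical sketch: AUC of a dissolution profile by the
    # trapezoidal rule; all values below are invented for illustration.
    import numpy as np

    def trapz_auc(conc, t):
        # Trapezoidal-rule area under a concentration-time curve.
        return float(np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t)))

    t = np.array([0, 15, 30, 60, 120, 180], dtype=float)        # minutes
    free_drug = np.array([0, 40, 55, 50, 38, 30], dtype=float)  # drug alone, ug/mL
    asd = np.array([0, 60, 85, 90, 88, 82], dtype=float)        # ASD release, ug/mL

    print(f"AUC ratio (ASD / free drug): "
          f"{trapz_auc(asd, t) / trapz_auc(free_drug, t):.1f}x")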


Subjects
Antibiotics, Antitubercular/chemistry, Cellulose/chemistry, Drug Delivery Systems, Rifampin/analogs & derivatives, Drug Carriers/chemistry, Humans, Hydrogen-Ion Concentration, Methylcellulose/analogs & derivatives, Molecular Conformation, Rifampin/chemistry, Solubility