Results 1-20 of 32
1.
Am J Respir Crit Care Med ; 207(10): 1300-1309, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36449534

ABSTRACT

Rationale: Despite etiologic and severity heterogeneity in neutropenic sepsis, management is often uniform. Understanding host response clinical subphenotypes might inform treatment strategies for neutropenic sepsis. Objectives: In this retrospective two-hospital study, we analyzed whether temperature trajectory modeling could identify distinct, clinically relevant subphenotypes among oncology patients with neutropenia and suspected infection. Methods: Among adult oncologic admissions with neutropenia and blood cultures within 24 hours, a previously validated model classified patients' initial 72-hour temperature trajectories into one of four subphenotypes. We analyzed subphenotypes' independent relationships with hospital mortality and bloodstream infection using multivariable models. Measurements and Main Results: Patients (primary cohort n = 1,145, validation cohort n = 6,564) fit into one of four temperature subphenotypes. "Hyperthermic slow resolvers" (pooled n = 1,140 [14.8%], mortality n = 104 [9.1%]) and "hypothermic" encounters (n = 1,612 [20.9%], mortality n = 138 [8.6%]) had higher mortality than "hyperthermic fast resolvers" (n = 1,314 [17.0%], mortality n = 47 [3.6%]) and "normothermic" (n = 3,643 [47.3%], mortality n = 196 [5.4%]) encounters (P < 0.001). Bloodstream infections were more common among hyperthermic slow resolvers (n = 248 [21.8%]) and hyperthermic fast resolvers (n = 240 [18.3%]) than among hypothermic (n = 188 [11.7%]) or normothermic (n = 418 [11.5%]) encounters (P < 0.001). Adjusted for confounders, hyperthermic slow resolvers had increased adjusted odds for mortality (primary cohort odds ratio, 1.91 [P = 0.03]; validation cohort odds ratio, 2.19 [P < 0.001]) and bloodstream infection (primary odds ratio, 1.54 [P = 0.04]; validation cohort odds ratio, 2.15 [P < 0.001]). Conclusions: Temperature trajectory subphenotypes were independently associated with important outcomes among hospitalized patients with neutropenia in two independent cohorts.
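The adjusted comparisons above lend themselves to a standard multivariable logistic model. Below is a minimal sketch, on synthetic data, of how subphenotype-mortality odds ratios of this kind can be estimated; the covariates and all numbers are illustrative assumptions, not the study's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
groups = ["normothermic", "hyperthermic_fast", "hyperthermic_slow", "hypothermic"]
df = pd.DataFrame({
    "subphenotype": rng.choice(groups, size=n),
    "age": rng.integers(18, 90, size=n).astype(float),
})
# Simulated mortality risk: higher for slow resolvers and hypothermic encounters.
base = {"normothermic": 0.05, "hyperthermic_fast": 0.04,
        "hyperthermic_slow": 0.09, "hypothermic": 0.09}
p = df["subphenotype"].map(base) + 0.001 * (df["age"] - 50)
df["died"] = rng.binomial(1, p.clip(0.01, 0.99))

# Normothermic encounters are the reference group, matching the comparisons above.
fit = smf.logit(
    "died ~ C(subphenotype, Treatment(reference='normothermic')) + age",
    data=df,
).fit(disp=0)
print(np.exp(fit.params).round(2))  # adjusted odds ratios per subphenotype
```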


Subjects
Neoplasms; Neutropenia; Sepsis; Adult; Humans; Retrospective Studies; Temperature; Neutropenia/complications; Sepsis/complications; Fever; Neoplasms/complications; Neoplasms/therapy
2.
BMC Pregnancy Childbirth ; 22(1): 295, 2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35387624

ABSTRACT

BACKGROUND: Early warning scores are designed to identify hospitalized patients who are at high risk of clinical deterioration. Although many general scores have been developed for the medical-surgical wards, specific scores have also been developed for obstetric patients due to differences in normal vital sign ranges and potential complications in this unique population. The comparative performance of general and obstetric early warning scores for predicting deterioration and infection on the maternal wards is not known. METHODS: This was an observational cohort study at the University of Chicago that included patients hospitalized on obstetric wards from November 2008 to December 2018. Obstetric scores (modified early obstetric warning system (MEOWS), maternal early warning criteria (MEWC), and maternal early warning trigger (MEWT)), paper-based general scores (Modified Early Warning Score (MEWS) and National Early Warning Score (NEWS)), and a general score developed using machine learning (electronic Cardiac Arrest Risk Triage (eCART) score) were compared using the area under the receiver operating characteristic curve (AUC) for predicting ward to intensive care unit (ICU) transfer and/or death and new infection. RESULTS: A total of 19,611 patients were included, with 43 (0.2%) experiencing deterioration (ICU transfer and/or death) and 88 (0.4%) experiencing an infection. eCART had the highest discrimination for deterioration (p < 0.05 for all comparisons), with an AUC of 0.86, followed by MEOWS (0.74), NEWS (0.72), MEWC (0.71), MEWS (0.70), and MEWT (0.65). MEWC, MEWT, and MEOWS had higher accuracy than MEWS and NEWS but lower accuracy than eCART at specific cut-off thresholds. For predicting infection, eCART (AUC 0.77) had the highest discrimination. CONCLUSIONS: Within the limitations of our retrospective study, eCART had the highest accuracy for predicting deterioration and infection in our ante- and postpartum patient population. Maternal early warning scores were more accurate than MEWS and NEWS. While institutional choice of an early warning system is complex, our results have important implications for the risk stratification of maternal ward patients, especially since the low prevalence of events means that small improvements in accuracy can lead to large decreases in false alarms.
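Score comparisons like the one above reduce to computing an AUC per tool against the same outcome labels. A minimal sketch on synthetic data, assuming each score is available as a numeric column; the score names reuse those in the abstract purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
deteriorated = rng.binomial(1, 0.02, size=n)  # rare outcome, as on maternal wards
# Each score is simulated as a noisy signal of the outcome; signal strength varies.
scores = {
    "eCART": 2.0 * deteriorated + rng.normal(size=n),
    "MEOWS": 1.0 * deteriorated + rng.normal(size=n),
    "MEWS":  0.8 * deteriorated + rng.normal(size=n),
}
for name, s in scores.items():
    print(name, round(roc_auc_score(deteriorated, s), 2))
```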


Subjects
Clinical Deterioration; Early Warning Score; Heart Arrest; Female; Heart Arrest/diagnosis; Humans; Intensive Care Units; Pregnancy; ROC Curve; Retrospective Studies; Risk Assessment/methods
3.
Crit Care Med ; 49(10): 1694-1705, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.
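The key move in this study is splitting one door-to-antibiotic interval into two timestamped components and modeling each delay's per-hour association with mortality. A hedged sketch on simulated data; the variable names and effect sizes are invented, and the study's causal forest subgroup analysis is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
order_delay_hr = rng.exponential(2.0, size=n)     # ED arrival -> antibiotic order
delivery_delay_hr = rng.exponential(1.0, size=n)  # order -> first dose administered
logit_p = -3.0 + 0.04 * order_delay_hr + 0.05 * delivery_delay_hr
df = pd.DataFrame({
    "order_delay_hr": order_delay_hr,
    "delivery_delay_hr": delivery_delay_hr,
    "died": rng.binomial(1, 1 / (1 + np.exp(-logit_p))),
})
fit = smf.logit("died ~ order_delay_hr + delivery_delay_hr", data=df).fit(disp=0)
print(np.exp(fit.params).round(3))  # OR per additional hour of each delay type
```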


Subjects
Anti-Bacterial Agents/administration & dosage; Phenotype; Sepsis/genetics; Time-to-Treatment/statistics & numerical data; Aged; Aged, 80 and over; Anti-Bacterial Agents/therapeutic use; Emergency Service, Hospital/organization & administration; Emergency Service, Hospital/statistics & numerical data; Female; Hospitalization/statistics & numerical data; Humans; Illinois/epidemiology; Male; Middle Aged; Prospective Studies; Retrospective Studies; Sepsis/drug therapy; Sepsis/physiopathology; Time Factors
4.
Crit Care Med ; 49(7): e673-e682, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
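Validating criteria against chart review amounts to a confusion matrix with manual review as the gold standard. A small sketch with simulated labels, using prevalence and operating characteristics loosely echoing the numbers above.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
n = 5000
truth = rng.binomial(1, 0.275, size=n)  # ~27.5% truly infected per chart review
# A criterion that is sensitive but imperfectly specific, loosely like Sepsis-3.
criterion = np.where(truth == 1,
                     rng.binomial(1, 0.81, size=n),
                     rng.binomial(1, 0.11, size=n))
tn, fp, fn, tp = confusion_matrix(truth, criterion).ravel()
print("sensitivity:", round(tp / (tp + fn), 2))
print("specificity:", round(tn / (tn + fp), 2))
```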


Subjects
Data Accuracy; Electronic Health Records/standards; Infections/epidemiology; Information Storage and Retrieval/methods; Adult; Aged; Anti-Bacterial Agents/therapeutic use; Antibiotic Prophylaxis/statistics & numerical data; Blood Culture; Chicago/epidemiology; False Positive Reactions; Female; Humans; Infections/diagnosis; International Classification of Diseases; Male; Middle Aged; Organ Dysfunction Scores; Patient Admission/statistics & numerical data; Prevalence; Retrospective Studies; Sensitivity and Specificity; Sepsis/diagnosis
5.
Crit Care Med ; 48(11): e1020-e1028, 2020 11.
Article in English | MEDLINE | ID: mdl-32796184

ABSTRACT

OBJECTIVES: Bacteremia and fungemia can cause life-threatening illness with high mortality rates, which increase with delays in antimicrobial therapy. The objective of this study was to develop machine learning models to predict blood culture results at the time of the blood culture order using routine data in the electronic health record. DESIGN: Retrospective analysis of a large, multicenter inpatient dataset. SETTING: Two academic tertiary medical centers between the years 2007 and 2018. SUBJECTS: All hospitalized patients who received a blood culture during hospitalization. INTERVENTIONS: The dataset was partitioned temporally into development and validation cohorts: the logistic regression and gradient boosting machine models were trained on the earliest 80% of hospital admissions and validated on the most recent 20%. MEASUREMENTS AND MAIN RESULTS: There were 252,569 blood culture days, defined as nonoverlapping 24-hour periods in which one or more blood cultures were ordered. In the validation cohort, there were 50,514 blood culture days, with 3,762 cases of bacteremia (7.5%) and 370 cases of fungemia (0.7%). The gradient boosting machine model for bacteremia had a significantly higher area under the receiver operating characteristic curve (0.78 [95% CI 0.77-0.78]) than the logistic regression model (0.73 [0.72-0.74]) (p < 0.001). The model identified a high-risk group with an occurrence rate of bacteremia more than 30 times that of the low-risk group (27.4% vs 0.9%; p < 0.001). Using the low-risk cut-off, the model identified bacteremia with 98.7% sensitivity. The gradient boosting machine model for fungemia had high discrimination (area under the receiver operating characteristic curve 0.88 [95% CI 0.86-0.90]). The high-risk fungemia group had 252 fungemic cultures compared with one fungemic culture in the low-risk group (5.0% vs 0.02%; p < 0.001). Further, the high-risk group had a mortality rate 60 times higher than the low-risk group (28.2% vs 0.4%; p < 0.001). CONCLUSIONS: Our novel models identified patients at low and high risk for bacteremia and fungemia using routinely collected electronic health record data. Further research is needed to evaluate the cost-effectiveness and impact of model implementation in clinical practice.
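A temporal split trains on the earliest admissions and validates on the most recent, mimicking prospective deployment. The sketch below uses synthetic features and sklearn's GradientBoostingClassifier as a stand-in for the study's gradient boosting machine; nothing here reproduces the actual feature set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 20000
X = rng.normal(size=(n, 5))  # stand-ins for routine vitals/labs at order time
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 3.0))))
# Rows are assumed sorted by admission time, so slicing yields a temporal split.
cut = int(0.8 * n)
clf = GradientBoostingClassifier().fit(X[:cut], y[:cut])
pred = clf.predict_proba(X[cut:])[:, 1]
print("validation AUC:", round(roc_auc_score(y[cut:], pred), 2))
```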


Subjects
Bacteremia/diagnosis; Electronic Health Records/statistics & numerical data; Fungemia/diagnosis; Machine Learning; Aged; Bacteremia/blood; Bacteremia/etiology; Bacteremia/microbiology; Blood Culture; Female; Fungemia/blood; Fungemia/etiology; Fungemia/microbiology; Hospitalization/statistics & numerical data; Humans; Male; Middle Aged; Models, Statistical; Reproducibility of Results; Retrospective Studies; Risk Factors
6.
Crit Care Med ; 48(11): 1645-1653, 2020 11.
Article in English | MEDLINE | ID: mdl-32947475

ABSTRACT

OBJECTIVES: We recently found that distinct body temperature trajectories of infected patients correlated with survival. Understanding the relationship between the temperature trajectories and the host immune response to infection could allow us to immunophenotype patients at the bedside using temperature. The objective was to identify whether temperature trajectories have consistent associations with specific cytokine responses in two distinct cohorts of infected patients. DESIGN: Prospective observational study. SETTING: Large academic medical center between 2013 and 2019. SUBJECTS: Two cohorts of infected patients: 1) patients in the ICU with septic shock and 2) hospitalized patients with Staphylococcus aureus bacteremia. INTERVENTIONS: Clinical data (including body temperature) and plasma cytokine concentrations were measured. Patients were classified into four temperature trajectory subphenotypes using their temperature measurements in the first 72 hours from the onset of infection. Log-transformed cytokine levels were standardized to the mean and compared across the subphenotypes in both cohorts. MEASUREMENTS AND MAIN RESULTS: The cohorts consisted of 120 patients with septic shock (cohort 1) and 88 patients with S. aureus bacteremia (cohort 2). Patients from both cohorts were classified into one of four previously validated temperature subphenotypes: "hyperthermic, slow resolvers" (n = 19 cohort 1; n = 13 cohort 2), "hyperthermic, fast resolvers" (n = 18 C1; n = 24 C2), "normothermic" (n = 54 C1; n = 31 C2), and "hypothermic" (n = 29 C1; n = 20 C2). Both "hyperthermic, slow resolvers" and "hyperthermic, fast resolvers" had high levels of G-CSF, CCL2, and interleukin-10 compared with the "hypothermic" group when controlling for cohort and timing of cytokine measurement (p < 0.05). In contrast to the "hyperthermic, slow resolvers," the "hyperthermic, fast resolvers" showed significant decreases in the levels of several cytokines over a 24-hour period, including interleukin-1RA, interleukin-6, interleukin-8, G-CSF, and M-CSF (p < 0.001). CONCLUSIONS: Temperature trajectory subphenotypes are associated with consistent cytokine profiles in two distinct cohorts of infected patients. These subphenotypes could play a role in the bedside identification of cytokine profiles in patients with sepsis.
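Comparing cytokines across subphenotypes as described requires log-transforming the skewed concentrations and z-standardizing them to a common scale. A minimal sketch with synthetic IL-6 values; all column names are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "subphenotype": rng.choice(
        ["hyperthermic_slow", "hyperthermic_fast", "normothermic", "hypothermic"], 200),
    "il6_pg_ml": rng.lognormal(mean=3.0, sigma=1.0, size=200),  # skewed, like cytokines
})
logged = np.log(df["il6_pg_ml"])
df["il6_z"] = (logged - logged.mean()) / logged.std()  # standardized to the mean
print(df.groupby("subphenotype")["il6_z"].mean().round(2))  # group-wise comparison
```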


Subjects
Body Temperature/physiology; Immunity/immunology; Sepsis/immunology; Aged; Bacteremia/immunology; Bacteremia/physiopathology; Body Temperature/immunology; Cytokines/blood; Female; Fever/immunology; Fever/physiopathology; Humans; Immunity/physiology; Male; Middle Aged; Prospective Studies; Sepsis/physiopathology; Shock, Septic/immunology; Shock, Septic/physiopathology; Staphylococcal Infections/immunology; Staphylococcal Infections/physiopathology
7.
Am J Respir Crit Care Med ; 200(3): 327-335, 2019 08 01.
Article in English | MEDLINE | ID: mdl-30789749

ABSTRACT

Rationale: Sepsis is a heterogeneous syndrome, and identifying clinically relevant subphenotypes is essential. Objectives: To identify novel subphenotypes in hospitalized patients with infection using longitudinal temperature trajectories. Methods: In the model development cohort, inpatient admissions meeting criteria for infection in the emergency department and receiving antibiotics within 24 hours of presentation were included. Temperature measurements within the first 72 hours were compared between survivors and nonsurvivors. Group-based trajectory modeling was performed to identify temperature trajectory groups, and patient characteristics and outcomes were compared between the groups. The model was then externally validated at a second hospital using the same inclusion criteria. Measurements and Main Results: A total of 12,413 admissions were included in the development cohort, and 19,053 were included in the validation cohort. In the development cohort, four temperature trajectory groups were identified: "hyperthermic, slow resolvers" (n = 1,855; 14.9% of the cohort); "hyperthermic, fast resolvers" (n = 2,877; 23.2%); "normothermic" (n = 4,067; 32.8%); and "hypothermic" (n = 3,614; 29.1%). The hypothermic subjects were the oldest and had the most comorbidities, the lowest levels of inflammatory markers, and the highest in-hospital mortality rate (9.5%). The hyperthermic, slow resolvers were the youngest and had the fewest comorbidities, the highest levels of inflammatory markers, and a mortality rate of 5.1%. The hyperthermic, fast resolvers had the lowest mortality rate (2.9%). Similar trajectory groups, patient characteristics, and outcomes were found in the validation cohort. Conclusions: We identified and validated four novel subphenotypes of patients with infection, with significant variability in inflammatory markers and outcomes.
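Group-based trajectory modeling is typically fit with specialized packages (e.g., the Stata traj plugin or R's lcmm). As a rough, hedged illustration of the idea only, the sketch below clusters simulated 72-hour temperature curves with k-means, which captures the grouping intuition but is not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
hours = np.arange(72)

def simulate(offset, slope, n):
    """Hourly temperatures: 37 C baseline plus an offset that changes linearly."""
    return 37.0 + offset + slope * hours / 72 + rng.normal(0, 0.2, size=(n, 72))

trajectories = np.vstack([
    simulate(1.5, -0.3, 100),   # hyperthermic, slow resolvers
    simulate(1.5, -1.5, 100),   # hyperthermic, fast resolvers
    simulate(0.0,  0.0, 100),   # normothermic
    simulate(-0.8, 0.0, 100),   # hypothermic
])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))  # encounters per recovered trajectory group
```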


Subjects
Body Temperature; Fever/diagnosis; Fever/etiology; Sepsis/complications; Sepsis/mortality; Aged; Cohort Studies; Female; Fever/therapy; Hospital Mortality; Hospitalization; Humans; Male; Middle Aged; Sepsis/therapy; Time Factors
8.
Crit Care Med ; 47(12): 1735-1742, 2019 12.
Article in English | MEDLINE | ID: mdl-31599813

ABSTRACT

OBJECTIVES: The immune response during sepsis remains poorly understood and is likely influenced by the host's preexisting immunologic comorbidities. Although more than 20% of the U.S. population has an allergic-atopic disease, the type 2 immune response that is overactive in these diseases can also mediate beneficial pro-resolving, tissue-repair functions. Thus, the presence of allergic immunologic comorbidities may be advantageous for patients suffering from sepsis. The objective of this study was to test the hypothesis that comorbid type 2 immune diseases confer protection against morbidity and mortality due to acute infection. DESIGN: Retrospective cohort study of patients hospitalized with an acute infection between November 2008 and January 2016 using electronic health record data. SETTING: Single tertiary-care academic medical center. PATIENTS: Admissions to the hospital through the emergency department with likely infection at the time of admission who may or may not have had a type 2 immune-mediated disease, defined as asthma, allergic rhinitis, atopic dermatitis, or food allergy, as determined by International Classification of Diseases, 9th Revision, Clinical Modification codes. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Of 10,789 admissions for infection, 2,578 (24%) had a type 2 disease; these patients were more likely to be female, black, and younger than patients without type 2 diseases. In unadjusted analyses, type 2 patients had decreased odds of dying during the hospitalization (odds ratio, 0.47; 95% CI, 0.38-0.59; p < 0.001), while having more than one type 2 disease conferred a dose-dependent reduction in the risk of mortality (p < 0.001). When adjusting for demographics, medications, types of infection, and illness severity, the presence of a type 2 disease remained protective (odds ratio, 0.55; 95% CI, 0.43-0.70; p < 0.001). Similar results were found using a propensity score analysis (odds ratio, 0.57; 95% CI, 0.45-0.71; p < 0.001). CONCLUSIONS: Patients with type 2 diseases admitted with acute infections have reduced mortality, implying that the type 2 immune response is protective in sepsis.
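The propensity score analysis mentioned above can be illustrated with inverse probability weighting: model the probability of having a type 2 disease, weight encounters by the inverse of that probability, and refit the mortality model. Synthetic data throughout; the study's covariate set and matching details are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 10000
age = rng.normal(55, 15, size=n)
# Younger patients are more likely to carry a type 2 diagnosis (confounding).
type2 = rng.binomial(1, 1 / (1 + np.exp(0.03 * (age - 55))))
died = rng.binomial(1, 1 / (1 + np.exp(-(-3.0 + 0.03 * (age - 55) - 0.6 * type2))))

ps = sm.Logit(type2, sm.add_constant(age)).fit(disp=0).predict()  # propensity score
w = np.where(type2 == 1, 1 / ps, 1 / (1 - ps))                    # IPW weights
ipw = sm.GLM(died, sm.add_constant(type2.astype(float)),
             family=sm.families.Binomial(), freq_weights=w).fit()
print("weighted OR:", round(float(np.exp(ipw.params[1])), 2))
```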


Subjects
Hypersensitivity/complications; Hypersensitivity/mortality; Infections/complications; Infections/mortality; Acute Disease; Adult; Aged; Cohort Studies; Female; Hospitalization; Humans; Infections/immunology; Male; Middle Aged; Retrospective Studies; Risk Assessment
9.
Crit Care Med ; 47(12): e962-e965, 2019 12.
Article in English | MEDLINE | ID: mdl-31567342

ABSTRACT

OBJECTIVES: Early warning scores were developed to identify high-risk patients on the hospital wards. Research on early warning scores has focused on patients in short-term acute care hospitals, but there are other settings, such as long-term acute care hospitals, where these tools could be useful. However, the accuracy of early warning scores in long-term acute care hospitals is unknown. DESIGN: Observational cohort study. SETTING: Two long-term acute care hospitals in Illinois from January 2002 to September 2017. PATIENTS: Admitted adult long-term acute care hospital patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographic characteristics, vital signs, laboratory values, nursing flowsheet data, and outcomes data were collected from the electronic health record. The accuracy of individual variables, the Modified Early Warning Score, the National Early Warning Score version 2, and our previously developed electronic Cardiac Arrest Risk Triage score were compared for predicting the need for acute hospital transfer or death using the area under the receiver operating characteristic curve. A total of 12,497 patient admissions were included, with 3,550 experiencing the composite outcome. The median age was 65 (interquartile range, 54-74), 46% were female, and the median length of stay in the long-term acute care hospital was 27 days (interquartile range, 17-40 d), with an 8% in-hospital mortality. Laboratory values were the best predictors, with blood urea nitrogen being the most accurate (area under the receiver operating characteristic curve, 0.63), followed by albumin, bilirubin, and WBC count (area under the receiver operating characteristic curve, 0.61). Systolic blood pressure was the most accurate vital sign (area under the receiver operating characteristic curve, 0.60). Electronic Cardiac Arrest Risk Triage (area under the receiver operating characteristic curve, 0.72) was significantly more accurate than National Early Warning Score version 2 (area under the receiver operating characteristic curve, 0.66) and Modified Early Warning Score (area under the receiver operating characteristic curve, 0.65; p < 0.01 for all pairwise comparisons). CONCLUSIONS: In this retrospective cohort study, we found that the electronic Cardiac Arrest Risk Triage score was significantly more accurate than the Modified Early Warning Score and National Early Warning Score version 2 for predicting acute hospital transfer and mortality. Because laboratory values were more predictive than vital signs and the average length of stay in a long-term acute care hospital is much longer than in short-term acute care hospitals, developing a score specific to the long-term acute care hospital population would likely further improve accuracy, thus allowing earlier identification of high-risk patients for potentially life-saving interventions.


Subjects
Early Warning Score; Heart Arrest/diagnosis; Risk Assessment/methods; Acute Disease; Aged; Cohort Studies; Female; Hospitals; Humans; Long-Term Care; Male; Middle Aged; Retrospective Studies
10.
Crit Care Med ; 47(10): 1283-1289, 2019 10.
Article in English | MEDLINE | ID: mdl-31343475

ABSTRACT

OBJECTIVES: To characterize rapid response team activations, and the patients receiving them, in the American Heart Association-sponsored Get With The Guidelines Resuscitation-Medical Emergency Team cohort between 2005 and 2015. DESIGN: Retrospective multicenter cohort study. SETTING: Three hundred sixty U.S. hospitals. PATIENTS: Consecutive adult patients experiencing rapid response team activation. INTERVENTIONS: Rapid response team activation. MEASUREMENTS AND MAIN RESULTS: The cohort included 402,023 rapid response team activations from 347,401 unique healthcare encounters. Respiratory triggers (38.0%) and cardiac triggers (37.4%) were most common. The most frequent interventions were noninvasive: pulse oximetry (66.5%), supplemental oxygen (62.0%), and other monitoring (59.6%). Fluids were the most common medication ordered (19.3%), but new antibiotic orders were rare (1.2%). More than 10% of rapid response team activations resulted in code status changes. Hospital mortality was over 14% and increased with subsequent rapid response activations. CONCLUSIONS: Although patients requiring rapid response team activation have high inpatient mortality, most rapid response team activations involve relatively few interventions, which may limit these teams' ability to improve patient outcomes.


Subjects
Emergency Service, Hospital; Hospital Rapid Response Team/statistics & numerical data; Registries; Resuscitation/statistics & numerical data; Aged; Cohort Studies; Female; Humans; Male; Middle Aged; Practice Guidelines as Topic; Retrospective Studies; United States
11.
Crit Care Med ; 46(7): 1070-1077, 2018 07.
Article in English | MEDLINE | ID: mdl-29596073

ABSTRACT

OBJECTIVES: To develop an acute kidney injury risk prediction model using electronic health record data for longitudinal use in hospitalized patients. DESIGN: Observational cohort study. SETTING: Tertiary, urban, academic medical center from November 2008 to January 2016. PATIENTS: All adult inpatients without pre-existing renal failure at admission, defined as first serum creatinine greater than or equal to 3.0 mg/dL, an International Classification of Diseases, 9th Revision, code for chronic kidney disease stage 4 or higher, or having received renal replacement therapy within 48 hours of first serum creatinine measurement. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographics, vital signs, diagnostics, and interventions were used in a Gradient Boosting Machine algorithm to predict serum creatinine-based Kidney Disease Improving Global Outcomes stage 2 acute kidney injury, with 60% of the data used for derivation and 40% for validation. The area under the receiver operating characteristic curve (AUC) was calculated in the validation cohort, and subgroup analyses were conducted across admission serum creatinine, acute kidney injury severity, and hospital location. Among the 121,158 included patients, 17,482 (14.4%) developed any Kidney Disease Improving Global Outcomes acute kidney injury, with 4,251 (3.5%) developing stage 2. The AUC (95% CI) was 0.90 (0.90-0.90) for predicting stage 2 acute kidney injury within 24 hours and 0.87 (0.87-0.87) within 48 hours. The AUC was 0.96 (0.96-0.96) for receipt of renal replacement therapy (n = 821) in the next 48 hours. Accuracy was similar across hospital settings (ICU, wards, and emergency department) and admitting serum creatinine groupings. At a probability threshold of greater than or equal to 0.022, the algorithm had a sensitivity of 84% and a specificity of 85% for stage 2 acute kidney injury and predicted its development a median of 41 hours (interquartile range, 12-141 hr) prior to onset. CONCLUSIONS: Readily available electronic health record data can be used to predict impending acute kidney injury prior to changes in serum creatinine with excellent accuracy across different patient locations and admission serum creatinine. Real-time use of this model would allow early interventions for those at high risk of acute kidney injury.
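Reporting sensitivity and specificity at a probability threshold (here, greater than or equal to 0.022) corresponds to picking one operating point on the ROC curve. A minimal sketch on simulated risk scores; the prevalence and noise levels are invented.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(8)
n = 50000
y = rng.binomial(1, 0.035, size=n)  # ~3.5% develop stage 2 AKI
risk = np.clip(0.02 + 0.30 * y + rng.normal(0, 0.08, size=n), 0.0, 1.0)
fpr, tpr, thresholds = roc_curve(y, risk)
i = np.argmin(np.abs(thresholds - 0.022))  # operating point nearest the cutoff
print("sensitivity:", round(tpr[i], 2), "specificity:", round(1 - fpr[i], 2))
```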


Subjects
Acute Kidney Injury/etiology; Machine Learning; Acute Kidney Injury/diagnosis; Algorithms; Area Under Curve; Creatinine/blood; Electronic Health Records; Female; Humans; Male; Middle Aged; Models, Statistical; ROC Curve; Renal Replacement Therapy/statistics & numerical data; Reproducibility of Results
14.
J Am Med Inform Assoc ; 31(6): 1322-1330, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38679906

ABSTRACT

OBJECTIVES: To compare and externally validate popular deep learning model architectures and data transformation methods for variable-length time series data in 3 clinical tasks (clinical deterioration, severe acute kidney injury [AKI], and suspected infection). MATERIALS AND METHODS: This multicenter retrospective study included admissions at 2 medical centers that spanned 2007-2022. Distinct datasets were created for each clinical task, with 1 site used for training and the other for testing. Three feature engineering methods (normalization, standardization, and piece-wise linear encoding with decision trees [PLE-DTs]) and 3 architectures (long short-term memory/gated recurrent unit [LSTM/GRU], temporal convolutional network, and time-distributed wrapper with convolutional neural network [TDW-CNN]) were compared in each clinical task. Model discrimination was evaluated using the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC). RESULTS: The study comprised 373,825 admissions for training and 256,128 admissions for testing. LSTM/GRU models tied with TDW-CNN models, each obtaining the highest mean AUPRC in 2 tasks, and LSTM/GRU had the highest mean AUROC across all tasks (deterioration: 0.81, AKI: 0.92, infection: 0.87). PLE-DT with LSTM/GRU achieved the highest AUPRC in all tasks. DISCUSSION: When externally validated in 3 clinical tasks, the LSTM/GRU model architecture with PLE-DT transformed data demonstrated the highest AUPRC in all tasks. Multiple models achieved similar performance when evaluated using AUROC. CONCLUSION: The LSTM architecture performs as well as or better than some newer architectures, and PLE-DT may enhance the AUPRC in variable-length time series data for predicting clinical outcomes during external validation.
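Handling variable-length time series in an LSTM/GRU model usually means padding sequences to a common length and packing them so the recurrence skips padding. A hedged PyTorch sketch with invented dimensions; it is not the paper's architecture, feature encoding, or training procedure.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

class SeqClassifier(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, padded, lengths):
        packed = pack_padded_sequence(padded, lengths, batch_first=True,
                                      enforce_sorted=False)
        _, (h, _) = self.rnn(packed)         # final hidden state per sequence
        return self.head(h[-1]).squeeze(-1)  # one logit per admission

# Three admissions with different numbers of hourly observations.
seqs = [torch.randn(t, 6) for t in (10, 24, 72)]
padded = pad_sequence(seqs, batch_first=True)
logits = SeqClassifier()(padded, torch.tensor([10, 24, 72]))
print(torch.sigmoid(logits))  # predicted risk per admission
```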


Subjects
Deep Learning; Female; Humans; Male; Middle Aged; Acute Kidney Injury; Datasets as Topic; Neural Networks, Computer; Retrospective Studies; ROC Curve
15.
Resusc Plus ; 17: 100540, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38260119

ABSTRACT

Background and Objective: The Children's Early Warning Tool (CEWT), developed in Australia, is widely used in many countries to monitor the risk of deterioration in hospitalized children. Our objective was to compare CEWT prediction performance against a version of the Bedside Pediatric Early Warning Score (Bedside PEWS), Between the Flags (BTF), and the pediatric Calculated Assessment of Risk and Triage (pCART). Methods: We conducted a retrospective observational study of all patient admissions to the Comer Children's Hospital at the University of Chicago between 2009 and 2019. We compared performance for predicting the primary outcome of a direct ward-to-intensive care unit (ICU) transfer within the next 12 h using the area under the receiver operating characteristic curve (AUC). Alert rates at various score thresholds were also compared. Results: Of 50,815 ward admissions, 1,874 (3.7%) experienced the primary outcome. Among patients in Cohort 1 (years 2009-2017, on which the machine learning-based pCART was trained), CEWT performed slightly worse than Bedside PEWS but better than BTF (CEWT AUC 0.74 vs. Bedside PEWS 0.76, P < 0.001; vs. BTF 0.66, P < 0.001), while pCART performed best for patients in Cohort 2 (years 2018-2019, pCART AUC 0.84 vs. CEWT AUC 0.79, P < 0.001; vs. BTF AUC 0.67, P < 0.001; vs. Bedside PEWS 0.80, P < 0.001). Sensitivity, specificity, and positive predictive values varied across all four tools at the examined alert thresholds. Conclusion: CEWT has good discrimination for predicting which patients will likely be transferred to the ICU, while pCART performed best.

16.
J Am Med Inform Assoc ; 31(6): 1291-1302, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38587875

ABSTRACT

OBJECTIVE: The timely stratification of trauma injury severity can enhance the quality of trauma care, but it requires intensive manual annotation from certified trauma coders. The objective of this study was to develop machine learning models for the stratification of trauma injury severity across various body regions using clinical text and structured electronic health records (EHRs) data. MATERIALS AND METHODS: Our study utilized clinical documents and structured EHR variables linked with the trauma registry data to create 2 machine learning models with different approaches to representing text. The first one fuses concept unique identifiers (CUIs) extracted from free text with structured EHR variables, while the second one integrates free text with structured EHR variables. Temporal validation was undertaken to ensure the models' temporal generalizability. Additionally, analyses to assess the variable importance were conducted. RESULTS: Both models performed well in categorizing leg injuries, achieving macro-F1 scores above 0.8. They also showed considerable accuracy, with macro-F1 scores exceeding or near 0.7, in assessing injuries in the areas of the chest and head. Our variable importance analysis showed that the most important features in the models have strong face validity in determining clinically relevant trauma injuries. DISCUSSION: The CUI-based model achieves comparable performance, if not higher, compared to the free-text-based model, with reduced complexity. Furthermore, integrating structured EHR data improves performance, particularly when the text modalities are insufficiently indicative. CONCLUSIONS: Our multi-modal, multiclass models can provide accurate stratification of trauma injury severity and clinically relevant interpretations.
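Macro-F1, the headline metric here, averages per-class F1 scores with equal weight, so minority severity classes count as much as common ones. A tiny sketch with made-up labels.

```python
from sklearn.metrics import f1_score

y_true = ["none", "mild", "severe", "mild", "none", "severe", "mild", "none"]
y_pred = ["none", "mild", "mild",   "mild", "none", "severe", "none", "none"]
# Macro averaging computes F1 per class, then takes the unweighted mean,
# so rare severe injuries influence the score as much as common negatives.
print(round(f1_score(y_true, y_pred, average="macro"), 2))
```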


Subjects
Electronic Health Records; Machine Learning; Wounds and Injuries; Humans; Wounds and Injuries/classification; Injury Severity Score; Registries; Trauma Severity Indices; Natural Language Processing
17.
Crit Care Explor ; 6(3): e1066, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38505174

ABSTRACT

OBJECTIVES: Alcohol withdrawal syndrome (AWS) may progress to require high-intensity care. Approaches to identify hospitalized patients with AWS who received higher levels of care have not been previously examined. This study aimed to examine the utility of the Clinical Institute Withdrawal Assessment for Alcohol, Revised (CIWA-Ar) scale scores and medication doses for alcohol withdrawal management in identifying patients who received high-intensity care. DESIGN: A multicenter observational cohort study of hospitalized adults with alcohol withdrawal. SETTING: University of Chicago Medical Center and University of Wisconsin Hospital. PATIENTS: Inpatient encounters between November 2008 and February 2022 with a CIWA-Ar score greater than 0 and a benzodiazepine or barbiturate administered within the first 24 hours. The primary composite outcome was progression to high-intensity care (intermediate care or ICU). INTERVENTIONS: None. MAIN RESULTS: Among the 8,742 patients included in the study, 37.5% (n = 3,280) progressed to high-intensity care. The odds ratio for the composite outcome increased above 1.0 when the CIWA-Ar score reached 24. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) at this threshold were 0.12 (95% CI, 0.11-0.13), 0.95 (95% CI, 0.94-0.95), 0.58 (95% CI, 0.54-0.61), and 0.64 (95% CI, 0.63-0.65), respectively. The odds ratio increased above 1.0 at a 24-hour lorazepam milligram equivalent dose cutoff of 15 mg. The sensitivity, specificity, PPV, and NPV at this threshold were 0.16 (95% CI, 0.14-0.17), 0.96 (95% CI, 0.95-0.96), 0.68 (95% CI, 0.65-0.72), and 0.65 (95% CI, 0.64-0.66), respectively. CONCLUSIONS: Neither CIWA-Ar scores nor medication dose cutoff points were effective measures for identifying patients with alcohol withdrawal who received high-intensity care. Studies examining outcomes in patients who deteriorate with AWS will require better methods for cohort identification.
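The threshold analysis above reports sensitivity, specificity, PPV, and NPV at a score cutoff. A minimal sketch computing all four from a boolean flag on synthetic scores; the prevalence mirrors the 37.5% reported above, but everything else is invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 8742
progressed = rng.binomial(1, 0.375, size=n)               # outcome per encounter
score = rng.normal(12, 6, size=n) + 4 * progressed        # weakly informative score
flag = score >= 24                                        # the examined cutoff
tp = np.sum(flag & (progressed == 1)); fp = np.sum(flag & (progressed == 0))
fn = np.sum(~flag & (progressed == 1)); tn = np.sum(~flag & (progressed == 0))
print("sens", round(tp / (tp + fn), 2), "spec", round(tn / (tn + fp), 2),
      "ppv", round(tp / (tp + fp), 2), "npv", round(tn / (tn + fn), 2))
```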

18.
medRxiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562803

ABSTRACT

Rationale: Early detection of clinical deterioration using early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, only underwent retrospective internal validation, and were not tested in important patient subgroups. Objectives: To develop a gradient boosted machine model (eCARTv5) for identifying clinical deterioration and then validate it externally, test it prospectively, and evaluate it across patient subgroups. Methods: Data from all adult patients hospitalized on the wards in seven hospitals from 2008-2022 were used to develop eCARTv5, with demographics, vital signs, clinician documentation, and laboratory values utilized to predict intensive care unit transfer or death in the next 24 hours. The model was externally validated retrospectively in 21 hospitals from 2009-2023 and prospectively in 10 hospitals from February to May 2023. eCARTv5 was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). Measurements and Main Results: The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 46,330 admissions. In retrospective validation, eCART had the highest AUROC (0.835; 95% CI 0.834, 0.835), followed by NEWS (0.766; 95% CI 0.766, 0.767) and MEWS (0.704; 95% CI 0.703, 0.704). eCART's performance remained high (AUROC ≥0.80) across a range of patient demographics and clinical conditions, and during prospective validation. Conclusions: We developed eCARTv5, which accurately identifies early clinical deterioration in hospitalized ward patients. Our model performed better than the NEWS and MEWS retrospectively, prospectively, and across a range of subgroups.

19.
medRxiv ; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006-2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred for each patient. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.

20.
Front Pediatr ; 11: 1284672, 2023.
Article in English | MEDLINE | ID: mdl-38188917

ABSTRACT

Introduction: Critical deterioration in hospitalized children, defined as ward to pediatric intensive care unit (PICU) transfer followed by mechanical ventilation (MV) or vasoactive infusion (VI) within 12 h, has been used as a primary metric to evaluate the effectiveness of clinical interventions or quality improvement initiatives. We explored the association between critical events (CEs), i.e., MV or VI events, within the first 48 h of PICU transfer from the ward or emergency department (ED) and in-hospital mortality. Methods: We conducted a retrospective study of a cohort of PICU transfers from the ward or the ED at two tertiary-care academic hospitals. We determined the association between mortality and the occurrence of CEs within 48 h of PICU transfer after adjusting for age, gender, hospital, and prior comorbidities. Results: Experiencing a CE within 48 h of PICU transfer was associated with an increased risk of mortality [OR 12.40 (95% CI: 8.12-19.23, P < 0.05)]. The increased risk of mortality was highest in the first 12 h [OR 11.32 (95% CI: 7.51-17.15, P < 0.05)] but persisted in the 12-48 h interval [OR 2.84 (95% CI: 1.40-5.22, P < 0.05)]. Varying levels of risk were observed when ED and ward transfers were considered separately, across age groups, and across individual 12-h intervals. Discussion: We demonstrate that the occurrence of a CE within 48 h of PICU transfer was associated with mortality after adjusting for confounders. Studies focusing on the impact of quality improvement efforts may benefit from using CEs within 48 h of PICU transfer as an additional evaluation metric, provided these events could have been influenced by the initiative.
