Results 1 - 20 of 20
1.
Crit Care Med; 50(9): 1339-1347, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores at or above the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause in-hospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of whom 6,681 (11.1%) met inclusion criteria (baseline period, n = 3,191; intervention period, n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.49% vs 0.26%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high-risk (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, a decrease in time to ICU transfer, and an increase in vital sign reassessment within 2 hours. CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced in-hospital mortality, likely driven by earlier and more frequent ICU transfer.
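The two-tier alerting protocol described in this abstract can be expressed as a simple decision rule. Below is a minimal sketch assuming a score already converted to a percentile; the function name and workflow strings are illustrative stand-ins, not the study's implementation.

```python
# Hypothetical sketch of the two-tier eCART escalation logic described above.
# The percentile thresholds (89th, 95th) come from the abstract; all names
# and return strings are illustrative.

def ecart_triage(score_percentile: float) -> str:
    """Map an eCART score percentile to an escalation tier."""
    if score_percentile >= 95.0:
        return "high risk: physician assessment for ICU admission"
    if score_percentile >= 89.0:
        return "intermediate risk: vitals q2h + physician review of plan"
    return "routine monitoring"

print(ecart_triage(96.5))  # high-risk tier
print(ecart_triage(91.0))  # intermediate-risk tier
```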


Subjects
Early Warning Score, Heart Arrest, Adult, Heart Arrest/diagnosis, Heart Arrest/therapy, Hospital Mortality, Humans, Intensive Care Units, Machine Learning, Vital Signs
2.
Crit Care Med; 49(10): 1694-1705, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.
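Splitting the total delay into ordering and delivery components is a timestamp subtraction, after which each component can enter an adjusted logistic regression as in the study. A hedged sketch on synthetic data; the column names are assumptions and the study's covariate set is richer.

```python
# Sketch only: synthetic timestamps and a minimal adjustment set, not the
# study's schema or model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
t0 = pd.Timestamp("2017-01-01")
df = pd.DataFrame({"t_infection": t0 + pd.to_timedelta(rng.uniform(0, 24, n), unit="h")})
df["t_order"] = df["t_infection"] + pd.to_timedelta(rng.exponential(2, n), unit="h")
df["t_admin"] = df["t_order"] + pd.to_timedelta(rng.exponential(1, n), unit="h")
df["order_delay_h"] = (df["t_order"] - df["t_infection"]).dt.total_seconds() / 3600
df["delivery_delay_h"] = (df["t_admin"] - df["t_order"]).dt.total_seconds() / 3600
df["age"] = rng.normal(65, 15, n)

# Simulate mortality so the per-hour effects resemble the reported ORs.
lp = -3 + 0.04 * df["order_delay_h"] + 0.05 * df["delivery_delay_h"]
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-lp))).astype(int)

# Delays enter in hours, so exponentiated coefficients are per-hour odds ratios.
fit = smf.logit("died ~ order_delay_h + delivery_delay_h + age", data=df).fit(disp=0)
print(np.exp(fit.params))
```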


Subjects
Anti-Bacterial Agents/administration & dosage, Phenotype, Sepsis/genetics, Time-to-Treatment/statistics & numerical data, Aged, Aged, 80 and over, Anti-Bacterial Agents/therapeutic use, Emergency Service, Hospital/organization & administration, Emergency Service, Hospital/statistics & numerical data, Female, Hospitalization/statistics & numerical data, Humans, Illinois/epidemiology, Male, Middle Aged, Prospective Studies, Retrospective Studies, Sepsis/drug therapy, Sepsis/physiopathology, Time Factors
3.
Crit Care Med; 49(7): e673-e682, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
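Benchmarking each infection definition against chart-review labels reduces to computing sensitivity and specificity from a two-by-two table. A minimal sketch with made-up labels:

```python
# Toy data; 1 = infected. "gold" stands in for chart review, "flags" for any
# one of the criteria (Angus, Rhee, Sepsis-3) under evaluation.
def sens_spec(flags, gold):
    tp = sum(f and g for f, g in zip(flags, gold))
    fn = sum(not f and g for f, g in zip(flags, gold))
    tn = sum(not f and not g for f, g in zip(flags, gold))
    fp = sum(f and not g for f, g in zip(flags, gold))
    return tp / (tp + fn), tn / (tn + fp)

gold  = [1, 1, 1, 0, 0, 0, 0, 1]
flags = [1, 1, 0, 0, 1, 0, 0, 1]
sens, spec = sens_spec(flags, gold)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```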


Subjects
Data Accuracy, Electronic Health Records/standards, Infections/epidemiology, Information Storage and Retrieval/methods, Adult, Aged, Anti-Bacterial Agents/therapeutic use, Antibiotic Prophylaxis/statistics & numerical data, Blood Culture, Chicago/epidemiology, False Positive Reactions, Female, Humans, Infections/diagnosis, International Classification of Diseases, Male, Middle Aged, Organ Dysfunction Scores, Patient Admission/statistics & numerical data, Prevalence, Retrospective Studies, Sensitivity and Specificity, Sepsis/diagnosis
4.
Crit Care Med; 44(2): 368-74, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26771782

ABSTRACT

OBJECTIVE: Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. DESIGN: Observational cohort study. SETTING: Five hospitals, from November 2008 until January 2013. PATIENTS: Hospitalized ward patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning Score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). CONCLUSIONS: In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.
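The comparison in this abstract (a linear model versus a flexible learner on the same discrete-time windows) can be sketched in a few lines of scikit-learn. The synthetic data below stands in for the vitals/labs feature windows, and the deliberately nonlinear outcome illustrates why a random forest can pull ahead of plain logistic regression.

```python
# Minimal sketch, not the study's pipeline: each row represents one
# patient-time window; y = 1 if the composite event occurs in the next window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 8))                               # window features
y = (X[:, 0] + 0.8 * X[:, 1] ** 2 + rng.normal(size=6000) > 2).astype(int)

X_tr, y_tr, X_te, y_te = X[:3600], y[:3600], X[3600:], y[3600:]  # "by date" split
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```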


Subjects
Heart Arrest/mortality, Intensive Care Units/organization & administration, Machine Learning/statistics & numerical data, Models, Statistical, Age Factors, Cohort Studies, Diagnostic Techniques and Procedures, Humans, Logistic Models, Neural Networks, Computer, ROC Curve, Risk Assessment, Socioeconomic Factors, Support Vector Machine, Survival Analysis, Time Factors, Vital Signs
5.
Am J Respir Crit Care Med; 192(8): 958-64, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26158402

ABSTRACT

RATIONALE: Tools that screen inpatients for sepsis use the systemic inflammatory response syndrome (SIRS) criteria and organ dysfunctions, but most studies of these criteria were performed in intensive care unit or emergency room populations. OBJECTIVES: To determine the incidence and prognostic value of SIRS and organ dysfunctions in a multicenter dataset of hospitalized ward patients. METHODS: Hospitalized ward patients at five hospitals from November 2008 to January 2013 were included. SIRS and organ system dysfunctions were defined using 2001 International Consensus criteria. Patient characteristics and in-hospital mortality were compared among patients meeting two or more SIRS criteria and by the presence or absence of organ system dysfunction. MEASUREMENTS AND MAIN RESULTS: A total of 269,951 patients were included in the study, after excluding 48 patients with missing discharge status. Forty-seven percent (n = 125,841) of the included patients met two or more SIRS criteria at least once during their ward stay. On ward admission, 39,105 (14.5%) patients met two or more SIRS criteria, and patients presenting with SIRS had higher in-hospital mortality than those without SIRS (4.3% vs. 1.2%; P < 0.001). Fourteen percent of patients (n = 36,767) had at least one organ dysfunction at ward admission, and those presenting with organ dysfunction had increased mortality compared with those without organ dysfunction (5.3% vs. 1.1%; P < 0.001). CONCLUSIONS: Almost half of patients hospitalized on the wards developed SIRS at least once during their ward stay. Our findings suggest that screening ward patients using SIRS criteria for identifying those with sepsis would be impractical.
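The two-or-more SIRS screen used here is a simple rule over four vitals/labs. A sketch of the standard definition (the bandemia criterion for immature neutrophils is omitted for brevity, and the field names are illustrative):

```python
def sirs_count(temp_c, hr, rr, paco2_mmhg, wbc_k):
    """Count standard SIRS criteria (bandemia criterion omitted)."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,    # temperature
        hr > 90,                           # heart rate, beats/min
        rr > 20 or paco2_mmhg < 32,        # respiratory rate or PaCO2
        wbc_k > 12.0 or wbc_k < 4.0,       # WBC, x10^3 cells/uL
    ])

def meets_sirs(**vitals):
    return sirs_count(**vitals) >= 2       # the 2+ screen used in the study

print(meets_sirs(temp_c=38.6, hr=98, rr=18, paco2_mmhg=40, wbc_k=9.5))  # True
```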


Subjects
Multiple Organ Failure/epidemiology, Sepsis/diagnosis, Systemic Inflammatory Response Syndrome/epidemiology, Adult, Aged, Aged, 80 and over, Body Temperature, Databases, Factual, Female, Heart Rate, Hospital Mortality, Hospitalization, Humans, Incidence, Length of Stay, Leukocyte Count, Male, Mass Screening, Middle Aged, Multiple Organ Failure/diagnosis, Patients' Rooms, Platelet Count, Prognosis, Respiratory Rate, Systemic Inflammatory Response Syndrome/diagnosis
6.
Crit Care Med; 43(4): 816-22, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25559439

ABSTRACT

OBJECTIVES: Vital signs and composite scores, such as the Modified Early Warning Score, are used to identify high-risk ward patients and trigger rapid response teams. Although age-related vital sign changes are known to occur, little is known about the differences in vital signs between elderly and nonelderly patients prior to ward cardiac arrest. We aimed to compare the accuracy of vital signs for detecting cardiac arrest between elderly and nonelderly patients. DESIGN: Observational cohort study. SETTING: Five hospitals in the United States. PATIENTS: A total of 269,956 patient admissions to the wards with documented age, including 422 index ward cardiac arrests. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient characteristics and vital signs prior to cardiac arrest were compared between elderly (age, 65 yr or older) and nonelderly (age, <65 yr) patients. The area under the receiver operating characteristic curve for vital signs and the Modified Early Warning Score were also compared. Elderly patients had a higher cardiac arrest rate (2.2 vs 1.0 per 1,000 ward admissions; p<0.001) and in-hospital mortality (2.9% vs 0.7%; p<0.001) than nonelderly patients. Within 4 hours of cardiac arrest, elderly patients had significantly lower mean heart rate (88 vs 99 beats/min; p<0.001), diastolic blood pressure (60 vs 66 mm Hg; p=0.007), shock index (0.82 vs 0.93; p<0.001), and Modified Early Warning Score (2.6 vs 3.3; p<0.001) and higher pulse pressure index (0.45 vs 0.41; p<0.001) and temperature (36.4°C vs 36.3°C; p=0.047). The area under the receiver operating characteristic curves for all vital signs and the Modified Early Warning Score were higher for nonelderly patients than elderly patients (Modified Early Warning Score area under the receiver operating characteristic curve, 0.85 [95% CI, 0.82-0.88] vs 0.71 [95% CI, 0.68-0.75]; p<0.001). CONCLUSIONS: Vital signs more accurately detect cardiac arrest in nonelderly patients compared with elderly patients, which has important implications for how they are used for identifying critically ill patients. More accurate methods for risk stratification of elderly patients are necessary to decrease the occurrence of this devastating event.
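Two of the derived vitals compared above are simple ratios. The shock index is heart rate over systolic blood pressure; the pulse pressure index formula shown here is an assumption (pulse pressure as a fraction of systolic pressure) that is consistent with the magnitudes reported, e.g., 110/60 mm Hg gives 0.45.

```python
def shock_index(hr_bpm, sbp_mmhg):
    return hr_bpm / sbp_mmhg               # beats/min per mm Hg

def pulse_pressure_index(sbp_mmhg, dbp_mmhg):
    # Assumed formula: (SBP - DBP) / SBP; matches the values reported above.
    return (sbp_mmhg - dbp_mmhg) / sbp_mmhg

print(round(shock_index(88, 107), 2))            # ~0.82, elderly-like value
print(round(pulse_pressure_index(110, 60), 2))   # 0.45
```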


Subjects
Heart Arrest/physiopathology, Vital Signs, Age Factors, Aged, Blood Pressure, Cohort Studies, Female, Heart Rate, Humans, Male, Middle Aged, ROC Curve
7.
Am J Respir Crit Care Med; 190(6): 649-55, 2014 Sep 15.
Article in English | MEDLINE | ID: mdl-25089847

ABSTRACT

RATIONALE: Most ward risk scores were created using subjective opinion in individual hospitals and only use vital signs. OBJECTIVES: To develop and validate a risk score using commonly collected electronic health record data. METHODS: All patients hospitalized on the wards in five hospitals were included in this observational cohort study. Discrete-time survival analysis was used to predict the combined outcome of cardiac arrest (CA), intensive care unit (ICU) transfer, or death on the wards. Laboratory results, vital signs, and demographics were used as predictor variables. The model was developed in the first 60% of the data at each hospital and then validated in the remaining 40%. The final model was compared with the Modified Early Warning Score (MEWS) using the area under the receiver operating characteristic curve and the net reclassification index (NRI). MEASUREMENTS AND MAIN RESULTS: A total of 269,999 patient admissions were included, with 424 CAs, 13,188 ICU transfers, and 2,840 deaths occurring during the study period. The derived model was more accurate than the MEWS in the validation dataset for all outcomes (area under the receiver operating characteristic curve, 0.83 vs. 0.71 for CA; 0.75 vs. 0.68 for ICU transfer; 0.93 vs. 0.88 for death; and 0.77 vs. 0.70 for the combined outcome; P value < 0.01 for all comparisons). This accuracy improvement was seen across all hospitals. The NRI for the electronic Cardiac Arrest Risk Triage compared with the MEWS was 0.28 (0.18-0.38), with a positive NRI of 0.19 (0.09-0.29) and a negative NRI of 0.09 (0.09-0.09). CONCLUSIONS: We developed an accurate ward risk stratification tool using commonly collected electronic health record variables in a large multicenter dataset. Further study is needed to determine whether implementation in real-time would improve patient outcomes.
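The net reclassification index reported above decomposes into an event (positive) component and a non-event (negative) component; each is the net fraction of patients moving in the correct direction across risk categories when switching scores. A small sketch with illustrative categories:

```python
import numpy as np

def nri(old_cat, new_cat, event):
    """Return (total NRI, event NRI, non-event NRI) for categorical risks."""
    old, new, ev = map(np.asarray, (old_cat, new_cat, event))
    up, down = new > old, new < old
    nri_ev = up[ev == 1].mean() - down[ev == 1].mean()     # events should move up
    nri_ne = down[ev == 0].mean() - up[ev == 0].mean()     # non-events should move down
    return nri_ev + nri_ne, nri_ev, nri_ne

old = [0, 1, 1, 0, 2, 1]     # risk category under a MEWS-like score
new = [1, 1, 2, 0, 1, 0]     # risk category under an eCART-like score
ev  = [1, 0, 1, 0, 0, 0]     # 1 = outcome occurred
print(nri(old, new, ev))
```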


Subjects
Electronic Health Records, Heart Arrest/mortality, Inpatients/statistics & numerical data, Intensive Care Units/statistics & numerical data, Patient Transfer/statistics & numerical data, Risk Assessment/methods, Risk Assessment/standards, Adult, Aged, Aged, 80 and over, Cohort Studies, Dimensional Measurement Accuracy, Early Diagnosis, Female, Hospital Rapid Response Team/statistics & numerical data, Humans, Male, Middle Aged, Models, Statistical, Survival Analysis
8.
medRxiv; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562803

ABSTRACT

Rationale: Early detection of clinical deterioration using early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, only underwent retrospective internal validation, and were not tested in important patient subgroups. Objectives: To develop a gradient boosted machine model (eCARTv5) for identifying clinical deterioration and then validate it externally, test it prospectively, and evaluate it across patient subgroups. Methods: All adult patients hospitalized on the wards in seven hospitals from 2008-2022 were used to develop eCARTv5, with demographics, vital signs, clinician documentation, and laboratory values utilized to predict intensive care unit transfer or death in the next 24 hours. The model was externally validated retrospectively in 21 hospitals from 2009-2023 and prospectively in 10 hospitals from February to May 2023. eCARTv5 was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). Measurements and Main Results: The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 46,330 admissions. In retrospective validation, eCART had the highest AUROC (0.835; 95% CI, 0.834-0.835), followed by the NEWS (0.766; 95% CI, 0.766-0.767) and the MEWS (0.704; 95% CI, 0.703-0.704). eCART's performance remained high (AUROC ≥0.80) across a range of patient demographics and clinical conditions and during prospective validation. Conclusions: We developed eCARTv5, which accurately identifies early clinical deterioration in hospitalized ward patients. Our model performed better than the NEWS and MEWS retrospectively, prospectively, and across a range of subgroups.
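A gradient boosted classifier over tabular ward data, as used for eCARTv5, can be sketched with scikit-learn's histogram-based implementation. The real model's features, labeling horizon, and tuning are far richer; the data below is synthetic and only illustrates the 24-hour composite-outcome framing.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X = rng.normal(size=(20000, 12))     # stand-ins for vitals, labs, documentation
# Label: ICU transfer or death within the next 24 hours (synthetic rule).
y = (X[:, 0] * X[:, 1] + X[:, 2] + 0.5 * rng.normal(size=20000) > 1.5).astype(int)

gbm = HistGradientBoostingClassifier(random_state=7).fit(X[:15000], y[:15000])
auroc = roc_auc_score(y[15000:], gbm.predict_proba(X[15000:])[:, 1])
print(f"held-out AUROC: {auroc:.3f}")
```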

9.
medRxiv; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006-2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: For each patient, a trained reviewer either confirmed clinical deterioration or marked the event as a false alarm if no deterioration occurred. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.

10.
J Am Med Inform Assoc; 29(10): 1696-1704, 2022 09 12.
Article in English | MEDLINE | ID: mdl-35869954

ABSTRACT

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from nonchart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432 965 admissions, of which 2724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3000 chart-reviewed patients, semi-supervised and transfer learning models showed similar performance for model discrimination as baseline XGBoost, while transfer learning improved calibration.
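The silver/gold setup above can be approximated by self-training: fit on the small chart-reviewed ("gold") set, pseudo-label the large non-reviewed pool, and refit on the union. A hedged sketch with synthetic data; the paper's pipelines, including the transfer-learning variants, differ in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
def synth(n):                          # synthetic admissions and infection labels
    X = rng.normal(size=(n, 10))
    return X, (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_gold, y_gold = synth(800)            # "gold": chart-reviewed admissions
X_pool, _ = synth(10000)               # non-reviewed pool (silver-eligible)

base = LogisticRegression(max_iter=1000).fit(X_gold[:500], y_gold[:500])
pseudo = base.predict(X_pool)          # self-training pseudo-labels
combined = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_gold[:500], X_pool]), np.concatenate([y_gold[:500], pseudo]))
print(roc_auc_score(y_gold[500:], combined.predict_proba(X_gold[500:])[:, 1]))
```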


Subjects
Machine Learning, Sepsis, Humans, ROC Curve, Retrospective Studies, Sepsis/diagnosis
11.
JAMA Netw Open; 3(8): e2012892, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32780123

ABSTRACT

Importance: Acute kidney injury (AKI) is associated with increased morbidity and mortality in hospitalized patients. Current methods to identify patients at high risk of AKI are limited, and few prediction models have been externally validated. Objective: To internally and externally validate a machine learning risk score to detect AKI in hospitalized patients. Design, Setting, and Participants: This diagnostic study included 495 971 adult hospital admissions at the University of Chicago (UC) from 2008 to 2016 (n = 48 463), at Loyola University Medical Center (LUMC) from 2007 to 2017 (n = 200 613), and at NorthShore University Health System (NUS) from 2006 to 2016 (n = 246 895) with serum creatinine (SCr) measurements. Patients with an SCr concentration at admission greater than 3.0 mg/dL, with a prior diagnostic code for chronic kidney disease stage 4 or higher, or who received kidney replacement therapy within 48 hours of admission were excluded. A simplified version of a previously published gradient boosted machine AKI prediction algorithm was used; it was validated internally among patients at UC and externally among patients at NUS and LUMC. Main Outcomes and Measures: Prediction of Kidney Disease Improving Global Outcomes SCr-defined stage 2 AKI within a 48-hour interval was the primary outcome. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC). Results: The study included 495 971 adult admissions (mean [SD] age, 63 [18] years; 87 689 [17.7%] African American; and 266 866 [53.8%] women) across 3 health systems. The development of stage 2 or higher AKI occurred in 15 664 of 48 463 patients (3.4%) in the UC cohort, 5711 of 200 613 (2.8%) in the LUMC cohort, and 3499 of 246 895 (1.4%) in the NUS cohort. In the UC cohort, 332 patients (0.7%) required kidney replacement therapy compared with 672 patients (0.3%) in the LUMC cohort and 440 patients (0.2%) in the NUS cohort. The AUCs for predicting at least stage 2 AKI in the next 48 hours were 0.86 (95% CI, 0.86-0.86) in the UC cohort, 0.85 (95% CI, 0.84-0.85) in the LUMC cohort, and 0.86 (95% CI, 0.86-0.86) in the NUS cohort. The AUCs for receipt of kidney replacement therapy within 48 hours were 0.96 (95% CI, 0.96-0.96) in the UC cohort, 0.95 (95% CI, 0.94-0.95) in the LUMC cohort, and 0.95 (95% CI, 0.94-0.95) in the NUS cohort. In time-to-event analysis, a probability cutoff of at least 0.057 predicted the onset of stage 2 AKI a median (IQR) of 27 (6.5-93) hours before the eventual doubling in SCr concentrations in the UC cohort, 34.5 (19-85) hours in the NUS cohort, and 39 (19-108) hours in the LUMC cohort. Conclusions and Relevance: In this study, the machine learning algorithm demonstrated excellent discrimination in both internal and external validation, supporting its generalizability and potential as a clinical decision support tool to improve AKI detection and outcomes.
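The lead-time analysis described above amounts to finding the first prediction that crosses the 0.057 cutoff and measuring hours until the SCr doubling. A toy sketch with times in hours from admission; the numbers are illustrative.

```python
def alert_lead_time(pred_times_h, probs, event_time_h, cutoff=0.057):
    """Hours from the first above-cutoff prediction to the AKI event, else None."""
    for t, p in zip(pred_times_h, probs):
        if p >= cutoff:
            return event_time_h - t
    return None

# First crossing at hour 12; SCr doubling at hour 39 -> 27 h of warning,
# matching the UC cohort's reported median lead time.
print(alert_lead_time([0, 6, 12, 18], [0.01, 0.04, 0.08, 0.20], 39))
```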


Subjects
Acute Kidney Injury/diagnosis, Acute Kidney Injury/epidemiology, Machine Learning, Risk Assessment/methods, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Models, Statistical, ROC Curve, Retrospective Studies, Risk Factors
12.
JAMA Netw Open; 3(5): e205191, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32427324

ABSTRACT

Importance: Risk scores used in early warning systems exist for general inpatients and patients with suspected infection outside the intensive care unit (ICU), but their relative performance is incompletely characterized. Objective: To compare the performance of tools used to determine points-based risk scores among all hospitalized patients, including those with and without suspected infection, for identifying those at risk for death and/or ICU transfer. Design, Setting, and Participants: In a cohort design, a retrospective analysis of prospectively collected data was conducted in 21 California and 7 Illinois hospitals between 2006 and 2018 among adult inpatients outside the ICU using points-based scores from 5 commonly used tools: National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), Between the Flags (BTF), Quick Sequential Sepsis-Related Organ Failure Assessment (qSOFA), and Systemic Inflammatory Response Syndrome (SIRS). Data analysis was conducted from February 2019 to January 2020. Main Outcomes and Measures: Risk model discrimination was assessed in each state for predicting in-hospital mortality and the combined outcome of ICU transfer or mortality with area under the receiver operating characteristic curves (AUCs). Stratified analyses were also conducted based on suspected infection. Results: The study included 773 477 hospitalized patients in California (mean [SD] age, 65.1 [17.6] years; 416 605 women [53.9%]) and 713 786 hospitalized patients in Illinois (mean [SD] age, 61.3 [19.9] years; 384 830 women [53.9%]). The NEWS exhibited the highest discrimination for mortality (AUC, 0.87; 95% CI, 0.87-0.87 in California vs AUC, 0.86; 95% CI, 0.85-0.86 in Illinois), followed by the MEWS (AUC, 0.83; 95% CI, 0.83-0.84 in California vs AUC, 0.84; 95% CI, 0.84-0.85 in Illinois), qSOFA (AUC, 0.78; 95% CI, 0.78-0.79 in California vs AUC, 0.78; 95% CI, 0.77-0.78 in Illinois), SIRS (AUC, 0.76; 95% CI, 0.76-0.76 in California vs AUC, 0.76; 95% CI, 0.75-0.76 in Illinois), and BTF (AUC, 0.73; 95% CI, 0.73-0.73 in California vs AUC, 0.74; 95% CI, 0.73-0.74 in Illinois). At specific decision thresholds, the NEWS outperformed the SIRS and qSOFA at all 28 hospitals either by reducing the percentage of at-risk patients who need to be screened by 5% to 20% or increasing the percentage of adverse outcomes identified by 3% to 25%. Conclusions and Relevance: In all hospitalized patients evaluated in this study, including those meeting criteria for suspected infection, the NEWS appeared to display the highest discrimination. Our results suggest that, among commonly used points-based scoring systems, determining the NEWS for inpatient risk stratification could identify patients with and without infection at high risk of mortality.
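Points-based tools like those compared here are short rule sets over bedside observations. As one example, qSOFA in its standard form (field names illustrative):

```python
def qsofa(resp_rate, sbp_mmhg, gcs):
    """Quick SOFA: one point each for RR >= 22, SBP <= 100, GCS < 15."""
    return int(resp_rate >= 22) + int(sbp_mmhg <= 100) + int(gcs < 15)

score = qsofa(resp_rate=24, sbp_mmhg=96, gcs=15)
print(score, "-> positive screen" if score >= 2 else "-> negative screen")
```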


Subjects
Early Warning Score, Hospital Mortality, Hospitalization/statistics & numerical data, Infections/mortality, Intensive Care Units/statistics & numerical data, Patient Transfer/statistics & numerical data, Aged, California/epidemiology, Female, Humans, Illinois/epidemiology, Infections/diagnosis, Infections/epidemiology, Length of Stay/statistics & numerical data, Male, Middle Aged, Retrospective Studies, Risk Assessment, Risk Factors, Sensitivity and Specificity
13.
Harmful Algae; 81: 59-64, 2019 01.
Article in English | MEDLINE | ID: mdl-30638499

ABSTRACT

Toxic cyanobacterial harmful algal blooms (cyanoHABs) are one of the most significant threats to the security of Earth's surface freshwaters. In the United States, the Federal Water Pollution Control Act of 1972 (i.e., the Clean Water Act) requires that states report any waterbody that fails to meet applicable water quality standards. The problem is that for fresh waters impacted by cyanoHABs, no scientifically based framework exists for making this designation. This study describes the development of a data-based framework using the Ohio waters of western Lake Erie as an exemplar for large lakes impacted by cyanoHABs. To address this designation for Ohio's open waters, the Ohio Environmental Protection Agency (EPA) assembled a group of academic, state, and federal scientists to develop a framework that would determine the criteria for Ohio EPA to consider in deciding on a recreation use impairment designation due to cyanoHAB presence. Typically, such metrics are derived from on-lake monitoring programs, but for large, dynamic lakes such as Lake Erie, using criteria based on discrete samples is problematic. However, significant advances in remote sensing allow for the estimation of cyanoHAB biomass across an entire lake. Through multiple years of validation, we developed a framework to determine lake-specific criteria for designating a waterbody as impaired by cyanoHABs on an annual basis. While the criteria reported in this manuscript are specific to Ohio's open waters, the framework used to determine them can be applied to any large lake where long-term monitoring data and satellite imagery are available.


Subjects
Cyanobacteria, Harmful Algal Bloom, Lakes, Ohio, United States, Water Quality
14.
Carbohydr Polym; 182: 149-158, 2018 Feb 15.
Article in English | MEDLINE | ID: mdl-29279109

ABSTRACT

The efficacy of rifapentine, an oral antibiotic used to treat tuberculosis, may be reduced due to degradation at gastric pH and low solubility at intestinal pH. We hypothesized that delivery properties would be improved in vitro by incorporating rifapentine into pH-responsive amorphous solid dispersions (ASDs) with cellulose derivatives: hydroxypropylmethylcellulose acetate succinate (HPMCAS), cellulose acetate suberate (CASub), and 5-carboxypentyl hydroxypropyl cellulose (CHC). ASDs generally reduced rifapentine release at gastric pH, with CASub affording a >31-fold decrease in area under the curve (AUC) compared to rifapentine alone. Critically, reduced gastric dissolution was accompanied by reduced degradation to 3-formylrifamycin. Certain ASDs also enhanced apparent solubility and stabilization of supersaturated solutions at intestinal pH, with HPMCAS providing a nearly 4-fold increase in total AUC vs. rifapentine alone. These results suggest that rifapentine delivery via ASD with these cellulosic polymers may improve bioavailability in vivo.
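The dissolution comparisons above rely on area under the concentration-time curve, which is typically computed from sampled points with the trapezoidal rule. A sketch with illustrative numbers, not the paper's data (use `np.trapz` on NumPy < 2.0):

```python
import numpy as np

t_min = np.array([0, 15, 30, 60, 120])        # sampling times, minutes
conc = np.array([0.0, 0.7, 1.4, 1.8, 1.5])    # dissolved drug, mg/mL (toy values)
auc = np.trapezoid(conc, t_min)               # trapezoidal rule, mg·min/mL
print(f"AUC = {auc:.1f} mg·min/mL")
```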


Subjects
Antibiotics, Antitubercular/chemistry, Cellulose/chemistry, Drug Delivery Systems, Rifampin/analogs & derivatives, Drug Carriers/chemistry, Humans, Hydrogen-Ion Concentration, Methylcellulose/analogs & derivatives, Molecular Conformation, Rifampin/chemistry, Solubility
15.
J Hosp Med; 11(11): 757-762, 2016 11.
Article in English | MEDLINE | ID: mdl-27352032

ABSTRACT

BACKGROUND: Previous research investigating the impact of delayed intensive care unit (ICU) transfer on outcomes has utilized subjective criteria for defining critical illness. OBJECTIVE: To investigate the impact of delayed ICU transfer using the electronic Cardiac Arrest Risk Triage (eCART) score, a previously published early warning score, as an objective marker of critical illness. DESIGN: Observational cohort study. SETTING: Medical-surgical wards at 5 hospitals between November 2008 and January 2013. PATIENTS: Ward patients. INTERVENTION: None. MEASUREMENTS: eCART scores were calculated for all patients. The threshold with a specificity of 95% for ICU transfer (eCART ≥ 60) denoted critical illness. A logistic regression model adjusting for age, sex, and surgical status was used to calculate the association between time to ICU transfer from the first critical eCART value and in-hospital mortality. RESULTS: A total of 3,789 patients met the critical eCART threshold before ICU transfer, and the median time to ICU transfer was 5.4 hours. Delayed transfer (>6 hours) occurred in 46% of patients (n = 1,734) and was associated with increased mortality compared to patients transferred early (33.2% vs 24.5%, p < 0.001). Each 1-hour increase in delay was associated with an adjusted 3% increase in the odds of mortality (p < 0.001). In patients who survived to discharge, delayed transfer was associated with a longer hospital length of stay (median 13 vs 11 days, p < 0.001). CONCLUSIONS: Delayed ICU transfer is associated with increased hospital length of stay and mortality. Use of an evidence-based early warning score, such as eCART, could lead to more timely ICU transfer and fewer preventable deaths.
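Because the 3% figure above is a per-hour odds ratio, the effect compounds multiplicatively over a delay. A one-line interpretation aid, not the study's model:

```python
OR_PER_HOUR = 1.03  # adjusted per-hour odds ratio from the abstract
for hours in (1, 6, 12):
    print(f"{hours:>2} h delay -> odds of mortality x {OR_PER_HOUR ** hours:.2f}")
```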


Subjects
Critical Illness/mortality, Hospital Mortality, Intensive Care Units/organization & administration, Patient Transfer/organization & administration, Aged, Aged, 80 and over, Cohort Studies, Female, Heart Arrest/diagnosis, Humans, Length of Stay, Male, Middle Aged, Time Factors, Vital Signs/physiology
16.
Chest; 121(5): 1548-54, 2002 May.
Article in English | MEDLINE | ID: mdl-12006442

ABSTRACT

CONTEXT: Respiratory complications are frequent in patients with acute cervical spinal injury (CSI); however, the importance of respiratory complications experienced during the initial hospitalization following injury is unknown. OBJECTIVE: To determine whether respiratory complications experienced during the initial acute-care hospitalization in patients with acute traumatic CSI are more important determinants of length of stay (LOS) and total hospital costs than level of injury. DESIGN: A retrospective analysis of an inception cohort for the 5-year period from 1993 to 1997. SETTING: The Midwest Regional Spinal Cord Injury Care System, a model system for CSI, at Northwestern Memorial Hospital, a tertiary referral academic medical center. PATIENTS: Four hundred thirteen patients admitted with acute CSI and discharged alive. Patients with concurrent thoracic injuries were excluded. MAIN OUTCOME MEASURES: Initial acute-care LOS and hospital costs. RESULTS: Both mean LOS and hospital costs increased monotonically with the number of respiratory complications experienced (p < 0.001 between none and one complication and between one and two complications; p = 0.24 between two and three or more complications). A hierarchical regression analysis showed that four variables (use of mechanical ventilation, occurrence of pneumonia, need for surgery, and use of tracheostomy) explain nearly 60% of the variance in both LOS and hospital costs. Each of these variables, when considered independently, is a better predictor of hospital costs than level of injury. CONCLUSIONS: The number of respiratory complications experienced during the initial acute-care hospitalization for CSI is a more important determinant of LOS and hospital costs than level of injury.


Subjects
Cervical Vertebrae/injuries, Hospital Costs, Length of Stay/economics, Respiratory Insufficiency/economics, Respiratory Tract Infections/economics, Spinal Cord Injuries/complications, Acute Disease, Adult, Humans, Respiratory Insufficiency/etiology, Respiratory Insufficiency/therapy, Respiratory Tract Infections/etiology, Respiratory Tract Infections/therapy, Retrospective Studies, Spinal Cord Injuries/economics
17.
Magn Reson Chem; 44(10): 969-71, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16826553

ABSTRACT

The structure of an unexpected compound from the dehydration of an aldol addition product has been determined using 1-D and 2-D NMR techniques. This reaction is the last step in a new synthetic approach to the galanthan ring system. Complete 1H and 13C NMR assignments for two synthetic precursors are also reported.


Subjects
Ketones/chemistry, Water/chemistry, Carbon Isotopes/analysis, Crystallography, X-Ray, Hydrogen/analysis, Magnetic Resonance Spectroscopy, Molecular Structure
18.
Am J Phys Med Rehabil; 82(10): 803-14, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14508412

ABSTRACT

There are >200,000 persons living with a spinal cord injury in the United States, with approximately 10,000 new cases of traumatic injury per year. Advances in the care of these patients have significantly reduced acute and long-term mortality rates, although life expectancy remains decreased. This article will review the alterations in respiratory mechanics resulting from a spinal cord injury and will examine the contribution of respiratory complications to morbidity and mortality associated with various types of spinal cord injury.


Subjects
Respiratory Mechanics/physiology, Spinal Cord Injuries/physiopathology, Diaphragm/physiopathology, Humans, Lung Diseases/etiology, Lung Volume Measurements, Muscle, Skeletal/physiopathology, Spinal Cord Injuries/complications, Spinal Cord Injuries/mortality
19.
J Org Chem; 69(5): 1603-6, 2004 Mar 05.
Article in English | MEDLINE | ID: mdl-14987017

ABSTRACT

A five-step, atom-efficient synthesis of the Galanthan tetracyclic skeleton has been developed. The key step is an unusual intramolecular de Mayo reaction using an isocarbostyril substrate with a functionalized tether on nitrogen. The target molecule is produced in 35% overall yield from isocarbostyril.


Subjects
Alkaloids/chemical synthesis, Phenanthridines/chemical synthesis, Alkaloids/chemistry, Cyclization, Hydrogen Bonding, Magnetic Resonance Spectroscopy, Molecular Structure, Phenanthridines/chemistry, Photochemistry
20.
Am J Phys Med Rehabil; 82(3): 222-5, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12595774

ABSTRACT

Pregnancy imposes a load on the respiratory system that is usually easily assumed because of alterations in the thoracoabdominal architecture. It is presumed that the respiratory mechanical disadvantage of severe kyphoscoliosis and the muscle weakness of spinal muscular atrophy impede these adaptations sufficiently to preclude a successful gestation. We report the case of a successful pregnancy in a woman with spinal muscular atrophy, severe uncorrected scoliosis, and the lowest spirometric values reported in the literature without the use of ventilatory support. This patient demonstrates that women with severe kyphoscoliosis and a profound ventilatory limitation can carry a successful pregnancy well into the third trimester without requiring full ventilatory support.


Subjects
Kyphosis/complications, Muscular Atrophy, Spinal/complications, Pregnancy Complications, Pregnancy Outcome, Respiration Disorders/etiology, Scoliosis/complications, Adult, Female, Humans, Pregnancy, Respiration Disorders/therapy, Respiration, Artificial/methods, Respiratory Function Tests