Results 1-20 of 21
1.
Crit Care Med ; 50(9): 1339-1347, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores at or above the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause in-hospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of which 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191, intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.26% vs 0.49%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high- (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, a decrease in time to ICU transfer, and an increase in vital sign reassessment within 2 hours. CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced in-hospital mortality, likely driven by earlier and more frequent ICU transfer.
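To make the tiered protocol concrete, here is a minimal Python sketch of the threshold logic the abstract describes; the function name and tier labels are hypothetical, and the real eCART system is a full clinical product, not this toy.

```python
def ecart_tier(percentile: float) -> str:
    """Map an eCART score percentile to the study's workflow tiers
    (thresholds taken from the abstract)."""
    if percentile >= 95.0:
        return "high"          # physician assessment for ICU admission
    if percentile >= 89.0:
        return "intermediate"  # nurse-directed workflow: vitals q2h,
                               # physician contacted to review the plan
    return "average"           # routine care

# Example: a score at the 91st percentile triggers the nurse workflow.
assert ecart_tier(91.0) == "intermediate"
```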


Subject(s)
Early Warning Score, Heart Arrest, Adult, Heart Arrest/diagnosis, Heart Arrest/therapy, Hospital Mortality, Humans, Intensive Care Units, Machine Learning, Vital Signs
2.
Crit Care Med ; 49(7): e673-e682, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by the Angus (77%) and Rhee (52%) criteria, while the Rhee (97%) and Angus (90%) criteria were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
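As a sketch of how each criterion's accuracy is scored against manual chart review, the helper below computes sensitivity and specificity from boolean labels; the variable names are hypothetical and this is illustrative only, not the study's code.

```python
def sens_spec(gold: list[bool], flagged: list[bool]) -> tuple[float, float]:
    """Accuracy of an infection criterion (e.g., Sepsis-3, Angus, Rhee)
    against chart-review gold-standard labels."""
    tp = sum(g and f for g, f in zip(gold, flagged))
    fn = sum(g and not f for g, f in zip(gold, flagged))
    tn = sum(not g and not f for g, f in zip(gold, flagged))
    fp = sum(not g and f for g, f in zip(gold, flagged))
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)
```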


Subject(s)
Data Accuracy, Electronic Health Records/standards, Infections/epidemiology, Information Storage and Retrieval/methods, Adult, Aged, Anti-Bacterial Agents/therapeutic use, Antibiotic Prophylaxis/statistics & numerical data, Blood Culture, Chicago/epidemiology, False Positive Reactions, Female, Humans, Infections/diagnosis, International Classification of Diseases, Male, Middle Aged, Organ Dysfunction Scores, Patient Admission/statistics & numerical data, Prevalence, Retrospective Studies, Sensitivity and Specificity, Sepsis/diagnosis
3.
Crit Care Med ; 49(10): 1694-1705, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.
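A minimal sketch of the per-hour delay analysis, assuming hypothetical arrays of delay hours and outcomes; the study's actual models adjust for many more covariates than shown here.

```python
import numpy as np
import statsmodels.api as sm

def delay_mortality_or(order_delay_h, delivery_delay_h, died):
    """Logistic regression of in-hospital death on order and delivery
    delays (in hours); returns the odds ratio per additional hour."""
    X = sm.add_constant(np.column_stack([order_delay_h, delivery_delay_h]))
    fit = sm.Logit(np.asarray(died), X).fit(disp=0)
    return np.exp(fit.params[1:])  # [OR per hour of order delay,
                                   #  OR per hour of delivery delay]
```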


Subject(s)
Anti-Bacterial Agents/administration & dosage, Phenotype, Sepsis/genetics, Time-to-Treatment/statistics & numerical data, Aged, Aged, 80 and over, Anti-Bacterial Agents/therapeutic use, Emergency Service, Hospital/organization & administration, Emergency Service, Hospital/statistics & numerical data, Female, Hospitalization/statistics & numerical data, Humans, Illinois/epidemiology, Male, Middle Aged, Prospective Studies, Retrospective Studies, Sepsis/drug therapy, Sepsis/physiopathology, Time Factors
4.
Crit Care Med ; 44(2): 368-74, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26771782

ABSTRACT

OBJECTIVE: Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. DESIGN: Observational cohort study. SETTING: Five hospitals, from November 2008 until January 2013. PATIENTS: Hospitalized ward patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning Score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). CONCLUSIONS: In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.
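The discrete-time survival framework mentioned above expands each admission into fixed-length windows so that standard classifiers can be fit; below is a sketch under assumed column names (id, los_h, event_h), which are hypothetical and not from the paper.

```python
import pandas as pd

def person_period(adm: pd.DataFrame, window_h: int = 8) -> pd.DataFrame:
    """Expand admissions into discrete time windows; the outcome is
    flagged only in the window containing the event, so a classifier
    fit to these rows acts as a discrete-time survival model.
    Expects columns: id, los_h (length of stay, hours), event_h
    (hours from admission to event, NaN if no event)."""
    rows = []
    for r in adm.itertuples():
        for w in range(int(r.los_h // window_h) + 1):
            hit = (pd.notna(r.event_h)
                   and w * window_h <= r.event_h < (w + 1) * window_h)
            rows.append({"id": r.id, "window": w, "event": int(hit)})
    return pd.DataFrame(rows)
```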


Subject(s)
Heart Arrest/mortality, Intensive Care Units/organization & administration, Machine Learning/statistics & numerical data, Models, Statistical, Age Factors, Cohort Studies, Diagnostic Techniques and Procedures, Humans, Logistic Models, Neural Networks, Computer, ROC Curve, Risk Assessment, Socioeconomic Factors, Support Vector Machine, Survival Analysis, Time Factors, Vital Signs
5.
Am J Respir Crit Care Med ; 192(8): 958-64, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26158402

ABSTRACT

RATIONALE: Tools that screen inpatients for sepsis use the systemic inflammatory response syndrome (SIRS) criteria and organ dysfunctions, but most studies of these criteria were performed in intensive care unit or emergency room populations. OBJECTIVES: To determine the incidence and prognostic value of SIRS and organ dysfunctions in a multicenter dataset of hospitalized ward patients. METHODS: Hospitalized ward patients at five hospitals from November 2008 to January 2013 were included. SIRS and organ system dysfunctions were defined using 2001 International Consensus criteria. Patient characteristics and in-hospital mortality were compared among patients meeting two or more SIRS criteria and by the presence or absence of organ system dysfunction. MEASUREMENTS AND MAIN RESULTS: A total of 269,951 patients were included in the study, after excluding 48 patients with missing discharge status. Forty-seven percent (n = 125,841) of the included patients met two or more SIRS criteria at least once during their ward stay. On ward admission, 39,105 (14.5%) patients met two or more SIRS criteria, and patients presenting with SIRS had higher in-hospital mortality than those without SIRS (4.3% vs. 1.2%; P < 0.001). Fourteen percent of patients (n = 36,767) had at least one organ dysfunction at ward admission, and those presenting with organ dysfunction had increased mortality compared with those without organ dysfunction (5.3% vs. 1.1%; P < 0.001). CONCLUSIONS: Almost half of patients hospitalized on the wards developed SIRS at least once during their ward stay. Our findings suggest that screening ward patients using SIRS criteria for identifying those with sepsis would be impractical.
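The SIRS screen applied above reduces to four simple thresholds (standard consensus definitions; argument names are hypothetical):

```python
def sirs_count(temp_c, hr, rr, paco2_mmhg, wbc_k, bands_pct) -> int:
    """Count SIRS criteria for one set of observations; patients meeting
    two or more are 'SIRS positive' as in the abstract."""
    met = 0
    met += temp_c > 38.0 or temp_c < 36.0            # temperature
    met += hr > 90                                    # heart rate
    met += rr > 20 or paco2_mmhg < 32                 # respiration / PaCO2
    met += wbc_k > 12 or wbc_k < 4 or bands_pct > 10  # leukocytes / bands
    return met
```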


Subject(s)
Multiple Organ Failure/epidemiology, Sepsis/diagnosis, Systemic Inflammatory Response Syndrome/epidemiology, Adult, Aged, Aged, 80 and over, Body Temperature, Databases, Factual, Female, Heart Rate, Hospital Mortality, Hospitalization, Humans, Incidence, Length of Stay, Leukocyte Count, Male, Mass Screening, Middle Aged, Multiple Organ Failure/diagnosis, Patients' Rooms, Platelet Count, Prognosis, Respiratory Rate, Systemic Inflammatory Response Syndrome/diagnosis
6.
Crit Care Med ; 43(4): 816-22, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25559439

ABSTRACT

OBJECTIVES: Vital signs and composite scores, such as the Modified Early Warning Score, are used to identify high-risk ward patients and trigger rapid response teams. Although age-related vital sign changes are known to occur, little is known about the differences in vital signs between elderly and nonelderly patients prior to ward cardiac arrest. We aimed to compare the accuracy of vital signs for detecting cardiac arrest between elderly and nonelderly patients. DESIGN: Observational cohort study. SETTING: Five hospitals in the United States. PATIENTS: A total of 269,956 patient admissions to the wards with documented age, including 422 index ward cardiac arrests. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient characteristics and vital signs prior to cardiac arrest were compared between elderly (age, 65 yr or older) and nonelderly (age, <65 yr) patients. The area under the receiver operating characteristic curve for vital signs and the Modified Early Warning Score were also compared. Elderly patients had a higher cardiac arrest rate (2.2 vs 1.0 per 1,000 ward admissions; p < 0.001) and in-hospital mortality (2.9% vs 0.7%; p < 0.001) than nonelderly patients. Within 4 hours of cardiac arrest, elderly patients had significantly lower mean heart rate (88 vs 99 beats/min; p < 0.001), diastolic blood pressure (60 vs 66 mm Hg; p = 0.007), shock index (0.82 vs 0.93; p < 0.001), and Modified Early Warning Score (2.6 vs 3.3; p < 0.001) and higher pulse pressure index (0.45 vs 0.41; p < 0.001) and temperature (36.4°C vs 36.3°C; p = 0.047). The area under the receiver operating characteristic curves for all vital signs and the Modified Early Warning Score were higher for nonelderly patients than elderly patients (Modified Early Warning Score area under the receiver operating characteristic curve 0.85 [95% CI, 0.82-0.88] vs 0.71 [95% CI, 0.68-0.75]; p < 0.001). CONCLUSIONS: Vital signs more accurately detect cardiac arrest in nonelderly patients compared with elderly patients, which has important implications for how they are used for identifying critically ill patients. More accurate methods for risk stratification of elderly patients are necessary to decrease the occurrence of this devastating event.
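The two derived indices reported above are simple ratios; this sketch assumes the standard definitions (the abstract itself does not define them):

```python
def shock_index(hr: float, sbp: float) -> float:
    """Shock index: heart rate divided by systolic blood pressure."""
    return hr / sbp

def pulse_pressure_index(sbp: float, dbp: float) -> float:
    """Pulse pressure index: (SBP - DBP) / SBP."""
    return (sbp - dbp) / sbp

# e.g., HR 88 with BP 107/60 gives a shock index of ~0.82 and a pulse
# pressure index of ~0.44, close to the elderly means reported above.
```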


Subject(s)
Heart Arrest/physiopathology, Vital Signs, Age Factors, Aged, Blood Pressure, Cohort Studies, Female, Heart Rate, Humans, Male, Middle Aged, ROC Curve
7.
Am J Respir Crit Care Med ; 190(6): 649-55, 2014 Sep 15.
Article in English | MEDLINE | ID: mdl-25089847

ABSTRACT

RATIONALE: Most ward risk scores were created using subjective opinion in individual hospitals and only use vital signs. OBJECTIVES: To develop and validate a risk score using commonly collected electronic health record data. METHODS: All patients hospitalized on the wards in five hospitals were included in this observational cohort study. Discrete-time survival analysis was used to predict the combined outcome of cardiac arrest (CA), intensive care unit (ICU) transfer, or death on the wards. Laboratory results, vital signs, and demographics were used as predictor variables. The model was developed in the first 60% of the data at each hospital and then validated in the remaining 40%. The final model was compared with the Modified Early Warning Score (MEWS) using the area under the receiver operating characteristic curve and the net reclassification index (NRI). MEASUREMENTS AND MAIN RESULTS: A total of 269,999 patient admissions were included, with 424 CAs, 13,188 ICU transfers, and 2,840 deaths occurring during the study period. The derived model was more accurate than the MEWS in the validation dataset for all outcomes (area under the receiver operating characteristic curve, 0.83 vs. 0.71 for CA; 0.75 vs. 0.68 for ICU transfer; 0.93 vs. 0.88 for death; and 0.77 vs. 0.70 for the combined outcome; P value < 0.01 for all comparisons). This accuracy improvement was seen across all hospitals. The NRI for the electronic Cardiac Arrest Risk Triage compared with the MEWS was 0.28 (0.18-0.38), with a positive NRI of 0.19 (0.09-0.29) and a negative NRI of 0.09 (0.09-0.09). CONCLUSIONS: We developed an accurate ward risk stratification tool using commonly collected electronic health record variables in a large multicenter dataset. Further study is needed to determine whether implementation in real-time would improve patient outcomes.
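The net reclassification index (NRI) reported above summarizes how the new score re-sorts patients relative to the MEWS; a sketch of the standard two-category computation, with hypothetical argument names:

```python
def nri(up_events, down_events, n_events,
        up_nonevents, down_nonevents, n_nonevents):
    """Two-category NRI: events should move up in risk under the new
    model and non-events should move down. Returns (total NRI,
    event NRI ['positive NRI'], non-event NRI ['negative NRI'])."""
    event_nri = (up_events - down_events) / n_events
    nonevent_nri = (down_nonevents - up_nonevents) / n_nonevents
    return event_nri + nonevent_nri, event_nri, nonevent_nri
```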


Subject(s)
Electronic Health Records, Heart Arrest/mortality, Inpatients/statistics & numerical data, Intensive Care Units/statistics & numerical data, Patient Transfer/statistics & numerical data, Risk Assessment/methods, Risk Assessment/standards, Adult, Aged, Aged, 80 and over, Cohort Studies, Dimensional Measurement Accuracy, Early Diagnosis, Female, Hospital Rapid Response Team/statistics & numerical data, Humans, Male, Middle Aged, Models, Statistical, Survival Analysis
8.
medRxiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562803

ABSTRACT

Rationale: Early detection of clinical deterioration using early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, only underwent retrospective internal validation, and were not tested in important patient subgroups. Objectives: To develop a gradient boosted machine model (eCARTv5) for identifying clinical deterioration and then validate externally, test prospectively, and evaluate across patient subgroups. Methods: All adult patients hospitalized on the wards in seven hospitals from 2008-2022 were used to develop eCARTv5, with demographics, vital signs, clinician documentation, and laboratory values utilized to predict intensive care unit transfer or death in the next 24 hours. The model was externally validated retrospectively in 21 hospitals from 2009-2023 and prospectively in 10 hospitals from February to May 2023. eCARTv5 was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). Measurements and Main Results: The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 46,330 admissions. In retrospective validation, eCART had the highest AUROC (0.835; 95% CI, 0.834-0.835), followed by NEWS (0.766; 95% CI, 0.766-0.767) and MEWS (0.704; 95% CI, 0.703-0.704). eCART's performance remained high (AUROC ≥ 0.80) across a range of patient demographics, clinical conditions, and during prospective validation. Conclusions: We developed eCARTv5, which accurately identifies early clinical deterioration in hospitalized ward patients. Our model performed better than the NEWS and MEWS retrospectively, prospectively, and across a range of subgroups.
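A hedged sketch of the general modeling recipe (gradient boosted classifier scored by AUROC) on synthetic stand-in data; this is illustrative only, and the features, labels, and hyperparameters here are assumptions, not eCARTv5's.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the feature matrix (vitals, labs, documentation)
# and the 24-hour ICU-transfer-or-death label.
rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 25))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10_000) > 2).astype(int)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:8000], y[:8000])                  # temporal-style split
print(roc_auc_score(y[8000:], model.predict_proba(X[8000:])[:, 1]))
```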

9.
Crit Care Explor ; 6(7): e1116, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39028867

ABSTRACT

BACKGROUND AND OBJECTIVE: To develop the COVid Veteran (COVet) score for clinical deterioration in Veterans hospitalized with COVID-19 and further validate this model in both Veteran and non-Veteran samples. No such score has been derived and validated while incorporating a Veteran sample. DERIVATION COHORT: Adults (age ≥ 18 yr) hospitalized outside the ICU with a diagnosis of COVID-19 at Veterans Health Administration (VHA) hospitals (n = 80 hospitals) were used for model development. VALIDATION COHORT: External validation occurred in a VHA cohort of 34 hospitals and in six non-Veteran health systems (n = 21 hospitals) between 2020 and 2023. PREDICTION MODEL: eXtreme Gradient Boosting machine learning methods were used, and performance was assessed using the area under the receiver operating characteristic curve and compared with the National Early Warning Score (NEWS). The primary outcome was transfer to the ICU or death within 24 hours of each new variable observation. Model predictor variables included demographics, vital signs, structured flowsheet data, and laboratory values. RESULTS: A total of 96,908 admissions occurred during the study period, of which 59,897 were in the Veteran sample and 37,011 were in the non-Veteran sample. During external validation in the Veteran sample, the model demonstrated excellent discrimination, with an area under the receiver operating characteristic curve of 0.88. This was significantly higher than NEWS (0.79; p < 0.01). In the non-Veteran sample, the model also demonstrated excellent discrimination (0.86 vs. 0.79 for NEWS; p < 0.01). The top three variables of importance were eosinophil percentage, mean oxygen saturation in the prior 24-hour period, and worst mental status in the prior 24-hour period. CONCLUSIONS: We used machine learning methods to develop and validate a highly accurate early warning score in both Veterans and non-Veterans hospitalized with COVID-19. The model could lead to earlier identification and therapy, which may improve outcomes.
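Reporting the "top variables of importance" typically means ranking the fitted model's importance scores; a sketch assuming a fitted XGBoost-style classifier and a hypothetical feature_names list (the paper's exact importance method is not stated in the abstract):

```python
import numpy as np

def top_predictors(model, feature_names, k=3):
    """Rank model inputs by importance score, as used to report drivers
    such as eosinophil %, oxygen saturation, and mental status."""
    order = np.argsort(model.feature_importances_)[::-1][:k]
    return [(feature_names[i], float(model.feature_importances_[i]))
            for i in order]
```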


Subject(s)
COVID-19, Machine Learning, Veterans, Humans, COVID-19/diagnosis, COVID-19/epidemiology, Male, Female, Middle Aged, Veterans/statistics & numerical data, Aged, Risk Assessment/methods, United States/epidemiology, Hospitalization/statistics & numerical data, Adult, Intensive Care Units, ROC Curve, Cohort Studies
10.
medRxiv ; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006-2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred for each patient. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%), and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%), and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.

11.
J Am Med Inform Assoc ; 29(10): 1696-1704, 2022 09 12.
Article in English | MEDLINE | ID: mdl-35869954

ABSTRACT

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from nonchart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432 965 admissions, of which 2724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3000 chart-reviewed patients, semi-supervised and transfer learning models showed similar performance for model discrimination as baseline XGBoost, while transfer learning improved calibration.
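The abstract's "self-learning gradient boosted machine" is a self-training loop: fit on gold labels, pseudo-label confident silver-standard rows, and refit. A minimal sketch under assumed inputs (the study's exact pipeline, thresholds, and rounds are not specified in the abstract):

```python
import numpy as np
from xgboost import XGBClassifier

def self_train(X_gold, y_gold, X_silver, threshold=0.90, rounds=3):
    """Self-training: start from chart-reviewed ('gold') labels, then
    iteratively add confidently pseudo-labeled silver-standard rows."""
    X, y = np.asarray(X_gold), np.asarray(y_gold)
    X_silver = np.asarray(X_silver)
    clf = XGBClassifier(n_estimators=200, max_depth=4)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(X_silver) == 0:
            break
        p = clf.predict_proba(X_silver)[:, 1]
        keep = (p >= threshold) | (p <= 1 - threshold)  # confident rows
        if not keep.any():
            break
        X = np.vstack([X, X_silver[keep]])
        y = np.concatenate([y, (p[keep] >= 0.5).astype(int)])
        X_silver = X_silver[~keep]
    return clf
```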


Subject(s)
Machine Learning, Sepsis, Humans, ROC Curve, Retrospective Studies, Sepsis/diagnosis
12.
JAMA Netw Open ; 3(8): e2012892, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32780123

ABSTRACT

Importance: Acute kidney injury (AKI) is associated with increased morbidity and mortality in hospitalized patients. Current methods to identify patients at high risk of AKI are limited, and few prediction models have been externally validated. Objective: To internally and externally validate a machine learning risk score to detect AKI in hospitalized patients. Design, Setting, and Participants: This diagnostic study included 495 971 adult hospital admissions at the University of Chicago (UC) from 2008 to 2016 (n = 48 463), at Loyola University Medical Center (LUMC) from 2007 to 2017 (n = 200 613), and at NorthShore University Health System (NUS) from 2006 to 2016 (n = 246 895) with serum creatinine (SCr) measurements. Patients with an SCr concentration at admission greater than 3.0 mg/dL, with a prior diagnostic code for chronic kidney disease stage 4 or higher, or who received kidney replacement therapy within 48 hours of admission were excluded. A simplified version of a previously published gradient boosted machine AKI prediction algorithm was used; it was validated internally among patients at UC and externally among patients at NUS and LUMC. Main Outcomes and Measures: Prediction of Kidney Disease Improving Global Outcomes SCr-defined stage 2 AKI within a 48-hour interval was the primary outcome. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC). Results: The study included 495 971 adult admissions (mean [SD] age, 63 [18] years; 87 689 [17.7%] African American; and 266 866 [53.8%] women) across 3 health systems. The development of stage 2 or higher AKI occurred in 15 664 of 48 463 patients (3.4%) in the UC cohort, 5711 of 200 613 (2.8%) in the LUMC cohort, and 3499 of 246 895 (1.4%) in the NUS cohort. In the UC cohort, 332 patients (0.7%) required kidney replacement therapy compared with 672 patients (0.3%) in the LUMC cohort and 440 patients (0.2%) in the NUS cohort. The AUCs for predicting at least stage 2 AKI in the next 48 hours were 0.86 (95% CI, 0.86-0.86) in the UC cohort, 0.85 (95% CI, 0.84-0.85) in the LUMC cohort, and 0.86 (95% CI, 0.86-0.86) in the NUS cohort. The AUCs for receipt of kidney replacement therapy within 48 hours were 0.96 (95% CI, 0.96-0.96) in the UC cohort, 0.95 (95% CI, 0.94-0.95) in the LUMC cohort, and 0.95 (95% CI, 0.94-0.95) in the NUS cohort. In time-to-event analysis, a probability cutoff of at least 0.057 predicted the onset of stage 2 AKI a median (IQR) of 27 (6.5-93) hours before the eventual doubling in SCr concentrations in the UC cohort, 34.5 (19-85) hours in the NUS cohort, and 39 (19-108) hours in the LUMC cohort. Conclusions and Relevance: In this study, the machine learning algorithm demonstrated excellent discrimination in both internal and external validation, supporting its generalizability and potential as a clinical decision support tool to improve AKI detection and outcomes.
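The reported lead times come from applying a probability cutoff (≥ 0.057 in the study) to serial predictions and measuring how far ahead of the eventual SCr doubling the first alert fires; a sketch with hypothetical per-admission inputs:

```python
import numpy as np

def lead_time_hours(times_h, probs, event_time_h, cutoff=0.057):
    """Hours of advance warning: time from the first prediction at or
    above the cutoff to the event (e.g., stage 2 AKI / SCr doubling).
    times_h and probs are one admission's prediction times (ascending)
    and risk scores; returns None if the score never alerts."""
    above = np.asarray(probs) >= cutoff
    if not above.any():
        return None
    first_alert = np.asarray(times_h)[above][0]
    return event_time_h - first_alert
```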


Subject(s)
Acute Kidney Injury/diagnosis, Acute Kidney Injury/epidemiology, Machine Learning, Risk Assessment/methods, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Models, Statistical, ROC Curve, Retrospective Studies, Risk Factors
13.
JAMA Netw Open ; 3(5): e205191, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32427324

ABSTRACT

Importance: Risk scores used in early warning systems exist for general inpatients and patients with suspected infection outside the intensive care unit (ICU), but their relative performance is incompletely characterized. Objective: To compare the performance of tools used to determine points-based risk scores among all hospitalized patients, including those with and without suspected infection, for identifying those at risk for death and/or ICU transfer. Design, Setting, and Participants: In a cohort design, a retrospective analysis of prospectively collected data was conducted in 21 California and 7 Illinois hospitals between 2006 and 2018 among adult inpatients outside the ICU using points-based scores from 5 commonly used tools: National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), Between the Flags (BTF), Quick Sequential Sepsis-Related Organ Failure Assessment (qSOFA), and Systemic Inflammatory Response Syndrome (SIRS). Data analysis was conducted from February 2019 to January 2020. Main Outcomes and Measures: Risk model discrimination was assessed in each state for predicting in-hospital mortality and the combined outcome of ICU transfer or mortality with area under the receiver operating characteristic curves (AUCs). Stratified analyses were also conducted based on suspected infection. Results: The study included 773 477 hospitalized patients in California (mean [SD] age, 65.1 [17.6] years; 416 605 women [53.9%]) and 713 786 hospitalized patients in Illinois (mean [SD] age, 61.3 [19.9] years; 384 830 women [53.9%]). The NEWS exhibited the highest discrimination for mortality (AUC, 0.87; 95% CI, 0.87-0.87 in California vs AUC, 0.86; 95% CI, 0.85-0.86 in Illinois), followed by the MEWS (AUC, 0.83; 95% CI, 0.83-0.84 in California vs AUC, 0.84; 95% CI, 0.84-0.85 in Illinois), qSOFA (AUC, 0.78; 95% CI, 0.78-0.79 in California vs AUC, 0.78; 95% CI, 0.77-0.78 in Illinois), SIRS (AUC, 0.76; 95% CI, 0.76-0.76 in California vs AUC, 0.76; 95% CI, 0.75-0.76 in Illinois), and BTF (AUC, 0.73; 95% CI, 0.73-0.73 in California vs AUC, 0.74; 95% CI, 0.73-0.74 in Illinois). At specific decision thresholds, the NEWS outperformed the SIRS and qSOFA at all 28 hospitals either by reducing the percentage of at-risk patients who need to be screened by 5% to 20% or increasing the percentage of adverse outcomes identified by 3% to 25%. Conclusions and Relevance: In all hospitalized patients evaluated in this study, including those meeting criteria for suspected infection, the NEWS appeared to display the highest discrimination. Our results suggest that, among commonly used points-based scoring systems, determining the NEWS for inpatient risk stratification could identify patients with and without infection at high risk of mortality.
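To show what a "points-based" score means in practice, here is qSOFA, the simplest of the five compared tools, using its standard published thresholds:

```python
def qsofa(rr: float, sbp: float, gcs: int) -> int:
    """qSOFA: one point each for respiratory rate >= 22/min, systolic
    BP <= 100 mm Hg, and altered mentation (Glasgow Coma Scale < 15)."""
    return int(rr >= 22) + int(sbp <= 100) + int(gcs < 15)

# Scores of 2-3 flag elevated risk in patients with suspected infection.
assert qsofa(rr=24, sbp=95, gcs=15) == 2
```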


Subject(s)
Early Warning Score, Hospital Mortality, Hospitalization/statistics & numerical data, Infections/mortality, Intensive Care Units/statistics & numerical data, Patient Transfer/statistics & numerical data, Aged, California/epidemiology, Female, Humans, Illinois/epidemiology, Infections/diagnosis, Infections/epidemiology, Length of Stay/statistics & numerical data, Male, Middle Aged, Retrospective Studies, Risk Assessment, Risk Factors, Sensitivity and Specificity
14.
Harmful Algae ; 81: 59-64, 2019 01.
Article in English | MEDLINE | ID: mdl-30638499

ABSTRACT

Toxic cyanobacterial harmful algal blooms (cyanoHABs) are one of the most significant threats to the security of Earth's surface freshwaters. In the United States, the Federal Water Pollution Control Act of 1972 (i.e., the Clean Water Act) requires that states report any waterbody that fails to meet applicable water quality standards. The problem is that for fresh waters impacted by cyanoHABs, no scientifically-based framework exists for making this designation. This study describes the development of a data-based framework using the Ohio waters of western Lake Erie as an exemplar for large lakes impacted by cyanoHABs. To address this designation for Ohio's open waters, the Ohio Environmental Protection Agency (EPA) assembled a group of academic, state and federal scientists to develop a framework that would determine the criteria for Ohio EPA to consider in deciding on a recreation use impairment designation due to cyanoHAB presence. Typically, the metrics are derived from on-lake monitoring programs, but for large, dynamic lakes such as Lake Erie, using criteria based on discrete samples is problematic. However, significant advances in remote sensing allow for the estimation of cyanoHAB biomass of an entire lake. Through multiple years of validation, we developed a framework to determine lake-specific criteria for designating a waterbody as impaired by cyanoHABs on an annual basis. While the criteria reported in this manuscript are specific to Ohio's open waters, the framework used to determine them can be applied to any large lake where long-term monitoring data and satellite imagery are available.


Subject(s)
Cyanobacteria, Harmful Algal Bloom, Lakes, Ohio, United States, Water Quality
15.
Carbohydr Polym ; 182: 149-158, 2018 Feb 15.
Article in English | MEDLINE | ID: mdl-29279109

ABSTRACT

The efficacy of rifapentine, an oral antibiotic used to treat tuberculosis, may be reduced due to degradation at gastric pH and low solubility at intestinal pH. We hypothesized that delivery properties would be improved in vitro by incorporating rifapentine into pH-responsive amorphous solid dispersions (ASDs) with cellulose derivatives including: hydroxypropylmethylcellulose acetate succinate (HPMCAS), cellulose acetate suberate (CASub), and 5-carboxypentyl hydroxypropyl cellulose (CHC). ASDs generally reduced rifapentine release at gastric pH, with CASub affording >31-fold decrease in area under the curve (AUC) compared to rifapentine alone. Critically, reduced gastric dissolution was accompanied by reduced degradation to 3-formylrifamycin. Certain ASDs also enhanced apparent solubility and stabilization of supersaturated solutions at intestinal pH, with HPMCAS providing nearly 4-fold increase in total AUC vs. rifapentine alone. These results suggest that rifapentine delivery via ASD with these cellulosic polymers may improve bioavailability in vivo.
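The release comparisons above rest on area under the concentration-time curve; a minimal sketch of a trapezoidal AUC on a hypothetical dissolution profile (the times and concentrations below are invented, not the paper's data):

```python
import numpy as np

# Hypothetical dissolution profile: sampling times (min) and measured
# drug concentrations (ug/mL).
t = np.array([0.0, 15, 30, 60, 120, 180])
c = np.array([0.0, 12, 20, 26, 24, 22])

# Trapezoidal rule: sum of mean segment heights times segment widths.
auc = float((((c[1:] + c[:-1]) / 2) * np.diff(t)).sum())
print(auc)  # in concentration x time units
```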


Subject(s)
Antibiotics, Antitubercular/chemistry, Cellulose/chemistry, Drug Delivery Systems, Rifampin/analogs & derivatives, Drug Carriers/chemistry, Humans, Hydrogen-Ion Concentration, Methylcellulose/analogs & derivatives, Molecular Conformation, Rifampin/chemistry, Solubility
16.
J Hosp Med ; 11(11): 757-762, 2016 11.
Article in English | MEDLINE | ID: mdl-27352032

ABSTRACT

BACKGROUND: Previous research investigating the impact of delayed intensive care unit (ICU) transfer on outcomes has utilized subjective criteria for defining critical illness. OBJECTIVE: To investigate the impact of delayed ICU transfer using the electronic Cardiac Arrest Risk Triage (eCART) score, a previously published early warning score, as an objective marker of critical illness. DESIGN: Observational cohort study. SETTING: Medical-surgical wards at 5 hospitals between November 2008 and January 2013. PATIENTS: Ward patients. INTERVENTION: None. MEASUREMENTS: eCART scores were calculated for all patients. The threshold with a specificity of 95% for ICU transfer (eCART ≥ 60) denoted critical illness. A logistic regression model adjusting for age, sex, and surgical status was used to calculate the association between time to ICU transfer from first critical eCART value and in-hospital mortality. RESULTS: A total of 3789 patients met the critical eCART threshold before ICU transfer, and the median time to ICU transfer was 5.4 hours. Delayed transfer (>6 hours) occurred in 46% of patients (n = 1734) and was associated with increased mortality compared to patients transferred early (33.2% vs 24.5%, P < 0.001). Each 1-hour increase in delay was associated with an adjusted 3% increase in odds of mortality (P < 0.001). In patients who survived to discharge, delayed transfer was associated with longer hospital length of stay (median 13 vs 11 days, P < 0.001). CONCLUSIONS: Delayed ICU transfer is associated with increased hospital length of stay and mortality. Use of an evidence-based early warning score, such as eCART, could lead to timely ICU transfer and reduced preventable death. Journal of Hospital Medicine 2016;11:757-762. © 2016 Society of Hospital Medicine.


Subject(s)
Critical Illness/mortality, Hospital Mortality, Intensive Care Units/organization & administration, Patient Transfer/organization & administration, Aged, Aged, 80 and over, Cohort Studies, Female, Heart Arrest/diagnosis, Humans, Length of Stay, Male, Middle Aged, Time Factors, Vital Signs/physiology
17.
Chest ; 121(5): 1548-54, 2002 May.
Article in English | MEDLINE | ID: mdl-12006442

ABSTRACT

CONTEXT: Respiratory complications are frequent in patients with acute cervical spinal injury (CSI); however, the importance of respiratory complications experienced during the initial hospitalization following injury is unknown. OBJECTIVE: To determine if respiratory complications experienced during the initial acute-care hospitalization in patients with acute traumatic cervical spinal injury (CSI) are more important determinants of the length of stay (LOS) and total hospital costs than level of injury. DESIGN: A retrospective analysis of an inception cohort for the 5-year period from 1993 to 1997. SETTING: The Midwest Regional Spinal Cord Injury Care System, a model system for CSI, at Northwestern Memorial Hospital, a tertiary referral academic medical center. PATIENTS: Four hundred thirteen patients admitted with acute CSI and discharged alive. Patients with concurrent thoracic injuries were excluded. MAIN OUTCOME MEASURES: Initial acute-care LOS and hospital costs. RESULTS: Both mean LOS and hospital costs increased monotonically with the number of respiratory complications experienced (p < 0.001 between none and one complication, and between one and two complications; p = 0.24 between two and three or more complications). A hierarchical regression analysis showed that four variables (use of mechanical ventilation, occurrence of pneumonia, need for surgery, and use of tracheostomy) explain nearly 60% of the variance in both LOS and hospital costs. Each of these variables, when considered independently, is a better predictor of hospital costs than level of injury. CONCLUSIONS: The number of respiratory complications experienced during the initial acute-care hospitalization for CSI is a more important determinant of LOS and hospital costs than level of injury.
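Hierarchical regression here means entering predictor blocks sequentially and tracking the cumulative R²; a sketch under assumed inputs (block contents and ordering are hypothetical, not taken from the paper):

```python
import numpy as np
import statsmodels.api as sm

def incremental_r2(y, blocks):
    """Sequential (hierarchical) regression: add predictor blocks one
    at a time and record cumulative R^2, mirroring an analysis that
    attributes variance to groups of variables.
    blocks: list of 2-D arrays, each (n_obs x n_predictors)."""
    out, X = [], None
    for block in blocks:
        X = block if X is None else np.hstack([X, block])
        out.append(sm.OLS(y, sm.add_constant(X)).fit().rsquared)
    return out
```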


Subject(s)
Cervical Vertebrae/injuries, Hospital Costs, Length of Stay/economics, Respiratory Insufficiency/economics, Respiratory Tract Infections/economics, Spinal Cord Injuries/complications, Acute Disease, Adult, Humans, Respiratory Insufficiency/etiology, Respiratory Insufficiency/therapy, Respiratory Tract Infections/etiology, Respiratory Tract Infections/therapy, Retrospective Studies, Spinal Cord Injuries/economics
18.
Magn Reson Chem ; 44(10): 969-71, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16826553

ABSTRACT

The structure of an unexpected compound from the dehydration of an aldol addition product has been determined using 1-D and 2-D NMR techniques. This reaction is the last step in a new synthetic approach to the galanthan ring system. Complete ¹H and ¹³C NMR assignments for two synthetic precursors are also reported.


Subject(s)
Ketones/chemistry, Water/chemistry, Carbon Isotopes/analysis, Crystallography, X-Ray, Hydrogen/analysis, Magnetic Resonance Spectroscopy, Molecular Structure
19.
Am J Phys Med Rehabil ; 82(10): 803-14, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14508412

ABSTRACT

There are >200,000 persons living with a spinal cord injury in the United States, with approximately 10,000 new cases of traumatic injury per year. Advances in the care of these patients have significantly reduced acute and long-term mortality rates, although life expectancy remains decreased. This article will review the alterations in respiratory mechanics resulting from a spinal cord injury and will examine the contribution of respiratory complications to morbidity and mortality associated with various types of spinal cord injury.


Subject(s)
Respiratory Mechanics/physiology, Spinal Cord Injuries/physiopathology, Diaphragm/physiopathology, Humans, Lung Diseases/etiology, Lung Volume Measurements, Muscle, Skeletal/physiopathology, Spinal Cord Injuries/complications, Spinal Cord Injuries/mortality
20.
J Org Chem ; 69(5): 1603-6, 2004 Mar 05.
Article in English | MEDLINE | ID: mdl-14987017

ABSTRACT

A five-step, atom-efficient synthesis of the Galanthan tetracyclic skeleton has been developed. The key step is an unusual intramolecular de Mayo reaction using an isocarbostyril substrate with a functionalized tether on nitrogen. The target molecule is produced in 35% overall yield from isocarbostyril.


Subject(s)
Alkaloids/chemical synthesis, Phenanthridines/chemical synthesis, Alkaloids/chemistry, Cyclization, Hydrogen Bonding, Magnetic Resonance Spectroscopy, Molecular Structure, Phenanthridines/chemistry, Photochemistry