Results 1 - 20 of 171
1.
Ann Surg Oncol; 31(1): 488-498, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37782415

ABSTRACT

BACKGROUND: While lower socioeconomic status has been shown to correlate with worse outcomes in cancer care, data correlating neighborhood-level metrics with outcomes are scarce. We aimed to explore the association between neighborhood disadvantage and both short- and long-term postoperative outcomes in patients undergoing pancreatectomy for pancreatic ductal adenocarcinoma (PDAC). PATIENTS AND METHODS: We retrospectively analyzed 243 patients who underwent resection for PDAC at a single institution between 1 January 2010 and 15 September 2021. To measure neighborhood disadvantage, the cohort was divided into tertiles by Area Deprivation Index (ADI). Short-term outcomes of interest were minor complications, major complications, unplanned readmission within 30 days, prolonged hospitalization, and delayed gastric emptying (DGE). The long-term outcome of interest was overall survival. Logistic regression was used to test short-term outcomes; Cox proportional hazards models and the Kaplan-Meier method were used for long-term outcomes. RESULTS: The median ADI of the cohort was 49 (IQR 32-64.5). On adjusted analysis, the high-ADI group demonstrated greater odds of suffering a major complication (odds ratio [OR], 2.78; 95% confidence interval [CI], 1.26-6.40; p = 0.01) and of an unplanned readmission (OR, 3.09; 95% CI, 1.16-9.28; p = 0.03) compared with the low-ADI group. There were no significant differences between groups in the odds of minor complications, prolonged hospitalization, or DGE (all p > 0.05). High ADI did not confer an increased hazard of death (p = 0.63). CONCLUSIONS: We found that greater neighborhood disadvantage is associated with a higher risk of major complication and unplanned readmission after pancreatectomy for PDAC.
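As a rough sketch of the analysis described above (ADI tertiles tested against a binary short-term outcome with adjusted logistic regression), the snippet below uses simulated data. The cohort, variable names, and the single age adjustment are illustrative assumptions, not the study's data or full covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated cohort (illustrative only; not the study's data)
n = 243
df = pd.DataFrame({
    "adi": rng.integers(1, 101, n),           # Area Deprivation Index, 1-100
    "age": rng.normal(65, 10, n),
    "major_complication": rng.integers(0, 2, n),
})

# Divide the cohort into tertiles by ADI, as the study describes
df["adi_tertile"] = pd.qcut(df["adi"], 3, labels=["low", "mid", "high"])

# Logistic regression: odds of a major complication by ADI tertile,
# adjusted here only for age (the study adjusts for more covariates)
X = pd.get_dummies(df["adi_tertile"], drop_first=True).astype(float)
X["age"] = df["age"]
X = sm.add_constant(X)
fit = sm.Logit(df["major_complication"], X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios vs the low-ADI tertile
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```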


Subjects
Pancreatic Ductal Carcinoma, Pancreatic Neoplasms, Humans, Pancreatectomy/adverse effects, Pancreatectomy/methods, Retrospective Studies, Pancreatic Neoplasms/pathology, Pancreatic Ductal Carcinoma/pathology, Neighborhood Characteristics
2.
Crit Care; 28(1): 113, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589940

ABSTRACT

BACKGROUND: Nowhere in the healthcare system are the challenges of creating useful models with direct, time-critical clinical applications more relevant, and the obstacles to achieving those goals more formidable, than in the intensive care unit environment. Machine learning-based artificial intelligence (AI) techniques to define states and predict future events are commonplace activities of modern life. However, their penetration into acute care medicine has been slow, stuttering and uneven. Major obstacles to widespread effective application of AI approaches to the real-time care of the critically ill patient exist and need to be addressed. MAIN BODY: Clinical decision support systems (CDSSs) in acute and critical care environments support clinicians, not replace them at the bedside. As discussed in this review, the reasons are many and include the immaturity of AI-based systems with respect to situational awareness; the fundamental bias in many large databases, which do not reflect the target population of patients being treated, making fairness an important issue to address; and technical barriers to timely access to valid data and its display in a fashion useful for clinical workflow. The inherent "black-box" nature of many predictive algorithms and CDSSs makes trustworthiness and acceptance by the medical community difficult. Logistically, collating and curating, in real time, the multidimensional data streams from various sources needed to inform the algorithms, and ultimately displaying relevant decision support in a format that adapts to individual patient responses and signatures, represent the efferent limb of these systems and are often ignored during initial validation efforts. Similarly, legal and commercial barriers to accessing many existing clinical databases limit studies addressing the fairness and generalizability of predictive models and management tools. CONCLUSIONS: AI-based CDSSs are evolving and are here to stay. It is our obligation to be good shepherds of their use and further development.


Subjects
Algorithms, Artificial Intelligence, Humans, Critical Care, Intensive Care Units, Delivery of Health Care
3.
Am J Respir Crit Care Med; 207(10): 1300-1309, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36449534

ABSTRACT

Rationale: Despite etiologic and severity heterogeneity in neutropenic sepsis, management is often uniform. Understanding host response clinical subphenotypes might inform treatment strategies for neutropenic sepsis. Objectives: In this retrospective two-hospital study, we analyzed whether temperature trajectory modeling could identify distinct, clinically relevant subphenotypes among oncology patients with neutropenia and suspected infection. Methods: Among adult oncologic admissions with neutropenia and blood cultures within 24 hours, a previously validated model classified patients' initial 72-hour temperature trajectories into one of four subphenotypes. We analyzed subphenotypes' independent relationships with hospital mortality and bloodstream infection using multivariable models. Measurements and Main Results: Patients (primary cohort n = 1,145, validation cohort n = 6,564) fit into one of four temperature subphenotypes. "Hyperthermic slow resolvers" (pooled n = 1,140 [14.8%], mortality n = 104 [9.1%]) and "hypothermic" encounters (n = 1,612 [20.9%], mortality n = 138 [8.6%]) had higher mortality than "hyperthermic fast resolvers" (n = 1,314 [17.0%], mortality n = 47 [3.6%]) and "normothermic" (n = 3,643 [47.3%], mortality n = 196 [5.4%]) encounters (P < 0.001). Bloodstream infections were more common among hyperthermic slow resolvers (n = 248 [21.8%]) and hyperthermic fast resolvers (n = 240 [18.3%]) than among hypothermic (n = 188 [11.7%]) or normothermic (n = 418 [11.5%]) encounters (P < 0.001). Adjusted for confounders, hyperthermic slow resolvers had increased adjusted odds for mortality (primary cohort odds ratio, 1.91 [P = 0.03]; validation cohort odds ratio, 2.19 [P < 0.001]) and bloodstream infection (primary odds ratio, 1.54 [P = 0.04]; validation cohort odds ratio, 2.15 [P < 0.001]). Conclusions: Temperature trajectory subphenotypes were independently associated with important outcomes among hospitalized patients with neutropenia in two independent cohorts.
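The classification step above relies on a previously validated group-based trajectory model. As a loose stand-in (not that model), the sketch below clusters synthetic 72-hour temperature curves with k-means to illustrate assigning encounters to four trajectory groups; all data and parameters are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic 72-hour temperature curves: a patient-specific baseline
# plus a slow drift and measurement noise (illustrative only)
n_patients, n_hours = 500, 72
base = rng.normal(37.0, 0.4, (n_patients, 1))
drift = rng.normal(0, 0.01, (n_patients, 1)) * np.arange(n_hours)
temps = base + drift + rng.normal(0, 0.2, (n_patients, n_hours))

# k-means into four groups as a crude proxy for the four subphenotypes
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(temps)
labels = km.labels_

# Inspect each cluster's mean curve to name it (e.g., "hypothermic")
for k in range(4):
    print(k, temps[labels == k].mean(axis=0)[[0, 35, 71]].round(2))
```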


Subjects
Neoplasms, Neutropenia, Sepsis, Adult, Humans, Retrospective Studies, Temperature, Neutropenia/complications, Sepsis/complications, Fever, Neoplasms/complications, Neoplasms/therapy
4.
Am J Respir Crit Care Med; 207(12): 1602-1611, 2023 06 15.
Article in English | MEDLINE | ID: mdl-36877594

ABSTRACT

Rationale: A recent randomized trial found that using a bougie did not increase the incidence of successful intubation on first attempt in critically ill adults. The average effect of treatment in a trial population, however, may differ from effects for individuals. Objective: We hypothesized that application of a machine learning model to data from a clinical trial could estimate the effect of treatment (bougie vs. stylet) for individual patients based on their baseline characteristics ("individualized treatment effects"). Methods: This was a secondary analysis of the BOUGIE (Bougie or Stylet in Patients Undergoing Intubation Emergently) trial. A causal forest algorithm was used to model differences in outcome probabilities by randomized group assignment (bougie vs. stylet) for each patient in the first half of the trial (training cohort). This model was used to predict individualized treatment effects for each patient in the second half (validation cohort). Measurements and Main Results: Of 1,102 patients in the BOUGIE trial, 558 (50.6%) were the training cohort, and 544 (49.4%) were the validation cohort. In the validation cohort, individualized treatment effects predicted by the model significantly modified the effect of trial group assignment on the primary outcome (P value for interaction = 0.02; adjusted qini coefficient, 2.46). The most important model variables were difficult airway characteristics, body mass index, and Acute Physiology and Chronic Health Evaluation II score. Conclusions: In this hypothesis-generating secondary analysis of a randomized trial with no average treatment effect and no treatment effect in any prespecified subgroup, a causal forest machine learning algorithm used complex interactions between baseline patient and operator characteristics to identify patients who appeared to benefit from a bougie over a stylet, and others who appeared to benefit from a stylet over a bougie.
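The analysis above fit a causal forest; as a simpler, explicitly labeled stand-in, the sketch below uses a two-model T-learner on synthetic data: one outcome model per randomized arm fit on the first half of the trial, with individualized treatment effects scored on the second half. Every variable is invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic stand-in for trial data (illustrative only)
n, p = 1102, 8
X = rng.normal(size=(n, p))      # baseline patient/operator characteristics
t = rng.integers(0, 2, n)        # randomized arm: 1 = bougie, 0 = stylet
y = rng.integers(0, 2, n)        # successful intubation on first attempt

train = np.arange(n) < n // 2    # first half of the trial = training cohort
m1 = RandomForestClassifier(random_state=0).fit(X[train & (t == 1)], y[train & (t == 1)])
m0 = RandomForestClassifier(random_state=0).fit(X[train & (t == 0)], y[train & (t == 0)])

# Individualized treatment effect estimate for the validation half:
# difference in predicted success probability, bougie minus stylet
val = ~train
ite = m1.predict_proba(X[val])[:, 1] - m0.predict_proba(X[val])[:, 1]
print(ite[:5].round(3))
```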


Subjects
Critical Illness, Intratracheal Intubation, Adult, Humans, Critical Illness/therapy, Intratracheal Intubation/adverse effects, Calibration, Laryngoscopy
5.
JAMA; 331(14): 1195-1204, 2024 04 09.
Article in English | MEDLINE | ID: mdl-38501205

ABSTRACT

Importance: Among critically ill adults, randomized trials have not found oxygenation targets to affect outcomes overall. Whether the effects of oxygenation targets differ based on an individual's characteristics is unknown. Objective: To determine whether an individual's characteristics modify the effect of lower vs higher peripheral oxygen saturation (SpO2) targets on mortality. Design, Setting, and Participants: A machine learning model to predict the effect of treatment with a lower vs higher SpO2 target on mortality for individual patients was derived in the Pragmatic Investigation of Optimal Oxygen Targets (PILOT) trial and externally validated in the Intensive Care Unit Randomized Trial Comparing Two Approaches to Oxygen Therapy (ICU-ROX) trial. Critically ill adults received invasive mechanical ventilation in an intensive care unit (ICU) in the United States between July 2018 and August 2021 for PILOT (n = 1682) and in 21 ICUs in Australia and New Zealand between September 2015 and May 2018 for ICU-ROX (n = 965). Exposures: Randomization to a lower vs higher SpO2 target group. Main Outcomes and Measures: 28-day mortality. Results: In the ICU-ROX validation cohort, the predicted effect of treatment with a lower vs higher SpO2 target for individual patients ranged from a 27.2% absolute reduction to a 34.4% absolute increase in 28-day mortality. For example, patients predicted to benefit from a lower SpO2 target had a higher prevalence of acute brain injury, whereas patients predicted to benefit from a higher SpO2 target had a higher prevalence of sepsis and abnormally elevated vital signs. Patients predicted to benefit from a lower SpO2 target experienced lower mortality when randomized to the lower SpO2 group, whereas patients predicted to benefit from a higher SpO2 target experienced lower mortality when randomized to the higher SpO2 group (likelihood ratio test for effect modification P = .02). The use of the SpO2 target predicted to be best for each patient, instead of the randomized SpO2 target, would have reduced the absolute overall mortality by 6.4% (95% CI, 1.9%-10.9%). Conclusions and Relevance: Oxygenation targets that are individualized using machine learning analyses of randomized trials may reduce mortality for critically ill adults. A prospective trial evaluating the use of individualized oxygenation targets is needed.
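A minimal sketch of the reported effect-modification test: regress mortality on randomized group, a model-predicted benefit score, and their interaction, then compare against the no-interaction model with a likelihood ratio test. The data and effect sizes below are simulated assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(3)

# Simulated validation cohort (illustrative only)
n = 965
group = rng.integers(0, 2, n)        # 1 = randomized to lower SpO2 target
benefit = rng.normal(0, 0.05, n)     # model-predicted benefit of lower target
death = rng.integers(0, 2, n)        # 28-day mortality

# Full model (with interaction) vs reduced model (without)
X_full = sm.add_constant(np.column_stack([group, benefit, group * benefit]))
X_red = sm.add_constant(np.column_stack([group, benefit]))
ll_full = sm.Logit(death, X_full).fit(disp=0).llf
ll_red = sm.Logit(death, X_red).fit(disp=0).llf

lr = 2 * (ll_full - ll_red)          # likelihood ratio statistic, 1 df
print("effect-modification p =", chi2.sf(lr, df=1))
```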


Subjects
Critical Illness, Oxygen, Adult, Humans, Oxygen/therapeutic use, Critical Illness/therapy, Artificial Respiration, Prospective Studies, Oxygen Therapy, Intensive Care Units
6.
JAMA; 331(6): 500-509, 2024 02 13.
Article in English | MEDLINE | ID: mdl-38349372

ABSTRACT

Importance: The US heart allocation system prioritizes medically urgent candidates with a high risk of dying without transplant. The current therapy-based 6-status system is susceptible to manipulation and has limited rank ordering ability. Objective: To develop and validate a candidate risk score that incorporates current clinical, laboratory, and hemodynamic data. Design, Setting, and Participants: A registry-based observational study of adult heart transplant candidates (aged ≥18 years) from the US heart allocation system listed between January 1, 2019, and December 31, 2022, split by center into training (70%) and test (30%) datasets. Main Outcomes and Measures: A US candidate risk score (US-CRS) model was developed by adding a predefined set of predictors to the current French Candidate Risk Score (French-CRS) model. Sensitivity analyses were performed, which included intra-aortic balloon pumps (IABP) and percutaneous ventricular assist devices (VAD) in the definition of short-term mechanical circulatory support (MCS) for the US-CRS. Performance of the US-CRS model, French-CRS model, and 6-status model in the test dataset was evaluated by time-dependent area under the receiver operating characteristic curve (AUC) for death without transplant within 6 weeks and by overall survival concordance (c-index) with integrated AUC. Results: A total of 16,905 adult heart transplant candidates were listed (mean [SD] age, 53 [13] years; 73% male; 58% White); 796 patients (4.7%) died without a transplant. The final US-CRS contained time-varying short-term MCS (ventricular assist-extracorporeal membrane oxygenation or temporary surgical VAD), the log of bilirubin, estimated glomerular filtration rate, the log of B-type natriuretic peptide, albumin, sodium, and durable left ventricular assist device. In the test dataset, the AUC for death within 6 weeks of listing was 0.79 (95% CI, 0.75-0.83) for the US-CRS model, 0.72 (95% CI, 0.67-0.76) for the French-CRS model, and 0.68 (95% CI, 0.62-0.73) for the 6-status model. The overall c-index was 0.76 (95% CI, 0.73-0.80) for the US-CRS model, 0.69 (95% CI, 0.65-0.73) for the French-CRS model, and 0.67 (95% CI, 0.63-0.71) for the 6-status model. Classifying IABP and percutaneous VAD as short-term MCS reduced the effect size by 54%. Conclusions and Relevance: In this registry-based study of US heart transplant candidates, a continuous multivariable allocation score outperformed the 6-status system in rank ordering heart transplant candidates by medical urgency and may be useful for the medical urgency component of heart allocation.
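The study summarizes rank ordering with time-dependent AUC and an overall c-index with integrated AUC; the sketch below computes the simpler Harrell's c-index on synthetic scores, assuming the lifelines package is available. It illustrates rank ordering only, not the Uno-weighted versions used in the paper.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)

# Synthetic candidates: a higher risk score should mean a shorter time
# to death without transplant (illustrative only)
n = 1000
risk = rng.normal(size=n)                   # candidate risk score
time = rng.exponential(100 / np.exp(risk))  # days until death or censoring
event = rng.random(n) < 0.3                 # True = died without transplant

# lifelines' c-index treats higher scores as longer survival, so negate
print(concordance_index(time, -risk, event))
```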


Subjects
Heart Failure, Heart Transplantation, Tissue and Organ Procurement, Adult, Female, Humans, Male, Middle Aged, Bilirubin, Clinical Laboratory Services, Heart, Risk Factors, Risk Assessment, Heart Failure/mortality, Heart Failure/surgery, United States, Health Care Rationing/methods, Predictive Value of Tests, Tissue and Organ Procurement/methods, Tissue and Organ Procurement/organization & administration
7.
Crit Care Med; 51(12): 1697-1705, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37378460

ABSTRACT

OBJECTIVES: To identify and validate novel COVID-19 subphenotypes with potential heterogeneous treatment effects (HTEs) using electronic health record (EHR) data and 33 unique biomarkers. DESIGN: Retrospective cohort study of adults presenting for acute care, with analysis of biomarkers from residual blood collected during routine clinical care. Latent profile analysis (LPA) of biomarker and EHR data identified subphenotypes of COVID-19 inpatients, which were validated using a separate cohort of patients. HTE for glucocorticoid use among subphenotypes was evaluated using both an adjusted logistic regression model and propensity matching analysis for in-hospital mortality. SETTING: Emergency departments from four medical centers. PATIENTS: Patients diagnosed with COVID-19 based on International Classification of Diseases, 10th Revision codes and laboratory test results. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Biomarker levels generally paralleled illness severity, with higher levels among more severely ill patients. LPA of 522 COVID-19 inpatients from three sites identified two profiles: profile 1 (n = 332), with higher levels of albumin and bicarbonate, and profile 2 (n = 190), with higher inflammatory markers. Profile 2 patients had a higher median length of stay (7.4 vs 4.1 d; p < 0.001) and higher in-hospital mortality compared with profile 1 patients (25.8% vs 4.8%; p < 0.001). These findings were validated in a separate, single-site cohort (n = 192), which demonstrated similar outcome differences. HTE was observed (p = 0.03), with glucocorticoid treatment associated with increased mortality for profile 1 patients (odds ratio = 4.54). CONCLUSIONS: In this multicenter study combining EHR data with research biomarker analysis of patients with COVID-19, we identified novel profiles with divergent clinical outcomes and differential treatment responses.
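As a rough stand-in for latent profile analysis, the sketch below fits a two-component Gaussian mixture to a synthetic biomarker matrix; both methods posit latent classes generating the observed features. The biomarker columns and distributions are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic biomarker matrix (the study used 33 biomarkers plus EHR data)
n = 522
X = np.column_stack([
    rng.normal(3.5, 0.5, n),   # albumin-like column
    rng.normal(24, 3, n),      # bicarbonate-like column
    rng.lognormal(3, 1, n),    # skewed inflammatory-marker-like column
])

# Two latent classes as a proxy for the two reported profiles
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
profiles = gm.predict(X)
print(np.bincount(profiles))   # cohort split, cf. the reported 332 vs 190
```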


Subjects
COVID-19, Adult, Humans, Retrospective Studies, Glucocorticoids/therapeutic use, Biomarkers, Hospital Mortality
8.
J Surg Res; 291: 7-16, 2023 11.
Article in English | MEDLINE | ID: mdl-37329635

ABSTRACT

INTRODUCTION: Weight gain among young adults continues to increase. Identifying adults at high risk for weight gain and intervening before they gain weight could have a major public health impact. Our objective was to develop and test electronic health record-based machine learning models to predict weight gain in young adults with overweight/class 1 obesity. METHODS: Seven machine learning models were assessed, including three regression models, random forest, single-layer neural network, gradient-boosted decision trees, and support vector machine (SVM) models. Four categories of predictors were included: 1) demographics; 2) obesity-related health conditions; 3) laboratory data and vital signs; and 4) neighborhood-level variables. The cohort was split 60:40 for model training and validation. Areas under the receiver operating characteristic curve (AUCs) were calculated to determine model accuracy at predicting high-risk individuals, defined by ≥10% total body weight gain within 2 y. Variable importance was measured via generalized analysis of variance procedures. RESULTS: Of the 24,183 patients (mean [SD] age, 32.0 [6.3] y; 55.1% females) in the study, 14.2% gained ≥10% total body weight. AUCs varied from 0.557 (SVM) to 0.675 (gradient-boosted decision trees). Age, sex, and baseline body mass index were the most important predictors among the models except SVM and neural network. CONCLUSIONS: Our machine learning models performed similarly and had modest accuracy for identifying young adults at risk of weight gain. Future models may need to incorporate behavioral and/or genetic information to enhance model accuracy.
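A minimal sketch of the evaluation setup above: a 60:40 split and AUC for a gradient-boosted classifier predicting ≥10% weight gain. The features and 14.2% base rate are simulated, so the AUC here will sit near 0.5 rather than the reported 0.675.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Simulated cohort (illustrative only): demographics, labs, vitals, etc.
n, p = 24_183, 12
X = rng.normal(size=(n, p))
y = (rng.random(n) < 0.142).astype(int)   # >=10% weight gain within 2 y

# 60:40 train/validation split, as described
X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=0.6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))
```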


Subjects
Machine Learning, Weight Gain, Female, Humans, Young Adult, Adult, Male, Computer Neural Networks, Electronic Health Records, Obesity/complications, Obesity/diagnosis
9.
J Surg Oncol; 128(2): 280-288, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37073788

ABSTRACT

BACKGROUND: Outcomes for pancreatic adenocarcinoma (PDAC) remain difficult to prognosticate. Multiple models attempt to predict survival following the resection of PDAC, but their utility in the neoadjuvant population is unknown. We aimed to assess their accuracy among patients who received neoadjuvant chemotherapy (NAC). METHODS: We performed a multi-institutional retrospective analysis of patients who received NAC and underwent resection of PDAC. Two prognostic systems were evaluated: the Memorial Sloan Kettering Cancer Center Pancreatic Adenocarcinoma Nomogram (MSKCCPAN) and the American Joint Committee on Cancer (AJCC) staging system. Discrimination between predicted and actual disease-specific survival was assessed using the Uno C-statistic and the Kaplan-Meier method. Calibration of the MSKCCPAN was assessed using the Brier score. RESULTS: A total of 448 patients were included. There were 232 (51.8%) females, and the mean age was 64.1 years (±9.5). Most had AJCC Stage I or II disease (77.7%). For the MSKCCPAN, the Uno C-statistic at 12-, 24-, and 36-month time points was 0.62, 0.63, and 0.62, respectively. The AJCC system demonstrated similarly mediocre discrimination. The Brier score for the MSKCCPAN was 0.15 at 12 months, 0.26 at 24 months, and 0.30 at 36 months, demonstrating modest calibration. CONCLUSIONS: Current survival prediction models and staging systems for patients with PDAC undergoing resection after NAC have limited accuracy.
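The Brier score used above is the mean squared difference between predicted event probability and the observed 0/1 outcome; a sketch at a fixed 12-month horizon follows, on simulated predictions and ignoring the censoring adjustment a proper survival version would need.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated nomogram output and outcomes (illustrative only)
n = 448
pred_12mo = rng.uniform(0, 1, n)               # predicted 12-month death risk
died_12mo = (rng.random(n) < 0.2).astype(int)  # observed 12-month outcome

# Brier score: 0.25 matches an uninformative coin flip; lower is better
brier = np.mean((pred_12mo - died_12mo) ** 2)
print(round(brier, 3))
```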


Subjects
Adenocarcinoma, Pancreatic Ductal Carcinoma, Pancreatic Neoplasms, Female, Humans, Male, Middle Aged, Adenocarcinoma/surgery, Pancreatic Ductal Carcinoma/drug therapy, Pancreatic Ductal Carcinoma/surgery, Neoadjuvant Therapy, Neoplasm Staging, Nomograms, Pancreatic Neoplasms/drug therapy, Pancreatic Neoplasms/surgery, Prognosis, Retrospective Studies
10.
J Biomed Inform; 142: 104346, 2023 06.
Article in English | MEDLINE | ID: mdl-37061012

ABSTRACT

Daily progress notes are a common note type in the electronic health record (EHR) where healthcare providers document the patient's daily progress and treatment plans. The EHR is designed to document all the care provided to patients, but it also enables note bloat, with extraneous information that distracts from the diagnoses and treatment plans. Applications of natural language processing (NLP) to the EHR are a growing field, with the majority of methods focused on information extraction. Few tasks use NLP methods for downstream diagnostic decision support. We introduced the 2022 National NLP Clinical Challenge (N2C2) Track 3: Progress Note Understanding - Assessment and Plan Reasoning as one step towards a new suite of tasks. The Assessment and Plan Reasoning task focuses on the most critical components of progress notes: the Assessment and Plan subsections, where health problems and diagnoses are contained. The goal of the task was to develop and evaluate NLP systems that automatically predict causal relations between the overall status of the patient, contained in the Assessment section, and each component of the Plan section, which contains the diagnoses and treatment plans. A further goal was to identify and prioritize diagnoses as a first step in diagnostic decision support, finding the most relevant information in long documents like daily progress notes. We present the results of the 2022 N2C2 Track 3 and provide a description of the data, evaluation, participation and system performance.
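To make the task framing concrete, here is a deliberately tiny baseline for classifying the relation between an Assessment and one Plan subsection. The example pairs, labels, and the "[SEP]" joining convention are invented for illustration; actual N2C2 systems used far stronger models such as fine-tuned transformers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented (Assessment, Plan subsection) pairs with relation labels
pairs = [
    ("septic shock, improving on pressors", "continue vancomycin for bacteremia"),
    ("septic shock, improving on pressors", "physical therapy evaluation"),
]
labels = ["direct", "not-relevant"]

# Join each pair into one string; TF-IDF + logistic regression baseline
texts = [a + " [SEP] " + p for a, p in pairs]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict([pairs[0][0] + " [SEP] " + "broaden antibiotic coverage"]))
```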


Subjects
Electronic Health Records, Information Storage and Retrieval, Humans, Natural Language Processing, Health Personnel
11.
J Biomed Inform; 138: 104286, 2023 02.
Article in English | MEDLINE | ID: mdl-36706848

ABSTRACT

The meaningful use of electronic health records (EHR) continues to progress in the digital era with clinical decision support systems augmented by artificial intelligence. A priority in improving the provider experience is to overcome information overload and reduce cognitive burden, so that fewer medical errors and cognitive biases are introduced during patient care. One major type of medical error is diagnostic error, due to systematic or predictable errors in judgment that rely on heuristics. The potential for clinical natural language processing (cNLP) to model diagnostic reasoning in humans, with forward reasoning from data to diagnosis, and thereby reduce cognitive burden and medical error has not been investigated. Existing tasks to advance the science in cNLP have largely focused on information extraction and named entity recognition through classification tasks. We introduce a novel suite of tasks, Diagnostic Reasoning Benchmarks (DR.BENCH), as a new benchmark for developing and evaluating cNLP models with clinical diagnostic reasoning ability. The suite includes six tasks from ten publicly available datasets addressing clinical text understanding, medical knowledge reasoning, and diagnosis generation. DR.BENCH is the first clinical suite of tasks designed as a natural language generation framework to evaluate pre-trained language models for diagnostic reasoning. The goal of DR.BENCH is to advance the science in cNLP to support downstream applications in computerized diagnostic decision support and improve the efficiency and accuracy of healthcare providers during patient care. We fine-tune and evaluate state-of-the-art generative models on DR.BENCH. Experiments show that, with domain-adaptation pre-training on medical knowledge, the model demonstrated opportunities for improvement when evaluated on DR.BENCH. We share DR.BENCH as a publicly available GitLab repository with a systematic approach to load and evaluate models for the cNLP community. We also discuss the carbon footprint produced during the experiments and encourage future work on DR.BENCH to report the carbon footprint.


Subjects
Artificial Intelligence, Natural Language Processing, Humans, Benchmarking, Problem Solving, Information Storage and Retrieval
12.
Crit Care Med; 50(2): e162-e172, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34406171

ABSTRACT

OBJECTIVES: Prognostication of neurologic status among survivors of in-hospital cardiac arrests remains a challenging task for physicians. Although models such as the Cardiac Arrest Survival Post-Resuscitation In-hospital score are useful for predicting neurologic outcomes, they were developed using traditional statistical techniques. In this study, we derive and compare the performance of several machine learning models with each other and with the Cardiac Arrest Survival Post-Resuscitation In-hospital score for predicting the likelihood of favorable neurologic outcomes among survivors of resuscitation. DESIGN: Analysis of the Get With The Guidelines-Resuscitation registry. SETTING: Seven-hundred fifty-five hospitals participating in Get With The Guidelines-Resuscitation from January 1, 2001, to January 28, 2017. PATIENTS: Adult in-hospital cardiac arrest survivors. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Of 117,674 patients in our cohort, 28,409 (24%) had a favorable neurologic outcome, defined as survival with a Cerebral Performance Category score of less than or equal to 2 at discharge. Using patient characteristics, pre-existing conditions, prearrest interventions, and periarrest variables, we constructed logistic regression, support vector machines, random forests, gradient boosted machines, and neural network machine learning models to predict favorable neurologic outcome. Events prior to October 20, 2009, were used for model derivation, and all subsequent events were used for validation. The gradient boosted machine predicted favorable neurologic status at discharge significantly better than the Cardiac Arrest Survival Post-Resuscitation In-hospital score (C-statistic: 0.81 vs 0.73; p < 0.001) and outperformed all other machine learning models in terms of discrimination, calibration, and accuracy measures. Variables that were consistently most important for prediction across all models were duration of arrest, initial cardiac arrest rhythm, admission Cerebral Performance Category score, and age. CONCLUSIONS: The gradient boosted machine algorithm was the most accurate for predicting favorable neurologic outcomes in in-hospital cardiac arrest survivors. Our results highlight the utility of machine learning for predicting neurologic outcomes in resuscitated patients.


Subjects
Forecasting/methods, Heart Arrest/complications, Machine Learning/standards, Outcome Assessment (Health Care)/statistics & numerical data, Aged, Area Under Curve, Cohort Studies, Female, Heart Arrest/epidemiology, Heart Arrest/mortality, Humans, Machine Learning/statistics & numerical data, Male, Middle Aged, Outcome Assessment (Health Care)/methods, Prognosis, ROC Curve, Survivors/statistics & numerical data
13.
Crit Care Med; 50(9): 1339-1347, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores greater than or equal to the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause inhospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of which 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191, intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.26% vs 0.49%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high- (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, decrease in time to ICU transfer, and increase in vital sign reassessment within 2 hours. CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced inhospital mortality, likely driven by earlier and more frequent ICU transfer.
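A small sketch of the score-to-workflow mapping described above, with gamma-distributed stand-in scores; the distribution and tier messages are assumptions, not the actual eCART score or order sets.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in scores for the admitted population (illustrative only)
scores = rng.gamma(2.0, 1.0, 60_261)
p89, p95 = np.percentile(scores, [89, 95])

def risk_tier(score: float) -> str:
    """Map a score to the study's escalation tiers by percentile cutoffs."""
    if score >= p95:
        return "high risk: physician assessment for ICU admission"
    if score >= p89:
        return "intermediate risk: vitals q2h + physician reviews plan"
    return "average risk: routine monitoring"

print(risk_tier(float(scores[0])))
print(risk_tier(float(p95) + 1.0))
```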


Subjects
Early Warning Score, Heart Arrest, Adult, Heart Arrest/diagnosis, Heart Arrest/therapy, Hospital Mortality, Humans, Intensive Care Units, Machine Learning, Vital Signs
14.
Crit Care Med; 50(2): 212-223, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35100194

ABSTRACT

OBJECTIVES: Body temperature trajectories of infected patients are associated with specific immune profiles and survival. We determined the association between temperature trajectories and distinct manifestations of coronavirus disease 2019. DESIGN: Retrospective observational study. SETTING: Four hospitals within an academic healthcare system from March 2020 to February 2021. PATIENTS: All adult patients hospitalized with coronavirus disease 2019. INTERVENTIONS: Using a validated group-based trajectory model, we classified patients into four previously defined temperature trajectory subphenotypes using oral temperature measurements from the first 72 hours of hospitalization. Clinical characteristics, biomarkers, and outcomes were compared between subphenotypes. MEASUREMENTS AND MAIN RESULTS: The 5,903 hospitalized coronavirus disease 2019 patients were classified into four subphenotypes: hyperthermic slow resolvers (n = 1,452, 25%), hyperthermic fast resolvers (n = 1,469, 25%), normothermics (n = 2,126, 36%), and hypothermics (n = 856, 15%). Hypothermics had abnormal coagulation markers, with the highest d-dimer and fibrin monomers (p < 0.001) and the highest prevalence of cerebrovascular accidents (10%, p = 0.001). The prevalence of venous thromboembolism was significantly different between subphenotypes (p = 0.005), with the highest rate in hypothermics (8.5%) and lowest in hyperthermic slow resolvers (5.1%). Hyperthermic slow resolvers had abnormal inflammatory markers, with the highest C-reactive protein, ferritin, and interleukin-6 (p < 0.001). Hyperthermic slow resolvers had increased odds of mechanical ventilation, vasopressors, and 30-day inpatient mortality (odds ratio, 1.58; 95% CI, 1.13-2.19) compared with hyperthermic fast resolvers. Over the course of the pandemic, we observed a drastic decrease in the prevalence of hyperthermic slow resolvers, from representing 53% of admissions in March 2020 to less than 15% by 2021. We found that dexamethasone use was associated with a significant reduction in the probability of hyperthermic slow resolver membership (27% reduction; 95% CI, 23-31%; p < 0.001). CONCLUSIONS: Hypothermics had abnormal coagulation markers, suggesting a hypercoagulable subphenotype. Hyperthermic slow resolvers had elevated inflammatory markers and the highest odds of mortality, suggesting a hyperinflammatory subphenotype. Future work should investigate whether temperature subphenotypes benefit from targeted antithrombotic and anti-inflammatory strategies.


Subjects
Body Temperature, COVID-19/pathology, Hyperthermia/pathology, Hypothermia/pathology, Phenotype, Academic Medical Centers, Aged, Anti-Inflammatory Agents/therapeutic use, Biomarkers/blood, Blood Coagulation, Cohort Studies, Dexamethasone/therapeutic use, Female, Humans, Inflammation, Male, Middle Aged, Organ Dysfunction Scores, Retrospective Studies, SARS-CoV-2
15.
BMC Pregnancy Childbirth; 22(1): 295, 2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35387624

ABSTRACT

BACKGROUND: Early warning scores are designed to identify hospitalized patients who are at high risk of clinical deterioration. Although many general scores have been developed for the medical-surgical wards, specific scores have also been developed for obstetric patients due to differences in normal vital sign ranges and potential complications in this unique population. The comparative performance of general and obstetric early warning scores for predicting deterioration and infection on the maternal wards is not known. METHODS: This was an observational cohort study at the University of Chicago that included patients hospitalized on obstetric wards from November 2008 to December 2018. Obstetric scores (modified early obstetric warning system (MEOWS), maternal early warning criteria (MEWC), and maternal early warning trigger (MEWT)), paper-based general scores (Modified Early Warning Score (MEWS) and National Early Warning Score (NEWS)), and a general score developed using machine learning (electronic Cardiac Arrest Risk Triage (eCART) score) were compared using the area under the receiver operating characteristic curve (AUC) for predicting ward to intensive care unit (ICU) transfer and/or death and new infection. RESULTS: A total of 19,611 patients were included, with 43 (0.2%) experiencing deterioration (ICU transfer and/or death) and 88 (0.4%) experiencing an infection. eCART had the highest discrimination for deterioration (p < 0.05 for all comparisons), with an AUC of 0.86, followed by MEOWS (0.74), NEWS (0.72), MEWC (0.71), MEWS (0.70), and MEWT (0.65). MEWC, MEWT, and MEOWS had higher accuracy than MEWS and NEWS but lower accuracy than eCART at specific cut-off thresholds. For predicting infection, eCART (AUC 0.77) had the highest discrimination. CONCLUSIONS: Within the limitations of our retrospective study, eCART had the highest accuracy for predicting deterioration and infection in our ante- and postpartum patient population. Maternal early warning scores were more accurate than MEWS and NEWS. While institutional choice of an early warning system is complex, our results have important implications for the risk stratification of maternal ward patients, especially since the low prevalence of events means that small improvements in accuracy can lead to large decreases in false alarms.


Subjects
Clinical Deterioration, Early Warning Score, Heart Arrest, Female, Heart Arrest/diagnosis, Humans, Intensive Care Units, Pregnancy, ROC Curve, Retrospective Studies, Risk Assessment/methods
16.
Pediatr Crit Care Med; 23(7): 514-523, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35446816

ABSTRACT

OBJECTIVES: Unrecognized clinical deterioration during illness requiring hospitalization is associated with high risk of mortality and long-term morbidity among children. Our objective was to develop and externally validate machine learning algorithms using electronic health records for identifying ICU transfer within 12 hours, an indicator of deterioration in a child's condition. DESIGN: Observational cohort study. SETTING: Two urban, tertiary-care, academic hospitals (sites 1 and 2). PATIENTS: Pediatric inpatients (age <18 yr). INTERVENTIONS: None. MEASUREMENT AND MAIN RESULTS: Our primary outcome was direct ward to ICU transfer. Using age, vital signs, and laboratory results, we derived logistic regression with regularization, restricted cubic spline regression, random forest, and gradient boosted machine learning models. Among 50,830 admissions at site 1 and 88,970 admissions at site 2, 1,993 (3.92%) and 2,317 (2.60%) experienced the primary outcome, respectively. Site 1 data were split longitudinally into derivation (2009-2017) and validation (2018-2019), whereas site 2 constituted the external test cohort. Across both sites, the gradient boosted machine was the most accurate model and outperformed a modified version of the Bedside Pediatric Early Warning Score that only used physiologic variables in terms of discrimination (C-statistic: site 1, 0.84 vs 0.71, p < 0.001; site 2, 0.80 vs 0.74, p < 0.001), sensitivity, specificity, and number needed to alert. CONCLUSIONS: We developed and externally validated a novel machine learning model that identifies ICU transfers in hospitalized children more accurately than current tools. Our model enables early detection of children at risk for deterioration, thereby creating opportunities for intervention and improvement in outcomes.


Subjects
Electronic Health Records, Machine Learning, Child, Cohort Studies, Humans, Pediatric Intensive Care Units, Retrospective Studies, Vital Signs
17.
Am J Respir Crit Care Med; 204: 403-411, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33891529

ABSTRACT

RATIONALE: Variation in hospital mortality has been described for coronavirus disease 2019 (COVID-19), but the factors that explain these differences remain unclear. OBJECTIVE: To use a large, nationally representative dataset of critically ill adults with COVID-19 to determine which factors explain variability in mortality. METHODS: In this multicenter cohort study, we examined adults hospitalized in intensive care units with COVID-19 at 70 United States hospitals between March and June 2020. The primary outcome was 28-day mortality. We examined patient-level and hospital-level variables. Mixed-effects logistic regression was used to identify factors associated with interhospital variation. The median odds ratio (OR) was calculated to compare outcomes in higher- vs. lower-mortality hospitals. A gradient boosted machine algorithm was developed for individual-level mortality models. MEASUREMENTS AND MAIN RESULTS: A total of 4,019 patients were included, 1,537 (38%) of whom died by 28 days. Mortality varied considerably across hospitals (0-82%). After adjustment for patient- and hospital-level domains, interhospital variation was attenuated (the median OR declined from 2.06 [95% CI, 1.73-2.37] to 1.22 [95% CI, 1.00-1.38]), with the greatest changes occurring with adjustment for acute physiology, socioeconomic status, and strain. For individual patients, the relative contribution of each domain to mortality risk was: acute physiology (49%), demographics and comorbidities (20%), socioeconomic status (12%), strain (9%), hospital quality (8%), and treatments (3%). CONCLUSION: There is considerable interhospital variation in mortality for critically ill patients with COVID-19, which is mostly explained by hospital-level socioeconomic status, strain, and acute physiologic differences. Individual mortality is driven mostly by patient-level factors.
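The interhospital variation above is summarized with the median odds ratio (MOR) from a mixed-effects logistic model: MOR = exp(sqrt(2 * var_u) * z_0.75), where var_u is the hospital-level random-intercept variance. The variances below are back-solved to roughly reproduce the reported MORs and are not values taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def median_or(var_u: float) -> float:
    """Median odds ratio from a random-intercept variance var_u."""
    return float(np.exp(np.sqrt(2 * var_u) * norm.ppf(0.75)))

# Illustrative variances back-solved from the reported MORs
print(round(median_or(0.574), 2))  # ~2.06, before adjustment
print(round(median_or(0.043), 2))  # ~1.22, after adjustment
```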


Subjects
Algorithms, COVID-19/epidemiology, Critical Illness/therapy, Intensive Care Units/statistics & numerical data, Aged, Comorbidity, Critical Illness/epidemiology, Female, Follow-Up Studies, Hospital Mortality/trends, Humans, Incidence, Male, Middle Aged, Prognosis, Retrospective Studies, Risk Factors, SARS-CoV-2, Survival Rate/trends, United States/epidemiology
18.
Ann Intern Med; 174(1): 50-57, 2021 01.
Article in English | MEDLINE | ID: mdl-33105091

ABSTRACT

BACKGROUND: Across the United States, various social distancing measures were implemented to control the spread of coronavirus disease 2019 (COVID-19). However, the effectiveness of such measures for specific regions with varying population demographic characteristics and different levels of adherence to social distancing is uncertain. OBJECTIVE: To determine the effect of social distancing measures in unique regions. DESIGN: An agent-based simulation model. SETTING: Agent-based model applied to Dane County, Wisconsin; the Milwaukee metropolitan (metro) area; and New York City (NYC). PATIENTS: Synthetic population of different ages. INTERVENTION: Different times for implementing and easing social distancing measures at different levels of adherence. MEASUREMENTS: The model represented the social network and interactions among persons in a region, considering population demographic characteristics, limited testing availability, "imported" infections, asymptomatic disease transmission, and age-specific adherence to social distancing measures. The primary outcome was the total number of confirmed COVID-19 cases. RESULTS: The timing of and adherence to social distancing had a major effect on COVID-19 occurrence. In NYC, implementing social distancing measures 1 week earlier would have reduced the total number of confirmed cases from 203,261 to 41,366 as of 31 May 2020, whereas a 1-week delay could have increased the number of confirmed cases to 1,407,600. A delay in implementation had a differential effect on the number of cases in the Milwaukee metro area versus Dane County, indicating that the effect of social distancing measures varies even within the same state. LIMITATION: The effect of weather conditions on transmission dynamics was not considered. CONCLUSION: The timing of implementing and easing social distancing measures has major effects on the number of COVID-19 cases. PRIMARY FUNDING SOURCE: National Institute of Allergy and Infectious Diseases.
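A toy agent-style simulation of the mechanism the model captures: starting distancing earlier cuts the per-contact transmission probability sooner and shrinks cumulative infections. Every parameter here (population, contacts, probabilities) is invented and far simpler than the paper's demographic network model.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate(distancing_day: int, n=10_000, days=120, contacts=8) -> int:
    """Return cumulative infections for a given distancing start day."""
    infected = np.zeros(n, bool)
    infected[:10] = True               # seed ("imported") infections
    recovered = np.zeros(n, bool)
    for day in range(days):
        # Distancing lowers the per-contact transmission probability
        p = 0.01 if day >= distancing_day else 0.03
        p_any = 1 - (1 - p * infected.mean()) ** contacts
        newly = (rng.random(n) < p_any) & ~infected & ~recovered
        recovering = infected & (rng.random(n) < 0.1)
        infected |= newly
        infected &= ~recovering
        recovered |= recovering
    return int((infected | recovered).sum())

print(simulate(14), simulate(21))      # earlier vs later intervention
```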


Subjects
COVID-19/prevention & control, Cooperative Behavior, Physical Distancing, COVID-19/epidemiology, Computer Simulation, Humans, New York City/epidemiology, SARS-CoV-2, United States/epidemiology, Wisconsin/epidemiology
19.
Am J Transplant; 21(11): 3684-3693, 2021 11.
Article in English | MEDLINE | ID: mdl-33864733

ABSTRACT

Under the new US heart allocation policy, transplant centers listed significantly more candidates at high-priority statuses (Status 1 and 2) with mechanical circulatory support devices than expected. We determined whether the practice change was widespread or concentrated among certain transplant centers. Using data from the Scientific Registry of Transplant Recipients, we used mixed-effects logistic regression to compare the observed listings of adult, heart-alone transplant candidates post-policy (December 2018 to February 2020) to a seasonally matched pre-policy cohort (December 2016 to February 2018). US transplant centers (N = 96) listed a similar number of candidates in each policy period (4472 vs. 4498) but listed significantly more at high-priority status (25.5% vs. 7.0%, p < .001) than expected. Adjusted for candidate characteristics, 91 of 96 (94.8%) centers listed significantly more candidates at high-priority status than expected, with the unexpected increase varying from 4.8% to 50.4% (interquartile range [IQR]: 14.0%-23.3%). Centers in organ procurement organizations (OPOs) with the highest Status 1A transplant rate pre-policy were significantly more likely to utilize high-priority status under the new policy (OR: 9.73, p = .01). The new heart allocation policy was associated with widespread and significantly variable changes in transplant center practice that may undermine the effectiveness of the new system.


Subjects
Heart Transplantation, Tissue and Organ Procurement, Adult, Humans, Policies, Transplant Recipients, Waiting Lists
20.
Crit Care Med; 49(10): 1694-1705, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.


Subjects
Anti-Bacterial Agents/administration & dosage, Phenotype, Sepsis/genetics, Time-to-Treatment/statistics & numerical data, Aged, Aged 80 and over, Anti-Bacterial Agents/therapeutic use, Hospital Emergency Service/organization & administration, Hospital Emergency Service/statistics & numerical data, Female, Hospitalization/statistics & numerical data, Humans, Illinois/epidemiology, Male, Middle Aged, Prospective Studies, Retrospective Studies, Sepsis/drug therapy, Sepsis/physiopathology, Time Factors