Results 1 - 20 of 152
1.
Crit Care Med ; 52(9): 1439-1450, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39145702

ABSTRACT

Critical care trials evaluate the effect of interventions in patients with diverse personal histories and causes of illness, often under the umbrella of heterogeneous clinical syndromes, such as sepsis or acute respiratory distress syndrome. Given this variation, it is reasonable to expect that the effect of treatment on outcomes may differ for individuals with variable characteristics. However, in randomized controlled trials, efficacy is typically assessed by the average treatment effect (ATE), which quantifies the average effect of the intervention on the outcome in the study population. Importantly, the ATE may hide variations of the treatment's effect on a clinical outcome across levels of patient characteristics, which may erroneously lead to the conclusion that an intervention does not work overall when it may in fact benefit certain patients. In this review, we describe methodological approaches for assessing heterogeneity of treatment effect (HTE), including expert-derived subgrouping, data-driven subgrouping, baseline risk modeling, treatment effect modeling, and individual treatment rule estimation. Next, we outline how insights from HTE analyses can be incorporated into the design of clinical trials. Finally, we propose a research agenda for advancing the field and bringing HTE approaches to the bedside.
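The central point above, that an ATE of zero can mask opposing subgroup effects, can be shown with a minimal sketch using synthetic numbers (the subgroups and rates below are invented for illustration, not taken from any trial):

```python
def ate(outcomes_treated, outcomes_control):
    """Average treatment effect: difference in mean outcome rates."""
    return (sum(outcomes_treated) / len(outcomes_treated)
            - sum(outcomes_control) / len(outcomes_control))

# Hypothetical subgroup A: treatment reduces mortality (0.20 -> 0.10)
treated_a = [1] * 10 + [0] * 90   # 10% mortality
control_a = [1] * 20 + [0] * 80   # 20% mortality

# Hypothetical subgroup B: treatment increases mortality (0.10 -> 0.20)
treated_b = [1] * 20 + [0] * 80
control_b = [1] * 10 + [0] * 90

print(ate(treated_a, control_a))  # -0.10 (benefit in subgroup A)
print(ate(treated_b, control_b))  # +0.10 (harm in subgroup B)
# Pooled ATE is 0.0: the benefit and harm cancel and both are hidden.
print(ate(treated_a + treated_b, control_a + control_b))
```

This is exactly the failure mode HTE methods are designed to detect.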


Subjects
Critical Care , Precision Medicine , Humans , Critical Care/methods , Precision Medicine/methods , Research Design , Observational Studies as Topic/methods , Randomized Controlled Trials as Topic/methods
2.
Ann Surg Oncol ; 31(1): 488-498, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37782415

ABSTRACT

BACKGROUND: While lower socioeconomic status has been shown to correlate with worse outcomes in cancer care, data correlating neighborhood-level metrics with outcomes are scarce. We aim to explore the association between neighborhood disadvantage and both short- and long-term postoperative outcomes in patients undergoing pancreatectomy for pancreatic ductal adenocarcinoma (PDAC). PATIENTS AND METHODS: We retrospectively analyzed 243 patients who underwent resection for PDAC at a single institution between 1 January 2010 and 15 September 2021. To measure neighborhood disadvantage, the cohort was divided into tertiles by Area Deprivation Index (ADI). Short-term outcomes of interest were minor complications, major complications, unplanned readmission within 30 days, prolonged hospitalization, and delayed gastric emptying (DGE). The long-term outcome of interest was overall survival. Logistic regression was used to test short-term outcomes; Cox proportional hazards models and Kaplan-Meier method were used for long-term outcomes. RESULTS: The median ADI of the cohort was 49 (IQR 32-64.5). On adjusted analysis, the high-ADI group demonstrated greater odds of suffering a major complication (odds ratio [OR], 2.78; 95% confidence interval [CI], 1.26-6.40; p = 0.01) and of an unplanned readmission (OR, 3.09; 95% CI, 1.16-9.28; p = 0.03) compared with the low-ADI group. There were no significant differences between groups in the odds of minor complications, prolonged hospitalization, or DGE (all p > 0.05). High ADI did not confer an increased hazard of death (p = 0.63). CONCLUSIONS: We found that worse neighborhood disadvantage is associated with a higher risk of major complication and unplanned readmission after pancreatectomy for PDAC.
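The adjusted odds ratios above come from logistic regression, but the basic unadjusted calculation from a 2x2 table is worth seeing; the counts below are hypothetical, and the Wald (Woolf) confidence interval is the standard textbook method:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR with Wald 95% CI from a 2x2 table:
    a = exposed with event, b = exposed without,
    c = unexposed with event, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: major complications in high- vs low-ADI tertiles
or_, lo, hi = odds_ratio_ci(20, 60, 8, 72)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # OR = 3.00
```

The study's reported ORs are adjusted for covariates, so they would differ from a raw 2x2 calculation like this one.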


Subjects
Pancreatic Ductal Carcinoma , Pancreatic Neoplasms , Humans , Pancreatectomy/adverse effects , Pancreatectomy/methods , Retrospective Studies , Pancreatic Neoplasms/pathology , Pancreatic Ductal Carcinoma/pathology , Neighborhood Characteristics
3.
Am J Respir Crit Care Med ; 207(10): 1300-1309, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36449534

ABSTRACT

Rationale: Despite etiologic and severity heterogeneity in neutropenic sepsis, management is often uniform. Understanding host response clinical subphenotypes might inform treatment strategies for neutropenic sepsis. Objectives: In this retrospective two-hospital study, we analyzed whether temperature trajectory modeling could identify distinct, clinically relevant subphenotypes among oncology patients with neutropenia and suspected infection. Methods: Among adult oncologic admissions with neutropenia and blood cultures within 24 hours, a previously validated model classified patients' initial 72-hour temperature trajectories into one of four subphenotypes. We analyzed subphenotypes' independent relationships with hospital mortality and bloodstream infection using multivariable models. Measurements and Main Results: Patients (primary cohort n = 1,145, validation cohort n = 6,564) fit into one of four temperature subphenotypes. "Hyperthermic slow resolvers" (pooled n = 1,140 [14.8%], mortality n = 104 [9.1%]) and "hypothermic" encounters (n = 1,612 [20.9%], mortality n = 138 [8.6%]) had higher mortality than "hyperthermic fast resolvers" (n = 1,314 [17.0%], mortality n = 47 [3.6%]) and "normothermic" (n = 3,643 [47.3%], mortality n = 196 [5.4%]) encounters (P < 0.001). Bloodstream infections were more common among hyperthermic slow resolvers (n = 248 [21.8%]) and hyperthermic fast resolvers (n = 240 [18.3%]) than among hypothermic (n = 188 [11.7%]) or normothermic (n = 418 [11.5%]) encounters (P < 0.001). Adjusted for confounders, hyperthermic slow resolvers had increased adjusted odds for mortality (primary cohort odds ratio, 1.91 [P = 0.03]; validation cohort odds ratio, 2.19 [P < 0.001]) and bloodstream infection (primary odds ratio, 1.54 [P = 0.04]; validation cohort odds ratio, 2.15 [P < 0.001]). 
Conclusions: Temperature trajectory subphenotypes were independently associated with important outcomes among hospitalized patients with neutropenia in two independent cohorts.
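The study classified 72-hour temperature curves with a previously validated group-based trajectory model. As a rough illustration of the idea only, not the validated model, a rule-based sketch with invented thresholds might look like this:

```python
def classify_trajectory(temps_c):
    """Toy classifier for a 72-h temperature series (evenly spaced readings).
    Thresholds are illustrative; the study used a validated
    group-based trajectory model, not rules like these."""
    third = len(temps_c) // 3
    early = temps_c[:third]    # first third of the window
    late = temps_c[-third:]    # last third of the window
    if max(early) >= 38.0:     # febrile early on
        if max(late) < 37.5:
            return "hyperthermic fast resolver"
        return "hyperthermic slow resolver"
    if min(early) <= 36.0:     # low temperature early on
        return "hypothermic"
    return "normothermic"

print(classify_trajectory([38.5, 38.2, 37.2, 36.9, 36.8, 36.9]))  # fast resolver
print(classify_trajectory([38.6, 38.4, 38.3, 38.1, 38.2, 38.0]))  # slow resolver
print(classify_trajectory([35.8, 35.9, 36.1, 36.2, 36.3, 36.4]))  # hypothermic
```

The appeal of the subphenotype approach is visible even in this toy version: the same peak fever maps to different groups depending on whether it resolves.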


Subjects
Neoplasms , Neutropenia , Sepsis , Adult , Humans , Retrospective Studies , Temperature , Neutropenia/complications , Sepsis/complications , Fever , Neoplasms/complications , Neoplasms/therapy
4.
Am J Respir Crit Care Med ; 207(12): 1602-1611, 2023 06 15.
Article in English | MEDLINE | ID: mdl-36877594

ABSTRACT

Rationale: A recent randomized trial found that using a bougie did not increase the incidence of successful intubation on first attempt in critically ill adults. The average effect of treatment in a trial population, however, may differ from effects for individuals. Objective: We hypothesized that application of a machine learning model to data from a clinical trial could estimate the effect of treatment (bougie vs. stylet) for individual patients based on their baseline characteristics ("individualized treatment effects"). Methods: This was a secondary analysis of the BOUGIE (Bougie or Stylet in Patients Undergoing Intubation Emergently) trial. A causal forest algorithm was used to model differences in outcome probabilities by randomized group assignment (bougie vs. stylet) for each patient in the first half of the trial (training cohort). This model was used to predict individualized treatment effects for each patient in the second half (validation cohort). Measurements and Main Results: Of 1,102 patients in the BOUGIE trial, 558 (50.6%) were the training cohort, and 544 (49.4%) were the validation cohort. In the validation cohort, individualized treatment effects predicted by the model significantly modified the effect of trial group assignment on the primary outcome (P value for interaction = 0.02; adjusted qini coefficient, 2.46). The most important model variables were difficult airway characteristics, body mass index, and Acute Physiology and Chronic Health Evaluation II score. Conclusions: In this hypothesis-generating secondary analysis of a randomized trial with no average treatment effect and no treatment effect in any prespecified subgroups, a causal forest machine learning algorithm identified patients who appeared to benefit from the use of a bougie over a stylet and from the use of a stylet over a bougie using complex interactions between baseline patient and operator characteristics.
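The trial analysis used a causal forest; as a much cruder stand-in for the same idea, one can estimate arm-specific outcome rates within covariate strata and take their difference (all data below are invented, and "difficult airway" as the single stratifying variable is an assumption for illustration):

```python
from collections import defaultdict

def stratum_treatment_effects(rows):
    """Per-stratum treatment effect estimates (bougie minus stylet
    first-attempt success rate). rows are (stratum, arm, outcome)
    tuples with arm in {"bougie", "stylet"} and outcome in {0, 1}."""
    tally = defaultdict(lambda: {"bougie": [0, 0], "stylet": [0, 0]})
    for stratum, arm, outcome in rows:
        tally[stratum][arm][0] += outcome  # events
        tally[stratum][arm][1] += 1        # n
    return {
        stratum: arms["bougie"][0] / arms["bougie"][1]
                 - arms["stylet"][0] / arms["stylet"][1]
        for stratum, arms in tally.items()
    }

# Hypothetical data: success (1) / failure (0) by difficult-airway stratum
rows = (
    [("difficult", "bougie", 1)] * 70 + [("difficult", "bougie", 0)] * 30 +
    [("difficult", "stylet", 1)] * 55 + [("difficult", "stylet", 0)] * 45 +
    [("easy", "bougie", 1)] * 80 + [("easy", "bougie", 0)] * 20 +
    [("easy", "stylet", 1)] * 85 + [("easy", "stylet", 0)] * 15
)
print(stratum_treatment_effects(rows))  # difficult ~ +0.15, easy ~ -0.05
```

A causal forest generalizes this by learning the strata (and their interactions) from the data rather than prespecifying them.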


Subjects
Critical Illness , Intratracheal Intubation , Adult , Humans , Critical Illness/therapy , Intratracheal Intubation/adverse effects , Calibration , Laryngoscopy
5.
JAMA ; 331(14): 1195-1204, 2024 04 09.
Article in English | MEDLINE | ID: mdl-38501205

ABSTRACT

Importance: Among critically ill adults, randomized trials have not found oxygenation targets to affect outcomes overall. Whether the effects of oxygenation targets differ based on an individual's characteristics is unknown. Objective: To determine whether an individual's characteristics modify the effect of lower vs higher peripheral oxygen saturation (SpO2) targets on mortality. Design, Setting, and Participants: A machine learning model to predict the effect of treatment with a lower vs higher SpO2 target on mortality for individual patients was derived in the Pragmatic Investigation of Optimal Oxygen Targets (PILOT) trial and externally validated in the Intensive Care Unit Randomized Trial Comparing Two Approaches to Oxygen Therapy (ICU-ROX) trial. Critically ill adults received invasive mechanical ventilation in an intensive care unit (ICU) in the United States between July 2018 and August 2021 for PILOT (n = 1682) and in 21 ICUs in Australia and New Zealand between September 2015 and May 2018 for ICU-ROX (n = 965). Exposures: Randomization to a lower vs higher SpO2 target group. Main Outcomes and Measures: 28-day mortality. Results: In the ICU-ROX validation cohort, the predicted effect of treatment with a lower vs higher SpO2 target for individual patients ranged from a 27.2% absolute reduction to a 34.4% absolute increase in 28-day mortality. For example, patients predicted to benefit from a lower SpO2 target had a higher prevalence of acute brain injury, whereas patients predicted to benefit from a higher SpO2 target had a higher prevalence of sepsis and abnormally elevated vital signs. Patients predicted to benefit from a lower SpO2 target experienced lower mortality when randomized to the lower SpO2 group, whereas patients predicted to benefit from a higher SpO2 target experienced lower mortality when randomized to the higher SpO2 group (likelihood ratio test for effect modification P = .02).
The use of the SpO2 target predicted to be best for each patient, instead of the randomized SpO2 target, would have reduced overall absolute mortality by 6.4% (95% CI, 1.9%-10.9%). Conclusions and Relevance: Oxygenation targets that are individualized using machine learning analyses of randomized trials may reduce mortality for critically ill adults. A prospective trial evaluating the use of individualized oxygenation targets is needed.
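Estimating the mortality a "give each patient the predicted-best target" policy would have produced is a standard off-policy trick: under 1:1 randomization, the subset of patients whose randomized arm happened to match the recommendation is an unbiased sample of outcomes under the policy. A sketch with invented data (not the trial's numbers):

```python
def policy_value(patients):
    """Mortality under a 'use the predicted-best SpO2 target' policy,
    estimated from patients whose randomized arm matched the
    recommendation (valid when arms are assigned 1:1 at random).
    patients are (recommended, assigned, died) tuples."""
    matched = [died for rec, assigned, died in patients if rec == assigned]
    return sum(matched) / len(matched)

# Hypothetical cohort: recommendation, randomized arm, 28-day death
patients = (
    [("lower", "lower", 0)] * 45 + [("lower", "lower", 1)] * 5 +
    [("lower", "higher", 0)] * 40 + [("lower", "higher", 1)] * 10 +
    [("higher", "higher", 0)] * 44 + [("higher", "higher", 1)] * 6 +
    [("higher", "lower", 0)] * 38 + [("higher", "lower", 1)] * 12
)
overall = sum(d for _, _, d in patients) / len(patients)
print(overall)                  # 0.165 observed under randomization
print(policy_value(patients))   # 0.11 estimated under the policy
```

The 6.4% absolute reduction reported above is this kind of contrast, computed with the trial's actual predictions and outcomes.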


Subjects
Critical Illness , Oxygen , Adult , Humans , Oxygen/therapeutic use , Critical Illness/therapy , Artificial Respiration , Prospective Studies , Oxygen Therapy , Intensive Care Units
6.
JAMA ; 331(6): 500-509, 2024 02 13.
Article in English | MEDLINE | ID: mdl-38349372

ABSTRACT

Importance: The US heart allocation system prioritizes medically urgent candidates with a high risk of dying without transplant. The current therapy-based 6-status system is susceptible to manipulation and has limited rank-ordering ability. Objective: To develop and validate a candidate risk score that incorporates current clinical, laboratory, and hemodynamic data. Design, Setting, and Participants: A registry-based observational study of adult heart transplant candidates (aged ≥18 years) in the US heart allocation system listed between January 1, 2019, and December 31, 2022, split by center into training (70%) and test (30%) datasets. Main Outcomes and Measures: A US candidate risk score (US-CRS) model was developed by adding a predefined set of predictors to the current French Candidate Risk Score (French-CRS) model. Sensitivity analyses were performed, which included intra-aortic balloon pumps (IABP) and percutaneous ventricular assist devices (VAD) in the definition of short-term mechanical circulatory support (MCS) for the US-CRS. Performance of the US-CRS model, French-CRS model, and 6-status model in the test dataset was evaluated by time-dependent area under the receiver operating characteristic curve (AUC) for death without transplant within 6 weeks and by overall survival concordance (c-index) with integrated AUC. Results: A total of 16,905 adult heart transplant candidates were listed (mean [SD] age, 53 [13] years; 73% male; 58% White); 796 patients (4.7%) died without a transplant. The final US-CRS contained time-varying short-term MCS (venoarterial extracorporeal membrane oxygenation or temporary surgical VAD), the log of bilirubin, estimated glomerular filtration rate, the log of B-type natriuretic peptide, albumin, sodium, and durable left ventricular assist device.
In the test dataset, the AUC for death within 6 weeks of listing for the US-CRS model was 0.79 (95% CI, 0.75-0.83), for the French-CRS model was 0.72 (95% CI, 0.67-0.76), and 6-status model was 0.68 (95% CI, 0.62-0.73). Overall c-index for the US-CRS model was 0.76 (95% CI, 0.73-0.80), for the French-CRS model was 0.69 (95% CI, 0.65-0.73), and 6-status model was 0.67 (95% CI, 0.63-0.71). Classifying IABP and percutaneous VAD as short-term MCS reduced the effect size by 54%. Conclusions and Relevance: In this registry-based study of US heart transplant candidates, a continuous multivariable allocation score outperformed the 6-status system in rank ordering heart transplant candidates by medical urgency and may be useful for the medical urgency component of heart allocation.
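The AUCs compared above all measure the same discrimination quantity: the probability that a randomly chosen patient who died carries a higher risk score than a randomly chosen patient who did not (ties counting one half). A minimal implementation, with made-up scores rather than US-CRS values:

```python
def auc(scores_events, scores_nonevents):
    """Probability a random event case outranks a random non-event
    case by risk score; ties count 1/2. This is the quantity behind
    a reported AUC for a binary outcome."""
    wins = 0.0
    for e in scores_events:
        for n in scores_nonevents:
            wins += 1.0 if e > n else 0.5 if e == n else 0.0
    return wins / (len(scores_events) * len(scores_nonevents))

# Hypothetical risk scores
died = [0.9, 0.8, 0.6]
survived = [0.7, 0.4, 0.3, 0.2]
print(auc(died, survived))  # 11/12 ~ 0.917
```

Time-dependent AUC and the c-index extend this pairwise comparison to censored follow-up, restricting it to pairs whose ordering the data can actually determine.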


Subjects
Heart Failure , Heart Transplantation , Tissue and Organ Procurement , Adult , Female , Humans , Male , Middle Aged , Bilirubin , Clinical Laboratory Services , Heart , Risk Factors , Risk Assessment , Heart Failure/mortality , Heart Failure/surgery , United States , Health Care Rationing/methods , Predictive Value of Tests , Tissue and Organ Procurement/methods , Tissue and Organ Procurement/organization & administration
7.
Crit Care Med ; 51(12): 1697-1705, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37378460

ABSTRACT

OBJECTIVES: To identify and validate novel COVID-19 subphenotypes with potential heterogeneous treatment effects (HTEs) using electronic health record (EHR) data and 33 unique biomarkers. DESIGN: Retrospective cohort study of adults presenting for acute care, with analysis of biomarkers from residual blood collected during routine clinical care. Latent profile analysis (LPA) of biomarker and EHR data identified subphenotypes of COVID-19 inpatients, which were validated using a separate cohort of patients. HTE for glucocorticoid use among subphenotypes was evaluated using both an adjusted logistic regression model and propensity matching analysis for in-hospital mortality. SETTING: Emergency departments from four medical centers. PATIENTS: Patients diagnosed with COVID-19 based on International Classification of Diseases, 10th Revision codes and laboratory test results. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Biomarker levels generally paralleled illness severity, with higher levels among more severely ill patients. LPA of 522 COVID-19 inpatients from three sites identified two profiles: profile 1 (n = 332), with higher levels of albumin and bicarbonate, and profile 2 (n = 190), with higher inflammatory markers. Profile 2 patients had a higher median length of stay (7.4 vs 4.1 d; p < 0.001) and higher in-hospital mortality compared with profile 1 patients (25.8% vs 4.8%; p < 0.001). These findings were validated in a separate, single-site cohort (n = 192), which demonstrated similar outcome differences. HTE was observed (p = 0.03), with glucocorticoid treatment associated with increased mortality for profile 1 patients (odds ratio = 4.54). CONCLUSIONS: In this multicenter study combining EHR data with research biomarker analysis of patients with COVID-19, we identified novel profiles with divergent clinical outcomes and differential treatment responses.


Subjects
COVID-19 , Adult , Humans , Retrospective Studies , Glucocorticoids/therapeutic use , Biomarkers , Hospital Mortality
8.
J Surg Res ; 291: 7-16, 2023 11.
Article in English | MEDLINE | ID: mdl-37329635

ABSTRACT

INTRODUCTION: Weight gain among young adults continues to increase. Identifying adults at high risk for weight gain and intervening before they gain weight could have a major public health impact. Our objective was to develop and test electronic health record-based machine learning models to predict weight gain in young adults with overweight/class 1 obesity. METHODS: Seven machine learning models were assessed, including three regression models, random forest, single-layer neural network, gradient-boosted decision trees, and support vector machine (SVM) models. Four categories of predictors were included: 1) demographics; 2) obesity-related health conditions; 3) laboratory data and vital signs; and 4) neighborhood-level variables. The cohort was split 60:40 for model training and validation. Area under the receiver operating characteristic curve (AUC) values were calculated to determine model accuracy at predicting high-risk individuals, defined by ≥10% total body weight gain within 2 y. Variable importance was measured via generalized analysis of variance procedures. RESULTS: Of the 24,183 patients (mean [SD] age, 32.0 [6.3] y; 55.1% females) in the study, 14.2% gained ≥10% total body weight. AUCs varied from 0.557 (SVM) to 0.675 (gradient-boosted decision trees). Age, sex, and baseline body mass index were the most important predictors in all models except the SVM and neural network. CONCLUSIONS: Our machine learning models performed similarly and had modest accuracy for identifying young adults at risk of weight gain. Future models may need to incorporate behavioral and/or genetic information to enhance model accuracy.


Subjects
Machine Learning , Weight Gain , Female , Humans , Young Adult , Adult , Male , Neural Networks (Computer) , Electronic Health Records , Obesity/complications , Obesity/diagnosis
9.
J Surg Oncol ; 128(2): 280-288, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37073788

ABSTRACT

BACKGROUND: Outcomes for pancreatic adenocarcinoma (PDAC) remain difficult to prognosticate. Multiple models attempt to predict survival following the resection of PDAC, but their utility in the neoadjuvant population is unknown. We aimed to assess their accuracy among patients that received neoadjuvant chemotherapy (NAC). METHODS: We performed a multi-institutional retrospective analysis of patients who received NAC and underwent resection of PDAC. Two prognostic systems were evaluated: the Memorial Sloan Kettering Cancer Center Pancreatic Adenocarcinoma Nomogram (MSKCCPAN) and the American Joint Committee on Cancer (AJCC) staging system. Discrimination between predicted and actual disease-specific survival was assessed using the Uno C-statistic and Kaplan-Meier method. Calibration of the MSKCCPAN was assessed using the Brier score. RESULTS: A total of 448 patients were included. There were 232 (51.8%) females, and the mean age was 64.1 years (±9.5). Most had AJCC Stage I or II disease (77.7%). For the MSKCCPAN, the Uno C-statistic at 12-, 24-, and 36-month time points was 0.62, 0.63, and 0.62, respectively. The AJCC system demonstrated similarly mediocre discrimination. The Brier score for the MSKCCPAN was 0.15 at 12 months, 0.26 at 24 months, and 0.30 at 36 months, demonstrating modest calibration. CONCLUSIONS: Current survival prediction models and staging systems for patients with PDAC undergoing resection after NAC have limited accuracy.
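The Brier score used for calibration above is simply the mean squared difference between predicted event probabilities and observed 0/1 outcomes (lower is better; 0.25 is what a constant 0.5 prediction scores). A minimal sketch with invented predictions:

```python
def brier(predicted_probs, outcomes):
    """Brier score: mean squared difference between a predicted
    event probability and the observed 0/1 outcome."""
    return sum((p - y) ** 2
               for p, y in zip(predicted_probs, outcomes)) / len(outcomes)

# Hypothetical 12-month event predictions vs. observed outcomes
preds = [0.2, 0.1, 0.4, 0.8, 0.3]
actual = [0, 0, 1, 1, 0]
print(brier(preds, actual))  # ~0.108
```

Against that yardstick, the reported 0.26 at 24 months and 0.30 at 36 months sit close to, or worse than, an uninformative constant prediction, which is why the authors call the calibration only modest.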


Subjects
Adenocarcinoma , Pancreatic Ductal Carcinoma , Pancreatic Neoplasms , Female , Humans , Male , Middle Aged , Adenocarcinoma/surgery , Pancreatic Ductal Carcinoma/drug therapy , Pancreatic Ductal Carcinoma/surgery , Neoadjuvant Therapy , Neoplasm Staging , Nomograms , Pancreatic Neoplasms/drug therapy , Pancreatic Neoplasms/surgery , Prognosis , Retrospective Studies , Pancreatic Neoplasms
10.
J Biomed Inform ; 142: 104346, 2023 06.
Article in English | MEDLINE | ID: mdl-37061012

ABSTRACT

Daily progress notes are a common note type in the electronic health record (EHR) in which healthcare providers document the patient's daily progress and treatment plans. The EHR is designed to document all the care provided to patients, but it also enables note bloat, with extraneous information that distracts from the diagnoses and treatment plans. Application of natural language processing (NLP) to the EHR is a growing field, with the majority of methods devoted to information extraction. Few tasks use NLP methods for downstream diagnostic decision support. We introduced the 2022 National NLP Clinical Challenge (N2C2) Track 3: Progress Note Understanding - Assessment and Plan Reasoning as one step towards a new suite of tasks. The Assessment and Plan Reasoning task focuses on the most critical components of progress notes, the Assessment and Plan subsections, where health problems and diagnoses are contained. The goal of the task was to develop and evaluate NLP systems that automatically predict causal relations between the overall status of the patient contained in the Assessment section and each component of the Plan section, which contains the diagnoses and treatment plans. The aim was to identify and prioritize diagnoses as a first step in diagnostic decision support: finding the most relevant information in long documents like daily progress notes. We present the results of the 2022 N2C2 Track 3 and provide a description of the data, evaluation, participation, and system performance.


Subjects
Electronic Health Records , Information Storage and Retrieval , Humans , Natural Language Processing , Health Personnel
11.
J Biomed Inform ; 138: 104286, 2023 02.
Article in English | MEDLINE | ID: mdl-36706848

ABSTRACT

The meaningful use of electronic health records (EHR) continues to progress in the digital era with clinical decision support systems augmented by artificial intelligence. A priority in improving the provider experience is to overcome information overload and reduce cognitive burden so that fewer medical errors and cognitive biases are introduced during patient care. One major type of medical error is diagnostic error due to systematic or predictable errors in judgment that rely on heuristics. The potential for clinical natural language processing (cNLP) to model diagnostic reasoning in humans with forward reasoning from data to diagnosis, and thereby potentially reduce cognitive burden and medical error, has not been investigated. Existing tasks to advance the science in cNLP have largely focused on information extraction and named entity recognition through classification tasks. We introduce a novel suite of tasks, Diagnostic Reasoning Benchmarks (DR.BENCH), as a new benchmark for developing and evaluating cNLP models with clinical diagnostic reasoning ability. The suite includes six tasks from ten publicly available datasets addressing clinical text understanding, medical knowledge reasoning, and diagnosis generation. DR.BENCH is the first clinical suite of tasks designed as a natural language generation framework to evaluate pre-trained language models for diagnostic reasoning. The goal of DR.BENCH is to advance the science in cNLP to support downstream applications in computerized diagnostic decision support and improve the efficiency and accuracy of healthcare providers during patient care. We fine-tune and evaluate state-of-the-art generative models on DR.BENCH. Experiments show that with domain-adaptation pre-training on medical knowledge, the models demonstrate opportunities for improvement when evaluated on DR.BENCH. We share DR.BENCH as a publicly available GitLab repository with a systematic approach to load and evaluate models for the cNLP community. We also discuss the carbon footprint produced during the experiments and encourage future work on DR.BENCH to report the carbon footprint.


Subjects
Artificial Intelligence , Natural Language Processing , Humans , Benchmarking , Problem Solving , Information Storage and Retrieval
12.
Crit Care Med ; 50(2): 212-223, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35100194

ABSTRACT

OBJECTIVES: Body temperature trajectories of infected patients are associated with specific immune profiles and survival. We determined the association between temperature trajectories and distinct manifestations of coronavirus disease 2019. DESIGN: Retrospective observational study. SETTING: Four hospitals within an academic healthcare system from March 2020 to February 2021. PATIENTS: All adult patients hospitalized with coronavirus disease 2019. INTERVENTIONS: Using a validated group-based trajectory model, we classified patients into four previously defined temperature trajectory subphenotypes using oral temperature measurements from the first 72 hours of hospitalization. Clinical characteristics, biomarkers, and outcomes were compared between subphenotypes. MEASUREMENTS AND MAIN RESULTS: The 5,903 hospitalized coronavirus disease 2019 patients were classified into four subphenotypes: hyperthermic slow resolvers (n = 1,452, 25%), hyperthermic fast resolvers (1,469, 25%), normothermics (2,126, 36%), and hypothermics (856, 15%). Hypothermics had abnormal coagulation markers, with the highest d-dimer and fibrin monomers (p < 0.001) and the highest prevalence of cerebrovascular accidents (10%, p = 0.001). The prevalence of venous thromboembolism was significantly different between subphenotypes (p = 0.005), with the highest rate in hypothermics (8.5%) and lowest in hyperthermic slow resolvers (5.1%). Hyperthermic slow resolvers had abnormal inflammatory markers, with the highest C-reactive protein, ferritin, and interleukin-6 (p < 0.001). Hyperthermic slow resolvers had increased odds of mechanical ventilation, vasopressors, and 30-day inpatient mortality (odds ratio, 1.58; 95% CI, 1.13-2.19) compared with hyperthermic fast resolvers. Over the course of the pandemic, we observed a drastic decrease in the prevalence of hyperthermic slow resolvers, from representing 53% of admissions in March 2020 to less than 15% by 2021. 
We found that dexamethasone use was associated with significant reduction in probability of hyperthermic slow resolvers membership (27% reduction; 95% CI, 23-31%; p < 0.001). CONCLUSIONS: Hypothermics had abnormal coagulation markers, suggesting a hypercoagulable subphenotype. Hyperthermic slow resolvers had elevated inflammatory markers and the highest odds of mortality, suggesting a hyperinflammatory subphenotype. Future work should investigate whether temperature subphenotypes benefit from targeted antithrombotic and anti-inflammatory strategies.


Subjects
Body Temperature , COVID-19/pathology , Hyperthermia/pathology , Hypothermia/pathology , Phenotype , Academic Medical Centers , Aged , Anti-Inflammatory Agents/therapeutic use , Biomarkers/blood , Blood Coagulation , Cohort Studies , Dexamethasone/therapeutic use , Female , Humans , Inflammation , Male , Middle Aged , Organ Dysfunction Scores , Retrospective Studies , SARS-CoV-2
13.
Crit Care Med ; 50(9): 1339-1347, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores greater than or equal to the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause inhospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of which 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191, intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.49% vs 0.26%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high- (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, a decrease in time to ICU transfer, and an increase in vital sign reassessment within 2 hours.
CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced inhospital mortality, likely driven by earlier and more frequent ICU transfer.
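The percentile-threshold workflow described above (≥95th percentile high risk, 89th-95th intermediate) maps naturally onto a small triage function. The score distribution and tier labels below are placeholders, not the eCART model or its exact wording:

```python
def percentile_rank(score, population):
    """Percent of population scores strictly below the given score."""
    return 100.0 * sum(s < score for s in population) / len(population)

def triage(score, population):
    """Map a risk score to the study's workflow tier using the
    abstract's percentile cutoffs (labels paraphrased)."""
    pct = percentile_rank(score, population)
    if pct >= 95:
        return "high risk: physician assessment for ICU admission"
    if pct >= 89:
        return "intermediate risk: vitals every 2 h and physician review of plan"
    return "routine care"

# Placeholder score distribution for illustration
population = list(range(1000))
print(triage(990, population))  # high risk tier
print(triage(920, population))  # intermediate risk tier
print(triage(500, population))  # routine care
```

In deployment the cutoffs would be fixed score values calibrated from historical data rather than recomputed against a live population, but the tiering logic is the same.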


Subjects
Early Warning Score , Cardiac Arrest , Adult , Cardiac Arrest/diagnosis , Cardiac Arrest/therapy , Hospital Mortality , Humans , Intensive Care Units , Machine Learning , Vital Signs
14.
Pediatr Crit Care Med ; 23(7): 514-523, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35446816

ABSTRACT

OBJECTIVES: Unrecognized clinical deterioration during illness requiring hospitalization is associated with a high risk of mortality and long-term morbidity among children. Our objective was to develop and externally validate machine learning algorithms using electronic health records for identifying ICU transfer within 12 hours, an indicator of a deteriorating child's condition. DESIGN: Observational cohort study. SETTING: Two urban, tertiary-care, academic hospitals (sites 1 and 2). PATIENTS: Pediatric inpatients (age <18 yr). INTERVENTIONS: None. MEASUREMENT AND MAIN RESULTS: Our primary outcome was direct ward-to-ICU transfer. Using age, vital signs, and laboratory results, we derived logistic regression with regularization, restricted cubic spline regression, random forest, and gradient boosted machine learning models. Among 50,830 admissions at site 1 and 88,970 admissions at site 2, 1,993 (3.92%) and 2,317 (2.60%) experienced the primary outcome, respectively. Site 1 data were split longitudinally into derivation (2009-2017) and validation (2018-2019) cohorts, whereas site 2 constituted the external test cohort. Across both sites, the gradient boosted machine was the most accurate model and outperformed a modified version of the Bedside Pediatric Early Warning Score that used only physiologic variables in terms of discrimination (C-statistic site 1: 0.84 vs 0.71, p < 0.001; site 2: 0.80 vs 0.74, p < 0.001), sensitivity, specificity, and number needed to alert. CONCLUSIONS: We developed and externally validated a novel machine learning model that identifies ICU transfers in hospitalized children more accurately than current tools. Our model enables early detection of children at risk for deterioration, thereby creating opportunities for intervention and improvement in outcomes.


Subjects
Electronic Health Records , Machine Learning , Child , Cohort Studies , Humans , Intensive Care Units, Pediatric , Retrospective Studies , Vital Signs
15.
Am J Respir Crit Care Med ; 204(4): 403-411, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33891529

ABSTRACT

RATIONALE: Variation in hospital mortality has been described for coronavirus disease 2019 (COVID-19), but the factors that explain these differences remain unclear. OBJECTIVE: Our objective was to utilize a large, nationally representative dataset of critically ill adults with COVID-19 to determine which factors explain mortality variability. METHODS: In this multicenter cohort study, we examined adults hospitalized in intensive care units with COVID-19 at 70 United States hospitals between March and June 2020. The primary outcome was 28-day mortality. We examined patient-level and hospital-level variables. Mixed-effects logistic regression was used to identify factors associated with interhospital variation. The median odds ratio (OR) was calculated to compare outcomes in higher- vs. lower-mortality hospitals. A gradient boosted machine algorithm was developed for individual-level mortality models. MEASUREMENTS AND MAIN RESULTS: A total of 4,019 patients were included, 1,537 (38%) of whom died by 28 days. Mortality varied considerably across hospitals (0-82%). After adjustment for patient- and hospital-level domains, interhospital variation was attenuated (OR decline from 2.06 [95% CI, 1.73-2.37] to 1.22 [95% CI, 1.00-1.38]), with the greatest changes occurring with adjustment for acute physiology, socioeconomic status, and strain. For individual patients, the relative contribution of each domain to mortality risk was: acute physiology (49%), demographics and comorbidities (20%), socioeconomic status (12%), strain (9%), hospital quality (8%), and treatments (3%). CONCLUSION: There is considerable interhospital variation in mortality for critically ill patients with COVID-19, which is mostly explained by hospital-level socioeconomic status, strain, and acute physiologic differences. Individual mortality is driven mostly by patient-level factors.
This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).
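The median odds ratio used above to quantify interhospital variation is a standard transformation of the hospital-level random-effect variance from the mixed-effects logistic model. A hedged sketch (the variance values are invented for illustration, not estimated from the study data):

```python
import math
from statistics import NormalDist

def median_odds_ratio(random_effect_variance):
    """MOR = exp( sqrt(2 * sigma^2) * Phi^-1(0.75) ).

    Interpreted as the median increase in the odds of death for an identical
    patient moving from a lower- to a higher-mortality hospital, where
    sigma^2 is the variance of the hospital random intercepts.
    """
    return math.exp(
        math.sqrt(2.0 * random_effect_variance) * NormalDist().inv_cdf(0.75)
    )

# No between-hospital variance means an MOR of 1 (no interhospital variation);
# larger random-effect variance means a larger MOR.
print(round(median_odds_ratio(0.0), 2))  # 1.0
print(round(median_odds_ratio(0.5), 2))  # 1.96
```

The attenuation reported in the abstract (MOR falling from 2.06 to 1.22 after adjustment) corresponds to the random-effect variance shrinking as hospital-level covariates absorb the between-hospital differences.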


Subjects
Algorithms , COVID-19/epidemiology , Critical Illness/therapy , Intensive Care Units/statistics & numerical data , Aged , Comorbidity , Critical Illness/epidemiology , Female , Follow-Up Studies , Hospital Mortality/trends , Humans , Incidence , Male , Middle Aged , Prognosis , Retrospective Studies , Risk Factors , SARS-CoV-2 , Survival Rate/trends , United States/epidemiology
16.
Am J Transplant ; 21(11): 3684-3693, 2021 11.
Article in English | MEDLINE | ID: mdl-33864733

ABSTRACT

Under the new US heart allocation policy, transplant centers listed significantly more candidates at high-priority statuses (Status 1 and 2) with mechanical circulatory support devices than expected. We determined whether the practice change was widespread or concentrated among certain transplant centers. Using data from the Scientific Registry of Transplant Recipients, we used mixed-effect logistic regression to compare the observed listings of adult, heart-alone transplant candidates post-policy (December 2018 to February 2020) to a seasonally matched pre-policy cohort (December 2016 to February 2018). US transplant centers (N = 96) listed a similar number of candidates in each policy period (4472 vs. 4498) but listed significantly more at high-priority status (25.5% vs. 7.0%, p < .001) than expected. Adjusted for candidate characteristics, 91 of 96 (94.8%) centers listed significantly more candidates at high-priority status than expected, with the unexpected increase varying from 4.8% to 50.4% (interquartile range [IQR]: 14.0%-23.3%). Centers in OPOs with the highest Status 1A transplant rate pre-policy were significantly more likely to utilize high-priority status under the new policy (OR: 9.73, p = .01). The new heart allocation policy was associated with widespread and significantly variable changes in transplant center practice that may undermine the effectiveness of the new system.
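The comparison above is, at its core, observed versus expected high-priority listing rates, where "expected" comes from the pre-policy period adjusted for candidate mix. A crude unadjusted sketch (the raw counts below are approximations back-derived from the abstract's cohort-level percentages, 7.0% of 4,498 pre-policy and 25.5% of 4,472 post-policy, not SRTR data; the real analysis used mixed-effect logistic regression, which this toy version omits):

```python
def unexpected_increase(pre_high, pre_total, post_high, post_total):
    """Percentage-point increase in high-priority listing beyond the
    pre-policy rate (a crude stand-in for adjusted observed-minus-expected).
    """
    expected_rate = pre_high / pre_total
    observed_rate = post_high / post_total
    return 100.0 * (observed_rate - expected_rate)

# Approximate cohort-level counts consistent with the abstract's percentages.
print(round(unexpected_increase(315, 4498, 1140, 4472), 1))  # ~18.5 points
```

A per-center version of this quantity, adjusted for candidate characteristics, is what varied from 4.8% to 50.4% across the 96 centers.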


Subjects
Heart Transplantation , Tissue and Organ Procurement , Adult , Humans , Policies , Transplant Recipients , Waiting Lists
17.
Crit Care Med ; 49(7): e673-e682, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission.
CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
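The sensitivity and specificity figures above are obtained by cross-tabulating each published criterion against the chart-review gold standard. A self-contained sketch with invented toy labels (not the study's data):

```python
def sens_spec(criterion, gold):
    """Sensitivity and specificity of a binary infection criterion against
    a gold-standard label (1 = infected per chart review, 0 = not)."""
    tp = sum(1 for c, g in zip(criterion, gold) if c and g)
    tn = sum(1 for c, g in zip(criterion, gold) if not c and not g)
    fp = sum(1 for c, g in zip(criterion, gold) if c and not g)
    fn = sum(1 for c, g in zip(criterion, gold) if not c and g)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

gold      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # chart-review infection status
criterion = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]   # e.g. a billing-code definition
print(sens_spec(criterion, gold))  # (0.75, 0.8333...)
```

The trade-off the abstract describes falls out directly: a permissive criterion (like Sepsis-3 here) flags more true infections (higher sensitivity) at the cost of more false positives (lower specificity), and vice versa for Rhee.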


Subjects
Data Accuracy , Electronic Health Records/standards , Infections/epidemiology , Information Storage and Retrieval/methods , Adult , Aged , Anti-Bacterial Agents/therapeutic use , Antibiotic Prophylaxis/statistics & numerical data , Blood Culture , Chicago/epidemiology , False Positive Reactions , Female , Humans , Infections/diagnosis , International Classification of Diseases , Male , Middle Aged , Organ Dysfunction Scores , Patient Admission/statistics & numerical data , Prevalence , Retrospective Studies , Sensitivity and Specificity , Sepsis/diagnosis
18.
Crit Care Med ; 49(10): 1694-1705, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03).
CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.
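Under the logistic models reported above, a per-hour odds ratio compounds multiplicatively, so the effect of a multi-hour delay can be read directly off the point estimate (holding covariates fixed). A small sketch using the abstract's estimates of 1.04 per hour overall and 1.07 per hour in the high-risk subgroup:

```python
def delay_odds_multiplier(or_per_hour, hours):
    """Multiplicative change in the odds of in-hospital death for a given
    antibiotic delay, assuming a constant per-hour odds ratio."""
    return or_per_hour ** hours

# A 6-hour ordering delay roughly multiplies the odds of death by:
print(round(delay_odds_multiplier(1.04, 6), 2))  # 1.27 (overall estimate)
print(round(delay_odds_multiplier(1.07, 6), 2))  # 1.50 (high-risk subgroup)
```

This is why the subgroup finding matters clinically: the same 6-hour delay carries roughly double the excess odds in the high-risk subphenotype.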


Subjects
Anti-Bacterial Agents/administration & dosage , Phenotype , Sepsis/genetics , Time-to-Treatment/statistics & numerical data , Aged , Aged, 80 and over , Anti-Bacterial Agents/therapeutic use , Emergency Service, Hospital/organization & administration , Emergency Service, Hospital/statistics & numerical data , Female , Hospitalization/statistics & numerical data , Humans , Illinois/epidemiology , Male , Middle Aged , Prospective Studies , Retrospective Studies , Sepsis/drug therapy , Sepsis/physiopathology , Time Factors
19.
Am J Respir Crit Care Med ; 202(7): 996-1004, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32551817

ABSTRACT

Rationale: Two distinct phenotypes of acute respiratory distress syndrome (ARDS) with differential clinical outcomes and responses to randomly assigned treatment have consistently been identified in randomized controlled trial cohorts using latent class analysis. Plasma biomarkers, key components in phenotype identification, currently lack point-of-care assays and represent a barrier to the clinical implementation of phenotypes. Objectives: The objective of this study was to develop models to classify ARDS phenotypes using readily available clinical data only. Methods: Three randomized controlled trial cohorts served as the training data set (ARMA [High vs. Low Vt], ALVEOLI [Assessment of Low Vt and Elevated End-Expiratory Pressure to Obviate Lung Injury], and FACTT [Fluids and Catheter Treatment Trial]; n = 2,022), and a fourth served as the validation data set (SAILS [Statins for Acutely Injured Lungs from Sepsis]; n = 745). A gradient-boosted machine algorithm was used to develop classifier models using 24 variables (demographics, vital signs, laboratory, and respiratory variables) at enrollment. In two secondary analyses, the ALVEOLI and FACTT cohorts each, individually, served as the validation data set, and the remaining combined cohorts formed the training data set for each analysis. Model performance was evaluated against the latent class analysis-derived phenotype. Measurements and Main Results: For the primary analysis, the model accurately classified the phenotypes in the validation cohort (area under the receiver operating characteristic curve [AUC], 0.95; 95% confidence interval [CI], 0.94-0.96). Using a probability cutoff of 0.5 to assign class, inflammatory biomarkers (IL-6, IL-8, and sTNFR-1; P < 0.0001) and 90-day mortality (38% vs. 24%; P = 0.0002) were significantly higher in the hyperinflammatory phenotype as classified by the model.
Model accuracy was similar when ALVEOLI (AUC, 0.94; 95% CI, 0.92-0.96) and FACTT (AUC, 0.94; 95% CI, 0.92-0.95) were used as the validation cohorts. Significant treatment interactions were observed with the clinical classifier model-assigned phenotypes in both ALVEOLI (P = 0.0113) and FACTT (P = 0.0072) cohorts. Conclusions: ARDS phenotypes can be accurately identified using machine learning models based on readily available clinical data and may enable rapid phenotype identification at the bedside.
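At prediction time, the classification step described above reduces to thresholding the model's phenotype probability at the reported cutoff of 0.5. A minimal sketch (the probabilities are invented for illustration; the hyper-/hypoinflammatory labels follow the naming used in the ARDS phenotyping literature):

```python
def assign_phenotype(prob_hyperinflammatory, cutoff=0.5):
    """Assign an ARDS phenotype from a classifier's predicted probability
    of the hyperinflammatory class, using a fixed probability cutoff."""
    if prob_hyperinflammatory >= cutoff:
        return "hyperinflammatory"
    return "hypoinflammatory"

# Invented model outputs for four patients at enrollment.
probs = [0.91, 0.12, 0.55, 0.49]
print([assign_phenotype(p) for p in probs])
```

The validation in the abstract then checks that patients the model calls hyperinflammatory really do show higher IL-6, IL-8, sTNFR-1, and 90-day mortality, and that treatment-effect interactions track the model-assigned classes.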


Subjects
Machine Learning , Respiratory Distress Syndrome/classification , Age Factors , Area Under Curve , Bicarbonates/metabolism , Bilirubin/metabolism , Biomarkers, Tumor , Blood Pressure , Carbon Dioxide/metabolism , Creatinine/metabolism , Humans , Inflammation , Intercellular Adhesion Molecule-1/metabolism , Interleukin-6/metabolism , Interleukin-8/metabolism , Latent Class Analysis , Leukocyte Count , Mortality , Oxygen/metabolism , Partial Pressure , Phenotype , Plasminogen Activator Inhibitor 1/metabolism , Platelet Count , Prognosis , Protein C/metabolism , Pulmonary Ventilation , Randomized Controlled Trials as Topic , Receptors, Tumor Necrosis Factor, Type I/metabolism , Respiratory Distress Syndrome/immunology , Respiratory Distress Syndrome/physiopathology , Respiratory Distress Syndrome/therapy , Serum Albumin/metabolism , Tidal Volume , Vasoconstrictor Agents/therapeutic use , Vital Signs
20.
Crit Care Med ; 48(11): e1020-e1028, 2020 11.
Article in English | MEDLINE | ID: mdl-32796184

ABSTRACT

OBJECTIVES: Bacteremia and fungemia can cause life-threatening illness with high mortality rates, which increase with delays in antimicrobial therapy. The objective of this study is to develop machine learning models to predict blood culture results at the time of the blood culture order using routine data in the electronic health record. DESIGN: Retrospective analysis of a large, multicenter inpatient dataset. SETTING: Two academic tertiary medical centers between the years 2007 and 2018. SUBJECTS: All hospitalized patients who received a blood culture during hospitalization. INTERVENTIONS: The dataset was partitioned temporally into development and validation cohorts: the logistic regression and gradient boosting machine models were trained on the earliest 80% of hospital admissions and validated on the most recent 20%. MEASUREMENTS AND MAIN RESULTS: There were 252,569 blood culture days, defined as nonoverlapping 24-hour periods in which one or more blood cultures were ordered. In the validation cohort, there were 50,514 blood culture days, with 3,762 cases of bacteremia (7.5%) and 370 cases of fungemia (0.7%). The gradient boosting machine model for bacteremia had a significantly higher area under the receiver operating characteristic curve (0.78 [95% CI 0.77-0.78]) than the logistic regression model (0.73 [0.72-0.74]) (p < 0.001). The model identified a high-risk group with over 30 times the bacteremia occurrence rate of the low-risk group (27.4% vs 0.9%; p < 0.001). Using the low-risk cut-off, the model identifies bacteremia with 98.7% sensitivity. The gradient boosting machine model for fungemia had high discrimination (area under the receiver operating characteristic curve 0.88 [95% CI 0.86-0.90]). The high-risk fungemia group had 252 fungemic cultures compared with one fungemic culture in the low-risk group (5.0% vs 0.02%; p < 0.001). Further, the high-risk group had a mortality rate 60 times higher than the low-risk group (28.2% vs 0.4%; p < 0.001).
CONCLUSIONS: Our novel models identified patients at low and high risk for bacteremia and fungemia using routinely collected electronic health record data. Further research is needed to evaluate the cost-effectiveness and impact of model implementation in clinical practice.
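The risk stratification described above amounts to applying two cutoffs to the model's predicted probability; the cutoff values below are invented placeholders, not the study's thresholds. The sketch also shows why a low-risk cutoff is evaluated by sensitivity: it is meant to rule out, so nearly all true bacteremia cases must score above it.

```python
def risk_group(score, low=0.01, high=0.20):
    """Map a predicted probability of bacteremia to a risk stratum using
    two cutoffs (placeholder values, not the study's thresholds)."""
    if score < low:
        return "low"
    if score >= high:
        return "high"
    return "medium"

def sensitivity_above_low(scores, labels, low=0.01):
    """Fraction of true bacteremia cases NOT ruled out by the low cutoff,
    i.e. the sensitivity of the 'not low-risk' decision."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= low)
    positives = sum(labels)
    return tp / positives

# Invented scores and culture results for five blood-culture days.
scores = [0.30, 0.05, 0.005, 0.02, 0.001]
labels = [1, 1, 0, 1, 0]
print([risk_group(s) for s in scores])        # ['high', 'medium', 'low', 'medium', 'low']
print(sensitivity_above_low(scores, labels))  # 1.0
```

On the real validation cohort, the reported 98.7% sensitivity means roughly 1.3% of bacteremic culture days fell below the low-risk cutoff.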


Subjects
Bacteremia/diagnosis , Electronic Health Records/statistics & numerical data , Fungemia/diagnosis , Machine Learning , Aged , Bacteremia/blood , Bacteremia/etiology , Bacteremia/microbiology , Blood Culture , Female , Fungemia/blood , Fungemia/etiology , Fungemia/microbiology , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged , Models, Statistical , Reproducibility of Results , Retrospective Studies , Risk Factors