Results 1 - 20 of 27
1.
J Am Coll Emerg Physicians Open ; 5(2): e13117, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38500599

ABSTRACT

Objective: Millions of Americans are infected by influenza annually. A minority seek care in the emergency department (ED) and, of those, only a limited number experience severe disease or death. ED clinicians must distinguish those at risk for deterioration from those who can be safely discharged. Methods: We developed random forest machine learning (ML) models to estimate needs for critical care within 24 h and inpatient care within 72 h in ED patients with influenza. Predictor data were limited to those recorded prior to the ED disposition decision: demographics, ED complaint, medical problems, vital signs, supplemental oxygen use, and laboratory results. Our study population comprised adults diagnosed with influenza at one of five EDs in our university health system between January 1, 2017 and May 18, 2022; visits were divided into two cohorts to facilitate model development and validation. Prediction performance was assessed by the area under the receiver operating characteristic curve (AUC) and the Brier score. Results: Among 8032 patients with laboratory-confirmed influenza, incidence of critical care needs was 6.3% and incidence of inpatient care needs was 19.6%. The most common reasons for ED visit were symptoms of respiratory tract infection, fever, and shortness of breath. Model AUCs were 0.89 (95% CI 0.86-0.93) for prediction of critical care needs and 0.90 (95% CI 0.88-0.93) for inpatient care needs; Brier scores were 0.026 and 0.042, respectively. Important predictors included shortness of breath, increasing respiratory rate, and a high number of comorbid diseases. Conclusions: ML methods can be used to accurately predict clinical deterioration in ED patients with influenza and have potential to support ED disposition decision-making.
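The two performance metrics this abstract reports (AUC and Brier score) can be computed from scratch. A minimal sketch on toy predictions and labels — the values below are illustrative, not the study's data:

```python
def auc(y_true, y_score):
    """Rank (Mann-Whitney) formulation of the area under the ROC curve:
    the probability that a random positive is scored above a random negative."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(y_true, y_score):
    """Mean squared error between predicted probability and outcome;
    lower is better (the abstract reports 0.026 and 0.042)."""
    return sum((s - y) ** 2 for s, y in zip(y_score, y_true)) / len(y_true)

y = [0, 0, 1, 0, 1]            # toy outcomes (1 = needed critical care)
p = [0.1, 0.2, 0.8, 0.4, 0.6]  # toy model probabilities
print(auc(y, p))    # 1.0: every positive outranks every negative
print(brier(y, p))  # mean squared error, ~0.082 on this toy set
```

Note that AUC measures ranking quality only, while the Brier score also rewards calibrated probabilities — which is why the abstract reports both.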

2.
JAMIA Open ; 6(4): ooad107, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38638298

ABSTRACT

Objective: To investigate how missing data in the patient problem list may impact racial disparities in the predictive performance of a machine learning (ML) model for emergency department (ED) triage. Materials and Methods: Racial disparities may exist in the missingness of EHR data (eg, systematic differences in access, testing, and/or treatment) that can impact model predictions across racialized patient groups. We used an ML model that predicts patients' risk for adverse events to produce triage-level recommendations, patterned after a clinical decision support tool deployed at multiple EDs. We compared the model's predictive performance on sets of observed (problem list data at the point of triage) versus manipulated (updated to the more complete problem list at the end of the encounter) test data. These differences were compared between Black and non-Hispanic White patient groups using multiple performance measures relevant to health equity. Results: There were modest, but significant, changes in predictive performance comparing the observed to manipulated models across both Black and non-Hispanic White patient groups; c-statistic improvement ranged between 0.027 and 0.058. The manipulation produced no between-group differences in c-statistic by race. However, there were small between-group differences in other performance measures, with greater change for non-Hispanic White patients. Discussion: Problem list missingness impacted model performance for both patient groups, with marginal differences detected by race. Conclusion: Further exploration is needed to examine how missingness may contribute to racial disparities in clinical model predictions across settings. The novel manipulation method demonstrated here may aid future research.

3.
Sci Rep ; 12(1): 21528, 2022 12 13.
Article in English | MEDLINE | ID: mdl-36513693

ABSTRACT

Monocyte distribution width (MDW) is a novel marker of monocyte activation, which is known to occur in the immune response to viral pathogens. Our objective was to determine the performance of MDW and other leukocyte parameters as screening tests for SARS-CoV-2 and influenza infection. This was a prospective cohort analysis of adult patients who underwent complete blood count (CBC) and SARS-CoV-2 or influenza testing in an Emergency Department (ED) between January 2020 and July 2021. The primary outcome was SARS-CoV-2 or influenza infection. Secondary outcomes were measures of severity of illness including inpatient hospitalization, critical care admission, hospital lengths of stay, and mortality. Descriptive statistics and test performance measures were evaluated for monocyte percentage, MDW, white blood cell (WBC) count, and neutrophil to lymphocyte ratio (NLR). 3,425 ED patient visits were included. SARS-CoV-2 testing was performed during 1,922 visits with a positivity rate of 5.4%; influenza testing was performed during 2,090 visits with a positivity rate of 2.3%. MDW was elevated in patients with SARS-CoV-2 (median 23.0U; IQR 20.5-25.1) or influenza (median 24.1U; IQR 22.0-26.9) infection, as compared to those without (18.9U; IQR 17.4-20.7 and 19.1U; IQR 17.4-21.0, respectively; P < 0.001). Monocyte percentage, WBC, and NLR values were within normal range in patients testing positive for either virus. MDW identified SARS-CoV-2 and influenza positive patients with an area under the curve (AUC) of 0.83 (95% CI 0.79-0.86) and 0.83 (95% CI 0.77-0.88), respectively. At the accepted cut-off value of 20U for MDW, sensitivities were 83.7% (95% CI 76.5-90.8%) for SARS-CoV-2 and 89.6% (95% CI 80.9-98.2%) for influenza, compared to sensitivities below 45% for monocyte percentage, WBC, and NLR. MDW negative predictive values were 98.6% (95% CI 98.0-99.3%) and 99.6% (95% CI 99.3-100.0%), respectively, for SARS-CoV-2 and influenza.
Monocyte Distribution Width (MDW), available as part of a routine complete blood count (CBC) with differential, may be a useful indicator of SARS-CoV-2 or influenza infection.


Subject(s)
COVID-19; Influenza, Human; Adult; Humans; SARS-CoV-2; COVID-19 Testing; Influenza, Human/diagnosis; Monocytes; Prospective Studies; COVID-19/diagnosis
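The cutoff-based screening arithmetic behind the sensitivity and negative predictive value figures above can be sketched directly. The MDW values and infection labels below are invented for illustration; only the 20U cutoff comes from the study:

```python
CUTOFF = 20.0  # the study's accepted MDW cutoff, in U

def screen_stats(mdw_values, infected, cutoff=CUTOFF):
    """Sensitivity and negative predictive value of 'MDW >= cutoff' as a screen."""
    tp = sum(m >= cutoff and y for m, y in zip(mdw_values, infected))
    fn = sum(m < cutoff and y for m, y in zip(mdw_values, infected))
    tn = sum(m < cutoff and not y for m, y in zip(mdw_values, infected))
    sensitivity = tp / (tp + fn)  # fraction of true infections flagged
    npv = tn / (tn + fn)          # P(not infected | negative screen)
    return sensitivity, npv

mdw = [23.0, 19.5, 18.9, 21.0, 25.2, 17.5]     # toy MDW values
flu = [True, True, False, False, True, False]  # toy infection labels
sens, npv = screen_stats(mdw, flu)             # 2/3 and 2/3 on this toy set
```

The high NPV reported in the abstract is what makes MDW attractive as a rule-out screen: a value below the cutoff makes infection unlikely.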
4.
NPJ Digit Med ; 5(1): 94, 2022 Jul 16.
Article in English | MEDLINE | ID: mdl-35842519

ABSTRACT

Demand has outstripped healthcare supply during the coronavirus disease 2019 (COVID-19) pandemic. Emergency departments (EDs) are tasked with distinguishing patients who require hospital resources from those who may be safely discharged to the community. The novelty and high variability of COVID-19 have made these determinations challenging. In this study, we developed, implemented and evaluated an electronic health record (EHR) embedded clinical decision support (CDS) system that leverages machine learning (ML) to estimate short-term risk for clinical deterioration in patients with or under investigation for COVID-19. The system translates model-generated risk for critical care needs within 24 h and inpatient care needs within 72 h into rapidly interpretable COVID-19 Deterioration Risk Levels made viewable within the ED clinician workflow. ML models were derived in a retrospective cohort of 21,452 ED patients who visited one of five ED study sites and were prospectively validated in 15,670 ED visits that occurred before (n = 4322) or after (n = 11,348) CDS implementation; model performance and numerous patient-oriented outcomes, including in-hospital mortality, were measured across study periods. Incidence of critical care needs within 24 h and inpatient care needs within 72 h were 10.7% and 22.5%, respectively, and were similar across study periods. ML model performance was excellent under all conditions, with AUC ranging from 0.85 to 0.91 for prediction of critical care needs and 0.80 to 0.90 for inpatient care needs. Total mortality was unchanged across study periods but was reduced among high-risk patients after CDS implementation.
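The final translation step this abstract describes — turning model probabilities into discrete, clinician-facing Deterioration Risk Levels — amounts to binning. A minimal sketch in which the cut points and level names are assumptions, not the deployed tool's configuration:

```python
def deterioration_risk_level(p_critical_24h, p_inpatient_72h):
    """Map the two model probabilities to a single clinician-facing level.
    Cut points (0.30, 0.10) and names are illustrative assumptions."""
    risk = max(p_critical_24h, p_inpatient_72h)  # most urgent need dominates
    if risk >= 0.30:
        return "High"
    if risk >= 0.10:
        return "Intermediate"
    return "Low"

print(deterioration_risk_level(0.35, 0.20))  # High
```

Binning trades probability resolution for interpretability at the point of care, which is the design choice the abstract's "rapidly interpretable" framing refers to.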

5.
Trials ; 23(1): 354, 2022 Apr 25.
Article in English | MEDLINE | ID: mdl-35468807

ABSTRACT

BACKGROUND: Early identification of HCV is a critical health priority, especially now that treatment options are available to limit further transmission and provide cure before long-term sequelae develop. Emergency departments (EDs) are important clinical settings for HCV screening given that EDs serve many at-risk patients who do not access other forms of healthcare. In this article, we describe the rationale and design of The Determining Effective Testing in Emergency Departments and Care Coordination on Treatment Outcomes (DETECT) for Hepatitis C (Hep C) Screening Trial. METHODS: The DETECT Hep C Screening Trial is a multi-center prospective pragmatic randomized two-arm parallel-group superiority trial to test the comparative effectiveness of nontargeted and targeted HCV screening in the ED, with a primary hypothesis that nontargeted screening is superior to targeted screening in identifying newly diagnosed HCV. This trial will be performed in the EDs at Denver Health Medical Center (Denver, CO), Johns Hopkins Hospital (Baltimore, MD), and the University of Mississippi Medical Center (Jackson, MS), sites representing approximately 225,000 annual adult visits, and was designed using the PRECIS-2 framework for pragmatic trials. When complete, we will have enrolled a minimum of 125,000 randomized patient visits and performed 13,965 HCV tests. In Denver, the Screening Trial will serve as a conduit for a distinct randomized comparative effectiveness trial to evaluate linkage-to-HCV-care strategies. All sites will further contribute to embedded observational studies to assess cost effectiveness, disparities, and social determinants of health in screening, linkage-to-care, and treatment for HCV. DISCUSSION: When complete, The DETECT Hep C Screening Trial will represent the largest ED-based pragmatic clinical trial to date, and all studies, in aggregate, will significantly inform how best to perform ED-based HCV screening.
TRIAL REGISTRATION: ClinicalTrials.gov ID: NCT04003454. Registered on 1 July 2019.


Subject(s)
Hepatitis C; Adult; Emergency Service, Hospital; Hepacivirus; Hepatitis C/diagnosis; Hepatitis C/drug therapy; Humans; Mass Screening; Prospective Studies; Treatment Outcome
6.
J Am Coll Emerg Physicians Open ; 3(2): e12679, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35252973

ABSTRACT

STUDY OBJECTIVE: Enhancement of a routine complete blood count (CBC) for detection of sepsis in the emergency department (ED) has pragmatic utility for early management. This study evaluated the performance of monocyte distribution width (MDW) alone and in combination with other routine CBC parameters as a screen for sepsis and septic shock in ED patients. METHODS: This was a prospective cohort analysis of adult patients with a CBC collected at an urban ED from January 2020 through July 2021. The performance of MDW, white blood cell (WBC) count, and neutrophil-to-lymphocyte ratio (NLR) to detect sepsis and septic shock (Sepsis-3 criteria) was evaluated using diagnostic performance measures. RESULTS: The cohort included 7952 ED patients, with 180 meeting criteria for sepsis: 43 with septic shock and 137 without shock. MDW was highest for patients with septic shock (median 24.8 U, interquartile range [IQR] 22.0-28.1) and trended downward for patients with sepsis without shock (23.9 U, IQR 20.2-26.8), infection (20.4 U, IQR 18.2-23.3), then controls (18.6 U, IQR 17.1-20.4). In isolation, MDW detected sepsis and septic shock with an area under the receiver operating characteristic curve (AUC) of 0.80 (95% confidence interval [CI] 0.77-0.84) and 0.85 (95% CI 0.80-0.91), respectively. Optimal performance was achieved in combination with WBC count and NLR for detection of sepsis (AUC 0.86, 95% CI 0.83-0.89) and septic shock (0.86, 95% CI 0.80-0.92). CONCLUSION: A CBC differential panel that includes MDW demonstrated strong performance characteristics in a broad ED population, suggesting pragmatic value as a rapid screen for sepsis and septic shock.

7.
JAMA Netw Open ; 4(7): e2117763, 2021 07 01.
Article in English | MEDLINE | ID: mdl-34309668

ABSTRACT

Importance: The National HIV Strategic Plan for the US recommends HIV screening in emergency departments (EDs). The most effective approach to ED-based HIV screening remains unknown. Objective: To compare strategies for HIV screening when integrated into usual ED practice. Design, Setting, and Participants: This randomized clinical trial included patients visiting EDs at 4 US urban hospitals between April 2014 and January 2016. Patients included were aged 16 years or older, not critically ill or mentally altered, not known to be HIV positive, and with an anticipated length of stay of 30 minutes or longer. Data were analyzed through March 2021. Interventions: Consecutive patients underwent concealed randomization to either nontargeted screening, enhanced targeted screening using a quantitative HIV risk prediction tool, or traditional targeted screening as adapted from the Centers for Disease Control and Prevention. Screening was integrated into clinical practice using opt-out consent and fourth-generation antigen-antibody assays. Main Outcomes and Measures: New HIV diagnoses using intention-to-treat analysis, absolute differences, and risk ratios (RRs). Results: A total of 76,561 patient visits were randomized; median (interquartile range) age was 40 (28-54) years, 34,807 patients (51.2%) were women, and 26,776 (39.4%) were Black, 22,131 (32.6%) non-Hispanic White, and 14,542 (21.4%) Hispanic. A total of 25,469 were randomized to nontargeted screening; 25,453, enhanced targeted screening; and 25,639, traditional targeted screening. Of the nontargeted group, 6744 participants (26.5%) completed testing and 10 (0.15%) were newly diagnosed; of the enhanced targeted group, 13,883 participants (54.5%) met risk criteria, 4488 (32.3%) completed testing, and 7 (0.16%) were newly diagnosed; and of the traditional targeted group, 7099 participants (27.7%) met risk criteria, 3173 (44.7%) completed testing, and 7 (0.22%) were newly diagnosed.
When compared with nontargeted screening, targeted strategies were not associated with a higher rate of new diagnoses (enhanced targeted and traditional targeted combined: difference, -0.01%; 95% CI, -0.04% to 0.02%; RR, 0.70; 95% CI, 0.30 to 1.56; P = .38; and enhanced targeted only: difference, -0.01%; 95% CI, -0.04% to 0.02%; RR, 0.70; 95% CI, 0.27 to 1.84; P = .47). Conclusions and Relevance: Targeted HIV screening was not superior to nontargeted HIV screening in the ED. Nontargeted screening resulted in significantly more tests performed, although all strategies identified relatively low numbers of new HIV diagnoses. Trial Registration: ClinicalTrials.gov Identifier: NCT01781949.


Subject(s)
Emergency Service, Hospital/statistics & numerical data; HIV Infections/diagnosis; Mass Screening/methods; Adolescent; Adult; Female; Humans; Intention to Treat Analysis; Male; Middle Aged; Odds Ratio; United States; Young Adult
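The trial's primary contrast — the absolute difference in new-diagnosis rates and the risk ratio — can be reproduced approximately from the counts reported in the abstract. This sketch uses the textbook Katz log-normal confidence interval, which is a standard choice but not necessarily the trial's exact analysis:

```python
import math

def risk_ratio(a, n1, b, n2, z=1.96):
    """RR of group 1 vs group 2, with a Katz log-normal 95% CI
    and the absolute rate difference."""
    p1, p2 = a / n1, b / n2
    rr = p1 / p2
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
    ci = (math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se))
    return rr, ci, p1 - p2

# combined targeted arms (7 + 7 new diagnoses) vs nontargeted, per the abstract
rr, ci, diff = risk_ratio(7 + 7, 25453 + 25639, 10, 25469)
# rr ~ 0.70, ci ~ (0.31, 1.57), diff slightly negative (~ -0.01 percentage points)
```

The reproduced values land close to the abstract's reported RR of 0.70 (95% CI 0.30 to 1.56); small discrepancies would reflect the trial's exact estimator.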
9.
Article in English | MEDLINE | ID: mdl-36168488

ABSTRACT

Objective: To reduce inappropriate antibiotic prescribing for acute respiratory infections (ARIs) by employing peer comparison with behavioral feedback in the emergency department (ED). Design: A controlled before-and-after study. Setting: The study was conducted in 5 adult EDs at teaching and community hospitals in a health system. Patients: Adults presenting to the ED with a respiratory condition diagnosis code. Hospitalized patients and those with a diagnosis code for a non-respiratory condition for which antibiotics are or may be warranted were excluded. Interventions: After a baseline period from January 2016 to March 2018, 3 EDs implemented a feedback intervention with peer comparison for attending physicians between April 2018 and December 2019; the remaining 2 EDs in the health system served as controls. The inappropriate ARI prescribing rate was calculated as the proportion of antibiotic-inappropriate ARI encounters with a prescription and was evaluated using interrupted time series analysis. Prescribing rates were also evaluated for all ARIs. Attending physicians at intervention sites received biannual e-mails with their inappropriate prescribing rate and had access to a dashboard, updated daily, showing their performance relative to their peers. Results: Among 28,544 ARI encounters, the inappropriate prescribing rate remained stable at the control EDs between the 2 periods (23.0% and 23.8%). At the intervention sites, the inappropriate prescribing rate decreased significantly from 22.0% to 15.2%. Between periods, the overall ARI prescribing rate was 38.1% and 40.6% in the control group and 35.9% and 30.6% in the intervention group. Conclusions: Behavioral feedback with peer comparison can be implemented effectively in the ED to reduce inappropriate prescribing for ARIs.
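A simplified version of the before/after contrast above is a difference-in-differences of inappropriate-prescribing rates between intervention and control EDs. The rates below are the abstract's; note the study itself used interrupted time series analysis, which this sketch does not reproduce:

```python
def did(intervention_pre, intervention_post, control_pre, control_post):
    """Difference-in-differences: change at intervention sites
    minus change at control sites."""
    return (intervention_post - intervention_pre) - (control_post - control_pre)

# rates from the abstract: intervention 22.0% -> 15.2%, control 23.0% -> 23.8%
effect = did(0.220, 0.152, 0.230, 0.238)  # about -7.6 percentage points
```

Subtracting the control-site change guards against attributing a system-wide secular trend in prescribing to the feedback intervention.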

10.
JMIR Med Inform ; 7(4): e14756, 2019 Sep 16.
Article in English | MEDLINE | ID: mdl-31579025

ABSTRACT

BACKGROUND: Patients hospitalized with heart failure suffer the highest rates of 30-day readmission among clinically defined patient populations in the United States. Investigation into the predictability of 30-day readmissions can lead to clinical decision support tools and targeted interventions that can help care providers to improve individual patient care and reduce readmission risk. OBJECTIVE: This study aimed to develop a dynamic readmission risk prediction model that yields daily predictions for patients hospitalized with heart failure toward identifying risk trajectories over time and identifying clinical predictors associated with different patterns in readmission risk trajectories. METHODS: A two-stage predictive modeling approach combining logistic and beta regression was applied to electronic health record data accumulated daily to predict 30-day readmission for 534 hospital encounters of patients with heart failure over 2750 patient days. Unsupervised clustering was performed on predictions to uncover time-dependent trends in readmission risk over the patient's hospital stay. We used data collected between September 1, 2013, and August 31, 2015, from a community hospital in Maryland (United States) for patients with a primary diagnosis of heart failure. Patients who died during the hospital stay or were transferred to other acute care hospitals or hospice care were excluded. RESULTS: Readmission occurred in 107 (107/534, 20.0%) encounters. The out-of-sample area under the curve for the two-stage predictive model was 0.73 (SD 0.08). Dynamic clinical predictors capturing laboratory results and vital signs had the highest predictive value compared with the demographic, administrative, medical, and procedural data included. Unsupervised clustering identified four risk trajectory groups: decreasing risk (131/534, 24.5% of encounters), high risk (113/534, 21.2%), moderate risk (177/534, 33.1%), and low risk (113/534, 21.2%). The decreasing risk group demonstrated a change in average probability of readmission from admission (0.69) to discharge (0.30), whereas the high risk (0.75), moderate risk (0.61), and low risk (0.39) groups maintained consistency over the hospital course. A higher level of hemoglobin, a larger decrease in potassium and diastolic blood pressure from admission to discharge, and a smaller number of past hospitalizations were associated with decreasing readmission risk (P<.001). CONCLUSIONS: Dynamically predicting readmission and quantifying trends over patients' hospital stay illuminated differing risk trajectory groups. Identifying risk trajectory patterns and distinguishing predictors may shed new light on indicators of readmission and the isolated effects of the index hospitalization.
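The trajectory-grouping idea here — clustering encounters by the shape of their daily risk predictions — can be sketched with a minimal k-means over equal-length trajectories. The abstract specifies unsupervised clustering but not the algorithm, so k-means is an assumption, and the trajectories below are toy values:

```python
def kmeans(trajectories, k, iters=20):
    """Group equal-length risk trajectories by squared distance to centroids."""
    centroids = [list(t) for t in trajectories[:k]]  # deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for t in trajectories:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(t, centroids[j])))
            groups[nearest].append(t)
        # move each centroid to the mean of its group (keep it if group is empty)
        centroids = [[sum(col) / len(g) for col in zip(*g)] if g else c
                     for g, c in zip(groups, centroids)]
    return centroids, groups

trajs = [[0.70, 0.50, 0.30],   # decreasing daily readmission risk
         [0.75, 0.74, 0.76],   # persistently high risk
         [0.68, 0.49, 0.31],
         [0.73, 0.72, 0.74]]
centroids, groups = kmeans(trajs, k=2)  # recovers the two shapes
```

Clustering on predictions rather than raw observations is what lets the shapes ("decreasing" vs "persistently high") emerge independently of which labs or vitals drove each day's score.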

11.
Infect Control Hosp Epidemiol ; 40(5): 541-550, 2019 05.
Article in English | MEDLINE | ID: mdl-30915928

ABSTRACT

BACKGROUND: Targeted screening for carbapenem-resistant organisms (CROs), including carbapenem-resistant Enterobacteriaceae (CRE) and carbapenemase-producing organisms (CPOs), remains limited; recent data suggest that existing policies miss many carriers. OBJECTIVE: Our objective was to measure the prevalence of CRO and CPO perirectal colonization at hospital unit admission and to use machine learning methods to predict probability of CRO and/or CPO carriage. METHODS: We performed an observational cohort study of all patients admitted to the medical intensive care unit (MICU) or solid organ transplant (SOT) unit at The Johns Hopkins Hospital between July 1, 2016 and July 1, 2017. Admission perirectal swabs were screened for CROs and CPOs. More than 125 variables capturing preadmission clinical and demographic characteristics were collected from the electronic medical record (EMR) system. We developed models to predict colonization probabilities using decision tree learning. RESULTS: Evaluating 2,878 admission swabs from 2,165 patients, we found that 7.5% and 1.3% of swabs were CRO and CPO positive, respectively. Organism and carbapenemase diversity among CPO isolates was high. Despite including many characteristics commonly associated with CRO/CPO carriage or infection, overall, decision tree models poorly predicted CRO and CPO colonization (C statistics, 0.57 and 0.58, respectively). In subgroup analyses, however, models did accurately identify patients with recent CRO-positive cultures who use proton-pump inhibitors as having a high likelihood of CRO colonization. CONCLUSIONS: In this inpatient population, CRO carriage was infrequent but was higher than previously published estimates. Despite including many variables associated with CRO/CPO carriage, models poorly predicted colonization status, likely due to significant host and organism heterogeneity.


Subject(s)
Carbapenem-Resistant Enterobacteriaceae/isolation & purification; Carrier State/microbiology; Enterobacteriaceae Infections/diagnosis; Enterobacteriaceae Infections/epidemiology; Adult; Aged; Baltimore/epidemiology; Carbapenems; Cohort Studies; Decision Trees; Drug Resistance, Multiple, Bacterial; Enterobacteriaceae Infections/microbiology; Female; Hospitals, University; Humans; Machine Learning; Male; Middle Aged; Patient Admission; Rectum/microbiology; Sensitivity and Specificity; Young Adult
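The subgroup finding above corresponds to a shallow decision rule of the kind tree learners surface. A toy sketch in which the field names and probabilities are illustrative assumptions (only the ~7.5% base rate and the culture/PPI subgroup come from the abstract):

```python
def predict_cro_risk(patient):
    """Toy two-split decision tree; probabilities are illustrative only."""
    if patient["recent_cro_culture"]:
        if patient["ppi_use"]:
            return 0.8   # the high-likelihood subgroup the models identified
        return 0.3
    return 0.07          # near the 7.5% CRO-positive base rate reported above

high_risk = predict_cro_risk({"recent_cro_culture": True, "ppi_use": True})
```

That a tree this shallow captures the only well-predicted subgroup is consistent with the abstract's conclusion: outside it, host and organism heterogeneity left little signal to split on.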
12.
J Am Med Inform Assoc ; 26(6): 506-515, 2019 06 01.
Article in English | MEDLINE | ID: mdl-30889243

ABSTRACT

OBJECTIVES: The study sought to identify collaborative electronic health record (EHR) usage patterns for pediatric trauma patients and determine how the usage patterns are related to patient outcomes. MATERIALS AND METHODS: A process mining-based network analysis was applied to EHR metadata and trauma registry data for a cohort of pediatric trauma patients with minor injuries at a Level I pediatric trauma center. The EHR metadata were processed into an event log that was segmented based on gaps in the temporal continuity of events. A usage pattern was constructed for each encounter by creating edges among functional roles that were captured within the same event log segment. These patterns were classified into groups using graph kernel and unsupervised spectral clustering methods. Demographics, clinical and network characteristics, and emergency department (ED) length of stay (LOS) of the groups were compared. RESULTS: Three distinct usage patterns that differed by network density were discovered: fully connected (clique), partially connected, and disconnected (isolated). Compared with the fully connected pattern, encounters with the partially connected pattern had a significantly longer adjusted median ED LOS (295.2 [95% confidence interval, 289.2-297.8] minutes vs 242.6 [95% confidence interval, 236.9-246.0] minutes), were more frequently seen among day shift and weekday arrivals, and involved otolaryngology and ophthalmology services and child life specialists. DISCUSSION: The clique-like usage pattern was associated with decreased ED LOS for the study cohort, suggesting that a greater degree of collaboration resulted in a shorter stay. CONCLUSIONS: Further investigation to understand and address causal factors can lead to improvement in multidisciplinary collaboration.


Subject(s)
Cooperative Behavior; Electronic Health Records; Emergency Service, Hospital/organization & administration; Interprofessional Relations; Pediatrics/organization & administration; Traumatology/organization & administration; Child; Data Mining; Humans; Length of Stay; Metadata; Patient Care Team; Social Network Analysis
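The density measure separating the three usage patterns above can be sketched as the fraction of possible role-pair edges actually observed among an encounter's event-log segments. Role names here are illustrative:

```python
def network_density(roles, edges):
    """Fraction of possible undirected role-pair edges actually observed."""
    n = len(roles)
    possible = n * (n - 1) / 2
    return len(edges) / possible if possible else 0.0

roles = {"physician", "nurse", "tech"}
# edges as frozensets so the pair is undirected: {a, b} == {b, a}
clique = {frozenset(p) for p in [("physician", "nurse"),
                                 ("physician", "tech"),
                                 ("nurse", "tech")]}
print(network_density(roles, clique))  # 1.0: the fully connected pattern
```

Density 1.0 corresponds to the "clique" pattern the study associated with shorter ED stays; partially connected and isolated patterns score progressively lower.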
14.
Int J Emerg Med ; 11(1): 3, 2018 Jan 15.
Article in English | MEDLINE | ID: mdl-29335793

ABSTRACT

BACKGROUND: Emergency department (ED) triage is performed to prioritize care for patients with critical and time-sensitive illness. Triage errors create opportunity for increased morbidity and mortality. Here, we sought to measure the frequency of under- and over-triage of patients by nurses using the Emergency Severity Index (ESI) in Brazil and to identify factors independently associated with each. METHODS: This was a single-center retrospective cohort study. The accuracy of initial ESI score assignment was determined by comparison with a score entered at the close of each ED encounter by treating physicians with full knowledge of actual resource utilization, disposition, and acute outcomes. Chi-square analysis was used to validate this surrogate gold standard, via comparison of associations with disposition and clinical outcomes. Independent predictors of under- and over-triage were identified by multivariate logistic regression. RESULTS: The initial ESI-determined triage score was classified as inaccurate for 16,426 of 96,071 patient encounters. Under-triage was associated with a significantly higher rate of admission and critical outcome, while over-triage was associated with a lower rate of both. A number of factors identifiable at the time of presentation, including advanced age, bradycardia, tachycardia, hypoxia, hyperthermia, and several specific chief complaints (e.g., neurologic complaints, chest pain, shortness of breath), were identified as independent predictors of under-triage, while other chief complaints (e.g., hypertension and allergic complaints) were independent predictors of over-triage. CONCLUSIONS: Despite rigorous and ongoing training of ESI users, a large number of patients in this cohort were under- or over-triaged. Advanced age, vital sign derangements, and specific chief complaints, all subject to limited guidance by the ESI algorithm, were particularly under-appreciated.
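The under-/over-triage tabulation this design rests on compares the nurse's initial ESI score with the physician's end-of-encounter score, where a lower ESI number means higher acuity. A minimal sketch on toy scores:

```python
def triage_accuracy(initial, final):
    """Rates of under-, over-, and exact triage (lower ESI = higher acuity)."""
    n = len(initial)
    under = sum(i > f for i, f in zip(initial, final))  # sicker than triaged
    over = sum(i < f for i, f in zip(initial, final))   # less sick than triaged
    return {"under": under / n, "over": over / n, "exact": (n - under - over) / n}

rates = triage_accuracy(initial=[3, 3, 2, 4, 3], final=[2, 3, 2, 5, 3])
# one under-triage, one over-triage, three exact matches
```

The direction of each comparison matters because ESI is inverted: an initial score numerically greater than the final score means the patient was judged sicker at the end of the encounter than at triage.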

15.
Methods Inf Med ; 57(5-06): 261-269, 2018 11.
Article in English | MEDLINE | ID: mdl-30875705

ABSTRACT

BACKGROUND: Electronic health record (EHR) systems contain large volumes of novel heterogeneous data that can be linked to trauma registry data to enable innovative research not possible with either data source alone. OBJECTIVE: This article describes an approach for linking electronically extracted EHR data to trauma registry data at the institutional level and assesses the value of probabilistic linkage. METHODS: Encounter data were independently obtained from the EHR data warehouse (n = 1,632) and the pediatric trauma registry (n = 1,829) at a Level I pediatric trauma center. Deterministic linkage was attempted using nine different combinations of medical record number (MRN), encounter ID (visit ID), age, gender, and emergency department (ED) arrival date. True matches from the best performing variable combination were used to create a gold standard, which was used to evaluate the performance of each variable combination and to train a probabilistic algorithm that was separately used to link records unmatched by deterministic linkage and the entire cohort. Additional records that matched probabilistically were investigated via chart review and compared against records that matched deterministically. RESULTS: Deterministic linkage with exact matching on any three of MRN, encounter ID, age, gender, and ED arrival date gave the best yield of 1,276 true matches, while an additional probabilistic linkage step following deterministic linkage yielded 110 more true matches. These records contained a significantly higher number of boys compared to records that matched deterministically, and the etiology was attributable to mismatches between MRNs in the two data sets. Probabilistic linkage of the entire cohort yielded 1,363 true matches. CONCLUSION: The combination of deterministic and probabilistic methods represents a robust approach for linking EHR data to trauma registry data.
This approach may be generalizable to studies involving other registries and databases.


Subject(s)
Electronic Health Records; Medical Record Linkage; Registries; Wounds and Injuries/epidemiology; Algorithms; Child; Child, Preschool; Female; Humans; Male
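The best-performing deterministic rule above — link two records if they agree exactly on at least three of the five candidate fields — can be sketched directly. The records below are toy values:

```python
FIELDS = ("mrn", "encounter_id", "age", "gender", "arrival_date")

def deterministic_match(ehr_rec, registry_rec, min_agree=3):
    """Link two records if at least min_agree of the five fields match exactly."""
    agree = sum(ehr_rec[f] == registry_rec[f] for f in FIELDS)
    return agree >= min_agree

ehr = {"mrn": "A123", "encounter_id": "E9", "age": 7, "gender": "M",
       "arrival_date": "2016-05-01"}
reg = {"mrn": "A123", "encounter_id": "E9", "age": 7, "gender": "M",
       "arrival_date": "2016-05-02"}  # arrival date off by one day
print(deterministic_match(ehr, reg))  # True: 4 of 5 fields agree
```

Requiring only three of five fields tolerates single-field data-entry discrepancies (such as the MRN mismatches noted in the abstract) that would defeat exact matching on all fields, which is also why a probabilistic step can still recover additional matches.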
16.
Ann Emerg Med ; 71(5): 565-574.e2, 2018 05.
Article in English | MEDLINE | ID: mdl-28888332

ABSTRACT

STUDY OBJECTIVE: Standards for emergency department (ED) triage in the United States rely heavily on subjective assessment and are limited in their ability to risk-stratify patients. This study seeks to evaluate an electronic triage system (e-triage) based on machine learning that predicts likelihood of acute outcomes, enabling improved patient differentiation. METHODS: A multisite, retrospective, cross-sectional study of 172,726 ED visits from urban and community EDs was conducted. E-triage is composed of a random forest model applied to triage data (vital signs, chief complaint, and active medical history) that predicts the need for critical care, an emergency procedure, and inpatient hospitalization in parallel and translates risk to triage level designations. Predicted outcomes and secondary outcomes of elevated troponin and lactate levels were evaluated and compared with the Emergency Severity Index (ESI). RESULTS: E-triage predictions had an area under the curve ranging from 0.73 to 0.92 and demonstrated equivalent or improved identification of clinical patient outcomes compared with ESI at both EDs. E-triage provided rationale for risk-based differentiation of the more than 65% of ED visits triaged to ESI level 3. Matching the ESI patient distribution for comparisons, e-triage identified more than 10% (14,326 patients) of ESI level 3 patients requiring up-triage who had substantially increased risk of critical care or an emergency procedure (1.7% among ESI level 3 versus 6.2% among up-triaged patients) and hospitalization (18.9% versus 45.4%) across EDs. CONCLUSION: E-triage more accurately classifies ESI level 3 patients and highlights opportunities to use predictive analytics to support triage decision-making. Further prospective validation is needed.


Subject(s)
Emergency Service, Hospital; Machine Learning; Triage; Adult; Algorithms; Area Under Curve; Cross-Sectional Studies; Emergency Service, Hospital/trends; Female; Humans; Machine Learning/standards; Machine Learning/trends; Male; Retrospective Studies; Triage/methods; Triage/trends; United States; Vital Signs
17.
Disaster Med Public Health Prep ; 12(4): 513-522, 2018 08.
Article in English | MEDLINE | ID: mdl-29041994

ABSTRACT

The National Center for the Study of Preparedness and Catastrophic Event Response (PACER) has created a publicly available simulation tool called Surge (accessible at http://www.pacerapps.org) to estimate surge capacity for user-defined hospitals. Based on user input, a Monte Carlo simulation algorithm forecasts available hospital bed capacity over a 7-day period and iteratively assesses the ability to accommodate disaster patients. Currently, the tool can simulate bed capacity for acute mass casualty events (such as explosions) only and does not specifically simulate staff and supply inventory. Strategies to expand hospital capacity, such as (1) opening unlicensed beds, (2) canceling elective admissions, and (3) implementing reverse triage, can be interactively evaluated. In the present application of the tool, various response strategies were systematically investigated for 3 nationally representative hospital settings (large urban, midsize community, small rural). The simulation experiments estimated baseline surge capacity at between 7% (large hospitals) and 22% (small hospitals) of staffed beds. Combining all response strategies yielded a simulated surge capacity of between 30% and 40% of staffed beds. Response strategies were more impactful in the large urban hospital simulation owing to higher baseline occupancy and a greater proportion of elective admissions. The publicly available Surge tool enables proactive assessment of hospital surge capacity to support improved decision-making for disaster response. (Disaster Med Public Health Preparedness. 2018;12:513-522).


Subject(s)
Civil Defense/methods , Computer Simulation/statistics & numerical data , Surge Capacity/statistics & numerical data , Civil Defense/statistics & numerical data , Disaster Medicine/instrumentation , Disaster Medicine/methods , Forecasting/methods , Humans , Internet , Length of Stay/statistics & numerical data , Mass Casualty Incidents/statistics & numerical data , Monte Carlo Method
18.
Ann Emerg Med ; 70(5): 607-614.e1, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28751087

ABSTRACT

STUDY OBJECTIVE: A proposed benefit of expanding Medicaid eligibility under the Patient Protection and Affordable Care Act (ACA) was a reduction in emergency department (ED) utilization for primary care needs. Pre-ACA studies found that new Medicaid enrollees increased their ED utilization rates, but the effect on system-level ED visits was less clear. Our objective was to estimate the effect of Medicaid expansion on aggregate and individual-based ED utilization patterns within Maryland. METHODS: We performed a retrospective cross-sectional study of ED utilization patterns across Maryland, using data from Maryland's Health Services Cost Review Commission. We also analyzed utilization differences among patients who were uninsured pre-ACA (July 2012 to December 2013) and returned post-ACA (July 2014 to December 2015). RESULTS: The total number of ED visits in Maryland decreased by 36,531 (-1.2%) between the 6 quarters pre-ACA and the 6 quarters post-ACA. Medicaid-covered ED visits increased from 23.3% to 28.9% (159,004 additional visits), whereas uninsured patient visits decreased from 16.3% to 10.4% (181,607 fewer visits). Coverage by other insurance types remained largely stable between periods. We found no significant relationship between Medicaid expansion and changes in ED volume by hospital. For patients uninsured pre-ACA who returned post-ACA, the adjusted visits per person during 6 quarters was 2.38 (95% confidence interval 2.35 to 2.40) for those newly enrolled in Medicaid post-ACA compared with 1.66 (95% confidence interval 1.64 to 1.68) for those remaining uninsured. CONCLUSION: There was a substantial increase in patients covered by Medicaid in the post-ACA period, but this did not significantly affect total ED volume. Returning patients newly enrolled in Medicaid visited the ED more than their uninsured counterparts; however, this cohort accounted for only a small percentage of total ED visits in Maryland.


Subject(s)
Emergency Service, Hospital/economics , Emergency Service, Hospital/statistics & numerical data , Medicaid/standards , Adult , Aged , Cross-Sectional Studies , Eligibility Determination/methods , Female , Health Services Accessibility/economics , Health Services Accessibility/statistics & numerical data , Humans , Insurance, Health/legislation & jurisprudence , Insurance, Health/statistics & numerical data , Male , Maryland/epidemiology , Medicaid/statistics & numerical data , Medically Uninsured/statistics & numerical data , Middle Aged , Patient Protection and Affordable Care Act/statistics & numerical data , Primary Health Care/statistics & numerical data , Retrospective Studies , United States
19.
Ann Emerg Med ; 69(5): 577-586.e4, 2017 May.
Article in English | MEDLINE | ID: mdl-28131489

ABSTRACT

STUDY OBJECTIVE: The study objective was to determine whether intravenous contrast administration for computed tomography (CT) is independently associated with increased risk for acute kidney injury and adverse clinical outcomes. METHODS: This single-center retrospective cohort analysis was performed in a large, urban, academic emergency department with an average census of 62,179 visits per year; 17,934 ED visits for patients who underwent contrast-enhanced, unenhanced, or no CT during a 5-year period (2009 to 2014) were included. The intervention was CT scan with or without intravenous contrast administration. The primary outcome was incidence of acute kidney injury. Secondary outcomes included new chronic kidney disease, dialysis, and renal transplantation at 6 months. Logistic regression modeling and between-groups odds ratios with and without propensity-score matching were used to test for an independent association between contrast administration and primary and secondary outcomes. Treatment decisions, including administration of contrast and intravenous fluids, were examined. RESULTS: Rates of acute kidney injury were similar among all groups. Contrast administration was not associated with increased incidence of acute kidney injury (contrast-induced nephropathy criteria odds ratio=0.96, 95% confidence interval 0.85 to 1.08; and Acute Kidney Injury Network/Kidney Disease Improving Global Outcomes criteria odds ratio=1.00, 95% confidence interval 0.87 to 1.16). This was true in all subgroup analyses regardless of baseline renal function and whether comparisons were made directly or after propensity matching. Contrast administration was not associated with increased incidence of chronic kidney disease, dialysis, or renal transplant at 6 months. Clinicians were less likely to prescribe contrast to patients with decreased renal function and more likely to prescribe intravenous fluids if contrast was administered. 
CONCLUSION: In the largest well-controlled study of acute kidney injury following contrast administration in the ED to date, intravenous contrast was not associated with an increased frequency of acute kidney injury.


Subject(s)
Acute Kidney Injury/chemically induced , Contrast Media/adverse effects , Administration, Intravenous , Adult , Aged , Case-Control Studies , Emergency Service, Hospital , Female , Humans , Male , Middle Aged , Risk Factors , Tomography, X-Ray Computed/adverse effects
20.
JAMA Pediatr ; 171(2): 157-164, 2017 02 01.
Article in English | MEDLINE | ID: mdl-27942705

ABSTRACT

Importance: Sepsis and septic shock are common and, at times, fatal in pediatrics. Blood cultures are often obtained when clinicians suspect sepsis, yet are low-yield with a false-positive rate up to 50%. Objectives: To determine whether a novel, 2-part, clinical practice guideline could decrease the rates of total blood cultures and cultures collected from central venous catheters in critically ill children and to examine the effect of the guideline on patient outcomes. Design, Setting, and Participants: A retrospective cohort study was performed to determine the effect of a new clinical practice guideline on blood culture practices in a 36-bed, combined medical/surgical pediatric intensive care unit of an urban, academic, tertiary care center from April 1, 2013, to March 31, 2015. All patients admitted to the pediatric intensive care unit with length of stay of 4 hours or more were evaluated (4560 patient visits: 2204 preintervention, 2356 postintervention visits). Interventions: Two documents were developed: (1) fever/sepsis screening checklist and (2) blood culture decision algorithm. Clinicians consulted these documents when considering ordering blood cultures and for guidance about the culture source. Main Outcomes and Measures: Primary outcome was the total number of blood cultures collected per 100 patient-days. Results: Of the 2204 children evaluated before the intervention, 1215 were male (55.1%); median (interquartile range) age was 5 (1-13) years. Postintervention analysis included 2356 children; 1262 were male (53.6%) and median (interquartile range) age was 6 (1-13) years. A total of 1807 blood cultures were drawn before the intervention during 11,196 patient-days; 984 cultures were drawn after the intervention during 11,204 patient-days (incidence rate, 16.1 vs 8.8 cultures per 100 patient-days). There was a 46.0% reduction after the intervention in the blood culture collection rate (incidence rate ratio, 0.54; 95% CI, 0.50-0.59).
After the intervention, there was an immediate 25.0% reduction in the rate of cultures per 100 patient-days (95% CI, 4.2%-39.7%; P = .02) and a sustained 6.6% (95% CI, 4.7%-8.4%; P < .001) monthly decrease in the rate of cultures per 100 patient-days. Significantly fewer cultures were collected from central venous catheters after vs before the intervention (389 [39.5%] vs 1321 [73.1%]; P < .001). Rates of episodes defined as suspected infection and suspected septic shock decreased significantly after the intervention, but patients meeting these criteria underwent cultures at unchanged frequencies before vs after the intervention (52.1% vs 47.0%, P = .09, compared with 56.7% vs 55.0%, P = .75). In-hospital mortality (45 [2.0] vs 37 [1.6]; P = .23) and hospital readmissions (107 [4.9] vs 103 [4.4]; P = .42) were unchanged after the intervention. Conclusions and Relevance: A systematic approach to blood cultures decreased the total number of cultures and central venous catheter cultures, without an increase in rates of mortality, readmission, or episodes of suspected infection and suspected septic shock.
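The headline rates and incidence rate ratio above can be recomputed directly from the counts and patient-days quoted in the abstract (a quick arithmetic check, not code from the study):

```python
# Counts and patient-days as reported in the abstract
pre_cultures, pre_days = 1807, 11196
post_cultures, post_days = 984, 11204

pre_rate = pre_cultures / pre_days * 100    # cultures per 100 patient-days
post_rate = post_cultures / post_days * 100
irr = post_rate / pre_rate                  # crude incidence rate ratio

print(round(pre_rate, 1))   # 16.1
print(round(post_rate, 1))  # 8.8
print(round(irr, 2))        # 0.54
# The reported 46.0% reduction corresponds to 1 - IRR with the
# rounded ratio of 0.54; the unrounded crude value is ~45.6%.
```

That the crude ratio reproduces the published IRR of 0.54 suggests the adjusted model changed the estimate very little.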


Subject(s)
Blood Culture/statistics & numerical data , Critical Illness , Practice Guidelines as Topic , Quality Improvement , Sepsis/blood , Central Venous Catheters , Child , Decision Support Techniques , Female , Fever , Hospital Mortality , Humans , Intensive Care Units, Pediatric , Male , Patient Readmission/statistics & numerical data , Retrospective Studies , Shock, Septic/blood