ABSTRACT
BACKGROUND: Glucagon-like peptide-1 receptor agonists (GLP-1 RAs) with shorter half-lives cause greater delayed gastric emptying (DGE) than GLP-1 RAs with longer half-lives. DGE is a known risk factor for gastro-oesophageal reflux disease (GERD) and its complications. AIM: To determine whether short-acting or long-acting GLP-1 RAs are associated with an increased risk of new GERD or GERD-related complications. DESIGN: We used the TriNetX global database to identify adult patients with type 2 diabetes mellitus and generated two cohorts totalling 1 543 351 patients on (1) a GLP-1 RA or (2) another second-line diabetes medication. Using propensity-score matching, Kaplan-Meier analysis and Cox proportional hazards ratios (HR), we analysed outcomes and separately examined outcomes in patients starting short-acting (≤1 day) and long-acting (≥5 days) GLP-1 RAs. RESULTS: There were 177 666 patients in each propensity-matched cohort. GLP-1 RA exposure was associated with an increased risk (HR 1.15; 95% CI 1.09 to 1.22) of erosive reflux disease (ERD). However, this was driven solely by short-acting (HR 1.215; 95% CI 1.111 to 1.328) and not long-acting (HR 0.994; 95% CI 0.924 to 1.069) GLP-1 RA exposure. Short-acting GLP-1 RAs were also associated with increased risks of oesophageal stricture (HR 1.284; 95% CI 1.135 to 1.453), Barrett's oesophagus without dysplasia (HR 1.372; 95% CI 1.217 to 1.546) and Barrett's oesophagus with dysplasia (HR 1.505; 95% CI 1.164 to 1.946), whereas long-acting GLP-1 RAs were not. The association persisted in sensitivity analyses and when the short-acting GLP-1 RAs liraglutide, lixisenatide and exenatide were examined individually. CONCLUSION: Starting a shorter-acting GLP-1 RA is associated with increased risks of GERD and its complications.
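To make the analytic pipeline above concrete (propensity-score estimation, 1:1 matching, then a Cox proportional hazards model), here is a minimal Python sketch on simulated data. The column names (glp1_exposure, time_to_erd, erd_event) and all parameter values are hypothetical illustrations rather than the TriNetX schema, and the greedy matching with replacement is a simplification of what a full analysis would use.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000

# Simulated cohort: covariates, treatment (GLP-1 RA exposure), and ERD outcome.
df = pd.DataFrame({
    "age": rng.normal(58, 10, n),
    "bmi": rng.normal(32, 6, n),
    "baseline_a1c": rng.normal(8.0, 1.2, n),
})
logit_t = -6 + 0.05 * df["age"] + 0.08 * df["bmi"]
df["glp1_exposure"] = rng.binomial(1, 1 / (1 + np.exp(-logit_t)))

# Simulated time-to-event outcome (erosive reflux disease), censored at 3 years.
hazard = 0.05 * np.exp(0.15 * df["glp1_exposure"] + 0.01 * (df["bmi"] - 32))
time = rng.exponential(1 / hazard)
df["time_to_erd"] = np.minimum(time, 3.0)
df["erd_event"] = (time <= 3.0).astype(int)

# 1) Propensity score: probability of GLP-1 RA exposure given covariates.
covs = ["age", "bmi", "baseline_a1c"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["glp1_exposure"])
df["ps"] = ps_model.predict_proba(df[covs])[:, 1]

# 2) Simplified 1:1 nearest-neighbour matching on the propensity score
#    (with replacement; production analyses typically match without replacement
#    and apply a caliper).
treated = df[df["glp1_exposure"] == 1]
control = df[df["glp1_exposure"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

# 3) Cox proportional hazards model on the matched cohort.
cph = CoxPHFitter()
cph.fit(matched[["time_to_erd", "erd_event", "glp1_exposure"]],
        duration_col="time_to_erd", event_col="erd_event")
print(cph.hazard_ratios_)        # HR for glp1_exposure
print(cph.confidence_intervals_) # 95% CIs on the log-hazard scale
```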
Subject(s)
Diabetes Mellitus, Type 2; Gastroesophageal Reflux; Adult; Humans; Diabetes Mellitus, Type 2/complications; Diabetes Mellitus, Type 2/drug therapy; Glucagon-Like Peptide-1 Receptor Agonists; Cohort Studies; Retrospective Studies; Gastroesophageal Reflux/drug therapy; Gastroesophageal Reflux/complications; Glucagon-Like Peptide 1/adverse effects; Hypoglycemic Agents/adverse effects
ABSTRACT
PURPOSE: Triple negative breast cancer (TNBC) is an aggressive subtype of breast cancer (BC) with higher recurrence rates and poorer prognoses, and it is most prevalent among non-Hispanic Black women. Studies of multiple health conditions and care processes suggest that neighborhood socioeconomic position is a key driver of health disparities. We examined the roles of patients' neighborhood-level characteristics and race in the prevalence, stage at diagnosis, and mortality of patients diagnosed with BC at a large safety-net healthcare system in Northeast Ohio. METHODS: We used a tumor registry to identify BC cases from 2007 to 2020, and electronic health records and the American Community Survey for individual- and area-level factors. We performed multivariable regression analyses to estimate associations of neighborhood-level characteristics, measured by the Area Deprivation Index (ADI), and race with comparative TNBC prevalence, stage at diagnosis, and total mortality. RESULTS: TNBC was more common among non-Hispanic Black (53.7%) than non-Hispanic white patients (46.4%). Race and ADI were individually significant predictors of TNBC prevalence, stage at diagnosis, and total mortality. Race remained significantly associated with TNBC subtype after adjusting for covariates. Accounting for TNBC status, a more disadvantaged neighborhood was significantly associated with a worse stage at diagnosis and higher death rates. CONCLUSION: Our findings suggest that both neighborhood socioeconomic position and race are strongly associated with TNBC vs. other BC subtypes. The burden of TNBC appears to be highest among Black women in the most socioeconomically disadvantaged neighborhoods. Our study suggests a complex interplay of social conditions and biological disease characteristics contributing to racial disparities in BC outcomes.
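A hedged sketch of the kind of multivariable model described in the methods: logistic regression of TNBC status on race and the Area Deprivation Index, adjusted for age, fitted on simulated data. Variable names (tnbc, adi, race) and coefficients are illustrative assumptions, not the study's registry fields.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Simulated registry extract; `adi` is a 1-100 Area Deprivation Index ranking.
df = pd.DataFrame({
    "race": rng.choice(["NH_White", "NH_Black"], size=n, p=[0.55, 0.45]),
    "adi": rng.integers(1, 101, size=n),
    "age": rng.normal(60, 12, size=n),
})
logit = -1.5 + 0.6 * (df["race"] == "NH_Black") + 0.01 * df["adi"]
df["tnbc"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Multivariable logistic regression: odds of TNBC (vs other subtypes)
# by race and neighborhood deprivation, adjusted for age.
model = smf.logit("tnbc ~ C(race, Treatment('NH_White')) + adi + age", data=df).fit()
print(model.summary())
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% CIs for the odds ratios
```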
Subject(s)
Racial Groups; Residence Characteristics; Triple Negative Breast Neoplasms; Female; Humans; Electronic Health Records; Multimorbidity; Multivariate Analysis; Neighborhood Characteristics; Ohio/epidemiology; Racial Groups/statistics & numerical data; Registries; Residence Characteristics/statistics & numerical data; Triple Negative Breast Neoplasms/epidemiology; Triple Negative Breast Neoplasms/mortality; Middle Aged; Aged; Prevalence; Delayed Diagnosis; Odds Ratio
ABSTRACT
BACKGROUND: Appointment no-shows are prevalent in safety-net healthcare systems. The efficacy and equitability of using predictive algorithms to selectively add resource-intensive live telephone outreach to standard automated reminders in such a setting are not known. OBJECTIVE: To determine whether adding risk-driven telephone outreach to standard automated reminders can improve in-person primary care internal medicine clinic no-show rates without worsening racial and ethnic show-rate disparities. DESIGN: Randomized controlled quality improvement initiative. PARTICIPANTS: Adult patients with an in-person appointment at a primary care internal medicine clinic in a safety-net healthcare system from 1/1/2022 to 8/24/2022. INTERVENTIONS: A random forest model that leveraged electronic health record data to predict appointment no-show risk was internally trained and validated to ensure fair performance. Schedulers used the model to place reminder calls to patients in the augmented care arm who had a predicted no-show rate of 15% or higher. MAIN MEASURES: The primary outcome was the no-show rate stratified by race and ethnicity. KEY RESULTS: There were 5840 appointments with a predicted no-show rate of 15% or higher. A total of 2858 were randomized to the augmented care group and 2982 to standard care. The augmented care group had a significantly lower no-show rate than the standard care group (33% vs 36%, p < 0.01). There was a significant reduction in no-show rates for Black patients (36% vs 42%, p < 0.001) that was not reflected in white, non-Hispanic patients. CONCLUSIONS: In this randomized controlled quality improvement initiative, adding model-driven telephone outreach to standard automated reminders was associated with a significant reduction in in-person no-show rates in a diverse primary care clinic. The initiative reduced no-show disparities by predominantly improving access for Black patients.
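The risk-driven outreach step can be illustrated with a small sketch: train a random forest no-show model and flag appointments whose predicted risk meets the 15% threshold described above. The features and simulated data below are hypothetical stand-ins for the EHR fields actually used.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 20000

# Simulated scheduling data; features are hypothetical stand-ins for EHR fields.
df = pd.DataFrame({
    "prior_no_shows": rng.poisson(1.0, n),
    "lead_time_days": rng.integers(0, 90, n),
    "age": rng.normal(50, 18, n),
    "has_phone_on_file": rng.binomial(1, 0.9, n),
})
logit = (-2 + 0.5 * df["prior_no_shows"] + 0.02 * df["lead_time_days"]
         - 0.5 * df["has_phone_on_file"])
df["no_show"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X, y = df.drop(columns="no_show"), df["no_show"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Random forest no-show risk model, analogous in spirit to the one described above.
rf = RandomForestClassifier(n_estimators=300, min_samples_leaf=20, random_state=0)
rf.fit(X_train, y_train)
risk = rf.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))

# Flag appointments at or above the 15% predicted-risk threshold for live outreach calls.
outreach_list = X_test.loc[risk >= 0.15]
print(f"{len(outreach_list)} of {len(X_test)} appointments flagged for telephone outreach")
```

In a deployed version, subgroup performance (e.g., AUC by race and ethnicity) would also be checked, matching the fairness validation the abstract describes.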
ABSTRACT
BACKGROUND: Black and Latinx adults experience disproportionate asthma-related morbidity and limited specialty care access. The severe acute respiratory syndrome coronavirus 2 pandemic expanded telehealth use. OBJECTIVE: To evaluate visit type (telehealth [TH] vs in-person [IP]) preferences and the impact of visit type on asthma outcomes among Black and Latinx adults with moderate-to-severe asthma. METHODS: For this PREPARE trial ancillary study, visit type preference was surveyed by e-mail or telephone post-trial. Electronic medical record data on visit types and asthma outcomes were available for a subset (March 2020 to April 2021). Characteristics associated with visit type preferences, and relationships between visit type and asthma outcomes (control [Asthma Control Test] and asthma-related quality of life [Asthma Symptom Utility Index]), were tested using multivariable regression. RESULTS: A total of 866 participants consented to be surveyed, with 847 respondents. Among the participants with asthma care experience with both visit types, 42.0% preferred TH for regular checkups, which was associated with employment (odds ratio [OR] = 1.61; 95% confidence interval [CI], 1.09-2.39; P = .02), lower asthma medication adherence (OR = 1.06; 95% CI, 1.01-1.11; P = .03), and having more historical emergency department and urgent care asthma visits (OR = 1.10 for each additional visit; 95% CI, 1.02-1.18; P = .02), after adjustment. Electronic medical record data were available for 98 participants (62 TH, 36 IP). Those with TH visits were more likely to be Latinx, from the Southwest, employed, and using inhaled corticosteroid-only controller therapy, and had lower body mass index and lower self-reported asthma medication adherence than those with IP visits only. Both groups had comparable Asthma Control Test (18.4 vs 18.9, P = .52) and Asthma Symptom Utility Index (0.79 vs 0.84, P = .16) scores after adjustment. CONCLUSION: TH may be as efficacious as, and is often preferred over, IP care among Black and Latinx adults with moderate-to-severe asthma, especially for regular checkups. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT02995733.
Subject(s)
Asthma; Patient Preference; Telemedicine; Adult; Humans; Adrenal Cortex Hormones/therapeutic use; Asthma/drug therapy; Asthma/diagnosis; Hispanic or Latino; Quality of Life; Black or African American
ABSTRACT
OBJECTIVES: Results of pre-post intervention studies of sepsis early warning systems have been mixed, and randomized clinical trials showing efficacy in the emergency department setting are lacking. Additionally, early warning systems can be resource-intensive and may cause unintended consequences such as antibiotic or IV fluid overuse. We assessed the impact of a pharmacist- and provider-facing sepsis early warning system on the timeliness of antibiotic administration and sepsis-related clinical outcomes in our setting. DESIGN: A randomized, controlled quality improvement initiative. SETTING: The main emergency department of an academic, safety-net healthcare system from August to December 2019. PATIENTS: Adults presenting to the emergency department. INTERVENTION: Patients were randomized to standard sepsis care or standard care augmented by the display of a sepsis early warning system-triggered flag in the electronic health record combined with electronic health record-based emergency department pharmacist notification. MEASUREMENTS AND MAIN RESULTS: The primary process measure was time to antibiotic administration from arrival. A total of 598 patients were included in the study over a 5-month period (285 in the intervention group and 313 in the standard care group). Time to antibiotic administration from emergency department arrival was shorter in the augmented care group than in the standard care group (median, 2.3 hr [interquartile range, 1.4-4.7 hr] vs 3.0 hr [interquartile range, 1.6-5.5 hr]; p = 0.039). The hierarchical composite clinical outcome measure of days alive and out of hospital at 28 days was greater in the augmented care group than in the standard care group (median, 24.1 vs 22.5 d; p = 0.011). Rates of fluid resuscitation and antibiotic utilization did not differ. CONCLUSIONS: In this single-center randomized quality improvement initiative, the display of an electronic health record-based sepsis early warning system-triggered flag combined with electronic health record-based pharmacist notification was associated with a shorter time to antibiotic administration without an increase in undesirable or potentially harmful clinical interventions.
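The abstract reports medians with interquartile ranges for the time-to-antibiotic comparison but does not name the test used; the sketch below shows one plausible approach (a Mann-Whitney U test on simulated, right-skewed times) rather than the authors' actual analysis. The distribution parameters are invented, chosen only to loosely resemble the reported medians.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)

# Simulated time-to-antibiotic data (hours), log-normal to mimic right skew;
# parameters are illustrative only.
augmented = rng.lognormal(mean=np.log(2.3), sigma=0.8, size=285)
standard = rng.lognormal(mean=np.log(3.0), sigma=0.8, size=313)

def summarize(x):
    # Median and interquartile range, the summary statistics reported above.
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"median {med:.1f} h (IQR {q1:.1f}-{q3:.1f} h)"

print("Augmented care:", summarize(augmented))
print("Standard care: ", summarize(standard))

# Non-parametric comparison of the two skewed distributions.
stat, p = mannwhitneyu(augmented, standard, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3f}")
```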
Subject(s)
Anti-Bacterial Agents/therapeutic use; Clinical Protocols; Emergency Service, Hospital/organization & administration; Quality Improvement/organization & administration; Sepsis/drug therapy; Time-to-Treatment/statistics & numerical data; Algorithms; Humans; Process Assessment, Health Care
ABSTRACT
INTRODUCTION: Coronavirus disease 2019 (COVID-19) is associated with high rates of morbidity and mortality. Primary hypothyroidism is a common comorbid condition, but little is known about its association with COVID-19 severity and outcomes. This study aims to identify the frequency of hypothyroidism in hospitalized patients with COVID-19 and to describe the differences in outcomes between patients with and without pre-existing hypothyroidism using an observational, multinational registry. METHODS: In an observational cohort study, we enrolled patients 18 years or older with laboratory-confirmed severe acute respiratory syndrome coronavirus 2 infection between March 2020 and February 2021. The primary outcomes were (1) disease severity, defined per the World Health Organization Scale for Clinical Improvement as an ordinal outcome corresponding to the highest severity level recorded during a patient's index COVID-19 hospitalization, (2) in-hospital mortality and (3) hospital-free days. Secondary outcomes were the rate of intensive care unit (ICU) admission and ICU mortality. RESULTS: Among the 20,366 adult patients included in the study, pre-existing hypothyroidism was identified in 1616 (7.9%). The median age for the hypothyroidism group was 70 (interquartile range: 59-80) years; 65% were female and 67% were White. The most common comorbidities were hypertension (68%), diabetes (42%), dyslipidemia (37%) and obesity (28%). After adjusting for age, body mass index, sex, admission date in the quarter year since March 2020, race, smoking history and other comorbid conditions (coronary artery disease, hypertension, diabetes and dyslipidemia), pre-existing hypothyroidism was not associated with higher odds of severe disease using the World Health Organization disease severity index (odds ratio [OR]: 1.02; 95% confidence interval [CI]: 0.92, 1.13; p = .69), in-hospital mortality (OR: 1.03; 95% CI: 0.92, 1.15; p = .58) or differences in hospital-free days (estimated difference 0.01 days; 95% CI: -0.45, 0.47; p = .97). Pre-existing hypothyroidism was not associated with ICU admission or ICU mortality in unadjusted or adjusted analyses. CONCLUSIONS: In an international registry, hypothyroidism was identified in around 1 of every 12 adult hospitalized patients with COVID-19. Pre-existing hypothyroidism in hospitalized patients with COVID-19 was not associated with higher disease severity or an increased risk of mortality or ICU admission. However, more research on the possible effects of COVID-19 on the thyroid gland and its function is needed.
ABSTRACT
BACKGROUND: Hospitalized patients with SARS-CoV-2 frequently develop acute kidney injury (AKI), yet gaps remain in understanding why adults seem to have higher rates compared to children. Our objectives were to evaluate the epidemiology of SARS-CoV-2-related AKI across the age spectrum and to determine whether known risk factors such as illness severity contribute to its pattern. METHODS: Secondary analysis of an ongoing prospective international cohort registry. AKI was defined by KDIGO creatinine-only criteria. Log-linear, logistic and generalized estimating equation models estimated odds ratios (OR), risk differences (RD), and 95% confidence intervals (CI) for AKI and mortality, adjusting for sex, pre-existing comorbidities, race/ethnicity and illness severity, and accounting for clustering within centers. Sensitivity analyses assessed different baseline creatinine estimators. RESULTS: Overall, among 6874 hospitalized patients, 39.6% (n = 2719) developed AKI. There was a bimodal distribution of AKI by age, with peaks in older age (≥60 years) and middle childhood (5-15 years), which persisted despite controlling for illness severity, pre-existing comorbidities, or different baseline creatinine estimators. For example, the adjusted OR of developing AKI among hospitalized patients with SARS-CoV-2 was 2.74 (95% CI 1.66-4.56) for 10-15-year-olds compared to 30-35-year-olds and, similarly, 2.31 (95% CI 1.71-3.12) for 70-75-year-olds, while the adjusted OR dropped to 1.39 (95% CI 0.97-2.00) for 40-45-year-olds compared to 30-35-year-olds. CONCLUSIONS: SARS-CoV-2-related AKI is common, with a bimodal age distribution that is not fully explained by known risk factors or confounders. As the pandemic increasingly affects younger individuals, this pattern deserves further investigation, because the combination of AKI and SARS-CoV-2 infection increases hospital mortality risk.
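A minimal sketch of a clustered analysis in the spirit of the methods above: a logistic model fit with generalized estimating equations, with patients clustered within centers and age bands compared against a 30-35-year reference group. The data, age bands, and effect sizes are simulated assumptions, not the registry's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 6000

# Simulated multicenter registry; age bands and centers are hypothetical labels.
bands = ["5-15", "30-35", "40-45", "70-75"]
df = pd.DataFrame({
    "age_band": rng.choice(bands, size=n),
    "center": rng.integers(0, 30, size=n),
    "severity": rng.normal(0, 1, size=n),
})
band_effect = df["age_band"].map({"5-15": 0.8, "30-35": 0.0, "40-45": 0.3, "70-75": 0.8})
logit = -1.0 + band_effect + 0.4 * df["severity"]
df["aki"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic GEE with an exchangeable working correlation to account for
# clustering of patients within centers, adjusted for illness severity.
model = smf.gee("aki ~ C(age_band, Treatment('30-35')) + severity",
                groups="center", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(np.exp(res.params))      # adjusted odds ratios vs the 30-35 reference band
print(np.exp(res.conf_int()))  # 95% confidence intervals for the ORs
```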
Subject(s)
Acute Kidney Injury/epidemiology; COVID-19/complications; Inpatients/statistics & numerical data; SARS-CoV-2; Acute Kidney Injury/etiology; Adolescent; Adult; Age Distribution; Age Factors; Aged; Aged, 80 and over; COVID-19/epidemiology; Child; Child, Preschool; Comorbidity; Confidence Intervals; Creatinine/blood; Global Health/statistics & numerical data; Hospital Mortality; Humans; Middle Aged; Odds Ratio; Registries/statistics & numerical data; Severity of Illness Index
ABSTRACT
Spirometry is necessary to diagnose chronic obstructive pulmonary disease (COPD), yet a large proportion of patients are diagnosed and treated without having received testing. This study explored whether interventions using the electronic health record (EHR) to target patients diagnosed with COPD without confirmatory spirometry affected the incidence rates of spirometry referrals and completions. This retrospective before-and-after study assessed the impact of provider-facing clinical decision support that identified patients who had a diagnosis of COPD but had not received spirometry. Spirometry referrals, completions, and results were ascertained 1.5 years prior to and 1.5 years after the interventions were initiated. Inhaler prescriptions by class were also tallied. There were 10,949 unique patients with a diagnosis of COPD who were eligible for inclusion; 4,895 (44.7%) were excluded because they had completed spirometry prior to the cohort start dates. The pre-intervention cohort consisted of 2,622 patients, while the post-intervention cohort had 3,392. Spirometry referral rates were 20.2% pre-intervention compared with 31.6% post-intervention (p < 0.001). Spirometry completion rates rose from 13.2% pre-intervention to 19.3% afterwards (p < 0.001). Among 948 patients with spirometry results, 61.7% (585) had no evidence of airflow obstruction. After excluding patients with a diagnosis of asthma, 25.8% (126 of 488) of patients who had no evidence of airflow obstruction had prescriptions for long-acting bronchodilators or inhaled steroids. A concerted EHR intervention modestly increased spirometry referral and completion rates in patients with a diagnosis of COPD without prior spirometry and decreased misclassification of disease.
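Using the cohort sizes and rates reported above, a quick two-proportion z-test illustrates one way the pre/post differences in referral and completion rates could be tested; the event counts are reconstructed from the rounded percentages and are therefore approximate, and the authors' actual test may differ.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Cohort sizes and rates taken from the abstract; event counts are
# reconstructed from the reported percentages and therefore approximate.
n_pre, n_post = 2622, 3392
referral_counts = np.array([round(0.316 * n_post), round(0.202 * n_pre)])
completion_counts = np.array([round(0.193 * n_post), round(0.132 * n_pre)])
nobs = np.array([n_post, n_pre])

# Two-proportion z-test comparing post- vs pre-intervention rates
# (one plausible way to test a before-and-after difference in proportions).
for label, counts in [("Referrals", referral_counts), ("Completions", completion_counts)]:
    stat, p = proportions_ztest(counts, nobs)
    print(f"{label}: post {counts[0]}/{nobs[0]} vs pre {counts[1]}/{nobs[1]}, "
          f"z = {stat:.2f}, p = {p:.2g}")
```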
Subject(s)
Electronic Health Records; Pulmonary Disease, Chronic Obstructive; Bronchodilator Agents/therapeutic use; Humans; Pulmonary Disease, Chronic Obstructive/diagnosis; Pulmonary Disease, Chronic Obstructive/drug therapy; Retrospective Studies; Spirometry/methods
ABSTRACT
INTRODUCTION: There is mounting interest in the use of risk prediction models to guide lung cancer screening. Electronic health records (EHRs) could facilitate such an approach, but smoking exposure documentation is notoriously inaccurate. While the negative impact of inaccurate EHR data on screening practices reliant on dichotomized age- and smoking exposure-based criteria has been demonstrated, less is known regarding its impact on the performance of model-based screening. AIMS AND METHODS: Data were collected from a cohort of 37 422 ever-smokers between the ages of 55 and 74, seen at an academic safety-net healthcare system between 1999 and 2018. The National Lung Screening Trial (NLST) criteria and the PLCOm2012 and LCRAT lung cancer risk prediction models were validated against time to lung cancer diagnosis. Discrimination (area under the receiver operating characteristic curve [AUC]) and calibration were assessed. The effect of substituting the last documented smoking variables with differentially retrieved "history-conscious" measures was also determined. RESULTS: The PLCOm2012 and LCRAT models had AUCs of 0.71 (95% CI, 0.69 to 0.73) and 0.72 (95% CI, 0.70 to 0.74), respectively. Compared with the NLST criteria, PLCOm2012 had significantly greater time-dependent sensitivity (69.9% vs. 64.5%, p < .01) and specificity (58.3% vs. 56.4%, p < .001). Unlike the NLST criteria, the performance of the PLCOm2012 and LCRAT models was not prone to historical variability in smoking exposure documentation. CONCLUSIONS: Despite the inaccuracies of EHR-documented smoking histories, leveraging model-based lung cancer risk estimation may be a reasonable strategy for screening and is of greater value than using the NLST criteria in the same setting. IMPLICATIONS: EHRs are potentially well suited to aid in the risk-based selection of lung cancer screening candidates, but healthcare providers and systems may elect not to leverage EHR data due to prior work that has shown limitations in structured smoking exposure data quality. Our findings suggest that, despite potential inaccuracies in the underlying EHR data, screening approaches that use multivariable models may perform significantly better than approaches that rely on simpler age- and exposure-based criteria. These results should encourage providers to consider using pre-existing smoking exposure data with a model-based approach to guide lung cancer screening practices.
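A hedged sketch of how discrimination and calibration might be checked for a risk model. Real validation of PLCOm2012 or LCRAT would use time to lung cancer diagnosis with censoring (e.g., time-dependent sensitivity and AUC); this simplified example uses a plain binary endpoint and simulated predictions purely to show the mechanics.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(5)
n = 37000

# Simulated lung cancer risk predictions from a hypothetical model and
# simulated binary outcomes; real validation would use time-to-diagnosis
# data with censoring rather than a simple binary label.
predicted_risk = np.clip(rng.beta(1.2, 40, size=n), 1e-4, 0.5)
outcome = rng.binomial(1, np.clip(predicted_risk * 1.1, 0, 1))  # mildly miscalibrated on purpose

# Discrimination: area under the ROC curve.
print("AUC:", round(roc_auc_score(outcome, predicted_risk), 3))

# Calibration: observed event rate within deciles of predicted risk.
obs, pred = calibration_curve(outcome, predicted_risk, n_bins=10, strategy="quantile")
for p_hat, o in zip(pred, obs):
    print(f"mean predicted {p_hat:.3%}  observed {o:.3%}")
```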
Subject(s)
Early Detection of Cancer; Lung Neoplasms; Aged; Humans; Lung Neoplasms/diagnosis; Lung Neoplasms/epidemiology; Mass Screening; Middle Aged; Retrospective Studies; Risk Assessment; Smoking; Tomography, X-Ray Computed
ABSTRACT
BACKGROUND: Emerging research has examined the prevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in numerous settings, but a critical gap in knowledge is an understanding of the rate of infection among first responders. METHODS: We conducted a prospective serial serologic survey by recruiting public first responders from Cleveland-area emergency medical services agencies and fire departments. Volunteers submitted a nasopharyngeal swab for SARS-CoV-2 PCR testing and serum samples to detect the presence of antibodies to SARS-CoV-2 at two visits scheduled approximately 3 weeks apart. RESULTS: A total of 296 respondents completed a first visit and 260 completed the second. While 71% of respondents reported exposure to SARS-CoV-2, only 5.4% (95% CI 3.1-8.6) had positive serologic testing. No subject had a positive PCR result. At the first visit, eight (50%) of the test-positive subjects had no symptoms and only one (6.2%) sought healthcare or missed school or work. None of the subjects who tested negative on the first visit were positive on the second. CONCLUSIONS: While our results show a relatively low rate of test positivity for SARS-CoV-2 among first responders, most of those who tested positive were either asymptomatic or mildly symptomatic. The potential risk of asymptomatic transmission both between first responders and from first responders to vulnerable patients requires more study.
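The reported seroprevalence can be reproduced from the numbers in the abstract: eight asymptomatic positives were 50% of the test-positive group, implying 16 positives among 296 first-visit participants, and an exact binomial interval matches the published 95% CI. The choice of the Clopper-Pearson method here is an assumption; the authors may have used a different interval.

```python
from statsmodels.stats.proportion import proportion_confint

# 16 of 296 first-visit participants had positive serology (8 positives were
# 50% of the test-positive group, implying 16 in total); this reproduces the
# reported point estimate and an exact (Clopper-Pearson) 95% CI.
positives, n = 16, 296
low, high = proportion_confint(positives, n, alpha=0.05, method="beta")
print(f"Seroprevalence {positives / n:.1%} (95% CI {low:.1%}-{high:.1%})")
```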
Subject(s)
COVID-19; Emergency Medical Services; Adult; Aged; Female; Health Personnel; Humans; Male; Middle Aged; Prospective Studies; SARS-CoV-2
ABSTRACT
BACKGROUND: Clinical outcomes among patients with atrial fibrillation (AF) and heart failure with preserved ejection fraction (HFpEF) treated with catheter ablation (CA) versus antiarrhythmic therapy (AAT) are not well known. OBJECTIVES: This study compared morbidity and mortality among patients with AF and HFpEF treated with CA versus AAT. METHODS: Patients with AF and HFpEF from January 2017 to June 2023 were identified in TriNetX, a large global population-based database. Patients with a prior diagnosis of heart failure with reduced ejection fraction (HFrEF) or crossover between AAT and CA were excluded. Baseline characteristics including age, sex, BMI, type of AF, comorbidities, and cardiovascular medications were compared. The two groups were 1:1 propensity matched for outcomes analysis. All-cause mortality, cerebrovascular accident (CVA)/transient ischemic attack (TIA), and acute HF were compared with Kaplan-Meier curves. RESULTS: Patients treated with CA (n=1959) and AAT (n=7689) were 1:1 propensity matched, yielding 3632 patients with no significant differences in baseline characteristics. Compared with AAT, CA was associated with decreased mortality (9.2% vs. 20.5%; hazard ratio [HR]: 0.431; 95% confidence interval [CI]: 0.359 to 0.518; p<0.001). Additionally, CA was associated with reduced acute HFpEF (HR: 0.638; 95% CI, 0.550 to 0.741; p<0.001) and acute HFrEF (HR: 0.645; 95% CI, 0.452 to 0.920; p=0.015). There was no difference in the composite of CVA/TIA (HR: 0.935; 95% CI: 0.725 to 1.207; p=0.607). CONCLUSION: In this retrospective study of patients with AF and HFpEF, CA was associated with lower mortality and a lower risk of acute heart failure when compared with AAT.
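A small sketch of the survival comparison described above (Kaplan-Meier estimates for matched CA and AAT groups), on simulated data with event rates loosely based on the reported mortality figures. The log-rank test shown here is a common companion to KM curves, though the abstract reports hazard ratios from the matched analysis; the group size and follow-up times are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
n_per_group = 1816  # half of the 3632 matched patients reported above (illustrative)

# Simulated follow-up (years) and all-cause mortality for matched CA and AAT
# groups; event rates loosely based on the abstract's 9.2% vs 20.5% mortality.
def simulate(event_rate, n):
    event = rng.binomial(1, event_rate, n)
    time = rng.uniform(0.1, 5.0, n)
    return pd.DataFrame({"time": time, "event": event})

ca = simulate(0.092, n_per_group)
aat = simulate(0.205, n_per_group)

# Kaplan-Meier estimates for each arm.
kmf = KaplanMeierFitter()
kmf.fit(ca["time"], ca["event"], label="Catheter ablation")
print(kmf.survival_function_.tail(3))
kmf.fit(aat["time"], aat["event"], label="Antiarrhythmic therapy")
print(kmf.survival_function_.tail(3))

# Log-rank test comparing the two survival curves.
result = logrank_test(ca["time"], aat["time"],
                      event_observed_A=ca["event"], event_observed_B=aat["event"])
print("log-rank p =", round(result.p_value, 4))
```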
ABSTRACT
Early detection of sepsis in patients admitted to the emergency department (ED) is an important clinical objective, as early identification and treatment can help reduce the associated morbidity and mortality, which can reach rates of 20% or higher. Hematologic changes during sepsis-associated organ dysfunction are well established, and a new biomarker called Monocyte Distribution Width (MDW) has recently been approved by the US Food and Drug Administration for the early detection of sepsis. However, MDW, which quantifies monocyte activation in sepsis patients, is not a routinely reported parameter, and it requires specialized proprietary laboratory equipment. Further, the relative importance of MDW compared with other routinely available hematologic parameters and vital signs has not been studied, which makes it difficult for resource-constrained hospital systems to make informed decisions in this regard. To address this issue, we analyzed data from a cohort of ED patients (n=10,229) with suspected infection admitted to a large regional safety-net hospital in Cleveland, Ohio, some of whom later developed poor outcomes associated with sepsis. We developed a new analytical framework consisting of seven data models and an ensemble of high-accuracy machine learning (ML) algorithms (accuracy values ranging from 0.83 to 0.90) for the prediction of outcomes more common in sepsis than in uncomplicated infection (3-day intensive care unit stay or death). To characterize the contributions of individual hematologic parameters, we applied the Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) interpretability methods to the high-accuracy ML algorithms. The ML interpretability results were consistent in their finding that the value of MDW is greatly attenuated in the presence of other routinely reported hematologic parameters and vital signs. Further, this study shows for the first time that the complete blood count with differential (CBC-DIFF), together with vital signs, can be used as a substitute for MDW in high-accuracy ML algorithms to screen for poor outcomes associated with sepsis.
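A hedged sketch of the interpretability question above: train a classifier on simulated hematologic and vital-sign features and rank features by permutation importance, a simpler model-agnostic stand-in for the LIME/SHAP analyses the study used. All feature names, distributions, and effect sizes are invented for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 10000

# Simulated hematology and vital-sign features; names are illustrative stand-ins
# for the CBC-DIFF components and vital signs discussed above.
df = pd.DataFrame({
    "mdw": rng.normal(20, 4, n),
    "wbc": rng.normal(9, 3, n),
    "neutrophil_pct": rng.normal(65, 10, n),
    "lymphocyte_pct": rng.normal(25, 8, n),
    "heart_rate": rng.normal(95, 18, n),
    "resp_rate": rng.normal(20, 5, n),
    "systolic_bp": rng.normal(120, 20, n),
})
# Outcome driven mostly by routine parameters, with a small MDW contribution,
# mirroring the attenuation described in the abstract.
logit = (-6 + 0.05 * df["mdw"] + 0.15 * df["wbc"] + 0.03 * df["neutrophil_pct"]
         + 0.02 * df["heart_rate"] + 0.05 * df["resp_rate"] - 0.01 * df["systolic_bp"])
df["poor_outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X, y = df.drop(columns="poor_outcome"), df["poor_outcome"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in AUC when each feature is shuffled, a simpler
# model-agnostic alternative to the LIME/SHAP analyses described above.
imp = permutation_importance(clf, X_te, y_te, scoring="roc_auc", n_repeats=10, random_state=0)
ranking = pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking)
```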
ABSTRACT
OBJECTIVE: To evaluate the effectiveness of Monocyte Distribution Width (MDW) in predicting sepsis outcomes in emergency department (ED) patients compared with other hematologic parameters and vital signs, and to determine whether routine parameters could substitute for MDW in machine learning models. METHODS: We conducted a retrospective analysis of data from 10,229 ED patients with suspected infection admitted to a large regional safety-net hospital in Cleveland, Ohio, who were assessed for sepsis-associated poor outcomes. We developed a new analytical framework consisting of seven data models and an ensemble of high-accuracy machine learning (ML) algorithms (accuracy values ranging from 0.83 to 0.90) to predict sepsis-associated poor outcomes (3-day intensive care unit stay or death). The Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) interpretability methods were used to assess the contributions of individual hematologic parameters. RESULTS: The ML interpretability analysis indicated that the predictive value of MDW is significantly reduced when other hematologic parameters and vital signs are considered. The results suggest that the complete blood count with differential (CBC-DIFF), alongside vital signs, can effectively replace MDW in high-accuracy machine learning algorithms for screening for poor outcomes associated with sepsis. CONCLUSION: MDW, although a newly approved biomarker for sepsis, does not significantly enhance prediction models when combined with routinely available parameters and vital signs. Hospitals, especially those with resource constraints, can rely on existing parameters with high-accuracy machine learning models to predict sepsis outcomes effectively, thereby reducing the need for specialized tests such as MDW.
ABSTRACT
Objective: The purpose of this study was to determine the association of a multi-pronged treatment program for emergency department (ED) patients with an acute presentation of opioid use disorder (OUD) with the rate of subsequent opioid overdose (OD). This approach included ED-initiated take-home naloxone, prescription buprenorphine, and an ED-based peer support and recovery program. Methods: This was a retrospective observational analysis of adult patients presenting to the ED at a large urban hospital system from November 1, 2017 to March 17, 2023. Patients with an ED discharge diagnosis of OD or OUD were included. The outcomes were subsequent 90-day OD and 180-day OD death. Post hoc analyses were performed to identify intervention utilization throughout the study period, including the COVID-19 pandemic, as well as ED characteristics associated with subsequent OD and OD death. Statistical comparisons were made using logistic regression and the chi-squared test. Results: A total of 2634 patients presented to the ED with an opioid OD or diagnosis of OUD. Subsequent 90-day OD decreased significantly over time (from 11.5% to 2.3%; odds ratio [OR] 0.85, confidence interval [CI] 0.82-0.89). No single intervention was independently associated with 90-day OD or 180-day OD death. Resource utilization was stable during the COVID-19 pandemic and increased afterward. A higher buprenorphine fill rate among all patients and in the Black race subgroup was associated with a decrease in 90-day OD. Conclusions: Subsequent OD and OD death decreased over time after implementation of a multi-pronged treatment program for ED patients with OUD. No single intervention was associated with a decrease in subsequent OD or OD death.
ABSTRACT
INTRODUCTION: Rapid outpatient diagnostic programs (RODP) expedite lung cancer evaluation, but their impact on racial disparities in the timeliness of evaluation is less clear. MATERIALS AND METHODS: This was a retrospective analysis of the impact of an internally developed, application-supported RODP on racial disparities in timely referral completion rates for patients with potential lung cancer at a safety-net healthcare system. An application screened referrals to pulmonology for indications of lung mass or nodule and presented relevant clinical information that enabled dedicated pulmonologists to efficiently review and triage cases according to urgency. Subsequent care coordination was overseen by a dedicated nurse coordinator. To determine the program's impact, we conducted an interrupted time series analysis of the monthly fraction of referrals completed within 30 days, stratified by those identified as White, non-Hispanic and those who were not (racial and ethnic minorities). RESULTS: There were 902 patients referred in the 2 years preintervention and 913 in the 2 years postintervention. Overall, the median age was 63 years, and 44.7% of referred patients were female; 44.2% were White, non-Hispanic, while racial and ethnic minorities constituted 54.3%. After the intervention, there was a significant improvement in the proportion of referrals completed within 30 days (62.4% vs. 48.2%, P < .01). The interrupted time series revealed a significant immediate improvement in timely completion among racial and ethnic minorities (23%, P < .01) that was not reflected in the majority White, non-Hispanic subgroup (11%, not significant). CONCLUSION: A thoughtfully designed and implemented RODP reduced racial disparities in the timely evaluation of potential lung cancer.
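A minimal segmented-regression sketch of the interrupted time series design described above, on simulated monthly completion fractions: the post term captures the immediate level change at the intervention and months_since_intervention the slope change, with autocorrelation-robust (HAC) standard errors. The monthly values are illustrative, loosely anchored to the ~48% pre- and ~62% post-intervention completion reported above, and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)

# Simulated monthly fraction of referrals completed within 30 days over
# 24 pre- and 24 post-intervention months (illustrative values only).
months = np.arange(48)
post = (months >= 24).astype(int)
rate = 0.48 + 0.14 * post + rng.normal(0, 0.03, size=48)
df = pd.DataFrame({
    "month": months,
    "post": post,
    "months_since_intervention": np.where(post == 1, months - 24, 0),
    "rate": rate,
})

# Segmented (interrupted time series) regression: `post` is the immediate level
# change at the intervention; `months_since_intervention` is the slope change.
its = smf.ols("rate ~ month + post + months_since_intervention", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 2})  # robust to autocorrelated monthly errors
print(its.params)
print(its.conf_int())
```

In the study itself this model would be fit separately for the racial and ethnic minority and White, non-Hispanic subgroups to compare their level changes.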
Subject(s)
Ethnic and Racial Minorities; Healthcare Disparities; Lung Neoplasms; Female; Humans; Male; Middle Aged; Interrupted Time Series Analysis; Lung Neoplasms/diagnosis; Lung Neoplasms/ethnology; Outpatients; Retrospective Studies; United States
ABSTRACT
Background: In this study, we compare management of patients with high-risk chronic obstructive pulmonary disease (COPD) in the United States to national and international guidelines and quality standards, including the COllaboratioN on QUality improvement initiative for achieving Excellence in STandards of COPD care (CONQUEST). Methods: Patients were identified from the DARTNet Practice Performance Registry and categorized into three high-risk cohorts in each year from 2011 to 2019: newly diagnosed (≤12 months after diagnosis), already diagnosed, and patients with potential undiagnosed COPD. Patients were considered high risk if they had a history of exacerbations or likely exacerbations (respiratory consult with prescribed medication). Descriptive statistics for 2019 are reported, along with annual trends. Findings: In 2019, 10% (n = 16,610/167,197) of patients met high-risk criteria. Evidence of spirometry for diagnosis was low; in 2019, 81% (n = 1228/1523) of newly diagnosed high-risk patients had no record of spirometry/peak expiratory flow in the 12 months pre- or post-diagnosis, and 43% (n = 651/1523) had no record of COPD symptom review. Among newly and already diagnosed high-risk patients, 52% (n = 4830/9350) had no evidence of COPD medication. Interpretation: Findings suggest inconsistent adherence to evidence-based guidelines and opportunities to improve identification, documentation of services, assessment, therapeutic intervention, and follow-up of patients with COPD. Funding: This study was conducted by the Observational and Pragmatic Research Institute (OPRI) Pte Ltd and was partially funded by Optimum Patient Care Global and AstraZeneca Ltd. No funding was received by the Observational & Pragmatic Research Institute Pte Ltd (OPRI) for its contribution.