ABSTRACT
There has been a steady rise in the use of clinical decision support (CDS) tools to guide nephrology as well as general clinical care. Prompted by guidance from federal agencies and concerns raised by clinical investigators, there has been a parallel rise in efforts to understand whether such tools exhibit algorithmic bias that leads to unfairness. This has spurred the more fundamental question of whether sensitive variables such as race should be included in CDS tools. To answer this question properly, it is necessary to understand how algorithmic bias arises. We break down three sources of bias encountered when using electronic health record data to develop CDS tools: (1) use of proxy variables, (2) observability concerns, and (3) underlying heterogeneity. We discuss how the question of whether to include sensitive variables like race often hinges more on qualitative considerations than on quantitative analysis, depending on the function that the sensitive variable serves. Based on our experience with our own institution's CDS governance group, we show how health system-based governance committees play a central role in guiding these difficult and important considerations. Ultimately, our goal is to foster a community practice of model development and governance teams that emphasizes consciousness about sensitive variables and prioritizes equity.
ABSTRACT
RATIONALE & OBJECTIVE: The life expectancy of patients treated with maintenance hemodialysis (MHD) is heterogeneous. Knowledge of life expectancy may focus care decisions on near-term versus long-term goals. Current tools are limited and focus on near-term mortality. Here, we develop models for predicting near-term mortality and long-term survival on MHD and assess their potential utility. STUDY DESIGN: Predictive modeling study. SETTING & PARTICIPANTS: 42,351 patients contributing 997,381 patient-months over 11 years, abstracted from the electronic health record (EHR) system of midsize, nonprofit dialysis providers. NEW PREDICTORS & ESTABLISHED PREDICTORS: Demographics, laboratory results, vital signs, and service utilization data available within the dialysis EHR. OUTCOME: For each patient-month, we ascertained death within the next 6 months (ie, near-term mortality) and survival over more than 5 years during receipt of MHD or after kidney transplantation (ie, long-term survival). ANALYTICAL APPROACH: We used least absolute shrinkage and selection operator (LASSO) logistic regression and gradient-boosting machines to predict each outcome. We compared these to time-to-event models spanning both time horizons. We explored the performance of decision rules at different cut points. RESULTS: All models achieved an area under the receiver operating characteristic curve of ≥0.80 and optimal calibration metrics in the test set. The long-term survival models had significantly better performance than the near-term mortality models. The time-to-event models performed similarly to the binary models. Applying different cut points spanning from the 1st to the 90th percentile of the predictions, a positive predictive value (PPV) of 54% could be achieved for near-term mortality, but with a poor sensitivity of 6%. A PPV of 71% could be achieved for long-term survival with a sensitivity of 67%. LIMITATIONS: The retrospective models would need to be prospectively validated before they could be appropriately used as clinical decision aids. CONCLUSIONS: A model built with readily available clinical variables to support easy implementation can predict clinically important life expectancy thresholds and shows promise as a clinical decision support tool for patients on MHD. Predicting long-term survival has better decision rule performance than predicting near-term mortality. PLAIN-LANGUAGE SUMMARY: Clinical prediction models (CPMs) are not widely used for patients undergoing maintenance hemodialysis (MHD). Although a variety of CPMs have been reported in the literature, many of these were not well designed to be easily implementable. We consider the performance of an implementable CPM for both near-term mortality and long-term survival for patients undergoing MHD. Both the near-term and long-term models have similar predictive performance, but the long-term models have greater clinical utility. We further consider how the differential performance of predicting over different time horizons may be used to inform clinical decision making. Although predictive modeling is not regularly used for MHD patients, such tools may help promote individualized care planning and foster shared decision making.
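To make the analytical approach concrete, the following is a minimal Python sketch of the modeling strategy named above: an L1-penalized (LASSO) logistic regression and a gradient-boosting machine, evaluated by AUROC, with decision-rule PPV and sensitivity computed at percentile-based cut points. It uses simulated placeholder data and scikit-learn, not the study's patient-month data or pipeline; all variable names and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))            # placeholder patient-month features
y = rng.integers(0, 2, 5000)               # placeholder outcome (e.g., death within 6 months)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# LASSO logistic regression and a gradient-boosting machine, as named in the abstract.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_train, y_train)
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05).fit(X_train, y_train)

for name, model in [("LASSO", lasso), ("GBM", gbm)]:
    p = model.predict_proba(X_test)[:, 1]
    print(name, "AUROC:", round(roc_auc_score(y_test, p), 3))
    # Sweep cut points over percentiles of predicted risk and report PPV and sensitivity,
    # mirroring the decision-rule analysis described in the abstract.
    for pct in (50, 75, 90):
        cut = np.percentile(p, pct)
        pred = p >= cut
        tp = np.sum(pred & (y_test == 1))
        ppv = tp / max(pred.sum(), 1)
        sens = tp / max((y_test == 1).sum(), 1)
        print(f"  cut at {pct}th percentile: PPV={ppv:.2f}, sensitivity={sens:.2f}")
```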
Subjects
Kidney Failure, Chronic; Renal Dialysis; Humans; Renal Dialysis/mortality; Renal Dialysis/methods; Male; Female; Middle Aged; Kidney Failure, Chronic/therapy; Kidney Failure, Chronic/mortality; Aged; Life Expectancy; Survival Rate/trends; Time Factors; Risk Assessment/methods; Retrospective Studies
ABSTRACT
OBJECTIVES: We sought to identify the impact of preeclampsia on infant and maternal health among women with rheumatic diseases. METHODS: A retrospective single-center cohort study was conducted to describe pregnancy and infant outcomes among women with systemic lupus erythematosus (SLE) with and without preeclampsia as compared to women with other rheumatic diseases with and without preeclampsia. RESULTS: We identified 263 singleton deliveries to 226 individual mothers (mean age 31 years, 35% non-Hispanic Black). Overall, 14% of women had preeclampsia; preeclampsia was more common among women with SLE than among those with other rheumatic diseases (27% vs 8%). Women with preeclampsia had a longer hospital stay post-delivery. Infants born to mothers with preeclampsia were delivered an average of 3.3 weeks earlier than those born to mothers without preeclampsia, were 4 times more likely to be born preterm, and were twice as likely to be admitted to the neonatal intensive care unit. The large majority of women with SLE in this cohort were prescribed hydroxychloroquine and aspirin, with no clear association of these medications with preeclampsia. CONCLUSIONS: We found that preeclampsia was an important driver of adverse infant and maternal outcomes. While preeclampsia was particularly common among women with SLE in this cohort, the impact of preeclampsia on the infants of all women with rheumatic diseases was similarly severe. To improve infant outcomes for women with rheumatic diseases, attention must be paid to preventing, identifying, and managing preeclampsia.
Subjects
Lupus Erythematosus, Systemic; Pre-Eclampsia; Rheumatic Diseases; Pregnancy; Infant, Newborn; Infant; Humans; Female; Adult; Pre-Eclampsia/epidemiology; Pre-Eclampsia/prevention & control; Lupus Erythematosus, Systemic/complications; Lupus Erythematosus, Systemic/drug therapy; Lupus Erythematosus, Systemic/epidemiology; Cohort Studies; Retrospective Studies; Maternal Health; Rheumatic Diseases/complications; Rheumatic Diseases/drug therapy; Rheumatic Diseases/epidemiology; Pregnancy Outcome/epidemiology
ABSTRACT
OBJECTIVE: This study aimed to develop a novel approach using routinely collected electronic health record (EHR) data to improve the prediction of a rare event. We illustrated this using an example of improving early prediction of an autism diagnosis, given its low prevalence, by leveraging correlations between autism and other neurodevelopmental conditions (NDCs). METHODS: To achieve this, we introduced a conditional multi-label model by merging conditional learning and multi-label methodologies. The conditional learning approach breaks a hard task into more manageable pieces at each stage, and the multi-label approach utilizes information from related neurodevelopmental conditions to learn predictive latent features. The study involved forecasting autism diagnosis by age 5.5 years, utilizing data from the first 18 months of life, and the analysis of feature importance correlations to explore the alignment within the feature space across different conditions. RESULTS: Upon analysis of health records from 18,156 children, we were able to generate a model that predicts a future autism diagnosis with moderate performance (AUROC = 0.76). The proposed conditional multi-label method significantly improved predictive performance, with an AUROC of 0.80 (p < 0.001). Further examination showed that the conditional approach and the multi-label approach each provided only a marginal lift in model performance on their own compared to a one-stage, one-label approach. We also demonstrated the generalizability and applicability of this method using simulated data with high correlation between feature vectors for different labels. CONCLUSION: Our findings underscore the effectiveness of the developed conditional multi-label model for early prediction of an autism diagnosis. The study introduces a versatile strategy applicable to prediction tasks involving limited target populations but sharing underlying features or etiology among related groups.
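One plausible, schematic reading of the conditional multi-label idea is sketched below in Python: a first-stage model handles the easier task of predicting any related neurodevelopmental condition, and a second-stage multi-label model learns the rare autism label jointly with correlated NDC labels among flagged children. This is not the authors' implementation; the data, labels, threshold, and choice of scikit-learn classifiers are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))            # placeholder features from the first 18 months of life
Y = rng.integers(0, 2, size=(2000, 3))     # placeholder labels: [autism, related NDC 1, related NDC 2]
any_ndc = Y.max(axis=1)                    # stage-1 target: any neurodevelopmental condition

# Stage 1 (conditional step): the easier task of predicting any related condition.
stage1 = LogisticRegression(max_iter=1000).fit(X, any_ndc)

# Stage 2 (multi-label step): fit jointly on autism and correlated NDC labels among flagged
# children, so the rare autism label borrows signal from the related conditions.
flagged = stage1.predict_proba(X)[:, 1] >= 0.2     # illustrative threshold
stage2 = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X[flagged], Y[flagged])

# Combine the stages: P(autism) = P(flagged) * P(autism | flagged).
p_flag = stage1.predict_proba(X)[:, 1]
p_autism = p_flag * stage2.estimators_[0].predict_proba(X)[:, 1]
```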
Subjects
Autistic Disorder; Electronic Health Records; Humans; Autistic Disorder/diagnosis; Child, Preschool; Infant; Male; Female; Child; Algorithms
ABSTRACT
BACKGROUND: Electronic health records (EHR) are widely used to develop clinical prediction models (CPMs). However, one of the challenges is that there is often a degree of informative missing data. For example, laboratory measures are typically taken when a clinician is concerned that there is a need. When data are so-called Not Missing at Random (NMAR), analytic strategies based on other missingness mechanisms are inappropriate. In this work, we seek to compare the impact of different strategies for handling missing data on CPM performance. METHODS: We considered a predictive model for rapid inpatient deterioration as an exemplar implementation. This model incorporated twelve laboratory measures with varying levels of missingness. Five labs had missingness levels around 50%, and the other seven had missingness levels around 90%. We included them based on the belief that their missingness status can be highly informative for the prediction. In our study, we explicitly compared several missing data strategies: mean imputation, normal-value imputation, conditional imputation, categorical encoding, and missingness embeddings. Some of these were also combined with last observation carried forward (LOCF). We implemented logistic LASSO regression, multilayer perceptron (MLP), and long short-term memory (LSTM) models as the downstream classifiers. We compared the AUROC on testing data and used bootstrapping to construct 95% confidence intervals. RESULTS: We had 105,198 inpatient encounters, with 4.7% having experienced the deterioration outcome of interest. LSTM models generally outperformed the cross-sectional models, and embedding approaches and categorical encoding yielded the best results. For the cross-sectional models, normal-value imputation with LOCF generated the best results. CONCLUSION: Strategies that accounted for the possibility of NMAR missing data yielded better model performance than those that did not. The embedding method had an advantage in that it did not require prior clinical knowledge. Using LOCF could enhance the performance of cross-sectional models but had the opposite effect in LSTM models.
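The snippet below is a small pandas sketch of three of the named strategies for a single lab value: a missingness indicator (one form of categorical encoding of "not measured"), last observation carried forward within an encounter, and normal-value imputation for values never measured. The data frame, column names, and "normal" values are placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd

# Toy longitudinal lab data: one row per encounter-hour, NaN when the lab was not drawn.
labs = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2, 2],
    "lactate": [np.nan, 2.1, np.nan, np.nan, np.nan],
    "creatinine": [1.0, np.nan, 1.4, np.nan, 0.9],
})

normal_values = {"lactate": 1.0, "creatinine": 1.0}   # placeholder "normal" values

for col in ["lactate", "creatinine"]:
    # Missingness indicator: lets the model learn that "not measured" is itself informative (NMAR).
    labs[f"{col}_missing"] = labs[col].isna().astype(int)
    # Last observation carried forward within each encounter.
    labs[f"{col}_locf"] = labs.groupby("encounter_id")[col].ffill()
    # Normal-value imputation for labs never measured during the encounter.
    labs[f"{col}_locf"] = labs[f"{col}_locf"].fillna(normal_values[col])

print(labs)
```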
Subjects
Electronic Health Records; Humans; Clinical Deterioration; Models, Statistical; Clinical Laboratory Techniques
ABSTRACT
Objectives: Describe contemporary ECMO utilization patterns among patients with traumatic brain injury (TBI) and examine clinical outcomes among TBI patients requiring ECMO. Design: Retrospective cohort study. Setting: Premier Healthcare Database (PHD) between January 2016 and June 2020. Subjects: Adult patients with TBI who were mechanically ventilated, stratified by exposure to ECMO. Results: Among patients exposed to ECMO, we examined the following clinical outcomes: hospital LOS, ICU LOS, duration of mechanical ventilation, and hospital mortality. Of our initial cohort (n = 59,612), 118 patients (0.2%) were placed on ECMO during hospitalization. Most patients were placed on ECMO within the first 2 days of admission (54.3%). Factors associated with ECMO utilization included younger age (OR 0.96, 95% CI 0.95-0.97), higher injury severity score (ISS) (OR 1.03, 95% CI 1.01-1.04), vasopressor utilization (OR 2.92, 95% CI 1.90-4.48), tranexamic acid utilization (OR 1.84, 95% CI 1.12-3.04), baseline comorbidities (OR 1.06, 95% CI 1.03-1.09), and care in a teaching hospital (OR 3.04, 95% CI 1.31-7.05). A moderate degree (ICC = 19.5%) of the variation in ECMO use was explained at the individual hospital level. Patients exposed to ECMO had longer median (IQR) hospital and ICU lengths of stay (LOS) [26 days (11-36) versus 9 days (4-8) and 19.5 days (8-32) versus 5 days (2-11), respectively] and a longer median (IQR) duration of mechanical ventilation [18 days (8-31) versus 3 days (2-8)]. Patients exposed to ECMO experienced a hospital mortality rate of 33.9%, compared to 21.2% among TBI patients unexposed to ECMO. Conclusions: ECMO utilization in mechanically ventilated patients with TBI is rare, with significant variation across hospitals. The impact of ECMO on healthcare utilization and hospital mortality following TBI is comparable to that of non-TBI conditions requiring ECMO. Further research is necessary to better understand the role of ECMO following TBI and identify patients who may benefit from this therapy.
Subjects
Brain Injuries, Traumatic; Extracorporeal Membrane Oxygenation; Adult; Humans; United States/epidemiology; Retrospective Studies; Hospitalization; Length of Stay; Brain Injuries, Traumatic/therapy
ABSTRACT
There is tremendous interest in understanding how neighborhoods impact health by linking extant social and environmental drivers of health (SDOH) data with electronic health record (EHR) data. Studies quantifying such associations often use static neighborhood measures. Little research examines the impact of gentrification-a measure of neighborhood change-on the health of long-term neighborhood residents using EHR data, which may have a more generalizable population than traditional approaches. We quantified associations between gentrification and health and healthcare utilization by linking longitudinal socioeconomic data from the American Community Survey with EHR data across two health systems accessed by long-term residents of Durham County, NC, from 2007 to 2017. Census block group-level neighborhoods were eligible to be gentrified if they had low socioeconomic status relative to the county average. Gentrification was defined using socioeconomic data from 2006 to 2010 and 2011-2015, with the Steinmetz-Wood definition. Multivariable logistic and Poisson regression models estimated associations between gentrification and development of health indicators (cardiovascular disease, hypertension, diabetes, obesity, asthma, depression) or healthcare encounters (emergency department [ED], inpatient, or outpatient). Sensitivity analyses examined two alternative gentrification measures. Of the 99 block groups within the city of Durham, 28 were eligible (N = 10,807; median age = 42; 83% Black; 55% female) and 5 gentrified. Individuals in gentrifying neighborhoods had lower odds of obesity (odds ratio [OR] = 0.89; 95% confidence interval [CI]: 0.81-0.99), higher odds of an ED encounter (OR = 1.10; 95% CI: 1.01-1.20), and lower risk for outpatient encounters (incidence rate ratio = 0.93; 95% CI: 0.87-1.00) compared with non-gentrifying neighborhoods. The association between gentrification and health and healthcare utilization was sensitive to gentrification definition.
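A minimal Python sketch of the regression strategy described above follows: a logistic model for a binary health indicator (yielding an odds ratio for gentrification exposure) and a Poisson model for encounter counts (yielding an incidence rate ratio), fit with statsmodels. The simulated data frame, variable names, and effect sizes are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gentrified": rng.integers(0, 2, n),      # 1 = long-term resident of a gentrifying block group
    "age": rng.integers(20, 80, n),
    "female": rng.integers(0, 2, n),
})
# Simulated outcomes with a weak exposure effect (placeholder data-generating process).
lin_pred = -1.0 - 0.1 * df["gentrified"] + 0.01 * (df["age"] - 50)
df["obesity"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))
df["ed_visits"] = rng.poisson(np.exp(-0.5 + 0.1 * df["gentrified"]))

# Logistic model -> odds ratio; Poisson model -> incidence rate ratio.
logit = smf.logit("obesity ~ gentrified + age + female", data=df).fit(disp=0)
pois = smf.poisson("ed_visits ~ gentrified + age + female", data=df).fit(disp=0)
print("OR (gentrified):", np.exp(logit.params["gentrified"]).round(2))
print("IRR (gentrified):", np.exp(pois.params["gentrified"]).round(2))
```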
Subjects
Residence Characteristics; Residential Segregation; Humans; Female; Adult; Male; Patient Acceptance of Health Care; Odds Ratio; Obesity
ABSTRACT
BACKGROUND: Early hypotension following moderate to severe traumatic brain injury (TBI) is associated with increased mortality and poor long-term outcomes. Current guidelines suggest the use of intravenous vasopressors to support blood pressure following TBI; however, the guidelines do not specify vasopressor type, resulting in variation in clinical practice. Minimal data are available to guide clinicians on the optimal early vasopressor choice to support blood pressure following TBI. Therefore, we conducted a multicenter study to examine initial vasopressor choice for the support of blood pressure following TBI and its association with clinical and functional outcomes after injury. METHODS: We conducted a retrospective cohort study of patients enrolled in the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) study, an 18-center prospective cohort study of patients with TBI evaluated in participating level I trauma centers. We examined adults with moderate to severe TBI (defined as a Glasgow Coma Scale score < 13) who were admitted to the intensive care unit and received an intravenous vasopressor within 48 h of admission. The primary exposure was initial vasopressor choice (phenylephrine versus norepinephrine), and the primary outcome was the 6-month Glasgow Outcome Scale Extended (GOSE), with the following secondary outcomes: length of hospital stay, length of intensive care unit stay, in-hospital mortality, new requirement for dialysis, and 6-month Disability Rating Scale. Regression analysis was used to assess differences in outcomes between patients exposed to norepinephrine versus phenylephrine, with propensity weighting to address selection bias due to the nonrandom allocation of the treatment groups and patient dropout. RESULTS: The final study sample included 156 patients, of whom 79 (51%) received norepinephrine, 69 (44%) received phenylephrine, and 8 (5%) received an alternate drug as their initial vasopressor. Of these, 121 (77%) were men, with a mean age of 43.1 years. Of patients receiving norepinephrine as their initial vasopressor, 32% had a favorable outcome (GOSE 5-8), whereas 40% of patients receiving phenylephrine as their initial vasopressor had a favorable outcome. Compared with phenylephrine, exposure to norepinephrine was not significantly associated with improved 6-month GOSE (weighted odds ratio 1.40, 95% confidence interval 0.66-2.96, p = 0.37) or any secondary outcome. CONCLUSIONS: The majority of patients with moderate to severe TBI received either phenylephrine or norepinephrine as first-line agents for blood pressure support following brain injury. Initial choice of norepinephrine, compared with phenylephrine, was not associated with improved clinical or functional outcomes.
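Below is a minimal Python sketch of the propensity-weighting idea described above: a propensity score model for initial vasopressor choice, stabilized inverse-probability-of-treatment weights, and a weighted outcome model for a favorable 6-month outcome. It is a simplified illustration with placeholder data and variable names, not the study's analysis; in practice one would use the ordinal GOSE outcome, richer covariates, and robust standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "norepi": rng.integers(0, 2, n),          # 1 = norepinephrine, 0 = phenylephrine (placeholder)
    "age": rng.normal(43, 15, n),
    "gcs": rng.integers(3, 13, n),
})
df["favorable"] = rng.binomial(1, 0.35, n)    # GOSE 5-8 at 6 months (placeholder outcome)

# Step 1: propensity score model for initial vasopressor choice.
ps_model = smf.logit("norepi ~ age + gcs", data=df).fit(disp=0)
ps = ps_model.predict(df)

# Step 2: stabilized inverse-probability-of-treatment weights.
p_treat = df["norepi"].mean()
df["iptw"] = np.where(df["norepi"] == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# Step 3: weighted outcome model; the exponentiated coefficient is a weighted odds ratio.
out = smf.glm("favorable ~ norepi", data=df,
              family=sm.families.Binomial(), freq_weights=df["iptw"]).fit()
print("Weighted OR:", np.exp(out.params["norepi"]).round(2))
```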
Subjects
Brain Injuries, Traumatic; Adult; Brain Injuries, Traumatic/complications; Brain Injuries, Traumatic/drug therapy; Glasgow Coma Scale; Humans; Male; Prospective Studies; Retrospective Studies; Vasoconstrictor Agents/therapeutic use
ABSTRACT
BACKGROUND: In the early stages of the COVID-19 pandemic, our institution was interested in forecasting how long surgical patients receiving elective procedures would spend in the hospital. Initial examination of our models indicated that, due to the skewed nature of the length of stay, accurate prediction was challenging, and we instead opted for a simpler classification model. In this work we perform a deeper examination of predicting in-hospital length of stay. METHODS: We used electronic health record data on length of stay from 42,209 elective surgeries. We compared different loss functions (mean squared error, mean absolute error, mean relative error), algorithms (LASSO, random forests, multilayer perceptron), and data transformations (log and truncation). We also assessed the performance of a two-stage hybrid classification-regression approach. RESULTS: Our results show that while it is possible to accurately predict short lengths of stay, predicting longer lengths of stay is extremely challenging. As such, we opted for a two-stage model that first classifies patients into long versus short lengths of stay and then, in a second stage, fits a regressor among those predicted to have a short length of stay. DISCUSSION: The results indicate both the challenges and the considerations necessary when applying machine-learning methods to skewed outcomes. CONCLUSIONS: Two-stage models allow those developing clinical decision support tools to explicitly acknowledge where they can and cannot make accurate predictions.
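A minimal Python sketch of the two-stage idea is shown below: a classifier separates long from short stays, and a regressor is fit only where numeric prediction is feasible. The simulated skewed outcome, the 5-day threshold, and the choice of random forests are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 15))                 # placeholder preoperative features
los = np.exp(rng.normal(1.2, 0.8, n))        # right-skewed length of stay in days (placeholder)
threshold = 5.0                              # "long stay" cut point; illustrative only

# Stage 1: classify long versus short stays.
is_long = (los > threshold).astype(int)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, is_long)

# Stage 2: regress LOS only among short stays, where the outcome is better behaved.
short_mask = los <= threshold
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[short_mask], los[short_mask])

# At prediction time: report a numeric LOS only for cases classified as short;
# flag the rest as "long stay" rather than forcing an unreliable point estimate.
pred_long = clf.predict(X).astype(bool)
pred_los = np.where(pred_long, np.nan, reg.predict(X))
```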
Subjects
COVID-19; Pandemics; COVID-19/epidemiology; Hospitals; Humans; Length of Stay; Machine Learning
ABSTRACT
BACKGROUND: Clinical decision support (CDS) tools built using adult data do not typically perform well for children. We explored how best to leverage adult data to improve the performance of such tools. This study assesses whether it is better to build CDS tools for children using data from children alone or using combined data from both adults and children. METHODS: Retrospective cohort using data from 2017 to 2020. Participants included all individuals (adults and children) receiving an elective surgery at a large academic medical center that provides adult and pediatric services. We predicted the need for mechanical ventilation or admission to the intensive care unit (ICU). Predictor variables included demographic, clinical, and service utilization factors known prior to surgery. We compared predictive models built using machine learning to regression-based methods, using either a pediatric-only or a combined adult-pediatric cohort. We compared model performance based on the area under the receiver operating characteristic curve. RESULTS: While we found that adults and children have different risk factors, machine learning methods were able to appropriately model the underlying heterogeneity of each population and produced equally accurate predictive models whether using data only from pediatric patients or combined data from both children and adults. Results from regression-based methods were improved by the use of pediatric-specific data. CONCLUSIONS: CDS tools for children can successfully use combined data from adults and children if the model accounts for underlying heterogeneity, as in machine learning models.
Subjects
Decision Support Systems, Clinical; Adult; Child; Hospitalization; Humans; Intensive Care Units; Machine Learning; Retrospective Studies
ABSTRACT
BACKGROUND: Asthma exacerbations are triggered by a variety of clinical and environmental factors, but their relative impacts on exacerbation risk are unclear. There is a critical need to develop methods to identify children at high risk for future exacerbation to allow targeted prevention measures. We sought to evaluate the utility of models using spatiotemporally resolved climatic data and individual electronic health records (EHR) in predicting pediatric asthma exacerbations. METHODS: We extracted retrospective EHR data for 5982 children with asthma who had an encounter within the Duke University Health System between January 1, 2014 and December 31, 2019. EHR data were linked to spatially resolved environmental data and temporally resolved climate, pollution, allergen, and influenza case data. We used XGBoost to build predictive models of asthma exacerbation over 30-180 day time horizons and evaluated the contributions of different data types to model performance. RESULTS: Models using readily available EHR data performed moderately well, as measured by the area under the receiver operating characteristic curve (AUC 0.730-0.742), over all three time horizons. Inclusion of spatial and temporal data did not significantly improve model performance. Generating a decision rule with a sensitivity of 70% produced a positive predictive value of 13.8% for 180-day outcomes but only 2.9% for 30-day outcomes. CONCLUSIONS: EHR data-based models perform moderately well over a 30-180 day time horizon to identify children who would benefit from asthma exacerbation prevention measures. Due to the low rate of exacerbations, longer-term models are likely to be most clinically useful. TRIAL REGISTRATION: Not applicable.
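The sketch below illustrates, in Python, the decision-rule evaluation described above: an XGBoost classifier is fit, a threshold is chosen to reach roughly 70% sensitivity, and the resulting PPV is reported, showing how a rare outcome depresses PPV even when discrimination looks reasonable. The simulated data, prevalence, and hyperparameters are placeholders, not the study's values.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 25))              # placeholder EHR and environmental features
y = rng.binomial(1, 0.05, 4000)              # exacerbation within the horizon (low prevalence)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]

# Pick the threshold that yields roughly 70% sensitivity, then report the resulting PPV.
pos_scores = np.sort(p[y_te == 1])
thresh = pos_scores[int(0.30 * len(pos_scores))]   # ~70% of true positives score at or above this
pred = p >= thresh
ppv = (pred & (y_te == 1)).sum() / max(pred.sum(), 1)
print(f"threshold={thresh:.3f}, sensitivity~0.70, PPV={ppv:.3f}")
```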
Subjects
Asthma; Machine Learning; Child; Electronic Health Records; Humans; ROC Curve; Retrospective Studies
ABSTRACT
BACKGROUND: Sodium-glucose transporter 2 (SGLT2) inhibitors and glucagon-like peptide-1 (GLP-1) agonists have demonstrated beneficial outcomes in patients with type 2 diabetes at high cardiovascular risk. Unfortunately, these agents are still underutilized in primary care practice. A clinical pharmacist was embedded at a primary care clinic to provide diabetes and hypertension management under a collaborative practice agreement with a supervising physician. OBJECTIVES: This study evaluated whether the presence of an embedded pharmacist in a primary care clinic affects prescribing patterns of novel, evidence-based diabetes therapies. METHODS: In this single-center, retrospective cohort study, we abstracted information on SGLT2 inhibitor and GLP-1 agonist prescribing patterns from 3 primary care clinics across 2 time periods. We used a difference-in-difference analysis to compare prescription rates and assess the impact of embedding the pharmacist into clinical practice. Prescriptions written by the pharmacist were excluded. RESULTS: Across all 3 clinics, 1309 and 1489 patients were included in the pre-intervention and postintervention periods, respectively. The percentage of patients prescribed either an SGLT2 inhibitor or a GLP-1 agonist, which was similar between groups at baseline, rose to 11.6% in the nonintervention clinics and 15.0% in the intervention clinic. There was a statistically significant increase in SGLT2 inhibitor and GLP-1 agonist prescribing in the intervention clinic compared with the nonintervention clinics (P = 0.034). This change in prescribing patterns appeared even greater when excluding prescribers who were not present during both the pre-intervention and postintervention periods (P = 0.009). CONCLUSION: The presence of a pharmacist is associated with increased SGLT2 inhibitor and GLP-1 agonist prescribing within a clinic, even among patients not seen directly by the pharmacist. These results suggest that an on-site clinical pharmacist providing care for patients with diabetes may indirectly influence the prescribing behavior of co-located primary care providers, increasing the adoption of novel noninsulin diabetes medications.
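A minimal Python sketch of the difference-in-difference logic follows: the interaction between the intervention-clinic indicator and the post-period indicator is the estimate of the pharmacist's effect on prescribing over and above secular trends. The simulated data, variable names, effect sizes, and the choice of a logistic specification are illustrative assumptions, not the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "intervention_clinic": rng.integers(0, 2, n),   # 1 = clinic with an embedded pharmacist
    "post_period": rng.integers(0, 2, n),           # 1 = after the pharmacist was embedded
})
# Simulated prescribing of an SGLT2 inhibitor or GLP-1 agonist, with a small added
# bump for the intervention clinic in the post period (placeholder effect size).
p = 0.08 + 0.03 * df["post_period"] + 0.04 * df["intervention_clinic"] * df["post_period"]
df["novel_rx"] = rng.binomial(1, p)

# Difference-in-differences: the interaction term captures the intervention effect.
did = smf.logit("novel_rx ~ intervention_clinic * post_period", data=df).fit(disp=0)
print("DiD interaction OR:", np.exp(did.params["intervention_clinic:post_period"]).round(2))
```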
Subjects
Diabetes Mellitus, Type 2; Sodium-Glucose Transporter 2 Inhibitors; Diabetes Mellitus, Type 2/drug therapy; Humans; Pharmacists; Primary Health Care; Retrospective Studies
ABSTRACT
Statins failed to reduce cardiovascular (CV) events in trials of patients on dialysis. However, trial populations used criteria that often excluded those with atherosclerotic heart disease (ASHD), in whom statins have the greatest benefit, and included outcome composites with high rates of nonatherosclerotic CV events that may not be modified by statins. Here, we study whether statin use is associated with lower atherosclerotic CV risk among patients with known ASHD on dialysis, including in those likely to receive a kidney transplant, a group excluded from trials but with lower competing mortality risks. METHODS: Using data from the United States Renal Data System, including Medicare claims, we identified adults initiating dialysis with ASHD. We matched statin users 1:1 to statin nonusers with propensity scores incorporating hard matches for age and kidney transplant listing status. Using Cox models, we evaluated associations of statin use with the primary composite of fatal/nonfatal myocardial infarction and stroke (including within prespecified subgroups of younger age [<50 years] and waitlisting status); secondary outcomes included all-cause mortality and the composite of all-cause mortality, nonfatal myocardial infarction, or stroke. RESULTS: Of 197,716 patients with ASHD, 47,562 (24%) were consistent statin users, from which we created 46,186 matched pairs. Over a median of 662 days, statin users had a similar risk of fatal/nonfatal myocardial infarction or stroke overall (hazard ratio [HR] 1.00, 95% CI 0.97-1.02) and in subgroups (age < 50 years [HR = 1.05, 95% CI 0.95-1.17]; waitlisted for kidney transplant [HR 0.99, 95% CI 0.97-1.02]). Statin use was modestly associated with lower all-cause mortality (HR 0.96, 95% CI 0.94-0.98; E-value = 1.21) and, similarly, with a modestly lower composite risk of all-cause mortality, nonfatal myocardial infarction, or stroke over the first 2 years (HR 0.90, 95% CI 0.88-0.91) that attenuated thereafter (HR 0.98, 95% CI 0.96-1.01). CONCLUSIONS: Our large observational analyses are consistent with trials in more selected populations and suggest that statins may not meaningfully reduce atherosclerotic CV events even among incident dialysis patients with established ASHD and those likely to receive kidney transplants.
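The Python sketch below illustrates the general propensity-matched Cox workflow described above: estimate propensity scores for statin use, form 1:1 nearest-neighbor matches, and fit a Cox model in the matched cohort. It is a simplified stand-in for the study's analysis; the data are simulated, the matching is greedy and with replacement for brevity, and the study additionally hard-matched on age and transplant listing status.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "statin": rng.integers(0, 2, n),
    "age": rng.normal(65, 12, n),
    "waitlisted": rng.binomial(1, 0.2, n),
})
df["time"] = rng.exponential(600, n)          # days to event or censoring (placeholder)
df["event"] = rng.binomial(1, 0.4, n)         # fatal/nonfatal MI or stroke (placeholder)

# 1) Propensity score for statin use.
ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "waitlisted"]], df["statin"])
df["ps"] = ps_model.predict_proba(df[["age", "waitlisted"]])[:, 1]

# 2) 1:1 nearest-neighbor match of each statin user to a nonuser on the propensity score.
users, nonusers = df[df["statin"] == 1], df[df["statin"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(nonusers[["ps"]])
_, idx = nn.kneighbors(users[["ps"]])
matched = pd.concat([users, nonusers.iloc[idx.ravel()]])

# 3) Cox model for the composite outcome in the matched cohort.
cph = CoxPHFitter()
cph.fit(matched[["time", "event", "statin"]], duration_col="time", event_col="event")
print(cph.summary)
```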
Subjects
Atherosclerosis/drug therapy; Coronary Disease/drug therapy; Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use; Kidney Failure, Chronic/therapy; Renal Dialysis; Adult; Age Factors; Aged; Aged, 80 and over; Atherosclerosis/epidemiology; Cause of Death; Coronary Disease/epidemiology; Female; Humans; Kaplan-Meier Estimate; Kidney Transplantation; Male; Middle Aged; Myocardial Infarction/epidemiology; Propensity Score; Stroke/epidemiology
ABSTRACT
OBJECTIVES: Traumatic brain injury is a leading cause of death and disability in the United States. While the impact of early multiple organ dysfunction syndrome has been studied in many critical care paradigms, the clinical impact of early multiple organ dysfunction syndrome in traumatic brain injury is poorly understood. We examined the incidence and impact of early multiple organ dysfunction syndrome on clinical, functional, and disability outcomes over the year following traumatic brain injury. DESIGN: Retrospective cohort study. SETTING: Patients enrolled in the Transforming Clinical Research and Knowledge in Traumatic Brain Injury study, an 18-center prospective cohort study of traumatic brain injury patients evaluated in participating level 1 trauma centers. SUBJECTS: Adult (age > 17 yr) patients with moderate-severe traumatic brain injury (Glasgow Coma Scale < 13). We excluded patients with major extracranial injury (Abbreviated Injury Scale score ≥ 3). INTERVENTIONS: Development of early multiple organ dysfunction syndrome, defined as a maximum modified Sequential Organ Failure Assessment score greater than 7 during the initial 72 hours following admission. MEASUREMENTS AND MAIN RESULTS: The main outcomes were: hospital mortality, length of stay, 6-month functional and disability domains (Glasgow Outcome Scale-Extended and Disability Rating Scale), and 1-year mortality. Secondary outcomes included: ICU length of stay, 3-month Glasgow Outcome Scale-Extended, 3-month Disability Rating Scale, 1-year Glasgow Outcome Scale-Extended, and 1-year Disability Rating Scale. We examined 373 subjects with moderate-severe traumatic brain injury. The mean (sd) Glasgow Coma Scale in the emergency department was 5.8 (3.2), with 280 subjects (75%) classified as severe traumatic brain injury (Glasgow Coma Scale 3-8). Among subjects with moderate-severe traumatic brain injury, 252 (68%) developed early multiple organ dysfunction syndrome. Subjects that developed early multiple organ dysfunction syndrome had a 75% decreased odds of a favorable outcome (Glasgow Outcome Scale-Extended 5-8) at 6 months (adjusted odds ratio, 0.25; 95% CI, 0.12-0.51) and increased disability (higher Disability Rating Scale score) at 6 months (adjusted mean difference, 2.04; 95% CI, 0.92-3.17). Subjects that developed early multiple organ dysfunction syndrome experienced an increased hospital length of stay (adjusted mean difference, 11.4 d; 95% CI, 7.1-15.8), with a nonsignificantly decreased survival to hospital discharge (odds ratio, 0.47; 95% CI, 0.18-1.2). CONCLUSIONS: Early multiple organ dysfunction following moderate-severe traumatic brain injury is common and independently impacts multiple domains (mortality, function, and disability) over the year following injury. Further research is necessary to understand underlying mechanisms, improve early recognition, and optimize management strategies.
Subjects
Brain Injuries, Traumatic/complications; Functional Status; Multiple Organ Failure/etiology; Adult; Brain Injuries, Traumatic/epidemiology; Cohort Studies; Female; Glasgow Coma Scale; Glasgow Outcome Scale; Humans; Male; Multiple Organ Failure/epidemiology; Organ Dysfunction Scores; Proportional Hazards Models; Prospective Studies; Retrospective Studies
ABSTRACT
Electronic health records data are becoming a key data resource in clinical research. Owing in large part to the efficiency of data collection, electronic health records data are being used for clinical trials. This includes both large-scale pragmatic trials and smaller, more focused point-of-care trials. While electronic health records data open up a number of scientific opportunities, they also present a number of analytic challenges. This article discusses five particular challenges related to organizing electronic health records data for analytic purposes. These are as follows: (1) data are not organized for research purposes, (2) data are both densely and irregularly observed, (3) we do not have all of the data elements we may want or need, (4) data are both cross-sectional and longitudinal, and (5) data may be informatively observed. While laying out these challenges, the article notes how many of them can be addressed by careful and thoughtful study design as well as by integration of clinicians and informaticians into the analytic team.
Subjects
Clinical Trials as Topic/methods; Electronic Health Records; Cross-Sectional Studies; Data Collection; Humans; Longitudinal Studies; Patient Selection; Pragmatic Clinical Trials as Topic/methods; Research Design
ABSTRACT
BACKGROUND: Asthma exacerbations in children often require medications, urgent care, and hospitalization. Multiple environmental triggers have been associated with asthma exacerbations, including particulate matter 2.5 (PM2.5) and ozone, which are primarily generated by motor vehicle exhaust. There is mixed evidence as to whether proximity to highways increases the risk of asthma exacerbations. METHODS: To evaluate the impact of highway proximity, we assessed the association between asthma exacerbations and the distance of a child's primary residence to two types of roadways in Durham County, North Carolina, accounting for other patient-level factors. We abstracted data from the Duke University Health System electronic health record (EHR), identifying 6208 children with asthma between 2014 and 2019. We geocoded each child's distance to roadways (both 35+ MPH and 55+ MPH). We classified asthma exacerbation severity into four tiers and fitted a recurrent event survival model to account for multiple exacerbations. RESULTS: There was no observed effect of residential distance from a 55+ MPH highway (hazard ratio: 0.98; 95% confidence interval: 0.94, 1.01) or distance to a 35+ MPH roadway (hazard ratio: 0.98; 95% confidence interval: 0.83, 1.15) on any asthma exacerbation. Even those children living closest to highways (less than 0.25 miles) had no increased risk of exacerbation. These results were consistent across different demographic strata. CONCLUSIONS: While the results were non-significant, characteristics of the study sample - namely, farther distances to roadways and generally low levels of ambient pollution - may contribute to the lack of effect. Compared to previous studies, which often relied on self-reported measures, we were able to obtain a more objective assessment of outcomes. Overall, this work highlights the opportunity to use EHR data to study environmental impacts on disease.
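The sketch below shows one common way to set up a recurrent-event survival model of the kind named above, using an Andersen-Gill-style counting-process layout and the lifelines package in Python. The data-generating loop, covariate names, and follow-up window are placeholders for illustration; the study's actual model and covariates may differ.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for child in range(300):
    dist = rng.uniform(0.1, 5.0)             # miles to nearest 55+ MPH highway (placeholder)
    age = rng.integers(2, 18)
    start = 0.0
    # Each child contributes one row per at-risk interval; an interval ends with an
    # exacerbation (event = 1) or with administrative censoring at day 365 (event = 0).
    while start < 365:
        gap = rng.exponential(200)
        stop = min(start + gap, 365)
        rows.append({"child_id": child, "start": start, "stop": stop,
                     "event": int(stop < 365), "dist_55mph": dist, "age": age})
        start = stop
df = pd.DataFrame(rows)

# Cox model on (start, stop] intervals, so children with several exacerbations
# contribute several intervals (recurrent events).
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="child_id", event_col="event", start_col="start", stop_col="stop")
print(ctv.summary)
```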
Subjects
Air Pollutants; Air Pollution; Asthma; Air Pollutants/analysis; Air Pollutants/toxicity; Air Pollution/analysis; Air Pollution/statistics & numerical data; Asthma/epidemiology; Child; Electronic Health Records; Environmental Exposure/statistics & numerical data; Humans; North Carolina/epidemiology; Vehicle Emissions/analysis; Vehicle Emissions/toxicity
ABSTRACT
BACKGROUND: The National Comprehensive Cancer Network (NCCN) Distress Thermometer (DT) uses a 10-point scale (in which 0 indicates no distress and 10 indicates extreme distress) to measure patient-reported distress. In the current study, the authors sought to examine the relationship between treatment and NCCN DT scores in patients with breast cancer over time. METHODS: The authors included women aged ≥18 years who were diagnosed with stage 0 to stage IV breast cancer (according to the seventh edition of the American Joint Committee on Cancer staging system) at a 3-hospital health system from January 2014 to July 2016. Linear mixed effects models adjusted for covariates including stage of disease, race/ethnicity, insurance, and treatment sequence (neoadjuvant vs adjuvant) were used to estimate adjusted mean changes in the DT score (MSCs) per week for patients undergoing lumpectomy, mastectomy only, and mastectomy with reconstruction (MR). RESULTS: The authors analyzed 12,569 encounters for 1029 unique patients (median score, 4; median follow-up, 67 weeks). Patients treated with MR (118 patients) were younger and more likely to be married, white, and privately insured compared with patients undergoing lumpectomy (620 patients) and mastectomy only (291 patients) (all P < .01). After adjusting for covariates, distress scores declined significantly across all 3 surgical cohorts, with patients undergoing MR having both the most preoperative distress and the greatest decline in distress prior to surgery (MSC/week: -0.073 for MR vs -0.031 for lumpectomy vs -0.033 for mastectomy only; P = .001). Neoadjuvant therapy was associated with a longitudinal decline in distress for patients treated with lumpectomy (-1.023) and mastectomy only (-0.964). Over time, ductal carcinoma in situ (-0.503) and black race (-1.198) were associated with declining distress among patients treated with lumpectomy and MR, respectively, whereas divorced patients who were treated with mastectomy only (0.948) and single patients treated with lumpectomy (0.476) experienced increased distress (all P < .05). CONCLUSIONS: When examined longitudinally in consecutive patients, the NCCN DT can provide patient-reported data to inform expectations and guide targeted support for patients with breast cancer.
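A minimal Python sketch of the kind of random-intercept linear mixed model described above follows, using statsmodels with simulated placeholder data; the patient identifiers, weekly DT scores, surgery groups, and the data-generating process are all illustrative assumptions, and the study adjusted for additional covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(200):
    surgery = rng.choice(["lumpectomy", "mastectomy", "mastectomy_recon"])
    base = rng.normal(4, 2)
    for week in rng.choice(np.arange(0, 104), size=rng.integers(3, 10), replace=False):
        # Distress declines slowly over time, with a patient-level random intercept (placeholder).
        dt = base - 0.03 * week + rng.normal(0, 1.5)
        rows.append({"patient": pid, "week": int(week), "surgery": surgery,
                     "dt_score": float(np.clip(dt, 0, 10))})
df = pd.DataFrame(rows)

# Random-intercept linear mixed model: the surgery-by-week interaction captures
# group differences in the weekly rate of change in distress.
model = smf.mixedlm("dt_score ~ week * surgery", data=df, groups=df["patient"]).fit()
print(model.summary())
```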
Subjects
Breast Neoplasms/diagnosis; Breast Neoplasms/psychology; Psychological Distress; Aged; Breast Neoplasms/therapy; Carcinoma, Intraductal, Noninfiltrating/diagnosis; Carcinoma, Intraductal, Noninfiltrating/psychology; Carcinoma, Intraductal, Noninfiltrating/therapy; Female; Humans; Insurance, Health; Mammaplasty/psychology; Marital Status; Mastectomy/psychology; Mastectomy, Segmental; Middle Aged
ABSTRACT
OBJECTIVES: Previous studies have examined the performance of the National Early Warning Score in predicting in-hospital deterioration and death, but data are lacking with respect to patient outcomes following implementation of the National Early Warning Score. We sought to determine the effectiveness of National Early Warning Score implementation in predicting and preventing patient deterioration in a clinical setting. DESIGN: Retrospective cohort study. SETTING: Tertiary care academic facility and a community hospital. PATIENTS: Patients 18 years old or older hospitalized from March 1, 2014, to February 28, 2015 (before National Early Warning Score implementation) or from August 1, 2015, to July 31, 2016 (after National Early Warning Score implementation). INTERVENTIONS: Implementation of the National Early Warning Score within the electronic health record and an associated best practice alert. MEASUREMENTS AND MAIN RESULTS: In this study of 85,322 patients (42,402 patients pre-National Early Warning Score and 42,920 patients post-National Early Warning Score implementation), the primary outcome of rate of ICU transfer or death did not change after National Early Warning Score implementation, with adjusted hazard ratios of 0.94 (0.84-1.05) and 0.90 (0.77-1.05) at our academic and community hospital, respectively. In total, 175,357 best practice advisories fired during the study period, with the best practice advisory performing better at the community hospital than at the academic hospital, predicting an event within 12 hours 7.4% versus 2.2% of the time, respectively. Retraining the National Early Warning Score with newly generated hospital-specific coefficients improved model performance. CONCLUSIONS: At both our academic and community hospitals, the National Early Warning Score had poor performance characteristics and was generally ignored by frontline nursing staff. As a result, National Early Warning Score implementation had no appreciable impact on the defined clinical outcomes. Refitting the model using site-specific data improved performance and supports validating predictive models on local data.
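The Python sketch below illustrates the general idea of refitting an early-warning score with site-specific coefficients: the unweighted sum of component points is compared against a logistic regression that learns local weights for the same components. The component names, score ranges, outcome definition, and simulated data are placeholders, not the NEWS specification or the study's data, and real use would require a proper train/test split.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20000
# Placeholder 0-3 component sub-scores standing in for the respiratory rate, oxygen saturation,
# temperature, blood pressure, heart rate, and consciousness components of an early-warning score.
X = pd.DataFrame(rng.integers(0, 4, size=(n, 6)),
                 columns=["resp", "spo2", "temp", "sbp", "hr", "avpu"])
y = rng.binomial(1, 0.03, n)                 # ICU transfer or death within 12 h (placeholder)

# Off-the-shelf score: equal weighting (a simple sum of the component points).
score = X.sum(axis=1)
print("Unweighted score AUROC:", round(roc_auc_score(y, score), 3))

# Site-specific refit: logistic regression learns local coefficients for the same components,
# one way to recalibrate a generic score to a hospital's own case mix.
refit = LogisticRegression(max_iter=1000).fit(X, y)
print("Refit model AUROC:", round(roc_auc_score(y, refit.predict_proba(X)[:, 1]), 3))
```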
Subjects
Clinical Alarms; Clinical Deterioration; Patient Acuity; Academic Medical Centers; Adult; Aged; Attitude of Health Personnel; Cohort Studies; Early Diagnosis; Female; Hospital Mortality; Hospitals, Community; Humans; Intensive Care Units; Male; Middle Aged; North Carolina; Nursing Staff, Hospital; Patient Transfer/statistics & numerical data; Retrospective Studies
ABSTRACT
Objective: This study evaluated whether there is a difference in the proportion of patients with type 2 diabetes who achieve a hemoglobin A1c (HbA1c) <7% within one year following treatment by an endocrinologist or primary care physician (PCP). Methods: We conducted a retrospective, propensity-matched study of patients with type 2 diabetes who were not optimally controlled and who were seen within our health system from 2007-2016. We assessed differences in short-term health outcomes for patients following an endocrinologist visit compared to a PCP visit. Results: Patients seen by endocrinologists obtained HbA1c control at a faster rate (hazard ratio = 1.226; 95% confidence interval = 1.01 to 1.488) than those seen by a PCP. Furthermore, 34.5% and 29.5% of those treated by endocrinologists and PCPs, respectively, obtained HbA1c control by one year. Endocrinologists were more likely than PCPs to prescribe a new medication class within 90 days (14.1% versus 10.3%, respectively; P = .043). There was no difference in the risk of hospitalization between groups; 24.4% and 24.1% of those treated by endocrinologists and PCPs, respectively, were hospitalized within one year. Conclusion: Patients treated by endocrinology specialists were more likely to achieve a target HbA1c of <7% (53 mmol/mol) than those treated by PCPs in our health-care system. The performance difference may be partially explained by a higher rate of adding new classes of diabetes medications to patients' pharmacologic regimens within 90 days by endocrinologists compared with PCPs. The long-term impact of these differences is unknown but has the potential to adversely affect the health of the population. Abbreviations: ACP = American College of Physicians; CI = confidence interval; DUHS = Duke University Health System; HbA1c = hemoglobin A1c; HR = hazard ratio; PCP = primary care physician; SMD = standard mean difference.