Results 1-20 of 24
1.
J Urban Health ; 99(6): 984-997, 2022 12.
Article in English | MEDLINE | ID: mdl-36367672

ABSTRACT

There is tremendous interest in understanding how neighborhoods impact health by linking extant social and environmental drivers of health (SDOH) data with electronic health record (EHR) data. Studies quantifying such associations often use static neighborhood measures. Little research examines the impact of gentrification-a measure of neighborhood change-on the health of long-term neighborhood residents using EHR data, which may have a more generalizable population than traditional approaches. We quantified associations between gentrification and health and healthcare utilization by linking longitudinal socioeconomic data from the American Community Survey with EHR data across two health systems accessed by long-term residents of Durham County, NC, from 2007 to 2017. Census block group-level neighborhoods were eligible to be gentrified if they had low socioeconomic status relative to the county average. Gentrification was defined using socioeconomic data from 2006 to 2010 and 2011-2015, with the Steinmetz-Wood definition. Multivariable logistic and Poisson regression models estimated associations between gentrification and development of health indicators (cardiovascular disease, hypertension, diabetes, obesity, asthma, depression) or healthcare encounters (emergency department [ED], inpatient, or outpatient). Sensitivity analyses examined two alternative gentrification measures. Of the 99 block groups within the city of Durham, 28 were eligible (N = 10,807; median age = 42; 83% Black; 55% female) and 5 gentrified. Individuals in gentrifying neighborhoods had lower odds of obesity (odds ratio [OR] = 0.89; 95% confidence interval [CI]: 0.81-0.99), higher odds of an ED encounter (OR = 1.10; 95% CI: 1.01-1.20), and lower risk for outpatient encounters (incidence rate ratio = 0.93; 95% CI: 0.87-1.00) compared with non-gentrifying neighborhoods. The association between gentrification and health and healthcare utilization was sensitive to gentrification definition.
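The odds ratios quoted above (e.g., OR = 0.89, 95% CI: 0.81-0.99 for obesity) are exponentiated logistic-regression coefficients with Wald intervals. A minimal sketch of that back-transformation, using illustrative coefficient values rather than the study's fitted model:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative numbers only (not the study's fitted values):
or_est, lo, hi = odds_ratio_ci(beta=-0.117, se=0.051)
```

The same transformation applies to each reported OR; the incidence rate ratio from the Poisson model is exponentiated the same way.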


Subjects
Residence Characteristics, Residential Segregation, Humans, Female, Adult, Male, Patient Acceptance of Health Care, Odds Ratio, Obesity
2.
Crit Care Med ; 47(1): 49-55, 2019 01.
Article in English | MEDLINE | ID: mdl-30247239

ABSTRACT

OBJECTIVES: Previous studies have looked at National Early Warning Score performance in predicting in-hospital deterioration and death, but data are lacking with respect to patient outcomes following implementation of National Early Warning Score. We sought to determine the effectiveness of National Early Warning Score implementation on predicting and preventing patient deterioration in a clinical setting. DESIGN: Retrospective cohort study. SETTING: Tertiary care academic facility and a community hospital. PATIENTS: Patients 18 years old or older hospitalized either from March 1, 2014, to February 28, 2015 (before National Early Warning Score implementation) or from August 1, 2015, to July 31, 2016 (after implementation). INTERVENTIONS: Implementation of National Early Warning Score within the electronic health record and associated best practice alert. MEASUREMENTS AND MAIN RESULTS: In this study of 85,322 patients (42,402 patients pre-National Early Warning Score and 42,920 patients post-National Early Warning Score implementation), the primary outcome of rate of ICU transfer or death did not change after National Early Warning Score implementation, with adjusted hazard ratios of 0.94 (0.84-1.05) and 0.90 (0.77-1.05) at our academic and community hospitals, respectively. In total, 175,357 best practice advisories fired during the study period, with the best practice advisory performing better at the community hospital than at the academic hospital, predicting an event within 12 hours 7.4% versus 2.2% of the time, respectively. Retraining National Early Warning Score with newly generated hospital-specific coefficients improved model performance. CONCLUSIONS: At both our academic and community hospitals, National Early Warning Score had poor performance characteristics and was generally ignored by frontline nursing staff. As a result, National Early Warning Score implementation had no appreciable impact on the defined clinical outcomes. Refitting the model using site-specific data improved performance and supports validating predictive models on local data.


Subjects
Clinical Alarms, Clinical Deterioration, Patient Acuity, Academic Medical Centers, Adult, Aged, Attitude of Health Personnel, Cohort Studies, Early Diagnosis, Female, Hospital Mortality, Hospitals, Community, Humans, Intensive Care Units, Male, Middle Aged, North Carolina, Nursing Staff, Hospital, Patient Transfer/statistics & numerical data, Retrospective Studies
3.
Am Heart J ; 203: 39-48, 2018 09.
Article in English | MEDLINE | ID: mdl-30015067

ABSTRACT

BACKGROUND: We aimed to determine the association of mitral regurgitation (MR) severity and type with all-cause death in a large, real-world clinical setting. METHODS: We reviewed full echocardiography studies at the Duke Echocardiography Laboratory (01/01/1995-12/31/2010), classifying MR based on valve morphology, presence of coronary artery disease, and left ventricular size and function. Survival was compared among patients stratified by MR type and baseline severity. RESULTS: Of 93,007 qualifying patients, 32,137 (34.6%) had ≥mild MR. A total of 8094 (8.7%) had moderate/severe MR, which was primary myxomatous (14.1%), primary non-myxomatous (6.2%), secondary non-ischemic (17.0%), or secondary ischemic (49.4%). At 10 years, patients with primary myxomatous MR or MR due to indeterminate cause had survival rates of >60%; patients with primary non-myxomatous, secondary ischemic, and secondary non-ischemic MR had survival rates <50%. While mild (HR 1.06, 95% CI 1.03-1.09), moderate (HR 1.31, 95% CI 1.27-1.37), and severe (HR 1.55, 95% CI 1.46-1.65) MR were independently associated with all-cause death, the relationship of increasing MR severity with mortality varied across MR types (P ≤ .001 for interaction); the highest risk associated with worsening severity was seen in primary myxomatous MR, followed by secondary ischemic MR and primary non-myxomatous MR. CONCLUSIONS: Although MR severity is independently associated with increased risk of all-cause death for most forms of MR, the absolute mortality rates associated with worse MR severity are much higher for primary myxomatous, primary non-myxomatous, and secondary ischemic MR. These findings support carefully defining MR by both type and severity.


Subjects
Echocardiography, Doppler, Color/methods, Mitral Valve Insufficiency/diagnosis, Mitral Valve/diagnostic imaging, Stroke Volume/physiology, Ventricular Function, Left/physiology, Adult, Aged, Cause of Death/trends, Female, Follow-Up Studies, Humans, Male, Middle Aged, Mitral Valve Insufficiency/physiopathology, Prognosis, Retrospective Studies, Severity of Illness Index, Survival Rate/trends, Time Factors, United States/epidemiology
4.
Am J Epidemiol ; 184(11): 847-855, 2016 Dec 01.
Article in English | MEDLINE | ID: mdl-27852603

ABSTRACT

Electronic health records (EHRs) are an increasingly utilized resource for clinical research. While their size allows for many analytical opportunities, as with most observational data there is also the potential for bias. One of the key sources of bias in EHRs is what we term informed presence-the notion that inclusion in an EHR is not random but rather indicates that the subject is ill, making people in EHRs systematically different from those not in EHRs. In this article, we use simulated and empirical data to illustrate the conditions under which such bias can arise and how conditioning on the number of health-care encounters can be one way to remove this bias. In doing so, we also show when such an approach can impart M bias, or bias from conditioning on a collider. Finally, we explore the conditions under which number of medical encounters can serve as a proxy for general health. We apply these methods to an EHR data set from a university medical center covering the years 2007-2013.
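The informed-presence mechanism described here is a collider problem: two conditions that are independent in the population can look associated among people who appear in the EHR, because each condition drives health-care contact. A toy simulation in the spirit of the abstract (all parameters are invented for illustration):

```python
import random

random.seed(0)
N = 200_000
overall, included = [], []
for _ in range(N):
    a = random.random() < 0.3            # condition A, independent of B
    b = random.random() < 0.3            # condition B
    # each condition independently drives health-care encounters
    p_visit = 0.05 + 0.4 * a + 0.4 * b
    in_ehr = random.random() < p_visit   # presence in the EHR is "informed"
    overall.append((a, b))
    if in_ehr:
        included.append((a, b))

def p_b_given_a(pairs, a_val):
    sub = [b for a, b in pairs if a == a_val]
    return sum(sub) / len(sub)

# In the full population A and B are independent; conditioning on
# EHR presence (a collider) induces a spurious negative association.
diff_all = p_b_given_a(overall, True) - p_b_given_a(overall, False)
diff_ehr = p_b_given_a(included, True) - p_b_given_a(included, False)
```

`diff_all` is near zero while `diff_ehr` is strongly negative, which is the M-bias pattern the article warns can also arise from conditioning on encounter counts.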


Subjects
Biomedical Research/methods, Biomedical Research/standards, Electronic Health Records/statistics & numerical data, Epidemiologic Research Design, Selection Bias, Computer Simulation, Confounding Factors, Epidemiologic, Depression/epidemiology, Diabetes Mellitus/epidemiology, Health Services/statistics & numerical data, Health Status, Humans, Reproducibility of Results
5.
J Nucl Cardiol ; 23(6): 1280-1287, 2016 12.
Article in English | MEDLINE | ID: mdl-26122879

ABSTRACT

BACKGROUND: New multipinhole cadmium-zinc-telluride (CZT) cameras allow for faster imaging and lower radiation doses for single photon emission computed tomography (SPECT) studies, but assessment of their prognostic ability is necessary. METHODS AND RESULTS: We collected data from all myocardial SPECT perfusion studies performed over 15 months at our institution, using either a CZT or conventional Anger camera. A Cox proportional hazards model was used to assess the relationship between camera type, imaging results, and either death or myocardial infarction (MI). Clinical variables including age, sex, body mass index (BMI), and historical risk factors were used for population description and model adjustments. We had 2,088 patients with a total of 69 deaths and 65 MIs (122 events altogether). A 3% increase in difference defect burden (DDB) represented a 12% increase in the risk of death or MI, whereas a 3% increase in rest defect burden or stress defect burden represented an 8% increase; these risks were the same for both cameras (P > .24, interaction tests). CONCLUSIONS: The CZT camera has prognostic value for death and MI similar to that of conventional Anger cameras, suggesting that it may successfully be used to decrease patient radiation dose.
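The "3% increase in defect burden → 12% increase in risk" statement is the standard Cox-model rescaling: a hazard ratio for a k-unit change is the per-unit log-hazard coefficient multiplied by k and exponentiated. A worked example of that arithmetic (illustrative only, not the study's fitted coefficients):

```python
import math

# A Cox model reports a log-hazard coefficient beta per unit of the
# predictor; the hazard ratio for a k-unit increase is exp(k * beta).
hr_per_3_points = 1.12                   # reported: +12% risk per 3-point DDB increase
beta_per_point = math.log(hr_per_3_points) / 3
hr_per_point = math.exp(beta_per_point)  # implied per-1-point hazard ratio
```

Cubing the implied per-point hazard ratio recovers the reported 3-point effect, which is a quick consistency check when reading such rescaled estimates.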


Subjects
Cadmium, Gamma Cameras/statistics & numerical data, Myocardial Infarction/diagnostic imaging, Myocardial Infarction/mortality, Myocardial Perfusion Imaging/instrumentation, Tellurium, Tomography, Emission-Computed, Single-Photon/instrumentation, Zinc, Adolescent, Adult, Age Distribution, Aged, Aged, 80 and over, Equipment Design, Equipment Failure Analysis, Female, Humans, Male, Middle Aged, Myocardial Perfusion Imaging/statistics & numerical data, North Carolina/epidemiology, Prevalence, Prognosis, Reproducibility of Results, Risk Assessment, Sensitivity and Specificity, Sex Distribution, Survival Analysis, Young Adult
6.
Eur Heart J ; 36(40): 2733-41, 2015 Oct 21.
Article in English | MEDLINE | ID: mdl-26233850

ABSTRACT

AIMS: The management and outcomes of patients with functional moderate/severe mitral regurgitation and severe left ventricular (LV) systolic dysfunction are not well defined. We sought to determine the characteristics, management strategies, and outcomes of patients with moderate or severe mitral regurgitation (MR) and LV systolic dysfunction. METHODS AND RESULTS: For the period 1995-2010, the Duke Echocardiography Laboratory and Duke Databank for Cardiovascular Diseases databases were merged to identify patients with moderate or severe functional MR and severe LV dysfunction (defined as LV ejection fraction ≤ 30% or LV end-systolic diameter > 55 mm). We examined treatment effects in two ways. (i) A multivariable Cox proportional hazards model was used to assess the independent relationship of different treatment strategies with long-term event (death, LV assist device, or transplant)-free survival among those with and without coronary artery disease (CAD). (ii) To examine the association of mitral valve (MV) surgery with outcomes, we divided the entire cohort into two groups, those who underwent MV surgery and those who did not, and used inverse probability weighted (IPW) propensity adjustment to account for non-random treatment assignment. Among 1441 patients with moderate (70%) or severe (30%) MR, histories of hypertension (59%), diabetes (28%), symptomatic heart failure (83%), and CAD (52%) were common, and 26% had undergone past revascularization. At 1 year, 1094 (75%) patients were treated medically. Percutaneous coronary intervention was performed in 114 patients, coronary artery bypass graft (CABG) surgery in 82, CABG and MV surgery in 96, and MV surgery alone in 55 patients.
Among patients with CAD, compared with medical therapy alone, the treatment strategies of CABG surgery [hazard ratio (HR) 0.56, 95% confidence interval (CI) 0.42-0.76] and CABG with MV surgery (HR 0.58, 95% CI 0.44-0.78) were associated with long-term, event-free survival benefit. Percutaneous intervention treatment produced a borderline result (HR 0.78, 95% CI 0.61-1.00). However, the relationship with isolated MV surgery did not achieve statistical significance (HR 0.64, 95% CI 0.33-1.27, P = 0.202). Among those with CAD, following IPW adjustment, MV surgery was associated with a significant event-free survival benefit compared with patients without MV surgery (HR 0.71, 95% CI 0.52-0.95). In the entire cohort, following IPW adjustment, the use of MV surgery was associated with higher event-free survival (HR 0.69, 95% CI 0.53-0.88). CONCLUSION: In patients with moderate or severe MR and severe LV dysfunction, mortality was substantial, and among those selected for surgery, MV surgery, though performed in a small number of patients, was independently associated with higher event-free survival.
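The IPW propensity adjustment used here upweights patients whose observed treatment was unlikely given their covariates, creating a pseudo-population in which treatment is unconfounded by those covariates. A minimal sketch of the weight construction (the propensity values below are hypothetical, not from the study):

```python
def ipw_weights(treated, propensity):
    """Inverse-probability-of-treatment weights: 1/p for treated
    patients and 1/(1 - p) for untreated, where p is the fitted
    propensity score P(treatment | covariates)."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

treated = [True, True, False, False]
propensity = [0.8, 0.4, 0.8, 0.4]  # hypothetical fitted scores
weights = ipw_weights(treated, propensity)
```

Outcome contrasts, such as the weighted event-free-survival comparison reported above, are then computed in the pseudo-population these weights define.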


Subjects
Mitral Valve Insufficiency/therapy, Ventricular Dysfunction, Left/therapy, Cardiotonic Agents/therapeutic use, Coronary Artery Bypass/mortality, Disease-Free Survival, Female, Humans, Kaplan-Meier Estimate, Male, Middle Aged, Mitral Valve/surgery, Mitral Valve Insufficiency/complications, Mitral Valve Insufficiency/mortality, Percutaneous Coronary Intervention/mortality, Treatment Outcome, Ventricular Dysfunction, Left/complications, Ventricular Dysfunction, Left/mortality
7.
J Electrocardiol ; 48(4): 637-42, 2015.
Article in English | MEDLINE | ID: mdl-25959263

ABSTRACT

BACKGROUND: New-onset left bundle branch block (LBBB) is a known complication during transcatheter aortic valve replacement (TAVR). This study evaluated the influence of pre-TAVR cardiac conditions on left ventricular function in patients with new persistent LBBB post-TAVR. METHODS: Only 11 patients qualified for this study because of the strict inclusion criteria. Pre-TAVR electrocardiograms were evaluated for Selvester QRS infarct score and QRS duration, and left ventricular end-systolic volume (LVESV) was used as the outcome variable. RESULTS: There was a trend towards a positive correlation between QRS score and LVESV of r=0.59 (p=0.058), while there was no relationship between QRS duration and LVESV (r=-0.18 [p=0.59]). CONCLUSION: This study showed that patients with new LBBB and a higher pre-TAVR QRS infarct score may have worse post-TAVR left ventricular function; however, pre-TAVR QRS duration had no such predictive value. Because of the small sample size, these results should be interpreted with caution and assessed in a larger study population.
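The r values reported above are Pearson correlations between QRS measures and LVESV. For reference, a self-contained Pearson r computation on toy data (the study's 11-patient data are not published here, so the numbers below are invented):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy QRS infarct scores vs. LVESV (mL), for illustration only:
r = pearson_r([5, 12, 20, 31], [90, 110, 150, 170])
```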


Subjects
Aortic Valve Stenosis/diagnosis, Aortic Valve Stenosis/surgery, Bundle-Branch Block/diagnosis, Bundle-Branch Block/etiology, Electrocardiography/methods, Transcatheter Aortic Valve Replacement/adverse effects, Aged, 80 and over, Diagnosis, Computer-Assisted/methods, Echocardiography/methods, Female, Follow-Up Studies, Humans, Male, Prognosis, Reproducibility of Results, Retrospective Studies, Risk Assessment, Sensitivity and Specificity, Stroke Volume, Treatment Outcome
8.
J Womens Health (Larchmt) ; 33(7): 853-862, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38533846

ABSTRACT

Background: Pregnancy-related cardiovascular (CV) conditions, including hypertensive disorders of pregnancy (HDP) and gestational diabetes (GDM), are associated with increased long-term CV risk. Methods: This retrospective cohort study defined the prevalence of HDP and GDM within a large, academic health system in the southeast United States between 2012 and 2015 and described health care utilization and routine CV screening up to 1-year following delivery among those with pregnancy-related CV conditions. Rates of follow-up visits and blood pressure, hemoglobin A1c (HbA1c), and lipid screening in the first postpartum year were compared by provider type and pregnancy-related CV condition. Results: Of the 6027 deliveries included, 20% were complicated by HDP and/or GDM. Rates of pre-pregnancy CV risk factors were high, with a significantly higher proportion of pre-pregnancy obesity among women with HDP than in normal pregnancies. Those with both HDP/GDM had the highest rates of follow-up by 1-year postpartum, yet only half of those with any pregnancy-related CV condition had any follow-up visit after 12 weeks. Although most (70%) of those with HDP had postpartum blood pressure screening, less than one-third of those with GDM had a repeat HbA1c by 12 months. Overall, postpartum lipid screening was rare (<20%). Conclusion: There is a high burden of pregnancy-related CV conditions in a large U.S. academic health system. Although overall rates of follow-up in the early postpartum period were high, gaps in longitudinal follow-up exist. Low rates of CV risk factor follow-up at 1 year indicate a missed opportunity for early CV prevention.


Subjects
Cardiovascular Diseases, Diabetes, Gestational, Heart Disease Risk Factors, Postpartum Period, Humans, Female, Pregnancy, Adult, Retrospective Studies, Diabetes, Gestational/epidemiology, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/prevention & control, Risk Factors, Longitudinal Studies, Hypertension, Pregnancy-Induced/epidemiology, Glycated Hemoglobin/analysis, Prevalence, Blood Pressure, Mass Screening, Cohort Studies, Young Adult
9.
Health Promot Pract ; 14(6): 901-8, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23449666

ABSTRACT

OBJECTIVE: To examine whether newspaper coverage of the Michigan smoke-free law was favorable or hostile, contained positive messages that had been disseminated by public health groups, contained negative messages, and differed across regions. METHOD: Articles about the smoke-free law in print or online editions of Michigan newspapers the month immediately before and after the law took effect were identified and were coded for tone, positive messages contained in media outreach materials, and negative messages commonly disseminated by smoke-free law opponents. RESULTS: A total of 303 print and online articles were identified; the majority were coded as "both positive and negative" (34%) or "mainly positive" in tone (32%). Of 303 articles, 75% contained at least one pro-law message and 56% contained at least one anti-law message. The most common pro-law messages were information about enforcement of the law (52%) and the benefits of smoke-free air (48%); the most common anti-law messages were about potential negative economic impact (36%), government intrusion/overreach (31%), and difficulties with enforcement (28%). CONCLUSIONS: Public health departments and partners play an important role in implementation of smoke-free laws by providing the public, businesses, and other stakeholders with clear and accurate rationale, provisions, and impacts of these policies.


Subjects
Newspapers as Topic/statistics & numerical data, Smoke-Free Policy/legislation & jurisprudence, Economics, Humans, Information Dissemination, Law Enforcement, Michigan
10.
Article in English | MEDLINE | ID: mdl-37711220

ABSTRACT

Background: JAK1 is a signaling molecule downstream of cytokine receptors, including IL-4 receptor α. Abrocitinib is an oral JAK1 inhibitor; it is a safe and effective US Food and Drug Administration-approved treatment for adults with moderate-to-severe atopic dermatitis. Objective: Our objective was to investigate the effect of abrocitinib on basophil activation and T-cell activation in patients with peanut allergy to determine the potential for use of JAK1 inhibitors as a monotherapy or an adjuvant to peanut oral immunotherapy. Methods: Basophil activation in whole blood was measured by detection of CD63 expression using flow cytometry. Activation of CD4+ effector and regulatory T cells was determined by the upregulation of CD154 and CD137, respectively, on anti-CD3/CD28- or peanut-stimulated PBMCs. For the quantification of peanut-induced cytokines, PBMCs were stimulated with peanut for 5 days before harvesting supernatant. Results: Abrocitinib decreased the allergen-specific activation of basophils in response to peanut. We showed suppression of effector T-cell activation when stimulated by CD3/CD28 beads in the presence of 10 ng of abrocitinib, whereas activation of regulatory T-cell populations was preserved in the presence of abrocitinib. Abrocitinib induced statistically significant dose-dependent inhibition in IL-5, IL-13, IL-10, IL-9, and TNF-α in the presence of peanut stimulation. Conclusion: These results support our hypothesis that JAK1 inhibition decreases basophil activation and TH2 cytokine signaling, reducing in vitro allergic responses in subjects with peanut allergy. Abrocitinib may be an effective adjunctive immune modulator in conjunction with peanut oral immunotherapy or as a monotherapy for individuals with food allergy.

11.
JAMA Netw Open ; 4(3): e213460, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33779743

ABSTRACT

Importance: Comparisons of antimicrobial use among hospitals are difficult to interpret owing to variations in patient case mix. Risk-adjustment strategies incorporating larger numbers of variables have been proposed as a method to improve comparisons for antimicrobial stewardship assessments. Objective: To evaluate whether variables of varying complexity and feasibility of measurement, derived retrospectively from electronic health records, accurately identify inpatient antimicrobial use. Design, Setting, and Participants: Retrospective cohort study using a 2-stage random forest machine learning analysis of electronic health record data. Data were split into training and testing sets to measure model performance using area under the curve and absolute error. All adult and pediatric inpatient encounters from October 1, 2015, to September 30, 2017, at 2 community hospitals and 1 academic medical center in the Duke University Health System were analyzed. A total of 204 candidate variables were categorized into 4 tiers based on feasibility of measurement from the electronic health records. Main Outcomes and Measures: Antimicrobial exposure was measured at the encounter level in 2 ways: binary (ever or never) and number of days of therapy. Analyses were stratified by age (pediatric or adult), unit type, and antibiotic group. Results: The data set included 170 294 encounters and 204 candidate variables from 3 hospitals during the 3-year study period. Antimicrobial exposure occurred in 80 190 encounters (47%); 64 998 (38%) received 1 to 6 days of therapy, and 15 192 (9%) received 7 or more days of therapy. Two-stage models identified antimicrobial use with high fidelity (mean area under the curve, 0.85; mean absolute error, 1.0 days of therapy). Addition of more complex variables increased accuracy, with the largest improvements occurring with inclusion of diagnosis information. Accuracy varied based on location and antibiotic group.
Models underestimated the number of days of therapy for encounters with long lengths of stay. Conclusions and Relevance: Models using variables derived from electronic health records identified antimicrobial exposure accurately. Future risk-adjustment strategies incorporating encounter-level information may make comparisons of antimicrobial use more meaningful for hospital antimicrobial stewardship assessments.
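The 2-stage structure described above is a hurdle model: stage 1 classifies whether an encounter had any antimicrobial exposure, and stage 2, fit only on exposed encounters, estimates days of therapy. A schematic sketch of that composition with trivial stand-in models (the study used random forests for both stages; the feature names and values here are invented):

```python
def predict_days_of_therapy(x, stage1, stage2, threshold=0.5):
    """Two-stage prediction: stage1 returns P(any exposure); if it
    clears the threshold, stage2 supplies the days-of-therapy estimate."""
    return stage2(x) if stage1(x) >= threshold else 0.0

# Hypothetical stand-ins for the fitted random forests:
stage1 = lambda x: 0.9 if x["icu"] else 0.2   # P(any antimicrobial)
stage2 = lambda x: 7.0 if x["icu"] else 3.0   # expected days of therapy

dot_icu = predict_days_of_therapy({"icu": True}, stage1, stage2)
dot_ward = predict_days_of_therapy({"icu": False}, stage1, stage2)
```

Separating the "any exposure" decision from the "how much" regression is what lets the model report both the binary and days-of-therapy outcomes described in the abstract.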


Subjects
Anti-Bacterial Agents/pharmacology, Antimicrobial Stewardship/methods, Electronic Health Records/statistics & numerical data, Inpatients, Machine Learning, Risk Assessment/methods, Adolescent, Adult, Aged, Child, Child, Preschool, Female, Follow-Up Studies, Humans, Infant, Male, Middle Aged, Retrospective Studies, Young Adult
12.
Prev Med Rep ; 24: 101615, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34976671

ABSTRACT

Data on patterns of weight change among adults with overweight or obesity are minimal. We aimed to examine patterns of weight change and associated hospitalizations in a large health system, and to develop a model to predict 2-year significant weight gain. Data from the Duke University Health System were abstracted from 1/1/13 to 12/31/16 on patients with BMI ≥ 25 kg/m2 in 2014. A regression model was developed to predict which patients would gain ≥10% of their body weight within 2 years. We estimated the association between weight change category and all-cause hospitalization using Cox proportional hazards models. Of the 37,253 patients in our cohort, 59% had stable weight over 2 years, while 24% gained ≥ 5% weight and 17% lost ≥ 5% weight. Our predictive model had reasonable discriminatory capacity to predict which individuals would gain ≥ 10% weight over 2 years (AUC 0.73). Compared with stable weight, the risk of hospitalization was increased by 37% for individuals with > 10% weight loss [adj. HR (95% CI): 1.37 (1.25, 1.5)], by 30% for those with > 10% weight gain [adj. HR (95% CI): 1.3 (1.19, 1.42)], by 18% for those with 5-10% weight loss [adj. HR (95% CI): 1.18 (1.09, 1.28)], and by 10% for those with 5-10% weight gain [adj. HR (95% CI): 1.1 (1.02, 1.19)]. In this examination of a large health system, significant weight gain or loss of > 10% was associated with increased all-cause hospitalization over 2 years compared with stable weight. This analysis adds to the growing observational evidence that weight stability may be a key health driver.
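The abstract's weight-change groups are percentage-change bins. A small helper classifying 2-year weight change the way patients are grouped above (the function name and signature are ours, not the study's code, and the finer 5-10% vs. >10% split is omitted for brevity):

```python
def weight_change_category(baseline_kg, followup_kg):
    """Classify weight change as stable (<5% in either direction),
    gain (>=5% increase), or loss (>=5% decrease)."""
    pct = 100.0 * (followup_kg - baseline_kg) / baseline_kg
    if pct >= 5.0:
        return "gain"
    if pct <= -5.0:
        return "loss"
    return "stable"
```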

13.
Am J Prev Med ; 58(6): 817-824, 2020 06.
Article in English | MEDLINE | ID: mdl-32444000

ABSTRACT

INTRODUCTION: Both medication and surgical interventions can be used to treat obesity, yet their use and effectiveness in routine clinical practice are not clear. This study sought to characterize the prevalence and management of patients with obesity within a large U.S. academic medical center. METHODS: All patients aged ≥18 years who were seen in a primary care clinic within the Duke Health System between 2013 and 2016 were included. Patients were categorized according to baseline BMI as underweight or normal weight (<25 kg/m2), overweight (25-29.9 kg/m2), Class I obesity (30-34.9 kg/m2), Class II obesity (35-39.9 kg/m2), and Class III obesity (≥40 kg/m2). Baseline characteristics and use of weight loss medication were assessed by BMI category. Predicted change in BMI was modeled over 3 years. All data were analyzed between 2017 and 2018. RESULTS: Of the 173,462 included patients, most were overweight (32%) or obese (40%). Overall, <1% (n=295) of obese patients were prescribed medication for weight loss or underwent bariatric surgery within the 3-year study period. Most patients had no change in BMI class (70%) at 3 years. CONCLUSIONS: Despite a high prevalence of obesity within primary care clinics of a large, U.S. academic health center, the use of pharmacologic and surgical therapies was low, and most patients had no weight change over 3 years. This highlights the significant need for improvement in obesity care at a health system level.


Subjects
Academic Medical Centers, Anti-Obesity Agents/therapeutic use, Body Mass Index, Obesity, Orlistat/therapeutic use, Primary Health Care, Comorbidity, Female, Humans, Longitudinal Studies, Male, Middle Aged, Obesity/drug therapy, Obesity/epidemiology, Obesity/surgery, Prevalence, Retrospective Studies, United States/epidemiology
14.
MDM Policy Pract ; 5(1): 2381468319899663, 2020.
Article in English | MEDLINE | ID: mdl-31976373

ABSTRACT

Background. Identification of patients at risk of deteriorating during their hospitalization is an important concern. However, many off-the-shelf scores have poor in-center performance. In this article, we report our experience developing, implementing, and evaluating an in-hospital score for deterioration. Methods. We abstracted 3 years of data (2014-2016) and identified patients on medical wards who died or were transferred to the intensive care unit. We developed a time-varying risk model and then implemented the model over a 10-week period to assess prospective predictive performance. We compared performance to our currently used tool, the National Early Warning Score. In order to aid clinical decision making, we transformed the quantitative score into a three-level clinical decision support tool. Results. The developed risk score had an average area under the curve of 0.814 (95% confidence interval = 0.79-0.83) versus 0.740 (95% confidence interval = 0.72-0.76) for the National Early Warning Score. We found the proposed score was able to respond to acute changes in patients' clinical status. Upon implementing the score, we were able to achieve the desired positive predictive value but needed to retune the thresholds to get the desired sensitivity. Discussion. This work illustrates the potential for academic medical centers to build, refine, and implement risk models that are targeted to their patient population and workflow.
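Retuning thresholds to trade positive predictive value (PPV) against sensitivity, as described above, only requires the confusion counts at each candidate cutoff. A minimal sketch (the scores and outcomes below are toy data, not the study's):

```python
def ppv_sensitivity(scores, events, threshold):
    """PPV and sensitivity of an alert that fires at score >= threshold."""
    tp = sum(1 for s, e in zip(scores, events) if s >= threshold and e)
    fp = sum(1 for s, e in zip(scores, events) if s >= threshold and not e)
    fn = sum(1 for s, e in zip(scores, events) if s < threshold and e)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    return ppv, sensitivity

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
events = [True, True, False, True, False, False]
ppv_hi, sens_hi = ppv_sensitivity(scores, events, 0.75)  # strict cutoff
ppv_lo, sens_lo = ppv_sensitivity(scores, events, 0.35)  # lenient cutoff
```

Lowering the cutoff raises sensitivity at the cost of PPV, which is the trade-off the authors had to rebalance after deployment.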

15.
Eur J Heart Fail ; 22(7): 1174-1182, 2020 07.
Article in English | MEDLINE | ID: mdl-31863532

ABSTRACT

AIMS: Worsening heart failure (HF) is associated with shorter left ventricular systolic ejection time (SET), but there are limited data describing the relationship between SET and clinical outcomes. Thus, the objective was to describe the association between SET and clinical outcomes in an ambulatory HF population irrespective of ejection fraction (EF). METHODS AND RESULTS: We identified ambulatory patients with HF with reduced EF (HFrEF) and HF with preserved EF (HFpEF) who had an outpatient transthoracic echocardiogram performed between August 2008 and July 2010 at a tertiary referral centre. Multivariable logistic regression was used to evaluate the association between SET and 1-year outcomes. A total of 545 HF patients (171 HFrEF, 374 HFpEF) met eligibility criteria. Compared with HFpEF patients, HFrEF patients were younger [median age 60 years (25th-75th percentiles 50-69) vs. 64 years (25th-75th percentiles 53-74)], with fewer females (30% vs. 56%) and a similar percentage of African Americans (36% vs. 35%). Median (25th-75th percentiles) EF was 30% (25-35%) with HFrEF and 54% (48-58%) with HFpEF. Median SET was shorter (280 ms vs. 315 ms, P < 0.001), median pre-ejection period was longer (114 ms vs. 89 ms, P < 0.001), and median relaxation time was shorter (78.7 ms vs. 93.3 ms, P < 0.001) among patients with HFrEF vs. HFpEF. Death or HF hospitalization occurred in 26.9% (n = 46) of HFrEF and 11.8% (n = 44) of HFpEF patients. After adjustment, longer SET was associated with lower odds of the composite of death or HF hospitalization at 1 year among HFrEF but not HFpEF patients. CONCLUSION: Longer SET is independently associated with improved outcomes among HFrEF patients but not HFpEF patients, supporting a potential role for normalizing SET as a therapeutic strategy in systolic dysfunction.


Subjects
Heart Failure , Aged , Angiotensin Receptor Antagonists , Angiotensin-Converting Enzyme Inhibitors , Diabetes Mellitus, Type 2 , Female , Heart Failure/epidemiology , Humans , Male , Middle Aged , Percutaneous Coronary Intervention , Prognosis , Stroke Volume
16.
J Am Med Inform Assoc ; 26(12): 1609-1617, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31553474

ABSTRACT

OBJECTIVE: Electronic health record (EHR) data have become a central data source for clinical research. One concern in using EHR data is that the process through which individuals engage with the health system, and thereby find themselves within EHR data, can be informative. We have termed this process informed presence. In this study we use simulation and real data to assess how informed presence can impact inference. MATERIALS AND METHODS: We first simulated a visit process where a series of biomarkers were observed informatively and uninformatively over time. We then compared inference derived from a randomized controlled trial (ie, uninformative visits) and EHR data (ie, potentially informative visits). RESULTS: We find that bias arises only when there is a strong association both between the biomarker and the outcome and between the biomarker and the visit process. Moreover, once there are some uninformative visits, this bias is mitigated. In the data example, we find that when the "true" associations are null, there is no observed bias. DISCUSSION: These results suggest that an informative visit process can exaggerate an association but cannot induce one. Furthermore, careful study design can mitigate the potential bias when some noninformative visits are included. CONCLUSIONS: While there are legitimate concerns regarding biases that "messy" EHR data may induce, the conditions for such biases are extreme and can be accounted for.
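The simulation logic described above can be sketched in a few lines: a biomarker is recorded more often when it is elevated, so the mean of the observed values overstates the true mean, and mixing in uninformative (routine) visits shrinks that bias. All distributions, cutoffs, and probabilities below are invented for illustration, not the paper's simulation design.

```python
# Hypothetical sketch of "informed presence": sicker patient-days (biomarker
# above a cutoff) are more likely to generate a visit, so the naive mean of
# *observed* biomarker values is biased upward. Routine (uninformative)
# visits, which occur regardless of the biomarker, mitigate the bias.
import random

random.seed(0)

def simulate(n, p_routine):
    """Return (true mean, observed mean) over n patient-days.
    p_routine = probability of an uninformative routine visit."""
    true_vals, observed = [], []
    for _ in range(n):
        x = random.gauss(100.0, 15.0)                     # latent biomarker
        true_vals.append(x)
        informative = x > 110 and random.random() < 0.8   # sicker -> visit
        routine = random.random() < p_routine             # visit regardless
        if informative or routine:
            observed.append(x)                            # recorded in "EHR"
    return sum(true_vals) / len(true_vals), sum(observed) / len(observed)

true_mean, biased = simulate(20000, p_routine=0.05)  # mostly informative visits
_, mitigated      = simulate(20000, p_routine=0.60)  # many uninformative visits
print(true_mean, biased, mitigated)
```

With few routine visits the observed mean sits well above the true mean; raising the routine-visit rate pulls it back toward the truth, matching the abstract's claim that uninformative visits mitigate the bias.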


Subjects
Bias , Biomarkers , Electronic Health Records , Aged , Biomedical Research , Computer Simulation , Female , Humans , Male , Middle Aged , Models, Biological , Office Visits
17.
J Am Med Inform Assoc ; 26(5): 429-437, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30869798

ABSTRACT

OBJECTIVE: Participants enrolled into randomized controlled trials (RCTs) often do not reflect real-world populations. Previous research on how best to transport RCT results to target populations has focused on weighting RCT data to look like the target data. Simulation work, however, has suggested that an outcome model approach may be preferable. Here, we describe such an approach using source data from the 2 × 2 factorial NAVIGATOR (Nateglinide And Valsartan in Impaired Glucose Tolerance Outcomes Research) trial, which evaluated the impact of valsartan and nateglinide on cardiovascular outcomes and new-onset diabetes in a prediabetic population. MATERIALS AND METHODS: Our target data consisted of people with prediabetes served at the Duke University Health System. We used random survival forests to develop separate outcome models for each of the 4 treatments, estimating the 5-year risk difference for progression to diabetes, and estimated the treatment effect in our local patient population, as well as in subpopulations, and compared the results with the traditional weighting approach. RESULTS: Our models suggested that the treatment effect for valsartan in our patient population was the same as in the trial, whereas for nateglinide the treatment effect was stronger than observed in the original trial. Our effect estimates were more efficient than the weighting approach and we effectively estimated subgroup differences. CONCLUSIONS: The described method represents a straightforward approach to efficiently transporting an RCT result to any target population.
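The outcome-model transport idea can be sketched with the simplest possible outcome model: fit arm-specific risks in the trial, then average the predicted risk difference over the target population's covariate mix (g-computation). The paper used random survival forests; below a stratified risk table stands in for the model, and every number is invented for illustration.

```python
# Hypothetical sketch of outcome-model transport (g-computation): fit an
# outcome model per trial arm, then average arm-specific predicted risks
# over the *target* population's covariate distribution. A stratum-level
# risk table is the stand-in outcome model; the paper used random survival
# forests. All counts below are invented.

def fit_risk_table(rows):
    """rows: (covariate_stratum, event_indicator). Returns stratum -> risk."""
    counts = {}
    for x, y in rows:
        n, e = counts.get(x, (0, 0))
        counts[x] = (n + 1, e + y)
    return {x: e / n for x, (n, e) in counts.items()}

def transported_risk_difference(treated, control, target_strata):
    """Average (treated risk - control risk) over the target covariate mix."""
    rt, rc = fit_risk_table(treated), fit_risk_table(control)
    return sum(rt[x] - rc[x] for x in target_strata) / len(target_strata)

# Toy trial data: the "high"-risk stratum benefits more from treatment.
treated = [("low", 0)] * 8 + [("low", 1)] * 2 + [("high", 0)] * 6 + [("high", 1)] * 4
control = [("low", 0)] * 7 + [("low", 1)] * 3 + [("high", 0)] * 2 + [("high", 1)] * 8

trial_mix  = ["low"] * 10 + ["high"] * 10   # trial: 50% high-risk
target_mix = ["low"] * 4 + ["high"] * 16    # target clinic: 80% high-risk

print(transported_risk_difference(treated, control, trial_mix))
print(transported_risk_difference(treated, control, target_mix))
```

Because treatment effects differ by stratum, the transported effect in the high-risk-heavy target population is larger than the trial-average effect, which is exactly the situation where transporting (rather than reusing the trial estimate) matters.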


Subjects
Antihypertensive Agents/therapeutic use , Hypoglycemic Agents/therapeutic use , Machine Learning , Nateglinide/therapeutic use , Prediabetic State/drug therapy , Valsartan/therapeutic use , Cardiovascular Diseases/prevention & control , Diabetes Mellitus, Type 2 , Disease Progression , Electronic Health Records , Evidence-Based Medicine , Humans , Outcome Assessment, Health Care , Randomized Controlled Trials as Topic , Translational Research, Biomedical
18.
JAMA Netw Open ; 1(5): e182716, 2018 09 07.
Article in English | MEDLINE | ID: mdl-30646172

ABSTRACT

Importance: Data from electronic health records (EHRs) are increasingly used for risk prediction. However, EHRs do not reliably collect sociodemographic and neighborhood information, which has been shown to be associated with health. The added contribution of neighborhood socioeconomic status (nSES) in predicting health events is unknown and may help inform population-level risk reduction strategies. Objective: To quantify the association of nSES with adverse outcomes and the value of nSES in predicting the risk of adverse outcomes in EHR-based risk models. Design, Setting, and Participants: Cohort study in which data from 90 097 patients 18 years or older in the Duke University Health System and Lincoln Community Health Center EHR from January 1, 2009, to December 31, 2015, with at least 1 health care encounter and residence in Durham County, North Carolina, in the year prior to the index date were linked with census tract data to quantify the association between nSES and the risk of adverse outcomes. Machine learning methods were used to develop risk models and determine how adding nSES to EHR data affects risk prediction. Neighborhood socioeconomic status was defined using the Agency for Healthcare Research and Quality SES index, a weighted measure of multiple indicators of neighborhood deprivation. Main Outcomes and Measures: Outcomes included use of health care services (emergency department and inpatient and outpatient encounters) and hospitalizations due to accidents, asthma, influenza, myocardial infarction, and stroke. 
Results: Among the 90 097 patients in the training set of the study (57 507 women and 32 590 men; mean [SD] age, 47.2 [17.7] years) and the 122 812 patients in the testing set of the study (75 517 women and 47 295 men; mean [SD] age, 46.2 [17.9] years), those living in neighborhoods with lower nSES had a shorter time to use of emergency department services and inpatient encounters, as well as a shorter time to hospitalizations due to accidents, asthma, influenza, myocardial infarction, and stroke. The predictive value of nSES varied by outcome of interest (C statistic ranged from 0.50 to 0.63). When added to EHR variables, nSES did not improve predictive performance for any health outcome. Conclusions and Relevance: Social determinants of health, including nSES, are associated with the health of a patient. However, the results of this study suggest that information on nSES may not contribute much more to risk prediction above and beyond what is already provided by EHR data. Although this result does not mean that integrating social determinants of health into the EHR has no benefit, researchers may be able to use EHR data alone for population risk assessment.
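The discrimination comparison above rests on the C statistic, which for a binary outcome equals the probability that a randomly chosen case is scored higher than a randomly chosen non-case (ties counting half). The snippet below is a minimal, hypothetical illustration of that computation and of the "no improvement" scenario; the scores are invented.

```python
# Hypothetical sketch: the C statistic (AUC) used to judge whether adding a
# predictor such as nSES improves discrimination. Pairwise definition:
# P(score of a random case > score of a random non-case), ties count 0.5.
# Scores below are invented for illustration.

def c_statistic(scores, labels):
    pairs = concordant = 0.0
    for si, yi in zip(scores, labels):
        for sj, yj in zip(scores, labels):
            if yi == 1 and yj == 0:        # one case, one non-case
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs

labels        = [1, 1, 1, 0, 0, 0]
ehr_only      = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]   # EHR-based model scores
ehr_plus_nses = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]   # adding nSES: unchanged here

print(c_statistic(ehr_only, labels), c_statistic(ehr_plus_nses, labels))
```

When the augmented model's ranking of patients is unchanged, the C statistic is identical, which is the pattern the study reports for nSES added on top of EHR variables.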


Subjects
Electronic Health Records/statistics & numerical data , Health Status Disparities , Residence Characteristics/statistics & numerical data , Social Class , Adult , Aged , Cohort Studies , Female , Humans , Income/statistics & numerical data , Male , Middle Aged , North Carolina/ethnology , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/statistics & numerical data , Racial Groups/ethnology , Racial Groups/statistics & numerical data , Social Determinants of Health/ethnology , Social Determinants of Health/statistics & numerical data
19.
J Am Med Inform Assoc ; 25(2): 150-157, 2018 02 01.
Article in English | MEDLINE | ID: mdl-28645207

ABSTRACT

Background: Computed algorithms applied to electronic medical record (EMR) data allow investigators to screen thousands of patient records to identify specific disease cases. No computed algorithms have been developed to detect all cases of human immunodeficiency virus (HIV) infection using administrative, laboratory, and clinical documentation data outside of the Veterans Health Administration. We developed novel EMR-based algorithms for HIV detection and validated them in a cohort of subjects in the Duke University Health System (DUHS). Methods: We created 2 novel algorithms to identify HIV-infected subjects. Algorithm 1 used laboratory studies and medications to identify HIV-infected subjects, whereas Algorithm 2 used International Classification of Diseases, Ninth Revision (ICD-9) codes, medications, and laboratory testing. We applied the algorithms to a well-characterized cohort of patients and validated both against the gold standard of physician chart review. We determined sensitivity, specificity, and prevalence of HIV between 2007 and 2011 in patients seen at DUHS. Results: A total of 172 271 patients with complete data were identified; 1063 patients met algorithm criteria for HIV infection. In all, 970 individuals were identified by both algorithms, 78 by Algorithm 1 alone, and 15 by Algorithm 2 alone. The sensitivity and specificity were 78% and 99%, respectively, for Algorithm 1 and 77% and 100% for Algorithm 2. The estimated prevalence of HIV infection at DUHS between 2007 and 2011 was 0.6%. Conclusions: EMR-based phenotypes of HIV infection are capable of detecting cases of HIV-infected adults with good sensitivity and specificity. These algorithms have the potential to be adapted to other EMR systems, allowing for the creation of cohorts of patients across EMR systems.
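The validation step described above reduces to a confusion matrix: each patient gets an algorithm flag and a chart-review truth, from which sensitivity, specificity, and PPV follow. The sketch below uses an invented toy cohort, not the study's counts.

```python
# Hypothetical sketch: validating an EMR phenotyping algorithm against
# physician chart review as the gold standard. The toy cohort below is
# invented, not the study's confusion matrix.

def validate(algorithm_flags, chart_review):
    """Sensitivity, specificity, and PPV of binary flags vs. gold standard."""
    tp = sum(a and g for a, g in zip(algorithm_flags, chart_review))
    fp = sum(a and not g for a, g in zip(algorithm_flags, chart_review))
    fn = sum((not a) and g for a, g in zip(algorithm_flags, chart_review))
    tn = sum((not a) and (not g) for a, g in zip(algorithm_flags, chart_review))
    return {
        "sensitivity": tp / (tp + fn),   # cases the algorithm catches
        "specificity": tn / (tn + fp),   # non-cases it correctly clears
        "ppv": tp / (tp + fp),           # flagged patients who are true cases
    }

# Toy cohort of 10 charts: algorithm flag vs chart-review truth
algo  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(validate(algo, truth))
```

In practice chart review is only feasible on a sample, so these estimates carry sampling uncertainty; the study's reported 77-78% sensitivity and 99-100% specificity come from the same arithmetic applied to its validation cohort.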


Subjects
Algorithms , Electronic Health Records , HIV Infections/diagnosis , HIV-1 , Adult , Humans , Phenotype , Sensitivity and Specificity
20.
EGEMS (Wash DC) ; 5(1): 22, 2017 Dec 06.
Article in English | MEDLINE | ID: mdl-29930963

ABSTRACT

Electronic health record (EHR) data are becoming a primary resource for clinical research. Compared to traditional research data, such as those from clinical trials and epidemiologic cohorts, EHR data have a number of appealing characteristics. However, because there are no mechanisms in place to ensure that the appropriate data are collected, EHR data also pose a number of analytic challenges. In this paper, we illustrate how a patient interacts with a health system influences which data are recorded in the EHR. These interactions are typically informative, potentially resulting in bias. We term the overall set of induced biases informed presence. To illustrate this, we use examples from EHR-based analyses. Specifically, we show that: 1) Where a patient receives services within a health facility can induce selection bias; 2) Which health system a patient chooses for an encounter can result in information bias; and 3) Referral encounters can create an admixture bias. While addressing these biases can often be straightforward, it is important to understand how they are induced in any EHR-based analysis.
