Results 1 - 20 of 112
1.
J Gen Intern Med ; 38(3): 699-706, 2023 02.
Article in English | MEDLINE | ID: mdl-35819683

ABSTRACT

BACKGROUND: Patterns of opioid use vary, including prescribed use without aberrancy, limited aberrant use, and potential opioid use disorder (OUD). In clinical practice, similar opioid-related International Classification of Diseases (ICD) codes are applied across this spectrum, limiting understanding of how groups vary by sociodemographic factors, comorbidities, and long-term risks. OBJECTIVE: (1) Examine how Veterans assigned opioid abuse/dependence ICD codes vary at diagnosis and with respect to long-term risks. (2) Determine whether those with limited aberrant use share more similarities with likely OUD vs those using opioids as prescribed. DESIGN: Longitudinal observational cohort study. PARTICIPANTS: National sample of Veterans categorized as having (1) likely OUD, (2) limited aberrant opioid use, or (3) prescribed, non-aberrant use based upon enhanced medical chart review. MAIN MEASURES: Comparison of sociodemographic and clinical factors at diagnosis and rates of age-adjusted mortality, non-fatal opioid overdose, and hospitalization after diagnosis. An exploratory machine learning analysis investigated how closely those with limited aberrant use resembled those with likely OUD. KEY RESULTS: Veterans (n = 483) were categorized as likely OUD (62.1%), limited aberrant use (17.8%), and prescribed, non-aberrant use (20.1%). Age, proportion experiencing homelessness, chronic pain, anxiety disorders, and non-opioid substance use disorders differed by group. All-cause mortality was high (44.2 per 1000 person-years (95% CI 33.9, 56.7)). Hospitalization rates per 1000 person-years were highest in the likely OUD group (831.5 (95% CI 771.0, 895.5)), compared to limited aberrant use (739.8 (95% CI 637.1, 854.4)) and prescribed, non-aberrant use (411.9 (95% CI 342.6, 490.4)). The exploratory analysis reclassified 29.1% of those with limited aberrant use as having likely OUD with high confidence.
CONCLUSIONS: Veterans assigned opioid abuse/dependence ICD codes are heterogeneous and face variable long-term risks. Limited aberrant use confers increased risk compared to no aberrant use, and some may already have OUD. Findings warrant future investigation of this understudied population.


Subjects
Ill-Housed Persons , Opiate Overdose , Opioid-Related Disorders , Veterans , Humans , Opioid-Related Disorders/diagnosis , Opioid-Related Disorders/epidemiology , Opioid-Related Disorders/drug therapy , Analgesics, Opioid/adverse effects , Opiate Overdose/drug therapy
2.
Health Care Manag Sci ; 26(1): 93-116, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36284034

ABSTRACT

Preventing chronic diseases is an essential aspect of medical care. To prevent chronic diseases, physicians focus on monitoring their risk factors and prescribing the necessary medication. The optimal monitoring policy depends on the patient's risk factors and demographics. Monitoring too frequently may be unnecessary and costly; on the other hand, monitoring the patient infrequently means the patient may forgo needed treatment and experience adverse events related to the disease. We propose a finite horizon and finite-state Markov decision process to define monitoring policies. To build our Markov decision process, we estimate stochastic models based on longitudinal observational data from electronic health records for a large cohort of patients seen in the national U.S. Veterans Affairs health system. We use our model to study policies for whether or when to assess the need for cholesterol-lowering medications. We further use our model to investigate the role of gender and race on optimal monitoring policies.
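The finite-horizon, finite-state Markov decision process described above can be sketched with backward induction. The states, actions, transition probabilities, and rewards below are illustrative placeholders, not values estimated in the paper:

```python
# Minimal finite-horizon MDP solved by backward induction, illustrating how a
# monitoring policy could be derived. All numbers are hypothetical.
import numpy as np

def solve_finite_horizon_mdp(P, R, horizon):
    """P[a][s, s'] = transition probability under action a; R[a][s] = reward.
    Returns the optimal value function and a policy for each period."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)                        # terminal value is zero
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = np.array([R[a] + P[a] @ V for a in range(n_actions)])  # (A, S)
        policy[t] = Q.argmax(axis=0)
        V = Q.max(axis=0)
    return V, policy

# Two states (0 = low risk, 1 = high risk), two actions (0 = wait, 1 = monitor).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),    # wait: high risk tends to persist
     np.array([[0.95, 0.05], [0.6, 0.4]])]  # monitor: high risk often caught/treated
R = [np.array([1.0, -2.0]),                 # wait: cheap, but harmful if high risk
     np.array([0.5, 0.0])]                  # monitor: small cost, mitigates harm
V, policy = solve_finite_horizon_mdp(P, R, horizon=10)
```

With these placeholder values, the optimal policy waits in the low-risk state and monitors in the high-risk state at every decision epoch.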


Subjects
Anticholesteremic Agents , Cardiovascular Diseases , Humans , Cardiovascular Diseases/prevention & control , Risk Factors
3.
JAMA ; 330(8): 715-724, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37606674

ABSTRACT

Importance: Aspirin is an effective and low-cost option for reducing atherosclerotic cardiovascular disease (CVD) events and improving mortality rates among individuals with established CVD. To guide efforts to mitigate the global CVD burden, there is a need to understand current levels of aspirin use for secondary prevention of CVD. Objective: To report and evaluate aspirin use for secondary prevention of CVD across low-, middle-, and high-income countries. Design, Setting, and Participants: Cross-sectional analysis using pooled, individual participant data from nationally representative health surveys conducted between 2013 and 2020 in 51 low-, middle-, and high-income countries. Included surveys contained data on self-reported history of CVD and aspirin use. The sample of participants included nonpregnant adults aged 40 to 69 years. Exposures: Countries' per capita income levels and world region; individuals' socioeconomic demographics. Main Outcomes and Measures: Self-reported use of aspirin for secondary prevention of CVD. Results: The overall pooled sample included 124 505 individuals. The median age was 52 (IQR, 45-59) years, and 50.5% (95% CI, 49.9%-51.1%) were women. A total of 10 589 individuals had a self-reported history of CVD (8.1% [95% CI, 7.6%-8.6%]). Among individuals with a history of CVD, aspirin use for secondary prevention in the overall pooled sample was 40.3% (95% CI, 37.6%-43.0%). By income group, estimates were 16.6% (95% CI, 12.4%-21.9%) in low-income countries, 24.5% (95% CI, 20.8%-28.6%) in lower-middle-income countries, 51.1% (95% CI, 48.2%-54.0%) in upper-middle-income countries, and 65.0% (95% CI, 59.1%-70.4%) in high-income countries. Conclusion and Relevance: Worldwide, aspirin is underused in secondary prevention, particularly in low-income countries. National health policies and health systems must develop, implement, and evaluate strategies to promote aspirin therapy.


Subjects
Aspirin , Cardiovascular Diseases , Secondary Prevention , Adult , Aged , Female , Humans , Male , Middle Aged , Aspirin/therapeutic use , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/mortality , Cardiovascular Diseases/prevention & control , Cross-Sectional Studies , Developed Countries/economics , Developed Countries/statistics & numerical data , Developing Countries/economics , Developing Countries/statistics & numerical data , Secondary Prevention/economics , Secondary Prevention/methods , Secondary Prevention/statistics & numerical data , Self Report/economics , Self Report/statistics & numerical data , Cardiovascular Agents/therapeutic use
4.
J Thromb Thrombolysis ; 54(4): 639-646, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35699872

ABSTRACT

Recent trials suggest that aspirin for primary prevention may do more harm than good for some, including adults over 70 years of age. We sought to assess how primary care providers (PCPs) use aspirin for primary prevention in older patients and to identify barriers to use according to recent guidelines, which recommend against routine use in patients over age 70. We surveyed PCPs about whether they would recommend aspirin in clinical vignettes of a 75-year-old patient with a 10-year atherosclerotic cardiovascular disease risk of 25%. We also queried perceived difficulty following guideline recommendations, as well as perceived barriers and facilitators. We obtained responses from 372 PCPs (47.9% response). In the patient vignette, 45.4% of clinicians recommended aspirin use, which did not vary by whether the patient was using aspirin initially (p = 0.21); 41.7% believed aspirin was beneficial. Perceived barriers to guideline-based aspirin use included concern about patients being upset (41.6%), possible malpractice claims (25.0%), and not having a strategy for discussing aspirin use (24.5%). The estimated adjusted probability of rating the guideline as "hard to follow" was higher in clinicians who believed aspirin was beneficial (29.4% vs. 8.0%; p < 0.001) and who worried the patient would be upset if told to stop aspirin (26.7% vs. 12.5%; p = 0.001). Internists vary considerably in their recommendations for aspirin use for primary prevention in older patients. A high proportion of PCPs continue to believe aspirin is beneficial in this setting. These results can inform de-implementation efforts to optimize evidence-based aspirin use.


Subjects
Aspirin , Physicians , Humans , Aged , Aged, 80 and over , Aspirin/therapeutic use , Attitude of Health Personnel , Surveys and Questionnaires
5.
BMC Health Serv Res ; 22(1): 739, 2022 Jun 03.
Article in English | MEDLINE | ID: mdl-35659234

ABSTRACT

BACKGROUND: Hospital-specific template matching (HS-TM) is a newer method of hospital performance assessment. OBJECTIVE: To assess the interpretability, credibility, and usability of HS-TM-based vs. regression-based performance assessments. RESEARCH DESIGN: We surveyed hospital leaders (January-May 2021) and completed follow-up semi-structured interviews. Surveys included four hypothetical performance assessment vignettes, with method (HS-TM, regression) and hospital mortality randomized. SUBJECTS: Nationwide Veterans Affairs Chiefs of Staff, Medicine, and Hospital Medicine. MEASURES: Correct interpretation; self-rated confidence in interpretation; and self-rated trust in assessment (via survey). Concerns about credibility and main uses (via thematic analysis of interview transcripts). RESULTS: In total, 84 participants completed 295 survey vignettes. Respondents correctly interpreted 81.8% HS-TM vs. 56.5% regression assessments, p < 0.001. Respondents "trusted the results" for 70.9% HS-TM vs. 58.2% regression assessments, p = 0.03. Nine concerns about credibility were identified: inadequate capture of case-mix and/or illness severity; inability to account for specialized programs (e.g., transplant center); comparison to geographically disparate hospitals; equating mortality with quality; lack of criterion standards; low power; comparison to dissimilar hospitals; generation of rankings; and lack of transparency. Five concerns were equally relevant to both methods, one more pertinent to HS-TM, and three more pertinent to regression. Assessments were mainly used to trigger further quality evaluation (a "check oil light") and motivate behavior change. CONCLUSIONS: HS-TM-based performance assessments were more interpretable and more credible to VA hospital leaders than regression-based assessments. However, leaders had a similar set of concerns related to credibility for both methods and felt both were best used as a screen for further evaluation.


Subjects
Diagnosis-Related Groups , Hospitals , Delivery of Health Care , Hospital Mortality , Humans , Surveys and Questionnaires
6.
Ann Intern Med ; 174(12): 1666-1673, 2021 12.
Article in English | MEDLINE | ID: mdl-34606315

ABSTRACT

BACKGROUND: There are 2 approaches to intensifying antihypertensive treatment when target blood pressure is not reached, adding a new medication and maximizing dose. Which strategy is better is unknown. OBJECTIVE: To assess the frequency of intensification by adding a new medication versus maximizing dose, as well as the association of each method with intensification sustainability and follow-up systolic blood pressure (SBP). DESIGN: Large-scale, population-based, retrospective cohort study. Observational data were used to emulate a target trial with 2 groups, new medication and maximizing dose, who underwent intensification of their drug regimen. SETTING: Veterans Health Administration (2011 to 2013). PATIENTS: Veterans aged 65 years or older with hypertension, an SBP of 130 mm Hg or higher, and at least 1 antihypertensive medication at less than the maximum dose. MEASUREMENTS: The following 2 intensification approaches were emulated: adding a new medication, defined as a total dose increase with new medication, and maximizing dose, defined as a total dose increase without new medication. Inverse probability weighting was used to assess the observational effectiveness of the intensification approach on sustainability of intensified treatment and follow-up SBP at 3 and 12 months. RESULTS: Among 178 562 patients, 45 575 (25.5%) had intensification by adding a new medication and 132 987 (74.5%) by maximizing dose. Compared with maximizing dose, adding a new medication was associated with less intensification sustainability (average treatment effect, -15.2% [95% CI, -15.7% to -14.6%] at 3 months and -15.1% [CI, -15.6% to -14.5%] at 12 months) but a slightly larger reduction in mean SBP (-0.8 mm Hg [CI, -1.2 to -0.4 mm Hg] at 3 months and -1.1 mm Hg [CI, -1.6 to -0.6 mm Hg] at 12 months). LIMITATION: Observational data; largely male population. 
CONCLUSION: Adding a new antihypertensive medication was less frequent and was associated with less intensification sustainability but slightly larger reductions in SBP. Trials would provide the most definitive support for our findings. PRIMARY FUNDING SOURCE: National Institute on Aging and Veterans Health Administration.
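The inverse probability weighting used above to emulate a target trial can be illustrated on simulated data. The confounder, treatment probabilities, and effect size below are invented for the sketch and have no connection to the VA cohort:

```python
# Inverse probability weighting (IPW) in miniature: estimate the average
# treatment effect of a binary strategy on a binary outcome when treatment
# assignment depends on a measured covariate. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.binomial(1, 0.5, n)                      # confounder (e.g., baseline severity)
p_treat = np.where(x == 1, 0.8, 0.2)             # treatment depends on x
a = rng.binomial(1, p_treat)                     # treatment actually received
# True effect: treatment raises outcome probability by 0.10 regardless of x;
# x itself raises it by 0.30, which confounds the naive comparison.
p_y = 0.2 + 0.10 * a + 0.30 * x
y = rng.binomial(1, p_y)

# Propensity scores are known by construction here; in practice they are modeled.
w = a / p_treat + (1 - a) / (1 - p_treat)        # inverse-probability weights
mu1 = np.sum(w * a * y) / np.sum(w * a)
mu0 = np.sum(w * (1 - a) * y) / np.sum(w * (1 - a))
ate_ipw = mu1 - mu0                              # close to the true 0.10
ate_naive = y[a == 1].mean() - y[a == 0].mean()  # biased upward by confounding
```

Reweighting makes treated and untreated groups resemble each other on the confounder, so the weighted contrast recovers the true effect while the naive contrast overstates it.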


Subjects
Antihypertensive Agents/therapeutic use , Hypertension/drug therapy , Aged , Antihypertensive Agents/administration & dosage , Dose-Response Relationship, Drug , Female , Humans , Male , Retrospective Studies , United States , Veterans
8.
Health Care Manag Sci ; 24(1): 1-25, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33483911

ABSTRACT

Atherosclerotic cardiovascular disease (ASCVD) is among the leading causes of death in the US. Although research has shown that ASCVD has genetic elements, the understanding of how genetic testing influences its prevention and treatment has been limited. To this end, we model the health trajectory of patients stochastically and determine treatment and testing decisions simultaneously. Since the cholesterol level of patients is one controllable risk factor for ASCVD events, we model cholesterol treatment plans as Markov decision processes. We determine whether and when patients should receive a genetic test using value of information analysis. By simulating the health trajectory of over 64 million adult patients, we find that 6.73 million patients undergo genetic testing. The optimal treatment plans informed with clinical and genetic information save 5,487 more quality-adjusted life-years while costing $1.18 billion less than the optimal treatment plans informed with clinical information only. As precision medicine becomes increasingly important, understanding the impact of genetic information becomes essential.
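The value-of-information analysis mentioned above reduces, in its simplest form, to comparing the expected payoff of the best untested strategy with the expected payoff of tailoring treatment to a test result, net of the test's cost. All payoffs and the prevalence below are illustrative, not values from the paper's model:

```python
# Simplest value-of-information calculation: test only if the expected gain
# from tailoring treatment to the result exceeds the cost of testing.
# All numbers are hypothetical (e.g., QALYs net of treatment cost).

def expected_value_without_test(p_high, v_treat_high, v_treat_low,
                                v_none_high, v_none_low):
    """Best single action applied to everyone, given prevalence p_high of
    the high-genetic-risk type."""
    treat_all = p_high * v_treat_high + (1 - p_high) * v_treat_low
    treat_none = p_high * v_none_high + (1 - p_high) * v_none_low
    return max(treat_all, treat_none)

def expected_value_with_test(p_high, v_treat_high, v_treat_low,
                             v_none_high, v_none_low, test_cost):
    """Perfect test: choose the best action separately for each type."""
    best_high = max(v_treat_high, v_none_high)
    best_low = max(v_treat_low, v_none_low)
    return p_high * best_high + (1 - p_high) * best_low - test_cost

p = 0.3
without = expected_value_without_test(p, 10.0, 6.0, 4.0, 8.0)
with_test = expected_value_with_test(p, 10.0, 6.0, 4.0, 8.0, test_cost=0.5)
value_of_information = with_test - without   # test only if this is positive
```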


Subjects
Atherosclerosis/prevention & control , Cardiovascular Diseases/prevention & control , Genetic Testing , Hypercholesterolemia/drug therapy , Adult , Anticholesteremic Agents/therapeutic use , Atherosclerosis/drug therapy , Atherosclerosis/genetics , Cardiovascular Diseases/genetics , Computer Simulation , Female , Genetic Predisposition to Disease , Humans , Male , Markov Chains , Middle Aged , Quality-Adjusted Life Years
9.
BMC Health Serv Res ; 21(1): 561, 2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34098973

ABSTRACT

BACKGROUND: Although risk prediction has become an integral part of clinical practice guidelines for cardiovascular disease (CVD) prevention, multiple studies have shown that patients' risk still plays almost no role in clinical decision-making. Because little is known about why this is so, we sought to understand providers' views on the opportunities, barriers, and facilitators of incorporating risk prediction to guide their use of cardiovascular preventive medicines. METHODS: We conducted semi-structured interviews with primary care providers (n = 33) at VA facilities in the Midwest. Facilities were chosen using a maximum variation approach according to their geography, size, proportion of MD to non-MD providers, and percentage of full-time providers. Providers included MD/DO physicians, physician assistants, nurse practitioners, and clinical pharmacists. Providers were asked about their reaction to a hypothetical situation in which the VA would introduce a risk prediction-based approach to CVD treatment. We conducted matrix and content analysis to identify providers' reactions to risk prediction, reasons for their reaction, and exemplar quotes. RESULTS: Most providers were classified as Enthusiastic (n = 14) or Cautious Adopters (n = 15), with only a few Non-Adopters (n = 4). Providers described four key concerns toward adopting risk prediction. Their primary concern was that risk prediction is not always compatible with a "whole patient" approach to patient care. Other concerns included questions about the validity of the proposed risk prediction model, potential workflow burdens, and whether risk prediction adds value to existing clinical practice. Enthusiastic, Cautious, and Non-Adopters all expressed both doubts about and support for risk prediction categorizable in the above four key areas of concern. 
CONCLUSIONS: Providers were generally supportive of adopting risk prediction into CVD prevention, but many had misgivings, which included concerns about impact on workflow, validity of predictive models, the value of making this change, and possible negative effects on providers' ability to address the whole patient. These concerns have likely contributed to the slow introduction of risk prediction into clinical practice. These concerns will need to be addressed for risk prediction, and other approaches relying on "big data" including machine learning and artificial intelligence, to have a meaningful role in clinical practice.


Subjects
Artificial Intelligence , Physicians , Attitude , Attitude of Health Personnel , Health Personnel , Humans , Qualitative Research
10.
Curr Opin Crit Care ; 26(5): 500-507, 2020 10.
Article in English | MEDLINE | ID: mdl-32773618

ABSTRACT

PURPOSE OF REVIEW: Critical illness survivorship is associated with new and worsening physical, cognitive, and emotional status. Survivors are vulnerable to further health setbacks, most commonly because of infection and exacerbation of chronic medical conditions. Awareness of survivors' challenges is important given the anticipated rise in critical illness survivors because of SARS-CoV-2 viral sepsis. RECENT FINDINGS: Studies continue to document challenges of critical illness survivorship. Beyond the cognitive, physical, and mental health sequelae encompassed by postintensive care syndrome, patients commonly experience persistent immunosuppression, re-hospitalization, inability to resume prior employment, and reduced quality of life. Although recommended practices for enhancing recovery from sepsis are associated with better outcomes, only a minority of patients receive all recommended practices. ICU follow-up programs or peer support groups remain important interventions to learn about and address the multifaceted challenges of critical illness survivorship, but there is little evidence of benefit to date. SUMMARY: Survivors of sepsis and critical illness commonly experience impaired health status, reduced quality of life, and inability to return to prior employment. Although the challenges of critical illness survivorship are increasingly well documented, there are relatively few studies on enhancing recovery. Future studies must focus on identifying best practices for optimizing recovery and strategies to promote their implementation.


Subjects
Critical Illness , Intensive Care Units , Survivorship , Betacoronavirus , COVID-19 , Coronavirus Infections , Health Status , Humans , Pandemics , Pneumonia, Viral , Quality of Life , Return to Work , SARS-CoV-2
11.
Stroke ; 50(7): 1669-1675, 2019 07.
Article in English | MEDLINE | ID: mdl-31138085

ABSTRACT

Background and Purpose- Effective stroke prevention depends on accurate stroke risk prediction. We determined the discriminative ability of NfL (neurofilament light chain) levels for distinguishing between adults with diabetes mellitus who develop incident stroke and those who remain stroke free during a 7-year follow-up period. Methods- We performed a case-control study of participants selected from the previously completed ACCORD trial (Action to Control Cardiovascular Risk in Diabetes). Cases were all ACCORD subjects who were stroke free at enrollment and developed incident stroke during follow-up (n=113). Control subjects (n=250) were randomly selected ACCORD subjects who had no stroke events either before or after randomization. NfL was measured in baseline samples using Single Molecule Array technology (Quanterix). Results- Baseline NfL levels were higher in stroke subjects, compared to controls, after adjusting for age, race, blood pressure, weight, and the Framingham Stroke Risk Score. Relative to the subjects in the lowest quintile of NfL levels, the hazard ratios of incident stroke for subjects in the second to fifth quintiles were 3.91 (1.45-10.53), 4.05 (1.52-10.79), 5.63 (2.16-14.66), and 9.75 (3.84-27.71), respectively, after adjusting for race and Framingham Stroke Risk Score. Incorporating NfL levels into a predictive score that already included race and Framingham Stroke Risk Score increased the score's C statistic from 0.71 (95% CI, 0.66-0.77) to 0.78 (95% CI, 0.73-0.83), P<0.001. Older age, nonwhite race, higher systolic blood pressure, glomerular filtration rate <60, and higher hemoglobin A1C were independent predictors of serum NfL in this cohort but diastolic blood pressure, durations of hypertension or diabetes mellitus, and lipid levels were not. In total, cardiovascular disease risk factors explained 19.2% of the variability in baseline NfL levels. 
Conclusions- Serum NfL levels predict incident stroke and add considerably to the discriminatory power of the Framingham Stroke Risk Score in a cohort of middle-aged and older adults with diabetes mellitus.
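The C statistic reported above is the probability that a randomly chosen case receives a higher risk score than a randomly chosen control (equivalently, the AUC). A minimal implementation on made-up risk scores, not data from this study:

```python
# C statistic (concordance index) for a binary outcome: the fraction of
# case-control pairs in which the case has the higher risk score, with ties
# counted as half. Scores and outcomes below are invented for illustration.
import itertools

def c_statistic(scores, outcomes):
    """outcomes: 1 = event (e.g., incident stroke), 0 = event-free."""
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(1.0 if c > k else 0.5 if c == k else 0.0
                     for c, k in itertools.product(cases, controls))
    return concordant / (len(cases) * len(controls))

scores   = [0.9, 0.8, 0.4, 0.7, 0.2, 0.3]
outcomes = [1,   1,   1,   0,   0,   0]
auc = c_statistic(scores, outcomes)  # 8 of 9 pairs concordant: 0.4 loses to 0.7
```

A value of 0.5 means the score discriminates no better than chance; 1.0 means every case outscores every control, which is why the increase from 0.71 to 0.78 reported above represents a meaningful gain.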


Subjects
Diabetes Complications/blood , Diabetes Complications/epidemiology , Neurofilament Proteins/blood , Stroke/blood , Stroke/epidemiology , Adult , Age Factors , Aged , Case-Control Studies , Cohort Studies , Ethnicity , Female , Glomerular Filtration Rate , Glycated Hemoglobin/analysis , Humans , Hypertension/epidemiology , Incidence , Male , Middle Aged , Risk Assessment , Socioeconomic Factors
13.
Ann Intern Med ; 169(1): 20-29, 2018 07 03.
Article in English | MEDLINE | ID: mdl-29868850

ABSTRACT

Background: The 2013 pooled cohort equations (PCEs) are central in prevention guidelines for cardiovascular disease (CVD) but can misestimate CVD risk. Objective: To improve the clinical accuracy of CVD risk prediction by revising the 2013 PCEs using newer data and statistical methods. Design: Derivation and validation of risk equations. Setting: Population-based. Participants: 26 689 adults aged 40 to 79 years without prior CVD from 6 U.S. cohorts. Measurements: Nonfatal myocardial infarction, death from coronary heart disease, or fatal or nonfatal stroke. Results: The 2013 PCEs overestimated 10-year risk for atherosclerotic CVD by an average of 20% across risk groups. Misestimation of risk was particularly prominent among black adults, of whom 3.9 million (33% of eligible black persons) had extreme risk estimates (<70% or >250% those of white adults with otherwise-identical risk factor values). Updating these equations improved accuracy among all race and sex subgroups. Approximately 11.8 million U.S. adults previously labeled high-risk (10-year risk ≥7.5%) by the 2013 PCEs would be relabeled lower-risk by the updated equations. Limitations: Updating the 2013 PCEs with data from modern cohorts reduced the number of persons considered to be at high risk. Clinicians and patients should consider the potential benefits and harms of reducing the number of persons recommended aspirin, blood pressure, or statin therapy. Our findings also indicate that risk equations will generally become outdated over time and require routine updating. Conclusion: Revised PCEs can improve the accuracy of CVD risk estimates. Primary Funding Source: National Institutes of Health.


Subjects
Coronary Artery Disease/etiology , Risk Assessment/methods , Adult , Black or African American/statistics & numerical data , Aged , Coronary Artery Disease/epidemiology , Coronary Artery Disease/mortality , Female , Humans , Male , Middle Aged , Models, Statistical , Risk Factors , United States/epidemiology , White People/statistics & numerical data
14.
J Gen Intern Med ; 33(12): 2132-2137, 2018 12.
Article in English | MEDLINE | ID: mdl-30284172

ABSTRACT

BACKGROUND: Implementation of new practice guidelines for statin use was very poor. OBJECTIVE: To test a multi-component quality improvement intervention to encourage use of new guidelines for statin use. DESIGN: Cluster-randomized, usual-care controlled trial. PARTICIPANTS: The study population was primary care visits for patients who were recommended statins by the 2013 guidelines, but were not receiving them. We excluded patients who were over 75 years old, or had an ICD9 or ICD10 code for end-stage renal disease, muscle pain, pregnancy, or in vitro fertilization in the 2 years prior to the study visit. INTERVENTIONS: A novel quality improvement intervention consisting of a personalized decision support tool, an educational program, a performance measure, and an audit and feedback system. Randomization was at the level of the primary care team. MAIN MEASURES: Our primary outcome was prescription of a medium- or high-strength statin. We studied how receiving the intervention changed care during the quality improvement intervention compared to before it and if that change continued after the intervention. KEY RESULTS: Among 3787 visits to 43 primary care providers, being in the intervention arm tripled the odds of patients being prescribed an appropriate statin (OR 3.0, 95% CI 1.8-4.9), though the effect resolved after the personalized decision support ended (OR 1.7, 95% CI 0.99-2.77). CONCLUSIONS: A simple, personalized quality improvement intervention is promising for enabling the adoption of new guidelines. ClinicalTrials.gov Identifier: NCT02820870.


Subjects
Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Precision Medicine/standards , Primary Health Care/standards , Quality Improvement/standards , United States Department of Veterans Affairs/standards , Veterans , Aged , Cardiovascular Diseases/drug therapy , Cardiovascular Diseases/epidemiology , Cluster Analysis , Female , Humans , Male , Middle Aged , Precision Medicine/trends , Primary Health Care/trends , Quality Improvement/trends , United States/epidemiology , United States Department of Veterans Affairs/trends
15.
J Gen Intern Med ; 33(1): 34-41, 2018 01.
Article in English | MEDLINE | ID: mdl-28905179

ABSTRACT

BACKGROUND: Deintensification of diabetic therapy is often clinically appropriate for older adults, because the benefit of aggressive diabetes treatment declines with age, while the risks increase. OBJECTIVE: We examined rates of overtreatment and deintensification of therapy for older adults with diabetes, and whether these rates differed by medical, demographic, and socioeconomic characteristics. DESIGN, SUBJECTS, AND MAIN MEASURES: We analyzed Medicare claims data from 10 states, linked to outpatient laboratory values to identify patients potentially overtreated for diabetes (HbA1c < 6.5% with fills for any diabetes medications beyond metformin, 1/1/2011-6/30/2011). We examined characteristics associated with deintensification for potentially overtreated diabetic patients. We used multinomial logistic regression to examine whether patient characteristics associated with overtreatment of diabetes differed from those associated with undertreatment (i.e. HbA1c > 9.0%). KEY RESULTS: Of 78,792 Medicare recipients with diabetes, 8560 (10.9%) were potentially overtreated. Overtreatment of diabetes was more common among those who were over 75 years of age and enrolled in Medicaid (p < 0.001), and was less common among Hispanics (p = 0.009). Therapy was deintensified for 14% of overtreated diabetics. Appropriate deintensification of diabetic therapy was more common for patients with six or more chronic conditions, more outpatient visits, or living in urban areas; deintensification was less common for those over age 75. Only 6.9% of Medicare recipients with diabetes were potentially undertreated. Variables associated with overtreatment of diabetes differed from those associated with undertreatment. CONCLUSIONS: Medicare recipients are more frequently overtreated than undertreated for diabetes. Medicare recipients who are overtreated for diabetes rarely have their regimens deintensified.


Subjects
Diabetes Mellitus/epidemiology , Diabetes Mellitus/therapy , Hypoglycemic Agents/administration & dosage , Medical Overuse/prevention & control , Medicare/standards , Aged , Aged, 80 and over , Blood Glucose/drug effects , Blood Glucose/metabolism , Cohort Studies , Female , Humans , Hypoglycemic Agents/adverse effects , Male , United States/epidemiology
16.
Ann Intern Med ; 166(5): 354-360, 2017 Mar 07.
Article in English | MEDLINE | ID: mdl-28055048

ABSTRACT

BACKGROUND: Two recent randomized trials produced discordant results when testing the benefits and harms of treatment to reduce blood pressure (BP) in patients with cardiovascular disease (CVD). OBJECTIVE: To perform a theoretical modeling study to identify whether large, clinically important differences in benefit and harm among patients (heterogeneous treatment effects [HTEs]) can be hidden in, and explain discordant results between, treat-to-target BP trials. DESIGN: Microsimulation. DATA SOURCES: Results of 2 trials comparing standard (systolic BP target <140 mm Hg) with intensive (systolic BP target <120 mm Hg) BP treatment and data from the National Health and Nutrition Examination Survey (2013 to 2014). TARGET POPULATION: U.S. adults. TIME HORIZON: 5 years. PERSPECTIVE: Societal. INTERVENTION: BP treatment. OUTCOME MEASURES: CVD events and mortality. RESULTS OF BASE-CASE ANALYSIS: Clinically important HTEs could explain differences in outcomes between 2 trials of intensive BP treatment, particularly diminishing benefit with each additional BP agent (for example, adding a second agent reduces CVD risk [hazard ratio, 0.61], but adding a fourth agent to a third has no benefit) and increasing harm at low diastolic BP. RESULTS OF SENSITIVITY ANALYSIS: Conventional treat-to-target trial designs had poor (<5%) statistical power to detect the HTEs, despite large samples (n > 20 000), and produced biased effect estimates. In contrast, a trial with sequential randomization to more intensive therapy achieved greater than 80% power and unbiased HTE estimates, despite small samples (n = 3500). LIMITATIONS: The HTEs as a function of the number of BP agents only were explored. Simulated aggregate data from the trials were used as model inputs because individual-participant data were not available. 
CONCLUSION: Clinically important heterogeneity in intensive BP treatment effects remains undetectable in conventional trial designs but can be detected in sequential randomization trial designs. PRIMARY FUNDING SOURCE: National Institutes of Health and U.S. Department of Veterans Affairs.


Subjects
Antihypertensive Agents/therapeutic use , Hypertension/drug therapy , Randomized Controlled Trials as Topic/standards , Research Design/standards , Adult , Cardiovascular Diseases/complications , Cardiovascular Diseases/prevention & control , Computer Simulation , Humans , Hypertension/complications , Risk Factors
18.
Circulation; 133(9): 840-8, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26762520

ABSTRACT

BACKGROUND: The World Health Organization aims to reduce mortality from chronic diseases, including cardiovascular disease (CVD), by 25% by 2025. High blood pressure is a leading CVD risk factor. We sought to compare 3 strategies for treating blood pressure in China and India: a treat-to-target (TTT) strategy emphasizing lowering blood pressure to a target, a benefit-based tailored treatment (BTT) strategy emphasizing lowering CVD risk, and a hybrid strategy currently recommended by the World Health Organization. METHODS AND RESULTS: We developed a microsimulation model of adults aged 30 to 70 years in China and India to compare the 3 treatment strategies across a 10-year policy-planning horizon. In the model, a BTT strategy treating adults with a 10-year CVD event risk of ≥10% used similar financial resources but averted ≈5 million more disability-adjusted life-years (DALYs) in both China and India than a TTT approach based on current US guidelines. The hybrid strategy in the current World Health Organization guidelines produced no substantial benefits over TTT. BTT was more cost-effective at $205 to $272 per DALY averted, which was $142 to $182 less per DALY than the TTT or hybrid strategies. The comparative effectiveness of BTT was robust to uncertainties in CVD risk estimation and to variations in the age range analyzed, the BTT treatment threshold, and rates of treatment access, adherence, or concurrent statin therapy. CONCLUSIONS: In model-based analyses, a simple BTT strategy was more effective and cost-effective than TTT or hybrid strategies in reducing mortality.
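The cost-effectiveness comparison above reduces to simple ratio arithmetic: dollars spent divided by DALYs averted. The sketch below uses hypothetical figures, not the paper's inputs, purely to show why a strategy averting more DALYs on the same budget dominates.

```python
def cost_per_daly_averted(total_cost_usd, dalys_averted):
    """Cost-effectiveness expressed as $ per DALY averted (lower is better)."""
    return total_cost_usd / dalys_averted

# With equal budgets, the strategy averting more DALYs has the lower ratio,
# which is how BTT can dominate TTT at similar spending (numbers hypothetical).
budget = 1_000_000_000          # $1B, illustrative
btt_ratio = cost_per_daly_averted(budget, 4_000_000)  # → $250/DALY
ttt_ratio = cost_per_daly_averted(budget, 2_500_000)  # → $400/DALY
```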


Subjects
Cardiovascular Diseases/mortality, Cardiovascular Diseases/therapy, Computer Simulation, Goals, Hypertension/mortality, Hypertension/therapy, Adult, Aged, Blood Pressure/physiology, Cardiovascular Diseases/diagnosis, China/epidemiology, Cost-Benefit Analysis/methods, Female, Humans, Hypertension/diagnosis, India/epidemiology, Male, Middle Aged, Risk Factors
19.
PLoS Med; 14(10): e1002410, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29040268

ABSTRACT

BACKGROUND: Intensive blood pressure (BP) treatment can avert cardiovascular disease (CVD) events but can cause some serious adverse events. We sought to develop and validate risk models for predicting absolute risk difference (increased risk or decreased risk) for CVD events and serious adverse events from intensive BP therapy. A secondary aim was to test if the statistical method of elastic net regularization would improve the estimation of risk models for predicting absolute risk difference, as compared to a traditional backwards variable selection approach. METHODS AND FINDINGS: Cox models were derived from SPRINT trial data and validated on ACCORD-BP trial data to estimate risk of CVD events and serious adverse events; the models included terms for intensive BP treatment and heterogeneous response to intensive treatment. The Cox models were then used to estimate the absolute reduction in probability of CVD events (benefit) and absolute increase in probability of serious adverse events (harm) for each individual from intensive treatment. We compared the method of elastic net regularization, which uses repeated internal cross-validation to select variables and estimate coefficients in the presence of collinearity, to a traditional backwards variable selection approach. Data from 9,069 SPRINT participants with complete data on covariates were utilized for model development, and data from 4,498 ACCORD-BP participants with complete data were utilized for model validation. Participants were exposed to intensive (goal systolic pressure < 120 mm Hg) versus standard (<140 mm Hg) treatment. Two composite primary outcome measures were evaluated: (i) CVD events/deaths (myocardial infarction, acute coronary syndrome, stroke, congestive heart failure, or CVD death), and (ii) serious adverse events (hypotension, syncope, electrolyte abnormalities, bradycardia, or acute kidney injury/failure). 
The model for CVD chosen through elastic net regularization included interaction terms suggesting that older age, black race, higher diastolic BP, and higher lipids were associated with greater CVD risk reduction benefits from intensive treatment, while current smoking was associated with fewer benefits. The model for serious adverse events chosen through elastic net regularization suggested that male sex, current smoking, statin use, elevated creatinine, and higher lipids were associated with greater risk of serious adverse events from intensive treatment. SPRINT participants in the highest predicted benefit subgroup had a number needed to treat (NNT) of 24 to prevent 1 CVD event/death over 5 years (absolute risk reduction [ARR] = 0.042, 95% CI: 0.018, 0.066; P = 0.001), those in the middle predicted benefit subgroup had an NNT of 76 (ARR = 0.013, 95% CI: -0.0001, 0.026; P = 0.053), and those in the lowest subgroup had no significant risk reduction (ARR = 0.006, 95% CI: -0.007, 0.018; P = 0.71). Those in the highest predicted harm subgroup had a number needed to harm (NNH) of 27 to induce 1 serious adverse event (absolute risk increase [ARI] = 0.038, 95% CI: 0.014, 0.061; P = 0.002), those in the middle predicted harm subgroup had an NNH of 41 (ARI = 0.025, 95% CI: 0.012, 0.038; P < 0.001), and those in the lowest subgroup had no significant risk increase (ARI = -0.007, 95% CI: -0.043, 0.030; P = 0.72). In ACCORD-BP, participants in the highest subgroup of predicted benefit had significant absolute CVD risk reduction, but the overall ACCORD-BP sample was skewed towards participants with less predicted benefit and more predicted risk than in SPRINT. The models chosen through traditional backwards selection had similar ability to identify absolute risk difference for CVD as the elastic net models, but poorer ability to correctly identify absolute risk difference for serious adverse events.
A key limitation of the analysis is the limited sample size of the ACCORD-BP trial, which widened confidence intervals for ARI among persons with type 2 diabetes. Additionally, it is not possible to mechanistically explain the physiological relationships underlying the heterogeneous treatment effects captured by the models, since the study was an observational secondary data analysis. CONCLUSIONS: We found that predictive models could help identify subgroups of participants in both SPRINT and ACCORD-BP who had lower versus higher ARRs in CVD events/deaths with intensive BP treatment, and participants who had lower versus higher ARIs in serious adverse events.
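The NNT and NNH figures above are simple reciprocal arithmetic on the absolute risk differences (published values can differ by one from hand calculation when derived from unrounded risks). A minimal helper, sketched for illustration:

```python
from math import ceil

def nnt(absolute_risk_difference):
    """Number needed to treat: reciprocal of the ARR, rounded up to a whole
    patient. NNH is the same calculation applied to an absolute risk increase."""
    if absolute_risk_difference <= 0:
        raise ValueError("risk difference must be positive for NNT/NNH")
    return ceil(1.0 / absolute_risk_difference)
```

For example, the highest-benefit subgroup's ARR of 0.042 gives 1/0.042 ≈ 23.8, i.e. an NNT of 24, matching the abstract.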


Subjects
Antihypertensive Agents/therapeutic use, Blood Pressure/drug effects, Hypertension/drug therapy, Adult, Aged, Aged, 80 and over, Female, Heart Failure/drug therapy, Humans, Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use, Hypertension/complications, Male, Middle Aged, Myocardial Infarction/drug therapy, Proportional Hazards Models, Risk Factors, Stroke/drug therapy, Stroke/prevention & control, Treatment Outcome
20.
Med Care; 55(9): 864-870, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28763374

ABSTRACT

BACKGROUND: Accurately estimating cardiovascular risk is fundamental to good decision-making in cardiovascular disease (CVD) prevention, but risk scores developed in one population often perform poorly in dissimilar populations. We sought to examine whether a large integrated health system can use its electronic health data to better predict individual patients' risk of developing CVD. METHODS: We created a cohort of all patients aged 45-80 who used Department of Veterans Affairs (VA) ambulatory care services in 2006, with no history of CVD or heart failure and no loop diuretic use. Our outcome variable was new-onset CVD in 2007-2011. We then developed a series of recalibrated scores, including a fully refit "VA Risk Score-CVD (VARS-CVD)." We tested the different scores using standard measures of prediction quality. RESULTS: For the 1,512,092 patients in the study, the atherosclerotic cardiovascular disease (ASCVD) risk score had similar discrimination to the VARS-CVD (c-statistic of 0.66 in men and 0.73 in women), but the ASCVD model had poor calibration, predicting 63% more events than were observed. Calibration was excellent in the fully recalibrated VARS-CVD tool, but the simpler recalibration techniques tested proved less reliable. CONCLUSIONS: We found that local electronic health record data can be used to estimate CVD risk better than an established risk score based on research populations. Recalibration improved estimates dramatically, and the type of recalibration was important. Such tools can also be easily integrated into a health system's electronic health record and can be more readily updated.
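One common simple recalibration technique is calibration-in-the-large: shifting every prediction on the logit scale until the mean predicted risk matches the observed event rate. The sketch below is a generic illustration of that idea, not the paper's VARS-CVD refit (which re-estimates all coefficients); the bisection bounds and example probabilities are assumptions.

```python
from math import log, exp

def _logit(p):
    return log(p / (1.0 - p))

def _expit(x):
    return 1.0 / (1.0 + exp(-x))

def recalibrate_in_the_large(pred_probs, observed_rate, tol=1e-9):
    """Find (by bisection) a single intercept shift on the logit scale so that
    the mean recalibrated probability equals the observed event rate.
    Rank ordering of patients (discrimination) is unchanged."""
    lo, hi = -20.0, 20.0  # assumed search bounds for the shift
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        mean_p = sum(_expit(_logit(p) + mid) for p in pred_probs) / len(pred_probs)
        if mean_p < observed_rate:
            lo = mid
        else:
            hi = mid
    shift = (lo + hi) / 2.0
    return [_expit(_logit(p) + shift) for p in pred_probs]
```

A score that overpredicts (like the ASCVD model's 63% excess here) would receive a negative shift, pulling average predicted risk down to the observed rate while leaving who-is-riskier-than-whom intact.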


Subjects
Cardiovascular Diseases/epidemiology, Electronic Health Records/statistics & numerical data, Health Status Indicators, Age Distribution, Aged, Atherosclerosis/epidemiology, Female, Humans, Male, Middle Aged, Risk Assessment, Risk Factors, Sex Distribution, Socioeconomic Factors, United States, United States Department of Veterans Affairs