Results 1 - 20 of 187
1.
Med Care ; 61(1): 36-44, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36477618

ABSTRACT

BACKGROUND: Few performance measures assess presurgical value (quality and utilization). OBJECTIVES: Using carpal tunnel syndrome (CTS) as a case study: (1) develop a model to evaluate presurgical quality and utilization and (2) identify opportunities for value improvement. RESEARCH DESIGN: A retrospective cohort study utilizing Veterans Affairs (VA) national administrative data. SUBJECTS: Patients who were evaluated in a VA primary care clinic on at least 1 occasion for CTS and received carpal tunnel release over a 7-year period. MEASURES: We modeled facility-level performance on 2 outcomes: surgical delay (a marker of quality) and number of presurgical encounters (utilization) for CTS, and examined the association between patient, facility, and care process variables and performance. RESULTS: Among 41,912 Veterans undergoing carpal tunnel release at 127 VA medical centers, the median facility-level predicted probability of surgical delay was 48%, with 16 (13%) facilities having significantly less delay than the median and 13 (10%) facilities having greater delay. The median facility-level predicted number of presurgical encounters was 8.8 visits, with 22 (17%) facilities having significantly fewer encounters and 22 (17%) facilities having more. Care processes had a stronger association with both outcomes than the structural variables included in the models. Processes associated with the greatest deviations in predicted delay and utilization included receipt of repeat electrodiagnostic testing, use of 2 or more nonoperative treatments, and community referral outside of the VA. CONCLUSIONS: Using CTS as a test case, this study demonstrates the potential to assess presurgical value and identify modifiable care processes associated with presurgical delay and utilization performance.


Subjects
Carpal Tunnel Syndrome , Humans , Carpal Tunnel Syndrome/diagnosis , Carpal Tunnel Syndrome/surgery , Retrospective Studies
2.
Health Care Manag Sci ; 26(1): 93-116, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36284034

ABSTRACT

Preventing chronic diseases is an essential aspect of medical care. To prevent chronic diseases, physicians focus on monitoring their risk factors and prescribing the necessary medication. The optimal monitoring policy depends on the patient's risk factors and demographics. Monitoring too frequently may be unnecessary and costly; on the other hand, monitoring the patient infrequently means the patient may forgo needed treatment and experience adverse events related to the disease. We propose a finite horizon and finite-state Markov decision process to define monitoring policies. To build our Markov decision process, we estimate stochastic models based on longitudinal observational data from electronic health records for a large cohort of patients seen in the national U.S. Veterans Affairs health system. We use our model to study policies for whether or when to assess the need for cholesterol-lowering medications. We further use our model to investigate the role of gender and race on optimal monitoring policies.
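The monitoring problem described in this abstract can be sketched as a small finite-horizon, finite-state Markov decision process solved by backward induction. Everything below (the states, transition probabilities, and costs) is an illustrative toy, not the model estimated from VA electronic health records:

```python
# Minimal finite-horizon, finite-state Markov decision process solved by
# backward induction. All states, actions, transition probabilities, and
# costs are illustrative placeholders, not values estimated from VA data.
import numpy as np

T = 10                       # decision epochs (e.g., annual visits)
states = ["low", "high"]     # simplified risk-factor states
actions = ["wait", "monitor"]

# P[a][s, s']: probability of moving from state s to s' under action a
P = {
    "wait":    np.array([[0.90, 0.10],
                         [0.00, 1.00]]),
    "monitor": np.array([[0.95, 0.05],
                         [0.30, 0.70]]),  # monitoring enables treatment
}
# c[a][s]: immediate cost of action a in state s
# (monitoring cost plus expected cost of adverse events)
c = {
    "wait":    np.array([0.0, 5.0]),
    "monitor": np.array([1.0, 2.0]),
}

V = np.zeros(len(states))    # terminal value
policy = []
for t in reversed(range(T)):
    Q = np.stack([c[a] + P[a] @ V for a in actions])  # action-value table
    policy.append([actions[i] for i in Q.argmin(axis=0)])
    V = Q.min(axis=0)
policy.reverse()
print(policy[0])  # optimal action per state at the first epoch
```

With these toy numbers the optimal policy monitors only patients in the high-risk state, which is the kind of state-dependent monitoring schedule the study formalizes.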


Subjects
Anticholesteremic Agents , Cardiovascular Diseases , Humans , Cardiovascular Diseases/prevention & control , Risk Factors
3.
N Engl J Med ; 380(23): 2215-2224, 2019 Jun 06.
Article in English | MEDLINE | ID: mdl-31167051

ABSTRACT

BACKGROUND: We previously reported that a median of 5.6 years of intensive as compared with standard glucose lowering in 1791 military veterans with type 2 diabetes resulted in a risk of major cardiovascular events that was significantly lower (by 17%) after a total of 10 years of combined intervention and observational follow-up. We now report the full 15-year follow-up. METHODS: We observationally followed enrolled participants (complete cohort) after the conclusion of the original clinical trial by using central databases to identify cardiovascular events, hospitalizations, and deaths. Participants were asked whether they would be willing to provide additional data by means of surveys and chart reviews (survey cohort). The prespecified primary outcome was a composite of major cardiovascular events, including nonfatal myocardial infarction, nonfatal stroke, new or worsening congestive heart failure, amputation for ischemic gangrene, and death from cardiovascular causes. Death from any cause was a prespecified secondary outcome. RESULTS: There were 1655 participants in the complete cohort and 1391 in the survey cohort. During the trial (which originally enrolled 1791 participants), the separation of the glycated hemoglobin curves between the intensive-therapy group (892 participants) and the standard-therapy group (899 participants) averaged 1.5 percentage points, and this difference declined to 0.2 to 0.3 percentage points by 3 years after the trial ended. Over a period of 15 years of follow-up (active treatment plus post-trial observation), the risks of major cardiovascular events or death were not lower in the intensive-therapy group than in the standard-therapy group (hazard ratio for primary outcome, 0.91; 95% confidence interval [CI], 0.78 to 1.06; P = 0.23; hazard ratio for death, 1.02; 95% CI, 0.88 to 1.18). 
The risk of major cardiovascular disease outcomes was reduced, however, during an extended interval of separation of the glycated hemoglobin curves (hazard ratio, 0.83; 95% CI, 0.70 to 0.99), but this benefit did not continue after equalization of the glycated hemoglobin levels (hazard ratio, 1.26; 95% CI, 0.90 to 1.75). CONCLUSIONS: Participants with type 2 diabetes who had been randomly assigned to intensive glucose control for 5.6 years had a lower risk of cardiovascular events than those who received standard therapy, but only during the prolonged period in which the glycated hemoglobin curves were separated. There was no evidence of a legacy effect or a mortality benefit with intensive glucose control. (Funded by the VA Cooperative Studies Program; VADT ClinicalTrials.gov number, NCT00032487.).


Subjects
Blood Glucose/analysis , Cardiovascular Diseases/prevention & control , Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/administration & dosage , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/mortality , Diabetes Mellitus, Type 2/blood , Female , Follow-Up Studies , Humans , Hyperglycemia/prevention & control , Male , Middle Aged , Quality of Life , Randomized Controlled Trials as Topic , Veterans
5.
Med Care ; 59(Suppl 3): S279-S285, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33976077

ABSTRACT

BACKGROUND: The US Department of Veterans Affairs (VA) enacted policies offering Veterans care in the community, aiming to address access challenges. However, the impact of receipt of community care on wait times for Veterans receiving surgical care is poorly understood. OBJECTIVES: To compare wait times for surgery for Veterans with carpal tunnel syndrome who receive VA care plus community care (mixed care) and those who receive care solely within the VA (VA-only). RESEARCH DESIGN: Retrospective cohort study. SUBJECTS: Veterans undergoing carpal tunnel release (CTR) between January 1, 2010 and December 31, 2016. MEASURES: Our primary outcome was time from primary care physician (PCP) referral to CTR. RESULTS: Of the 29,242 Veterans undergoing CTR, 23,330 (79.8%) received VA-only care and 5912 (20.1%) received mixed care. Veterans receiving mixed care had a significantly longer time from PCP referral to CTR (median mixed care: 378 days; median VA-only care: 176 days; P<0.001). After controlling for patient and facility covariates, mixed care was associated with a 37% increase in time from PCP referral to CTR (adjusted hazard ratio, 0.63; 95% confidence interval, 0.61-0.65). Each additional service provided in the community was associated with a 23% increase in time to surgery (adjusted hazard ratio, 0.77; 95% confidence interval, 0.76-0.78). CONCLUSIONS: VA-only care was associated with a shorter time to surgery compared with mixed care. Moreover, there were additional delays for each service received in the community. With likely increases in Veterans seeking community care, strategies are needed to identify and mitigate sources of delay across the spectrum of care between referral and definitive treatment.


Subjects
Carpal Tunnel Syndrome/surgery , Community Health Services/statistics & numerical data , Referral and Consultation/statistics & numerical data , Time-to-Treatment/statistics & numerical data , Veterans/statistics & numerical data , Aged , Community Health Services/legislation & jurisprudence , Female , Health Services Accessibility/legislation & jurisprudence , Health Services Accessibility/statistics & numerical data , Humans , Male , Middle Aged , Retrospective Studies , United States , United States Department of Veterans Affairs , Veterans Health/legislation & jurisprudence , Veterans Health/statistics & numerical data
6.
J Hand Surg Am ; 46(7): 544-551, 2021 07.
Article in English | MEDLINE | ID: mdl-33867201

ABSTRACT

PURPOSE: The U.S. Department of Veterans Affairs (VA) health care system monitors time from referral to specialist visit. We compared wait times for carpal tunnel release (CTR) at a VA hospital and its academic affiliate. METHODS: We selected patients who underwent CTR at a VA hospital and its academic affiliate (AA) (2010-2015). We analyzed time from primary care physician (PCP) referral to CTR, which was subdivided into PCP referral to surgical consultation and surgical consultation to CTR. Electrodiagnostic testing (EDS) was categorized in relation to surgical consultation (prereferral vs postreferral). Multivariable Cox proportional hazard models were used to examine associations between clinical variables and surgical location. RESULTS: Between 2010 and 2015, VA patients had a shorter median time from PCP referral to CTR (VA: 168 days; AA: 410 days), shorter time from PCP referral to surgical consultation (VA: 43 days; AA: 191 days), but longer time from surgical consultation to CTR (VA: 98 days; AA: 55 days). Using multivariable models, the VA was associated with a 35% shorter time to CTR (AA hazard ratio [HR], 0.65; 95% confidence interval [CI], 0.52-0.82) and 75% shorter time to surgical consultation (AA HR, 0.25; 95% CI, 0.20-0.03). Receiving both prereferral and postreferral EDS was associated with almost a 2-fold prolonged time to CTR (AA HR, 0.49; 95% CI, 0.36-0.67). CONCLUSIONS: The VA was associated with shorter overall time to CTR compared with its AA. However, the VA policy of prioritizing time from referral to surgical consultation may not optimally incentivize time to surgery. Repeat EDS was associated with longer wait times in both systems. CLINICAL RELEVANCE: Given differences in where delays occur in each health care system, initiatives to improve efficiency will require targeting the appropriate sources of preoperative delay. Judicious use of EDS may be one avenue to decrease wait times in both systems.


Subjects
Carpal Tunnel Syndrome , Carpal Tunnel Syndrome/surgery , Delivery of Health Care , Humans , Operative Time , Private Sector , United States , United States Department of Veterans Affairs
7.
J Gen Intern Med ; 35(10): 3045-3049, 2020 10.
Article in English | MEDLINE | ID: mdl-32779137

ABSTRACT

Policymakers and researchers are strongly encouraging clinicians to support patient autonomy through shared decision-making (SDM). In setting policies for clinical care, decision-makers need to understand that current models of SDM have tended to focus on major decisions (e.g., surgeries and chemotherapy) and focused less on everyday primary care decisions. Most decisions in primary care are substantive everyday decisions: intermediate-stakes decisions that occur dozens of times every day, yet are non-trivial for patients, such as whether routine mammography should start at age 40, 45, or 50. Expectations that busy clinicians use current models of SDM (here referred to as "detailed" SDM) for these decisions can feel overwhelming to clinicians. Evidence indicates that detailed SDM is simply not realistic for most of these decisions and without a feasible alternative, clinicians usually default to a decision-making approach with little to no personalization. We propose, for discussion and refinement, a compromise approach to personalizing these decisions (everyday SDM). Everyday SDM is based on a feasible process for supporting patient autonomy that also allows clinicians to continue being respectful health advocates for their patients. We propose that alternatives to detailed SDM are needed to make progress toward more patient-centered care.


Subjects
Decision Making , Patient Participation , Adult , Decision Making, Shared , Humans , Patient-Centered Care , Primary Health Care
8.
CMAJ ; 192(32): E901-E906, 2020 Aug 10.
Article in English | MEDLINE | ID: mdl-32778601

ABSTRACT

BACKGROUND: Most randomized controlled trials (RCTs) and meta-analyses of RCTs examine effect modification (also called a subgroup effect or interaction), in which the effect of an intervention varies by another variable (e.g., age or disease severity). Assessing the credibility of an apparent effect modification presents challenges; therefore, we developed the Instrument for assessing the Credibility of Effect Modification Analyses (ICEMAN). METHODS: To develop ICEMAN, we established a detailed concept; identified candidate credibility considerations in a systematic survey of the literature; together with experts, performed a consensus study to identify key considerations and develop them into instrument items; and refined the instrument based on feedback from trial investigators, systematic review authors and journal editors, who applied drafts of ICEMAN to published claims of effect modification. RESULTS: The final instrument consists of a set of preliminary considerations, core questions (5 for RCTs, 8 for meta-analyses) with 4 response options, 1 optional item for additional considerations and a rating of credibility on a visual analogue scale ranging from very low to high. An accompanying manual provides rationales, detailed instructions and examples from the literature. Seventeen potential users tested ICEMAN; their suggestions improved the user-friendliness of the instrument. INTERPRETATION: The Instrument for assessing the Credibility of Effect Modification Analyses offers explicit guidance for investigators, systematic reviewers, journal editors and others considering making a claim of effect modification or interpreting a claim made by others.


Subjects
Meta-Analysis as Topic , Randomized Controlled Trials as Topic , Research Design/standards , Consensus , Humans
9.
BMC Neurol ; 19(1): 295, 2019 Nov 22.
Article in English | MEDLINE | ID: mdl-31757218

ABSTRACT

BACKGROUND: Carotid endarterectomy (CEA) is routinely performed for asymptomatic carotid stenosis, yet its average net benefit is small. Risk stratification may identify high risk patients that would clearly benefit from treatment. METHODS: Retrospective cohort study using data from the Asymptomatic Carotid Atherosclerosis Study (ACAS). Risk factors for poor outcomes were included in backward and forward selection procedures to develop baseline risk models estimating the risk of non-perioperative ipsilateral stroke/TIA. Baseline risk was estimated for all ACAS participants and externally validated using data from the Atherosclerosis Risk in Communities (ARIC) study. Baseline risk was then included in a treatment risk model that explored the interaction of baseline risk and treatment status (CEA vs. medical management) on the patient-centered outcome of any stroke or death, including peri-operative events. RESULTS: Three baseline risk factors (BMI, creatinine and degree of contralateral stenosis) were selected into our baseline risk model (c-statistic 0.59 [95% CI 0.54-0.65]). The model stratified absolute risk between the lowest and highest risk quintiles (5.1% vs. 12.5%). External validation in ARIC found similar predictiveness (c-statistic 0.58 [0.49-0.67]), but poor calibration across the risk spectrum. In the treatment risk model, CEA was superior to medical management across the spectrum of baseline risk and the magnitude of the treatment effect varied widely between the lowest and highest absolute risk quintiles (3.2% vs. 10.7%). CONCLUSION: Even modestly predictive risk stratification tools have the potential to meaningfully influence clinical decision making in asymptomatic carotid disease. However, our ACAS model requires target population recalibration prior to clinical application.
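For readers unfamiliar with the c-statistics reported above (0.59 in derivation, 0.58 in external validation): the c-statistic is the probability that a randomly chosen patient who had the outcome was assigned a higher predicted risk than a randomly chosen patient who did not. A direct all-pairs computation on made-up toy data (not the ACAS/ARIC values):

```python
# All-pairs computation of the c-statistic: the probability that a
# randomly chosen case receives a higher predicted risk than a randomly
# chosen non-case (ties count half). Risks and outcomes are made up.
def c_statistic(risks, events):
    cases = [r for r, e in zip(risks, events) if e]
    noncases = [r for r, e in zip(risks, events) if not e]
    pairs = [(c, n) for c in cases for n in noncases]
    score = sum(1.0 if c > n else 0.5 if c == n else 0.0 for c, n in pairs)
    return score / len(pairs)

risks = [0.05, 0.10, 0.20, 0.08, 0.15]   # toy predicted risks
events = [0, 1, 1, 0, 0]                  # toy observed outcomes
print(c_statistic(risks, events))         # 5 of 6 case/non-case pairs concordant
```

A value of 0.5 means the model ranks patients no better than chance, which is why a c-statistic near 0.59 is described above as only "modestly predictive."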


Subjects
Carotid Stenosis/surgery , Endarterectomy, Carotid , Risk Assessment/methods , Aged , Asymptomatic Diseases , Clinical Decision-Making/methods , Female , Humans , Male , Middle Aged , Retrospective Studies , Risk Factors , Treatment Outcome
10.
Ann Intern Med ; 169(1): 1-9, 2018 07 03.
Article in English | MEDLINE | ID: mdl-29809244

ABSTRACT

Background: Many health systems are exploring how to implement low-dose computed tomography (LDCT) screening programs that are effective and patient-centered. Objective: To examine factors that influence when LDCT screening is preference-sensitive. Design: State-transition microsimulation model. Data Sources: Two large randomized trials, published decision analyses, and the SEER (Surveillance, Epidemiology, and End Results) cancer registry. Target Population: U.S.-representative sample of simulated patients meeting current U.S. Preventive Services Task Force criteria for screening eligibility. Time Horizon: Lifetime. Perspective: Individual. Intervention: LDCT screening annually for 3 years. Outcome Measures: Lifetime quality-adjusted life-year gains and reduction in lung cancer mortality. To examine the effect of preferences on net benefit, disutilities (the "degree of dislike") quantifying the burden of screening and follow-up were varied across a likely range. The effect of varying the rate of false-positive screening results and overdiagnosis associated with screening was also examined. Results of Base-Case Analysis: Moderate differences in preferences about the downsides of LDCT screening influenced whether screening was appropriate for eligible persons with annual lung cancer risk less than 0.3% or life expectancy less than 10.5 years. For higher-risk eligible persons with longer life expectancy (roughly 50% of the study population), the benefits of LDCT screening overcame even highly negative views about screening and its downsides. Results of Sensitivity Analysis: Rates of false-positive findings and overdiagnosed lung cancer were not highly influential. Limitation: The quantitative thresholds that were identified may vary depending on the structure of the microsimulation model. 
Conclusion: Identifying circumstances in which LDCT screening is more versus less preference-sensitive may help clinicians personalize their screening discussions, tailoring to both preferences and clinical benefit. Primary Funding Source: None.


Subjects
Early Detection of Cancer/methods , Lung Neoplasms/diagnosis , Aged , Aged, 80 and over , Computer Simulation , Female , Humans , Lung/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/mortality , Male , Markov Chains , Middle Aged , Outcome and Process Assessment, Health Care , Quality-Adjusted Life Years , Risk Assessment , Risk Factors , SEER Program , Tomography, X-Ray Computed
11.
Ann Intern Med ; 169(1): 20-29, 2018 07 03.
Article in English | MEDLINE | ID: mdl-29868850

ABSTRACT

Background: The 2013 pooled cohort equations (PCEs) are central in prevention guidelines for cardiovascular disease (CVD) but can misestimate CVD risk. Objective: To improve the clinical accuracy of CVD risk prediction by revising the 2013 PCEs using newer data and statistical methods. Design: Derivation and validation of risk equations. Setting: Population-based. Participants: 26 689 adults aged 40 to 79 years without prior CVD from 6 U.S. cohorts. Measurements: Nonfatal myocardial infarction, death from coronary heart disease, or fatal or nonfatal stroke. Results: The 2013 PCEs overestimated 10-year risk for atherosclerotic CVD by an average of 20% across risk groups. Misestimation of risk was particularly prominent among black adults, of whom 3.9 million (33% of eligible black persons) had extreme risk estimates (<70% or >250% those of white adults with otherwise-identical risk factor values). Updating these equations improved accuracy among all race and sex subgroups. Approximately 11.8 million U.S. adults previously labeled high-risk (10-year risk ≥7.5%) by the 2013 PCEs would be relabeled lower-risk by the updated equations. Limitations: Updating the 2013 PCEs with data from modern cohorts reduced the number of persons considered to be at high risk. Clinicians and patients should consider the potential benefits and harms of reducing the number of persons recommended aspirin, blood pressure, or statin therapy. Our findings also indicate that risk equations will generally become outdated over time and require routine updating. Conclusion: Revised PCEs can improve the accuracy of CVD risk estimates. Primary Funding Source: National Institutes of Health.


Subjects
Coronary Artery Disease/etiology , Risk Assessment/methods , Adult , Black or African American/statistics & numerical data , Aged , Coronary Artery Disease/epidemiology , Coronary Artery Disease/mortality , Female , Humans , Male , Middle Aged , Models, Statistical , Risk Factors , United States/epidemiology , White People/statistics & numerical data
12.
Diabetologia ; 61(2): 295-299, 2018 02.
Article in English | MEDLINE | ID: mdl-29101421

ABSTRACT

AIMS/HYPOTHESIS: We conducted an analysis of data collected during the Veterans Affairs Diabetes Trial (VADT) and the follow-up study (VADT-F) to determine whether intensive (INT) compared with standard (STD) glycaemic control during the VADT resulted in better long-term kidney outcomes. METHODS: The VADT randomly assigned 1791 veterans from 20 Veterans Affairs (VA) medical centres who had type 2 diabetes mellitus and a mean HbA1c of 9.4 ± 2% (79.2 mmol/mol) at baseline to receive either INT or STD glucose control for a median of 5.6 years (randomisation December 2000 to May 2003; intervention ending in May 2008). After the trial, participants received routine care through their own physicians within the VA. This is an interim analysis of the VADT-F (June 2008 to December 2013). We collected data using VA and national databases and report renal outcomes based on serum creatinine, eGFR and urine albumin to creatinine ratio (ACR) in 1033 people who provided informed consent to participate in the VADT-F. RESULTS: By the end of the VADT-F, significantly more people who received INT treatment during the VADT maintained an eGFR >60 ml min-1 1.73 m-2 (OR 1.34 [95% CI 1.05, 1.71], p = 0.02). This benefit was most evident in those classified as at moderate risk (INT vs STD, RR 1.3, p = 0.03) or high risk (RR 2.3, p = 0.04) of chronic kidney disease on the Kidney Disease: Improving Global Outcomes (KDIGO) classification at the beginning of the VADT. At the end of the VADT-F, significantly more people from the INT group had improved to a low KDIGO risk category (RR 6.1, p = 0.002). During the VADT-F there were no significant differences between INT and STD in average HbA1c, blood pressure or lipid levels. CONCLUSIONS/INTERPRETATION: After just over 11 years of follow-up, individuals with type 2 diabetes who had received INT for a median of 5.6 years had 34% greater odds of maintaining an eGFR of >60 ml min-1 1.73 m-2 and of improving their KDIGO risk category.
VADT ClinicalTrials.gov number: NCT00032487.


Subjects
Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/therapeutic use , Kidney/physiopathology , Blood Glucose/drug effects , Creatinine/urine , Diabetes Mellitus, Type 2/blood , Diabetic Retinopathy/drug therapy , Diabetic Retinopathy/metabolism , Female , Follow-Up Studies , Glomerular Filtration Rate/drug effects , Glycated Hemoglobin/metabolism , Humans , Insulin/therapeutic use , Kidney/drug effects , Kidney/metabolism , Male , Serum Albumin, Human/urine , Treatment Outcome , Veterans
13.
Ann Surg ; 268(6): 903-907, 2018 12.
Article in English | MEDLINE | ID: mdl-29697451

ABSTRACT

OBJECTIVE: Our objective was to understand the reliability of profiling surgeons on average health care spending. SUMMARY OF BACKGROUND DATA: Under its Merit-based Incentive Payment System (MIPS), Medicare will measure surgeon spending and tie performance to payments. Although the intent of this cost-profiling is to reward low-cost surgeons, it is unknown whether surgeons can be accurately distinguished from their peers. METHODS: We used Michigan Medicare and commercial payer claims data to construct episodes of surgical care and to calculate average annual spending for individual surgeons. We then estimated the "reliability" (ie, the ability to distinguish surgeons from their peers) of these cost-profiles and the case-volume that surgeons would need in order to achieve high reliability [intraclass correlation coefficient (ICC) >0.8]. Finally, we calculated the reliability of 2 alternative methods of profiling surgeons (ie, using multiple years of data and grouping surgeons by hospital). RESULTS: We found that annual cost-profiles of individual surgeons had poor reliability; the ICC ranged from <0.001 for CABG to 0.061 for cholecystectomy. Few surgeons in the state of Michigan have sufficient case-volume to be reliably compared; only 1% had the minimum yearly case-volume. Finally, we found that the reliability of the cost-profiles can be improved by measuring spending at the hospital level and/or by incorporating additional years of data. CONCLUSION: These findings suggest that the Medicare program should measure surgeon spending at a group level or incorporate multiple years of data to reduce misclassification of surgeon performance in the MIPS program.
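The reliability logic behind these profiles can be sketched with classical test theory: the ICC of a surgeon's mean spending over n episodes is the between-surgeon variance divided by that variance plus the within-surgeon variance over n. The variance figures below are invented for illustration only, not the Michigan claims-data estimates:

```python
# Classical test-theory view of profile reliability: the ICC of a
# surgeon's mean spending over n episodes. The variance figures are
# invented for illustration, not the Michigan claims-data estimates.
import math

def profile_reliability(var_between, var_within, n_cases):
    """ICC of a mean over n_cases episodes."""
    return var_between / (var_between + var_within / n_cases)

def cases_needed(var_between, var_within, target_icc=0.8):
    """Smallest case volume achieving the target ICC (solve the ICC formula for n)."""
    return math.ceil(target_icc * var_within / ((1 - target_icc) * var_between))

var_between = 600.0 ** 2    # spread of true surgeon means (SD $600/episode)
var_within = 8000.0 ** 2    # episode-to-episode noise (SD $8000/episode)

print(profile_reliability(var_between, var_within, 30))  # poor ICC at 30 cases/yr
print(cases_needed(var_between, var_within))             # volume needed for ICC >= 0.8
```

Because within-surgeon noise dwarfs between-surgeon differences in these toy numbers, the required case volume is far beyond a realistic annual caseload, which mirrors the study's finding that pooling years or surgeons is needed.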


Subjects
Health Care Costs , Physician Incentive Plans , Surgeons/economics , Episode of Care , Humans , Michigan , Registries , Reproducibility of Results , United States
14.
N Engl J Med ; 372(23): 2197-206, 2015 Jun 04.
Article in English | MEDLINE | ID: mdl-26039600

ABSTRACT

BACKGROUND: The Veterans Affairs Diabetes Trial previously showed that intensive glucose lowering, as compared with standard therapy, did not significantly reduce the rate of major cardiovascular events among 1791 military veterans (median follow-up, 5.6 years). We report the extended follow-up of the study participants. METHODS: After the conclusion of the clinical trial, we followed participants, using central databases to identify procedures, hospitalizations, and deaths (complete cohort, with follow-up data for 92.4% of participants). Most participants agreed to additional data collection by means of annual surveys and periodic chart reviews (survey cohort, with 77.7% follow-up). The primary outcome was the time to the first major cardiovascular event (heart attack, stroke, new or worsening congestive heart failure, amputation for ischemic gangrene, or cardiovascular-related death). Secondary outcomes were cardiovascular mortality and all-cause mortality. RESULTS: The difference in glycated hemoglobin levels between the intensive-therapy group and the standard-therapy group averaged 1.5 percentage points during the trial (median level, 6.9% vs. 8.4%) and declined to 0.2 to 0.3 percentage points by 3 years after the trial ended. Over a median follow-up of 9.8 years, the intensive-therapy group had a significantly lower risk of the primary outcome than did the standard-therapy group (hazard ratio, 0.83; 95% confidence interval [CI], 0.70 to 0.99; P=0.04), with an absolute reduction in risk of 8.6 major cardiovascular events per 1000 person-years, but did not have reduced cardiovascular mortality (hazard ratio, 0.88; 95% CI, 0.64 to 1.20; P=0.42). No reduction in total mortality was evident (hazard ratio in the intensive-therapy group, 1.05; 95% CI, 0.89 to 1.25; P=0.54; median follow-up, 11.8 years). 
CONCLUSIONS: After nearly 10 years of follow-up, patients with type 2 diabetes who had been randomly assigned to intensive glucose control for 5.6 years had 8.6 fewer major cardiovascular events per 1000 person-years than those assigned to standard therapy, but no improvement was seen in the rate of overall survival. (Funded by the VA Cooperative Studies Program and others; VADT ClinicalTrials.gov number, NCT00032487.).


Subjects
Blood Glucose/metabolism , Cardiovascular Diseases/epidemiology , Diabetes Mellitus, Type 2/blood , Glycated Hemoglobin/analysis , Hypoglycemic Agents/administration & dosage , Aged , Cardiovascular Diseases/mortality , Cardiovascular Diseases/prevention & control , Diabetes Mellitus, Type 2/drug therapy , Diabetes Mellitus, Type 2/mortality , Female , Follow-Up Studies , Humans , Male , Middle Aged , Risk , Survival Analysis
15.
Circulation ; 133(9): 840-8, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26762520

ABSTRACT

BACKGROUND: The World Health Organization aims to reduce mortality from chronic diseases, including cardiovascular disease (CVD), by 25% by 2025. High blood pressure is a leading CVD risk factor. We sought to compare 3 strategies for treating blood pressure in China and India: a treat-to-target (TTT) strategy emphasizing lowering blood pressure to a target, a benefit-based tailored treatment (BTT) strategy emphasizing lowering CVD risk, and a hybrid strategy currently recommended by the World Health Organization. METHODS AND RESULTS: We developed a microsimulation model of adults aged 30 to 70 years in China and in India to compare these treatment strategies across a 10-year policy-planning horizon. In the model, a BTT strategy treating adults with a 10-year CVD event risk of ≥ 10% used similar financial resources but averted ≈ 5 million more disability-adjusted life-years in both China and India than a TTT approach based on current US guidelines. The hybrid strategy in the current World Health Organization guidelines produced no substantial benefits over TTT. BTT was more cost-effective at $205 to $272 per disability-adjusted life-year averted, which was $142 to $182 less per disability-adjusted life-year than the TTT or hybrid strategies. The comparative effectiveness of BTT was robust to uncertainties in CVD risk estimation and to variations in the age range analyzed, the BTT treatment threshold, and rates of treatment access, adherence, or concurrent statin therapy. CONCLUSIONS: In model-based analyses, a simple BTT strategy was more effective and cost-effective than TTT or hybrid strategies in reducing mortality.
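The "$/DALY averted" comparisons above follow from standard incremental cost-effectiveness arithmetic: the difference in cost between two strategies divided by the difference in DALYs averted. The inputs below are placeholders for illustration, not the modeled China/India figures:

```python
# Incremental cost-effectiveness arithmetic behind "$/DALY averted":
# difference in cost divided by difference in DALYs averted. The inputs
# are placeholders, not the modeled China/India figures.
def icer(cost_a, dalys_averted_a, cost_b, dalys_averted_b):
    """Incremental cost per additional DALY averted, strategy A vs B."""
    return (cost_a - cost_b) / (dalys_averted_a - dalys_averted_b)

# Hypothetical: strategy A spends $300,000 more and averts 1,500 more DALYs
print(icer(1_200_000, 10_000, 900_000, 8_500))  # 200.0 ($/DALY averted)
```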


Subjects
Cardiovascular Diseases/mortality , Cardiovascular Diseases/therapy , Computer Simulation , Goals , Hypertension/mortality , Hypertension/therapy , Adult , Aged , Blood Pressure/physiology , Cardiovascular Diseases/diagnosis , China/epidemiology , Cost-Benefit Analysis/methods , Female , Humans , Hypertension/diagnosis , India/epidemiology , Male , Middle Aged , Risk Factors
16.
PLoS Med ; 14(10): e1002410, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29040268

ABSTRACT

BACKGROUND: Intensive blood pressure (BP) treatment can avert cardiovascular disease (CVD) events but can cause some serious adverse events. We sought to develop and validate risk models for predicting absolute risk difference (increased risk or decreased risk) for CVD events and serious adverse events from intensive BP therapy. A secondary aim was to test if the statistical method of elastic net regularization would improve the estimation of risk models for predicting absolute risk difference, as compared to a traditional backwards variable selection approach. METHODS AND FINDINGS: Cox models were derived from SPRINT trial data and validated on ACCORD-BP trial data to estimate risk of CVD events and serious adverse events; the models included terms for intensive BP treatment and heterogeneous response to intensive treatment. The Cox models were then used to estimate the absolute reduction in probability of CVD events (benefit) and absolute increase in probability of serious adverse events (harm) for each individual from intensive treatment. We compared the method of elastic net regularization, which uses repeated internal cross-validation to select variables and estimate coefficients in the presence of collinearity, to a traditional backwards variable selection approach. Data from 9,069 SPRINT participants with complete data on covariates were utilized for model development, and data from 4,498 ACCORD-BP participants with complete data were utilized for model validation. Participants were exposed to intensive (goal systolic pressure < 120 mm Hg) versus standard (<140 mm Hg) treatment. Two composite primary outcome measures were evaluated: (i) CVD events/deaths (myocardial infarction, acute coronary syndrome, stroke, congestive heart failure, or CVD death), and (ii) serious adverse events (hypotension, syncope, electrolyte abnormalities, bradycardia, or acute kidney injury/failure). 
The model for CVD chosen through elastic net regularization included interaction terms suggesting that older age, black race, higher diastolic BP, and higher lipids were associated with greater CVD risk reduction benefits from intensive treatment, while current smoking was associated with smaller benefit. The model for serious adverse events chosen through elastic net regularization suggested that male sex, current smoking, statin use, elevated creatinine, and higher lipids were associated with greater risk of serious adverse events from intensive treatment. SPRINT participants in the highest predicted benefit subgroup had a number needed to treat (NNT) of 24 to prevent 1 CVD event/death over 5 years (absolute risk reduction [ARR] = 0.042, 95% CI: 0.018, 0.066; P = 0.001), those in the middle predicted benefit subgroup had an NNT of 76 (ARR = 0.013, 95% CI: -0.0001, 0.026; P = 0.053), and those in the lowest subgroup had no significant risk reduction (ARR = 0.006, 95% CI: -0.007, 0.018; P = 0.71). Those in the highest predicted harm subgroup had a number needed to harm (NNH) of 27 to induce 1 serious adverse event (absolute risk increase [ARI] = 0.038, 95% CI: 0.014, 0.061; P = 0.002), those in the middle predicted harm subgroup had an NNH of 41 (ARI = 0.025, 95% CI: 0.012, 0.038; P < 0.001), and those in the lowest subgroup had no significant risk increase (ARI = -0.007, 95% CI: -0.043, 0.030; P = 0.72). In ACCORD-BP, participants in the highest subgroup of predicted benefit had significant absolute CVD risk reduction, but the overall ACCORD-BP participant sample was skewed towards participants with less predicted benefit and more predicted risk than in SPRINT. The models chosen through traditional backwards selection identified absolute risk difference for CVD about as well as the elastic net models, but were poorer at correctly identifying absolute risk difference for serious adverse events.
A key limitation of the analysis is the limited sample size of the ACCORD-BP trial, which widened the confidence intervals for ARI among persons with type 2 diabetes. Additionally, because the study was an observational secondary data analysis, it cannot mechanistically explain the physiological relationships underlying the heterogeneous treatment effects captured by the models. CONCLUSIONS: We found that predictive models could help identify subgroups of participants in both SPRINT and ACCORD-BP who had lower versus higher ARRs in CVD events/deaths with intensive BP treatment, and participants who had lower versus higher ARIs in serious adverse events.
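The NNT and NNH figures reported above follow directly from the absolute risk differences: each is the reciprocal of the ARR or ARI, conventionally rounded up to a whole number of patients. A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def number_needed(absolute_risk_difference):
    """NNT (for a benefit) or NNH (for a harm): the reciprocal of the
    absolute risk difference, rounded up to a whole number of patients."""
    return math.ceil(1 / absolute_risk_difference)

# Highest predicted benefit subgroup in SPRINT: ARR = 0.042 -> NNT = 24
print(number_needed(0.042))  # 24
# Highest predicted harm subgroup: ARI = 0.038 -> NNH = 27
print(number_needed(0.038))  # 27
```

Small discrepancies with the published figures (e.g. the reported NNT of 76 for ARR = 0.013) presumably arise because the published NNTs were computed from unrounded risk differences.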


Subjects
Antihypertensive Agents/therapeutic use, Blood Pressure/drug effects, Hypertension/drug therapy, Adult, Aged, Aged, 80 and over, Female, Heart Failure/drug therapy, Humans, Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use, Hypertension/complications, Male, Middle Aged, Myocardial Infarction/drug therapy, Proportional Hazards Models, Risk Factors, Stroke/drug therapy, Stroke/prevention & control, Treatment Outcome
18.
Med Care ; 55(9): 864-870, 2017 09.
Article in English | MEDLINE | ID: mdl-28763374

ABSTRACT

BACKGROUND: Accurately estimating cardiovascular risk is fundamental to good decision-making in cardiovascular disease (CVD) prevention, but risk scores developed in one population often perform poorly in dissimilar populations. We sought to examine whether a large integrated health system can use its electronic health data to better predict individual patients' risk of developing CVD. METHODS: We created a cohort of all patients ages 45-80 who used Department of Veterans Affairs (VA) ambulatory care services in 2006 and had no history of CVD or heart failure and no loop diuretic use. Our outcome variable was new-onset CVD in 2007-2011. We then developed a series of recalibrated scores, including a fully refit "VA Risk Score-CVD (VARS-CVD)." We tested the different scores using standard measures of prediction quality. RESULTS: For the 1,512,092 patients in the study, the atherosclerotic cardiovascular disease (ASCVD) risk score had discrimination similar to that of the VARS-CVD (c-statistic of 0.66 in men and 0.73 in women), but the ASCVD model had poor calibration, predicting 63% more events than observed. Calibration was excellent in the fully recalibrated VARS-CVD tool, but the simpler recalibration techniques tested proved less reliable. CONCLUSIONS: We found that local electronic health record data can be used to estimate CVD risk better than an established risk score based on research populations. Recalibration improved estimates dramatically, and the type of recalibration was important. Such tools can also be integrated easily into a health system's electronic health record and can be updated more readily.
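The simplest recalibration technique of the kind tested above is recalibration-in-the-large: shift an external score's predictions by a constant on the logit scale until the mean predicted risk matches the locally observed event rate, without refitting any coefficients. A hedged sketch of that idea (function names and the toy numbers are illustrative, not from the paper):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

def recalibration_shift(pred_probs, observed_rate, lo=-10.0, hi=10.0, tol=1e-10):
    """Find the additive shift on the logit scale that makes the mean
    recalibrated prediction equal the locally observed event rate.
    Mean predicted risk is strictly increasing in the shift, so simple
    bisection converges."""
    def mean_pred(delta):
        return sum(expit(logit(p) + delta) for p in pred_probs) / len(pred_probs)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_pred(mid) < observed_rate:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A toy external score that, like ASCVD here, over-predicts:
# mean prediction 0.1625 versus a 10% observed event rate.
preds = [0.05, 0.10, 0.20, 0.30]
delta = recalibration_shift(preds, observed_rate=0.10)  # negative shift
recalibrated = [expit(logit(p) + delta) for p in preds]
```

A fully refit score like VARS-CVD goes further, re-estimating every coefficient on local data; the abstract's finding is that the full refit was the more reliable of the two approaches.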


Subjects
Cardiovascular Diseases/epidemiology, Electronic Health Records/statistics & numerical data, Health Status Indicators, Age Distribution, Aged, Atherosclerosis/epidemiology, Female, Humans, Male, Middle Aged, Risk Assessment, Risk Factors, Sex Distribution, Socioeconomic Factors, United States, United States Department of Veterans Affairs
19.
Stat Med ; 36(13): 2148-2160, 2017 06 15.
Article in English | MEDLINE | ID: mdl-28245528

ABSTRACT

Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
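The AUC bias the abstract describes is easy to reproduce. The simulation below (illustrative only, not the paper's correction procedure) flips outcome labels at fixed false negative and false positive rates and shows that the AUC computed against the misclassified labels is attenuated relative to the AUC against the true labels:

```python
import random

def auc(scores, labels):
    """AUC as the probability that a random case outscores a random
    control, counting ties as half (the Mann-Whitney formulation)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
n = 2000
true_y = [1 if random.random() < 0.2 else 0 for _ in range(n)]  # 20% prevalence
# An informative risk score: cases tend to score one SD higher than controls.
score = [random.gauss(1.0 if y else 0.0, 1.0) for y in true_y]
# Misclassify outcomes: 15% false negative rate, 5% false positive rate.
obs_y = [1 - y if random.random() < (0.15 if y else 0.05) else y
         for y in true_y]

true_auc = auc(score, true_y)  # AUC against the true labels
obs_auc = auc(score, obs_y)    # attenuated AUC against noisy labels
```

As the abstract notes, the size of the gap depends on the two misclassification rates and on prevalence; the paper's contribution is an adjusted ROC procedure that inverts this bias when those rates are known or estimable.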


Subjects
Models, Statistical, ROC Curve, Area Under Curve, Bias, Electronic Health Records, Hospitalization/statistics & numerical data, Humans, Risk Assessment/methods, United States, United States Department of Veterans Affairs/statistics & numerical data
20.
J Hand Surg Am ; 42(8): 623-629.e1, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28666673

ABSTRACT

PURPOSE: We sought to evaluate how often physicians who perform carpal tunnel release in the state of Michigan routinely request electrodiagnostic studies (EDS) or other diagnostic tests prior to an initial consultation and whether provider or practice characteristics influenced requirements for preconsultation diagnostic tests. METHODS: Through online data sources, we identified 356 providers in 261 practices throughout the state of Michigan with profiles confirming hand surgery practice or surgical treatment of carpal tunnel syndrome (CTS). We recorded American Society for Surgery of the Hand (ASSH) membership, teaching facility status, practice size, and primary specialty for each provider. Using a standardized telephone script, 219 providers were contacted by telephone to determine whether any diagnostic tests were needed before an appointment. Using multivariable logistic regression, we evaluated the relationship between the requirement for preconsultation testing and surgeon and practice characteristics. RESULTS: Among the 134 providers who were confirmed to perform carpal tunnel release, 57% (n = 76) required and 9% (n = 12) recommended a diagnostic test prior to the initial consultation. Of the 88 physicians who required/recommended testing, 85% (n = 75) requested EDS, 22% (n = 19) requested magnetic resonance imaging, 13% (n = 11) requested a computed tomography scan, and 9% (n = 8) requested an x-ray. Nineteen (22%) of the 88 surgeons who requested/recommended testing asked patients to obtain multiple studies. In the multivariable analysis, ASSH membership, size of practice, and teaching facility status did not have a significant relationship with the requirement for preconsultation testing. CONCLUSIONS: Most surgeons who treat CTS in the state of Michigan routinely request EDS before evaluation, rather than reserving the test for cases in which the diagnosis is unclear.
CLINICAL RELEVANCE: In the quest for high-value care, providers must consider whether the benefit of diagnostic tests for CTS likely outweighs the costs, inconvenience, and potential for treatment delay.


Subjects
Carpal Tunnel Syndrome/diagnosis, Electrodiagnosis, Practice Patterns, Physicians', Carpal Tunnel Syndrome/surgery, Clinical Decision-Making, Humans, Magnetic Resonance Imaging, Michigan, Patient Selection, Tomography, X-Ray Computed