ABSTRACT
OBJECTIVE: To compare changes in cognitive trajectories after stroke between younger (18-64) and older (65+) adults, accounting for pre-stroke cognitive trajectories. MATERIALS AND METHODS: Pooled cohort study using individual participant data from 3 US cohorts (1971-2019): the Atherosclerosis Risk In Communities Study (ARIC), the Framingham Offspring Study (FOS), and the REasons for Geographic And Racial Differences in Stroke Study (REGARDS). Linear mixed-effects models evaluated the association between age and the initial change (intercept) and rate of change (slope) in cognition after compared with before stroke. Outcomes were global cognition (primary), memory, and executive function. RESULTS: We included 1,292 participants with stroke: 197 younger (47.2% female, 32.5% Black race) and 1,095 older (50.2% female, 46.4% Black race). Median (IQR) age at stroke was 59.7 (56.6-61.7) years in the younger group and 75.2 (70.5-80.2) years in the older group. Compared with younger participants, older participants had greater initial declines after stroke in global cognition (-1.69 points [95% CI, -2.82 to -0.55] greater), memory (-1.05 points [95% CI, -1.92 to -0.17] greater), and executive function (-3.72 points [95% CI, -5.23 to -2.21] greater). Older age was associated with faster declines after compared with before stroke in global cognition (-0.18 points per year [95% CI, -0.36 to -0.01] faster) and executive function (-0.16 points per year [95% CI, -0.26 to -0.06] for every 10 years of higher age), but not memory (-0.006 [95% CI, -0.15 to 0.14]). CONCLUSION: Older age was associated with greater post-stroke cognitive declines, accounting for differences in pre-stroke cognitive trajectories between older and younger adults.
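As an illustration of the modeling approach, here is a minimal linear mixed-effects sketch using statsmodels; the file name, column names (id, cognition, years, post_stroke, older), and coding are hypothetical stand-ins for the study's actual variables.

```python
# Sketch of a linear mixed-effects model for pre- vs post-stroke cognitive
# trajectories, in the spirit of the analysis above. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pooled_cohort.csv")  # hypothetical pooled ARIC/FOS/REGARDS file

# post_stroke: 0 before / 1 after stroke; older: 0 = ages 18-64, 1 = 65+.
# The post_stroke main effect captures the acute change in level (intercept);
# years:post_stroke captures the change in slope after vs before stroke;
# interactions with `older` test whether those changes differ by age group.
model = smf.mixedlm(
    "cognition ~ years * post_stroke * older",
    data=df,
    groups=df["id"],
    re_formula="~years",  # random intercept and slope per participant
)
result = model.fit()
print(result.summary())
```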
ABSTRACT
BACKGROUND: Few performance measures assess presurgical value (quality and utilization). OBJECTIVES: Using carpal tunnel syndrome (CTS) as a case study, we aimed to (1) develop a model to evaluate presurgical quality and utilization and (2) identify opportunities for value improvement. RESEARCH DESIGN: A retrospective cohort study using Veterans Affairs (VA) national administrative data. SUBJECTS: Patients who were evaluated in a VA primary care clinic on at least 1 occasion for CTS and received carpal tunnel release over a 7-year period. MEASURES: We modeled facility-level performance on 2 outcomes, surgical delay (a marker of quality) and number of presurgical encounters (utilization) for CTS, and examined associations between patient, facility, and care process variables and performance. RESULTS: Among 41,912 Veterans undergoing carpal tunnel release at 127 VA medical centers, the median facility-level predicted probability of surgical delay was 48%, with 16 (13%) facilities having significantly less delay than the median and 13 (10%) facilities having greater delay. The median facility-level predicted number of presurgical encounters was 8.8 visits, with 22 (17%) facilities having significantly fewer encounters and 22 (17%) facilities having more. Care processes had a stronger association with both outcomes than the structural variables included in the models. Processes associated with the greatest deviations in predicted delay and utilization included receipt of repeat electrodiagnostic testing, use of 2 or more nonoperative treatments, and community referral outside of the VA. CONCLUSIONS: Using CTS as a test case, this study demonstrates the potential to assess presurgical value and identify modifiable care processes associated with presurgical delay and utilization performance.
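Facility-level performance estimates of this kind rely on hierarchical modeling. The sketch below illustrates the underlying shrinkage idea with a simple empirical-Bayes (beta-binomial) estimator on simulated facility data; it is a simplified stand-in for the study's risk-adjusted models, and every number is invented.

```python
# Empirical-Bayes shrinkage of facility-level surgical-delay rates toward the
# grand mean; a simplified stand-in for risk-adjusted hierarchical profiling.
import numpy as np

rng = np.random.default_rng(0)
n_cases = rng.integers(50, 800, size=127)      # CTR volume per facility (fake)
true_p = rng.beta(8, 8, size=127)              # latent delay probabilities
delays = rng.binomial(n_cases, true_p)         # observed delayed cases

p_hat = delays / n_cases
grand = delays.sum() / n_cases.sum()

# Method-of-moments estimate of between-facility variance, then a prior
# "sample size" controlling how hard small facilities shrink to the mean.
between_var = max(p_hat.var() - (p_hat * (1 - p_hat) / n_cases).mean(), 1e-6)
prior_n = grand * (1 - grand) / between_var

shrunk = (delays + prior_n * grand) / (n_cases + prior_n)
print("least delay:", shrunk.min().round(3), "most delay:", shrunk.max().round(3))
```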
Subject(s)
Carpal Tunnel Syndrome, Humans, Carpal Tunnel Syndrome/diagnosis, Carpal Tunnel Syndrome/surgery, Retrospective Studies
ABSTRACT
Preventing chronic diseases is an essential aspect of medical care. To prevent chronic diseases, physicians monitor patients' risk factors and prescribe medication as needed. The optimal monitoring policy depends on the patient's risk factors and demographics: monitoring too frequently may be unnecessary and costly, whereas monitoring too infrequently means the patient may forgo needed treatment and experience adverse events related to the disease. We propose a finite-horizon, finite-state Markov decision process to define monitoring policies. To build our Markov decision process, we estimate stochastic models based on longitudinal observational data from electronic health records for a large cohort of patients seen in the national U.S. Veterans Affairs health system. We use our model to study policies for whether or when to assess the need for cholesterol-lowering medications, and we further use it to investigate the role of gender and race in optimal monitoring policies.
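A minimal backward-induction sketch of a finite-horizon, finite-state Markov decision process of this kind follows; the states, actions, transition probabilities, and rewards are illustrative placeholders, not estimates from the paper.

```python
# Backward induction (dynamic programming) for a small finite-horizon MDP.
import numpy as np

T = 10                       # planning horizon, e.g., annual decision epochs
S = 3                        # states: 0 low, 1 medium, 2 high cholesterol risk
A = 2                        # actions: 0 = do not monitor, 1 = monitor/test

# P[a, s, s']: transition probabilities under each action (illustrative).
P = np.array([
    [[0.85, 0.10, 0.05], [0.10, 0.75, 0.15], [0.05, 0.15, 0.80]],  # wait
    [[0.90, 0.08, 0.02], [0.15, 0.75, 0.10], [0.10, 0.25, 0.65]],  # monitor
])
# r[a, s]: immediate reward; monitoring costs a little but averts harm in
# high-risk states. Numbers are placeholders.
r = np.array([
    [0.0, -0.5, -2.0],       # wait: adverse-event penalty grows with state
    [-0.2, -0.4, -1.0],      # monitor: testing cost, reduced penalty
])

V = np.zeros(S)              # terminal value
policy = np.zeros((T, S), dtype=int)
for t in reversed(range(T)):             # backward induction over epochs
    Q = r + P @ V                        # Q[a, s] = r[a, s] + E[V(s')]
    policy[t] = Q.argmax(axis=0)         # best action in each state at time t
    V = Q.max(axis=0)
print(policy)                # optimal monitoring action by epoch and state
```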
Subject(s)
Anticholesteremic Agents, Cardiovascular Diseases, Humans, Cardiovascular Diseases/prevention & control, Risk Factors
ABSTRACT
BACKGROUND: We previously reported that a median of 5.6 years of intensive as compared with standard glucose lowering in 1791 military veterans with type 2 diabetes resulted in a risk of major cardiovascular events that was significantly lower (by 17%) after a total of 10 years of combined intervention and observational follow-up. We now report the full 15-year follow-up. METHODS: We observationally followed enrolled participants (complete cohort) after the conclusion of the original clinical trial by using central databases to identify cardiovascular events, hospitalizations, and deaths. Participants were asked whether they would be willing to provide additional data by means of surveys and chart reviews (survey cohort). The prespecified primary outcome was a composite of major cardiovascular events, including nonfatal myocardial infarction, nonfatal stroke, new or worsening congestive heart failure, amputation for ischemic gangrene, and death from cardiovascular causes. Death from any cause was a prespecified secondary outcome. RESULTS: There were 1655 participants in the complete cohort and 1391 in the survey cohort. During the trial (which originally enrolled 1791 participants), the separation of the glycated hemoglobin curves between the intensive-therapy group (892 participants) and the standard-therapy group (899 participants) averaged 1.5 percentage points, and this difference declined to 0.2 to 0.3 percentage points by 3 years after the trial ended. Over a period of 15 years of follow-up (active treatment plus post-trial observation), the risks of major cardiovascular events or death were not lower in the intensive-therapy group than in the standard-therapy group (hazard ratio for primary outcome, 0.91; 95% confidence interval [CI], 0.78 to 1.06; P = 0.23; hazard ratio for death, 1.02; 95% CI, 0.88 to 1.18). The risk of major cardiovascular disease outcomes was reduced, however, during an extended interval of separation of the glycated hemoglobin curves (hazard ratio, 0.83; 95% CI, 0.70 to 0.99), but this benefit did not continue after equalization of the glycated hemoglobin levels (hazard ratio, 1.26; 95% CI, 0.90 to 1.75). CONCLUSIONS: Participants with type 2 diabetes who had been randomly assigned to intensive glucose control for 5.6 years had a lower risk of cardiovascular events than those who received standard therapy only during the prolonged period in which the glycated hemoglobin curves were separated. There was no evidence of a legacy effect or a mortality benefit with intensive glucose control. (Funded by the VA Cooperative Studies Program; VADT ClinicalTrials.gov number, NCT00032487.).
Subject(s)
Blood Glucose/analysis, Cardiovascular Diseases/prevention & control, Diabetes Mellitus, Type 2/drug therapy, Hypoglycemic Agents/administration & dosage, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/mortality, Diabetes Mellitus, Type 2/blood, Female, Follow-Up Studies, Humans, Hyperglycemia/prevention & control, Male, Middle Aged, Quality of Life, Randomized Controlled Trials as Topic, Veterans
ABSTRACT
[This corrects the article DOI: 10.1371/journal.pmed.1002410.].
ABSTRACT
BACKGROUND: The US Department of Veterans Affairs (VA) enacted policies offering Veterans care in the community, aiming to address access challenges. However, the impact of receiving community care on wait times for Veterans undergoing surgical care is poorly understood. OBJECTIVES: To compare wait times for surgery between Veterans with carpal tunnel syndrome who receive VA care plus community care (mixed care) and those who receive care solely within the VA (VA-only). RESEARCH DESIGN: Retrospective cohort study. SUBJECTS: Veterans undergoing carpal tunnel release (CTR) between January 1, 2010 and December 31, 2016. MEASURES: Our primary outcome was time from primary care physician (PCP) referral to CTR. RESULTS: Of the 29,242 Veterans undergoing CTR, 23,330 (79.8%) received VA-only care and 5,912 (20.2%) received mixed care. Veterans receiving mixed care had a significantly longer time from PCP referral to CTR (median mixed care: 378 days; median VA-only care: 176 days; P<0.001). After controlling for patient and facility covariates, mixed care was associated with a 37% lower hazard of undergoing CTR at any given time (adjusted hazard ratio, 0.63; 95% confidence interval, 0.61-0.65), that is, a longer time from PCP referral to surgery. Each additional service provided in the community was associated with a 23% lower hazard of surgery (adjusted hazard ratio, 0.77; 95% confidence interval, 0.76-0.78), corresponding to further delay. CONCLUSIONS: VA-only care was associated with a shorter time to surgery compared with mixed care, with additional delays for each service received in the community. As more Veterans seek community care, strategies are needed to identify and mitigate sources of delay across the spectrum of care between referral and definitive treatment.
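A sketch of the kind of Cox proportional-hazards comparison described above, using the lifelines library; the file and column names are hypothetical, and the published model adjusted for additional patient and facility covariates.

```python
# Cox model for time from PCP referral to carpal tunnel release, comparing
# mixed care with VA-only care. Column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ctr_referrals.csv")  # hypothetical analytic file
# days_to_ctr: time from referral; received_ctr: 1 if surgery observed;
# mixed_care: 1 if any community care; age/female: example covariates.
cph = CoxPHFitter()
cph.fit(
    df[["days_to_ctr", "received_ctr", "mixed_care", "age", "female"]],
    duration_col="days_to_ctr",
    event_col="received_ctr",
)
cph.print_summary()
# A hazard ratio below 1 for mixed_care means a lower "hazard" of undergoing
# surgery at any given time, i.e., a longer wait for CTR.
```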
Subject(s)
Carpal Tunnel Syndrome/surgery, Community Health Services/statistics & numerical data, Referral and Consultation/statistics & numerical data, Time-to-Treatment/statistics & numerical data, Veterans/statistics & numerical data, Aged, Community Health Services/legislation & jurisprudence, Female, Health Services Accessibility/legislation & jurisprudence, Health Services Accessibility/statistics & numerical data, Humans, Male, Middle Aged, Retrospective Studies, United States, United States Department of Veterans Affairs, Veterans Health/legislation & jurisprudence, Veterans Health/statistics & numerical data
ABSTRACT
The PATH (Predictive Approaches to Treatment effect Heterogeneity) Statement was developed to promote the conduct of, and provide guidance for, predictive analyses of heterogeneity of treatment effects (HTE) in clinical trials. The goal of predictive HTE analysis is to provide patient-centered estimates of outcome risk with versus without the intervention, taking into account all relevant patient attributes simultaneously, to support more personalized clinical decision making than can be made on the basis of only an overall average treatment effect. The authors distinguished 2 categories of predictive HTE approaches (a "risk-modeling" and an "effect-modeling" approach) and developed 4 sets of guidance statements: criteria to determine when risk-modeling approaches are likely to identify clinically meaningful HTE, methodological aspects of risk-modeling methods, considerations for translation to clinical practice, and considerations and caveats in the use of effect-modeling approaches. They discuss limitations of these methods and enumerate research priorities for advancing methods designed to generate more personalized evidence. This explanation and elaboration document describes the intent and rationale of each recommendation and discusses related analytic considerations, caveats, and reservations.
Subject(s)
Clinical Decision-Making, Randomized Controlled Trials as Topic/standards, Treatment Outcome, Clinical Decision Rules, Clinical Decision-Making/methods, Evidence-Based Medicine/standards, Humans, Individuality, Models, Statistical, Risk Assessment
ABSTRACT
Heterogeneity of treatment effect (HTE) refers to the nonrandom variation in the magnitude or direction of a treatment effect across levels of a covariate, as measured on a selected scale, against a clinical outcome. In randomized controlled trials (RCTs), HTE is typically examined through a subgroup analysis that contrasts effects in groups of patients defined "1 variable at a time" (for example, male vs. female or old vs. young). The authors of this statement present guidance on an alternative approach to HTE analysis, "predictive HTE analysis." The goal of predictive HTE analysis is to provide patient-centered estimates of outcome risks with versus without the intervention, taking into account all relevant patient attributes simultaneously. The PATH (Predictive Approaches to Treatment effect Heterogeneity) Statement was developed using a multidisciplinary technical expert panel, targeted literature reviews, simulations to characterize potential problems with predictive approaches, and a deliberative process engaging the expert panel. The authors distinguish 2 categories of predictive HTE approaches: a "risk-modeling" approach, wherein a multivariable model predicts the risk for an outcome and is applied to disaggregate patients within RCTs to define risk-based variation in benefit, and an "effect-modeling" approach, wherein a model is developed on RCT data by incorporating a term for treatment assignment and interactions between treatment and baseline covariates. Both approaches can be used to predict differential absolute treatment effects, the most relevant scale for clinical decision making. The authors developed 4 sets of guidance: criteria to determine when risk-modeling approaches are likely to identify clinically important HTE, methodological aspects of risk-modeling methods, considerations for translation to clinical practice, and considerations and caveats in the use of effect-modeling approaches. The PATH Statement, together with its explanation and elaboration document, may guide future analyses and reporting of RCTs.
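A minimal sketch of the risk-modeling approach in Python: fit a multivariable outcome-risk model, stratify trial participants by predicted risk, and compare absolute treatment effects across strata. The data file, column names, and covariates are hypothetical; in practice the PATH statement favors risk models developed externally or blinded to treatment assignment to avoid optimism.

```python
# Risk-modeling approach to predictive HTE analysis (illustrative sketch).
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("trial.csv")  # hypothetical: outcome, treated, covariates
covars = ["age", "sbp", "diabetes", "smoker"]

# 1. Multivariable risk model for the outcome (ignoring treatment assignment).
risk_model = LogisticRegression(max_iter=1000).fit(df[covars], df["outcome"])
df["risk"] = risk_model.predict_proba(df[covars])[:, 1]
df["risk_q"] = pd.qcut(df["risk"], 4, labels=False)  # risk quartiles

# 2. Absolute risk reduction (control minus treated event rate) per stratum,
# the absolute scale most relevant for clinical decision making.
for q, g in df.groupby("risk_q"):
    arr = (g.loc[g.treated == 0, "outcome"].mean()
           - g.loc[g.treated == 1, "outcome"].mean())
    print(f"risk quartile {q}: ARR = {arr:.3f}")
```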
Asunto(s)
Ensayos Clínicos Controlados Aleatorios como Asunto/normas , Resultado del Tratamiento , Reglas de Decisión Clínica , Toma de Decisiones Clínicas , Medicina Basada en la Evidencia/normas , Humanos , Individualidad , Modelos Estadísticos , Medición de RiesgoRESUMEN
PURPOSE: The U.S. Department of Veterans Affairs (VA) health care system monitors time from referral to specialist visit. We compared wait times for carpal tunnel release (CTR) at a VA hospital and its academic affiliate (AA). METHODS: We selected patients who underwent CTR at a VA hospital and its academic affiliate (2010-2015). We analyzed time from primary care physician (PCP) referral to CTR, subdivided into time from PCP referral to surgical consultation and time from surgical consultation to CTR. Electrodiagnostic testing (EDS) was categorized in relation to surgical consultation (prereferral vs postreferral). Multivariable Cox proportional hazards models were used to examine associations between clinical variables and surgical location. RESULTS: Between 2010 and 2015, VA patients had a shorter median time from PCP referral to CTR (VA: 168 days; AA: 410 days) and a shorter time from PCP referral to surgical consultation (VA: 43 days; AA: 191 days), but a longer time from surgical consultation to CTR (VA: 98 days; AA: 55 days). In multivariable models, the VA was associated with a 35% shorter time to CTR (AA hazard ratio [HR], 0.65; 95% confidence interval [CI], 0.52-0.82) and a 75% shorter time to surgical consultation (AA HR, 0.25; 95% CI, 0.20-0.30). Receiving both prereferral and postreferral EDS was associated with an almost 2-fold longer time to CTR (HR, 0.49; 95% CI, 0.36-0.67). CONCLUSIONS: The VA was associated with a shorter overall time to CTR than its AA. However, the VA policy of prioritizing time from referral to surgical consultation may not optimally incentivize time to surgery. Repeat EDS was associated with longer wait times in both systems. CLINICAL RELEVANCE: Given differences in where delays occur in each health care system, initiatives to improve efficiency will require targeting the appropriate sources of preoperative delay. Judicious use of EDS may be one avenue to decrease wait times in both systems.
Subject(s)
Carpal Tunnel Syndrome, Carpal Tunnel Syndrome/surgery, Delivery of Health Care, Humans, Operative Time, Private Sector, United States, United States Department of Veterans Affairs
ABSTRACT
Policymakers and researchers are strongly encouraging clinicians to support patient autonomy through shared decision-making (SDM). In setting policies for clinical care, decision-makers need to understand that current models of SDM have tended to focus on major decisions (e.g., surgeries and chemotherapy) and less on everyday primary care decisions. Most decisions in primary care are substantive everyday decisions: intermediate-stakes decisions that occur dozens of times every day, yet are non-trivial for patients, such as whether routine mammography should start at age 40, 45, or 50. Expectations that busy clinicians use current models of SDM (here referred to as "detailed" SDM) for these decisions can feel overwhelming to clinicians. Evidence indicates that detailed SDM is simply not realistic for most of these decisions, and without a feasible alternative, clinicians usually default to a decision-making approach with little to no personalization. We propose, for discussion and refinement, a compromise approach to personalizing these decisions ("everyday SDM"). Everyday SDM is based on a feasible process for supporting patient autonomy that also allows clinicians to continue being respectful health advocates for their patients. We propose that alternatives to detailed SDM are needed to make progress toward more patient-centered care.
Subject(s)
Decision Making, Patient Participation, Adult, Decision Making, Shared, Humans, Patient-Centered Care, Primary Health Care
ABSTRACT
BACKGROUND: Most randomized controlled trials (RCTs) and meta-analyses of RCTs examine effect modification (also called a subgroup effect or interaction), in which the effect of an intervention varies by another variable (e.g., age or disease severity). Assessing the credibility of an apparent effect modification presents challenges; therefore, we developed the Instrument for assessing the Credibility of Effect Modification Analyses (ICEMAN). METHODS: To develop ICEMAN, we established a detailed concept; identified candidate credibility considerations in a systematic survey of the literature; together with experts, performed a consensus study to identify key considerations and develop them into instrument items; and refined the instrument based on feedback from trial investigators, systematic review authors and journal editors, who applied drafts of ICEMAN to published claims of effect modification. RESULTS: The final instrument consists of a set of preliminary considerations, core questions (5 for RCTs, 8 for meta-analyses) with 4 response options, 1 optional item for additional considerations and a rating of credibility on a visual analogue scale ranging from very low to high. An accompanying manual provides rationales, detailed instructions and examples from the literature. Seventeen potential users tested ICEMAN; their suggestions improved the user-friendliness of the instrument. INTERPRETATION: The Instrument for assessing the Credibility of Effect Modification Analyses offers explicit guidance for investigators, systematic reviewers, journal editors and others considering making a claim of effect modification or interpreting a claim made by others.
Subject(s)
Meta-Analysis as Topic, Randomized Controlled Trials as Topic, Research Design/standards, Consensus, Humans
ABSTRACT
BACKGROUND: Carotid endarterectomy (CEA) is routinely performed for asymptomatic carotid stenosis, yet its average net benefit is small. Risk stratification may identify high-risk patients who would clearly benefit from treatment. METHODS: Retrospective cohort study using data from the Asymptomatic Carotid Atherosclerosis Study (ACAS). Risk factors for poor outcomes were included in backward and forward selection procedures to develop baseline risk models estimating the risk of non-perioperative ipsilateral stroke/TIA. Baseline risk was estimated for all ACAS participants and externally validated using data from the Atherosclerosis Risk in Communities (ARIC) study. Baseline risk was then included in a treatment risk model that explored the interaction of baseline risk and treatment status (CEA vs. medical management) on the patient-centered outcome of any stroke or death, including perioperative events. RESULTS: Three baseline risk factors (BMI, creatinine, and degree of contralateral stenosis) were selected into our baseline risk model (c-statistic 0.59 [95% CI 0.54-0.65]). The model stratified absolute risk between the lowest and highest risk quintiles (5.1% vs. 12.5%). External validation in ARIC found similar predictiveness (c-statistic 0.58 [0.49-0.67]) but poor calibration across the risk spectrum. In the treatment risk model, CEA was superior to medical management across the spectrum of baseline risk, and the magnitude of the treatment effect varied widely between the lowest and highest absolute risk quintiles (3.2% vs. 10.7%). CONCLUSION: Even modestly predictive risk stratification tools have the potential to meaningfully influence clinical decision making in asymptomatic carotid disease. However, our ACAS model requires target-population recalibration prior to clinical application.
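A sketch of a backward-selection baseline risk model with a c-statistic, using a logistic stand-in for the paper's survival models; the data file, candidate variables, and p-to-remove threshold are illustrative assumptions.

```python
# Backward selection of a baseline risk model, then discrimination (c-statistic).
import statsmodels.api as sm
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("acas.csv")  # hypothetical analytic file
candidates = ["bmi", "creatinine", "contralateral_stenosis", "age", "smoker"]

vars_in = candidates[:]
while len(vars_in) > 1:
    fit = sm.Logit(df["stroke_tia"], sm.add_constant(df[vars_in])).fit(disp=0)
    worst = fit.pvalues.drop("const").idxmax()
    if fit.pvalues[worst] < 0.157:   # keep all remaining terms; threshold arbitrary
        break
    vars_in.remove(worst)            # drop the least significant term

final = sm.Logit(df["stroke_tia"], sm.add_constant(df[vars_in])).fit(disp=0)
pred = final.predict(sm.add_constant(df[vars_in]))
print("selected:", vars_in)
print("c-statistic:", roc_auc_score(df["stroke_tia"], pred))
```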
Subject(s)
Carotid Stenosis/surgery, Endarterectomy, Carotid, Risk Assessment/methods, Aged, Asymptomatic Diseases, Clinical Decision-Making/methods, Female, Humans, Male, Middle Aged, Retrospective Studies, Risk Factors, Treatment Outcome
ABSTRACT
Background: Many health systems are exploring how to implement low-dose computed tomography (LDCT) screening programs that are effective and patient-centered. Objective: To examine factors that influence when LDCT screening is preference-sensitive. Design: State-transition microsimulation model. Data Sources: Two large randomized trials, published decision analyses, and the SEER (Surveillance, Epidemiology, and End Results) cancer registry. Target Population: U.S.-representative sample of simulated patients meeting current U.S. Preventive Services Task Force criteria for screening eligibility. Time Horizon: Lifetime. Perspective: Individual. Intervention: LDCT screening annually for 3 years. Outcome Measures: Lifetime quality-adjusted life-year gains and reduction in lung cancer mortality. To examine the effect of preferences on net benefit, disutilities (the "degree of dislike") quantifying the burden of screening and follow-up were varied across a likely range. The effect of varying the rate of false-positive screening results and overdiagnosis associated with screening was also examined. Results of Base-Case Analysis: Moderate differences in preferences about the downsides of LDCT screening influenced whether screening was appropriate for eligible persons with annual lung cancer risk less than 0.3% or life expectancy less than 10.5 years. For higher-risk eligible persons with longer life expectancy (roughly 50% of the study population), the benefits of LDCT screening overcame even highly negative views about screening and its downsides. Results of Sensitivity Analysis: Rates of false-positive findings and overdiagnosed lung cancer were not highly influential. Limitation: The quantitative thresholds that were identified may vary depending on the structure of the microsimulation model. Conclusion: Identifying circumstances in which LDCT screening is more versus less preference-sensitive may help clinicians personalize their screening discussions, tailoring to both preferences and clinical benefit. Primary Funding Source: None.
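A toy state-transition-style sketch showing how a per-person screening disutility can flip the net QALY balance at low annual cancer risk; every parameter below is an invented placeholder, not an input from the published microsimulation.

```python
# Toy simulation: QALY balance of LDCT screening as disutility and risk vary.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
annual_cancer_risk = 0.003       # vary around the ~0.3% threshold noted above
life_expectancy = 12.0           # years; vary around the ~10.5-year threshold
mortality_reduction = 0.20       # relative lung-cancer mortality benefit (fake)
screen_disutility = 0.01         # QALYs lost per person over 3 annual screens

develops_cancer = rng.random(N) < 1 - (1 - annual_cancer_risk) ** 3
dies_unscreened = develops_cancer & (rng.random(N) < 0.6)
dies_screened = dies_unscreened & (rng.random(N) > mortality_reduction)

# Crude accounting: a lung-cancer death costs half of remaining life expectancy.
qaly_no_screen = N * life_expectancy - dies_unscreened.sum() * life_expectancy / 2
qaly_screen = (N * life_expectancy - dies_screened.sum() * life_expectancy / 2
               - N * screen_disutility)
print("net QALYs from screening:", qaly_screen - qaly_no_screen)
```

Re-running with lower annual_cancer_risk or higher screen_disutility shows the net benefit shrinking toward zero, which is the preference-sensitivity phenomenon the abstract describes.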
Subject(s)
Early Detection of Cancer/methods, Lung Neoplasms/diagnosis, Aged, Aged, 80 and over, Computer Simulation, Female, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/mortality, Male, Markov Chains, Middle Aged, Outcome and Process Assessment, Health Care, Quality-Adjusted Life Years, Risk Assessment, Risk Factors, SEER Program, Tomography, X-Ray Computed
ABSTRACT
Background: The 2013 pooled cohort equations (PCEs) are central to prevention guidelines for cardiovascular disease (CVD) but can misestimate CVD risk. Objective: To improve the clinical accuracy of CVD risk prediction by revising the 2013 PCEs using newer data and statistical methods. Design: Derivation and validation of risk equations. Setting: Population-based. Participants: 26,689 adults aged 40 to 79 years without prior CVD from 6 U.S. cohorts. Measurements: Nonfatal myocardial infarction, death from coronary heart disease, or fatal or nonfatal stroke. Results: The 2013 PCEs overestimated 10-year risk for atherosclerotic CVD by an average of 20% across risk groups. Misestimation of risk was particularly prominent among black adults, of whom 3.9 million (33% of eligible black persons) had extreme risk estimates (<70% or >250% those of white adults with otherwise-identical risk factor values). Updating these equations improved accuracy among all race and sex subgroups. Approximately 11.8 million U.S. adults previously labeled high-risk (10-year risk ≥7.5%) by the 2013 PCEs would be relabeled lower-risk by the updated equations. Limitations: Updating the 2013 PCEs with data from modern cohorts reduced the number of persons considered to be at high risk. Clinicians and patients should consider the potential benefits and harms of reducing the number of persons recommended for aspirin, blood pressure-lowering, or statin therapy. Our findings also indicate that risk equations will generally become outdated over time and require routine updating. Conclusion: Revised PCEs can improve the accuracy of CVD risk estimates. Primary Funding Source: National Institutes of Health.
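Pooled-cohort-style equations compute 10-year risk from a linear predictor and a baseline survival term. The sketch below shows that functional form with placeholder coefficients; these are not the published 2013 or revised PCE values.

```python
# Functional form of a PCE-style Cox risk equation (placeholder coefficients).
import numpy as np

def ten_year_risk(lp, mean_lp, s0=0.95):
    """Cox-model risk: 1 - S0(10) ** exp(lp - mean_lp)."""
    return 1.0 - s0 ** np.exp(lp - mean_lp)

# Hypothetical linear predictor for one patient (log-scale covariate effects):
lp = 0.65 * np.log(55) + 0.30 * np.log(130) + 0.25 * 1   # age, SBP, smoker
print(f"estimated 10-year ASCVD risk: {ten_year_risk(lp, mean_lp=4.0):.1%}")

# Recalibrating an overestimating equation amounts to refitting the baseline
# survival s0 (and, if needed, a calibration slope) on contemporary cohort data.
```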
Subject(s)
Coronary Artery Disease/etiology, Risk Assessment/methods, Adult, Black or African American/statistics & numerical data, Aged, Coronary Artery Disease/epidemiology, Coronary Artery Disease/mortality, Female, Humans, Male, Middle Aged, Models, Statistical, Risk Factors, United States/epidemiology, White People/statistics & numerical data
ABSTRACT
AIMS/HYPOTHESIS: We conducted an analysis of data collected during the Veterans Affairs Diabetes Trial (VADT) and the follow-up study (VADT-F) to determine whether intensive (INT) compared with standard (STD) glycaemic control during the VADT resulted in better long-term kidney outcomes. METHODS: The VADT randomly assigned 1791 veterans from 20 Veterans Affairs (VA) medical centres who had type 2 diabetes mellitus and a mean HbA1c of 9.4 ± 2% (79.2 mmol/mol) at baseline to receive either INT or STD glucose control for a median of 5.6 years (randomisation December 2000 to May 2003; intervention ending in May 2008). After the trial, participants received routine care through their own physicians within the VA. This is an interim analysis of the VADT-F (June 2008 to December 2013). We collected data using VA and national databases and report renal outcomes based on serum creatinine, eGFR and urine albumin-to-creatinine ratio (ACR) in 1033 people who provided informed consent to participate in the VADT-F. RESULTS: By the end of the VADT-F, significantly more people who received INT treatment during the VADT maintained an eGFR >60 ml/min per 1.73 m² (OR 1.34 [95% CI 1.05, 1.71], p = 0.02). This benefit was most evident in those classified at the beginning of the VADT as at moderate risk (INT vs STD, RR 1.3, p = 0.03) or high risk (RR 2.3, p = 0.04) of chronic kidney disease under the Kidney Disease: Improving Global Outcomes (KDIGO) CKD classification. At the end of the VADT-F, significantly more people from the INT group had improved to a low KDIGO risk category (RR 6.1, p = 0.002). During the VADT-F there were no significant differences between INT and STD in average HbA1c, blood pressure or lipid levels. CONCLUSIONS/INTERPRETATION: After just over 11 years of follow-up, the odds of maintaining an eGFR >60 ml/min per 1.73 m² and of improving the KDIGO risk category were 34% greater in individuals with type 2 diabetes who had received INT for a median of 5.6 years. VADT ClinicalTrials.gov number: NCT00032487.
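A small helper assigning the KDIGO CKD risk category from eGFR and urine ACR, following the standard KDIGO heat-map categories; the mapping is written from the published grid but should be treated as a sketch to verify against the guideline.

```python
# KDIGO CKD risk category from eGFR (ml/min per 1.73 m^2) and ACR (mg/g).
def kdigo_risk(egfr, acr):
    """Return the KDIGO heat-map risk category as a string."""
    g = 0 if egfr >= 60 else 1 if egfr >= 45 else 2 if egfr >= 30 else 3
    a = 0 if acr < 30 else 1 if acr <= 300 else 2
    grid = [  # rows: G1-G2, G3a, G3b, G4-G5; cols: A1, A2, A3
        ["low",       "moderate",  "high"],
        ["moderate",  "high",      "very high"],
        ["high",      "very high", "very high"],
        ["very high", "very high", "very high"],
    ]
    return grid[g][a]

print(kdigo_risk(72, 10))   # -> "low"
print(kdigo_risk(50, 120))  # -> "high"
```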
Subject(s)
Diabetes Mellitus, Type 2/drug therapy, Hypoglycemic Agents/therapeutic use, Kidney/physiopathology, Blood Glucose/drug effects, Creatinine/urine, Diabetes Mellitus, Type 2/blood, Diabetic Retinopathy/drug therapy, Diabetic Retinopathy/metabolism, Female, Follow-Up Studies, Glomerular Filtration Rate/drug effects, Glycated Hemoglobin/metabolism, Humans, Insulin/therapeutic use, Kidney/drug effects, Kidney/metabolism, Male, Serum Albumin, Human/urine, Treatment Outcome, Veterans
ABSTRACT
OBJECTIVE: Our objective was to understand the reliability of profiling surgeons on average health care spending. SUMMARY OF BACKGROUND DATA: Under its Merit-based Incentive Payment System (MIPS), Medicare will measure surgeon spending and tie performance to payments. Although the intent of this cost-profiling is to reward low-cost surgeons, it is unknown whether surgeons can be accurately distinguished from their peers. METHODS: We used Michigan Medicare and commercial payer claims data to construct episodes of surgical care and to calculate average annual spending for individual surgeons. We then estimated the "reliability" (ie, the ability to distinguish surgeons from their peers) of these cost-profiles and the case-volume that surgeons would need in order to achieve high reliability [intraclass correlation coefficient (ICC) >0.8]. Finally, we calculated the reliability of 2 alternative methods of profiling surgeons (ie, using multiple years of data and grouping surgeons by hospital). RESULTS: We found that annual cost-profiles of individual surgeons had poor reliability; the ICC ranged from <0.001 for CABG to 0.061 for cholecystectomy. Few surgeons in the state of Michigan had sufficient case-volume to be reliably compared; only 1% met the minimum yearly case-volume. Finally, we found that the reliability of cost-profiles can be improved by measuring spending at the hospital level and/or by incorporating additional years of data. CONCLUSION: These findings suggest that the Medicare program should measure surgeon spending at a group level or incorporate multiple years of data to reduce misclassification of surgeon performance in the MIPS program.
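A sketch of the reliability calculation: estimate between- and within-surgeon variance components, compute the ICC for a surgeon's mean cost at a given case-volume, and invert the formula to get the volume needed for ICC 0.8. The file and column names are hypothetical.

```python
# Reliability (ICC) of surgeon cost-profiles and case-volume needed for 0.8.
import numpy as np
import pandas as pd

df = pd.read_csv("episodes.csv")       # hypothetical: surgeon_id, episode_cost
g = df.groupby("surgeon_id")["episode_cost"]

within_var = g.var().mean()            # average within-surgeon variance
n_bar = g.size().mean()
between_var = max(g.mean().var() - within_var / n_bar, 0)

def reliability(n_cases):
    # ICC for a mean of n cases: between / (between + within / n)
    return between_var / (between_var + within_var / n_cases)

print("reliability at median volume:", reliability(g.size().median()))
# Solve reliability(n) = 0.8 for n: n = (0.8 / 0.2) * within / between.
n_needed = np.ceil(4 * within_var / between_var) if between_var else np.inf
print("cases needed for reliability 0.8:", n_needed)
```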
Subject(s)
Health Care Costs, Physician Incentive Plans, Surgeons/economics, Episode of Care, Humans, Michigan, Registries, Reproducibility of Results, United States
ABSTRACT
BACKGROUND: The Veterans Affairs Diabetes Trial previously showed that intensive glucose lowering, as compared with standard therapy, did not significantly reduce the rate of major cardiovascular events among 1791 military veterans (median follow-up, 5.6 years). We report the extended follow-up of the study participants. METHODS: After the conclusion of the clinical trial, we followed participants, using central databases to identify procedures, hospitalizations, and deaths (complete cohort, with follow-up data for 92.4% of participants). Most participants agreed to additional data collection by means of annual surveys and periodic chart reviews (survey cohort, with 77.7% follow-up). The primary outcome was the time to the first major cardiovascular event (heart attack, stroke, new or worsening congestive heart failure, amputation for ischemic gangrene, or cardiovascular-related death). Secondary outcomes were cardiovascular mortality and all-cause mortality. RESULTS: The difference in glycated hemoglobin levels between the intensive-therapy group and the standard-therapy group averaged 1.5 percentage points during the trial (median level, 6.9% vs. 8.4%) and declined to 0.2 to 0.3 percentage points by 3 years after the trial ended. Over a median follow-up of 9.8 years, the intensive-therapy group had a significantly lower risk of the primary outcome than did the standard-therapy group (hazard ratio, 0.83; 95% confidence interval [CI], 0.70 to 0.99; P=0.04), with an absolute reduction in risk of 8.6 major cardiovascular events per 1000 person-years, but did not have reduced cardiovascular mortality (hazard ratio, 0.88; 95% CI, 0.64 to 1.20; P=0.42). No reduction in total mortality was evident (hazard ratio in the intensive-therapy group, 1.05; 95% CI, 0.89 to 1.25; P=0.54; median follow-up, 11.8 years). CONCLUSIONS: After nearly 10 years of follow-up, patients with type 2 diabetes who had been randomly assigned to intensive glucose control for 5.6 years had 8.6 fewer major cardiovascular events per 1000 person-years than those assigned to standard therapy, but no improvement was seen in the rate of overall survival. (Funded by the VA Cooperative Studies Program and others; VADT ClinicalTrials.gov number, NCT00032487.).
Subject(s)
Blood Glucose/metabolism, Cardiovascular Diseases/epidemiology, Diabetes Mellitus, Type 2/blood, Glycated Hemoglobin/analysis, Hypoglycemic Agents/administration & dosage, Aged, Cardiovascular Diseases/mortality, Cardiovascular Diseases/prevention & control, Diabetes Mellitus, Type 2/drug therapy, Diabetes Mellitus, Type 2/mortality, Female, Follow-Up Studies, Humans, Male, Middle Aged, Risk, Survival Analysis
ABSTRACT
BACKGROUND: The World Health Organization aims to reduce mortality from chronic diseases, including cardiovascular disease (CVD), by 25% by 2025. High blood pressure is a leading CVD risk factor. We sought to compare 3 strategies for treating blood pressure in China and India: a treat-to-target (TTT) strategy emphasizing lowering blood pressure to a target, a benefit-based tailored treatment (BTT) strategy emphasizing lowering CVD risk, and a hybrid strategy currently recommended by the World Health Organization. METHODS AND RESULTS: We developed a microsimulation model of adults aged 30 to 70 years in China and in India to compare these treatment strategies across a 10-year policy-planning horizon. In the model, a BTT strategy treating adults with a 10-year CVD event risk of ≥10% used similar financial resources but averted approximately 5 million more disability-adjusted life-years (DALYs) in both China and India than a TTT approach based on current US guidelines. The hybrid strategy in the current World Health Organization guidelines produced no substantial benefits over TTT. BTT was more cost-effective, at $205 to $272 per DALY averted, which was $142 to $182 less per DALY than TTT or hybrid strategies. The comparative effectiveness of BTT was robust to uncertainties in CVD risk estimation and to variations in the age range analyzed, the BTT treatment threshold, and rates of treatment access, adherence, or concurrent statin therapy. CONCLUSIONS: In model-based analyses, a simple BTT strategy was more effective and more cost-effective than TTT or hybrid strategies in reducing mortality.
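A toy illustration of why BTT can avert more events than TTT under a fixed treatment budget: both strategies treat the same number of people, but BTT ranks candidates by absolute CVD risk rather than by blood pressure alone. All numbers below are invented.

```python
# TTT vs BTT under a fixed budget: expected events averted by each strategy.
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
sbp = rng.normal(135, 18, N).clip(90, 220)          # systolic BP, mm Hg
# 10-year CVD risk: rises with BP but also varies with other (noisy) factors.
risk10 = 1 / (1 + np.exp(-(0.04 * (sbp - 140) + rng.normal(-2.2, 0.9, N))))
risk10 = risk10.clip(0.005, 0.6)
rrr = 0.25                                           # relative risk reduction

budget = int(0.25 * N)                               # can treat 25% of adults
ttt = np.argsort(-sbp)[:budget]                      # TTT proxy: highest BP first
btt = np.argsort(-risk10)[:budget]                   # BTT: highest risk first

print("events averted, TTT:", (risk10[ttt] * rrr).sum().round(0))
print("events averted, BTT:", (risk10[btt] * rrr).sum().round(0))
```

Because averted events scale with absolute risk, ranking by risk (BTT) always averts at least as many events as ranking by blood pressure alone for the same budget; the gap reflects how imperfectly blood pressure proxies total CVD risk.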
Subject(s)
Cardiovascular Diseases/mortality, Cardiovascular Diseases/therapy, Computer Simulation, Goals, Hypertension/mortality, Hypertension/therapy, Adult, Aged, Blood Pressure/physiology, Cardiovascular Diseases/diagnosis, China/epidemiology, Cost-Benefit Analysis/methods, Female, Humans, Hypertension/diagnosis, India/epidemiology, Male, Middle Aged, Risk Factors
ABSTRACT
BACKGROUND: Intensive blood pressure (BP) treatment can avert cardiovascular disease (CVD) events but can cause some serious adverse events. We sought to develop and validate risk models for predicting absolute risk difference (increased risk or decreased risk) for CVD events and serious adverse events from intensive BP therapy. A secondary aim was to test if the statistical method of elastic net regularization would improve the estimation of risk models for predicting absolute risk difference, as compared to a traditional backwards variable selection approach. METHODS AND FINDINGS: Cox models were derived from SPRINT trial data and validated on ACCORD-BP trial data to estimate risk of CVD events and serious adverse events; the models included terms for intensive BP treatment and heterogeneous response to intensive treatment. The Cox models were then used to estimate the absolute reduction in probability of CVD events (benefit) and absolute increase in probability of serious adverse events (harm) for each individual from intensive treatment. We compared the method of elastic net regularization, which uses repeated internal cross-validation to select variables and estimate coefficients in the presence of collinearity, to a traditional backwards variable selection approach. Data from 9,069 SPRINT participants with complete data on covariates were utilized for model development, and data from 4,498 ACCORD-BP participants with complete data were utilized for model validation. Participants were exposed to intensive (goal systolic pressure < 120 mm Hg) versus standard (<140 mm Hg) treatment. Two composite primary outcome measures were evaluated: (i) CVD events/deaths (myocardial infarction, acute coronary syndrome, stroke, congestive heart failure, or CVD death), and (ii) serious adverse events (hypotension, syncope, electrolyte abnormalities, bradycardia, or acute kidney injury/failure). The model for CVD chosen through elastic net regularization included interaction terms suggesting that older age, black race, higher diastolic BP, and higher lipids were associated with greater CVD risk reduction benefits from intensive treatment, while current smoking was associated with fewer benefits. The model for serious adverse events chosen through elastic net regularization suggested that male sex, current smoking, statin use, elevated creatinine, and higher lipids were associated with greater risk of serious adverse events from intensive treatment. SPRINT participants in the highest predicted benefit subgroup had a number needed to treat (NNT) of 24 to prevent 1 CVD event/death over 5 years (absolute risk reduction [ARR] = 0.042, 95% CI: 0.018, 0.066; P = 0.001), those in the middle predicted benefit subgroup had a NNT of 76 (ARR = 0.013, 95% CI: -0.0001, 0.026; P = 0.053), and those in the lowest subgroup had no significant risk reduction (ARR = 0.006, 95% CI: -0.007, 0.018; P = 0.71). Those in the highest predicted harm subgroup had a number needed to harm (NNH) of 27 to induce 1 serious adverse event (absolute risk increase [ARI] = 0.038, 95% CI: 0.014, 0.061; P = 0.002), those in the middle predicted harm subgroup had a NNH of 41 (ARI = 0.025, 95% CI: 0.012, 0.038; P < 0.001), and those in the lowest subgroup had no significant risk increase (ARI = -0.007, 95% CI: -0.043, 0.030; P = 0.72). 
In ACCORD-BP, participants in the highest subgroup of predicted benefit had a significant absolute CVD risk reduction, but the overall ACCORD-BP sample was skewed toward participants with less predicted benefit and more predicted risk than in SPRINT. The models chosen through traditional backwards selection had similar ability to identify the absolute risk difference for CVD as the elastic net models, but poorer ability to correctly identify the absolute risk difference for serious adverse events. A key limitation of the analysis is the limited sample size of the ACCORD-BP trial, which widened the confidence intervals for ARI among persons with type 2 diabetes. Additionally, it is not possible to mechanistically explain the physiological relationships underlying the heterogeneous treatment effects captured by the models, since the study was an observational secondary data analysis. CONCLUSIONS: We found that predictive models could help identify subgroups of participants in both SPRINT and ACCORD-BP who had lower versus higher ARRs in CVD events/deaths with intensive BP treatment, and participants who had lower versus higher ARIs in serious adverse events.
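A sketch of an elastic-net-penalized Cox model with treatment-covariate interactions, in the spirit of the analysis above, using the lifelines library; all column names are hypothetical, and the published analysis used repeated cross-validation to tune the penalty rather than the fixed value shown here.

```python
# Elastic-net Cox model with treatment interactions, then per-person
# predicted absolute risk difference from intensive BP treatment.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("sprint.csv")  # hypothetical analytic file
covars = ["age", "black", "dbp", "ldl", "smoker", "male", "statin", "creatinine"]

# Interaction terms let the treatment effect vary across patients (HTE).
for c in covars:
    df[f"intensive_x_{c}"] = df["intensive"] * df[c]

cols = (["time_to_cvd", "cvd_event", "intensive"] + covars
        + [f"intensive_x_{c}" for c in covars])
cph = CoxPHFitter(penalizer=0.05, l1_ratio=0.5)  # elastic net: L1/L2 mix
cph.fit(df[cols], duration_col="time_to_cvd", event_col="cvd_event")

# Absolute risk difference at 5 years (time assumed to be in years here):
# risk under standard minus risk under intensive treatment, per participant.
std = df.assign(intensive=0, **{f"intensive_x_{c}": 0 for c in covars})
itv = df.assign(intensive=1, **{f"intensive_x_{c}": df[c] for c in covars})
arr = (1 - cph.predict_survival_function(std, times=[5.0]).T.squeeze()) - \
      (1 - cph.predict_survival_function(itv, times=[5.0]).T.squeeze())
print(arr.describe())  # spread of predicted ARR = predicted HTE
```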