Results 1 - 20 of 33
1.
N Engl J Med ; 380(23): 2215-2224, 2019 Jun 06.
Article in English | MEDLINE | ID: mdl-31167051

ABSTRACT

BACKGROUND: We previously reported that a median of 5.6 years of intensive as compared with standard glucose lowering in 1791 military veterans with type 2 diabetes resulted in a risk of major cardiovascular events that was significantly lower (by 17%) after a total of 10 years of combined intervention and observational follow-up. We now report the full 15-year follow-up. METHODS: We observationally followed enrolled participants (complete cohort) after the conclusion of the original clinical trial by using central databases to identify cardiovascular events, hospitalizations, and deaths. Participants were asked whether they would be willing to provide additional data by means of surveys and chart reviews (survey cohort). The prespecified primary outcome was a composite of major cardiovascular events, including nonfatal myocardial infarction, nonfatal stroke, new or worsening congestive heart failure, amputation for ischemic gangrene, and death from cardiovascular causes. Death from any cause was a prespecified secondary outcome. RESULTS: There were 1655 participants in the complete cohort and 1391 in the survey cohort. During the trial (which originally enrolled 1791 participants), the separation of the glycated hemoglobin curves between the intensive-therapy group (892 participants) and the standard-therapy group (899 participants) averaged 1.5 percentage points, and this difference declined to 0.2 to 0.3 percentage points by 3 years after the trial ended. Over a period of 15 years of follow-up (active treatment plus post-trial observation), the risks of major cardiovascular events or death were not lower in the intensive-therapy group than in the standard-therapy group (hazard ratio for primary outcome, 0.91; 95% confidence interval [CI], 0.78 to 1.06; P = 0.23; hazard ratio for death, 1.02; 95% CI, 0.88 to 1.18). 
The risk of major cardiovascular disease outcomes was reduced, however, during an extended interval of separation of the glycated hemoglobin curves (hazard ratio, 0.83; 95% CI, 0.70 to 0.99), but this benefit did not continue after equalization of the glycated hemoglobin levels (hazard ratio, 1.26; 95% CI, 0.90 to 1.75). CONCLUSIONS: Participants with type 2 diabetes who had been randomly assigned to intensive glucose control for 5.6 years had a lower risk of cardiovascular events than those who received standard therapy only during the prolonged period in which the glycated hemoglobin curves were separated. There was no evidence of a legacy effect or a mortality benefit with intensive glucose control. (Funded by the VA Cooperative Studies Program; VADT ClinicalTrials.gov number, NCT00032487.).


Subject(s)
Blood Glucose/analysis , Cardiovascular Diseases/prevention & control , Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/administration & dosage , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/mortality , Diabetes Mellitus, Type 2/blood , Female , Follow-Up Studies , Humans , Hyperglycemia/prevention & control , Male , Middle Aged , Quality of Life , Randomized Controlled Trials as Topic , Veterans
2.
BMC Health Serv Res ; 22(1): 739, 2022 Jun 03.
Article in English | MEDLINE | ID: mdl-35659234

ABSTRACT

BACKGROUND: Hospital-specific template matching (HS-TM) is a newer method of hospital performance assessment. OBJECTIVE: To assess the interpretability, credibility, and usability of HS-TM-based vs. regression-based performance assessments. RESEARCH DESIGN: We surveyed hospital leaders (January-May 2021) and completed follow-up semi-structured interviews. Surveys included four hypothetical performance assessment vignettes, with method (HS-TM, regression) and hospital mortality randomized. SUBJECTS: Nationwide Veterans Affairs Chiefs of Staff, Medicine, and Hospital Medicine. MEASURES: Correct interpretation; self-rated confidence in interpretation; and self-rated trust in assessment (via survey). Concerns about credibility and main uses (via thematic analysis of interview transcripts). RESULTS: In total, 84 participants completed 295 survey vignettes. Respondents correctly interpreted 81.8% HS-TM vs. 56.5% regression assessments, p < 0.001. Respondents "trusted the results" for 70.9% HS-TM vs. 58.2% regression assessments, p = 0.03. Nine concerns about credibility were identified: inadequate capture of case-mix and/or illness severity; inability to account for specialized programs (e.g., transplant center); comparison to geographically disparate hospitals; equating mortality with quality; lack of criterion standards; low power; comparison to dissimilar hospitals; generation of rankings; and lack of transparency. Five concerns were equally relevant to both methods, one more pertinent to HS-TM, and three more pertinent to regression. Assessments were mainly used to trigger further quality evaluation (a "check oil light") and motivate behavior change. CONCLUSIONS: HS-TM-based performance assessments were more interpretable and more credible to VA hospital leaders than regression-based assessments. However, leaders had a similar set of concerns related to credibility for both methods and felt both were best used as a screen for further evaluation.


Subject(s)
Diagnosis-Related Groups , Hospitals , Delivery of Health Care , Hospital Mortality , Humans , Surveys and Questionnaires
3.
Med Care ; 59(12): 1090-1098, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34629424

ABSTRACT

BACKGROUND: Hospital-specific template matching is a newer method of hospital performance measurement that may be fairer than regression-based benchmarking. However, it has been tested in only limited research settings. OBJECTIVE: The objective of this study was to test the feasibility of hospital-specific template matching assessments in the Veterans Affairs (VA) health care system and determine power to detect greater-than-expected 30-day mortality. RESEARCH DESIGN: Observational cohort study with hospital-specific template matching assessment. For each VA hospital, the 30-day mortality of a representative subset of hospitalizations was compared with the pooled mortality from matched hospitalizations at a set of comparison VA hospitals treating sufficiently similar patients. Simulation was used to determine power to detect greater-than-expected mortality. SUBJECTS: A total of 556,266 hospitalizations at 122 VA hospitals in 2017. MEASURES: Number of comparison hospitals identified per hospital; 30-day mortality. RESULTS: Each hospital had a median of 38 comparison hospitals (interquartile range: 33, 44) identified, and 116 (95.1%) had at least 20 comparison hospitals. In total, 8 hospitals (6.6%) had a significantly lower 30-day mortality than their benchmark, 5 hospitals (4.1%) had a significantly higher 30-day mortality, and the remaining 109 hospitals (89.3%) were similar to their benchmark. Power to detect a standardized mortality ratio of 2.0 ranged from 72.5% to 79.4% for a hospital with the fewest (6) versus most (64) comparison hospitals. CONCLUSIONS: Hospital-specific template matching may be feasible for assessing hospital performance in the diverse VA health care system, but further refinements are needed to optimize the approach before operational use. Our findings are likely applicable to other large and diverse multihospital systems.


Subject(s)
Benchmarking/methods , Hospitals/classification , Quality of Health Care/standards , Benchmarking/trends , Cohort Studies , Hospitals/trends , Humans , Quality Indicators, Health Care/trends , Quality of Health Care/statistics & numerical data , United States
4.
BMC Health Serv Res ; 21(1): 797, 2021 Aug 11.
Article in English | MEDLINE | ID: mdl-34380495

ABSTRACT

BACKGROUND: While the Veterans Health Administration (VHA) MOVE! weight management program is effective in helping patients lose weight and is available at every VHA medical center across the United States, reaching patients to engage them in treatment remains a challenge. Facility-based MOVE! programs vary in structures, processes of programming, and levels of reach, with no single factor explaining variation in reach. Configurational analysis, based on Boolean algebra and set theory, represents a mathematical approach to data analysis well-suited for discerning how conditions interact and identifying multiple pathways leading to the same outcome. We applied configurational analysis to identify facility-level obesity treatment program arrangements that directly linked to higher reach. METHODS: A national survey was fielded in March 2017 to elicit information about more than 75 different components of obesity treatment programming in all VHA medical centers. These survey data were linked to reach scores available through administrative data. Reach scores were calculated for each program by dividing the number of "new" MOVE! visits in 2017 by the total number of Veterans who are candidates for obesity treatment and then multiplying by 1000. Programs in the top 40% of reach scores (n = 51) were compared to those in the bottom 40% (n = 51). Configurational analysis was applied to identify specific combinations of conditions linked to reach rates. RESULTS: One hundred twenty-seven MOVE! program representatives responded to the survey and had complete reach data. The final solution consisted of 5 distinct pathways comprising combinations of program components related to pharmacotherapy, bariatric surgery, and comprehensive lifestyle intervention; 3 of the 5 pathways depended on the size/complexity of the medical center. The 5 pathways explained 78% (40/51) of the facilities in the higher-reach group with 85% consistency (40/47).
CONCLUSIONS: Specific combinations of facility-level conditions identified through configurational analysis uniquely distinguished facilities with higher reach from those with lower reach. The solutions demonstrated how local context combined with specific program components to account for a key implementation outcome. These findings will guide system recommendations about optimal program structures to maximize reach to patients who would benefit from obesity treatment programs such as MOVE!.
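The reach-score arithmetic can be sketched in a few lines; the figures and names below are invented for illustration, with reach taken here as new MOVE! visits per 1,000 treatment candidates:

```python
def reach_score(new_move_visits: int, eligible_veterans: int) -> float:
    """Reach per 1,000 eligible patients: new MOVE! visits divided by the
    number of Veterans who are candidates for obesity treatment, times 1,000.
    (Illustrative sketch; the inputs are not real VHA figures.)"""
    if eligible_veterans <= 0:
        raise ValueError("no eligible patients recorded")
    return new_move_visits / eligible_veterans * 1000

# Hypothetical facility: 12,000 treatment candidates, 420 new MOVE! visits.
score = reach_score(new_move_visits=420, eligible_veterans=12000)  # 35.0 per 1,000
```

Facilities would then be ranked on this score to form the higher- and lower-reach comparison groups.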


Subject(s)
United States Department of Veterans Affairs , Veterans , Humans , Life Style , Obesity/prevention & control , United States , Veterans Health
5.
Med Care ; 57(4): e22-e27, 2019 04.
Article in English | MEDLINE | ID: mdl-30394981

ABSTRACT

BACKGROUND: Electronic health records provide clinically rich data for research and quality improvement work. However, the data are often unstructured text and may be inconsistently recorded and extracted into centralized databases, making them difficult to use for research. OBJECTIVES: We sought to quantify the variation in how key laboratory measures are recorded in the Department of Veterans Affairs (VA) Corporate Data Warehouse (CDW) across hospitals and over time. We included 6 laboratory tests commonly drawn within the first 24 hours of hospital admission (albumin, bilirubin, creatinine, hemoglobin, sodium, white blood cell count) from fiscal years 2005-2015. RESULTS: We assessed laboratory test capture for 5,454,411 acute hospital admissions at 121 sites across the VA. The mapping of standardized laboratory nomenclature (Logical Observation Identifiers Names and Codes, LOINCs) to test results in the CDW varied within hospital by laboratory test. The relationship between LOINCs and laboratory test names improved over time; by FY2015, 109 hospitals (95.6%) had >90% of the 6 laboratory tests mapped to an appropriate LOINC. All fields used to classify test results are provided in an Appendix (Supplemental Digital Content 1, http://links.lww.com/MLR/B635). CONCLUSIONS: The use of electronic health record data for research requires assessing data consistency and quality. Using laboratory test results requires the use of both unstructured text fields and the identification of appropriate LOINCs. When using data from multiple facilities, the results should be carefully examined by facility and over time to maximize the capture of data fields.
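The per-facility, per-year mapping check the authors describe can be sketched as a grouped aggregation. The column names below are assumptions, not the actual CDW schema (2160-0 is the LOINC for serum creatinine):

```python
import pandas as pd

# Toy extract: one row per lab result, with the LOINC missing where the
# source system recorded only a free-text test name.
labs = pd.DataFrame({
    "hospital": ["A", "A", "A", "B", "B", "B"],
    "year":     [2005, 2005, 2015, 2005, 2015, 2015],
    "test":     ["creatinine"] * 6,
    "loinc":    [None, "2160-0", "2160-0", None, "2160-0", "2160-0"],
})

# Share of results mapped to a LOINC, by hospital and year.
coverage = (labs.assign(mapped=labs["loinc"].notna())
                .groupby(["hospital", "year"])["mapped"]
                .mean())
```

Examining `coverage` by facility and fiscal year is the kind of audit the conclusions recommend before pooling multi-site data.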


Subject(s)
Data Warehousing/statistics & numerical data , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Hospitals, Veterans , Logical Observation Identifiers Names and Codes , Humans , Longitudinal Studies , Middle Aged , United States , United States Department of Veterans Affairs
6.
BMC Med Res Methodol ; 19(1): 94, 2019 05 08.
Article in English | MEDLINE | ID: mdl-31068135

ABSTRACT

BACKGROUND: To study patient physiology throughout a period of acute hospitalization, we sought to create accessible, standardized nationwide data at the level of the individual patient-facility-day. This methodology paper summarizes the development, organization, and characteristics of the Veterans Affairs Patient Database 2014-2017 (VAPD 2014-2017). The VAPD 2014-2017 contains acute hospitalizations from all parts of the nationwide VA healthcare system with daily physiology including clinical data (labs, vitals, medications, risk scores, etc.), intensive care unit (ICU) indicators, and facility, patient, and hospitalization characteristics. METHODS: The VA data structure and database organization represent a complex multi-hospital system. We define a single-site hospitalization as one or more consecutive stays with an acute treating specialty at a single facility. The VAPD 2014-2017 is structured at the patient-facility-day level, where every patient-day in a hospital is a row with separate identification variables for facility, patient, and hospitalization. The VAPD 2014-2017 includes daily laboratory results, vital signs, and inpatient medications. These data were validated and verified through lab value range checks and comparison with patient charts. Sepsis, risk scores, and organ dysfunction definitions were standardized and calculated. RESULTS: We identified 565,242 single-site hospitalizations (SSHs) in 2014; 558,060 SSHs in 2015; 553,961 SSHs in 2016; and 550,236 SSHs in 2017 at 141 VA hospitals. The average length of stay was four days for all study years. In-hospital mortality decreased from 2014 to 2017 (1.7% to 1.4%); 30-day readmission rates increased from 15.3% in 2014 to 15.6% in 2017; 30-day mortality also decreased, from 4.4% in 2014 to 4.1% in 2017. From 2014 to 2017, 107,512 SSHs (4.8%) met the Centers for Disease Control and Prevention's Electronic Health Record-based retrospective definition of sepsis.
CONCLUSION: The VAPD 2014-2017 represents a large, standardized collection of granular data from a heterogeneous nationwide healthcare system. It is also a direct resource for studying the evolution of inpatient physiology during both acute and critical illness.


Subject(s)
Databases, Factual , Electronic Health Records/statistics & numerical data , Hospitalization/statistics & numerical data , Aged , Female , Hospital Mortality , Hospitals/statistics & numerical data , Humans , Length of Stay , Male , Middle Aged , Sepsis , Severity of Illness Index , United States , United States Department of Veterans Affairs
7.
N Engl J Med ; 372(23): 2197-206, 2015 Jun 04.
Article in English | MEDLINE | ID: mdl-26039600

ABSTRACT

BACKGROUND: The Veterans Affairs Diabetes Trial previously showed that intensive glucose lowering, as compared with standard therapy, did not significantly reduce the rate of major cardiovascular events among 1791 military veterans (median follow-up, 5.6 years). We report the extended follow-up of the study participants. METHODS: After the conclusion of the clinical trial, we followed participants, using central databases to identify procedures, hospitalizations, and deaths (complete cohort, with follow-up data for 92.4% of participants). Most participants agreed to additional data collection by means of annual surveys and periodic chart reviews (survey cohort, with 77.7% follow-up). The primary outcome was the time to the first major cardiovascular event (heart attack, stroke, new or worsening congestive heart failure, amputation for ischemic gangrene, or cardiovascular-related death). Secondary outcomes were cardiovascular mortality and all-cause mortality. RESULTS: The difference in glycated hemoglobin levels between the intensive-therapy group and the standard-therapy group averaged 1.5 percentage points during the trial (median level, 6.9% vs. 8.4%) and declined to 0.2 to 0.3 percentage points by 3 years after the trial ended. Over a median follow-up of 9.8 years, the intensive-therapy group had a significantly lower risk of the primary outcome than did the standard-therapy group (hazard ratio, 0.83; 95% confidence interval [CI], 0.70 to 0.99; P=0.04), with an absolute reduction in risk of 8.6 major cardiovascular events per 1000 person-years, but did not have reduced cardiovascular mortality (hazard ratio, 0.88; 95% CI, 0.64 to 1.20; P=0.42). No reduction in total mortality was evident (hazard ratio in the intensive-therapy group, 1.05; 95% CI, 0.89 to 1.25; P=0.54; median follow-up, 11.8 years). 
CONCLUSIONS: After nearly 10 years of follow-up, patients with type 2 diabetes who had been randomly assigned to intensive glucose control for 5.6 years had 8.6 fewer major cardiovascular events per 1000 person-years than those assigned to standard therapy, but no improvement was seen in the rate of overall survival. (Funded by the VA Cooperative Studies Program and others; VADT ClinicalTrials.gov number, NCT00032487.).


Subject(s)
Blood Glucose/metabolism , Cardiovascular Diseases/epidemiology , Diabetes Mellitus, Type 2/blood , Glycated Hemoglobin/analysis , Hypoglycemic Agents/administration & dosage , Aged , Cardiovascular Diseases/mortality , Cardiovascular Diseases/prevention & control , Diabetes Mellitus, Type 2/drug therapy , Diabetes Mellitus, Type 2/mortality , Female , Follow-Up Studies , Humans , Male , Middle Aged , Risk , Survival Analysis
8.
Med Care ; 55(9): 864-870, 2017 09.
Article in English | MEDLINE | ID: mdl-28763374

ABSTRACT

BACKGROUND: Accurately estimating cardiovascular risk is fundamental to good decision-making in cardiovascular disease (CVD) prevention, but risk scores developed in one population often perform poorly in dissimilar populations. We sought to examine whether a large integrated health system can use its electronic health data to better predict individual patients' risk of developing CVD. METHODS: We created a cohort using all patients ages 45-80 who used Department of Veterans Affairs (VA) ambulatory care services in 2006 with no history of CVD, heart failure, or loop diuretic use. Our outcome variable was new-onset CVD in 2007-2011. We then developed a series of recalibrated scores, including a fully refit "VA Risk Score-CVD (VARS-CVD)." We tested the different scores using standard measures of prediction quality. RESULTS: For the 1,512,092 patients in the study, the atherosclerotic cardiovascular disease (ASCVD) risk score had similar discrimination as the VARS-CVD (c-statistic of 0.66 in men and 0.73 in women), but the ASCVD model had poor calibration, predicting 63% more events than observed. Calibration was excellent in the fully recalibrated VARS-CVD tool, but the simpler techniques tested proved less reliable. CONCLUSIONS: We found that local electronic health record data can be used to estimate CVD risk better than an established risk score based on research populations. Recalibration improved estimates dramatically, and the type of recalibration was important. Such tools can also easily be integrated into a health system's electronic health record and can be more readily updated.
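Logistic recalibration, refitting an intercept and slope on an external score's logit, is one simple version of the techniques such studies compare. A minimal sketch on synthetic data (not the VARS-CVD code; the overprediction here is built in to mimic a miscalibrated external score):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
true_logit = rng.normal(-2.0, 1.0, n)
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# An "external" score that discriminates well but, like the established score
# described above, systematically overpredicts events in this population.
external_logit = true_logit + 0.6
external_prob = 1 / (1 + np.exp(-external_logit))

# Recalibrate: refit intercept and slope on the external logit
# (large C approximates an unpenalized fit).
recal = LogisticRegression(C=1e6).fit(external_logit.reshape(-1, 1), y)
recal_prob = recal.predict_proba(external_logit.reshape(-1, 1))[:, 1]

observed = y.mean()                   # observed event rate
overpredicted = external_prob.mean()  # external score's inflated mean prediction
recalibrated = recal_prob.mean()      # close to the observed rate after refitting
```

Discrimination (rank ordering) is unchanged by this transformation; only calibration improves, which matches the pattern of results reported above.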


Subject(s)
Cardiovascular Diseases/epidemiology , Electronic Health Records/statistics & numerical data , Health Status Indicators , Age Distribution , Aged , Atherosclerosis/epidemiology , Female , Humans , Male , Middle Aged , Risk Assessment , Risk Factors , Sex Distribution , Socioeconomic Factors , United States , United States Department of Veterans Affairs
9.
Stat Med ; 36(13): 2148-2160, 2017 06 15.
Article in English | MEDLINE | ID: mdl-28245528

ABSTRACT

Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
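The core phenomenon, mislabeled outcomes pulling the naive AUC toward 0.5, can be reproduced in a short simulation. This illustrates the problem only, not the authors' correction procedure; the misclassification rates are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """Mann-Whitney form of the AUC: probability that a random case
    outscores a random control, with ties counted as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

n = 4000
y = rng.binomial(1, 0.2, n)             # true outcomes, 20% prevalence
score = y + rng.normal(0.0, 1.2, n)     # risk score correlated with the outcome

# Assumed misclassification rates: false negatives among cases,
# false positives among controls.
fnr, fpr = 0.15, 0.05
u = rng.random(n)
flip = np.where(y == 1, u < fnr, u < fpr)
y_obs = np.where(flip, 1 - y, y)        # observed, partly mislabeled outcomes

true_auc = auc(score, y)
naive_auc = auc(score, y_obs)           # biased toward 0.5
```

As the abstract notes, the size of the gap between `naive_auc` and `true_auc` depends on the two flip rates and on prevalence.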


Subject(s)
Models, Statistical , ROC Curve , Area Under Curve , Bias , Electronic Health Records , Hospitalization/statistics & numerical data , Humans , Risk Assessment/methods , United States , United States Department of Veterans Affairs/statistics & numerical data
10.
Crit Care Med ; 43(7): 1368-74, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25803652

ABSTRACT

OBJECTIVES: There is systematic variation between hospitals in their care of severe sepsis, but little information on whether this variation impacts sepsis-related mortality, or how hospitals' and health-systems' impacts have changed over time. We examined whether hospital and regional organization of severe sepsis care is associated with meaningful differences in 30-day mortality in a large integrated health care system, and the extent to which those effects are stable over time. DESIGN: In this retrospective cohort study, we used risk- and reliability-adjusted hierarchical logistic regression to estimate hospital- and region-level random effects after controlling for severity of illness using a rich mix of administrative and clinical laboratory data. SETTING: One hundred fourteen U.S. Department of Veterans Affairs hospitals in 21 geographic regions. PATIENTS: Forty-three thousand seven hundred thirty-three patients with severe sepsis in 2012, compared to 33,095 such patients in 2008. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The median hospital in the worst quintile of performers had a risk-adjusted 30-day mortality of 16.7% (95% CI, 13.5%, 20.5%) in 2012 compared with the best quintile, which had a risk-adjusted mortality of 12.8% (95% CI, 10.7%, 15.3%). Hospitals and regions explained a statistically and clinically significant proportion of the variation in patient outcomes. Thirty-day mortality after severe sepsis declined from 18.3% in 2008 to 14.7% in 2012 despite very similar severity of illness between years. The proportion of the variance in sepsis-related mortality explained by hospitals and regions was stable between 2008 and 2012. CONCLUSIONS: In this large integrated healthcare system, there is clinically significant variation in sepsis-related mortality associated with hospitals and regions. The proportion of variance explained by hospitals and regions has been stable over time, although sepsis-related mortality has declined.
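The reliability-adjustment step, shrinking each hospital's raw rate toward the grand mean in proportion to how much of its variation is signal rather than sampling noise, can be sketched with a toy empirical-Bayes calculation. This is not the study's full hierarchical model; the between-hospital variance is simply assumed here, and the counts are invented:

```python
import numpy as np

# Toy data: (deaths, admissions) for five hospitals of very different sizes.
deaths     = np.array([  3,  40, 150,  18,   9])
admissions = np.array([ 20, 250, 900, 120,  50])
raw_rate = deaths / admissions

overall = deaths.sum() / admissions.sum()      # grand mortality rate

# Between-hospital variance of true rates (assumed known for this sketch);
# within-hospital sampling variance shrinks with volume.
between_var = 0.001
within_var = overall * (1 - overall) / admissions

reliability = between_var / (between_var + within_var)   # 0..1, higher for big hospitals
shrunk_rate = overall + reliability * (raw_rate - overall)
```

Small hospitals get pulled strongly toward the mean, large hospitals barely move, which is why quintile comparisons like the one above are made on adjusted rather than raw rates.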


Subject(s)
Sepsis/mortality , Sepsis/therapy , Aged , Cohort Studies , Delivery of Health Care , Female , Hospitals , Humans , Male , Patient Outcome Assessment , Retrospective Studies , Time Factors , United States , United States Department of Veterans Affairs
11.
J Gen Intern Med ; 29 Suppl 2: S675-81, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24715403

ABSTRACT

BACKGROUND: Clinical Pharmacy Specialists (CPSs) and Registered Nurses (RNs) are integrally involved in the Patient Aligned Care Teams (PACT) model, especially as physician extenders in the management of chronic disease states. CPSs may be an alternative to physicians as a supporting prescriber for RN case management (RNCM) of poorly controlled hypertension. OBJECTIVE: To compare CPS-directed versus physician-directed RNCM for patients with poorly controlled hypertension. DESIGN: Non-randomized, retrospective comparison of a natural experiment. SETTING: A large Midwestern Veterans Affairs (VA) medical center. INTERVENTION: Utilizing CPSs as alternatives to physicians for directing RNCM of poorly controlled hypertension. PATIENTS: The cohort comprised 126 patients who attended RNCM appointments for poorly controlled hypertension between 20 September 2011 and 31 October 2011 with either CPS or physician involvement in the clinical decision making. Patients were excluded if both a CPS and a physician were involved in the index visit, if they were enrolled in Home Based Primary Care, or if they displayed non-adherence to the plan. MAIN MEASURES: All data were obtained from review of electronic medical records. Outcomes included whether a patient received medication intensification at the index visit and, as the main measure, blood pressures between the index and next consecutive visit. KEY RESULTS: All patients had medication intensification. Patients receiving CPS-directed RNCM had greater decreases in systolic blood pressure compared to those receiving physician-directed RNCM (14 ± 13 mmHg versus 10 ± 11 mmHg; p = 0.04). After adjusting for the time between visits, initial systolic blood pressure, and prior stroke, provider type was no longer significant (p = 0.24). Change in diastolic blood pressure and attainment of blood pressure < 140/90 mm Hg were similar between groups (p = 0.93, p = 0.91, respectively).
CONCLUSIONS: CPS-directed and physician-directed RNCM for hypertension demonstrated similar blood pressure reduction. These results support the utilization of CPSs as prescribers to support RNCM for chronic diseases.


Subject(s)
Case Management , Hospitals, Veterans , Hypertension/therapy , Nurses , Patient Care Team , Pharmacists , Aged , Case Management/standards , Cooperative Behavior , Female , Hospitals, Veterans/standards , Humans , Hypertension/diagnosis , Male , Middle Aged , Nurses/standards , Patient Care Team/standards , Pharmacists/standards , Retrospective Studies
12.
Med Care ; 51(3): 251-8, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23269109

ABSTRACT

BACKGROUND: Use of the electronic health record (EHR) is expected to increase rapidly in the near future, yet little research exists on whether analyzing internal EHR data using flexible, adaptive statistical methods could improve clinical risk prediction. Extensive implementation of EHR in the Veterans Health Administration provides an opportunity for exploration. OBJECTIVES: To compare the performance of various approaches for predicting risk of cerebrovascular and cardiovascular (CCV) death, using traditional risk predictors versus more comprehensive EHR data. RESEARCH DESIGN: Retrospective cohort study. We identified all Veterans Health Administration patients without recent CCV events treated at 12 facilities from 2003 to 2007, and predicted risk using the Framingham risk score, logistic regression, generalized additive modeling, and gradient tree boosting. MEASURES: The outcome was CCV-related death within 5 years. We assessed each method's predictive performance with the area under the receiver operating characteristic curve (AUC), the Hosmer-Lemeshow goodness-of-fit test, plots of estimated risk, and reclassification tables, using cross-validation to penalize overfitting. RESULTS: Regression methods outperformed the Framingham risk score, even with the same predictors (AUC increased from 71% to 73% and calibration also improved). Even better performance was attained in models using additional EHR-derived predictor variables (AUC increased to 78% and net reclassification improvement was as large as 0.29). Nonparametric regression further improved calibration and discrimination compared with logistic regression. CONCLUSIONS: Despite the EHR lacking some risk factors and its imperfect data quality, health care systems may be able to substantially improve risk prediction for their patients by using internally developed EHR-derived models and flexible statistical methodology.
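The model-comparison workflow, the same predictors fed to a parametric model and a flexible learner and scored by cross-validated AUC, can be sketched with scikit-learn on synthetic data. This is illustrative only; the study used EHR-derived predictors and also evaluated calibration and reclassification, which this sketch omits:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic, imbalanced EHR-like data: ~10% event rate, 20 candidate predictors.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           weights=[0.9], random_state=0)

logit = LogisticRegression(max_iter=1000)
gbm = GradientBoostingClassifier(random_state=0)

# 5-fold cross-validation penalizes overfitting, as in the paper's evaluation.
auc_logit = cross_val_score(logit, X, y, cv=5, scoring="roc_auc").mean()
auc_gbm = cross_val_score(gbm, X, y, cv=5, scoring="roc_auc").mean()
```

Whether the boosted model beats logistic regression depends on how nonlinear the true risk surface is; the paper's point is that cross-validated comparison on internal data answers that question for a specific health system.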


Subject(s)
Cardiovascular Diseases/prevention & control , Electronic Health Records , Models, Statistical , Risk Assessment/methods , Cardiovascular Diseases/mortality , Electronic Health Records/statistics & numerical data , Female , Humans , Male , Middle Aged , Regression Analysis , Reproducibility of Results , Retrospective Studies , Risk Assessment/statistics & numerical data , Sensitivity and Specificity , Statistics, Nonparametric , United States/epidemiology , United States Department of Veterans Affairs/statistics & numerical data
13.
Crit Care Med ; 40(9): 2569-75, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22732289

ABSTRACT

OBJECTIVE: To assess the relationship between volume of nonoperative mechanically ventilated patients receiving care in a specific Veterans Health Administration hospital and their mortality. DESIGN: Retrospective cohort study. SETTING: One hundred nineteen Veterans Health Administration medical centers. PATIENTS: We identified 5,131 hospitalizations involving mechanically ventilated patients in an intensive care unit during 2009, who did not receive surgery. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We extracted demographic and clinical data from the VA Inpatient Evaluation Center. For each hospital, we defined volume as the total number of nonsurgical admissions receiving mechanical ventilation in an intensive care unit during 2009. We examined the hospital contribution to 30-day mortality using multilevel logistic regression models with a random intercept for each hospital. We quantified the extent of interhospital variation in 30-day mortality using the intraclass correlation coefficient and median odds ratio. We used generalized estimating equations to examine the relationship between volume and 30-day mortality and risk-adjusted all models using a patient-level prognostic score derived from clinical data representing the risk of death conditional on treatment at a high-volume hospital. Mean age for the sample was 65 (SD 11) years, 97% were men, and 60% were white. The median VA hospital cared for 40 (interquartile range 19-62) mechanically ventilated patients in 2009. Crude 30-day mortality for these patients was 36.9%. After reliability and risk adjustment to the median patient, adjusted hospital-level mortality varied from 33.5% to 40.6%. The intraclass correlation coefficient for the hospital-level variation was 0.6% (95% confidence interval 0.1, 3.4%), with a median odds ratio of 1.15 (95% confidence interval 1.06, 1.38).
The relationship between hospital volume of mechanically ventilated patients and 30-day mortality was not statistically significant: each 50-patient increase in volume was associated with a nonsignificant 2% decrease in the odds of death within 30 days (odds ratio 0.98, 95% confidence interval 0.87-1.10). CONCLUSIONS: Veterans Health Administration hospitals caring for lower volumes of mechanically ventilated patients do not have worse mortality. Mechanisms underlying this finding are unclear but, if elucidated, may offer other integrated health systems ways to overcome the disadvantages of small-volume centers in achieving good outcomes.
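The two variance measures reported in this abstract are linked algebraically: on the latent logistic scale the residual variance is π²/3, so an intraclass correlation coefficient of 0.6% implies a between-hospital variance, and hence a median odds ratio, close to the reported 1.15. A quick check of that arithmetic:

```python
import math

icc = 0.006                                  # hospital-level ICC from the abstract
resid_var = math.pi ** 2 / 3                 # latent-scale residual variance (logistic)
sigma2 = icc / (1 - icc) * resid_var         # between-hospital variance on the logit scale

z75 = 0.6745                                 # 75th percentile of the standard normal
mor = math.exp(math.sqrt(2 * sigma2) * z75)  # median odds ratio, ~1.14
```

The computed value agrees with the point estimate of 1.15 given above, to rounding.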


Subject(s)
Cause of Death , Critical Illness/mortality , Hospital Mortality/trends , Hospitals, Veterans/statistics & numerical data , Respiration, Artificial/mortality , Aged , Cohort Studies , Confidence Intervals , Critical Illness/therapy , Databases, Factual , Female , Humans , Intensive Care Units , Length of Stay , Logistic Models , Male , Middle Aged , Odds Ratio , Quality Control , Respiration, Artificial/methods , Respiration, Artificial/statistics & numerical data , Retrospective Studies , Risk Assessment , Surgical Procedures, Operative , Survival Analysis , United States , Workload
14.
Transl Behav Med ; 12(11): 1029-1037, 2022 11 21.
Article in English | MEDLINE | ID: mdl-36408955

ABSTRACT

Obesity is a well-established risk factor for increased morbidity and mortality. Comprehensive lifestyle interventions, pharmacotherapy, and bariatric surgery are three effective treatment approaches for obesity. The Veterans Health Administration (VHA) offers all three domains but in different configurations across medical facilities. The study aim was to explore the relationship between configurations of three types of obesity treatments, context, and population impact across VHA using coincidence analysis. This was a cross-sectional analysis of survey data describing weight management treatment components linked with administrative data to compute population impact for each facility. Coincidence analysis was used to identify combinations of treatment components that led to higher population impact. Facilities with higher impact were in the top two quintiles for (1) reach to eligible patients and (2) weight outcomes. Sixty-nine facilities were included in the analyses. The final model explained 88% (29/33) of the higher-impact facilities with 91% consistency (29/32) and comprised five distinct pathways. Each of the five pathways depended on facility complexity-level plus factors from one or more of the three domains of weight management: comprehensive lifestyle interventions, pharmacotherapy, and/or bariatric surgery. Three pathways include components from multiple treatment domains. Combinations of conditions formed "recipes" that led to higher population impact. Our coincidence analyses highlighted both the importance of local context and how combinations of specific conditions consistently and uniquely distinguished higher impact facilities from lower impact facilities for weight management.


Obesity can contribute to increased rates of ill health and earlier death. Proven treatments for obesity include programs that help people improve lifestyle behaviors (e.g., being physically active), medications, and/or bariatric surgery. In the Veterans Health Administration (VHA), all three types of treatments are offered, but not at every medical center; in practice, individual medical centers offer different combinations of treatment options to their patients. VHA medical centers also have a wide range of population impact. We identified high-impact medical centers (centers with the most patients participating in obesity treatment who would benefit from treatment AND that reported the most weight loss for their patients) and examined which treatment configurations led to better population-level outcomes (i.e., higher population impact). We used a novel analysis approach that allows us to compare combinations of treatment components, instead of analyzing them one-by-one. We found that optimal combinations are context-sensitive and depend on the type of center (e.g., large centers affiliated with a university vs. smaller rural centers). We list five different "recipes" of treatment combinations leading to higher population-level impact. This information can be used by clinical leaders to design treatment programs to maximize benefits for their patients.


Subject(s)
Veterans Health , Veterans , United States/epidemiology , Humans , United States Department of Veterans Affairs , Cross-Sectional Studies , Obesity/therapy , Obesity/epidemiology
15.
Am J Manag Care ; 27(12): e413-e419, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34889583

ABSTRACT

OBJECTIVES: Use of anesthesia-assisted (AA) sedation for routine gastrointestinal (GI) endoscopy has increased markedly. Clinical uncertainty about which patients are most likely to benefit from AA sedation contributes to this increased use. We aimed to estimate the prevalence of failed endoscopist-directed sedation and to identify patients at elevated risk of failing standard sedation. STUDY DESIGN: Retrospective longitudinal study of national Veterans Health Administration (VA) data of all patients who underwent esophagogastroduodenoscopy and/or colonoscopy in 2009-2013. METHODS: Using multivariable logistic regression, we sought to identify patient and procedural risk factors for failed sedation. Failed sedation cases were identified electronically and validated by chart review. RESULTS: Of 302,247 standard sedation procedures performed at VA facilities offering AA sedation, we identified 313 cases of failed sedation (prevalence, 0.10%). None of the factors found to be associated with increased risk of failed sedation (eg, high-dose opioid use, younger age) had an odds ratio greater than 3. Even among the highest-risk patients (top decile), the prevalence of failed sedation was only 0.29%. CONCLUSIONS: Failed sedation among patients undergoing routine outpatient GI endoscopy with standard sedation is very rare, even among patients at highest risk. This suggests that concerns regarding failed sedation due to commonly cited factors such as chronic opioid use and obesity do not justify forgoing standard sedation in favor of AA sedation in most patients. It also suggests that use of AA sedation is generally unnecessary. Reinstatement of endoscopist-directed sedation, rather than AA sedation, as the default sedation standard is warranted to reduce low-value care and prevent undue financial burdens on patients.


Subject(s)
Anesthesia , Clinical Decision-Making , Colonoscopy , Conscious Sedation , Endoscopy, Gastrointestinal , Humans , Hypnotics and Sedatives , Longitudinal Studies , Retrospective Studies , Uncertainty
16.
PLoS One ; 16(9): e0257520, 2021.
Article in English | MEDLINE | ID: mdl-34543353

ABSTRACT

INTRODUCTION: Previous work had shown that machine learning models can predict inflammatory bowel disease (IBD)-related hospitalizations and outpatient corticosteroid use based on patient demographic and laboratory data in a cohort of United States Veterans. This study aimed to replicate this modeling framework in a nationally representative cohort. METHODS: A retrospective cohort design using Optum Electronic Health Records (EHR) was used to identify IBD patients with at least 12 months of follow-up between 2007 and 2018. IBD flare was defined as an inpatient/emergency visit with a diagnosis of IBD or an outpatient corticosteroid prescription for IBD. Predictors included demographic and laboratory data. Logistic regression and random forest (RF) models were used to predict IBD flare within 6 months of each visit. A 70% training and 30% validation approach was used. RESULTS: A total of 95,878 patients across 780,559 visits were identified. Of these, 22,245 (23.2%) patients had at least one IBD flare. Patients were predominantly White (87.7%) and female (57.1%), with a mean age of 48.0 years. The logistic regression model had an area under the receiver operating curve (AuROC) of 0.66 (95% CI: 0.65-0.66), sensitivity of 0.69 (95% CI: 0.68-0.70), and specificity of 0.74 (95% CI: 0.73-0.74) in the validation cohort. The RF model had an AuROC of 0.80 (95% CI: 0.80-0.81), sensitivity of 0.74 (95% CI: 0.73-0.74), and specificity of 0.72 (95% CI: 0.72-0.72) in the validation cohort. Important predictors of IBD flare in the RF model were the number of previous flares, age, potassium, and white blood cell count. CONCLUSION: The machine learning modeling framework was replicated and showed similar predictive accuracy in a nationally representative cohort of IBD patients. This modeling framework could be embedded in routine practice as a tool to identify patients at high risk of disease activity.
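The validation metrics reported in this abstract (AuROC, sensitivity, specificity) can be computed directly from predicted scores and labels. A minimal pure-Python sketch using the rank-based (Mann-Whitney) form of the AUROC, with made-up data rather than the study's models or cohort:

```python
def auroc(scores, labels):
    """Rank-based AUROC: fraction of (positive, negative) pairs
    where the positive case scores higher, ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(preds, labels):
    """Sensitivity and specificity from binary predictions."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

A perfectly ranked score list gives AUROC 1.0; random ranking hovers near 0.5, the floor against which the reported 0.66 and 0.80 are judged.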


Subject(s)
Adrenal Cortex Hormones/therapeutic use , Algorithms , Inflammatory Bowel Diseases/drug therapy , Adult , Area Under Curve , Female , Hospitalization , Humans , Inflammatory Bowel Diseases/diagnosis , Leukocytes/cytology , Logistic Models , Male , Middle Aged , ROC Curve , Retrospective Studies
17.
JAMA Netw Open ; 4(3): e210313, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33646314

ABSTRACT

Importance: Inflammatory bowel disease (IBD) is commonly treated with corticosteroids and anti-tumor necrosis factor (TNF) drugs; however, medications have well-described adverse effects. Prior work suggests that anti-TNF therapy may reduce all-cause mortality compared with prolonged corticosteroid use among Medicare and Medicaid beneficiaries with IBD. Objective: To examine the association between use of anti-TNF or corticosteroids and all-cause mortality in a national cohort of veterans with IBD. Design, Setting, and Participants: This cohort study used a well-established Veterans Health Administration cohort of 2997 patients with IBD treated with prolonged corticosteroids (≥3000-mg prednisone equivalent and/or ≥600 mg of budesonide within a 12-month period) and/or new anti-TNF therapy from January 1, 2006, to October 1, 2015. Data were analyzed between July 1, 2019, and December 31, 2020. Exposures: Use of corticosteroids or anti-TNF. Main Outcomes and Measures: The primary end point was all-cause mortality as defined by the Veterans Health Administration vital status file. Marginal structural modeling was used to compare associations between anti-TNF therapy or corticosteroid use and all-cause mortality. Results: A total of 2997 patients (2725 men [90.9%]; mean [SD] age, 50.0 [17.4] years) were included in the final analysis, 1734 (57.9%) with Crohn disease (CD) and 1263 (42.1%) with ulcerative colitis (UC). All-cause mortality was 8.5% (n = 256) over a mean (SD) of 3.9 (2.3) years' follow-up. At cohort entry, 1836 patients were new anti-TNF therapy users, and 1161 were prolonged corticosteroid users. Anti-TNF therapy use was associated with a lower likelihood of mortality for CD (odds ratio [OR], 0.54; 95% CI, 0.31-0.93) but not for UC (OR, 0.33; 95% CI, 0.10-1.10). 
In a sensitivity analysis adjusting prolonged corticosteroid users to include patients receiving corticosteroids within 90 to 270 days after initiation of anti-TNF therapy, the OR for UC was statistically significant, at 0.33 (95% CI, 0.13-0.84), and the OR for CD was 0.55 (95% CI, 0.33-0.92). Conclusions and Relevance: This study suggests that anti-TNF therapy may be associated with reduced mortality compared with long-term corticosteroid use among veterans with CD, and potentially among those with UC.
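Marginal structural models of the kind used in this study are typically fit with inverse-probability-of-treatment weights. A minimal sketch of the standard stabilized-weight formula, with assumed propensity scores for illustration (the abstract does not give the study's actual weighting specification):

```python
def stabilized_iptw(treated, propensity):
    """Stabilized IPT weights: P(A=1)/e(X) for treated subjects,
    P(A=0)/(1 - e(X)) for untreated, where e(X) is the propensity
    score and P(A=.) is the marginal treatment probability."""
    p_treat = sum(treated) / len(treated)
    weights = []
    for a, e in zip(treated, propensity):
        weights.append(p_treat / e if a == 1 else (1 - p_treat) / (1 - e))
    return weights
```

Stabilization keeps the weights centered near 1, which tames the variance inflation that raw inverse-probability weights can produce when propensity scores sit near 0 or 1.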


Subject(s)
Budesonide/therapeutic use , Colitis, Ulcerative/drug therapy , Colitis, Ulcerative/mortality , Crohn Disease/drug therapy , Crohn Disease/mortality , Glucocorticoids/therapeutic use , Prednisone/therapeutic use , Tumor Necrosis Factor Inhibitors/therapeutic use , Adult , Aged , Aged, 80 and over , Cause of Death , Cohort Studies , Female , Humans , Male , Middle Aged , United States , United States Department of Veterans Affairs , Veterans Health , Young Adult
18.
Open Forum Infect Dis ; 7(5): ofaa149, 2020 May.
Article in English | MEDLINE | ID: mdl-32500088

ABSTRACT

BACKGROUND: Between 2007 and 2015, inpatient fluoroquinolone use declined in US Veterans Affairs (VA) hospitals. Whether fluoroquinolone use at discharge also declined, in particular since antibiotic stewardship programs became mandated at VA hospitals in 2014, is unknown. METHODS: In this retrospective cohort study of hospitalizations with infection between January 1, 2014, and December 31, 2017, at 125 VA hospitals, we assessed inpatient and discharge fluoroquinolone (ciprofloxacin, levofloxacin, moxifloxacin) use as (a) proportion of hospitalizations with a fluoroquinolone prescribed and (b) fluoroquinolone-days per 1000 hospitalizations. After adjusting for illness severity, comorbidities, and age, we used multilevel logit and negative binomial models to assess for hospital-level variation and longitudinal prescribing trends. RESULTS: Of 560219 hospitalizations meeting inclusion criteria as hospitalizations with infection, 37.4% (209602/560219) had a fluoroquinolone prescribed either during hospitalization (32.5%, 182337/560219) or at discharge (19.6%, 110003/560219). Hospitals varied appreciably in inpatient, discharge, and total fluoroquinolone use, with 71% of hospitals in the highest prescribing quartile located in the Southern United States. Nearly all measures of fluoroquinolone use decreased between 2014 and 2017, with the largest decreases found in inpatient fluoroquinolone and ciprofloxacin use. In contrast, there was minimal decline in fluoroquinolone use at discharge, which accounted for a growing percentage of hospitalization-related fluoroquinolone-days (52.0% in 2014; 61.3% by 2017). CONCLUSIONS: Between 2014 and 2017, fluoroquinolone use decreased in VA hospitals, largely driven by decreased inpatient fluoroquinolone (especially ciprofloxacin) use. Fluoroquinolone prescribing at discharge, as well as levofloxacin prescribing overall, is a growing target for stewardship.

19.
Obesity (Silver Spring) ; 28(7): 1205-1214, 2020 07.
Article in English | MEDLINE | ID: mdl-32478469

ABSTRACT

OBJECTIVE: Administrative data are increasingly used in research and evaluation yet lack standardized guidelines for constructing measures using these data. Body weight measures from administrative data serve critical functions of monitoring patient health, evaluating interventions, and informing research. This study aimed to describe the algorithms used by researchers to construct and use weight measures. METHODS: A structured, systematic literature review of studies that constructed body weight measures from the Veterans Health Administration was conducted. Key information regarding time frames and time windows of data collection, measure calculations, data cleaning, treatment of missing and outlier weight values, and validation processes was collected. RESULTS: We identified 39 studies out of 492 nonduplicated records for inclusion. Studies parameterized weight outcomes as change in weight from baseline to follow-up (62%), weight trajectory over time (21%), proportion of participants meeting weight threshold (46%), or multiple methods (28%). Most (90%) reported total time in follow-up and number of time points. Fewer reported time windows (54%), outlier values (51%), missing values (34%), or validation strategies (15%). CONCLUSIONS: A high variability in the operationalization of weight measures was found. Improving methods to construct clinical measures will support transparency and replicability in approaches, guide interpretation of findings, and facilitate comparisons across studies.
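As one concrete illustration of the algorithm choices the review catalogues (time windows, plausibility bounds, outlier handling), here is a minimal hypothetical cleaning rule, not one drawn from any of the 39 reviewed studies: keep weight values inside a plausibility range and inside a window around a target date, then take the closest-in-time value.

```python
def clean_weight(records, t0, window_days=30, lo=75.0, hi=700.0):
    """records: list of (days_from_baseline, weight_lb) pairs.
    Discard implausible values outside [lo, hi] and values more than
    window_days from the target day t0; return the weight closest in
    time to t0, or None if nothing survives the filters."""
    eligible = [(abs(d - t0), w) for d, w in records
                if lo <= w <= hi and abs(d - t0) <= window_days]
    return min(eligible)[1] if eligible else None
```

Every constant here (the 30-day window, the 75-700 lb plausibility range, closest-in-time selection) is exactly the kind of parameter the review found to be reported inconsistently across studies.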


Subject(s)
Body Weight , Body Weights and Measures/statistics & numerical data , Databases, Factual/supply & distribution , National Health Programs/organization & administration , Body Weights and Measures/methods , Databases, Factual/standards , Humans , National Health Programs/standards , National Health Programs/statistics & numerical data , Registries , Research Design , United States/epidemiology , Veterans/statistics & numerical data , Veterans Health Services/organization & administration , Veterans Health Services/statistics & numerical data
20.
Medicine (Baltimore) ; 99(24): e20385, 2020 Jun 12.
Article in English | MEDLINE | ID: mdl-32541458

ABSTRACT

Template matching is a proposed approach for hospital benchmarking, which measures performance based on matching a subset of comparable patient hospitalizations from each hospital. We assessed the ability to create the required matched samples and thus the feasibility of template matching to benchmark hospital performance in a diverse healthcare system. SETTING: Nationwide Veterans Affairs (VA) hospitals, 2017. DESIGN: Observational cohort study. We used administrative and clinical data from 668,592 hospitalizations at 134 VA hospitals in 2017. A standardized template of 300 hospitalizations was selected, and then 300 hospitalizations were matched to the template from each hospital. There was substantial case-mix variation across VA hospitals, which persisted after excluding small hospitals, hospitals with primarily psychiatric admissions, and hospitalizations for rare diagnoses. Median age ranged from 57 to 75 years across hospitals; percent surgical admissions ranged from 0.0% to 21.0%; percent of admissions through the emergency department, 0.1% to 98.7%; and percent Hispanic patients, 0.2% to 93.3%. Characteristics for which there was substantial variation across hospitals could not be balanced with any matching algorithm tested. Although most other variables could be balanced, we were unable to identify a matching algorithm that balanced more than ∼20 variables simultaneously. We were unable to identify a template matching approach that could balance hospitals on all measured characteristics potentially important to benchmarking. Given the magnitude of case-mix variation across VA hospitals, a single template is likely not feasible for general hospital benchmarking.
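Balance between a matched sample and the template, as attempted in this study, is conventionally assessed covariate by covariate with standardized mean differences, where |SMD| < 0.1 is a common threshold for calling a characteristic "balanced." A minimal sketch of that check (an assumed diagnostic, not the study's own algorithm):

```python
import math

def standardized_mean_difference(x_a, x_b):
    """SMD between two samples of one covariate:
    (mean_a - mean_b) / pooled standard deviation."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):  # sample variance
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    pooled_sd = math.sqrt((var(x_a) + var(x_b)) / 2)
    return (mean(x_a) - mean(x_b)) / pooled_sd
```

A matching algorithm "balances" ~20 variables, in the sense used above, when all ~20 per-covariate SMDs fall inside the chosen threshold simultaneously.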


Subject(s)
Benchmarking/methods , Delivery of Health Care, Integrated/statistics & numerical data , Hospitalization/statistics & numerical data , Hospitals, Veterans/statistics & numerical data , Aged , Algorithms , Benchmarking/standards , Cohort Studies , Diagnosis-Related Groups/trends , Emergency Service, Hospital/statistics & numerical data , Feasibility Studies , Female , Hispanic or Latino/statistics & numerical data , Hospitalization/trends , Humans , Male , Middle Aged , Mortality/trends , Outcome Assessment, Health Care/methods , Quality of Health Care/statistics & numerical data , Surgery Department, Hospital/statistics & numerical data , United States/epidemiology , United States Department of Veterans Affairs/organization & administration