ABSTRACT
BACKGROUND: Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can only explain up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. OBJECTIVE: To develop a case-mix algorithm that hospitals and payers can use to measure and compare cost and quality performance of their providers. METHODS: All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R², mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. RESULTS: The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R² reached 0.72 and 0.52 for the transformed and raw scale cost, respectively. CONCLUSIONS: The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purpose.
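As a point of reference, the fit statistics named above can be reproduced on simulated data. The following is a minimal sketch, not the study's model; every variable, dimension, and coefficient in it is an illustrative placeholder.

```python
# Minimal sketch: fit statistics for a patient-level cost model on simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
X = rng.poisson(0.05, size=(n, 50))                    # stand-in comorbidity-group flags
beta = rng.gamma(2.0, 500.0, size=50)
cost = rng.lognormal(np.log(X @ beta + 1_000.0), 1.0)  # skewed, cost-like outcome

# Split-sample validation, as in the study, to check for overfitting
X_dev, X_val, y_dev, y_val = train_test_split(X, cost, test_size=0.5, random_state=0)
model = LinearRegression().fit(X_dev, np.log(y_dev))   # transformed-scale model
pred = np.exp(model.predict(X_val))                    # back to the raw dollar scale

r2 = 1.0 - np.sum((y_val - pred) ** 2) / np.sum((y_val - y_val.mean()) ** 2)
mape = np.mean(np.abs(y_val - pred) / y_val)           # mean absolute percentage error
rmse = np.sqrt(np.mean((y_val - pred) ** 2))           # root mean square error
pr = pred.sum() / y_val.sum()                          # predictive ratio (~1 = unbiased)
print(f"R2={r2:.3f}  MAPE={mape:.3f}  RMSE={rmse:,.0f}  PR={pr:.3f}")
```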
Subjects
Algorithms , Diagnosis-Related Groups/standards , Models, Statistical , Quality Assurance, Health Care/standards , Adult , Female , Humans , International Classification of Diseases , Male , Middle Aged , Quality Assurance, Health Care/economics , Veterans/statistics & numerical data
ABSTRACT
This study assessed the 2014 clinical productivity of 5,959 physician assistants (PAs) and nurse practitioners (NPs) in the US Department of Veterans Affairs' Veterans Health Administration (VHA). Total work relative value units divided by the direct clinical full-time equivalent measured annual productivity, and correlated factors were examined using weighted analysis of variance. PAs and NPs in adult primary care roles were more productive than those in other specialties. Both providers were more productive in rural than in nonrural settings and less productive in teaching than nonteaching hospitals. Men were slightly more productive than women but age and years of VHA employment were not correlates of productivity. PAs were more productive when their scope of practice allowed significant autonomy; NP productivity was unaffected by supervisory requirements. PAs and NPs are an important component of the VHA provider workforce, and their productivity correlates with a number of factors. More organizational research is necessary to better understand the contributing roles PAs and NPs provide in a rapidly evolving, vertically integrated, national health delivery system.
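The productivity measure described above is a simple ratio; a small illustrative helper (hypothetical numbers, not study data) makes the arithmetic explicit.

```python
# Illustrative calculation of the productivity measure described above:
# annual work RVUs divided by direct clinical FTE. Numbers are hypothetical.
def annual_productivity(total_work_rvus: float, direct_clinical_fte: float) -> float:
    """Work RVUs generated per 1.0 direct clinical FTE per year."""
    if direct_clinical_fte <= 0:
        raise ValueError("direct clinical FTE must be positive")
    return total_work_rvus / direct_clinical_fte

# A provider spending 60% of a full-time schedule on direct patient care
# who generates 1,800 wRVUs in a year:
print(annual_productivity(1_800, 0.6))  # -> 3000.0 wRVUs per clinical FTE
```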
Subjects
Nurse Practitioners , Physician Assistants , Veterans Health , Adult , Female , Humans , Male , Primary Health Care , United States , United States Department of Veterans Affairs
ABSTRACT
BACKGROUND: Hospitalizations due to ambulatory care sensitive conditions (ACSCs) are widely accepted as an indicator of primary care access and effectiveness. However, broad early intervention for all patients in a health care system may be deemed infeasible due to limited resources. OBJECTIVE: To develop a predictive model to identify high-risk patients for early intervention to reduce ACSC hospitalizations, and to explore the predictive power of different variables. METHODS: The study population included all patients treated for ACSCs in the VA system in fiscal years (FY) 2011 and 2012 (n=2,987,052). With all predictors from FY2011, we developed a statistical model using hierarchical logistic regression with a random intercept to predict the risk of ACSC hospitalization in the first 90 days and the full year of FY2012. In addition, we configured separate models to assess the predictive power of different variables. We used a random split-sample method to prevent overfitting. RESULTS: For hospitalizations within the first 90 days of FY2012, the full model reached c-statistics of 0.856 (95% CI, 0.853-0.860) and 0.856 (95% CI, 0.852-0.860) for the development and validation samples, respectively. For predictive power of the variables, the model with only a random intercept yielded c-statistics of 0.587 (95% CI, 0.582-0.593) and 0.578 (95% CI, 0.573-0.583), respectively; with patient demographic and socioeconomic variables added, the c-statistics improved to 0.725 (95% CI, 0.720-0.729) and 0.721 (95% CI, 0.717-0.726), respectively; adding prior-year utilization and cost raised the c-statistics to 0.826 (95% CI, 0.822-0.830) and 0.826 (95% CI, 0.822-0.830), respectively; the full model was completed by adding hierarchical condition categories (HCCs). For the 1-year hospitalizations, only the full model was fitted, which yielded c-statistics of 0.835 (95% CI, 0.831-0.837) and 0.833 (95% CI, 0.830-0.837), respectively, for the development and validation samples. CONCLUSIONS: Our analyses demonstrate that administrative data can be effective in predicting ACSC hospitalizations. With its high predictive ability, the model can assist primary care providers in identifying high-risk patients for early intervention to reduce ACSC hospitalizations.
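A rough sketch of the nested-model comparison described above, on simulated data: successively richer predictor sets are fitted and compared by held-out c-statistic (AUC). The paper's models also carried a facility-level random intercept, which this sketch omits for brevity; all column names and data are invented.

```python
# Sketch: c-statistics for nested predictor sets on a simulated cohort.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 50_000
df = pd.DataFrame({
    "age": rng.normal(65, 12, n),
    "low_income": rng.integers(0, 2, n),   # demographic / socioeconomic
    "prior_admits": rng.poisson(0.3, n),   # prior-year utilization
    "prior_cost": rng.gamma(2.0, 4.0, n),  # prior-year cost, $1,000s
    "hcc_count": rng.poisson(1.5, n),      # comorbidity burden (HCC-like)
})
logit = -5 + 0.02*df.age + 0.4*df.low_income + 0.8*df.prior_admits + 0.3*df.hcc_count
df["hosp"] = rng.random(n) < 1/(1 + np.exp(-logit))

nested = {
    "demographic": ["age", "low_income"],
    "+ utilization/cost": ["age", "low_income", "prior_admits", "prior_cost"],
    "full (+ HCC)": ["age", "low_income", "prior_admits", "prior_cost", "hcc_count"],
}
dev, val = train_test_split(df, test_size=0.5, random_state=1)
for label, cols in nested.items():
    m = LogisticRegression(max_iter=1000).fit(dev[cols], dev.hosp)
    auc = roc_auc_score(val.hosp, m.predict_proba(val[cols])[:, 1])
    print(f"{label:>20s}: c-statistic = {auc:.3f}")
```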
Subjects
Hospitalization/statistics & numerical data , Adult , Aged , Ambulatory Care/statistics & numerical data , Female , Humans , Male , Middle Aged , Models, Statistical , Risk Factors , United States , United States Department of Veterans Affairs/statistics & numerical data
ABSTRACT
BACKGROUND: Studies about nurse staffing and patient outcomes often lack adequate risk adjustment because of limited access to patient information. OBJECTIVE: The aim of this study was to examine the impact of patient-level risk adjustment on the associations of unit-level nurse staffing with 30-day inpatient mortality. METHODS: This retrospective cross-sectional study included 284,097 patients discharged during 2007-2008 from 446 acute care nursing units at 128 Veterans Affairs medical centers. The association of nurse staffing with 30-day mortality was assessed using hierarchical logistic models under three risk-adjustment conditions: using no patient information (low), using patient demographics and diagnoses (moderate), or using patient demographics and diagnoses plus physiological measures (high). RESULTS: Discrimination of the models improved as the level of risk adjustment increased. The c-statistics for models with low, moderate, and high risk adjustment were 0.64, 0.74, and 0.88 for non-ICU patients and 0.66, 0.76, and 0.88 for ICU patients. For non-ICU patients, higher RN skill mix was associated with lower 30-day mortality across all three levels of risk adjustment. For ICU patients, higher total nursing hours per patient day was strongly associated with higher mortality with moderate risk adjustment (p = .0002), but this counterintuitive association was not significant with low or high risk adjustment. DISCUSSION: Inadequate risk adjustment may lead to biased estimates about nurse staffing and patient outcomes. Combining physiological measures with commonly used administrative data is a promising risk-adjustment approach to reduce potential biases.
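For reference, hierarchical logistic models of this kind have the standard patient-within-unit random-intercept form; a generic sketch in our notation, not reproduced from the paper:

```latex
\operatorname{logit}\Pr(Y_{ij}=1 \mid \mathbf{x}_{ij}, u_j)
  = \beta_0 + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
\qquad u_j \sim \mathcal{N}(0,\sigma^2)
```

Here Y_ij indicates 30-day mortality for patient i on nursing unit j, x_ij holds the covariates for the low, moderate, or high condition, and u_j is the unit-level random intercept; the c-statistics above compare discrimination across the three covariate sets.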
Subjects
Critical Care , Hospital Mortality , Hospitals, Veterans , Nursing Staff, Hospital/statistics & numerical data , Personnel Staffing and Scheduling/statistics & numerical data , Risk Adjustment , Aged , Cross-Sectional Studies , Female , Humans , Logistic Models , Male , Middle Aged , Retrospective Studies , Selection Bias , United States , Workforce
ABSTRACT
OBJECTIVE: To assess the relationship between the volume of nonoperative mechanically ventilated patients receiving care in a specific Veterans Health Administration hospital and their mortality. DESIGN: Retrospective cohort study. SETTING: One hundred nineteen Veterans Health Administration medical centers. PATIENTS: We identified 5,131 hospitalizations during 2009 involving mechanically ventilated patients in an intensive care unit who did not receive surgery. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We extracted demographic and clinical data from the VA Inpatient Evaluation Center. For each hospital, we defined volume as the total number of nonsurgical admissions receiving mechanical ventilation in an intensive care unit during 2009. We examined the hospital contribution to 30-day mortality using multilevel logistic regression models with a random intercept for each hospital. We quantified the extent of interhospital variation in 30-day mortality using the intraclass correlation coefficient and the median odds ratio. We used generalized estimating equations to examine the relationship between volume and 30-day mortality and risk-adjusted all models using a patient-level prognostic score derived from clinical data representing the risk of death conditional on treatment at a high-volume hospital. Mean age for the sample was 65 (SD 11) years, 97% were men, and 60% were white. The median VA hospital cared for 40 (interquartile range 19-62) mechanically ventilated patients in 2009. Crude 30-day mortality for these patients was 36.9%. After reliability and risk adjustment to the median patient, adjusted hospital-level mortality varied from 33.5% to 40.6%. The intraclass correlation coefficient for the hospital-level variation was 0.6% (95% confidence interval 0.1, 3.4%), with a median odds ratio of 1.15 (95% confidence interval 1.06, 1.38). The relationship between hospital volume of mechanically ventilated patients and 30-day mortality was not statistically significant: each 50-patient increase in volume was associated with a nonsignificant 2% decrease in the odds of death within 30 days (odds ratio 0.98, 95% confidence interval 0.87-1.10). CONCLUSIONS: Veterans Health Administration hospitals caring for lower volumes of mechanically ventilated patients do not have worse mortality. Mechanisms underlying this finding are unclear but, if elucidated, may offer other integrated health systems ways to overcome the disadvantages of small-volume centers in achieving good outcomes.
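The two heterogeneity summaries reported above are standard functions of the hospital-level random-intercept variance σ²; the usual (Larsen-Merlo) formulas, sketched here rather than quoted from the paper, are:

```latex
\mathrm{ICC} = \frac{\sigma^{2}}{\sigma^{2} + \pi^{2}/3},
\qquad
\mathrm{MOR} = \exp\!\bigl(\sqrt{2\sigma^{2}}\,\Phi^{-1}(0.75)\bigr)
```

where π²/3 ≈ 3.29 is the latent patient-level variance of the logistic model and Φ⁻¹(0.75) ≈ 0.674. The MOR of 1.15 reported above corresponds to a 15% median increase in the odds of death when an identical patient is moved from a lower- to a higher-mortality hospital.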
Subjects
Cause of Death , Critical Illness/mortality , Hospital Mortality/trends , Hospitals, Veterans/statistics & numerical data , Respiration, Artificial/mortality , Aged , Cohort Studies , Confidence Intervals , Critical Illness/therapy , Databases, Factual , Female , Humans , Intensive Care Units , Length of Stay , Logistic Models , Male , Middle Aged , Odds Ratio , Quality Control , Respiration, Artificial/methods , Respiration, Artificial/statistics & numerical data , Retrospective Studies , Risk Assessment , Surgical Procedures, Operative , Survival Analysis , United States , Workload
ABSTRACT
INTRODUCTION: Reliance on administrative data sources and a cohort with a restricted age range (Medicare, 65 y and above) may limit conclusions drawn from public reporting of 30-day mortality rates in 3 diagnoses [acute myocardial infarction (AMI), congestive heart failure (CHF), pneumonia (PNA)] from the Centers for Medicare and Medicaid Services. METHODS: We categorized patients with diagnostic codes for AMI, CHF, and PNA admitted to 138 Veterans Administration hospitals (2006-2009) into 2 groups (less than 65 y or ALL), then applied 3 different models that predicted 30-day mortality [Centers for Medicare and Medicaid Services administrative (ADM), ADM+laboratory data (PLUS), and clinical (CLIN)] to each age/diagnosis group. The C statistic (CSTAT) and Hosmer-Lemeshow goodness of fit measured discrimination and calibration. The Pearson correlation coefficient (r) compared the relationship between the hospitals' risk-standardized mortality rates (RSMRs) calculated with different models. Hospitals were rated as significantly different (SD) when confidence intervals (bootstrapping) omitted the national RSMR. RESULTS: The ≥65-year models included 57%-67% of all patients (78%-82% of deaths). The PLUS models improved discrimination and calibration across diagnoses and age groups (CSTAT-CHF/65 y and above: 0.67 vs. 0.773 vs. 0.761 for ADM/PLUS/CLIN; Hosmer-Lemeshow goodness of fit significant for 4/6 ADM vs. 2/6 PLUS models). Correlation of RSMRs was good between ADM and PLUS (r: AMI 0.859; CHF 0.821; PNA 0.750) and between the ≥65-year and ALL groups (r>0.90). SD ratings changed in 1%-12% of hospitals (greatest change in PNA). CONCLUSIONS: Performance measurement systems should include laboratory data, which improve model performance. Changes in SD ratings suggest caution in using a single metric to label hospital performance.
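For readers unfamiliar with risk-standardized mortality rates, the usual CMS-style construction is sketched below in our notation; the paper's exact implementation may differ.

```latex
\mathrm{RSMR}_h \;=\; \frac{\hat{P}_h}{\hat{E}_h} \times \bar{y}
```

where the numerator sums patient-level predicted mortality probabilities using hospital h's estimated hospital effect, the denominator sums the same predictions using the average effect, and ȳ is the national crude 30-day mortality rate; RSMRs above (below) ȳ indicate worse (better) than expected performance.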
Subjects
Centers for Medicare and Medicaid Services, U.S./statistics & numerical data , Data Collection/methods , Heart Failure/mortality , Myocardial Infarction/mortality , Pneumonia/mortality , Age Factors , Aged , Clinical Laboratory Techniques , Comorbidity , Hospitals, Veterans , Humans , Models, Statistical , Risk Adjustment , United States/epidemiology
ABSTRACT
OBJECTIVES: Hyperglycemia during critical illness is common and is associated with increased mortality. Intensive insulin therapy has improved outcomes in some, but not all, intervention trials. It is unclear whether the benefits of treatment differ among specific patient populations. The purpose of the study was to determine the association between hyperglycemia and risk-adjusted mortality in critically ill patients and in separate groups stratified by admission diagnosis. A secondary purpose was to determine whether mortality risk from hyperglycemia varies with intensive care unit type, length of stay, or diagnosed diabetes. DESIGN: Retrospective cohort study. SETTING: One hundred seventy-three U.S. medical, surgical, and cardiac intensive care units. PATIENTS: Two hundred fifty-nine thousand forty admissions from October 2002 to September 2005; unadjusted mortality rate, 11.2%. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: A two-level logistic regression model determined the relationship between glycemia and mortality. Age, diagnosis, comorbidities, and laboratory variables were used to calculate a predicted mortality rate, which was then analyzed with mean glucose to determine the association of hyperglycemia with hospital mortality. Hyperglycemia was associated with increased mortality independent of illness severity. Compared with normoglycemic individuals (70-110 mg/dL), the adjusted odds of mortality (odds ratio [95% confidence interval]) for mean glucose 111-145, 146-199, 200-300, and >300 mg/dL were 1.31 (1.26-1.36), 1.82 (1.74-1.90), 2.13 (2.03-2.25), and 2.85 (2.58-3.14), respectively. Furthermore, the adjusted odds of mortality related to hyperglycemia varied with admission diagnosis, demonstrating a clear association in some patients (acute myocardial infarction, arrhythmia, unstable angina, pulmonary embolism) and little or no association in others. Hyperglycemia was associated with increased mortality independent of intensive care unit type, length of stay, and diabetes. CONCLUSIONS: The association between hyperglycemia and mortality implicates hyperglycemia as a potentially harmful and correctable abnormality in critically ill patients. The finding that hyperglycemia-related risk varied with admission diagnosis suggests differences in the interaction between specific medical conditions and injury from hyperglycemia. The design and interpretation of future trials should consider the primary disease states of patients and the balance of medical conditions in the intensive care unit studied.
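A sketch of the banded adjusted-odds-ratio analysis described above, on simulated data: mortality is regressed on mean-glucose bands plus a severity score standing in for the paper's predicted-mortality adjustment. All names, data, and coefficients are illustrative.

```python
# Sketch: adjusted odds ratios for glucose bands vs. the normoglycemic referent.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100_000
severity = rng.normal(0, 1, n)                    # stand-in predicted-mortality score
glucose = rng.gamma(9.0, 15.0, n)                 # mean ICU glucose, mg/dL
bands = pd.cut(glucose, [0, 110, 145, 199, 300, np.inf],
               labels=["70-110", "111-145", "146-199", "200-300", ">300"])
logit = -2.5 + 0.9*severity + 0.25*bands.codes    # risk rises across bands
died = rng.random(n) < 1/(1 + np.exp(-logit))

X = pd.get_dummies(bands, drop_first=True).astype(float)  # "70-110" is the referent
X["severity"] = severity
X = sm.add_constant(X)
fit = sm.Logit(died.astype(float), X).fit(disp=0)
print(np.exp(fit.params).round(2))                # adjusted ORs vs. normoglycemia
```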
Subjects
Hyperglycemia/mortality , Adult , Aged , Cohort Studies , Critical Illness , Female , Humans , Hyperglycemia/complications , Male , Middle Aged , Patient Admission , Retrospective Studies , Risk Factors
ABSTRACT
Historically, the prevalence of smoking and smoking-related illnesses has been higher among veteran patients in the Veterans Health Administration (VHA) than in the general population. Although rates of tobacco use have remained high, smoking cessation interventions have continued to be greatly underutilized in VHA clinical settings, just as they have been nationally. To address tobacco use as a public health priority, VHA has implemented a number of evidence-based national initiatives in recent years. This paper describes these initiatives, including: adoption of a population-health approach to smoking cessation; increased access to nicotine replacement therapy and/or smoking cessation medications; elimination of outpatient copayments for smoking cessation counseling; clinical practice guidelines; and collaboration with mental health and substance use disorder health care providers to promote integration of smoking cessation into routine treatment of psychiatric populations. The context of tobacco use among the newest veteran populations is also discussed, as well as recent efforts to evaluate the current state of smoking cessation care in VHA.
Subjects
Evidence-Based Medicine , Health Promotion/methods , Public Health , Smoking Cessation/methods , Smoking/epidemiology , United States Department of Veterans Affairs , Health Priorities , Humans , Military Personnel , Smoking/adverse effects , Smoking/mortality , United States/epidemiology
ABSTRACT
IMPORTANCE: The use of perioperative pharmacologic β-blockade in patients at low risk of myocardial ischemic events undergoing noncardiac surgery (NCS) is controversial because of the risk of stroke and hypotension. Published studies have not found a consistent benefit in this cohort. OBJECTIVE: To determine the effect of perioperative β-blockade on patients undergoing NCS, particularly those with no risk factors. DESIGN, SETTING, AND PARTICIPANTS: This is a retrospective observational analysis of patients undergoing surgery in Veterans Affairs hospitals from October 1, 2008, through September 30, 2013. METHODS: β-Blocker use was determined if a dose was ordered at any time between 8 hours before surgery and 24 hours postoperatively. Data from the Veterans Affairs electronic database included demographics, diagnosis and procedural codes, medications, perioperative laboratory values, and date of death. A 4-point cardiac risk score was calculated by assigning 1 point each for renal failure, coronary artery disease, diabetes mellitus, and surgery in a major body cavity. Previously validated linear regression models for all hospitalized acute care medical or surgical patients were used to calculate predicted mortality and then to calculate odds ratios (ORs). MAIN OUTCOMES AND MEASURES: The end point was 30-day surgical mortality. RESULTS: There were 326,489 patients in this cohort: 314,114 underwent NCS and 12,375 underwent cardiac surgery. β-Blockade lowered the OR for mortality significantly in patients with 3 to 4 cardiac risk factors undergoing NCS (OR, 0.63; 95% CI, 0.43-0.93). It had no effect on patients with 1 to 2 risk factors. However, β-blockade resulted in a significantly higher chance of death (OR, 1.19; 95% CI, 1.06-1.35) in patients with no risk factors undergoing NCS. CONCLUSIONS AND RELEVANCE: In this large series, β-blockade appears to be beneficial perioperatively in patients with high cardiac risk undergoing NCS. However, the use of β-blockers in patients with no cardiac risk factors undergoing NCS increased the risk of death in this patient cohort.
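The 4-point risk score described in METHODS is straightforward to encode; a small sketch with invented field names follows.

```python
# Sketch of the 4-point cardiac risk score described above: one point each for
# renal failure, coronary artery disease, diabetes, and major-body-cavity
# surgery, then grouped as 0, 1-2, or 3-4 points. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class SurgicalPatient:
    renal_failure: bool
    coronary_artery_disease: bool
    diabetes: bool
    major_body_cavity_surgery: bool

def cardiac_risk_score(p: SurgicalPatient) -> int:
    return sum([p.renal_failure, p.coronary_artery_disease,
                p.diabetes, p.major_body_cavity_surgery])

def risk_group(score: int) -> str:
    return "0" if score == 0 else ("1-2" if score <= 2 else "3-4")

p = SurgicalPatient(True, True, False, True)
print(cardiac_risk_score(p), risk_group(cardiac_risk_score(p)))  # -> 3 3-4
```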
Subjects
Adrenergic beta-Antagonists/therapeutic use , Myocardial Ischemia/mortality , Perioperative Care/methods , Postoperative Complications/mortality , Surgical Procedures, Operative/mortality , Adult , Aged , Aged, 80 and over , Female , Follow-Up Studies , Humans , Male , Middle Aged , Myocardial Ischemia/prevention & control , Postoperative Complications/prevention & control , Retrospective Studies , Risk Factors , Survival Rate/trends , United States/epidemiology
ABSTRACT
OBJECTIVE: To examine the impact on infection rates and hospital rank for catheter-associated urinary tract infection (CAUTI), central line-associated bloodstream infection (CLABSI), and ventilator-associated pneumonia (VAP) of using device days versus bed days as the denominator. DESIGN: Retrospective survey from October 2010 to July 2013. SETTING: Veterans Health Administration medical centers providing acute medical and surgical care. PATIENTS: Patients admitted to 120 Veterans Health Administration medical centers reporting healthcare-associated infections. METHODS: We examined how using device days versus bed days as the denominator affected infection rates and hospital rank for CAUTI, CLABSI, and VAP at each medical center. The relationship between device days and bed days as the denominator was assessed using a Pearson correlation, and changes in infection rates and device utilization were evaluated by analysis of variance. RESULTS: A total of 7.9 million bed days were included. From 2011 to 2013, CAUTI decreased whether measured by device days (2.32 to 1.64, P=.001) or bed days (4.21 to 3.02, P=.006). CLABSI decreased when measured by bed days (1.67 to 1.19, P=.04). VAP rates and device utilization ratios for CAUTI, CLABSI, and VAP were not statistically different across time. Infection rates calculated with device days were strongly correlated with infection rates calculated with bed days (r=0.79-0.94, P<.001). Hospital relative performance measured by ordered rank was also strongly correlated for both denominators (r=0.82-0.96, P<.001). CONCLUSIONS: These findings suggest that device days and bed days are equally effective adjustment metrics for comparing healthcare-associated infection rates between hospitals in the setting of stable device utilization.
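When device utilization is stable, the two denominators yield rates that differ only by the device utilization ratio, which is the arithmetic behind the strong correlations reported above; a tiny illustration with made-up counts:

```python
# Sketch: the same infection count expressed per 1,000 device days and per
# 1,000 bed days, plus the device utilization ratio linking them. Numbers are
# illustrative, not study data.
def rate_per_1000(infections: int, denominator_days: float) -> float:
    return 1000.0 * infections / denominator_days

infections, device_days, bed_days = 12, 5_200, 18_000
per_device_days = rate_per_1000(infections, device_days)  # ~2.31
per_bed_days = rate_per_1000(infections, bed_days)        # ~0.67
utilization = device_days / bed_days                      # ~0.29
# per_bed_days == per_device_days * utilization, so with stable utilization
# hospital rankings under the two denominators track closely (r = 0.82-0.96).
print(per_device_days, per_bed_days, utilization)
```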
Subjects
Bacteremia , Catheter-Related Infections , Cross Infection , Hospitals, Veterans , Infection Control , Urinary Tract Infections , Adult , Bacteremia/epidemiology , Bacteremia/etiology , Bacteremia/therapy , Catheter-Related Infections/epidemiology , Catheter-Related Infections/etiology , Catheter-Related Infections/therapy , Central Venous Catheters/adverse effects , Cross Infection/epidemiology , Cross Infection/etiology , Cross Infection/therapy , Female , Hospitals, Veterans/standards , Hospitals, Veterans/statistics & numerical data , Humans , Infection Control/methods , Infection Control/standards , Length of Stay/statistics & numerical data , Male , Reference Standards , Retrospective Studies , Time Factors , United States/epidemiology , Urinary Tract Infections/epidemiology , Urinary Tract Infections/etiology , Urinary Tract Infections/therapy , Utilization Review
ABSTRACT
STUDY OBJECTIVE: To determine whether cell wall-deficient forms (CWDF) of mycobacteria can be grown in cultures of blood from subjects with sarcoidosis. DESIGN: A special multicenter study of sarcoidosis (A Case Control Etiologic Study of Sarcoidosis), supported by the National Heart, Lung, and Blood Institute. PATIENTS AND CONTROL SUBJECTS: Patients and control subjects were recruited at 10 institutions in the United States. Control subjects (controls) were of the same gender and race as, and within 5 years of age of, matching patients with sarcoidosis (cases). RESULTS: Cultures were incubated from 347 blood specimens (197 cases, 150 controls). Two investigators trained to recognize CWDF mycobacteria examined material obtained from culture tubes after 3 weeks. Structures thought to be CWDF were seen with equal frequency in cases (38%) and controls (41%). Thirty-nine percent of cases and 37% of controls were read as negative for CWDF. CONCLUSION: This study fails to confirm earlier reports that CWDF mycobacteria can be grown from the blood of patients with sarcoidosis but not from control subjects.
Subjects
Blood/microbiology , L Forms/isolation & purification , Mycobacterium Infections/microbiology , Sarcoidosis, Pulmonary/microbiology , Bacteriological Techniques , Case-Control Studies , Humans , Prospective Studies , Reference Values
ABSTRACT
OBJECTIVE: The relationship of respiratory symptoms to pulmonary function parameters and smoking status was assessed in subjects with chronic (>1 year) spinal cord injury (SCI). METHODS AND PARTICIPANTS: As part of their annual physical examination, subjects were queried regarding respiratory symptoms and underwent pulmonary function studies. The 180 patients who successfully completed pulmonary function testing were evaluated, including 79 subjects with tetraplegia (56 nonsmokers and 23 smokers) and 101 subjects with paraplegia (78 nonsmokers and 23 smokers). FINDINGS: Logistic-regression analysis revealed the following independent predictors of breathlessness: level of injury (tetraplegia vs. paraplegia, odds ratio = 3.5, P < 0.0015), cough combined with phlegm and/or wheeze (CPWZ, odds ratio = 3.1, P < 0.015), total lung capacity percentage predicted (TLC <60%, odds ratio = 3.9, P < 0.02), and expiratory reserve volume (ERV < 0.6 L, odds ratio = 2.5, P < 0.05). Independent predictors of CPWZ were current smoking (odds ratio = 3.3, P < 0.004), breathlessness (odds ratio = 2.9, P < 0.03), and forced expiratory volume in 1 second (FEV1 <60%, odds ratio = 3.2, P < 0.01). CONCLUSION: Altered respiratory mechanics associated with tetraplegia contribute to breathlessness, restrictive ventilatory impairment (low TLC%), and reduced expiratory muscle strength (low ERV). These factors apparently overshadow adverse effects caused by smoking. Conversely, smoking and reduced airflow (low FEV1%) were predictive of CPWZ, symptoms commonly associated with cigarette use.
Subjects
Respiration Disorders/complications , Respiration Disorders/physiopathology , Smoking/physiopathology , Spinal Cord Injuries/complications , Spinal Cord Injuries/physiopathology , Adult , Chronic Disease , Female , Humans , Lung Volume Measurements , Male , Middle Aged , Predictive Value of Tests , Regression Analysis , Respiratory Function Tests , Spirometry , Trauma Severity Indices
ABSTRACT
BACKGROUND: Elimination of hospital-acquired infections is an important patient safety goal. SETTING: All 174 medical, cardiac, surgical and mixed Veterans Administration (VA) intensive care units (ICUs). INTERVENTION: A centralised infrastructure (Inpatient Evaluation Center (IPEC)) supported implementation of the practice bundle (handwashing, maximal barriers, chlorhexidine gluconate site disinfection, avoidance of femoral catheterisation and timely removal) to reduce central line-associated bloodstream infections (CLABSI). Support included recruiting leadership, benchmarked feedback, learning tools and selective mentoring. DATA COLLECTION: Sites recorded the number of CLABSI, line days and audit results of bundle compliance on a secure website. ANALYSIS: CLABSI rates between years were compared with incidence rate ratios (IRRs) from a Poisson regression and with National Healthcare Safety Network referent rates (standardised infection ratio (SIR)). Pearson's correlation coefficient compared bundle adherence with CLABSI rates. Semi-structured interviews with teams struggling to reduce CLABSI identified common themes. RESULTS: From 2006 to 2009, CLABSI rates fell (3.8 to 1.8 per 1000 line days; p<0.01), as did the IRRs (2007: 0.83 (95% CI 0.73 to 0.94); 2008: 0.65 (95% CI 0.56 to 0.76); 2009: 0.47 (95% CI 0.40 to 0.55)). Bundle adherence and CLABSI rates showed strong correlation (r = 0.81). The VA CLABSI SIR for January to June 2009 was 0.76 (95% CI 0.69 to 0.90), and for all of FY2009, 0.88 (95% CI 0.80 to 0.97). Struggling sites lacked a functional team, forcing functions, and feedback systems. CONCLUSION: Capitalising on a large healthcare system, VA IPEC used strategies applicable to non-federal healthcare systems and communities. Such tactics included measurement through information technology, leadership, learning tools and mentoring.
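A sketch of the two rate summaries used above, on invented counts: the IRR from a Poisson regression of infection counts on year with log line days as an offset, and the SIR as observed infections over those expected at a referent rate. The referent value below is hypothetical, not the NHSN figure.

```python
# Sketch: IRR via Poisson regression with an offset, and an SIR calculation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "year": [2006, 2007, 2008, 2009],
    "clabsi": [380, 310, 240, 170],             # invented infection counts
    "line_days": [100_000, 101_000, 102_000, 103_000],
})
fit = smf.glm("clabsi ~ year", data=df, family=sm.families.Poisson(),
              offset=np.log(df.line_days)).fit()
print(f"IRR per year: {np.exp(fit.params['year']):.2f}")  # <1 means declining rate

referent_rate = 2.5 / 1000                      # hypothetical referent per line day
expected = referent_rate * df.line_days.sum()
sir = df.clabsi.sum() / expected                # observed / expected infections
print(f"SIR: {sir:.2f}")                        # <1 means fewer than expected
```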
Subjects
Catheter-Related Infections/prevention & control , Cross Infection/prevention & control , Infection Control/organization & administration , Intensive Care Units/organization & administration , Sepsis/prevention & control , Cohort Studies , Humans , Inservice Training/organization & administration , Mentors , Quality Improvement/organization & administration , United States , United States Department of Veterans Affairs
ABSTRACT
There is widespread belief that the US health care system could realize significant improvements in efficiency, savings, and patient outcomes if care were provided in a more integrated and accountable way. We examined efficiency and its relationship to quality of care for medical centers run by the Veterans Health Administration of the Department of Veterans Affairs (VA), a national, vertically integrated health care system that is accountable for a large patient population. After devising a statistical model to indicate efficiency, we found that VA medical centers were highly efficient. We also found only modest variation in the level of efficiency and cost across VA medical centers, and a positive correlation overall between greater efficiency and higher inpatient quality. These findings for VA medical centers suggest that efforts to drive integration and accountability in other parts of the US health care system might have important payoffs in reducing variations in cost without sacrificing quality. Policy makers should focus on what aspects of certain VA medical centers allow them to provide better care at lower costs and consider policies that incentivize other providers, both within and outside the VA, to adopt these practices.
Subjects
Efficiency, Organizational , Hospitals, Veterans/standards , Quality of Health Care/standards , Efficiency, Organizational/economics , Efficiency, Organizational/trends , Hospitals, Veterans/economics , Humans , Practice Patterns, Physicians'
ABSTRACT
BACKGROUND: Veterans Health Administration (VA) intensive care units (ICUs) developed an infrastructure for quality improvement using information technology and recruiting leadership. METHODS: Setting: Participation by the 183 ICUs in the quality improvement program is required. Infrastructure includes measurement (electronic data extraction, analysis), quarterly web-based reporting, and implementation support of evidence-based practices. Leaders prioritise measures based on quality improvement objectives. The electronic extraction is validated manually against the medical record, selecting hospitals whose data elements and measures fall at the extremes (10th, 90th percentile). Results are depicted in graphic, narrative and tabular reports benchmarked by type and complexity of ICU. RESULTS: The VA admits 103,689±1,156 ICU patients/year. Variation in electronic business practices, data location and the normal range of some laboratory tests affects data quality. A data management website captures data elements important to ICU performance that are not available electronically. A dashboard manages the data overload (quarterly reports ranged from 106 to 299 pages). More than 85% of ICU directors and nurse managers review their reports. Leadership interest is sustained by including ICU targets in executive performance contracts, identification of local improvement opportunities with analytic software, and focused reviews. CONCLUSION: Lessons relevant to non-VA institutions include the: (1) need for ongoing data validation, (2) essential involvement of leadership at multiple levels, (3) supplementation of electronic data when key elements are absent, (4) utility of a good but not perfect electronic indicator to move practice while improving data elements and (5) value of a dashboard.