Results 1 - 20 of 198
1.
Ann Thorac Surg ; 111(6): 1842-1848, 2021 06.
Article in English | MEDLINE | ID: mdl-33011169

ABSTRACT

BACKGROUND: Current smokers undergoing lobectomy are at greater risk of complications than are former smokers. The Society of Thoracic Surgeons (STS) composite score for rating program performance for lobectomy adjusts for smoking status, a modifiable risk factor. This study examined variability in the proportion of current smokers undergoing lobectomy among STS database participants. Additionally, the study determined whether each participant's rating changed if smoking was excluded from the risk adjustment model. METHODS: This is a retrospective analysis of the STS cohort used to develop the composite score for rating program performance for lobectomy. The study summarized the variability among STS database participants for performing lobectomy on current smokers and compared star ratings developed from models with and without smoking status. RESULTS: There were 24,912 patients with smoking status data: 23% current smokers, 62% former smokers, and 15% never smokers. There was significant variability among participants in the proportion of current smokers undergoing lobectomy (3% to 48.6%; P < .001). Major morbidity or mortality (composite) was greater in current smokers (12.1%) than in former smokers (8.6%) and never smokers (4.2%) (P < .001). Using the current risk adjustment model, participant star ratings were as follows: 1 star, n = 6 (3.2%); 2 stars, n = 170 (91.4%); and 3 stars, n = 10 (5.4%). When smoking status was excluded from the model, 1 participant shifted from a 2-star to a 3-star program. CONCLUSIONS: There is substantial variability among STS database participants with regard to the proportion of current smokers undergoing lobectomy. However, exclusion of smoking status from the model did not significantly affect participant star rating.


Subject(s)
Pneumonectomy/statistics & numerical data , Risk Adjustment/statistics & numerical data , Smoking , Aged , Databases, Factual , Female , Humans , Male , Middle Aged , Retrospective Studies , Societies, Medical , Thoracic Surgery
2.
Acta Orthop ; 91(6): 794-800, 2020 12.
Article in English | MEDLINE | ID: mdl-32698642

ABSTRACT

Background and purpose - The optimal type and duration of antibiotic prophylaxis for primary arthroplasty of the hip and knee are subject to debate. We compared the risk of complete revision (obtained by a 1- or 2-stage procedure) for periprosthetic joint infection (PJI) after primary total hip or knee arthroplasty between patients receiving a single dose of prophylactic antibiotics and patients receiving multiple doses of antibiotics for prevention of PJI. Patients and methods - A cohort of 130,712 primary total hip and 111,467 knee arthroplasties performed between 2011 and 2015 in the Netherlands was analyzed. We linked data from the Dutch arthroplasty register to a survey collected across all Dutch institutions on hospital-level antibiotic prophylaxis policy. We used restricted cubic spline Poisson models adjusted for hospital clustering to compare the risk of revision for infection according to type and duration of antibiotic prophylaxis received. Results - For total hip arthroplasties, the rates of revision for infection were 31/10,000 person-years (95% CI 28-35), 39 (25-59), and 23 (15-34) in the groups that received multiple doses of cefazolin, multiple doses of cefuroxime, and a single dose of cefazolin, respectively. The rates for knee arthroplasties were 27/10,000 person-years (95% CI 24-31), 40 (24-62), and 24 (16-36). Similar risk of complete revision for infection among antibiotic prophylaxis regimens was found when adjusting for confounders. Interpretation - In a large observational cohort we found no apparent association between the type or duration of antibiotic prophylaxis and the risk of complete revision for infection. This does question whether there is any advantage to the use of prolonged antibiotic prophylaxis beyond a single dose.


Subject(s)
Arthroplasty, Replacement, Hip , Arthroplasty, Replacement, Knee , Cefazolin/administration & dosage , Cefuroxime/administration & dosage , Prosthesis-Related Infections , Reoperation , Risk Adjustment , Anti-Bacterial Agents/administration & dosage , Antibiotic Prophylaxis/methods , Arthroplasty, Replacement, Hip/adverse effects , Arthroplasty, Replacement, Hip/methods , Arthroplasty, Replacement, Knee/adverse effects , Arthroplasty, Replacement, Knee/methods , Dose-Response Relationship, Drug , Duration of Therapy , Female , Humans , Male , Middle Aged , Netherlands/epidemiology , Outcome and Process Assessment, Health Care , Prosthesis-Related Infections/diagnosis , Prosthesis-Related Infections/epidemiology , Prosthesis-Related Infections/prevention & control , Prosthesis-Related Infections/surgery , Reoperation/methods , Reoperation/statistics & numerical data , Risk Adjustment/methods , Risk Adjustment/statistics & numerical data
3.
Med Care ; 58(8): 717-721, 2020 08.
Article in English | MEDLINE | ID: mdl-32692137

ABSTRACT

OBJECTIVE: Compare comorbidity identification in Medicare and Veterans Health Administration (VA) data for the purposes of risk adjustment. DATA SOURCES: Analysis of Medicare and VA datasets for dually-enrolled Veterans receiving care in both settings, fiscal years 2010-2014. STUDY DESIGN: A retrospective analysis of administrative data for a national sample of cancer decedents. DATA EXTRACTION METHODS: Comorbidities were evaluated using Elixhauser and Charlson coding algorithms. PRINCIPAL FINDINGS: Clinical comorbidities were more likely to be recorded in Medicare than in VA datasets. Of 42 comorbidities, 36 (86%) were recorded at a different frequency. For example, congestive heart failure was recorded for 22.0% of patients in Medicare data and for 11.3% of patients in VA data (P<0.001). CONCLUSION: There are large differences in comorbidity assessment across VA and Medicare administrative data for the same patient, posing challenges for risk adjustment.
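The comparison above amounts to flagging comorbidities per patient in each administrative source and contrasting recording frequencies. A minimal sketch in Python, under stated assumptions: the data shapes and the tiny ICD-prefix map are invented for illustration and are not the actual Elixhauser or Charlson algorithms, which span hundreds of codes per condition.

```python
# Hypothetical sketch: comparing comorbidity capture between two
# administrative sources for the same patients. The code map below is
# illustrative only, not the real Elixhauser/Charlson definitions.
COMORBIDITY_MAP = {
    "I50": "congestive_heart_failure",
    "E11": "diabetes",
    "J44": "copd",
}

def flag_comorbidities(claims):
    """Return, per patient, the set of comorbidities recorded in claims."""
    flags = {}
    for patient_id, icd_prefix in claims:
        condition = COMORBIDITY_MAP.get(icd_prefix)
        if condition:
            flags.setdefault(patient_id, set()).add(condition)
    return flags

def recording_rate(flags, patients, condition):
    """Share of patients with the condition recorded in this source."""
    recorded = sum(1 for p in patients if condition in flags.get(p, set()))
    return recorded / len(patients)

# The same three patients seen in two payer datasets; source B misses a
# heart-failure code, mimicking the differential capture the study reports.
patients = ["p1", "p2", "p3"]
source_a = [("p1", "I50"), ("p2", "I50"), ("p3", "E11")]
source_b = [("p1", "I50"), ("p3", "E11")]

rate_a = recording_rate(flag_comorbidities(source_a), patients,
                        "congestive_heart_failure")
rate_b = recording_rate(flag_comorbidities(source_b), patients,
                        "congestive_heart_failure")
```

Comparing `rate_a` and `rate_b` per condition, across all conditions, reproduces the kind of frequency gap the abstract reports (e.g., heart failure at 22.0% vs. 11.3%).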


Subject(s)
Comorbidity , Eligibility Determination/standards , Medicare/statistics & numerical data , Risk Adjustment/methods , United States Department of Veterans Affairs/statistics & numerical data , Aged , Eligibility Determination/methods , Eligibility Determination/statistics & numerical data , Female , Humans , Male , Middle Aged , Privatization/statistics & numerical data , Retrospective Studies , Risk Adjustment/statistics & numerical data , United States
4.
Circulation ; 142(1): 29-39, 2020 07 07.
Article in English | MEDLINE | ID: mdl-32408764

ABSTRACT

BACKGROUND: The utility of 30-day risk-standardized readmission rate (RSRR) as a hospital performance metric has been a matter of debate. Home time is a patient-centered outcome measure that accounts for rehospitalization, mortality, and postdischarge care. We aim to characterize risk-adjusted 30-day home time in patients with acute myocardial infarction (AMI) as a hospital-level performance metric and to evaluate associations with 30-day RSRR, 30-day risk-standardized mortality rate (RSMR), and 1-year RSMR. METHODS: The study included 984 612 patients with AMI hospitalization across 2379 hospitals between 2009 and 2015 derived from 100% Medicare claims data. Home time was defined as the number of days alive and spent outside of a hospital, skilled nursing facility, or intermediate-/long-term acute care facility 30 days after discharge. Correlations between hospital-level risk-adjusted 30-day home time and 30-day RSRR, 30-day RSMR, and 1-year RSMR were estimated with the Pearson correlation. Reclassification in hospital performance using 30-day home time versus 30-day RSRR and 30-day RSMR was also evaluated. RESULTS: Median hospital-level risk-adjusted 30-day home time was 24.0 days (range, 15.3-29.0 days). Hospitals with higher home time were more commonly academic centers, had available cardiac surgery and rehabilitation services, and had higher AMI volume and percutaneous coronary intervention use during the AMI hospitalization. Of the mean 30-day home time days lost, 58% were to intermediate-/long-term care or skilled nursing facility stays (4.7 days), 30% to death (2.5 days), and 12% to readmission (1.0 days). Hospital-level risk-adjusted 30-day home time was inversely correlated with 30-day RSMR (r=-0.22, P<0.0001) and 30-day RSRR (r=-0.25, P<0.0001). 
Patients admitted to hospitals with higher risk-adjusted 30-day home time had lower 30-day readmission (quartile 1 versus 4, 21% versus 17%), 30-day mortality rate (5% versus 3%), and 1-year mortality rate (18% versus 12%). Furthermore, 30-day home time reclassified hospital performance status in ≈30% of hospitals versus 30-day RSRR and 30-day RSMR. CONCLUSIONS: Thirty-day home time for patients with AMI can be assessed as a hospital-level performance metric with the use of Medicare claims data. It varies across hospitals, is associated with postdischarge readmission and mortality outcomes, and meaningfully reclassifies hospital performance compared with the 30-day RSRR and 30-day RSMR metrics.
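The home-time definition above is concrete enough to sketch: days alive and outside a facility in the 30 days after discharge. The function below is a hedged illustration with invented data shapes, not the Medicare claims layout used in the study.

```python
# Sketch of 30-day home time: days alive and spent outside a hospital,
# SNF, or intermediate-/long-term care facility after discharge.
def home_time_30(death_day, facility_stays, window=30):
    """death_day: post-discharge day of death (None if alive past window).
    facility_stays: list of (start_day, end_day) intervals, 1-indexed and
    inclusive, within the post-discharge window."""
    alive_days = window if death_day is None else min(death_day - 1, window)
    in_facility = set()
    for start, end in facility_stays:
        for day in range(start, end + 1):
            if 1 <= day <= alive_days:
                in_facility.add(day)
    return max(alive_days - len(in_facility), 0)

# Survived the window with a 5-day SNF stay: 25 home days.
ht1 = home_time_30(None, [(3, 7)])
# Died on day 11 after a 4-day readmission: 6 home days.
ht2 = home_time_30(11, [(1, 4)])
```

Averaging this quantity within a hospital, after risk adjustment, yields the hospital-level metric; the decomposition in the abstract (58% of lost days to facility stays, 30% to death, 12% to readmission) falls out of the same bookkeeping.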


Subject(s)
Medicare , Myocardial Infarction/epidemiology , Patient Discharge/statistics & numerical data , Patient Readmission , Risk Adjustment/statistics & numerical data , Aged , Aged, 80 and over , Comorbidity , Female , Humans , Male , Patient Outcome Assessment , United States
5.
Isr J Health Policy Res ; 9(1): 16, 2020 04 14.
Article in English | MEDLINE | ID: mdl-32290866

ABSTRACT

BACKGROUND: In 2015, mental health services were added to the Israeli National Health Insurance package of services. As such, these services are financed by the budget which is allocated to the Health Plans according to a risk adjustment scheme. An inter-ministerial team suggested a formula by which the mental health budget should be allocated among the Health Plans. Our objective in this study was to develop alternative rates based on individual data, and to evaluate the ones suggested. METHODS: The derivation of the new formula is based on our previous study of all psychiatric inpatients in Israel in the years 2012-2013 (n = 27,446), as well as outpatients in one psychiatric clinic in the same period (n = 6115). Based on Ministry of Health and clinic data we identified predictors of mental health services consumption. Age, gender, marital status and diagnosis were used as risk adjusters to calculate the capitation rates for outpatient care and inpatient care, respectively. All prices of services were obtained from the Ministry of Health tariffs. These rates were modified to include non-users using restricted models. RESULTS: The mental health capitation scales are typically "humped" with regard to age. The rates for ambulatory care varied from a minimum 0.19 of the average consumption for males above the age of 85 to a maximum of 1.93 times the average for females between the ages of 45-54. For inpatient services the highest rate was 409.25 times the average for single, male patients with schizophrenia spectrum diagnoses, aged 45-54. The overall mental health scale ranges from 2.347 times the average for men aged 45-54, to 0.191 for women aged 85+. The modified scale for the entire post-reform package of benefits (including both mental health care and physical health care) is increasing with age to 4.094 times the average in men aged over 85. The scale is flatter than the pre-reform scale. 
CONCLUSIONS: The risk adjustment rates calculated for outpatient care are substantially different from the ones suggested by the inter-ministerial team. The inpatient rates are new, and indicate that for patients with schizophrenia, a separate risk-sharing arrangement might be desirable. Adopting the rates developed in this analysis would decrease the budget shares of Clalit and Leumit with their relatively older populations, and increase Maccabi and Meuhedet's shares. Future research should develop a risk-adjustment scheme which covers directly both mental and physical care provided by the Israeli Health Plans, using their data.
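The capitation rates described above are ratios of a demographic cell's mean consumption to the overall mean, so 1.0 marks an average member. A minimal sketch, with invented cells and costs (the study's actual cells combine age, gender, marital status, and diagnosis):

```python
# Hypothetical capitation-rate construction: each cell's rate is its
# mean consumption divided by the population mean consumption.
def capitation_rates(cell_costs):
    """cell_costs: {cell: [member costs]} -> {cell: rate vs overall mean}."""
    all_costs = [c for costs in cell_costs.values() for c in costs]
    overall_mean = sum(all_costs) / len(all_costs)
    return {cell: (sum(costs) / len(costs)) / overall_mean
            for cell, costs in cell_costs.items()}

rates = capitation_rates({
    "F, 45-54": [300, 280],   # heavy users -> rate above 1
    "M, 85+":   [30, 50],     # light users -> rate below 1
})
```

A Health Plan's budget share is then the rate-weighted count of its members, which is why shifting the scale redistributes money between plans with older versus younger populations.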


Subject(s)
Mental Health/standards , Risk Adjustment/methods , Risk Assessment/methods , Adult , Aged , Aged, 80 and over , Female , Hospitalization/statistics & numerical data , Humans , Inpatients/statistics & numerical data , Israel , Male , Mental Health/statistics & numerical data , Middle Aged , Outpatients/statistics & numerical data , Risk Adjustment/statistics & numerical data , Risk Assessment/standards , Risk Assessment/statistics & numerical data
6.
Surgery ; 168(3): 371-378, 2020 09.
Article in English | MEDLINE | ID: mdl-32336468

ABSTRACT

BACKGROUND: Understanding the differences in how patient complexity varies across surgical specialties can inform policy decisions about appropriate resource allocation and reimbursement. This study evaluated variation in patient complexity across surgical specialties and the correlation between complexity and work relative value units. STUDY DESIGN: The 2017 American College of Surgeons National Surgical Quality Improvement Program was queried for cases involving otolaryngology and general, neurologic, vascular, cardiac, thoracic, urologic, orthopedic, and plastic surgery. A total of 10 domains of patient complexity were measured: American Society of Anesthesiologists class ≥4, number of major comorbidities, emergency operation, major complications, concurrent procedures, additional procedures, length of stay, non-home discharge, readmission, and mortality. Specialties were ranked by their complexity domains and the domains summed to create an overall complexity score. Patient complexity then was evaluated for correlation with work relative value units. RESULTS: Overall, 936,496 cases were identified. Cardiac surgery had the greatest total complexity score and was most complex across 4 domains: American Society of Anesthesiologists class ≥4 (78.5%), 30-day mortality (3.4%), major complications (56.9%), and mean length of stay (9.8 days). Vascular surgery had the second greatest complexity score and ranked the greatest on the domains of major comorbidities (2.7 comorbidities) and 30-day readmissions (10.1%). The work relative value units did not correlate with overall complexity score (Spearman's ρ = 0.07; P < .01). Although vascular surgery had the second most complex patients, it ranked fifth greatest in median work relative value units. Similarly, general surgery was the fifth most complex but had the second-least median work relative value units. 
CONCLUSION: Substantial differences exist between patient complexity across specialties, which do not correlate with work relative value units. Physician effort is determined largely by patient complexity, which is not captured appropriately by the current work relative value units.
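The complexity/work-RVU comparison above rests on Spearman's rank correlation (ρ = 0.07 reported). A minimal stdlib implementation, shown with invented specialty-level data rather than the study's:

```python
# Spearman's rho: Pearson correlation computed on average ranks.
def _ranks(xs):
    """1-based average ranks; ties share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Perfectly concordant toy data -> rho = 1.0; the study's near-zero rho
# means complexity rank tells you almost nothing about work-RVU rank.
rho = spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])
```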


Subject(s)
Efficiency , Relative Value Scales , Specialties, Surgical/statistics & numerical data , Surgical Procedures, Operative/statistics & numerical data , Adult , Aged , Comorbidity , Female , Humans , Length of Stay/statistics & numerical data , Male , Middle Aged , Patient Readmission/statistics & numerical data , Patient Transfer/statistics & numerical data , Postoperative Complications/epidemiology , Quality Improvement , Retrospective Studies , Risk Adjustment/statistics & numerical data , Risk Factors , Specialties, Surgical/organization & administration , Surgical Procedures, Operative/adverse effects , Time Factors
7.
Article in English | MEDLINE | ID: mdl-32164392

ABSTRACT

Located in the subtropics, Taiwan is one of the major epidemic areas for dengue fever, with severe epidemics occurring in recent years. Dengue fever has become a serious health threat to Taiwan's residents and a potentially serious economic cost to society. This study recruited 730 random participants and adopted the contingent valuation method to understand the factors influencing the populace's willingness to pay (WTP) to reduce the health risk of dengue fever. The results show that high-income women with children and people with higher preventive perceptions and behavior are more willing to invest in preventive measures against dengue fever. In the evaluation of WTP for preventive treatment for health risks, each person was willing to pay on average NT$751 annually to lower psychological health risks, NT$793 annually to lower the risk of illness, and NT$1086 annually to lower the risk of death.


Subject(s)
Dengue , Health Services , Child , Dengue/economics , Dengue/prevention & control , Female , Health Services/economics , Humans , Income , Male , Risk Adjustment/economics , Risk Adjustment/statistics & numerical data , Taiwan , Value of Life
8.
Cancer Epidemiol Biomarkers Prev ; 29(4): 832-837, 2020 04.
Article in English | MEDLINE | ID: mdl-31988073

ABSTRACT

BACKGROUND: Long-term antiviral therapy (AVT) for chronic hepatitis B (CHB) reduces the risk of hepatocellular carcinoma (HCC). We assessed the temporal trends in the incidence of HCC over time during long-term AVT among Asian patients with CHB. METHODS: Patients with CHB receiving entecavir/tenofovir (ETV/TDF) as a first-line antiviral were recruited from four academic hospitals in the Republic of Korea. We compared the incidence of HCC during and after the first 5 years of ETV/TDF treatment. RESULTS: Among 3,156 patients, the median age was 49.6 years and males predominated (62.4%). During the follow-up, 9.0% developed HCC. The annual incidence of HCC per 100 person-years during the first 5 years (n = 1,671) and after the first 5 years (n = 1,485) was statistically similar (1.93% vs. 2.27%, P = 0.347). When the study population was stratified according to HCC prediction model, that is, modified PAGE-B score, the annual incidence of HCC was 0.11% versus 0.39% in the low-risk group (<8 points), 1.26% versus 1.82% in the intermediate-risk group (9-12 points), and 4.63% versus 5.24% in the high-risk group (≥13 points; all P > 0.05). A Poisson regression analysis indicated that the duration of AVT did not significantly affect the overall trend of the incidence of HCC (adjusted annual incidence rate ratio = 0.85; 95% confidence interval, 0.66-1.11; P = 0.232). CONCLUSIONS: Despite long-term AVT, the risk of HCC steadily persists over time among patients with CHB in the Republic of Korea, in whom HBV genotype C2 predominates. IMPACT: Careful HCC surveillance is still essential.
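The incidence figures above are events per 100 person-years of follow-up, the standard rate measure for cohorts with varying observation time. A small sketch with made-up counts (not the study's data):

```python
# Incidence per 100 person-years: events divided by accumulated
# follow-up time, scaled so the rate reads as an annual percentage.
def incidence_per_100py(events, person_years):
    return 100.0 * events / person_years

# e.g. 29 HCC cases over 1,500 person-years of on-treatment follow-up
early = incidence_per_100py(29, 1500)          # ~1.93 per 100 person-years
late = incidence_per_100py(34, 1500)           # ~2.27 per 100 person-years
rate_ratio = late / early                      # crude incidence rate ratio
```

The Poisson regression in the abstract adjusts this crude rate ratio for covariates; its near-unity adjusted ratio (0.85, CI 0.66-1.11) is what supports the "risk persists over time" conclusion.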


Subject(s)
Antiviral Agents/therapeutic use , Carcinoma, Hepatocellular/epidemiology , Hepatitis B virus/genetics , Hepatitis B, Chronic/drug therapy , Liver Neoplasms/epidemiology , Adult , Carcinoma, Hepatocellular/pathology , Carcinoma, Hepatocellular/prevention & control , Carcinoma, Hepatocellular/virology , DNA, Viral/genetics , DNA, Viral/isolation & purification , Drug Therapy, Combination/methods , Female , Follow-Up Studies , Genotype , Guanine/analogs & derivatives , Guanine/therapeutic use , Hepatitis B virus/isolation & purification , Hepatitis B, Chronic/pathology , Hepatitis B, Chronic/virology , Humans , Incidence , Liver/diagnostic imaging , Liver/pathology , Liver Neoplasms/pathology , Liver Neoplasms/prevention & control , Liver Neoplasms/virology , Male , Middle Aged , Molecular Typing , Republic of Korea/epidemiology , Risk Adjustment/statistics & numerical data , Risk Factors , Tenofovir/therapeutic use , Time Factors
10.
Enferm. glob ; 18(55): 246-257, jul. 2019. ilus, tab
Article in Spanish | IBECS | ID: ibc-186241

ABSTRACT

Objective: this paper determined the relationship between the perception of cardiovascular disease risk and the level of use of information and communication technologies (ICTs), as well as the explanatory effect of ICT use and a history of cardiovascular disease on the perceived risk of cardiovascular disease in adults with obesity. Methods: this study is relevant because the relationship between the proposed variables, and between ICT use, other variables, and the perceived risk of heart and cerebrovascular disease, is not yet clear. A descriptive-analytical study was conducted in a sample of 260 adults with obesity. Questionnaires on cardiovascular disease risk perception and ICT use were administered to patients attended at a health center; ethical standards were observed and descriptive and inferential statistics were applied. Results: a relationship was found between cardiovascular disease risk perception and ICT use (rs = 0.142, p = 0.022). The level of ICT use and personal/family history of cardiovascular disease explained 14.3% of the perception of cardiovascular disease risk. Conclusions: the perception of cardiovascular disease risk is related to the level of use of health information and communication technologies and is partially explained by ICT use and health history.


Subject(s)
Humans , Male , Female , Young Adult , Adult , Risk Adjustment/statistics & numerical data , Obesity/epidemiology , Cardiovascular Diseases/prevention & control , Information Seeking Behavior , Information Technology/statistics & numerical data , Cross-Sectional Studies
11.
Med Care ; 57(4): 295-299, 2019 04.
Article in English | MEDLINE | ID: mdl-30829940

ABSTRACT

RESEARCH OBJECTIVE: Pharmacists are an expensive and limited resource in the hospital and outpatient setting. A pharmacist can spend up to 25% of their day planning. Time spent planning is time not spent delivering an intervention. A readmission risk adjustment model has potential to be used as a universal outcome-based prioritization tool to help pharmacists plan their interventions more efficiently. Pharmacy-specific predictors have not been used in the constructs of current readmission risk models. We assessed the impact of adding pharmacy-specific predictors on performance of readmission risk prediction models. STUDY DESIGN: We used an observational retrospective cohort study design to assess whether pharmacy-specific predictors such as an aggregate pharmacy score and drug classes would improve the prediction of 30-day readmission. A model of age, sex, length of stay, and admission category predictors was used as the reference model. We added predictor variables in sequential models to evaluate the incremental effect of additional predictors on the performance of the reference. We used logistic regression to regress the outcomes on predictors in our derivation dataset. We derived and internally validated our models through a 50:50 split validation of our dataset. POPULATION STUDIED: Our study population (n=350,810) was of adult admissions at hospitals in a large integrated health care delivery system. PRINCIPAL FINDINGS: Individually, the aggregate pharmacy score and drug classes caused a nearly identical but moderate increase in model performance over the reference. As a single predictor, the comorbidity burden score caused the greatest increase in model performance when added to the reference. Adding the severity of illness score, comorbidity burden score and the aggregate pharmacy score to the reference caused a cumulative increase in model performance with good discrimination (c statistic, 0.712; Nagelkerke R², 0.112). 
The best performing model included all predictors: severity of illness score, comorbidity burden score, aggregate pharmacy score, diagnosis groupings, and drug subgroups. CONCLUSIONS: Adding the aggregate pharmacy score to the reference model significantly increased the c statistic but was out-performed by the comorbidity burden score model in predicting readmission. The need for a universal prioritization tool for pharmacists may therefore be potentially met with the comorbidity burden score model. However, the aggregate pharmacy score and drug class models still out-performed current Medicare readmission risk adjustment models. IMPLICATIONS FOR POLICY OR PRACTICE: Pharmacists have a great role in preventing readmission, and therefore can potentially use one of our models: comorbidity burden score model, aggregate pharmacy score model, drug class model or complex model (a combination of all 5 major predictors) to prioritize their interventions while exceeding Medicare performance measures on readmission. The choice of model to use should be based on the availability of these predictors in the health care system.
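The discrimination measure reported above, the c statistic, is the probability that a randomly chosen readmitted patient receives a higher predicted risk than a randomly chosen non-readmitted patient. A minimal sketch (ties count as half), with invented scores rather than the study's:

```python
# c statistic (equivalently, ROC AUC) computed by pairwise comparison
# of predicted risks between readmitted and non-readmitted patients.
def c_statistic(scores, outcomes):
    """scores: predicted risks; outcomes: 1 = readmitted, 0 = not."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = len(pos) * len(neg)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / pairs

c = c_statistic([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```

In the study's 50:50 split design, this statistic is computed on the held-out half from a model fit on the derivation half, which is what makes the 0.712 an internally validated figure.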


Subject(s)
Comorbidity , Patient Readmission/statistics & numerical data , Pharmaceutical Services/statistics & numerical data , Risk Adjustment/statistics & numerical data , Severity of Illness Index , Aged , Chronic Disease/therapy , Female , Hospitalization/statistics & numerical data , Humans , Male , Medicare , Outcome Assessment, Health Care/statistics & numerical data , Retrospective Studies , Risk Adjustment/methods , United States
12.
Health Aff (Millwood) ; 38(2): 253-261, 2019 02.
Article in English | MEDLINE | ID: mdl-30715995

ABSTRACT

The Medicare Shared Savings Program (MSSP) adjusts savings benchmarks by beneficiaries' baseline risk scores. To discourage increased coding intensity, the benchmark is not adjusted upward if beneficiaries' risk scores rise while in the MSSP. As a result, accountable care organizations (ACOs) have an incentive to avoid increasingly sick or expensive beneficiaries. We examined whether beneficiaries' exposure to the MSSP was associated with within-beneficiary changes in risk scores and whether risk scores were associated with entry to or exit from the MSSP. We found that the MSSP was not associated with consistent changes in within-beneficiary risk scores. Conversely, beneficiaries at the ninety-fifth percentile of risk score had a 21.6 percent chance of exiting the MSSP, compared to a 16.0 percent chance among beneficiaries at the fiftieth percentile. The decision not to upwardly adjust risk scores in the MSSP has successfully deterred coding increases but might discourage ACOs from caring for high-risk beneficiaries in the MSSP.


Subject(s)
Accountable Care Organizations/economics , Benchmarking/economics , Cost Savings , Risk Adjustment/statistics & numerical data , Aged , Fee-for-Service Plans , Humans , Insurance Claim Review , Medicare , United States
13.
Health Serv Res ; 54(2): 466-473, 2019 04.
Article in English | MEDLINE | ID: mdl-30467846

ABSTRACT

OBJECTIVE: The objective of this work was to assess the effectiveness of a population-level patient-centered intervention for multimorbid patients based on risk stratification for case finding in 2014 compared with the baseline scenario in 2012. DATA SOURCE: Clinical and administrative databases. STUDY DESIGN: This was an observational cohort study with an intervention group and a historical control group. A propensity score by a genetic matching approach was used to minimize bias. Generalized linear models were used to analyze relationships among variables. DATA COLLECTION: We included all eligible patients at the beginning of the year and followed them until death or until the follow-up period concluded (end of the year). The control group (2012) totaled 3558 patients, and 4225 patients were in the intervention group (2014). PRINCIPAL FINDING: A patient-centered strategy based on risk stratification for case finding and the implementation of an integrated program based on new professional roles and an extensive infrastructure of information and communication technologies avoided 9 percent (OR: 0.91, CI: 0.86-0.96) of hospitalizations. However, this effect was not found in nonprioritized groups whose probability of hospitalization increased (OR: 1.19, CI = 1.09-1.30). CONCLUSIONS: In a before-and-after analysis using propensity score matching, a comprehensive, patient-centered, integrated care intervention was associated with a lower risk of hospital admission among prioritized patients, but not among patients who were not prioritized to receive the intervention.


Subject(s)
Comprehensive Health Care/statistics & numerical data , Hospitalization/statistics & numerical data , Multiple Chronic Conditions/economics , Multiple Chronic Conditions/epidemiology , Patient-Centered Care/statistics & numerical data , Risk Adjustment/statistics & numerical data , Aged , Aged, 80 and over , Europe , Female , Humans , Male , Propensity Score , Systems Integration
14.
Rev. esp. enferm. dig ; 110(12): 782-793, dic. 2018. tab, graf
Article in Spanish | IBECS | ID: ibc-177928

ABSTRACT

Introduction: several indicators are available to assess liver graft survival, including the American DRI and the European ET-DRI. However, there are significant differences between transplant programs of different countries, and the previously mentioned indicators might be not valid in our setting. Objectives: the aim of the study was to describe a new national liver graft risk indicator based on the results obtained from the Registro Español de Trasplante Hepático (RETH) and to validate the DRI and ET-DRI indicators. Methods: the RETH includes a Cox analysis of factors associated with graft survival; the graft risk index (GRI) indicator was defined based on these results. The variables considered are dependent upon the donation conditions (age, cause of death, blood compatibility and cold ischemia time) and the transplant recipient (age, underlying disease, hepatitis C virus, transplant number, UNOS status and surgical technique). A logistic regression curve was obtained and graft survival curves were calculated by stratification. Precision was assessed using the ROC analysis. Results: a GRI of 1 represents a probability of graft loss of 23.25%; each point increase in the GRI score multiplies this probability by 1.33. The best discrimination of GRI was obtained by stratification. The DRI ROC area was 0.54 (95% CI, 0.50-0.59) and the ET-DRI ROC area was 0.56 (95% CI, 0.51-0.61), compared to 0.70 (95% CI, 0.65-0.73) (p < 0.0001) for the GRI. Conclusions: both the DRI and ET-DRI do not seem to be useful in our setting. Hence a national indicator is more desirable. The GRI requires a national study in order to further streamline and assess this indicator
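The graft-loss relationship stated above is simple enough to express directly: a GRI of 1 maps to a 23.25% loss probability, and each additional point multiplies that probability by 1.33. The sketch below takes the abstract's multiplicative description at face value; the published model is a Cox/logistic construction, so this is a linearized reading, not the actual RETH formula.

```python
# Illustrative reading of the GRI statement: p(loss) at GRI g equals
# the base probability scaled by 1.33 for each point above 1.
def graft_loss_probability(gri, base=0.2325, factor=1.33):
    return base * factor ** (gri - 1)

p1 = graft_loss_probability(1)   # 0.2325, the stated baseline
p2 = graft_loss_probability(2)   # one point higher: 0.2325 * 1.33
```

Note this approximation exceeds 1.0 for large GRI values, which is exactly why the underlying model uses a logistic link; the sketch is only meant to make the quoted numbers concrete.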


Subject(s)
Humans , Male , Female , Adult , Middle Aged , Aged , Liver Transplantation/statistics & numerical data , Graft Survival , Indicators of Morbidity and Mortality , Biomarkers/analysis , Risk Adjustment/statistics & numerical data , Medical Records/statistics & numerical data
16.
Rev. clín. esp. (Ed. impr.) ; 218(8): 391-398, nov. 2018. tab, graf
Article in Spanish | IBECS | ID: ibc-176230

ABSTRACT

Aims: To assess cardiovascular risk according to the UKPDS risk engine and the Framingham function and score, and to compare the clinical characteristics of patients with type 2 diabetes mellitus (DM2) according to their smoking status. Patients and methods: A descriptive analysis was performed. A total of 890 Spanish patients with DM2 (444 smokers and 446 former smokers) were included in a cross-sectional, observational, epidemiological, multicenter, nationwide study. Ten-year coronary heart disease risk was calculated using the UKPDS risk score in both patient subgroups. Results were also compared with the Spanish calibrated (REGICOR) and updated Framingham risk scores. Results: The estimated likelihood of coronary heart disease at 10 years according to the UKPDS score was significantly greater in smokers than in former smokers. This increased risk was greater in subjects with poorer blood glucose control and was attenuated in women ≥60 years old. The Framingham and UKPDS scores conferred a greater estimated risk than the REGICOR equation in Spanish diabetics. Conclusions: Quitting smoking in patients with DM2 is accompanied by a significant decrease in the estimated risk of coronary events as assessed by UKPDS. Our findings support the importance of smoking cessation among diabetic patients in order to reduce cardiovascular risk.
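The kind of comparison the study reports, i.e. estimated 10-year coronary risk for a smoker versus a former smoker with otherwise identical characteristics, can be sketched with a toy logistic-style risk function. The coefficients below are hypothetical placeholders, not the published UKPDS risk engine equations; the sketch only illustrates how a smoking term shifts the estimated risk.

```python
import math

# Toy illustration of a smoker vs. former-smoker risk comparison.
# The intercept and coefficients are HYPOTHETICAL values chosen for
# illustration; they are NOT the UKPDS, Framingham, or REGICOR equations.

def ten_year_chd_risk(age: float, hba1c: float, smoker: bool) -> float:
    """Hypothetical 10-year CHD risk: rises with age, HbA1c and smoking."""
    log_odds = -7.0 + 0.06 * age + 0.30 * (hba1c - 6.0) + 0.65 * smoker
    return 1.0 / (1.0 + math.exp(-log_odds))

if __name__ == "__main__":
    smoker_risk = ten_year_chd_risk(60, 8.0, True)
    former_risk = ten_year_chd_risk(60, 8.0, False)
    print(f"smoker {smoker_risk:.1%} vs former smoker {former_risk:.1%}")
```

Because the smoking term enters the log-odds additively, the relative risk it implies grows with the baseline risk, mirroring the abstract's finding that the excess risk was larger in patients with poorer glycemic control.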


Subject(s)
Humans , Diabetes Mellitus, Type 2/epidemiology , Tobacco Use Disorder/complications , Smoking Cessation/statistics & numerical data , Cardiovascular Diseases/epidemiology , Risk Factors , Cross-Sectional Studies , Coronary Disease/prevention & control , Risk Adjustment/statistics & numerical data
17.
ANZ J Surg ; 88(11): 1168-1173, 2018 11.
Article in English | MEDLINE | ID: mdl-30306716

ABSTRACT

BACKGROUND: To develop a risk-adjustment model for unplanned return to theatre (URTT) outcomes following colorectal surgeries in Australia and New Zealand hospitals and apply top-down and bottom-up statistical process control methods for fair comparison of hospitals' and surgeons' URTT rates. METHODS: We analysed URTT outcomes from hospitals contributing data to the Bi-National Colorectal Cancer Audit clinical registry between 2007 and 2016. Preoperative and intraoperative covariates were considered for risk adjustment. A risk-adjusted rate funnel plot was prepared for between-hospital comparisons and identification of outlying hospitals with unusually high rates of URTT. Cumulative observed-minus-expected charts with cumulative sum signals were then presented for surgeons within an outlying hospital. RESULTS: The study included 15 134 patients and 166 surgeons across 70 hospitals. The weighted average URTT rate was 5.2%. The risk-adjustment model identified 12 preoperative and intraoperative variables that significantly raise the risk of URTT: male sex, American Society of Anesthesiologists score, emergency admissions, conversion entry, left hemicolectomy, total colectomy, proctocolectomy, low anterior resection, ultra-low anterior resection, abdominoperineal resection, organ resection and excess lymph nodes harvested. Right hemicolectomy significantly reduced risk of URTT. URTT rates were not found to vary significantly across seniority of operator; however, comparisons were limited by lack of data on junior operators. The funnel plot identified five hospitals as 'possible outliers' and hospital T was identified as a 'definite outlier'. The cumulative observed-minus-expected charts with cumulative sum signals showed that within hospital T, one surgeon among three had a particularly bad run of URTTs.
CONCLUSION: Feedback from aggregated URTT outcomes using a risk-adjusted rate funnel plot is enhanced when follow-up examination of outlying hospitals is conducted with concurrent application of cumulative observed-minus-expected charts with cumulative sum signals.
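The two monitoring views this paper combines can be sketched as follows: a funnel-plot check of each hospital's observed versus expected URTT count, and a cumulative observed-minus-expected (O-E) trace for one surgeon within a flagged unit. The data and the 3-sigma limit below are illustrative, not from the Bi-National Colorectal Cancer Audit.

```python
import math

# Sketch of funnel-plot flagging plus a cumulative O-E trace, on synthetic
# patient-level expected risks. The normal approximation to the sum of
# Bernoulli risks and the z = 3 limit are conventional illustrative choices.

def funnel_flag(observed: int, expected_probs: list, z: float = 3.0) -> bool:
    """Flag a unit whose observed event count exceeds the upper funnel limit."""
    mean = sum(expected_probs)
    sd = math.sqrt(sum(p * (1 - p) for p in expected_probs))
    return observed > mean + z * sd

def cumulative_o_minus_e(outcomes: list, expected_probs: list) -> list:
    """Running sum of (observed outcome - expected risk) per operation."""
    trace, total = [], 0.0
    for y, p in zip(outcomes, expected_probs):
        total += y - p
        trace.append(total)
    return trace

if __name__ == "__main__":
    risks = [0.05] * 200  # 200 cases, each with a 5% risk-adjusted URTT risk
    print(funnel_flag(observed=25, expected_probs=risks))
    print(cumulative_o_minus_e([1, 0, 0, 1], [0.05] * 4))
```

The funnel plot answers the top-down question (which hospitals are outliers?), while the per-surgeon O-E trace answers the bottom-up one (who within an outlying hospital is driving the excess?), which is exactly the pairing the conclusion recommends.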


Subject(s)
Benchmarking/methods , Colectomy , Colorectal Neoplasms/surgery , Postoperative Complications/surgery , Quality Indicators, Health Care/statistics & numerical data , Reoperation/statistics & numerical data , Risk Adjustment/methods , Adult , Aged , Aged, 80 and over , Australia , Benchmarking/statistics & numerical data , Clinical Competence/statistics & numerical data , Female , Follow-Up Studies , Hospitals/standards , Hospitals/statistics & numerical data , Humans , Male , Middle Aged , New Zealand , Postoperative Complications/epidemiology , Registries , Retrospective Studies , Risk Adjustment/statistics & numerical data , Surgeons/standards , Surgeons/statistics & numerical data
18.
Med Care ; 56(12): 1042-1050, 2018 12.
Article in English | MEDLINE | ID: mdl-30339574

ABSTRACT

BACKGROUND: Using electronic health records (EHRs) for population risk stratification has gained attention in recent years. Compared with insurance claims, EHRs offer novel data types (eg, vital signs) that can potentially improve population-based predictive models of cost and utilization. OBJECTIVE: To evaluate whether EHR-extracted body mass index (BMI) improves the performance of diagnosis-based models to predict concurrent and prospective health care costs and utilization. METHODS: We used claims and EHR data over a 2-year period from a cohort of continuously insured patients (aged 20-64 y) within an integrated health system. We examined the addition of BMI to 3 diagnosis-based models of increasing comprehensiveness (ie, demographics, Charlson, and the Dx-PM model of the Adjusted Clinical Group system) to predict concurrent and prospective costs and utilization, and compared the performance of models with and without BMI. RESULTS: The study population included 59,849 patients, 57% female, with BMI classes I, II, and III comprising 19%, 9%, and 6% of the population. Among demographic models, the R² improvement from adding BMI ranged from 61% (ie, R² increased from 0.56% to 0.90%) for prospective pharmacy cost to 29% (1.24% to 1.60%) for concurrent medical cost. Adding BMI to demographic models improved the prediction of all binary service-linked outcomes (ie, hospitalization, emergency department admission, and being in the top 5% of total costs), with the area under the curve (AUC) increasing by 2% (0.602 to 0.617) to 7% (0.516 to 0.554). Adding BMI to Charlson models improved only total and medical cost predictions prospectively (by 13% and 15%; 4.23% to 4.79% and 3.30% to 3.79%), and also improved prediction of all prospective outcomes, with AUC increasing by 3% (0.649 to 0.668) to 4% (0.639 to 0.665 and 0.556 to 0.576). No improvements in prediction were seen in the most comprehensive model (ie, Dx-PM).
DISCUSSION: EHR-extracted BMI levels can be used to enhance predictive models of utilization, especially when comprehensive diagnostic data are missing.
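The core comparison in this study, fitting a cost model with and without BMI and checking the R² gain, can be sketched on synthetic data. The data-generating process below (a cost penalty above BMI 30) is invented purely to give BMI signal that the other covariate lacks; it is not the study's data or model.

```python
import numpy as np

# Synthetic illustration of comparing R^2 for a cost model with and without
# BMI. The cost function (a penalty above BMI 30) is a made-up example, not
# the study's data; it just gives BMI predictive signal beyond age.

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 64, n)
bmi = rng.normal(28, 6, n)
cost = 100 + 15 * age + 40 * np.maximum(bmi - 30, 0) + rng.normal(0, 300, n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(age.reshape(-1, 1), cost)          # demographics only
r2_bmi = r_squared(np.column_stack([age, bmi]), cost)  # demographics + BMI
print(f"R^2 without BMI: {r2_base:.3f}, with BMI: {r2_bmi:.3f}")
```

The gain shrinks as the baseline covariates get richer, which is the pattern the study observed: large improvements over demographics-only models, none over the comprehensive Dx-PM model.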


Subject(s)
Body Mass Index , Health Care Costs/statistics & numerical data , Patient Acceptance of Health Care/statistics & numerical data , Risk Adjustment/statistics & numerical data , Adult , Demography , Electronic Health Records , Female , Hospitalization , Humans , Insurance Claim Review , Male , Middle Aged , Pharmaceutical Services , Retrospective Studies , Young Adult
19.
PLoS One ; 13(8): e0200915, 2018.
Article in English | MEDLINE | ID: mdl-30089109

ABSTRACT

We propose a nonparametric risk-adjusted cumulative sum (CUSUM) chart to monitor surgical outcomes for patients with different risks of post-operative mortality due to risk factors present before surgery. Risk adjustment is accomplished with varying-coefficient logistic regression models. The unknown coefficient functions are estimated by global polynomial spline approximation based on the maximum likelihood principle. We suggest a bisection minimization approach and a bootstrap method to determine the chart's testing limit. Compared with the previous (parametric) risk-adjusted CUSUM chart, a major advantage of our method is that the mortality rate can be modeled more flexibly by the related covariates, which significantly enhances monitoring efficiency. Simulations demonstrate the good performance of the proposed procedure. An application to a UK cardiac surgery dataset illustrates the use of our methodology.
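The parametric risk-adjusted CUSUM this paper builds on can be sketched as follows: each patient contributes a log-likelihood-ratio weight that depends on their pre-operative predicted mortality p, testing for an increase in the odds of death (here an odds ratio of 2, a conventional illustrative choice). The outcomes, risks, and control limit below are synthetic, and the paper's own contribution, estimating p via varying-coefficient spline models, is not reproduced here.

```python
import math

# Sketch of a standard risk-adjusted CUSUM with patient-specific predicted
# mortality p. R_A = 2 (doubled odds of death under the alternative) and
# the control limit are illustrative choices; data are synthetic.

R_A = 2.0  # odds ratio under the out-of-control alternative

def llr_weight(outcome: int, p: float) -> float:
    """Log-likelihood-ratio weight for one patient with predicted risk p."""
    denom = 1 - p + R_A * p
    return math.log(R_A / denom) if outcome else math.log(1 / denom)

def risk_adjusted_cusum(outcomes, probs, limit=4.5):
    """Return the CUSUM trace and whether it crossed the control limit."""
    s, trace = 0.0, []
    for y, p in zip(outcomes, probs):
        s = max(0.0, s + llr_weight(y, p))  # reset-at-zero CUSUM recursion
        trace.append(s)
    return trace, any(v >= limit for v in trace)

if __name__ == "__main__":
    trace, signal = risk_adjusted_cusum([0, 1, 0, 1], [0.1, 0.1, 0.3, 0.3])
    print([round(v, 3) for v in trace], signal)
```

Because the weight depends on p, a death in a low-risk patient pushes the chart up much harder than a death in a high-risk patient, which is what makes the chart "risk-adjusted"; the paper's flexibility gain comes from estimating p nonparametrically.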


Subject(s)
Outcome Assessment, Health Care/methods , Risk Adjustment/methods , Risk Adjustment/statistics & numerical data , Cardiac Surgical Procedures/methods , General Surgery/methods , Humans , Logistic Models , Models, Statistical , Models, Theoretical , Risk Factors , Statistics, Nonparametric , Treatment Outcome