Results 1 - 20 of 145
2.
Med J Aust ; 207(5): 201-205, 2017 Aug 04.
Article in English | MEDLINE | ID: mdl-28987133

ABSTRACT

OBJECTIVE: To evaluate hospital length of stay (LOS) and admission rates before and after implementation of an evidence-based, accelerated diagnostic protocol (ADP) for patients presenting to emergency departments (EDs) with chest pain. DESIGN: Quasi-experimental design, with interrupted time series analysis for the period October 2013 - November 2015. SETTING, PARTICIPANTS: Adults presenting with chest pain to EDs of 16 public hospitals in Queensland. INTERVENTION: Implementation of the ADP by structured clinical re-design. MAIN OUTCOME MEASURES: Primary outcome: hospital LOS. Secondary outcomes: ED LOS, hospital admission rate, and the proportion of patients identified as being at low risk of an acute coronary syndrome (ACS). RESULTS: Outcomes were recorded for 30,769 patients presenting before and 23,699 presenting after implementation of the ADP. Following implementation, 21.3% of patients were identified by the ADP as being at low risk for an ACS. Mean hospital LOS fell from 57.7 to 47.3 hours (rate ratio [RR], 0.82; 95% CI, 0.74-0.91) and mean ED LOS for all patients presenting with chest pain fell from 292 to 256 minutes (RR, 0.80; 95% CI, 0.72-0.89). The hospital admission rate fell from 68.3% (95% CI, 59.3-78.5%) to 54.9% (95% CI, 44.7-67.6%; P < 0.01). The estimated release in financial capacity amounted to $2.3 million from reduced ED LOS and $11.2 million from fewer hospital admissions. CONCLUSIONS: Implementing an evidence-based ADP for assessing patients with chest pain was feasible across a range of hospital types, and achieved a substantial release of health service capacity through reductions in hospital admissions and ED LOS.
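The rate ratios above come from an interrupted time series analysis. As a rough illustration of how such a pre/post rate ratio can be estimated, the following sketch fits a log-link GLM to a monthly series; the simulated data, column names and the choice of a Gamma family are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch (assumed data and model form): pre/post rate ratio for mean LOS
# from an interrupted-time-series-style regression on a monthly series.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Monthly series: mean hospital LOS in hours, a month index, and a post-implementation flag.
df = pd.DataFrame({
    "los_hours": np.r_[rng.gamma(9.0, 6.4, 14), rng.gamma(9.0, 5.3, 12)],
    "month": np.arange(26),
    "post": np.r_[np.zeros(14), np.ones(12)],
})

# Log-link Gamma GLM: exp(coefficient on `post`) behaves like a rate ratio for LOS,
# adjusted for the underlying secular trend captured by `month`.
fit = smf.glm(
    "los_hours ~ month + post",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

rr = np.exp(fit.params["post"])
ci_lo, ci_hi = np.exp(fit.conf_int().loc["post"])
print(f"rate ratio for LOS after implementation: {rr:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```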


Subject(s)
Acute Coronary Syndrome/diagnosis , Chest Pain/diagnosis , Clinical Protocols/standards , Hospitalization/statistics & numerical data , Length of Stay/statistics & numerical data , Risk Assessment/methods , Adult , Aged , Emergency Service, Hospital , Evidence-Based Practice , Female , Hospitalization/economics , Humans , Length of Stay/economics , Male , Middle Aged , Queensland/epidemiology , Risk Assessment/classification
3.
BMC Med Inform Decis Mak ; 17(1): 35, 2017 04 08.
Article in English | MEDLINE | ID: mdl-28390405

ABSTRACT

BACKGROUND: An accurate risk stratification tool is critical in identifying patients who are at high risk of frequent hospital readmissions. While 30-day hospital readmissions have been widely studied, there is increasing interest in identifying potential high-cost users or frequent hospital admitters. In this study, we aimed to derive and validate a risk stratification tool to predict frequent hospital admitters. METHODS: We conducted a retrospective cohort study using readily available clinical and administrative data from the electronic health records of a tertiary hospital in Singapore. The primary outcome was three or more inpatient readmissions within 12 months of index discharge. We used univariable and multivariable logistic regression models to build a frequent hospital admission risk score (FAM-FACE-SG) incorporating demographics, indicators of socioeconomic status, prior healthcare utilization, markers of acute illness burden and markers of chronic illness burden. We further validated the risk score on a separate dataset and compared its performance with the LACE index using receiver operating characteristic analysis. RESULTS: Our study included 25,244 patients, with 70% of patients randomly selected for risk score derivation and the remaining 30% for validation. Overall, 4,322 patients (17.1%) met the outcome. The final FAM-FACE-SG score consisted of nine components: Furosemide (Intravenous 40 mg and above during index admission); Admissions in past one year; Medifund (Required financial assistance); Frequent emergency department (ED) use (≥3 ED visits in the 6 months before index admission); Anti-depressants in past one year; Charlson comorbidity index; End Stage Renal Failure on Dialysis; Subsidized ward stay; and Geriatric patient or not. On the validation dataset, the FAM-FACE-SG score had good discriminative ability, with an area under the curve (AUC) of 0.839 (95% confidence interval [CI]: 0.825-0.853) for risk prediction of frequent hospital admission. In comparison, the LACE index achieved an AUC of only 0.761 (0.745-0.777). CONCLUSIONS: The FAM-FACE-SG score shows strong potential for implementation to provide near real-time prediction of frequent admissions. It may serve as the first step to identify high-risk patients for resource-intensive interventions.
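The AUC comparison reported above (FAM-FACE-SG versus the LACE index) is a standard discrimination analysis. A minimal sketch of that kind of comparison is below; the simulated outcome and scores are purely illustrative stand-ins, not the study's data.

```python
# Hypothetical sketch: comparing the discrimination of two admission-risk scores
# (a FAM-FACE-SG-like score vs. a LACE-like index) on a validation sample.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
frequent_admitter = rng.binomial(1, 0.17, n)                  # observed outcome (~17% prevalence)
score_new = frequent_admitter * 1.6 + rng.normal(0, 1, n)     # stand-in for the new risk score
score_lace = frequent_admitter * 0.9 + rng.normal(0, 1, n)    # stand-in for the LACE index

auc_new = roc_auc_score(frequent_admitter, score_new)
auc_lace = roc_auc_score(frequent_admitter, score_lace)
print(f"AUC new score: {auc_new:.3f}  AUC LACE: {auc_lace:.3f}")
```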


Subject(s)
Electronic Health Records/statistics & numerical data , Patient Readmission/statistics & numerical data , Risk Assessment/statistics & numerical data , Tertiary Care Centers/statistics & numerical data , Adult , Aged , Female , Humans , Male , Middle Aged , Retrospective Studies , Risk Assessment/classification , Singapore
4.
Biomarkers ; 22(3-4): 189-199, 2017.
Article in English | MEDLINE | ID: mdl-27299923

ABSTRACT

Precise estimation of the absolute risk of cardiovascular disease (CVD) events is necessary when making treatment recommendations for patients. A number of multivariate risk models have been developed to estimate cardiovascular risk in asymptomatic individuals based upon assessment of multiple variables. Because of the inherent limitations of risk models, several novel risk markers, including serum biomarkers, have been studied in an attempt to improve cardiovascular risk prediction above and beyond the established risk factors. In this review, we discuss the role of underappreciated biomarkers such as red cell distribution width (RDW), cystatin C (cysC) and homocysteine (Hcy), as well as imaging biomarkers, in cardiovascular risk reclassification, and highlight their utility as an additional source of information in patients at intermediate risk.


Subject(s)
Biomarkers/blood , Cardiovascular Diseases/diagnosis , Risk Assessment/classification , Cardiovascular Diseases/diagnostic imaging , Cystatin C/blood , Erythrocyte Indices , Female , Homocysteine/blood , Humans , Male , Risk Assessment/methods
5.
Health Serv Res ; 52(4): 1277-1296, 2017 08.
Article in English | MEDLINE | ID: mdl-27714791

ABSTRACT

OBJECTIVE: There is increasing interest in identifying high-quality physicians, such as determining whether physicians perform above or below a threshold level. To evaluate whether current methods accurately distinguish above- versus below-threshold physicians, we estimate misclassification rates for two-category identification systems. DATA SOURCES: Claims data for Medicare fee-for-service beneficiaries residing in Florida or New York in 2010. STUDY DESIGN: Estimate colorectal cancer, glaucoma, and diabetes quality scores for 23,085 physicians. Use a beta-binomial model to estimate physician score reliabilities. Compute the proportion of physicians whose performance tier would be misclassified under three scoring systems. PRINCIPAL FINDINGS: In the three scoring systems, misclassification ranges were 8.6-25.7 percent, 6.4-22.8 percent, and 4.5-21.7 percent. True positive rate ranges were 72.9-97.0 percent, 83.4-100.0 percent, and 34.7-88.2 percent. True negative rate ranges were 68.5-91.6 percent, 10.5-92.4 percent, and 81.1-99.9 percent. Positive predictive value ranges were 70.5-91.6 percent, 77.0-97.3 percent, and 55.2-99.1 percent. CONCLUSIONS: Current methods for profiling physicians on quality may produce misleading results, as the number of eligible events is typically small. Misclassification is a policy-relevant measure of the potential impact of tiering on providers, payers, and patients. Quantifying misclassification rates should inform the construction of high-performance networks and quality improvement initiatives.
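The beta-binomial reliability and misclassification calculations referred to above can be illustrated as follows: under a beta-binomial model, the reliability of a score based on n eligible events works out to n / (n + α + β), and tier misclassification against a threshold can be approximated by simulation. The parameter values and the 80% cutoff in this sketch are assumptions, not figures from the study.

```python
# Minimal sketch (assumed parameters): beta-binomial reliability of a physician
# quality score and the resulting tier misclassification rate by simulation.
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 24.0, 6.0      # assumed beta-binomial parameters (mean pass rate 0.8)
threshold = 0.80             # assumed cutoff separating above- vs below-threshold physicians
n_events = 10                # typical (small) number of eligible events per physician

# Reliability of an n-event score under a beta-binomial model: n / (n + alpha + beta).
reliability = n_events / (n_events + alpha + beta)
print(f"score reliability with {n_events} events: {reliability:.2f}")

# Simulate true pass rates and observed scores, then count tier misclassifications.
true_p = rng.beta(alpha, beta, 100_000)
observed = rng.binomial(n_events, true_p) / n_events
misclassified = np.mean((true_p >= threshold) != (observed >= threshold))
print(f"estimated misclassification rate: {misclassified:.1%}")
```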


Subject(s)
Physicians, Primary Care/standards , Quality of Health Care/standards , Risk Assessment/classification , Algorithms , Fee-for-Service Plans , Florida , Humans , Insurance Claim Review
6.
J Clin Endocrinol Metab ; 101(11): 4244-4250, 2016 11.
Article in English | MEDLINE | ID: mdl-27588439

ABSTRACT

CONTEXT: Young-onset obesity is strongly associated with the early development of type 2 diabetes (T2D). Genetic risk scores (GRSs) related to T2D might help predict the early impairment of glucose homeostasis in obese youths. OBJECTIVE: Our objective was to investigate the contributions of four GRSs (associated with T2D [GRS-T2D], beta-cell function [GRS-β], insulin resistance [GRS-IR], and body mass index) to the variation of traits derived from the oral glucose tolerance test (OGTT) in obese and normal-weight children and young adults. DESIGN: This was a cross-sectional association study. PATIENTS: A total of 1076 obese children/adolescents (age = 11.4 ± 2.8 years) and 1265 normal-weight young volunteers (age = 21.1 ± 4.4 years) of European ancestry were recruited from pediatric obesity clinics and the general population, respectively. INTERVENTION: Standard OGTT. MAIN OUTCOME MEASURES: Associations between GRSs and OGTT-derived traits, including fasting glucose and insulin, insulinogenic index, insulin sensitivity index and disposition index (DI), and associations between GRSs and pre-diabetic conditions. RESULTS: GRS-β was significantly associated with fasting glucose (β = 0.019; P = 3.5 × 10⁻⁴) and DI (β = -0.031; P = 8.9 × 10⁻⁴, last quartile 18% lower than first) in obese children, and nominally associated with fasting glucose (β = 0.009; P = 0.017) and DI (β = -0.030; P = 1.1 × 10⁻³, last quartile 11% lower than first) in normal-weight youths. GRS-T2D showed a weaker contribution to fasting glucose and DI than GRS-β in both obese and normal-weight youths. The GRSs associated with insulin resistance and with body mass index were not associated with any of the traits. None of the GRSs was associated with prediabetes, which affected only 4% of participants overall. CONCLUSION: Single nucleotide polymorphisms identified by genome-wide association studies as influencing beta-cell function were associated with fasting glucose and indices of insulin secretion in youths, especially in obese children.
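For readers unfamiliar with how a weighted GRS is typically constructed (a sum of risk-allele counts weighted by per-allele effect sizes), a small illustrative sketch follows; the SNP names and weights are assumptions, not the variants used in this study.

```python
# Hypothetical sketch: building a weighted genetic risk score from 0/1/2 risk-allele
# counts and published per-allele effect sizes, then standardising it for modelling.
import numpy as np
import pandas as pd

# Genotypes coded as 0/1/2 copies of the risk allele for each (made-up) SNP.
genotypes = pd.DataFrame({
    "rs0001": [0, 1, 2, 1],
    "rs0002": [1, 1, 0, 2],
    "rs0003": [2, 0, 1, 1],
})
weights = pd.Series({"rs0001": 0.10, "rs0002": 0.07, "rs0003": 0.12})  # assumed per-allele log-odds

grs = genotypes.mul(weights, axis=1).sum(axis=1)   # weighted GRS per individual
grs_std = (grs - grs.mean()) / grs.std()           # standardised, as often used in association models
print(grs_std.round(2).tolist())
```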


Subject(s)
Blood Glucose/metabolism , Diabetes Mellitus, Type 2/metabolism , Genetic Predisposition to Disease/classification , Insulin-Secreting Cells/metabolism , Insulin/metabolism , Pediatric Obesity/metabolism , Adolescent , Adult , Child , Cohort Studies , Cross-Sectional Studies , Diabetes Mellitus, Type 2/epidemiology , Diabetes Mellitus, Type 2/genetics , France/epidemiology , Genome-Wide Association Study , Glucose Tolerance Test , Humans , Insulin Secretion , Italy/epidemiology , Male , Pediatric Obesity/epidemiology , Pediatric Obesity/genetics , Polymorphism, Single Nucleotide , Risk Assessment/classification , Young Adult
7.
Work ; 51(4): 703-13, 2015.
Article in English | MEDLINE | ID: mdl-26409941

ABSTRACT

BACKGROUND: The identification of hazards or risk factors at the workplace level is a crucial step in risk identification, risk analysis and risk evaluation. OBJECTIVE: This article presents a taxonomy of hazards or risk factors to be applied at the workplace level during systematic hazard identification. METHODS: The taxonomy was based on evidence from the literature, including technical documents, standards, regulations, good-practice documents and toxicology databases. RESULTS: The taxonomy was organized as a matrix (Risk Factors-Disorders Matrix), an extensive list of occupational hazards. Hazards were organized in terms of their dominant potential consequences for the individual: accidents (injuries), occupational diseases, and negative social, mental or physical well-being (such as dissatisfaction and discomfort complaints not resulting from injury or disease symptomatology). The specific hazards in each work context were characterized by three summary tables: (1) Accidents-Risk Factors Table, (2) Diseases-Risk Factors Table and (3) Negative Well-being-Risk Factors Table. CONCLUSIONS: Risk factors are coded according to the Risk Factors-Disorders Matrix and the dominant potential disorders are identified in the Risk Factors Tables. The inclusion of individual, psychosocial, emerging and combined hazards in the Matrix helps focus risk identification on non-traditional sources of risk during risk assessment procedures.


Subject(s)
Accidents, Occupational , Occupational Diseases , Risk Assessment/classification , Humans , Job Satisfaction , Occupational Diseases/etiology , Risk Factors , Workplace
8.
Einstein (Sao Paulo) ; 13(2): 196-201, 2015.
Article in English, Portuguese | MEDLINE | ID: mdl-26154540

ABSTRACT

OBJECTIVE: To evaluate the impact of traditional check-up appointments on the progression of cardiovascular risk over time. METHODS: This retrospective cohort study included 11,126 medical records of asymptomatic executives who were evaluated between January 2005 and October 2008. Variables included participants' demographic characteristics, smoking habit, history of cardiovascular diseases, diabetes, dyslipidemia, total cholesterol, HDL, triglycerides, glucose, C-reactive protein, waist circumference, hepatic steatosis, Framingham score, metabolic syndrome, level of physical activity, stress, alcohol consumption, and body mass index. RESULTS: A total of 3,150 patients were included in the final analysis. A worsening was observed in all risk factors except smoking habit, the incidence of myocardial infarction or stroke, and the number of individuals classified as at medium or high risk for cardiovascular events. In addition, a decrease in stress level and alcohol consumption was also seen. CONCLUSION: The adoption of consistent health policies by companies is imperative in order to reduce the risk factors and the future costs associated with illness and absenteeism.


Subject(s)
Cardiovascular Diseases/diagnosis , Disease Progression , Mass Screening/methods , Physical Examination/methods , Adult , Body Mass Index , Cardiovascular Diseases/prevention & control , Cholesterol/blood , Diabetes Mellitus/blood , Female , Humans , Life Style , Male , Middle Aged , Retrospective Studies , Risk Assessment/classification , Risk Assessment/methods , Risk Factors , Sex Factors , Smoking , Stress, Psychological/diagnosis , Time Factors
9.
PLoS One ; 10(6): e0129966, 2015.
Article in English | MEDLINE | ID: mdl-26047133

ABSTRACT

OBJECTIVES: The prevalence of cardiovascular disease risk factors has increased worldwide. However, the prevalence and clustering of cardiovascular disease risk factors among Tibetans are currently unknown. We aimed to explore the prevalence and clustering of cardiovascular disease risk factors among Tibetan adults in China. METHODS: In 2011, 1659 Tibetan adults (aged ≥ 18 years) from Changdu, China were recruited to this cross-sectional study. Questionnaires, physical examinations and laboratory testing were completed, and the prevalence of cardiovascular disease risk factors, including hypertension, diabetes, overweight/obesity, dyslipidemia, and current smoking, was estimated. Associations between the clustering of cardiovascular disease risk factors and demographic characteristics and geographic altitude were assessed. RESULTS: The age-standardized prevalence of hypertension, diabetes, overweight or obesity, dyslipidemia, and current smoking were 62.4%, 6.4%, 34.3%, 42.7%, and 6.1%, respectively, and these risk factors were associated with age, gender, education level, yearly family income, altitude, occupation, and butter tea consumption (P < 0.05). Overall, the age-adjusted prevalence of clustering of ≥ 1, ≥ 2, and ≥ 3 cardiovascular disease risk factors were 79.4%, 47.1%, and 20.9%, respectively. Clustering of ≥ 2 and ≥ 3 cardiovascular disease risk factors was higher among Tibetans with a higher education level and yearly family income, and among those living at an altitude < 3500 m or in a township. CONCLUSIONS: The prevalence of cardiovascular disease risk factors, especially hypertension, was high in Tibetans. Moreover, clustering of cardiovascular disease risk factors was increased among those with higher socioeconomic status, among lamas, and among those living at an altitude < 3500 m. These findings suggest that without the immediate implementation of an efficient policy to control these risk factors, cardiovascular disease will eventually become a major disease burden among Tibetans.


Subject(s)
Cardiovascular Diseases/epidemiology , Population Surveillance/methods , Risk Assessment/methods , Adolescent , Adult , Aged , Asian People , Cardiovascular Diseases/ethnology , Cardiovascular Diseases/etiology , Cluster Analysis , Cross-Sectional Studies , Dyslipidemias/complications , Humans , Hypertension/complications , Middle Aged , Obesity/complications , Overweight/complications , Prevalence , Risk Assessment/classification , Risk Assessment/statistics & numerical data , Risk Factors , Smoking/adverse effects , Surveys and Questionnaires , Tibet/epidemiology , Young Adult
10.
Av. diabetol ; 31(3): 102-112, May-Jun 2015.
Article in Spanish | IBECS | ID: ibc-140305

ABSTRACT

Cardiovascular disease (CVD) remains the leading cause of death in people with diabetes mellitus, whose cardiovascular mortality risk is 2-4 times that of the general population. Although practice guidelines recommend calculating CVD risk in diabetes, few models for estimating cardiovascular risk have been developed specifically for people with diabetes. The first CVD prediction models for type 2 diabetes, which included HbA1c and diabetes duration alongside the classical risk factors, are not contemporary and perform suboptimally in populations other than those in which they were developed. Building updated, population-derived and externally validated cardiovascular risk models would enable earlier, more aggressive, patient-centred preventive interventions aimed at curbing the ongoing epidemic of CVD in people with diabetes.


Subject(s)
Female , Humans , Male , Calibration/standards , Diabetes Mellitus, Type 2/blood , Diabetes Mellitus, Type 2/metabolism , Coronary Disease/congenital , Coronary Disease/metabolism , Quality of Life/psychology , Spain/ethnology , Risk Assessment , Risk Assessment/methods , Diabetes Mellitus, Type 2/genetics , Diabetes Mellitus, Type 2/pathology , Coronary Disease/complications , Coronary Disease/genetics , /standards , Quality of Life , Risk Assessment/classification , Risk Assessment/ethics
11.
Eur J Epidemiol ; 30(4): 299-304, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25724473

ABSTRACT

The Net Reclassification Improvement (NRI) has become a popular metric in recent years for evaluating improvement in disease prediction models. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of the event and non-event NRIs, which may provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
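A minimal sketch of the quantities involved: the event and non-event NRI components for a category-based reclassification, plus a bootstrap covariance between them, which is the ingredient a confidence ellipse is built from. The risk cutoffs and simulated data are assumptions for illustration, not the EPIC-Potsdam analysis.

```python
# Hypothetical sketch: event and non-event NRI components with assumed risk
# categories, plus their bootstrap covariance (the basis for a confidence ellipse).
import numpy as np

rng = np.random.default_rng(3)

def nri_components(y, risk_old, risk_new, cutoffs=(0.1, 0.2)):
    """Return (event NRI, non-event NRI) for category-based reclassification."""
    cat_old = np.digitize(risk_old, cutoffs)
    cat_new = np.digitize(risk_new, cutoffs)
    up, down = cat_new > cat_old, cat_new < cat_old
    ev, ne = y == 1, y == 0
    nri_event = up[ev].mean() - down[ev].mean()        # net upward movement among events
    nri_nonevent = down[ne].mean() - up[ne].mean()     # net downward movement among non-events
    return nri_event, nri_nonevent

# Toy data: old and new predicted risks, with a modest improvement for events.
n = 4000
y = rng.binomial(1, 0.15, n)
risk_old = np.clip(0.15 + 0.05 * (y - 0.15) + rng.normal(0, 0.08, n), 0.001, 0.999)
risk_new = np.clip(risk_old + 0.03 * (y - 0.15) + rng.normal(0, 0.02, n), 0.001, 0.999)

boot = np.array([
    nri_components(y[idx], risk_old[idx], risk_new[idx])
    for idx in (rng.integers(0, n, n) for _ in range(500))
])
print("point estimate (event NRI, non-event NRI):", nri_components(y, risk_old, risk_new))
print("bootstrap covariance of the two components:\n", np.cov(boot.T))
```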


Subject(s)
Chronic Disease/classification , Chronic Disease/epidemiology , Models, Statistical , Predictive Value of Tests , Risk Assessment/classification , Aged , Confidence Intervals , Data Interpretation, Statistical , Female , Humans , Male , Middle Aged , Risk Assessment/methods , Risk Factors
12.
Int Arch Occup Environ Health ; 88(8): 1069-75, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25702173

ABSTRACT

BACKGROUND: Prognostic models including age, self-rated health and prior sickness absence (SA) have been found to predict high (≥ 30) SA days and high (≥ 3) SA episodes during 1-year follow-up. More predictors of high SA are needed to improve these SA prognostic models. The purpose of this study was to investigate fatigue as a new predictor in SA prognostic models using risk reclassification methods and measures. METHODS: This was a prospective cohort study with 1-year follow-up of 1,137 office workers. Fatigue was measured at baseline with the 20-item Checklist Individual Strength and added to the existing SA prognostic models. SA days and episodes during 1-year follow-up were retrieved from an occupational health service register. The added value of fatigue was investigated with the Net Reclassification Index (NRI) and integrated discrimination improvement (IDI) measures. RESULTS: In total, 579 (51%) office workers had complete data for analysis. Fatigue was prospectively associated with both high SA days and episodes. The NRI revealed that adding fatigue to the SA days model correctly reclassified workers with high SA days, but incorrectly reclassified workers without high SA days. The IDI indicated no improvement in risk discrimination by the SA days model. Both NRI and IDI showed that the prognostic model predicting high SA episodes did not improve when fatigue was added as a predictor variable. CONCLUSION: In the present study, fatigue increased false-positive rates, which may reduce the cost-effectiveness of interventions for preventing SA.
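For reference, the IDI used above is the change in the discrimination slope (mean predicted risk in cases minus mean predicted risk in non-cases) when the new predictor is added. A minimal sketch, with simulated data standing in for the office-worker cohort:

```python
# Hypothetical sketch: integrated discrimination improvement (IDI) as the change
# in discrimination slope between an old model and a model with an added predictor.
import numpy as np

def idi(y, risk_old, risk_new):
    ev, ne = y == 1, y == 0
    slope_old = risk_old[ev].mean() - risk_old[ne].mean()   # discrimination slope, old model
    slope_new = risk_new[ev].mean() - risk_new[ne].mean()   # discrimination slope, new model
    return slope_new - slope_old

rng = np.random.default_rng(4)
n = 600
y = rng.binomial(1, 0.3, n)                                  # stand-in for "high sickness absence"
risk_old = np.clip(0.3 + 0.10 * (y - 0.3) + rng.normal(0, 0.05, n), 0, 1)
risk_new = np.clip(risk_old + 0.02 * (y - 0.3) + rng.normal(0, 0.03, n), 0, 1)
print(f"IDI: {idi(y, risk_old, risk_new):.4f}")
```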


Subject(s)
Fatigue/epidemiology , Occupational Diseases/epidemiology , Sick Leave/classification , Absenteeism , Adult , Checklist , Fatigue/etiology , Female , Humans , Male , Middle Aged , Models, Theoretical , Occupational Diseases/etiology , Occupational Health Services/statistics & numerical data , Prospective Studies , Risk Assessment/classification , Risk Factors , Sick Leave/statistics & numerical data
14.
Regul Toxicol Pharmacol ; 70(3): 590-604, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25239592

ABSTRACT

Recent EU legislation has introduced endocrine disrupting properties as a hazard-based "cut-off" criterion for the approval of active substances as pesticides and biocides. Currently, no specific science-based approach for the assessment of substances with endocrine disrupting properties has been agreed upon, although this new legislation provides interim criteria based on classification and labelling. Different proposals for decision making on potential endocrine disrupting properties in human health risk assessment have been developed by the German Federal Institute for Risk Assessment (BfR) and other regulatory bodies. All these frameworks, although differing with regard to hazard characterisation, include a toxicological assessment of the adversity of effects, the evaluation of underlying modes/mechanisms of action in animals and considerations concerning the relevance of effects to humans. Three options for regulatory decision making were tested on 39 pesticides to assess their applicability and to analyse their potential impact on the regulatory status of active substances that are currently approved for use in Europe: Option 1, based purely on hazard identification (adversity, mode of action, and the plausibility that both are related); Option 2, based on hazard identification and additional elements of hazard characterisation (severity and potency); Option 3, based on the interim criteria laid down in the recent EU pesticides legislation. Additionally, the data analysed in this study were used to address which parts of the endocrine system were affected, which studies were the most sensitive, and whether no observed adverse effect levels could be identified for substances with ED properties. The results of this exercise represent preliminary categorisations and must not be used as a basis for definitive regulatory decisions. They demonstrate that a combination of criteria for hazard identification with additional criteria of hazard characterisation allows prioritising and differentiating between substances with regard to their regulatory concern. It is proposed to integrate these elements into a decision matrix to be used within a weight of evidence approach for the toxicological categorisation of relevant endocrine disruptors and to consider all parts of the endocrine system for regulatory decision making on endocrine disruption.


Subject(s)
Decision Making , Endocrine Disruptors/toxicity , Pesticides/toxicity , Animals , Endocrine Disruptors/classification , European Union , Government Regulation , Humans , Pesticides/classification , Risk Assessment/classification , Risk Assessment/legislation & jurisprudence , Risk Assessment/methods
15.
Am J Hematol ; 89(8): 813-8, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24782398

ABSTRACT

Approximately 30% of patients with chronic myelomonocytic leukemia (CMML) have karyotypic abnormalities, and this low frequency has made using cytogenetic data for the prognostication of CMML patients challenging. Recently, a three-tiered cytogenetic risk stratification system for CMML patients has been proposed by a Spanish study group. Here we assessed the prognostic impact of cytogenetic abnormalities on overall survival (OS) and leukemia-free survival (LFS) in 417 CMML patients from our institution. Overall, the Spanish cytogenetic risk classification effectively stratified patients into different risk groups, with a median OS of 33 months in the low-, 24 months in the intermediate- and 14 months in the high-risk groups. Within the proposed high-risk group, however, marked differences in OS were observed. Patients with isolated trisomy 8 showed a median OS of 22 months, similar to the intermediate-risk group (P = 0.132), but significantly better than other patients in the high-risk group (P = 0.018). Furthermore, patients with more than three chromosomal abnormalities showed a significantly shorter OS compared with patients with three abnormalities (8 vs. 15 months, P = 0.004), suggesting a possible separate risk category. If we simply moved trisomy 8 to the intermediate-risk category, the modified cytogenetic grouping would provide better separation of OS and LFS, and its prognostic impact was independent of other risk parameters. Our study results strongly advocate for the incorporation of cytogenetic information in the risk model for CMML.


Subject(s)
Chromosome Aberrations , Leukemia, Myelomonocytic, Chronic/genetics , Trisomy , Adult , Aged , Aged, 80 and over , Chromosomes, Human, Pair 8 , Female , Humans , Karyotyping , Leukemia, Myelomonocytic, Chronic/classification , Leukemia, Myelomonocytic, Chronic/mortality , Leukemia, Myelomonocytic, Chronic/pathology , Male , Middle Aged , Risk , Risk Assessment/classification , Survival Analysis
16.
Pharmacoepidemiol Drug Saf ; 23(7): 667-78, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24821575

ABSTRACT

BACKGROUND: The need for formal and structured approaches for benefit-risk assessment of medicines is increasing, as is the complexity of the scientific questions addressed before making decisions on the benefit-risk balance of medicines. We systematically collected, appraised and classified available benefit-risk methodologies to facilitate and inform their future use. METHODS: A systematic review of publications identified benefit-risk assessment methodologies. Methodologies were appraised on their fundamental principles, features, graphical representations, assessability and accessibility. We created a taxonomy of methodologies to facilitate understanding and choice. RESULTS: We identified 49 methodologies and critically appraised and classified them into four categories: frameworks, metrics, estimation techniques and utility survey techniques. Eight frameworks describe qualitative steps in benefit-risk assessment and eight quantify the benefit-risk balance. Nine metric indices include threshold indices, which measure either benefit or risk; health indices, which measure quality of life over time; and trade-off indices, which integrate benefits and risks. Six estimation techniques support benefit-risk modelling and evidence synthesis. Four utility survey techniques elicit robust value preferences from stakeholders relevant to the benefit-risk decisions. CONCLUSIONS: Methodologies to support the benefit-risk assessment of medicines are diverse, and each is associated with different limitations and strengths. There is no 'one-size-fits-all' method, and a combination of methods may be needed for each benefit-risk assessment. The taxonomy introduced herein may guide the choice of adequate methodologies. Finally, we recommend 13 of the 49 methodologies for further appraisal for use in real-life benefit-risk assessment of medicines.


Subject(s)
Drug-Related Side Effects and Adverse Reactions/epidemiology , Models, Statistical , Risk Assessment/methods , Decision Making , Humans , Pharmaceutical Preparations/administration & dosage , Quality of Life , Risk Assessment/classification
17.
Clin Rehabil ; 28(12): 1218-24, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24849795

ABSTRACT

OBJECTIVE: To evaluate the relative accuracy of a newly developed Stroke Assessment of Fall Risk (SAFR) for classifying fallers and non-fallers, compared with a health system fall risk screening tool, the Fall Harm Risk Screen. DESIGN AND SETTING: Prospective quality improvement study conducted at an inpatient stroke rehabilitation unit at a large urban university hospital. PARTICIPANTS: Patients admitted for inpatient stroke rehabilitation (N = 419) with imaging or clinical evidence of ischemic or hemorrhagic stroke, between 1 August 2009 and 31 July 2010. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Sensitivity, specificity, and area under the curve of receiver operating characteristic curves for both scales' classifications, based on the fall risk score completed upon admission to inpatient stroke rehabilitation. RESULTS: A total of 68 (16%) participants fell at least once. The SAFR was significantly more accurate than the Fall Harm Risk Screen (p < 0.001), with an area under the curve of 0.73, positive predictive value of 0.29, and negative predictive value of 0.94. For the Fall Harm Risk Screen, the area under the curve was 0.56, positive predictive value was 0.19, and negative predictive value was 0.86. Sensitivity and specificity of the SAFR (0.78 and 0.63, respectively) were higher than those of the Fall Harm Risk Screen (0.57 and 0.48, respectively). CONCLUSIONS: An evidence-derived, population-specific fall risk assessment may more accurately predict fallers than a general fall risk screen for stroke rehabilitation patients. While the SAFR improves upon the accuracy of a general assessment tool, additional refinement may be warranted.
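The accuracy measures reported above follow directly from a 2x2 classification table plus the continuous score. A small sketch of how they are derived is below; the simulated scores, outcomes and cutoff are assumptions, not the SAFR data.

```python
# Hypothetical sketch: sensitivity, specificity, PPV, NPV and AUC for a fall-risk
# score dichotomised at an assumed admission cutoff.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 419
fell = rng.binomial(1, 0.16, n)                    # ~16% of patients fell at least once
score = fell * 1.2 + rng.normal(0, 1, n)           # continuous fall-risk score (stand-in)
flagged = score >= 0.5                             # "at risk" classification at an assumed cutoff

tp = np.sum(flagged & (fell == 1)); fn = np.sum(~flagged & (fell == 1))
fp = np.sum(flagged & (fell == 0)); tn = np.sum(~flagged & (fell == 0))

print(f"sensitivity {tp/(tp+fn):.2f}  specificity {tn/(tn+fp):.2f}")
print(f"PPV {tp/(tp+fp):.2f}  NPV {tn/(tn+fn):.2f}")
print(f"AUC {roc_auc_score(fell, score):.2f}")
```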


Subject(s)
Accidental Falls/prevention & control , Risk Assessment/classification , Stroke/complications , Accidental Falls/statistics & numerical data , Age Distribution , Area Under Curve , Female , Humans , Male , Middle Aged , Nursing Assessment , Predictive Value of Tests , Prospective Studies , Quality Improvement , ROC Curve , Rehabilitation Centers , Risk Assessment/methods , Stroke/classification , Stroke Rehabilitation
18.
Ann Intern Med ; 160(2): 122-31, 2014 Jan 21.
Article in English | MEDLINE | ID: mdl-24592497

ABSTRACT

The net reclassification improvement (NRI) is an increasingly popular measure for evaluating improvements in risk predictions. This article details a review of 67 publications in high-impact general clinical journals that considered the NRI. Incomplete reporting of NRI methods, incorrect calculation, and common misinterpretations were found. To aid improved applications of the NRI, the article elaborates on several aspects of the computation and interpretation in various settings. Limitations and controversies are discussed, including the effect of miscalibration of prediction models, the use of the continuous NRI and "clinical NRI," and the relation with decision analytic measures. A systematic approach toward presenting NRI analysis is proposed: Detail and motivate the methods used for computation of the NRI, use clinically meaningful risk cutoffs for the category-based NRI, report both NRI components, address issues of calibration, and do not interpret the overall NRI as a percentage of the study population reclassified. Promising NRI findings need to be followed with decision analytic or formal cost-effectiveness evaluations.


Subject(s)
Models, Statistical , Risk Assessment/classification , Data Interpretation, Statistical , Decision Support Techniques , Humans , Risk Assessment/methods
20.
Stat Med ; 33(11): 1914-27, 2014 May 20.
Article in English | MEDLINE | ID: mdl-24353130

ABSTRACT

Risk prediction models play an important role in prevention and treatment of several diseases. Models that are in clinical use are often refined and improved. In many instances, the most efficient way to improve a successful model is to identify subgroups for which there is a specific biological rationale for improvement and tailor the improved model to individuals in these subgroups, an approach especially in line with personalized medicine. At present, we lack statistical tools to evaluate improvements targeted to specific subgroups. Here, we propose simple tools to fill this gap. First, we extend a recently proposed measure, the Integrated Discrimination Improvement, using a linear model with covariates representing the subgroups. Next, we develop graphical and numerical tools that compare reclassification of two models, focusing only on those subjects for whom the two models reclassify differently. We apply these approaches to BRCAPRO, a genetic risk prediction model for breast and ovarian cancer, using data from MD Anderson Cancer Center. We also conduct a simulation study to investigate properties of the new reclassification measure and compare it with currently used measures. Our results show that the proposed tools can successfully uncover subgroup specific model improvements.
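One of the tools described, comparing reclassification only among subjects whom the two models place in different categories, split by subgroup, can be sketched as follows; the cutoffs, subgroup labels and simulated risks are assumptions, not the BRCAPRO/MD Anderson data.

```python
# Hypothetical sketch: cross-tabulating two models' risk categories restricted to
# discordantly reclassified subjects, broken down by an assumed subgroup.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 2000
subgroup = rng.choice(["subgroup A", "subgroup B"], n, p=[0.3, 0.7])
risk_a = rng.beta(2, 10, n)                                   # model A predicted risk
risk_b = np.clip(risk_a + rng.normal(0, 0.05, n), 0, 1)       # model B predicted risk

cutoffs = [0.05, 0.20]                                         # assumed risk-category cutoffs
cat_a = np.digitize(risk_a, cutoffs)
cat_b = np.digitize(risk_b, cutoffs)
discordant = cat_a != cat_b                                    # only subjects the models reclassify differently

table = pd.crosstab(
    [pd.Series(subgroup[discordant], name="subgroup"), pd.Series(cat_a[discordant], name="model A cat")],
    pd.Series(cat_b[discordant], name="model B cat"),
)
print(table)
```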


Subject(s)
Data Interpretation, Statistical , Models, Genetic , Risk Assessment/methods , Breast Neoplasms/genetics , Computer Simulation , Female , Humans , Ovarian Neoplasms/genetics , Risk Assessment/classification