Results 1 - 20 of 44
1.
Article in English | MEDLINE | ID: mdl-38734893

ABSTRACT

BACKGROUND: A lack of consensus exists across guidelines as to which risk model should be used for the primary prevention of cardiovascular disease (CVD). Our objective was to determine potential improvements in the number needed to treat (NNT) and number of events prevented (NEP) using different risk models in patients eligible for risk stratification. METHODS: A retrospective observational cohort was assembled from primary care patients in Ontario, Canada, from January 1, 2010, to December 31, 2014, and followed for up to 5 years. Risk estimation was undertaken in patients 40-75 years of age, without CVD, diabetes, or chronic kidney disease, using the Framingham Risk Score (FRS), Pooled Cohort Equations (PCEs), a recalibrated FRS (R-FRS), Systematic Coronary Risk Evaluation 2 (SCORE2), and the low-risk region recalibrated SCORE2 (LR-SCORE2). RESULTS: The cohort consisted of 47,399 patients (59% women, mean age 54). The NNT with statins was lowest for SCORE2 at 40, followed by LR-SCORE2 at 41, R-FRS at 43, PCEs at 55, and FRS at 65. Models that selected individuals with a lower NNT recommended statins to fewer, but higher-risk, patients. For instance, SCORE2 recommended statins to 7.9% of patients (5-year CVD incidence 5.92%). The FRS, however, recommended statins to 34.6% of patients (5-year CVD incidence 4.01%). Accordingly, the NEP was highest for the FRS at 406 and lowest for SCORE2 at 156. CONCLUSIONS: Newer models such as SCORE2 may improve statin allocation to higher-risk groups with a lower NNT but prevent fewer events at the population level.
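
The trade-off reported here, a lower NNT when a model restricts statins to a smaller, higher-risk group but fewer events prevented overall, follows from two pieces of arithmetic: NNT = 1 / (baseline risk x relative risk reduction) and NEP = number treated x baseline risk x relative risk reduction. Below is a minimal sketch assuming a 25% relative risk reduction with statins and hypothetical treatment shares; it is not an attempt to reproduce the study's exact figures, which require the patient-level data.

```python
# Generic NNT / NEP arithmetic for a risk-model-based statin strategy.
# The relative risk reduction and treatment shares below are hypothetical;
# reproducing the study's figures would require the patient-level data.

def number_needed_to_treat(baseline_risk, rrr):
    """NNT = 1 / absolute risk reduction."""
    return 1.0 / (baseline_risk * rrr)

def number_of_events_prevented(n_treated, baseline_risk, rrr):
    """Expected events prevented among everyone the model recommends for statins."""
    return n_treated * baseline_risk * rrr

RRR = 0.25            # assumed relative risk reduction with statins
COHORT = 47_399       # cohort size reported in the abstract

# A "selective" model treats fewer, higher-risk patients; a "broad" model the opposite.
for label, share_treated, five_year_risk in [("selective", 0.08, 0.06),
                                             ("broad",     0.35, 0.04)]:
    n_treated = COHORT * share_treated
    print(f"{label:9s}  NNT={number_needed_to_treat(five_year_risk, RRR):3.0f}  "
          f"NEP={number_of_events_prevented(n_treated, five_year_risk, RRR):3.0f}")
```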

3.
J Am Heart Assoc ; 13(5): e033768, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38390797

ABSTRACT

BACKGROUND: Transcatheter aortic valve implantation (TAVI) has seen indication expansion and thus exponential growth in demand over the past decade. In many jurisdictions, the growing demand has outpaced capacity, increasing wait times and preprocedural adverse events. In this study, we derived prediction models that estimate the risk of adverse events on the waitlist and developed a triage tool to identify patients who should be prioritized for TAVI. METHODS AND RESULTS: We included adult patients in Ontario, Canada referred for TAVI and followed up until one of the following events first occurred: death, TAVI procedure, removal from waitlist, or end of the observation period. We used subdistribution hazards models to find significant predictors for each of the following outcomes: (1) all-cause death while on the waitlist; (2) all-cause hospitalization while on the waitlist; (3) receipt of urgent TAVI; and (4) a composite outcome. The median predicted risk at 12 weeks was chosen as a threshold for a maximum acceptable risk while on the waitlist and incorporated in the triage tool to recommend individualized wait times. Of 13 128 patients, 586 died while on the waitlist, and 4343 had at least 1 hospitalization. A total of 6854 TAVIs were completed, of which 1135 were urgent procedures. We were able to create parsimonious models for each outcome that included clinically relevant predictors. CONCLUSIONS: The Canadian TAVI Triage Tool (CAN3T) is a triage tool to assist clinicians in the prioritization of patients who should have timely access to TAVI. We anticipate that the CAN3T will be a valuable tool as it may improve equity in access to care, reduce preventable adverse events, and improve system efficiency.
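
Operationally, a triage tool of this kind compares a patient's predicted cumulative risk on the waitlist against a maximum acceptable risk (here, the median predicted risk at 12 weeks) and recommends a wait no longer than the point where the risk curve crosses that threshold. The sketch below illustrates that logic with a toy exponential risk curve and assumed rates; the actual CAN3T models were fit with subdistribution (Fine-Gray) hazards on provincial registry data.

```python
# Turning a predicted waitlist-risk curve into a recommended maximum wait time.
# The exponential risk curve and weekly rates are illustrative stand-ins; they
# are not the CAN3T models.
import math

def predicted_risk(weeks, weekly_event_rate):
    """Cumulative probability of a waitlist event by a given week (toy model)."""
    return 1.0 - math.exp(-weekly_event_rate * weeks)

def recommended_max_wait(weekly_event_rate, risk_threshold, horizon_weeks=52):
    """First week at which predicted risk exceeds the maximum acceptable risk."""
    for week in range(1, horizon_weeks + 1):
        if predicted_risk(week, weekly_event_rate) > risk_threshold:
            return week
    return horizon_weeks

# Threshold analogous to the paper's "median predicted risk at 12 weeks",
# here derived from a reference patient with a 0.5% weekly event rate.
threshold = predicted_risk(12, 0.005)

for label, rate in [("lower-risk referral", 0.003), ("higher-risk referral", 0.011)]:
    print(label, "-> recommended wait of at most",
          recommended_max_wait(rate, threshold), "weeks")
```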


Subjects
Aortic Valve Stenosis, Transcatheter Aortic Valve Replacement, Humans, Transcatheter Aortic Valve Replacement/adverse effects, Transcatheter Aortic Valve Replacement/methods, Aortic Valve Stenosis/diagnostic imaging, Aortic Valve Stenosis/surgery, Aortic Valve Stenosis/etiology, Waiting Lists, Triage, Treatment Outcome, Ontario, Aortic Valve/surgery, Risk Factors
4.
Eur J Prev Cardiol ; 31(6): 668-676, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-37946603

ABSTRACT

AIMS: Systematic Coronary Risk Evaluation Model 2 (SCORE2) was recently developed to predict atherosclerotic cardiovascular disease (ASCVD) in Europe. Whether these models could be used outside of Europe is not known. The objective of this study was to test the validity of SCORE2 in a large Canadian cohort. METHODS AND RESULTS: A primary care cohort of persons with routinely collected electronic medical record data from 1 January 2010 to 31 December 2014, in Ontario, Canada, was used for validation. The SCORE2 models for younger persons (YP) were applied to 57 409 individuals aged 40-69 while the models for older persons (OPs) were applied to 9885 individuals 70-89 years of age. Five-year ASCVD predictions from both the uncalibrated and low-risk region recalibrated SCORE2 models were evaluated. The C-statistic for SCORE2-YP was 0.74 in women and 0.69 in men. The uncalibrated SCORE2-YP overestimated risk by 17% in women and underestimated by 2% in men. In contrast, the low-risk region recalibrated model demonstrated worse calibration, overestimating risk by 100% in women and 36% in men. The C-statistic for SCORE2-OP was 0.64 and 0.62 in older women and men, respectively. The uncalibrated SCORE2-OP overestimated risk by more than 100% in both sexes. The low-risk region recalibrated model demonstrated improved calibration but still overestimated risk by 60% in women and 13% in men. CONCLUSION: The performance of SCORE2 to predict ASCVD risk in Canada varied by age group and depended on whether regional calibration was applied. This underscores the necessity for validation assessment of SCORE2 prior to implementation in new jurisdictions.
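
The over- and underestimation percentages and C-statistics quoted above are, respectively, calibration-in-the-large (mean predicted risk versus observed incidence) and rank discrimination. Below is a minimal sketch on synthetic data; roc_auc_score ignores censoring, so it only approximates the time-to-event C-statistic used in the paper, and the synthetic "model" is not SCORE2.

```python
# Minimal calibration-in-the-large and discrimination check for a 5-year risk
# model on synthetic data (no censoring).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 50_000
true_risk = rng.beta(1, 30, size=n)              # synthetic "true" 5-year ASCVD risk
event = rng.random(n) < true_risk                # observed 5-year outcome
predicted = np.clip(true_risk * 1.4, 0, 1)       # a model that overestimates risk by ~40%

observed_rate = event.mean()
mean_predicted = predicted.mean()
relative_miscalibration = (mean_predicted - observed_rate) / observed_rate

print(f"observed 5-year incidence : {observed_rate:.3%}")
print(f"mean predicted risk       : {mean_predicted:.3%}")
print(f"over/under-estimation     : {relative_miscalibration:+.0%}")
print(f"approximate C-statistic   : {roc_auc_score(event, predicted):.2f}")
```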


In this study, new tools [Systematic Coronary Risk Evaluation Model 2 (SCORE2)] that were developed across Europe to predict heart attack and stroke risk in healthy individuals were tested independently for the first time in a Canadian setting. Key findings are as follows: (1) the accuracy of predictions from SCORE2 in Canadians depends on the age group considered and on whether uncalibrated or recalibrated equations are being used; (2) independent assessment of tools such as SCORE2 remains useful prior to widespread implementation in new jurisdictions.


Subjects
Atherosclerosis, Cardiovascular Diseases, Male, Humans, Female, Aged, Aged, 80 and over, Risk Factors, Risk Assessment/methods, Cohort Studies, Ontario, Primary Health Care
5.
Ann Intern Med ; 176(12): 1638-1647, 2023 12.
Article in English | MEDLINE | ID: mdl-38079638

ABSTRACT

BACKGROUND: Prediction of atherosclerotic cardiovascular disease (ASCVD) in primary prevention assessments exclusively with laboratory results may facilitate automated risk reporting and improve uptake of preventive therapies. OBJECTIVE: To develop and validate sex-specific prediction models for ASCVD using age and routine laboratory tests and compare their performance with that of the pooled cohort equations (PCEs). DESIGN: Derivation and validation of the CANHEART (Cardiovascular Health in Ambulatory Care Research Team) Lab Models. SETTING: Population-based cohort study in Ontario, Canada. PARTICIPANTS: A derivation and internal validation cohort of adults aged 40 to 75 years without cardiovascular disease from April 2009 to December 2015; an external validation cohort of primary care patients from January 2010 to December 2014. MEASUREMENTS: Age and laboratory predictors measured in the outpatient setting included serum total cholesterol, high-density lipoprotein cholesterol, triglycerides, hemoglobin, mean corpuscular volume, platelets, leukocytes, estimated glomerular filtration rate, and glucose. The ASCVD outcomes were defined as myocardial infarction, stroke, and death from ischemic heart or cerebrovascular disease within 5 years. RESULTS: Sex-specific models were developed and internally validated in 2 160 497 women and 1 833 147 men. They were well calibrated, with relative differences less than 1% between mean predicted and observed risk for both sexes. The c-statistic was 0.77 in women and 0.71 in men. External validation in 31 697 primary care patients showed a relative difference less than 14% and an absolute difference less than 0.3 percentage points in mean predicted and observed risks for both sexes. The c-statistics for the laboratory models were 0.72 for both sexes and were not statistically significantly different from those for the PCEs in women (change in c-statistic, -0.01 [95% CI, -0.03 to 0.01]) or men (change in c-statistic, -0.01 [CI, -0.04 to 0.02]). LIMITATION: Medication use was not available at the population level. CONCLUSION: The CANHEART Lab Models predict ASCVD with similar accuracy to more complex models, such as the PCEs. PRIMARY FUNDING SOURCE: Canadian Institutes of Health Research.
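
A rough sketch of what a sex-specific, laboratory-only risk model looks like when assembled from predictors of this kind follows, using synthetic data and logistic regression as a stand-in; the abstract does not state the CANHEART Lab Models' exact functional form, and the predictors, coefficients, and outcome below are invented for illustration.

```python
# Sketch of a sex-specific, lab-only ASCVD model on synthetic data. Logistic
# regression is a stand-in; the CANHEART Lab Models' actual form is not
# reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20_000
X = np.column_stack([
    rng.uniform(40, 75, n),        # age, years
    rng.normal(5.0, 1.0, n),       # total cholesterol, mmol/L
    rng.normal(1.3, 0.3, n),       # HDL cholesterol, mmol/L
    rng.normal(1.5, 0.8, n),       # triglycerides, mmol/L
    rng.normal(90, 20, n),         # eGFR, mL/min/1.73 m^2
    rng.normal(5.5, 1.0, n),       # glucose, mmol/L
])
# Synthetic outcome: risk rises with age, cholesterol, and glucose; falls with HDL and eGFR.
lin = -9 + 0.07*X[:, 0] + 0.3*X[:, 1] - 0.8*X[:, 2] + 0.2*X[:, 3] - 0.01*X[:, 4] + 0.1*X[:, 5]
y = rng.random(n) < 1 / (1 + np.exp(-lin))

model = LogisticRegression(max_iter=1000).fit(X, y)   # one sex-specific model
c_stat = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"c-statistic on training data: {c_stat:.2f}")
```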


Subjects
Atherosclerosis, Cardiovascular Diseases, Adult, Male, Humans, Female, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/prevention & control, Cohort Studies, Risk Assessment/methods, Atherosclerosis/diagnosis, Atherosclerosis/epidemiology, Cholesterol, Ontario/epidemiology, Risk Factors
6.
MDM Policy Pract ; 8(2): 23814683231202993, 2023.
Article in English | MEDLINE | ID: mdl-37900721

ABSTRACT

Objective. To conduct cost-utility analyses for Computed Tomography To Strength (CT2S), a novel osteoporosis screening service, compared with dual-energy X-ray absorptiometry (DXA), treating all without screening, and no screening, for Dutch postmenopausal women referred to a fracture liaison service (FLS). CT2S uses CT scans to generate femur models and simulate sideways fall scenarios for bone strength assessment. Methods. Early health technology assessment (HTA) was adopted to evaluate CT2S as a novel osteoporosis screening tool for secondary fracture prevention. We constructed a 2-dimensional simulation model considering 4 strategies (no screening, treat all without screening, DXA, CT2S) together with screening intervals (5 y, 2 y), treatments (oral alendronate, zoledronic acid), and discount rate scenarios among Dutch women in 3 age groups (60s, 70s, and 80s). Strategy comparisons were based on incremental cost-effectiveness ratios (ICERs), considering an ICER below €20,000 per QALY gained as cost-effective in the Netherlands. Results. Under the base-case scenario, CT2S versus DXA had estimated ICERs of €41,200 and €14,083 per QALY gained for the 60s and 70s age groups, respectively. For the 80s age group, CT2S was more effective and less costly than DXA. Changing treatment from weekly oral alendronate to annual zoledronic acid substantially decreased CT2S versus DXA ICERs across all age groups. Setting the screening interval to 2 y increased CT2S versus DXA ICERs to €100,333, €55,571, and €15,750 per QALY gained for the 60s, 70s, and 80s age groups, respectively. In all simulated populations and scenarios, CT2S was cost-effective (in some cases dominant) compared with the treat-all strategy and cost-saving (more effective and less costly) compared with no screening. Conclusion. CT2S was estimated to be potentially cost-effective in the 70s and 80s age groups considering the willingness-to-pay threshold of the Netherlands. This early HTA suggests CT2S as a potential novel osteoporosis screening tool for secondary fracture prevention. Highlights: For postmenopausal Dutch women who have been referred to the FLS, direct access to CT2S may be cost-effective compared with DXA for the 70s and 80s age groups when considering the ICER threshold of the Netherlands; this study positions CT2S as a potential novel osteoporosis screening tool for secondary fracture prevention in the clinical setting. A shorter screening interval of 2 y increases the effectiveness of both screening strategies, but the ICER of CT2S compared with DXA also increases substantially, which makes CT2S no longer cost-effective for the 70s age group; it remains cost-effective for individuals in their 80s. Annual zoledronic acid treatment with better adherence may contribute to a lower cost-effectiveness ratio when comparing CT2S with DXA screening and the treat-all strategy across all age groups.
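
The strategy comparisons above reduce to incremental cost-effectiveness ratios and dominance checks. The sketch below uses hypothetical cost and QALY pairs (not the study's estimates) and assumes each non-dominant comparison involves a costlier, more effective strategy.

```python
# Minimal ICER and dominance bookkeeping for competing screening strategies.
# Costs and QALYs are hypothetical placeholders, not the study's estimates.

strategies = {                     # strategy: (mean cost in euros, mean QALYs)
    "no screening": (10_000, 8.00),
    "treat all":    (11_500, 8.04),
    "DXA":          (10_800, 8.03),
    "CT2S":         (10_900, 8.05),
}
WTP = 20_000  # euros per QALY gained (Dutch willingness-to-pay threshold)

def icer(a, b):
    """ICER of strategy a versus comparator b; None means a dominates b."""
    (cost_a, qaly_a), (cost_b, qaly_b) = strategies[a], strategies[b]
    if qaly_a >= qaly_b and cost_a <= cost_b:
        return None                                # more effective, not more costly
    return (cost_a - cost_b) / (qaly_a - qaly_b)   # assumes a is costlier and more effective

for comparator in ["DXA", "treat all", "no screening"]:
    value = icer("CT2S", comparator)
    if value is None:
        verdict = "dominant"
    else:
        verdict = f"{value:,.0f} EUR/QALY" + (" (cost-effective)" if value <= WTP else "")
    print(f"CT2S vs {comparator}: {verdict}")
```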

7.
J Am Soc Nephrol ; 34(3): 482-494, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36857500

ABSTRACT

SIGNIFICANCE STATEMENT: The kidney failure risk equation (KFRE) uses age, sex, GFR, and urine albumin-to-creatinine ratio (ACR) to predict 2- and 5-year risk of kidney failure in populations with eGFR <60 mL/min per 1.73 m². However, the CKD-EPI 2021 creatinine equation for eGFR is now recommended for use but has not been fully tested in the context of the KFRE. In 59 cohorts comprising 312,424 patients with CKD, the authors assessed the predictive performance and calibration associated with the use of the CKD-EPI 2021 equation and whether additional variables and accounting for the competing risk of death improve the KFRE's performance. The KFRE generally performed well using the CKD-EPI 2021 eGFR in populations with eGFR <45 mL/min per 1.73 m² and was not improved by adding the 2-year prior eGFR slope and cardiovascular comorbidities. BACKGROUND: The kidney failure risk equation (KFRE) uses age, sex, GFR, and urine albumin-to-creatinine ratio (ACR) to predict kidney failure risk in people with GFR <60 mL/min per 1.73 m². METHODS: Using 59 cohorts with 312,424 patients with CKD, we tested several modifications for their potential to improve the KFRE: using the CKD-EPI 2021 creatinine equation for eGFR, substituting 1-year average ACR for single-measure ACR and 1-year average eGFR in participants with high eGFR variability, and adding 2-year prior eGFR slope and cardiovascular comorbidities. We also assessed calibration of the KFRE in subgroups of eGFR and age before and after accounting for the competing risk of death. RESULTS: The KFRE remained accurate and well calibrated overall using the CKD-EPI 2021 eGFR equation. The other modifications did not improve KFRE performance. In subgroups of eGFR 45-59 mL/min per 1.73 m² and in older adults using the 5-year time horizon, the KFRE demonstrated systematic underprediction and overprediction, respectively. We developed and tested a new model with a spline term in eGFR and incorporating the competing risk of mortality, resulting in more accurate calibration in those specific subgroups but not overall. CONCLUSIONS: The original KFRE is generally accurate for eGFR <45 mL/min per 1.73 m² when using the CKD-EPI 2021 equation. Incorporating competing risk methodology and splines for eGFR may improve calibration in low-risk settings with longer time horizons. Including historical averages, eGFR slopes, or a competing risk design did not meaningfully alter KFRE performance in most circumstances.
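
The KFRE family has the usual Cox-model shape: predicted risk = 1 - S0^exp(linear predictor), with ACR entered on the log scale. The sketch below shows that shape only; the baseline survival, coefficients, and centering constants are made up for illustration and are not the published KFRE values.

```python
# Shape of a KFRE-style risk equation: risk = 1 - S0 ** exp(linear predictor).
# The coefficients, centering constants, and baseline survival are made up;
# they are NOT the published KFRE values.
import math

def kfre_like_risk(age, male, egfr, acr, *,
                   s0=0.95,                           # hypothetical 5-year baseline survival
                   betas=(-0.02, 0.25, -0.05, 0.35),
                   centers=(70.0, 0.5, 35.0, math.log(100))):
    """Cox-model-style risk of kidney failure; ACR in mg/g is log-transformed."""
    x = (age, male, egfr, math.log(acr))
    lp = sum(b * (xi - c) for b, xi, c in zip(betas, x, centers))
    return 1.0 - s0 ** math.exp(lp)

# Two hypothetical patients with eGFR < 45 mL/min per 1.73 m²
print(f"eGFR 40, ACR 300 mg/g, 60-year-old man : {kfre_like_risk(60, 1, 40, 300):.1%}")
print(f"eGFR 40, ACR 30 mg/g, 75-year-old woman: {kfre_like_risk(75, 0, 40, 30):.1%}")
```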


Subjects
Renal Insufficiency, Chronic, Renal Insufficiency, Humans, Aged, Creatinine, Transcription Factors, Albumins
8.
Eur Heart J ; 44(13): 1157-1166, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36691956

ABSTRACT

AIMS: Chronic kidney disease (CKD) increases risk of cardiovascular disease (CVD). Less is known about how CVD associates with future risk of kidney failure with replacement therapy (KFRT). METHODS AND RESULTS: The study included 25 903 761 individuals from the CKD Prognosis Consortium with known baseline estimated glomerular filtration rate (eGFR) and evaluated the impact of prevalent and incident coronary heart disease (CHD), stroke, heart failure (HF), and atrial fibrillation (AF) events as time-varying exposures on KFRT outcomes. Mean age was 53 (standard deviation 17) years and mean eGFR was 89 mL/min/1.73 m²; 15% had diabetes and 8.4% had urinary albumin-to-creatinine ratio (ACR) available (median 13 mg/g); 9.5% had prevalent CHD, 3.2% prior stroke, 3.3% HF, and 4.4% prior AF. During follow-up, there were 269 142 CHD, 311 021 stroke, 712 556 HF, and 605 596 AF incident events, and 101 044 (0.4%) patients experienced KFRT. Both prevalent and incident CVD were associated with subsequent KFRT, with adjusted hazard ratios (HRs) of 3.1 [95% confidence interval (CI): 2.9-3.3], 2.0 (1.9-2.1), 4.5 (4.2-4.9), and 2.8 (2.7-3.1) after incident CHD, stroke, HF, and AF, respectively. HRs were highest in the first 3 months after the incident CVD event, declining toward baseline after 3 years. Incident HF hospitalizations showed the strongest association with KFRT [HR 46 (95% CI: 43-50) within 3 months] after adjustment for other CVD subtype incidence. CONCLUSION: Incident CVD events are strongly and independently associated with future KFRT risk, most notably after HF, then CHD, stroke, and AF. Optimal strategies for addressing the dramatic risk of KFRT following CVD events are needed.
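
Handling incident CVD as a time-varying exposure means splitting each person's follow-up into an unexposed record before the event and an exposed record after it (counting-process format) before fitting the hazard model. The sketch below shows that data layout with three hypothetical patients; the consortium analysis additionally handled multiple CVD subtypes, covariate adjustment, and competing risks.

```python
# Counting-process split for a time-varying exposure (incident CVD before KFRT).
# A minimal sketch of the data layout only, with hypothetical patients.
import pandas as pd

people = pd.DataFrame({
    "id":          [1, 2, 3],
    "followup_yr": [8.0, 5.0, 6.0],    # total follow-up (to KFRT, death, or censoring)
    "kfrt":        [1, 0, 1],          # kidney failure with replacement therapy at end
    "cvd_time_yr": [3.0, None, None],  # time of incident CVD event, if any
})

rows = []
for p in people.itertuples(index=False):
    if pd.notna(p.cvd_time_yr) and p.cvd_time_yr < p.followup_yr:
        # interval before the CVD event: unexposed, no outcome yet
        rows.append(dict(id=p.id, start=0.0, stop=p.cvd_time_yr, cvd=0, kfrt=0))
        # interval after the CVD event: exposed, outcome status at end of follow-up
        rows.append(dict(id=p.id, start=p.cvd_time_yr, stop=p.followup_yr, cvd=1, kfrt=p.kfrt))
    else:
        rows.append(dict(id=p.id, start=0.0, stop=p.followup_yr, cvd=0, kfrt=p.kfrt))

long_format = pd.DataFrame(rows)
print(long_format)
```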


Subjects
Cardiovascular Diseases, Renal Insufficiency, Chronic, Humans, Middle Aged, Cardiovascular Diseases/etiology, Cardiovascular Diseases/complications, Glomerular Filtration Rate, Heart Failure/epidemiology, Heart Failure/complications, Prognosis, Renal Insufficiency, Chronic/epidemiology, Renal Insufficiency, Chronic/etiology, Risk Factors, Stroke/etiology, Stroke/complications
9.
Am J Ophthalmol ; 247: 152-160, 2023 03.
Article in English | MEDLINE | ID: mdl-36375588

ABSTRACT

PURPOSE: To determine the cost-effectiveness of preoperative topical antibiotic prophylaxis for the prevention of endophthalmitis following cataract surgery. DESIGN: Cost-effectiveness analysis using a decision-analytic microsimulation model. METHODS: Preoperative topical antibiotic prophylaxis vs no-prophylaxis costs and effects were projected over a lifetime horizon for a simulated cohort of 500 000 adult patients (≥18 years old) requiring cataract surgery in theoretical surgical centers in the United States. Efficacy and cost (2021 US dollars) values were obtained from the literature and discounted at 3% per year. RESULTS: Based on inputted parameters, the mean incidence of endophthalmitis following cataract surgery for preoperative topical antibiotic prophylaxis vs no-prophylaxis was 0.034% (95% CI 0%-0.2%) and 0.042% (95% CI 0%-0.3%), respectively, an absolute risk reduction of 0.008%. The mean lifetime costs for cataract surgery with prophylaxis and no-prophylaxis were $2486.67 (95% CI $2193.61-$2802.44) and $2409.03 (95% CI $2129.94-$2706.69), respectively. The quality-adjusted life-years (QALYs) associated with prophylaxis and no-prophylaxis were 10.33495 (95% CI 7.81629-12.38158) and 10.33498 (95% CI 7.81284-12.38316), respectively. Assuming a cost-effectiveness criterion of ≤$50 000 per QALY gained, the threshold analyses indicated that prophylaxis would be cost-effective if the incidence of endophthalmitis after cataract surgery was greater than 5.5% or if the price of the preoperative topical antibiotic prophylaxis was less than $0.75. CONCLUSIONS: General use of preoperative topical antibiotic prophylaxis is not cost-effective compared with no-prophylaxis for the prevention of endophthalmitis following cataract surgery. Preoperative topical antibiotic prophylaxis, however, would be cost-effective at a higher incidence of endophthalmitis and/or a substantially lower price for prophylaxis.
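
The threshold analysis asks at what prophylaxis price (or endophthalmitis incidence) the ICER crosses the willingness-to-pay line. Below is a stripped-down two-strategy sketch: the incidences come from the abstract, but the downstream cost and QALY loss per case are assumptions, so the resulting threshold price will not match the paper's $0.75 figure, which comes from the full lifetime microsimulation.

```python
# Stripped-down threshold analysis: at what prophylaxis price does the ICER hit
# the willingness-to-pay threshold? Downstream cost and QALY loss are assumed.

WTP = 50_000            # dollars per QALY gained
inc_no_ppx = 0.00042    # endophthalmitis incidence without prophylaxis (abstract)
inc_ppx    = 0.00034    # incidence with prophylaxis (abstract)
cost_endophthalmitis = 25_000     # assumed downstream cost per case, dollars
qaly_loss_per_case   = 0.5        # assumed lifetime QALY loss per case

risk_reduction = inc_no_ppx - inc_ppx                   # absolute risk reduction
delta_qalys = risk_reduction * qaly_loss_per_case       # QALYs gained per patient

def icer(price_per_patient):
    delta_cost = price_per_patient - risk_reduction * cost_endophthalmitis
    return delta_cost / delta_qalys

# Threshold price: the price at which the ICER equals the WTP exactly.
threshold_price = WTP * delta_qalys + risk_reduction * cost_endophthalmitis
print(f"ICER at $5 per patient : ${icer(5):,.0f} per QALY")
print(f"threshold price        : ${threshold_price:.2f} per patient")
```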


Subjects
Cataract Extraction, Cataract, Endophthalmitis, Adult, Humans, United States, Adolescent, Antibiotic Prophylaxis, Anti-Bacterial Agents/therapeutic use, Cost-Benefit Analysis, Endophthalmitis/epidemiology, Cataract/drug therapy, Postoperative Complications/prevention & control
10.
J Am Coll Cardiol ; 80(14): 1330-1342, 2022 10 04.
Article in English | MEDLINE | ID: mdl-36175052

ABSTRACT

BACKGROUND: The Framingham Risk Score (FRS) and Pooled Cohort Equations (PCEs) overestimate risk in many contemporary cohorts. OBJECTIVES: This study sought to determine if recalibration of these scores using contemporary population-level data improves risk stratification for statin therapy. METHODS: Five-year FRS and PCEs were recalibrated using a cohort of Ontario residents alive January 1, 2011, who were 30 to 79 years of age without cardiovascular disease. Scores were externally validated in a primary care cohort of routinely collected electronic medical record data from January 1, 2010, to December 31, 2014. The relative difference in mean predicted and observed risk, number of statins avoided, and number needed to treat with statins to reduce a cardiovascular event at 5 years were reported. RESULTS: The FRS was recalibrated in 6,938,971 Ontario residents (51.6% women, mean age 48 years) and validated in 71,450 individuals (56.1% women, mean age 52 years). Recalibration reduced overestimation from 109% to 49% for women and 131% to 32% for men. The recalibrated FRS was estimated to reduce statin prescriptions in up to 26 per 1,000 low-risk women and 80 per 1,000 low-risk men, as well as reduce the number needed to treat from 61 to 47 in women and from 53 to 41 in men. In contrast, after recalibration of the PCEs, risk remained overestimated by 217% in women and 128% in men. CONCLUSIONS: Recalibration is a feasible solution to improve risk prediction but is dependent on the model being used. Recalibration of the FRS but not the PCEs reduced overestimation and may improve utilization of statins.
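
One common recalibration approach keeps the risk-factor coefficients and refits only the baseline survival so that the mean predicted risk matches the contemporary observed incidence. The sketch below shows that idea on a simulated linear predictor; the authors' exact recalibration procedure may differ.

```python
# Recalibration-in-the-large for a Cox-style score: keep the linear predictor,
# refit only the baseline survival S0 so mean predicted risk equals the observed
# incidence in the new population. A simplified sketch of one common approach.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
linear_predictor = rng.normal(0.0, 0.8, size=100_000)   # centered lp from the original score
observed_5yr_incidence = 0.03                            # contemporary population incidence

def mean_predicted_risk(s0):
    return np.mean(1.0 - s0 ** np.exp(linear_predictor))

s0_original = 0.90                                       # original (mis-calibrated) baseline survival
s0_recalibrated = brentq(lambda s0: mean_predicted_risk(s0) - observed_5yr_incidence,
                         1e-6, 1 - 1e-9)

print(f"mean predicted risk, original score    : {mean_predicted_risk(s0_original):.1%}")
print(f"recalibrated baseline survival S0      : {s0_recalibrated:.4f}")
print(f"mean predicted risk after recalibration: {mean_predicted_risk(s0_recalibrated):.1%}")
```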


Subjects
Cardiovascular Diseases, Hydroxymethylglutaryl-CoA Reductase Inhibitors, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/prevention & control, Cohort Studies, Electronic Health Records, Female, Humans, Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use, Male, Middle Aged, Risk Factors
11.
Diabetes Care ; 45(9): 2055-2063, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35856507

ABSTRACT

OBJECTIVE: To predict adverse kidney outcomes for use in optimizing medical management and clinical trial design. RESEARCH DESIGN AND METHODS: In this meta-analysis of individual participant data, 43 cohorts (N = 1,621,817) from research studies, electronic medical records, and clinical trials with global representation were separated into development and validation cohorts. Models were developed and validated within strata of diabetes mellitus (presence or absence) and estimated glomerular filtration rate (eGFR; ≥60 or <60 mL/min/1.73 m²) to predict a composite of ≥40% decline in eGFR or kidney failure (i.e., receipt of kidney replacement therapy) over 2-3 years. RESULTS: There were 17,399 and 24,591 events in the development and validation cohorts, respectively. Models predicting ≥40% eGFR decline or kidney failure incorporated age, sex, eGFR, albuminuria, systolic blood pressure, antihypertensive medication use, history of heart failure, coronary heart disease, atrial fibrillation, smoking status, and BMI, and, in those with diabetes, hemoglobin A1c, insulin use, and oral diabetes medication use. The median C-statistic was 0.774 (interquartile range [IQR] = 0.753, 0.782) in the diabetes and higher-eGFR validation cohorts; 0.769 (IQR = 0.758, 0.808) in the diabetes and lower-eGFR validation cohorts; 0.740 (IQR = 0.717, 0.763) in the no diabetes and higher-eGFR validation cohorts; and 0.750 (IQR = 0.731, 0.785) in the no diabetes and lower-eGFR validation cohorts. Incorporating the previous 2-year eGFR slope minimally improved model performance, and then only in the higher-eGFR cohorts. CONCLUSIONS: The novel prediction equations for a decline of ≥40% in eGFR can be applied successfully in the general population, in persons with and without diabetes and with higher or lower eGFR.


Subjects
Diabetes Mellitus, Renal Insufficiency, Chronic, Renal Insufficiency, Albuminuria, Diabetes Mellitus/epidemiology, Glomerular Filtration Rate, Humans, Kidney, Renal Insufficiency, Chronic/epidemiology
12.
Am J Obstet Gynecol MFM ; 4(6): 100697, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35878805

ABSTRACT

BACKGROUND: Pregnant individuals are vulnerable to COVID-19-related acute respiratory distress syndrome. There is a lack of high-quality evidence on whether elective delivery or expectant management leads to better maternal and neonatal outcomes. OBJECTIVE: This study aimed to determine whether elective delivery or expectant management is associated with higher quality-adjusted life expectancy for pregnant individuals with COVID-19-related acute respiratory distress syndrome and their neonates. STUDY DESIGN: We performed a clinical decision analysis using a patient-level model in which we simulated pregnant individuals and their unborn children. We used a patient-level model with a parallel open-cohort structure, daily cycle length, continuous discounting, lifetime horizon, sensitivity analyses for key parameter values, and 1000 iterations for quantification of uncertainty. We simulated pregnant individuals at 32 weeks of gestation, invasively ventilated because of COVID-19-related acute respiratory distress syndrome. In the elective delivery strategy, pregnant individuals received immediate cesarean delivery. In the expectant management strategy, pregnancies continued until spontaneous labor or an obstetrical decision to deliver. For both pregnant individuals and neonates, model outputs were hospital or perinatal survival, life expectancy, and quality-adjusted life expectancy denominated in years, summarized by the mean and 95% credible interval. Maternal utilities incorporated neonatal outcomes in accordance with best practices in perinatal decision analysis. RESULTS: Model outputs for pregnant individuals were similar when comparing elective delivery at 32 weeks' gestation with expectant management, including hospital survival (87.1% vs 87.4%), life-years (difference, -0.1; 95% credible interval, -1.4 to 1.1), and quality-adjusted life expectancy denominated in years (difference, -0.1; 95% credible interval, -1.3 to 1.1). For neonates, elective delivery at 32 weeks' gestation was estimated to lead to higher perinatal survival (98.4% vs 93.2%; difference, 5.2%; 95% credible interval, 3.5-7), similar life-years (difference, 0.9; 95% credible interval, -0.9 to 2.8), and higher quality-adjusted life expectancy denominated in years (difference, 1.3; 95% credible interval, 0.4-2.2). For pregnant individuals, elective delivery was not superior to expectant management across a range of scenarios between 28 and 34 weeks of gestation. Elective delivery resulted in higher neonatal quality-adjusted life expectancy in cases where intrauterine death or maternal acute respiratory distress syndrome mortality was more likely, and at 30 weeks' gestation (difference, 1.1 years; 95% credible interval, 0.1-2.1), despite higher long-term complications (4.3% vs 0.5%; difference, 3.7%; 95% credible interval, 2.4-5.1). CONCLUSION: The decision to pursue elective delivery vs expectant management in pregnant individuals with COVID-19-related acute respiratory distress syndrome should be guided by gestational age, risk of intrauterine death, and maternal acute respiratory distress syndrome severity. For the pregnant individual, elective delivery is comparable but not superior to expectant management for gestational ages from 28 to 34 weeks. For neonates, elective delivery was superior if gestational age was ≥30 weeks and if the risk of intrauterine death or maternal mortality was high.
We recommend basing the decision for elective delivery vs expectant management in a pregnant individual with COVID-19-related acute respiratory distress syndrome on gestational age and likelihood of intrauterine or maternal death.
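
The model structure described above (daily cycle length, continuous discounting, lifetime horizon) means each simulated day contributes its utility weight discounted continuously back to day zero. Below is a small sketch of that accumulation with illustrative utilities and discount rate, not the study's inputs.

```python
# Accumulating discounted QALYs over daily cycles with continuous discounting.
# Utility values, discount rate, and horizon are illustrative placeholders.
import math

ANNUAL_DISCOUNT_RATE = 0.015
DAYS_PER_YEAR = 365.25

def discounted_qalys(daily_utilities):
    """Sum of daily utility weights, each discounted continuously to day 0."""
    total = 0.0
    for day, utility in enumerate(daily_utilities):
        years_elapsed = day / DAYS_PER_YEAR
        weight = math.exp(-ANNUAL_DISCOUNT_RATE * years_elapsed)
        total += utility * weight / DAYS_PER_YEAR   # each day is 1/365.25 of a QALY
    return total

# A simulated individual: 30 days ventilated (low utility), then 40 years in full health.
trajectory = [0.1] * 30 + [1.0] * int(40 * DAYS_PER_YEAR)
print(f"quality-adjusted life expectancy: {discounted_qalys(trajectory):.1f} QALYs")
```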

13.
BMC Med Res Methodol ; 21(1): 282, 2021 12 18.
Article in English | MEDLINE | ID: mdl-34922454

ABSTRACT

INTRODUCTION: Extrapolation of time-to-event data from clinical trials is commonly used in decision models for health technology assessment (HTA). The objective of this study was to assess the performance of standard parametric survival analysis techniques for extrapolation of time-to-event data for a single event from clinical trials with limited data due to small samples or short follow-up. METHODS: Simulated populations with 50,000 individuals were generated with an exponential hazard rate for the event of interest. A scenario consisted of 5000 repetitions with six sample size groups (30-500 patients) artificially censored after every 10% of events observed. Goodness-of-fit statistics (AIC, BIC) were used to determine the best-fitting among standard parametric distributions (exponential, Weibull, log-normal, log-logistic, generalized gamma, Gompertz). Median survival, one-year survival probability, time horizon (1% survival time, or 99th percentile of survival distribution) and restricted mean survival time (RMST) were compared to population values to assess coverage and error (e.g., mean absolute percentage error). RESULTS: The true exponential distribution was correctly identified using goodness-of-fit according to BIC more frequently than with AIC (average 92% vs 68%). Under-coverage and large errors were observed for all outcomes when distributions were specified by AIC, and for time horizon and RMST with BIC. Errors in point estimates were strongly associated with sample size and completeness of follow-up. Small samples produced larger average error, even with complete follow-up, than large samples with short follow-up. Correctly specifying the event distribution reduced the magnitude of error in larger samples but not in smaller samples. CONCLUSIONS: Limited clinical data from small samples, or short follow-up of large samples, produce large error in estimates relevant to HTA regardless of whether the correct distribution is specified. The associated uncertainty in estimated parameters may not capture the true population values. Decision models that base the lifetime time horizon on the model's extrapolated output are not likely to reliably estimate mean survival or its uncertainty. For data with an exponential event distribution, BIC more reliably identified the true distribution than AIC. These findings have important implications for health decision modelling and HTA of novel therapies seeking approval with limited evidence.
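
The step being stress-tested is ordinary parametric model selection: fit candidate distributions to right-censored data by maximum likelihood and compare AIC and BIC. Below is a compact sketch fitting only the exponential and Weibull candidates on simulated censored data; the study also fit log-normal, log-logistic, generalized gamma, and Gompertz models.

```python
# Fit exponential and Weibull models to right-censored data by maximum likelihood
# and compare AIC/BIC. Only two of the six candidate distributions are shown.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 100
true_rate = 0.2
event_times = rng.exponential(1 / true_rate, size=n)   # true model is exponential
censor_time = np.quantile(event_times, 0.5)            # censor after 50% of events observed
time = np.minimum(event_times, censor_time)
observed = (event_times <= censor_time).astype(float)

def neg_loglik_exponential(params):
    rate = np.exp(params[0])
    # events contribute log f(t) = log(rate) - rate*t; censored contribute log S(t) = -rate*t
    return -np.sum(observed * np.log(rate) - rate * time)

def neg_loglik_weibull(params):
    shape, scale = np.exp(params)
    log_hazard = np.log(shape / scale) + (shape - 1) * np.log(time / scale)
    cum_hazard = (time / scale) ** shape
    # events contribute log h(t) - H(t); censored contribute -H(t)
    return -np.sum(observed * log_hazard - cum_hazard)

for name, nll, k in [("exponential", neg_loglik_exponential, 1),
                     ("weibull", neg_loglik_weibull, 2)]:
    fit = minimize(nll, x0=np.zeros(k), method="Nelder-Mead")
    aic = 2 * k + 2 * fit.fun
    bic = k * np.log(n) + 2 * fit.fun
    print(f"{name:11s}  AIC={aic:7.1f}  BIC={bic:7.1f}")
```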


Subjects
Biomedical Technology, Computer Simulation, Follow-Up Studies, Humans, Sample Size, Survival Analysis
14.
CMAJ Open ; 9(4): E1063-E1072, 2021.
Article in English | MEDLINE | ID: mdl-34815262

ABSTRACT

BACKGROUND: Jurisdictions worldwide ramped down ophthalmic surgeries to mitigate the effects of COVID-19, creating a global surgical backlog. We sought to predict the long-term impact of COVID-19 on the timely delivery of non-emergent ophthalmology subspecialty surgical care in Ontario. METHODS: This is a microsimulation modelling study. We used provincial population-based administrative data from the Wait Time Information System database in Ontario for January 2019 to May 2021 and facility-level data for March 2018 to May 2021 to estimate the backlog size and wait times associated with the COVID-19 pandemic. For the postpandemic recovery phase, we estimated the resources required to clear the backlog of patients accumulated on the wait-list during the pandemic. Outcomes were accrued over a time horizon of 3 years. RESULTS: A total of 56 923 patients in Ontario were awaiting non-emergency ophthalmic surgery as of Mar. 15, 2020. The number of non-emergency surgeries performed in the province decreased by 97% in May 2020 and by 80% in May 2021 compared with the same months in 2019. By 2 years and 3 years after the start of the pandemic, the overall estimated number of patients awaiting surgery grew by 129% and 150%, respectively. The estimated mean wait time for all subspecialty surgeries increased to 282 (standard deviation [SD] 91) days in March 2023, compared with 94 (SD 97) days in 2019. The additional monthly resources required provincially to clear the backlog by March 2023 were estimated at a 34% increase over prepandemic volumes (4626 additional surgeries). INTERPRETATION: The estimates from this microsimulation modelling study suggest that the magnitude of the ophthalmic surgical backlog from the COVID-19 pandemic has important implications for the recovery phase. This model can be adapted to other jurisdictions to assist with recovery planning for vision-saving surgeries.
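
At its core, the backlog projection is stock-and-flow bookkeeping: each month, new referrals join the wait-list and completed surgeries leave it, so the backlog grows whenever demand exceeds capacity. Below is a toy monthly projection with assumed referral and capacity figures (only the starting backlog comes from the abstract), not the provincial microsimulation.

```python
# Toy monthly stock-and-flow projection of a surgical wait-list. The referral
# and capacity figures are assumed for illustration.

baseline_backlog = 56_923         # patients waiting in March 2020
monthly_referrals = 5_000         # assumed steady demand

def monthly_capacity(month):
    """Assumed surgeries completed per month: shutdown, ramp-up, then recovery."""
    if month <= 3:
        return 500
    if month <= 12:
        return 4_000
    return 5_800                  # recovery phase exceeds demand, so the backlog shrinks

backlog = baseline_backlog
for month in range(1, 37):
    backlog = max(0, backlog + monthly_referrals - monthly_capacity(month))
    if month in (12, 24, 36):
        growth = (backlog - baseline_backlog) / baseline_backlog
        print(f"month {month:2d}: backlog {backlog:,} ({growth:+.0%} vs March 2020)")
```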


Subjects
COVID-19/epidemiology, Ophthalmologic Surgical Procedures/statistics & numerical data, Pandemics, Databases, Factual, Elective Surgical Procedures/statistics & numerical data, Humans, Models, Statistical, Ontario/epidemiology, SARS-CoV-2, Time Factors, Waiting Lists
15.
CMAJ Open ; 9(1): E271-E279, 2021.
Article in English | MEDLINE | ID: mdl-33757964

ABSTRACT

BACKGROUND: Understanding resource use for coronavirus disease 2019 (COVID-19) is critical. We conducted a descriptive analysis using public health data to describe age- and sex-specific acute care use, length of stay (LOS) and mortality associated with COVID-19. METHODS: We conducted a descriptive analysis using Ontario's Case and Contact Management Plus database of individuals who tested positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in Ontario from Mar. 1 to Sept. 30, 2020, to determine age- and sex-specific hospital admissions, intensive care unit (ICU) admissions, use of invasive mechanical ventilation, LOS and mortality. We stratified analyses by month of infection to study temporal trends and conducted subgroup analyses by long-term care residency. RESULTS: During the observation period, 56 476 individuals testing positive for SARS-CoV-2 were reported; 41 049 (72.7%) of these were younger than 60 years, and 29 196 (51.7%) were female. Proportion of cases shifted from older populations (> 60 yr) to younger populations (10-39 yr) over time. Overall, 5383 (9.5%) of individuals were admitted to hospital; of these, 1183 (22.0%) were admitted to the ICU, and 712 (60.2%) of these received invasive mechanical ventilation. Mean LOS for individuals in the ward, ICU without invasive mechanical ventilation and ICU with invasive mechanical ventilation was 12.8 (standard deviation [SD] 15.4), 8.5 (SD 7.8) and 20.5 (SD 18.1) days, respectively. Among patients receiving care in the ward, ICU without invasive mechanical ventilation and ICU with invasive mechanical ventilation, 911/3834 (23.8%), 124/418 (29.7%) and 287/635 (45.2%) died, respectively. All outcomes varied by age and decreased over time, overall and within age groups. INTERPRETATION: This descriptive study shows use of acute care and mortality varying by age and decreasing between March and September 2020 in Ontario. Improvements in clinical practice and changing risk distributions among those infected may contribute to fewer severe outcomes.


Subjects
COVID-19/epidemiology, COVID-19/therapy, Critical Care/statistics & numerical data, Hospitalization/statistics & numerical data, Adolescent, Adult, Age Factors, Aged, Aged, 80 and over, Child, Child, Preschool, Facilities and Services Utilization, Female, Humans, Infant, Infant, Newborn, Male, Middle Aged, Ontario, Respiration, Artificial/statistics & numerical data, Sex Factors, Survival Rate, Young Adult
17.
CMAJ Open ; 8(4): E706-E714, 2020.
Article in English | MEDLINE | ID: mdl-33158928

ABSTRACT

BACKGROUND: Antithrombotic drugs decrease stroke risk in patients with atrial fibrillation, but they increase bleeding risk, particularly in older adults at high risk for falls. We aimed to determine the most cost-effective antithrombotic therapy in older adults with atrial fibrillation who are at high risk for falls. METHODS: We conducted a mathematical modelling study from July 2019 to March 2020 based on the Ontario, Canada, health care system. We derived the base-case age, sex and fall risk distribution from a published cohort of older adults at risk for falls, and the bleeding and stroke risk parameters from an atrial fibrillation trial population. Using a probabilistic microsimulation Markov decision model, we calculated quality-adjusted life years (QALYs), total cost and incremental cost-effectiveness ratios (ICERs) for each of acetylsalicylic acid (ASA), warfarin, apixaban, dabigatran, rivaroxaban and edoxaban. Cost data were adjusted for inflation to 2018 values. The analysis used the Ontario public payer perspective with a lifetime horizon. RESULTS: In our model, the most cost-effective antithrombotic therapy for atrial fibrillation in older patients at risk for falls was apixaban, with an ICER of $8517 per QALY gained (5.86 QALYs at $92 056) over ASA. It was a dominant strategy over warfarin and the other antithrombotic agents. There was moderate uncertainty in cost-effectiveness ranking, with apixaban as the preferred choice in 66% of model iterations (given willingness to pay of $50 000 per QALY gained); edoxaban, 30 mg, was preferred in 31% of iterations. Sensitivity analysis across ranges of age, bleeding risk and fall risk still favoured apixaban over the other medications. INTERPRETATION: From a public payer perspective, apixaban is the most cost-effective antithrombotic agent in older adults at high risk for falls. Health care funders should implement strategies to encourage use of the most cost-effective medication in this population.
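
The statement that apixaban was preferred in 66% of model iterations is standard probabilistic sensitivity analysis bookkeeping: in each iteration, the preferred strategy is the one with the highest net monetary benefit at the willingness-to-pay threshold. Below is a sketch with placeholder cost and QALY distributions, not the study's model outputs.

```python
# Probabilistic sensitivity analysis bookkeeping: in each iteration, the
# preferred strategy has the highest net monetary benefit (NMB) at the
# willingness-to-pay threshold. Costs/QALYs are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
WTP = 50_000                      # dollars per QALY gained
n_iter = 10_000

strategies = {                    # strategy: (mean cost, sd cost, mean QALYs, sd QALYs)
    "ASA":      (80_000, 8_000, 5.20, 0.30),
    "warfarin": (88_000, 8_000, 5.60, 0.30),
    "apixaban": (92_000, 8_000, 5.86, 0.30),
}

names = list(strategies)
nmb = np.empty((n_iter, len(names)))
for j, (mean_cost, sd_cost, mean_qaly, sd_qaly) in enumerate(strategies.values()):
    cost = rng.normal(mean_cost, sd_cost, n_iter)
    qaly = rng.normal(mean_qaly, sd_qaly, n_iter)
    nmb[:, j] = WTP * qaly - cost                 # net monetary benefit per iteration

preferred = np.argmax(nmb, axis=1)
for j, name in enumerate(names):
    print(f"{name:9s} preferred in {np.mean(preferred == j):.0%} of iterations")
```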


Subjects
Accidental Falls/prevention & control, Atrial Fibrillation/complications, Cost-Benefit Analysis, Fibrinolytic Agents/economics, Stroke/prevention & control, Accidental Falls/economics, Aged, Aged, 80 and over, Aspirin/economics, Aspirin/pharmacology, Atrial Fibrillation/drug therapy, Dabigatran/economics, Dabigatran/pharmacology, Female, Fibrinolytic Agents/pharmacology, Hemorrhage/chemically induced, Hemorrhage/prevention & control, Humans, Male, Models, Theoretical, Ontario, Pyrazoles/economics, Pyrazoles/pharmacology, Pyridines/economics, Pyridines/pharmacology, Pyridones/economics, Pyridones/pharmacology, Quality-Adjusted Life Years, Rivaroxaban/economics, Rivaroxaban/pharmacology, Stroke/economics, Stroke/etiology, Thiazoles/economics, Thiazoles/pharmacology, Warfarin/adverse effects, Warfarin/economics, Warfarin/pharmacology
18.
CMAJ ; 192(46): E1474-E1481, 2020 11 16.
Article in French | MEDLINE | ID: mdl-33199458

ABSTRACT

BACKGROUND: The global spread of coronavirus disease 2019 (COVID-19) continues in several countries, placing severe strain on health care systems. This study aimed to predict the impact of the pandemic on patient outcomes and hospital resource use in Ontario, Canada. METHODS: We developed an individual-level simulation model of the flow of patients with COVID-19 through Ontario hospitals. We simulated various combinations of epidemic trajectories and hospital care capacity. Outcomes of interest were the number of patients requiring admission to the ward or the intensive care unit (ICU), with or without mechanical ventilation; the number of days until resources were depleted; the number of patients waiting for resources; and the number of deaths. RESULTS: We found that the rapid implementation of effective public health measures would avert depletion of hospital resources. Simulations in which physical distancing measures were ineffective or adopted late showed that resources would be depleted within 14 to 26 days and that, in the worst case, 13,321 people would die while waiting for resources. Depletion could be avoided or delayed with aggressive measures to increase ICU, ventilator, and hospital ward capacity. INTERPRETATION: Without strict physical distancing measures, the Ontario health care system would not have had sufficient resources to manage the expected number of patients with COVID-19, even with rapid increases in hospital capacity. The resulting shortages would have increased mortality. By slowing disease transmission through public health measures and increasing hospital capacity, Ontario has likely averted catastrophic strain on its hospitals.

19.
Ann Intern Med ; 173(6): 426-435, 2020 09 15.
Article in English | MEDLINE | ID: mdl-32658569

ABSTRACT

BACKGROUND: Although measuring albuminuria is the preferred method for defining and staging chronic kidney disease (CKD), total urine protein or dipstick protein is often measured instead. OBJECTIVE: To develop equations for converting urine protein-creatinine ratio (PCR) and dipstick protein to urine albumin-creatinine ratio (ACR) and to test their diagnostic accuracy in CKD screening and staging. DESIGN: Individual participant-based meta-analysis. SETTING: 12 research and 21 clinical cohorts. PARTICIPANTS: 919 383 adults with same-day measures of ACR and PCR or dipstick protein. MEASUREMENTS: Equations to convert urine PCR and dipstick protein to ACR were developed and tested for purposes of CKD screening (ACR ≥30 mg/g) and staging (stage A2: ACR of 30 to 299 mg/g; stage A3: ACR ≥300 mg/g). RESULTS: Median ACR was 14 mg/g (25th to 75th percentile of cohorts, 5 to 25 mg/g). The association between PCR and ACR was inconsistent for PCR values less than 50 mg/g. For higher PCR values, the PCR conversion equations demonstrated moderate sensitivity (91%, 75%, and 87%) and specificity (87%, 89%, and 98%) for screening (ACR >30 mg/g) and classification into stages A2 and A3, respectively. Urine dipstick categories of trace or greater, trace to +, and ++ for screening for ACR values greater than 30 mg/g and classification into stages A2 and A3, respectively, had moderate sensitivity (62%, 36%, and 78%) and high specificity (88%, 88%, and 98%). For individual risk prediction, the estimated 2-year 4-variable kidney failure risk equation using predicted ACR from PCR had discrimination similar to that of using observed ACR. LIMITATION: Diverse methods of ACR and PCR quantification were used; measurements were not always performed in the same urine sample. CONCLUSION: Urine ACR is the preferred measure of albuminuria; however, if ACR is not available, predicted ACR from PCR or urine dipstick protein may help in CKD screening, staging, and prognosis. PRIMARY FUNDING SOURCE: National Institute of Diabetes and Digestive and Kidney Diseases and National Kidney Foundation.
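
The screening performance figures are ordinary sensitivity and specificity calculations applied after converting PCR to a predicted ACR and using the ≥30 mg/g cut-off. Below is a small sketch on synthetic data; the conversion rule is a made-up monotonic placeholder, not the published equations.

```python
# Sensitivity/specificity of CKD screening (ACR >= 30 mg/g) using ACR predicted
# from PCR. The conversion rule is a made-up placeholder for illustration only.
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
acr = np.exp(rng.normal(np.log(14), 1.2, n))          # observed ACR, mg/g (median ~14)
pcr = acr + np.exp(rng.normal(np.log(40), 0.7, n))    # PCR = albumin + non-albumin protein

predicted_acr = np.maximum(pcr - 45.0, 1.0)           # placeholder conversion rule

truth = acr >= 30                                      # true screening status
screen_positive = predicted_acr >= 30

sensitivity = np.mean(screen_positive[truth])
specificity = np.mean(~screen_positive[~truth])
print(f"sensitivity: {sensitivity:.0%}   specificity: {specificity:.0%}")
```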


Subjects
Albuminuria/diagnosis, Creatinine/urine, Mass Screening/methods, Proteinuria/diagnosis, Reagent Strips, Renal Insufficiency, Chronic/diagnosis, Urinalysis/methods, Albuminuria/urine, Female, Humans, Male, Middle Aged, Prognosis, Proteinuria/urine, Renal Insufficiency, Chronic/urine, Sensitivity and Specificity, Urinalysis/instrumentation
20.
Can J Kidney Health Dis ; 7: 2054358120914426, 2020.
Article in English | MEDLINE | ID: mdl-32426146

ABSTRACT

BACKGROUND: Mortality rates for patients on hemodialysis (HD) continue to be high, in particular following the long interdialytic period, yet thrice-weekly conventional HD (CHD) is still an almost universal regimen. Alternate-day dialysis (ADD) may have advantages over the current schedule because it would eliminate the long interdialytic break. A preliminary, as yet unpublished, patient simulation and cost-utility analysis compared CHD versus ADD and demonstrated that the economic attractiveness of ADD was sensitive, in particular, to patients' preference for ADD versus CHD. To date, this preference has not been elicited. OBJECTIVE: To elicit utilities for both CHD and ADD using 3 standard elicitation methods among a prevalent cohort of patients on CHD. DESIGN: This study is a single-center survey of patient preferences (utilities). SETTING: This study took place within the dialysis units of Sunnybrook Health Centre, a university-affiliated teaching hospital in Toronto, Ontario, Canada, which serve 174 patients on in-center HD. PATIENTS: Those older than 18 years of age, on thrice-weekly HD, were included in this study. MEASUREMENTS: Descriptive statistics were used to summarize patient characteristics and the utility values generated. A multiple linear regression was performed to determine an association between participant characteristics and the utility ratio. METHODS: Via standardized face-to-face interviews by a single investigator, 3 utility elicitation methods, the visual analogue scale (VAS), time trade-off (TTO), and standard gamble (SG), were administered to generate utilities for each patient for their current health state of CHD (thrice-weekly). After completing this task, we provided each patient with a concise summary of the current literature on how ADD may impact their health. Finally, patients were asked to envision their health while on an ADD regimen while repeating the VAS, TTO, and SG. RESULTS: We recruited 65 participants. The mean utilities of CHD versus ADD were similar for all 3 methods. Mean utilities for CHD were 0.6 ± 0.2 (VAS), 0.6 ± 0.3 (TTO), and 0.7 ± 0.3 (SG); for ADD they were 0.6 ± 0.2, 0.7 ± 0.3, and 0.7 ± 0.3, respectively. The ratio of CHD to ADD was 1.1 ± 0.4, 1.1 ± 0.5, and 1.0 ± 0.2 for VAS, TTO, and SG, respectively. LIMITATIONS: The small sample size from a single center (not all eligible patients agreed to participate), wide variability in participant responses, and the requirement that patients conceptually imagine life on ADD may have affected our results. CONCLUSIONS: There was no difference in preference between CHD and ADD, which suggests that adopting an alternate-day schedule may be acceptable to patients. Furthermore, the utility generated for ADD will allow more precise estimates in future simulation studies of the economic attractiveness of ADD. TRIAL REGISTRATION: Not required, as this article is not a systematic review nor does it report the results of a health care intervention.
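
The elicitation methods map to utilities by simple rules: in TTO, the utility is roughly the ratio of years in full health a respondent would accept in exchange for a fixed period in the health state; in SG, it is the indifference probability of full health in the gamble. Below is a tiny sketch with hypothetical responses.

```python
# How TTO and SG responses become utilities. The responses are hypothetical.

def tto_utility(years_full_health, years_in_state):
    """Time trade-off: utility = x / t at the point of indifference."""
    return years_full_health / years_in_state

def sg_utility(p_full_health_at_indifference):
    """Standard gamble: utility equals the indifference probability of full health."""
    return p_full_health_at_indifference

u_chd = tto_utility(6.0, 10.0)      # would trade 10 years on CHD for 6 in full health
u_add = tto_utility(7.0, 10.0)      # would trade 10 years on ADD for 7 in full health
print(f"TTO utilities  CHD={u_chd:.1f}  ADD={u_add:.1f}  ratio CHD/ADD={u_chd/u_add:.2f}")
print(f"SG utility for CHD at 70% indifference probability: {sg_utility(0.7):.1f}")
```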


