ABSTRACT
BACKGROUND: The net benefit of aspirin cessation in older adults remains uncertain. This study aimed to use observational data to emulate a randomized trial of aspirin cessation versus continuation in older adults without cardiovascular disease (CVD). METHODS: Post hoc analysis using a target trial emulation framework applied to the immediate post-trial period (2017-2021) of a study of low-dose aspirin initiation in adults aged ≥ 70 years (ASPREE; NCT01038583). Participants from Australia and the USA were included if they were free of CVD at the start of the post-trial intervention period (time zero, T0) and had been taking open-label or randomized aspirin immediately before T0. The two groups in the target trial were as follows: aspirin cessation (participants who were taking randomized aspirin immediately before T0; assumed to have stopped at T0 as instructed) versus aspirin continuation (participants on open-label aspirin at T0 regardless of their randomized treatment; assumed to have continued at T0). The outcomes after T0 were incident CVD, major adverse cardiovascular events (MACE), all-cause mortality, and major bleeding over 3, 6, and 12 months (short-term) and 48 months (long-term) of follow-up. Hazard ratios (HRs) comparing aspirin cessation to continuation were estimated from propensity-score (PS) adjusted Cox proportional-hazards regression models. RESULTS: We included 6103 CVD-free participants (cessation: 5427, continuation: 676). Over both short- and long-term follow-up, aspirin cessation versus continuation was not associated with an elevated risk of CVD, MACE, or all-cause mortality (HRs, at 3 and 48 months respectively, were 1.23 and 0.73 for CVD, 1.11 and 0.84 for MACE, and 0.23 and 0.79 for all-cause mortality, p > 0.05), but cessation was associated with a reduced risk of incident major bleeding (HRs at 3 and 48 months, 0.16 and 0.63, p < 0.05). Similar findings were seen for all outcomes at 6 and 12 months, except for a lower risk of all-cause mortality in the cessation group at 12 months. CONCLUSIONS: Our findings suggest that deprescribing prophylactic aspirin might be safe in healthy older adults with no known CVD.
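As a concrete illustration of the modelling approach described above, the Python sketch below fits a propensity-score adjusted Cox proportional-hazards model with the open-source scikit-learn and lifelines libraries. The file name, column names, and covariate list are illustrative assumptions, not the ASPREE analysis code.

```python
# A minimal sketch (not the ASPREE analysis) of a propensity-score adjusted Cox model
# comparing aspirin cessation with continuation. File/column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("post_trial_cohort.csv")  # hypothetical analysis file

# 1. Propensity score: probability of cessation given baseline covariates (illustrative list).
covariates = ["age", "sex", "diabetes", "smoking", "systolic_bp"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["cessation"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Cox proportional-hazards model for incident CVD, adjusted for the propensity score.
cph = CoxPHFitter()
cph.fit(df[["time_to_cvd", "cvd_event", "cessation", "ps"]],
        duration_col="time_to_cvd", event_col="cvd_event")
cph.print_summary()  # HR for "cessation" is the estimate of interest
```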
Subject(s)
Aspirin , Cardiovascular Diseases , Humans , Aspirin/administration & dosage , Aspirin/therapeutic use , Aged , Male , Female , Cardiovascular Diseases/prevention & control , Aged, 80 and over , Platelet Aggregation Inhibitors/administration & dosage , Australia , United States , Hemorrhage/chemically induced
ABSTRACT
PURPOSE: A recent genome-wide association study of age-related macular degeneration (AMD) identified new AMD-associated risk variants. These variants can now be incorporated into an updated polygenic risk score (PRS). This study aimed to assess the performance of an updated PRS, PRS2023, in an independent cohort of older individuals with retinal imaging data and to compare performance with an older PRS, PRS2016. DESIGN: Cross-sectional study. PARTICIPANTS: A total of 4175 participants of European ancestry, 70 years of age or older, with genotype and retinal imaging data. METHODS: We used logistic regression models and area under the receiver operating characteristic curve (AUC) to assess the performance of PRS2023 compared with PRS2016. AMD status and severity were graded using color fundus photography. MAIN OUTCOME MEASURES: Association of PRS2023 and PRS2016 with AMD risk at baseline. RESULTS: At enrollment among 4175 participants, 2605 participants (62.4%) had no AMD and 853 participants (20.4%), 671 participants (16.1%), and 46 participants (1.1%) had early, intermediate, and late-stage AMD, respectively. More than 27% of the participants with a high PRS2023 (top quartile) had intermediate or late-stage AMD, compared with less than 15% for those in the middle 2 quartiles and less than 13% for those in the lowest quartile. Both PRS2023 and PRS2016 were associated significantly with AMD after adjustment for age, sex, smoking status, and lipid levels, with increasing odds ratios (ORs) for worsening AMD grades. PRS2023 outperformed PRS2016 (P = 0.03 for all AMD and P = 0.03 for late AMD, DeLong test comparing AUC). PRS2023 was associated with late-stage AMD with an adjusted OR of 5.05 (95% confidence interval [CI], 3.41-7.47) per standard deviation. The AUC of a model containing conventional (nongenetic) risk factors and PRS2023 was 91% (95% CI, 87%-95%) for predicting late-stage AMD, an improvement of 12 percentage points over the model without the PRS (AUC, 79%; P < 0.001 for difference). CONCLUSIONS: A new PRS, PRS2023, for AMD outperforms a previous PRS and predicts increasing risk for late-stage AMD (with stronger association for more severe imaging-confirmed AMD grades). Our findings have clinical implications for the improved prediction and risk stratification of AMD. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
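The Python sketch below illustrates the general analysis pattern of comparing models with and without a PRS using logistic regression and AUC. The file and column names (prs2016, prs2023, late_amd) and covariates are assumptions rather than the study's code, the AUCs are in-sample, and a formal DeLong comparison would require a dedicated routine.

```python
# A minimal sketch of comparing two polygenic risk scores for late-stage AMD.
# Column names and covariates are hypothetical; AUCs here are in-sample (optimistic).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("amd_cohort.csv")               # hypothetical file
base = ["age", "sex", "smoking", "hdl", "ldl"]   # illustrative non-genetic covariates
y = df["late_amd"]                               # 1 = late-stage AMD, 0 = otherwise

def auc_for(features):
    model = LogisticRegression(max_iter=1000).fit(df[features], y)
    return roc_auc_score(y, model.predict_proba(df[features])[:, 1])

print("base model     :", round(auc_for(base), 3))
print("base + PRS2016 :", round(auc_for(base + ["prs2016"]), 3))
print("base + PRS2023 :", round(auc_for(base + ["prs2023"]), 3))
```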
Subject(s)
Genome-Wide Association Study , Macular Degeneration , ROC Curve , Humans , Male , Female , Aged , Cross-Sectional Studies , Risk Factors , Macular Degeneration/genetics , Macular Degeneration/diagnosis , Aged, 80 and over , Polymorphism, Single Nucleotide , Area Under Curve , Risk Assessment/methods , Genetic Predisposition to Disease , Multifactorial Inheritance , Predictive Value of Tests , Genotype , Genetic Risk Score
ABSTRACT
In studies that assess disease status periodically, time of disease onset is interval censored between visits. Participants who die between two visits may have unknown disease status after their last visit. In this work, we consider an additional scenario where diagnosis requires two consecutive positive tests, such that disease status can also be unknown at the last visit preceding death. We show that this impacts the choice of censoring time for those who die without an observed disease diagnosis. We investigate two classes of models that quantify the effect of risk factors on disease outcome: a Cox proportional hazards model with death as a competing risk and an illness death model that treats disease as a possible intermediate state. We also consider four censoring strategies: participants without observed disease are censored at death (Cox model only), the last visit, the last visit with a negative test, or the second last visit. We evaluate the performance of model and censoring strategy combinations on simulated data with a binary risk factor and illustrate with a real data application. We find that the illness death model with censoring at the second last visit shows the best performance in all simulation settings. Other combinations show bias that varies in magnitude and direction depending on the differential mortality between diseased and disease-free subjects, the gap between visits, and the choice of the censoring time.
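The sketch below makes the four censoring strategies concrete for a single participant who dies without an observed disease diagnosis. Visit times and test results are illustrative, and the interval-censored Cox and illness-death model fitting itself is omitted.

```python
# A minimal sketch of the four censoring strategies for a subject with no observed diagnosis.
from typing import List, Optional

def censoring_time(visit_times: List[float],
                   test_positive: List[bool],
                   death_time: Optional[float],
                   strategy: str) -> float:
    """Return the censoring time under the chosen strategy."""
    if strategy == "death":            # censor at death (Cox model only)
        return death_time if death_time is not None else visit_times[-1]
    if strategy == "last_visit":       # censor at the last visit
        return visit_times[-1]
    if strategy == "last_negative":    # censor at the last visit with a negative test
        negatives = [t for t, pos in zip(visit_times, test_positive) if not pos]
        return negatives[-1]
    if strategy == "second_last":      # guards against an unconfirmed positive at the last visit
        return visit_times[-2] if len(visit_times) > 1 else visit_times[0]
    raise ValueError(f"unknown strategy: {strategy}")

# Example: positive at the final visit but no confirming second test before death.
print(censoring_time([1.0, 2.0, 3.0], [False, False, True],
                     death_time=3.4, strategy="second_last"))   # -> 2.0
```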
Subject(s)
Proportional Hazards Models , Humans , Computer Simulation , Risk Factors
ABSTRACT
BACKGROUND: In randomized clinical trials, treatment effects may vary, and this possibility is referred to as heterogeneity of treatment effect (HTE). One way to quantify HTE is to partition participants into subgroups based on each individual's risk of experiencing an outcome and then measure the treatment effect within each subgroup. Given the limited availability of externally validated outcome risk prediction models, internal models (created using the same dataset in which the HTE analyses will also be performed) are commonly developed for subgroup identification. We aimed to compare different methods for generating internally developed outcome risk prediction models for subject partitioning in HTE analysis. METHODS: Three approaches were selected for generating subgroups for the 2,441 participants from the United States enrolled in the ASPirin in Reducing Events in the Elderly (ASPREE) randomized controlled trial. An existing proportional hazards-based outcome risk prediction model, developed on the overall ASPREE cohort of 19,114 participants, was used to partition the US participants by risk of experiencing a composite outcome of death, dementia, or persistent physical disability. Next, two supervised non-parametric machine learning classifiers, decision trees and random forests, were used to develop multivariable risk prediction models and partition participants into subgroups with varied risks of experiencing the composite outcome. We then assessed how the partitioning from the proportional hazards model compared with the partitions generated by the machine learning models in an HTE analysis of the 5-year absolute risk reduction (ARR) and hazard ratio for aspirin vs. placebo in each subgroup. Cochran's Q test was used to detect whether the ARR varied significantly by subgroup. RESULTS: The proportional hazards model was used to generate 5 subgroups using the quintiles of the estimated risk scores; the decision tree model was used to generate 6 subgroups (6 automatically determined tree leaves); and the random forest model was used to generate 5 subgroups using the quintiles of the prediction probability as risk scores. Using the semi-parametric proportional hazards model, the ARR at 5 years was 15.1% (95% CI 4.0-26.3%) for participants with the highest 20% of predicted risk. Using the random forest model, the ARR at 5 years was 13.7% (95% CI 3.1-24.4%) for participants with the highest 20% of predicted risk. The highest outcome risk group in the decision tree model also exhibited a risk reduction, but the confidence interval was wider (5-year ARR 17.0%, 95% CI -5.4 to 39.4%). Cochran's Q test indicated that the ARR varied significantly by subgroup only for the subgroups created using the proportional hazards model. The hazard ratio for aspirin vs. placebo therapy did not vary significantly by subgroup in any of the models. The highest risk groups for the proportional hazards model and the random forest model contained 230 participants each, while the highest risk group in the decision tree model contained 41 participants. CONCLUSIONS: The choice of technique for internally developed outcome risk subgroup models influences HTE analyses. The rationale for using a particular subgroup determination model in HTE analyses needs to be explicitly defined based on the desired level of explainability (with feature importance), uncertainty of prediction, risk of overfitting, and assumptions regarding the underlying data structure. Replication of these analyses using data from other mid-size clinical trials may help to establish guidance for selecting an outcome risk prediction modelling technique for HTE analyses.
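To make the subgrouping and heterogeneity test concrete, the Python sketch below forms risk quintiles from a predicted risk score, estimates the ARR for aspirin versus placebo in each quintile from binary 5-year outcomes, and applies Cochran's Q test. The file and column names are assumptions, and the simple proportion-based ARR is a simplification of the survival-based estimates used in the analysis.

```python
# A minimal HTE-style sketch: risk quintiles, per-subgroup ARR, and Cochran's Q test.
# Column names (risk_score, arm, event) are hypothetical; event is a binary 5-year outcome.
import numpy as np
import pandas as pd
from scipy.stats import chi2

df = pd.read_csv("aspree_us.csv")                               # hypothetical file
df["risk_group"] = pd.qcut(df["risk_score"], 5, labels=False)   # quintile subgroups

arrs, variances = [], []
for _, sub in df.groupby("risk_group"):
    p_pl = sub.loc[sub["arm"] == "placebo", "event"].mean()
    p_as = sub.loc[sub["arm"] == "aspirin", "event"].mean()
    n_pl = (sub["arm"] == "placebo").sum()
    n_as = (sub["arm"] == "aspirin").sum()
    arrs.append(p_pl - p_as)                                    # absolute risk reduction
    variances.append(p_pl * (1 - p_pl) / n_pl + p_as * (1 - p_as) / n_as)

w = 1 / np.array(variances)
arr = np.array(arrs)
pooled = np.sum(w * arr) / np.sum(w)
Q = np.sum(w * (arr - pooled) ** 2)                             # Cochran's Q statistic
p_value = chi2.sf(Q, df=len(arr) - 1)
print(f"Q = {Q:.2f}, p = {p_value:.3f}")
```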
Subject(s)
Aspirin , Machine Learning , Proportional Hazards Models , Humans , Aspirin/therapeutic use , Aged , Female , Male , Treatment Outcome , United States , Risk Assessment/methods , Risk Assessment/statistics & numerical data , Models, Statistical , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Decision Trees , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/statistics & numerical data
ABSTRACT
Randomized controlled trials can be used to generate evidence on the efficacy and safety of new treatments in eating disorders research. Many of the trials previously conducted in this area have been deemed to be of low quality, in part due to a number of practical constraints. This article provides an overview of established and more innovative clinical trial designs, accompanied by pertinent examples, to highlight how design choices can enhance flexibility and improve efficiency of both resource allocation and participant involvement. Trial designs include individually randomized designs, cluster randomized designs, and designs with randomizations at multiple time points and/or addressing several research questions (master protocol studies). Design features include the use of adaptations and considerations for pragmatic or registry-based trials. The appropriate choice of trial design, together with rigorous trial conduct, reporting and analysis, can establish high-quality evidence to advance knowledge in the field. It is anticipated that this article will provide a broad and contemporary introduction to trial designs and will help researchers make informed trial design choices for improved testing of new interventions in eating disorders. PUBLIC SIGNIFICANCE: There is a paucity of high-quality randomized controlled trials in eating disorders research, highlighting the need to identify where efficiency gains in trial design may be possible to advance the field. We provide an overview of some key trial designs and features which may offer solutions to practical constraints and increase trial efficiency.
Subject(s)
Feeding and Eating Disorders , Randomized Controlled Trials as Topic , Research Design , Humans , Feeding and Eating Disorders/therapy
ABSTRACT
INTRODUCTION: Risk factors for cardiovascular disease (CVD) also increase the risk of dementia. However, it remains unclear whether commonly used CVD risk scores are associated with dementia risk in older adults who do not have a history of CVD, and whether there are gender differences in this association. The aim of this study was to determine whether CVD risk scores are prospectively associated with cognitive decline and dementia in initially healthy older men and women. METHODS: A total of 19,114 participants from a prospective cohort of individuals aged 65+ years without known CVD or dementia were recruited. The atherosclerotic cardiovascular disease risk score (ASCVDRS), Systematic Coronary Risk Evaluation 2-Older Persons (SCORE2-OP), and the Framingham risk score (FRS) were calculated at baseline. Risks of dementia (according to DSM-IV criteria) and cognitive decline (defined as a >1.5 standard deviation decline in global cognition, episodic memory, psychomotor speed, or verbal fluency from the previous year) were assessed using hazard ratios. RESULTS: Over a median follow-up of 6.4 years, 850 individuals developed dementia and 4,352 developed cognitive decline. Men and women in the highest ASCVDRS tertile had a 41% (95% CI 1.08, 1.85) and 45% (1.11, 1.89) increased risk of dementia compared to the lowest tertile, respectively. Likewise, men and women in the highest SCORE2-OP tertile had a 64% (1.24, 2.16) and 60% (1.22, 2.11) increased risk of dementia compared to the lowest tertile, respectively. Findings for the risk of cognitive decline were similar, although slightly weaker, for both the ASCVDRS and SCORE2-OP. However, the FRS was associated only with the risk of cognitive decline among women (highest vs. lowest tertile: 1.13 [1.01-1.26]). CONCLUSION: These findings suggest that the ASCVDRS and SCORE2-OP may be useful in clinical practice not only to assess future risk of CVD but also as potential early indicators of cognitive impairment, even in relatively healthy older men and women.
Subject(s)
Cardiovascular Diseases , Cognitive Dysfunction , Dementia , Male , Humans , Female , Aged , Aged, 80 and over , Dementia/diagnosis , Dementia/epidemiology , Dementia/etiology , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/etiology , Prospective Studies , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/epidemiology , Cognitive Dysfunction/etiology , Risk Factors , Heart Disease Risk Factors
ABSTRACT
BACKGROUND: Daily low-dose aspirin increases major bleeding; however, few studies have investigated its effect on iron deficiency and anemia. OBJECTIVE: To investigate the effect of low-dose aspirin on incident anemia, hemoglobin, and serum ferritin concentrations. DESIGN: Post hoc analysis of the ASPREE (ASPirin in Reducing Events in the Elderly) randomized controlled trial. (ClinicalTrials.gov: NCT01038583). SETTING: Primary/community care in Australia and the United States. PARTICIPANTS: Community-dwelling persons aged 70 years or older (≥65 years for Black persons and Hispanic persons). INTERVENTION: 100 mg of aspirin daily or placebo. MEASUREMENTS: Hemoglobin concentration was measured annually in all participants. Ferritin was measured at baseline and 3 years after random assignment in a large subset. RESULTS: 19 114 persons were randomly assigned. Anemia incidence in the aspirin and placebo groups was 51.2 events and 42.9 events per 1000 person-years, respectively (hazard ratio, 1.20 [95% CI, 1.12 to 1.29]). Hemoglobin concentrations declined by 3.6 g/L per 5 years in the placebo group, and the decline was steeper in the aspirin group by a further 0.6 g/L per 5 years (CI, 0.3 to 1.0 g/L). In 7139 participants with ferritin measures at baseline and year 3, the aspirin group had a greater prevalence of ferritin levels less than 45 µg/L at year 3 than the placebo group (465 [13%] vs. 350 [9.8%]) and a greater overall decline in ferritin, by 11.5% (CI, 9.3% to 13.7%), compared with placebo. A sensitivity analysis quantifying the effect of aspirin in the absence of major bleeding produced similar results. LIMITATIONS: Hemoglobin was measured only annually. No data were available on causes of anemia. CONCLUSION: Low-dose aspirin increased incident anemia and decline in ferritin in otherwise healthy older adults, independent of major bleeding. Periodic monitoring of hemoglobin should be considered in older persons on aspirin. PRIMARY FUNDING SOURCE: National Institutes of Health and Australian National Health and Medical Research Council.
Subject(s)
Anemia , Aspirin , Aged , Humans , United States/epidemiology , Aged, 80 and over , Aspirin/adverse effects , Incidence , Australia/epidemiology , Hemorrhage/epidemiology , Anemia/epidemiology , Anemia/prevention & control , Anemia/drug therapy , Ferritins , Hemoglobins , Double-Blind Method
ABSTRACT
Risk of chronic kidney disease (CKD) is influenced by environmental and genetic factors and increases sharply in individuals 70 years and older. Polygenic scores (PGS) for kidney disease-related traits have shown promise but require validation in well-characterized cohorts. Here, we assessed the performance of recently developed PGSs for CKD-related traits, both at baseline and longitudinally, in a cohort of healthy older individuals enrolled in the Australian ASPREE randomized controlled trial of daily low-dose aspirin. Among 11,813 genotyped participants aged 70 years or more with baseline eGFR measures, we tested associations between PGSs and measured eGFR at baseline, the clinical phenotype of CKD, and the longitudinal rate of eGFR decline spanning up to six years of follow-up per participant. A PGS for eGFR was associated with baseline eGFR, with a significant decrease of 3.9 mL/min/1.73 m2 (95% confidence interval -4.17 to -3.68) per standard deviation (SD) increase of the PGS. This PGS, as well as a PGS for CKD stage 3, were both associated with higher risk of baseline CKD stage 3 in cross-sectional analysis (odds ratio 1.75 per SD, 95% confidence interval 1.66-1.85, and odds ratio 1.51 per SD, 95% confidence interval 1.43-1.59, respectively). Longitudinally, two separate PGSs for eGFR slope were associated with significant kidney function decline during follow-up. Thus, our study demonstrates that kidney function has a considerable genetic component in older adults, and that new PGSs for kidney disease-related phenotypes may have potential utility for CKD risk prediction in advanced age.
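The Python sketch below illustrates the form of the cross-sectional PGS analyses: a per-SD association with baseline eGFR from a linear model and a per-SD odds ratio for CKD stage 3 from a logistic model. The file name, column names, and covariates are assumptions, not the study's code.

```python
# A minimal sketch of per-SD PGS associations with kidney phenotypes.
# Column names are hypothetical; sex is assumed to be coded 0/1.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("aspree_kidney.csv")                          # hypothetical file
df["pgs_sd"] = (df["pgs_egfr"] - df["pgs_egfr"].mean()) / df["pgs_egfr"].std()
X = sm.add_constant(df[["pgs_sd", "age", "sex"]])              # illustrative covariates

# Linear model: change in baseline eGFR per SD of the PGS.
lin = sm.OLS(df["egfr_baseline"], X).fit()
print("eGFR change per SD of PGS:", round(lin.params["pgs_sd"], 2))

# Logistic model: odds ratio for baseline CKD stage 3 per SD of the PGS.
logit = sm.Logit(df["ckd_stage3"], X).fit(disp=0)
print("OR per SD of PGS:", round(np.exp(logit.params["pgs_sd"]), 2))
```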
Subject(s)
Renal Insufficiency, Chronic , Humans , Longitudinal Studies , Cross-Sectional Studies , Glomerular Filtration Rate , Disease Progression , Australia , Renal Insufficiency, Chronic/diagnosis , Renal Insufficiency, Chronic/genetics , Renal Insufficiency, Chronic/complications , Phenotype
ABSTRACT
BACKGROUND: Evidence for the prognostic implications of hyperglycaemia in older adults is inconsistent. OBJECTIVE: To evaluate disability-free survival (DFS) in older individuals by glycaemic status. METHODS: This analysis used data from a randomised trial recruiting 19,114 community-based participants aged ≥70 years who had no prior cardiovascular events, dementia or physical disability. Participants with sufficient information to ascertain their baseline diabetes status were categorised as having normoglycaemia (fasting plasma glucose [FPG] < 5.6 mmol/l, 64%), prediabetes (FPG 5.6 to <7.0 mmol/l, 26%) and diabetes (self-report or FPG ≥ 7.0 mmol/l or use of glucose-lowering agents, 11%). The primary outcome was loss of DFS, a composite of all-cause mortality, persistent physical disability or dementia. Other outcomes included the three individual components of DFS loss, as well as cognitive impairment-no dementia (CIND), major adverse cardiovascular events (MACE) and any cardiovascular event. Cox models were used for outcome analyses, with covariate adjustment using inverse-probability weighting. RESULTS: We included 18,816 participants (median follow-up: 6.9 years). Compared to normoglycaemia, participants with diabetes had greater risks of DFS loss (weighted HR: 1.39, 95% CI 1.21-1.60), all-cause mortality (1.45, 1.23-1.72), persistent physical disability (1.73, 1.35-2.22), CIND (1.22, 1.08-1.38), MACE (1.30, 1.04-1.63) and cardiovascular events (1.25, 1.02-1.54) but not dementia (1.13, 0.87-1.47). The prediabetes group did not have an excess risk for DFS loss (1.02, 0.93-1.12) or other outcomes. CONCLUSIONS: Among older people, diabetes was associated with reduced DFS, and higher risk of CIND and cardiovascular outcomes, whereas prediabetes was not. The impact of preventing or treating diabetes in this age group deserves closer attention.
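The Python sketch below illustrates inverse-probability-weighted Cox modelling for a binary exposure (diabetes versus normoglycaemia); the paper compares three glycaemic categories, which would require a multinomial propensity model. File and column names are assumptions, and weight stabilisation and truncation are omitted.

```python
# A minimal sketch of an inverse-probability-weighted Cox model with lifelines.
# diabetes is assumed to be coded 1/0; columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("glycaemia_cohort.csv")          # hypothetical file
covs = ["age", "sex", "bmi", "smoking"]           # illustrative baseline covariates

# Probability of exposure given covariates, then unstabilised IP weights.
p = LogisticRegression(max_iter=1000).fit(df[covs], df["diabetes"]).predict_proba(df[covs])[:, 1]
df["ipw"] = df["diabetes"] / p + (1 - df["diabetes"]) / (1 - p)

# Weighted Cox model for DFS loss with robust standard errors.
cph = CoxPHFitter()
cph.fit(df[["time_dfs", "dfs_loss", "diabetes", "ipw"]],
        duration_col="time_dfs", event_col="dfs_loss",
        weights_col="ipw", robust=True)
cph.print_summary()   # weighted HR for "diabetes"
```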
Subject(s)
Cardiovascular Diseases , Diabetes Mellitus , Prediabetic State , Aged , Humans , Aspirin , Diabetes Mellitus/diagnosis , Prediabetic State/diagnosis , Prediabetic State/drug therapy , Prognosis , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/prevention & control
ABSTRACT
Frailty and chronic kidney disease (CKD) both increase with age and are prevalent in older adults. However, studies in older adults examining the relationship between frailty and milder impairments of kidney function are relatively sparse. We examined the cross-sectional association of baseline estimated glomerular filtration rate (eGFR), albuminuria and CKD ([eGFR <60 ml/min/1.73 m2] and/or albuminuria [>3.0 mg/mmol]) with prefrailty and frailty in the ASPirin in Reducing Events in the Elderly (ASPREE) trial cohort of healthy older participants. Univariate logistic regression models were used to estimate unadjusted odds ratios (ORs) and 95% confidence intervals (CIs) for prevalent combined prefrailty and frailty (defined, respectively, as the presence of 1-2 or 3+ of 5 modified Fried criteria) in relation to CKD, eGFR, albuminuria and other potential risk factors. Multivariable models calculated ORs for prefrailty-frailty adjusted for potential confounders and either (i) CKD, (ii) eGFR and albuminuria as continuous variables, or (iii) eGFR and albuminuria as categorical variables. Of 17,759 eligible participants, 6934 were classified as prefrail and 389 as frail. CKD, eGFR and albuminuria were all associated with combined prefrailty-frailty on univariate analysis. In the multivariable modelling, neither CKD (reduced eGFR and/or albuminuria) nor eGFR (as either a continuous or categorical variable) was associated with prefrailty-frailty. However, albuminuria, either as a continuous variable (OR [95% CI] 1.07 [1.04-1.10]; p < .001) or as a categorical variable (OR 1.21 [1.08-1.36]; p = .001), was consistently associated with prefrailty-frailty. The complex relationship between albuminuria (which may be a biomarker for vascular inflammation), ageing, progressive CKD and frailty requires further investigation.
Subject(s)
Frailty , Renal Insufficiency, Chronic , Humans , Aged , Frailty/diagnosis , Frailty/epidemiology , Albuminuria/diagnosis , Albuminuria/epidemiology , Aspirin/adverse effects , Cross-Sectional Studies , Renal Insufficiency, Chronic/complications , Renal Insufficiency, Chronic/diagnosis , Renal Insufficiency, Chronic/epidemiology , Glomerular Filtration Rate , Risk Factors
ABSTRACT
BACKGROUND: Unhealthy lifestyle behaviours such as smoking, high alcohol consumption, poor diet or low physical activity are associated with morbidity and mortality. Public health guidelines provide recommendations for adherence to these four factors; however, their relationship to the health of older people is less certain. METHODS: The study involved 11,340 Australian participants (median age 73.9 years [interquartile range (IQR) 71.7, 77.3]) from the ASPirin in Reducing Events in the Elderly study, followed for a median of 6.8 years (IQR: 5.7, 7.9). We investigated whether a point-based lifestyle score based on adherence to guidelines for a healthy diet, physical activity, non-smoking and moderate alcohol consumption was associated with subsequent all-cause and cause-specific mortality. RESULTS: In multivariable-adjusted models, compared to those in the unfavourable lifestyle group, individuals in the moderate lifestyle group (hazard ratio (HR) 0.73 [95% CI 0.61, 0.88]) and favourable lifestyle group (HR 0.68 [95% CI 0.56, 0.83]) had lower risk of all-cause mortality. A similar pattern was observed for cardiovascular-related mortality and non-cancer/non-cardiovascular-related mortality. There was no association of lifestyle with cancer-related mortality. CONCLUSIONS: In a large cohort of initially healthy older people, reported adherence to a healthy lifestyle was associated with reduced risk of all-cause and cause-specific mortality. Adherence to all four lifestyle factors was associated with the strongest protection.
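The Python sketch below shows one way a point-based lifestyle score of this kind can be constructed and grouped; the adherence cut-offs, group boundaries, and column names are assumptions for illustration only.

```python
# A minimal sketch of a point-based lifestyle score: one point per guideline met,
# then grouping into unfavourable / moderate / favourable categories.
# All cut-offs and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("lifestyle.csv")                                     # hypothetical file
df["score"] = (
    (df["diet_quality"] >= df["diet_quality"].quantile(0.5)).astype(int)  # healthy diet (assumed cut-off)
    + (df["physical_activity_met"] == 1).astype(int)                      # meets activity guideline
    + (df["current_smoker"] == 0).astype(int)                             # non-smoking
    + (df["alcohol_moderate"] == 1).astype(int)                           # moderate alcohol intake
)
df["lifestyle_group"] = pd.cut(df["score"], bins=[-1, 1, 3, 4],           # assumed group boundaries
                               labels=["unfavourable", "moderate", "favourable"])
print(df["lifestyle_group"].value_counts())
```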
Subject(s)
Healthy Lifestyle , Mortality , Aged , Humans , Australia/epidemiology , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/mortality , Health Behavior , Life Style , Prospective Studies , Risk Factors , Diet, Healthy/mortality , Diet, Healthy/statistics & numerical data , Exercise/statistics & numerical data , Alcohol Drinking/epidemiology , Alcohol Drinking/mortality , Smoking/epidemiology , Smoking/mortality , Neoplasms/epidemiology , Neoplasms/mortality
ABSTRACT
BACKGROUND: We investigated the risk of infection with subcutaneous versus intravenous trastuzumab and rituximab administration in an individual patient data (IPD) and published data meta-analysis of randomised controlled trials (RCTs). METHODS: Databases were searched to September 2021. Primary outcomes were serious and high-grade infection. Relative risks (RRs) and 95% confidence intervals (95% CIs) were calculated using random-effects models. RESULTS: The IPD meta-analysis (6 RCTs, 2971 participants, 2320 infections) demonstrated a higher incidence of infection with subcutaneous versus intravenous administration, without reaching statistical significance (serious: 12.2% versus 9.3%, RR 1.28, 95% CI 0.93 to 1.77, P = 0.13; high-grade: 12.2% versus 9.9%, RR 1.32, 95% CI 0.98 to 1.77, P = 0.07). With exclusion of an outlying study in post hoc analysis, the increased risks were statistically significant (serious: 13.1% versus 8.4%, RR 1.53, 95% CI 1.14 to 2.06, P = 0.01; high-grade: 13.2% versus 9.3%, RR 1.56, 95% CI 1.16 to 2.11, P < 0.01). The published data meta-analysis (8 RCTs, 3745 participants, 648 infections) demonstrated a higher incidence of serious (HR 1.31, 95% CI 1.02 to 1.68, P = 0.04) and high-grade (HR 1.52, 95% CI 1.17 to 1.98, P < 0.01) infection with subcutaneous versus intravenous administration. CONCLUSIONS: Results suggest an increased infection risk with subcutaneous versus intravenous administration, although the IPD findings are sensitive to the exclusion of one trial with inconsistent results and identified risk of bias. Ongoing trials may confirm these findings. Clinical surveillance should be considered when switching to subcutaneous administration. PROSPERO registration: CRD42020221866/CRD42020125376.
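The Python sketch below implements a DerSimonian-Laird random-effects pooling of log relative risks, the general class of model named in the methods; the per-trial event counts are placeholders, not data from the included trials.

```python
# A minimal DerSimonian-Laird random-effects meta-analysis of relative risks.
# Event counts below are illustrative placeholders only.
import numpy as np
from scipy.stats import norm

# (events_subcutaneous, n_subcutaneous, events_intravenous, n_intravenous) per trial
trials = [(30, 250, 22, 250), (41, 300, 35, 310), (15, 180, 18, 175)]

log_rr, var = [], []
for a, n1, c, n2 in trials:
    log_rr.append(np.log((a / n1) / (c / n2)))
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)        # variance of the log relative risk
log_rr, var = np.array(log_rr), np.array(var)

w_fixed = 1 / var
Q = np.sum(w_fixed * (log_rr - np.sum(w_fixed * log_rr) / np.sum(w_fixed)) ** 2)
tau2 = max(0.0, (Q - (len(trials) - 1)) /
           (np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)))   # between-trial variance
w = 1 / (var + tau2)
pooled = np.sum(w * log_rr) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
p = 2 * norm.sf(abs(pooled / se))
print(f"RR = {np.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f}), p = {p:.2f}")
```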
ABSTRACT
INTRODUCTION: Cardiovascular disease (CVD) is a recognized risk factor for dementia. Here we determined the extent to which an incident CVD event modifies the trajectory of cognitive function and risk of dementia. METHODS: 19,114 adults (65+) without CVD or dementia were followed prospectively over 9 years. Incident CVD (fatal coronary heart disease, nonfatal myocardial infarction [MI], stroke, hospitalization for heart failure) and dementia (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria) were adjudicated by experts. RESULTS: Nine hundred twenty-two participants had incident CVD, and 44 developed dementia after CVD (4.9% vs. 4.4% for participants without CVD). Following a CVD event there was a short-term drop in processing speed (-1.97, 95% confidence interval [CI]: -2.57 to -1.41), but there was no significant association with longer-term processing speed. In contrast, faster declines in trajectories of global function (-0.56, 95% CI: -0.76 to -0.36), episodic memory (-0.10, 95% CI: -0.16 to -0.04), and verbal fluency (-0.19, 95% CI: -0.30 to -0.01) were observed. DISCUSSION: Findings highlight the importance of monitoring cognition after a CVD event.
Subject(s)
Cardiovascular Diseases , Coronary Disease , Dementia , Humans , Aged , Cardiovascular Diseases/epidemiology , Risk Factors , Cognition , Dementia/epidemiology
ABSTRACT
INTRODUCTION: Recent genome-wide association studies identified new dementia-associated variants. We assessed the performance of updated polygenic risk scores (PRSs) using these variants in an independent cohort. METHODS: We used Cox models and area under the curve (AUC) to validate new PRSs (PRS-83SNP, PRS-SBayesR, and PRS-CS) compared with an older PRS-23SNP in 12,031 initially healthy participants ≥70 years of age. Dementia was rigorously adjudicated according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. RESULTS: PRS-83SNP, PRS-SBayesR, and PRS-CS were associated with incident dementia, with fully adjusted (including apolipoprotein E [APOE] ε4) hazard ratios per standard deviation (SD) of 1.35 (1.23-1.47), 1.37 (1.25-1.50), and 1.42 (1.30-1.56), respectively. The AUC of a model containing conventional/non-genetic factors and APOE was 74.7%. This was improved to 75.7% (p = 0.007), 76.0% (p = 0.004), and 76.1% (p = 0.003) with the addition of PRS-83SNP, PRS-SBayesR, and PRS-CS, respectively. The PRS-23SNP did not improve AUC (74.7%, p = 0.95). CONCLUSION: New PRSs for dementia significantly improve risk-prediction performance, but still account for less risk than APOE genotype overall.
Subject(s)
Dementia , Genetic Risk Score , Humans , Prospective Studies , Genome-Wide Association Study , Apolipoproteins E/genetics , Dementia/genetics , Risk Factors
ABSTRACT
Late-life depression is common and often inadequately managed using existing therapies. Depression is also associated with increased markers of inflammation, suggesting a potential role for anti-inflammatory agents. ASPREE-D is a sub-study of ASPREE, a large multi-centre, population-based, double-blind, placebo-controlled trial of aspirin vs placebo in older Australian and American adults (median follow-up: 4.7 years), of whom 1879 were depressed at baseline. Participants were given a 100 mg daily dose of aspirin or placebo. Depressive symptoms were assessed annually using the validated, self-rated short version of the Center for Epidemiological Studies Depression scale. There was a significant increase in depressive scores (0.6; 95% CI 0.2 to 0.9; χ2 (1) = 10.37; p = 0.001) and a significant decrease in the mental health component score of a quality of life scale (-0.7; 95% CI -1.4 to -0.1; χ2 (1) = 4.74; p = 0.029) in the aspirin group compared to the placebo group. These effects were greater in the first year of follow-up and persisted throughout the study, albeit with small to very small effect sizes. This study failed to demonstrate any benefit of aspirin in the long-term course of depression in this community-dwelling sample of older adults over a 5-year period, and identified an adverse effect of aspirin on the course of depression in those with pre-existing depressive symptoms.
Subject(s)
Aspirin , Depression , Aged , Australia , Depression/drug therapy , Double-Blind Method , Humans , Quality of Life
ABSTRACT
PURPOSE: Recent epidemiological evidence has suggested that use of lipid-lowering medications, particularly statins, is associated with reduced cardiovascular disease (CVD) events and persistent physical disability in healthy older adults. However, the comparative efficacy of different statins in this group remains unclear. This study aimed to compare different forms of statins in their associations with CVD and physical disability in healthy older adults. METHODS: This post hoc analysis included data from 5981 participants aged ≥ 70 years (≥ 65 for US minorities; median age: 74.0) followed for a median of 4.7 years, who had no prior CVD events or physical disability and reported using a statin at baseline. The incidence of the composite and components of major adverse cardiovascular events and of persistent physical disability was compared across different statins according to their type, potency, and lipophilicity using multivariable Cox proportional-hazards models. RESULTS: Atorvastatin was the most used statin type at baseline (37.9%), followed by simvastatin (29.6%), rosuvastatin (25.5%), and other statins (7.0%, predominantly pravastatin). In comparisons of specific statins according to type and lipophilicity (lipophilic vs. hydrophilic statin), observed differences in all outcomes were small and not statistically significant (all p values > 0.05). High-potency statin use (atorvastatin and rosuvastatin) was marginally associated with lower risk of fatal CVD events compared with low-/moderate-potency statin use (hazard ratio: 0.59; 95% confidence interval: 0.35, 1.00). CONCLUSION: There were minimal differences in CVD outcomes and no significant difference in persistent physical disability between various forms of statins in healthy older adults. Future investigations are needed to confirm our results.
Subject(s)
Cardiovascular Diseases/prevention & control , Disabled Persons/statistics & numerical data , Hydroxymethylglutaryl-CoA Reductase Inhibitors/administration & dosage , Aged , Aged, 80 and over , Atorvastatin/administration & dosage , Atorvastatin/adverse effects , Double-Blind Method , Female , Humans , Hydroxymethylglutaryl-CoA Reductase Inhibitors/adverse effects , Male , Pravastatin/administration & dosage , Pravastatin/adverse effects , Primary Prevention , Proportional Hazards Models , Rosuvastatin Calcium/administration & dosage , Rosuvastatin Calcium/adverse effects , Simvastatin/administration & dosage , Simvastatin/adverse effects
ABSTRACT
BACKGROUND AND OBJECTIVE: The clinical significance of sleep-disordered breathing (SDB) in older age is uncertain. This study determined the prevalence of SDB and its associations with mood, daytime sleepiness, quality of life (QOL) and cognition in a relatively healthy older Australian cohort. METHODS: A cross-sectional analysis was conducted using data from the Study of Neurocognitive Outcomes, Radiological and retinal Effects of Aspirin in Sleep Apnoea. Participants completed an unattended limited channel sleep study to measure the oxygen desaturation index (ODI), used to define mild (ODI 5-15) and moderate/severe (ODI ≥ 15) SDB, as well as the Center for Epidemiological Studies Depression Scale, the Epworth Sleepiness Scale, the 12-item Short-Form for QOL and neuropsychological tests. RESULTS: Of the 1399 participants (mean age 74.0 years), 36% (273 of 753) of men and 25% (164 of 646) of women had moderate/severe SDB. SDB was associated with lower physical health-related QOL (mild SDB: beta coefficient [β] -2.5, 95% CI -3.6 to -1.3, p < 0.001; moderate/severe SDB: β -1.8, 95% CI -3.0 to -0.6, p = 0.005) and with lower global composite cognition (mild SDB: β -0.1, 95% CI -0.2 to 0.0, p = 0.022; moderate/severe SDB: β -0.1, 95% CI -0.2 to 0.0, p = 0.032) compared to no SDB. SDB was not associated with daytime sleepiness or depression. CONCLUSION: SDB was associated with lower physical health-related quality of life and cognitive function. Given the high prevalence of SDB in older age, assessing QOL and cognition may better delineate subgroups requiring further management, and provide useful treatment target measures for this age group.
Subject(s)
Disorders of Excessive Somnolence , Sleep Apnea Syndromes , Aged , Australia , Cognition , Cross-Sectional Studies , Disorders of Excessive Somnolence/complications , Disorders of Excessive Somnolence/epidemiology , Female , Humans , Male , Oxygen , Quality of Life
ABSTRACT
AIMS: Regular skin examinations for early detection of melanoma are recommended for high-risk individuals, but there is minimal consensus regarding what constitutes 'high risk'. Melanoma risk prediction models may guide this. We compared two online melanoma risk prediction tools, the Victorian Melanoma Service (VMS) and Melanoma Institute Australia (MIA) risk tools, to assess differences in how they classify patients at high risk of a first primary melanoma. METHODS: Risk factor data for 357 patients presenting with their first primary melanoma were entered into both risk tools. Predicted risks were recorded: 5-year absolute risk (VMS and MIA tools) and 10-year, lifetime, and relative risk estimates (MIA tool). Sensitivities for each tool were calculated using the same high-risk thresholds. RESULTS: The MIA risk tool showed greater sensitivity on comparison of 5-year absolute risks (90% MIA vs 78% VMS). Patients had significantly higher odds of being classified as high or very high risk using the MIA risk tool overall, and for each patient subgroup. Using either tool, patients of male gender or with synchronous multiple first primary melanomas were more likely to be correctly classified as high or very high risk using 5-year absolute risk thresholds, but tumour invasiveness was unrelated to risk classification. Classification differed when using the MIA risk categories based on relative risk. CONCLUSIONS: Both melanoma risk prediction tools had high sensitivity for identifying individuals at high risk and could be used for optimising prevention campaigns. The choice of risk tool, measure, and threshold for risk stratification depends on the intended purpose of risk prediction, and ideally requires information on specificity.
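The Python sketch below shows how the sensitivity comparison can be computed: among patients who developed a first primary melanoma, the proportion classified as high risk by each tool at a common threshold. The column names and threshold value are assumptions.

```python
# A minimal sketch of sensitivity at a shared high-risk threshold.
# Column names and the threshold are hypothetical.
import pandas as pd

df = pd.read_csv("melanoma_cases.csv")        # hypothetical: one row per melanoma case
threshold = 0.01                              # assumed 5-year absolute-risk cut-off

sens_mia = (df["risk5y_mia"] >= threshold).mean()   # proportion of cases flagged high risk
sens_vms = (df["risk5y_vms"] >= threshold).mean()
print(f"sensitivity MIA: {sens_mia:.0%}, VMS: {sens_vms:.0%}")
```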
Subject(s)
Melanoma , Skin Neoplasms , Australia/epidemiology , Humans , Male , Melanoma/diagnosis , Melanoma/epidemiology , Melanoma/pathology , Neoplasm Invasiveness , Risk Factors , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology
ABSTRACT
OBJECTIVE: There is a lack of robust data on significant gastrointestinal (GI) bleeding in older people using aspirin. We calculated the incidence, risk factors and absolute risk using data from a large randomised controlled trial. DESIGN: Data were extracted from an aspirin versus placebo primary prevention trial conducted from 2010 to 2017 ('ASPirin in Reducing Events in the Elderly (ASPREE)', n=19 114) in community-dwelling persons aged ≥70 years. Clinical characteristics were collected at baseline and annually. The endpoint was major GI bleeding that resulted in transfusion, hospitalisation, surgery or death, adjudicated independently by two physicians blinded to trial arm. RESULTS: Over a median follow-up of 4.7 years (88 389 person-years), there were 137 upper GI bleeds (89 in the aspirin arm and 48 in the placebo arm, HR 1.87, 95% CI 1.32 to 2.66, p<0.01) and 127 lower GI bleeds (73 in the aspirin arm and 54 in the placebo arm, HR 1.36, 95% CI 0.96 to 1.94, p=0.08), reflecting a 60% increase in bleeding overall. There were two fatal bleeds in the placebo arm. Multivariable analyses indicated that age, smoking, hypertension, chronic kidney disease and obesity increased bleeding risk. The absolute 5-year risk of bleeding was 0.25% (95% CI 0.16% to 0.37%) for a 70-year-old not on aspirin and up to 5.03% (2.56% to 8.73%) for an 80-year-old taking aspirin with additional risk factors. CONCLUSION: Aspirin increases overall GI bleeding risk by 60%; however, the 5-year absolute risk of serious bleeding is modest in younger, well individuals. These data may assist patients and their clinicians in making informed decisions about prophylactic use of aspirin. TRIAL REGISTRATION NUMBER: ASPREE, NCT01038583.
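The Python sketch below is a back-of-envelope illustration, not the trial's modelling, of how an event rate per person-year converts to a 5-year absolute risk under a constant-hazard assumption and how a hazard ratio scales it; the input numbers are illustrative only.

```python
# A minimal sketch: constant-hazard conversion of an event rate to a 5-year risk.
# Input numbers are illustrative placeholders, not trial estimates.
import math

def five_year_risk(events: float, person_years: float, hazard_ratio: float = 1.0) -> float:
    """Absolute 5-year risk assuming a constant hazard (exponential survival)."""
    rate = events / person_years            # events per person-year
    return 1 - math.exp(-5 * rate * hazard_ratio)

# Example: a baseline rate of 0.5 bleeds per 1000 person-years, with and without
# a hazard ratio of 1.87 applied (the upper GI bleed HR reported above).
print(f"no aspirin:             {five_year_risk(0.5, 1000):.2%}")
print(f"with aspirin (HR 1.87): {five_year_risk(0.5, 1000, hazard_ratio=1.87):.2%}")
```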
Subject(s)
Anti-Inflammatory Agents, Non-Steroidal/adverse effects , Aspirin/adverse effects , Gastrointestinal Hemorrhage/chemically induced , Gastrointestinal Hemorrhage/epidemiology , Aged , Aged, 80 and over , Australia/epidemiology , Double-Blind Method , Female , Humans , Incidence , Independent Living , Male , Primary Prevention , Risk Factors , United States/epidemiology
ABSTRACT
The role of aspirin for primary prevention in older adults with chronic kidney disease (CKD) is unclear. Therefore, a post hoc analysis of the randomized controlled trial ASPirin in Reducing Events in the Elderly (ASPREE) was undertaken, comparing 100 mg of enteric-coated aspirin daily against matching placebo. Participants were community-dwelling adults aged 70 years and older in Australia, or 65 years and older in the United States, all free of a history of dementia or cardiovascular disease and of any disease expected to lead to death within five years. CKD was defined as present at baseline if either eGFR was under 60 mL/min/1.73 m2 or the urine albumin to creatinine ratio was 3 mg/mmol or more. Among the 4758 participants with CKD and the 13,004 without, rates of a composite endpoint (dementia, persistent physical disability or death), major adverse cardiovascular events and clinically significant bleeding were almost double in participants with CKD compared with those without. Aspirin's effects, as estimated by hazard ratios, were generally similar between the CKD and non-CKD groups for dementia, persistent physical disability or death, major adverse cardiovascular events and clinically significant bleeding. Thus, in our analysis, aspirin did not improve outcomes in older people while increasing the risk of bleeding, with mostly consistent effects in participants with and without CKD.