Results 1 - 20 of 59
1.
BMJ ; 385: e077097, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719492

ABSTRACT

OBJECTIVE: To compare the effectiveness of three commonly prescribed oral antidiabetic drugs added to metformin for people with type 2 diabetes mellitus requiring second line treatment in routine clinical practice. DESIGN: Cohort study emulating a comparative effectiveness trial (target trial). SETTING: Linked primary care, hospital, and death data in England, 2015-21. PARTICIPANTS: 75 739 adults with type 2 diabetes mellitus who initiated second line oral antidiabetic treatment with a sulfonylurea, DPP-4 inhibitor, or SGLT-2 inhibitor added to metformin. MAIN OUTCOME MEASURES: Primary outcome was absolute change in glycated haemoglobin A1c (HbA1c) between baseline and one year follow-up. Secondary outcomes were change in body mass index (BMI), systolic blood pressure, and estimated glomerular filtration rate (eGFR) at one year and two years, change in HbA1c at two years, and time to ≥40% decline in eGFR, major adverse kidney event, hospital admission for heart failure, major adverse cardiovascular event (MACE), and all cause mortality. Instrumental variable analysis was used to reduce the risk of confounding due to unobserved baseline measures. RESULTS: 75 739 people initiated second line oral antidiabetic treatment with sulfonylureas (n=25 693, 33.9%), DPP-4 inhibitors (n=34 464, 45.5%), or SGLT-2 inhibitors (n=15 582, 20.6%). SGLT-2 inhibitors were more effective than DPP-4 inhibitors or sulfonylureas in reducing mean HbA1c values between baseline and one year. After the instrumental variable analysis, the mean differences in HbA1c change between baseline and one year were -2.5 mmol/mol (95% confidence interval (CI) -3.7 to -1.3) for SGLT-2 inhibitors versus sulfonylureas and -3.2 mmol/mol (-4.6 to -1.8) for SGLT-2 inhibitors versus DPP-4 inhibitors. SGLT-2 inhibitors were more effective than sulfonylureas or DPP-4 inhibitors in reducing BMI and systolic blood pressure. For some secondary endpoints, evidence for SGLT-2 inhibitors being more effective was lacking: the hazard ratio for MACE, for example, was 0.99 (95% CI 0.61 to 1.62) versus sulfonylureas and 0.91 (0.51 to 1.63) versus DPP-4 inhibitors. SGLT-2 inhibitors had reduced hazards of hospital admission for heart failure compared with DPP-4 inhibitors (0.32, 0.12 to 0.90) and sulfonylureas (0.46, 0.20 to 1.05). The hazard ratio for a ≥40% decline in eGFR indicated a protective effect versus sulfonylureas (0.42, 0.22 to 0.82), with high uncertainty in the estimated hazard ratio versus DPP-4 inhibitors (0.64, 0.29 to 1.43). CONCLUSIONS: This emulation study of a target trial found that SGLT-2 inhibitors were more effective than sulfonylureas or DPP-4 inhibitors in lowering mean HbA1c, BMI, and systolic blood pressure and in reducing the hazards of hospital admission for heart failure (v DPP-4 inhibitors) and kidney disease progression (v sulfonylureas), with no evidence of differences in other clinical endpoints.


Subject(s)
Diabetes Mellitus, Type 2 , Dipeptidyl-Peptidase IV Inhibitors , Glycated Hemoglobin , Hypoglycemic Agents , Metformin , Sodium-Glucose Transporter 2 Inhibitors , Sulfonylurea Compounds , Humans , Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/therapeutic use , Hypoglycemic Agents/administration & dosage , Male , Female , Middle Aged , Sulfonylurea Compounds/therapeutic use , Sulfonylurea Compounds/administration & dosage , Aged , Metformin/therapeutic use , Metformin/administration & dosage , Glycated Hemoglobin/analysis , Glycated Hemoglobin/metabolism , Dipeptidyl-Peptidase IV Inhibitors/therapeutic use , Dipeptidyl-Peptidase IV Inhibitors/administration & dosage , Sodium-Glucose Transporter 2 Inhibitors/therapeutic use , Sodium-Glucose Transporter 2 Inhibitors/administration & dosage , Administration, Oral , Glomerular Filtration Rate/drug effects , England/epidemiology , Drug Therapy, Combination , Treatment Outcome , Cohort Studies , Comparative Effectiveness Research , Body Mass Index , Blood Pressure/drug effects
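
The instrumental variable analysis used in this study can be illustrated with a minimal two-stage least squares (2SLS) sketch on simulated data. This is not the authors' implementation; the instrument, variable names, and effect size below are hypothetical, and a real analysis would use a dedicated IV routine with valid 2SLS standard errors.

```python
# Minimal 2SLS sketch: an instrument shifts treatment but not the outcome
# directly, so stage-2 recovers the causal effect despite unmeasured
# confounding. All data and names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000
z = rng.binomial(1, 0.5, n)                           # instrument, e.g. prescriber preference
u = rng.normal(size=n)                                # unmeasured confounder
treat = rng.binomial(1, 1 / (1 + np.exp(-(z + u))))   # treatment depends on z and u
hba1c_change = -2.5 * treat + u + rng.normal(size=n)  # outcome confounded by u

# Stage 1: regress treatment on the instrument and take fitted values.
treat_hat = sm.OLS(treat, sm.add_constant(z)).fit().fittedvalues

# Stage 2: regress the outcome on the fitted treatment values.
stage2 = sm.OLS(hba1c_change, sm.add_constant(treat_hat)).fit()
print(stage2.params[1])  # approximates the causal effect (-2.5)
# Note: stage-2 OLS standard errors are not valid for 2SLS; a dedicated
# IV routine or corrected variance estimator is needed in practice.
```
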
3.
BMJ Open ; 14(4): e081881, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38658004

ABSTRACT

INTRODUCTION: Telomeres are a measure of cellular ageing with potential links to diseases such as cardiovascular diseases and cancer. Studies have shown that some infections may be associated with telomere shortening, but whether an association exists across all types and severities of infections and in which populations is unclear. Therefore we aim to collate available evidence to enable comparison and to inform future research in this field. METHODS AND ANALYSIS: We will search for studies involving telomere length and infection in various databases including MEDLINE (Ovid interface), EMBASE (Ovid interface), Web of Science, Scopus, Global Health and the Cochrane Library. For grey literature, the British Library of electronic theses databases (ETHOS) will be explored. We will not limit by study type, geographical location, infection type or method of outcome measurement. Two researchers will independently carry out study selection, data extraction and risk of bias assessment using the ROB2 and ROBINS-E tools. The overall quality of the studies will be determined using the Grading of Recommendations Assessment, Development and Evaluation criteria. We will also evaluate study heterogeneity with respect to study design, exposure and outcome measurement and if there is sufficient homogeneity, a meta-analysis will be conducted. Otherwise, we will provide a narrative synthesis with results grouped by exposure category and study design. ETHICS AND DISSEMINATION: The present study does not require ethical approval. Results will be disseminated via publishing in a peer-reviewed journal and conference presentations. PROSPERO REGISTRATION NUMBER: CRD42023444854.


Subject(s)
Research Design , Systematic Reviews as Topic , Humans , Telomere Shortening , Telomere/genetics , Infections
4.
Biom J ; 66(1): e2300085, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37823668

ABSTRACT

For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use.


Subject(s)
Research , Data Interpretation, Statistical , Computer Simulation
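
A small sketch makes the contrast concrete: design (a) redraws the complete data in every repetition before imposing missingness, while design (b) fixes one complete data set and redraws only the missingness indicators. The setup below is a hypothetical illustration (complete-case estimation of a mean under MCAR), not the authors' own study.

```python
# (a) Redraw complete data every repetition, then impose missingness.
# (b) Fix one complete data set; redraw only the missingness indicators.
# Estimand: the mean of y; complete-case estimation under MCAR.
import numpy as np

rng = np.random.default_rng(0)
n, reps, p_miss = 200, 2000, 0.3

def cc_mean(y, miss):
    """Complete-case estimate of the mean."""
    return y[~miss].mean()

est_a = []
for _ in range(reps):
    y = rng.normal(0.0, 1.0, n)        # fresh complete data each repetition
    miss = rng.random(n) < p_miss      # fresh missingness
    est_a.append(cc_mean(y, miss))

y_fixed = rng.normal(0.0, 1.0, n)      # one fixed complete data set
est_b = [cc_mean(y_fixed, rng.random(n) < p_miss) for _ in range(reps)]

# Design (b) conditions on a single realisation of y, so the spread of its
# estimates reflects missingness variability only and understates the
# sampling variance that design (a) captures.
print(np.var(est_a), np.var(est_b))
```
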
5.
Stat Biopharm Res ; 15(2): 421-432, 2023.
Article in English | MEDLINE | ID: mdl-37260584

ABSTRACT

The ICH E9 addendum introduces the term intercurrent event to refer to events that happen after treatment initiation and that can either preclude observation of the outcome of interest or affect its interpretation. It proposes five strategies for handling intercurrent events to form an estimand but does not suggest statistical methods for estimation. In this article we focus on the hypothetical strategy, where the treatment effect is defined under the hypothetical scenario in which the intercurrent event is prevented. For its estimation, we consider causal inference and missing data methods. We establish that certain "causal inference estimators" are identical to certain "missing data estimators." These links may help those familiar with one set of methods but not the other. Moreover, using potential outcome notation allows us to state more clearly the assumptions on which missing data methods rely to estimate hypothetical estimands. This helps to indicate whether estimating a hypothetical estimand is reasonable, and what data should be used in the analysis. We show that hypothetical estimands can be estimated by exploiting data after intercurrent event occurrence, which is typically not used. Supplementary materials for this article are available online.
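
As a rough sketch of the hypothetical strategy discussed above, the code below sets outcomes after an intercurrent event (ICE) to missing and imputes them under a missing-at-random assumption. A single deterministic imputation is shown for brevity; the paper itself considers multiple imputation and equivalent causal inference estimators. All data and variable names are hypothetical.

```python
# Hypothetical-strategy sketch: set post-ICE outcomes to missing, impute
# them under MAR from ICE-free patients, and contrast arm means.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
arm = rng.binomial(1, 0.5, n)                    # randomised treatment
x0 = rng.normal(size=n)                          # baseline covariate
y = 1.0 * arm + 0.8 * x0 + rng.normal(size=n)    # outcome under "no ICE"
ice = rng.random(n) < 0.2                        # intercurrent event indicator
y_obs = np.where(ice, np.nan, y)                 # outcome unobserved after ICE

obs = ~np.isnan(y_obs)
X = sm.add_constant(np.column_stack([arm, x0]))
fit = sm.OLS(y_obs[obs], X[obs]).fit()           # imputation model (MAR)
y_imp = np.where(obs, y_obs, fit.predict(X))     # conditional mean imputation

print(y_imp[arm == 1].mean() - y_imp[arm == 0].mean())  # ~1.0, the "no ICE" effect
```
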

6.
Pharm Stat ; 21(6): 1246-1257, 2022 11.
Article in English | MEDLINE | ID: mdl-35587109

ABSTRACT

Clinical trials with longitudinal outcomes typically include missing data due to missed assessments or structural missingness of outcomes after intercurrent events handled with a hypothetical strategy. Approaches based on Bayesian random multiple imputation and Rubin's rules for pooling results across multiple imputed data sets are increasingly used in order to align the analysis of these trials with the targeted estimand. We propose and justify deterministic conditional mean imputation combined with the jackknife for inference as an alternative approach. The method is applicable to imputations under a missing-at-random assumption as well as for reference-based imputation approaches. In an application and a simulation study, we demonstrate that it provides treatment effect estimates consistent with those from the Bayesian approach, together with reliable frequentist inference, accurate standard error estimation, and type I error control. A further advantage of the method is that it does not rely on random sampling and is therefore replicable and unaffected by Monte Carlo error.


Subject(s)
Research Design , Humans , Data Interpretation, Statistical , Bayes Theorem , Computer Simulation , Monte Carlo Method
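
A minimal sketch of the proposed approach, assuming a single timepoint and missingness at random: impute missing outcomes by their conditional means, then obtain a standard error with the leave-one-out jackknife. Data and names are hypothetical; the paper covers longitudinal outcomes and reference-based imputation as well.

```python
# Deterministic conditional mean imputation with jackknife inference.
import numpy as np
import statsmodels.api as sm

def cmi_effect(arm, x0, y_obs):
    """Impute missing outcomes by conditional means; return arm difference."""
    obs = ~np.isnan(y_obs)
    X = sm.add_constant(np.column_stack([arm, x0]))
    fit = sm.OLS(y_obs[obs], X[obs]).fit()
    y_imp = np.where(obs, y_obs, fit.predict(X))
    return y_imp[arm == 1].mean() - y_imp[arm == 0].mean()

rng = np.random.default_rng(2)
n = 400
arm = rng.binomial(1, 0.5, n)
x0 = rng.normal(size=n)
y = 0.5 * arm + x0 + rng.normal(size=n)
y_obs = np.where(rng.random(n) < 0.25, np.nan, y)

theta = cmi_effect(arm, x0, y_obs)

# Leave-one-out jackknife: no random sampling, hence no Monte Carlo error.
loo = np.array([cmi_effect(np.delete(arm, i), np.delete(x0, i), np.delete(y_obs, i))
                for i in range(n)])
se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
print(theta, se)
```
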
7.
Clin Epidemiol ; 13: 935-940, 2021.
Article in English | MEDLINE | ID: mdl-34703318

ABSTRACT

Testing for SARS-CoV-2 internationally has focused on COVID-19 diagnosis among symptomatic individuals using reverse transcriptase polymerase chain reaction (PCR) tests. Recently, however, SARS-CoV-2 antigen rapid lateral flow tests (LFT) have been rolled out in several countries for testing asymptomatic individuals in public health programmes. Validation studies for LFT have been largely cross-sectional, reporting sensitivity, specificity and predictive values of LFT relative to PCR. However, because PCR detects genetic material left behind for a long period when the individual is no longer infectious, these statistics can under-represent the sensitivity of LFT for detecting infectious individuals, especially when sampling asymptomatic populations. LFTs (intended to detect individuals shedding SARS-CoV-2 antigens) validated against PCR (intended to diagnose infection) are not reported against a gold standard of equivalent measurements. Instead, these validation studies have reported relative performance statistics that need recalibrating to the purpose for which LFT is being used. We present an approach to this recalibration. We derive a formula for recalibrating relative performance statistics from LFT vs PCR validation studies to give the likely absolute sensitivity of LFT for detecting individuals who are shedding SARS-CoV-2 antigens. We contrast widely reported apparent sensitivities of LFT with recalibrated absolute sensitivity for detecting individuals shedding SARS-CoV-2 antigens. After accounting for within-individual viral kinetics and epidemic dynamics within asymptomatic populations, we show that a highly performant test for SARS-CoV-2 antigen should show LFT-to-PCR relative sensitivity of less than 50% in conventional validation studies, which after recalibration would be an absolute sensitivity of more than 80%. Further studies are needed to ascertain the absolute sensitivity of LFT as a test of infectiousness in COVID-19 responses. These studies should include longitudinal series of LFT and PCR, ideally in cohorts sampled from both contacts of cases and the general population.
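
The core of the recalibration idea can be illustrated with a deliberately simplified relationship, which is an assumption for illustration and not the paper's full formula (that formula additionally accounts for viral kinetics and epidemic dynamics): if only a fraction f of PCR positives are still shedding antigen, an LFT with absolute sensitivity s for shedders shows an apparent LFT-vs-PCR sensitivity of roughly s × f.

```python
# Simplified recalibration sketch: apparent (LFT vs PCR) sensitivity ≈ s * f,
# where s is the absolute sensitivity for antigen shedders and f is the
# fraction of PCR positives still shedding. Invert to recover s.
# Illustrative assumption only; the paper's derived formula is more detailed.

def absolute_sensitivity(relative_sensitivity: float,
                         frac_pcr_pos_shedding: float) -> float:
    """Recover sensitivity for shedders from the apparent LFT-vs-PCR value."""
    return relative_sensitivity / frac_pcr_pos_shedding

# An apparent sensitivity of 45% vs PCR, where only half of PCR positives
# are still shedding, implies ~90% sensitivity for detecting shedders.
print(absolute_sensitivity(0.45, 0.5))  # 0.9
```
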

8.
PLoS One ; 15(12): e0242908, 2020.
Article in English | MEDLINE | ID: mdl-33320865

ABSTRACT

PURPOSE: Volume indices and left ventricular ejection fraction (LVEF) are routinely used to assess cardiac function. Ventricular strain values may provide additional diagnostic information, but their reproducibility is unclear. This study therefore compares the repeatability and reproducibility of volumes, volume fraction, and regional ventricular strains, derived from cardiovascular magnetic resonance (CMR) imaging, across three software packages and between readers. METHODS: Seven readers analysed 16 short-axis CMR stacks of a porcine heart. Endocardial contours were manually drawn using OsiriX and Simpleware ScanIP and repeated in both software packages. The images were also contoured automatically in Circle CVI42. Endocardial global, apical, mid-ventricular, and basal circumferential strains, as well as end-diastolic and end-systolic volume and LVEF were compared. RESULTS: Bland-Altman analysis found systematic biases in contour length between software packages. Compared to OsiriX, contour lengths were shorter in both ScanIP (-1.9 cm) and CVI42 (-0.6 cm), causing statistically significant differences in end-diastolic and end-systolic volumes, and apical circumferential strain (all p<0.006). No differences were found for mid-ventricular, basal or global strains, or left ventricular ejection fraction (all p>0.007). All CVI42 results lay within the ranges of the OsiriX results. Intra-software differences were found to be lower than inter-software differences. CONCLUSION: OsiriX and CVI42 gave consistent results for all strain and volume metrics, with no statistical differences found between OsiriX and ScanIP for mid-ventricular, global or basal strains, or left ventricular ejection fraction. However, volumes were influenced by the choice of contouring software, suggesting care should be taken when comparing volumes across different software packages.


Subject(s)
Heart Ventricles/anatomy & histology , Heart Ventricles/diagnostic imaging , Magnetic Resonance Imaging , Stress, Mechanical , Animals , Diastole , Image Processing, Computer-Assisted , Organ Size , Swine , Systole
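
A minimal Bland-Altman sketch of the kind of inter-software comparison described above, on hypothetical paired contour-length measurements (the bias and spread below are invented for illustration):

```python
# Bland-Altman comparison of paired measurements from two software tools.
# Hypothetical contour lengths (cm); bias and noise are invented.
import numpy as np

rng = np.random.default_rng(7)
osirix = rng.normal(30.0, 3.0, 16)                 # reference tool
scanip = osirix - 1.9 + rng.normal(0.0, 0.5, 16)   # systematically shorter

diff = scanip - osirix
bias = diff.mean()                                 # systematic bias
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)         # 95% limits of agreement
print(bias, loa)
```
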
9.
Stat Methods Med Res ; 29(12): 3533-3546, 2020 12.
Article in English | MEDLINE | ID: mdl-32605503

ABSTRACT

Multiple imputation has become one of the most popular approaches for handling missing data in statistical analyses. Part of this success is due to Rubin's simple combination rules. These give frequentist valid inferences when the imputation and analysis procedures are so-called congenial and the embedding model is correctly specified, but otherwise may not. Roughly speaking, congeniality corresponds to whether the imputation and analysis models make different assumptions about the data. In practice, imputation models and analysis procedures are often not congenial, such that tests may not have the correct size, and confidence interval coverage deviates from the advertised level. We examine a number of recent proposals which combine bootstrapping with multiple imputation and determine which are valid under uncongeniality and model misspecification. Imputation followed by bootstrapping generally does not result in valid variance estimates under uncongeniality or misspecification, whereas certain bootstrap followed by imputation methods do. We recommend a particular computationally efficient variant of bootstrapping followed by imputation.


Subject(s)
Models, Statistical , Data Interpretation, Statistical
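
A sketch of the "bootstrap followed by imputation" idea on hypothetical data: resample the incomplete data first, impute within each bootstrap sample, and take the spread of the estimates across replicates as the standard error. A single stochastic imputation per resample is shown; this illustrates the general scheme, not the specific computationally efficient variant the paper recommends.

```python
# Bootstrap-then-impute variance estimation for a regression slope with a
# covariate missing completely at random (for simplicity). Hypothetical data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, B = 300, 500
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)
x_obs = np.where(rng.random(n) < 0.3, np.nan, x)   # incomplete covariate

def impute_then_estimate(x_miss, y, rng):
    """Singly impute the covariate, then fit the analysis model y ~ x."""
    obs = ~np.isnan(x_miss)
    imp_fit = sm.OLS(x_miss[obs], sm.add_constant(y[obs])).fit()
    sigma = np.sqrt(imp_fit.scale)                 # residual SD of imputation model
    draws = imp_fit.predict(sm.add_constant(y)) + rng.normal(0, sigma, len(y))
    x_imp = np.where(obs, x_miss, draws)
    return sm.OLS(y, sm.add_constant(x_imp)).fit().params[1]

theta = impute_then_estimate(x_obs, y, rng)        # estimate from original data
boot = []
for _ in range(B):
    idx = rng.integers(0, n, n)                    # resample first...
    boot.append(impute_then_estimate(x_obs[idx], y[idx], rng))  # ...then impute

print(theta, np.std(boot, ddof=1))                 # point estimate, bootstrap SE
```
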
10.
Ther Innov Regul Sci ; 54(2): 324-341, 2020 03.
Article in English | MEDLINE | ID: mdl-32072573

ABSTRACT

The National Research Council (NRC) Expert Panel Report on Prevention and Treatment of Missing Data in Clinical Trials highlighted the need for clearly defining objectives and estimands. That report sparked considerable discussion and literature on estimands and how to choose them. Importantly, consideration moved beyond missing data to include all postrandomization events that have implications for estimating quantities of interest (intercurrent events, aka ICEs). The ICH E9(R1) draft addendum builds on that research to outline key principles in choosing estimands for clinical trials, primarily with focus on confirmatory trials. This paper provides additional insights, perspectives, details, and examples to help put ICH E9(R1) into practice. Specific areas of focus include how the perspectives of different stakeholders influence the choice of estimands; the role of randomization and the intention-to-treat principle; defining the causal effects of a clearly defined treatment regimen, along with the implications this has for trial design and the generalizability of conclusions; detailed discussion of strategies for handling ICEs along with their implications and assumptions; estimands for safety objectives, time-to-event endpoints, early-phase and one-arm trials, and quality of life endpoints; and realistic examples of the thought process involved in defining estimands in specific clinical contexts.


Subject(s)
Models, Statistical , Research Design , Data Interpretation, Statistical , Quality of Life
11.
Ther Innov Regul Sci ; 54(2): 370-384, 2020 03.
Article in English | MEDLINE | ID: mdl-32072586

ABSTRACT

This paper provides examples of defining estimands in real-world scenarios following ICH E9(R1) guidelines. Detailed discussions on choosing the estimands and estimators can be found in our companion papers. Three scenarios of increasing complexity are illustrated. The first example is a proof-of-concept trial in major depressive disorder where the estimand is chosen to support the sponsor decision on whether to continue development. The second and third examples are confirmatory trials in severe asthma and rheumatoid arthritis respectively. We discuss the intercurrent events expected during each trial and how they can be handled so as to be consistent with the study objectives. The estimands discussed in these examples are not the only acceptable choices for their respective scenarios. The intent is to illustrate the key concepts rather than focus on specific choices. Emphasis is placed on following a study development process where estimands link the study objectives with data collection and analysis in a coherent manner, thereby avoiding disconnect between objectives, estimands, and analyses.


Subject(s)
Asthma , Depressive Disorder, Major , Asthma/drug therapy , Data Interpretation, Statistical , Depressive Disorder, Major/drug therapy , Humans , Research Design
13.
Biometrics ; 76(3): 1036-1038, 2020 09.
Article in English | MEDLINE | ID: mdl-31823345

ABSTRACT

Randomized trials with continuous outcomes are often analyzed using analysis of covariance (ANCOVA), with adjustment for prognostic baseline covariates. The ANCOVA estimator of the treatment effect is consistent under arbitrary model misspecification. In an article recently published in the journal, Wang et al proved the model-based variance estimator for the treatment effect is also consistent under outcome model misspecification, assuming the probability of randomization to each treatment is 1/2. In this reader reaction, we derive explicit expressions which show that when randomization is unequal, the model-based variance estimator can be biased upwards or downwards. In contrast, robust sandwich variance estimators can provide asymptotically valid inferences under arbitrary misspecification, even when randomization probabilities are not equal.


Subject(s)
Analysis of Variance , Confidence Intervals , Random Allocation , Randomized Controlled Trials as Topic
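
The contrast can be sketched on simulated data: with unequal randomisation and a misspecified (linear) outcome model, the ANCOVA point estimate remains consistent but the model-based standard error can be biased, whereas a sandwich estimator remains asymptotically valid. The data-generating model below is hypothetical.

```python
# ANCOVA under 1:3 randomisation with a misspecified linear adjustment:
# compare the model-based and HC3 sandwich standard errors for treatment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
treat = rng.binomial(1, 0.25, n)                  # unequal randomisation
x = rng.normal(size=n)
y = treat + x + 0.7 * x**2 + rng.normal(size=n)   # truth is nonlinear in x

X = sm.add_constant(np.column_stack([treat, x]))  # ANCOVA working model
fit = sm.OLS(y, X).fit()
robust = fit.get_robustcov_results(cov_type="HC3")

print("model-based SE:", fit.bse[1])     # can be biased up or down here
print("sandwich SE:   ", robust.bse[1])  # asymptotically valid regardless
```
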
14.
Pharm Stat ; 17(5): 648-666, 2018 09.
Article in English | MEDLINE | ID: mdl-29998483

ABSTRACT

Analyses of randomised trials are often based on regression models which adjust for baseline covariates, in addition to randomised group. Based on such models, one can obtain estimates of the marginal mean outcome for the population under assignment to each treatment, by averaging the model-based predictions across the empirical distribution of the baseline covariates in the trial. We identify under what conditions such estimates are consistent, and in particular show that for canonical generalised linear models, the resulting estimates are always consistent. We show that a recently proposed variance estimator underestimates the variance of the estimator around the true marginal population mean when the baseline covariates are not fixed in repeated sampling and provide a simple adjustment to remedy this. We also describe an alternative semiparametric estimator, which is consistent even when the outcome regression model used is misspecified. The different estimators are compared through simulations and application to a recently conducted trial in asthma.


Subject(s)
Data Interpretation, Statistical , Models, Statistical , Randomized Controlled Trials as Topic/methods , Anti-Asthmatic Agents/administration & dosage , Asthma/drug therapy , Computer Simulation , Humans , Linear Models , Regression Analysis
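
A minimal sketch of the standardisation described above, using a canonical logistic model on hypothetical data: fit the covariate-adjusted model, predict every participant's outcome under each treatment assignment, and average the predictions over the empirical covariate distribution.

```python
# Standardisation (g-computation) for a marginal risk difference.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1500
treat = rng.binomial(1, 0.5, n)
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-0.5 + treat + 0.8 * x)))
y = rng.binomial(1, p)

X = np.column_stack([np.ones(n), treat, x])         # explicit intercept
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

X1 = np.column_stack([np.ones(n), np.ones(n), x])   # everyone set to treated
X0 = np.column_stack([np.ones(n), np.zeros(n), x])  # everyone set to control
print(fit.predict(X1).mean() - fit.predict(X0).mean())  # marginal risk difference
```
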
15.
J Alzheimers Dis ; 64(2): 631-642, 2018.
Article in English | MEDLINE | ID: mdl-29914016

ABSTRACT

Health-care professionals, patients, and families seek as much information as possible about prognosis for patients with Alzheimer's disease (AD); however, we do not yet have a robust understanding of how demographic factors predict prognosis. We evaluated associations between age at presentation, age of onset, and symptom length with cognitive decline as measured using the Mini-Mental State Examination (MMSE) and Clinical Dementia Rating sum-of-boxes (CDR-SOB) in a large dataset of AD patients. Age at presentation was associated with post-presentation decline in MMSE (p < 0.001), with younger patients showing faster decline. There was little evidence of an association with change in CDR-SOB. Symptom length, rather than age, was the strongest predictor of MMSE and CDR-SOB at presentation, with increasing symptom length associated with worse outcomes. The evidence that younger AD patients have a more aggressive disease course implies that early diagnosis is essential.


Subject(s)
Aging , Alzheimer Disease/physiopathology , Disease Progression , Age of Onset , Aged , Aged, 80 and over , Cognitive Dysfunction/etiology , Cross-Sectional Studies , Female , Humans , Longitudinal Studies , Male , Mental Status and Dementia Tests , Neuropsychological Tests , Risk Factors
16.
Biometrics ; 74(4): 1438-1449, 2018 12.
Article in English | MEDLINE | ID: mdl-29870056

ABSTRACT

The nested case-control and case-cohort designs are two main approaches for carrying out a substudy within a prospective cohort. This article adapts multiple imputation (MI) methods for handling missing covariates, originally developed for full-cohort studies, to nested case-control and case-cohort studies. We consider data missing by design and data missing by chance. MI analyses that make use of full-cohort data and MI analyses based on substudy data only are described, alongside an intermediate approach in which the imputation uses full-cohort data but the analysis uses only the substudy. We describe adaptations to two imputation methods: the approximate method (MI-approx) of White and Royston (2009) and the "substantive model compatible" (MI-SMC) method of Bartlett et al. (2015). We also apply the "MI matched set" approach of Seaman and Keogh (2015) to nested case-control studies, which does not require any full-cohort information. The methods are investigated using simulation studies and all perform well when their assumptions hold. Substantial gains in efficiency can be made by imputing data missing by design using the full-cohort approach or by imputing data missing by chance in analyses using the substudy only. The intermediate approach brings greater gains in efficiency relative to the substudy approach and is more robust to imputation model misspecification than the full-cohort approach. The methods are illustrated using the ARIC Study cohort. Supplementary Materials provide R and Stata code.


Subject(s)
Biometry/methods , Case-Control Studies , Cohort Studies , Computer Simulation/statistics & numerical data , Data Interpretation, Statistical , Humans
17.
Stat Methods Med Res ; 27(6): 1695-1708, 2018 06.
Article in English | MEDLINE | ID: mdl-27647812

ABSTRACT

Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.


Subject(s)
Bayes Theorem , Regression Analysis , Calibration , Data Interpretation, Statistical , Nutrition Surveys/statistics & numerical data
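
For orientation, here is a minimal sketch of regression calibration, the comparator method discussed above, using hypothetical replicate measurements: estimate the reliability of the error-prone measurement and replace it by its approximate conditional expectation before fitting the outcome model.

```python
# Regression calibration sketch with two error-prone replicates of a
# covariate. Hypothetical data; classical measurement error assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
x = rng.normal(0.0, 1.0, n)            # true covariate (unobserved)
w1 = x + rng.normal(0.0, 0.8, n)       # two error-prone replicates
w2 = x + rng.normal(0.0, 0.8, n)
y = 2.0 * x + rng.normal(size=n)

# Estimate the measurement error variance from the replicates, then the
# reliability (attenuation factor) of the replicate mean.
wbar = (w1 + w2) / 2
var_u = np.var(w1 - w2, ddof=1) / 2            # per-replicate error variance
var_wbar = np.var(wbar, ddof=1)
lam = (var_wbar - var_u / 2) / var_wbar        # reliability of mean of 2 replicates

# Calibrate: E[x | wbar] ≈ mean + lam * (wbar - mean); regress y on it.
x_cal = wbar.mean() + lam * (wbar - wbar.mean())
print(sm.OLS(y, sm.add_constant(x_cal)).fit().params[1])  # ≈ 2.0
```
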
18.
Stat Med ; 36(19): 3092-3109, 2017 Aug 30.
Article in English | MEDLINE | ID: mdl-28557022

ABSTRACT

Missing outcomes are a commonly occurring problem for cluster randomised trials, which can lead to biased and inefficient inference if ignored or handled inappropriately. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. In this study, we assessed the performance of unadjusted cluster-level analysis, baseline covariate-adjusted cluster-level analysis, random effects logistic regression and generalised estimating equations when binary outcomes are missing under a baseline covariate-dependent missingness mechanism. Missing outcomes were handled using complete records analysis and multilevel multiple imputation. We analytically show that cluster-level analyses for estimating risk ratio using complete records are valid if the true data generating model has log link and the intervention groups have the same missingness mechanism and the same covariate effect in the outcome model. We performed a simulation study considering four different scenarios, depending on whether the missingness mechanisms are the same or different between the intervention groups and whether there is an interaction between intervention group and baseline covariate in the outcome model. On the basis of the simulation study and analytical results, we give guidance on the conditions under which each approach is valid. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.


Subject(s)
Bias , Cluster Analysis , Logistic Models , Randomized Controlled Trials as Topic/methods , Biometry/methods , Computer Simulation , Epidemiologic Methods , Humans , Reproducibility of Results
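
An unadjusted cluster-level analysis of the kind assessed above can be sketched as follows on hypothetical data: summarise each cluster by its observed risk, then compare arms using the cluster summaries (here with a t test and a ratio of arm-level mean risks).

```python
# Unadjusted cluster-level analysis of a binary outcome in a cluster
# randomised trial. Hypothetical data: 20 clusters of 50 per arm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
k, m = 20, 50
arm = np.repeat([0, 1], k)
re = rng.normal(0.0, 0.3, 2 * k)                    # cluster random effects
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * arm + re)))  # cluster-level risk
events = rng.binomial(m, p)

risk = events / m                                   # one summary per cluster
rr = risk[arm == 1].mean() / risk[arm == 0].mean()  # risk ratio of arm means
t_stat, p_val = stats.ttest_ind(risk[arm == 1], risk[arm == 0])
print(rr, p_val)
```
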
19.
BMC Pediatr ; 17(1): 80, 2017 03 16.
Article in English | MEDLINE | ID: mdl-28302082

ABSTRACT

BACKGROUND: Early growth of HIV-exposed, uninfected (HEU) children is poorer than that of their HIV-unexposed, uninfected (HUU) counterparts but there is little longitudinal or longer term information about the growth effects of early HIV exposure. METHODS: We performed a longitudinal analysis to compare growth of HEU and HUU infants and children using data from two cohort studies in Lusaka, Zambia. Initially 207 HUU and 200 HEU infants from the Breastfeeding and Postpartum Health (BFPH) study and 580 HUU and 165 HEU from the Chilenje Infant Growth, Nutrition and Infection Study (CIGNIS) had anthropometric measurements taken during infancy and again when school-aged, at which time 66 BFPH children and 326 CIGNIS children were available. We analysed the data from the two cohorts separately using linear mixed models. Linear regression models were used as a secondary analysis at the later time points, adjusting for breastfeeding duration. We explored when the main group differences in growth emerged in order to estimate the largest 'effect periods'. RESULTS: After adjusting for socioeconomic status and maternal education, HEU children had lower weight-for-age, length-for-age and BMI-for-age Z-scores during early growth and these differences still existed when children were school-aged. Exposure group differences changed most between 1 and 6 weeks and between 18 months and ~7.5 years. CONCLUSIONS: HEU children have poorer early growth than HUU children which persists into later growth. Interventions to improve growth of HEU children need to target pregnant women and infants.


Subject(s)
Body Height , Body Weight , Child Development , HIV Infections , Pregnancy Complications, Infectious , Prenatal Exposure Delayed Effects/virology , Case-Control Studies , Child , Child, Preschool , Female , Humans , Infant , Infant, Newborn , Linear Models , Longitudinal Studies , Male , Pregnancy , Prenatal Exposure Delayed Effects/physiopathology , Zambia
20.
Hippocampus ; 27(3): 249-262, 2017 03.
Article in English | MEDLINE | ID: mdl-27933676

ABSTRACT

This study investigates relationships between white matter hyperintensity (WMH) volume, cerebrospinal fluid (CSF) Alzheimer's disease (AD) pathology markers, and brain and hippocampal volume loss. Subjects included 198 controls, 345 mild cognitive impairment (MCI), and 154 AD subjects with serial volumetric 1.5-T MRI. CSF Aβ42 and total tau were measured (n = 353). Brain and hippocampal loss were quantified from serial MRI using the boundary shift integral (BSI). Multiple linear regression models assessed the relationships between WMHs and hippocampal and brain atrophy rates. Models were refitted adjusting for (a) concurrent brain/hippocampal atrophy rates and (b) CSF Aβ42 and tau in subjects with CSF data. WMH burden was positively associated with hippocampal atrophy rate in controls (P = 0.002) and MCI subjects (P = 0.03), and with brain atrophy rate in controls (P = 0.03). The associations with hippocampal atrophy rate remained following adjustment for concurrent brain atrophy rate in controls and MCIs, and for CSF biomarkers in controls (P = 0.007). These novel results suggest that vascular damage alongside AD pathology is associated with disproportionately greater hippocampal atrophy in nondemented older adults. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc.


Subject(s)
Alzheimer Disease/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Hippocampus/diagnostic imaging , White Matter/diagnostic imaging , Aged , Aging/pathology , Alzheimer Disease/cerebrospinal fluid , Amyloid beta-Peptides/cerebrospinal fluid , Atrophy/diagnostic imaging , Biomarkers/cerebrospinal fluid , Cognitive Dysfunction/cerebrospinal fluid , Disease Progression , Female , Follow-Up Studies , Humans , Image Processing, Computer-Assisted , Linear Models , Longitudinal Studies , Magnetic Resonance Imaging , Male , Organ Size , Peptide Fragments/cerebrospinal fluid