Results 1 - 20 of 675
1.
J Bone Miner Res ; 2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39127916

ABSTRACT

There is a strong association between total hip bone mineral density (THBMD) changes after 24 months of treatment and reduced fracture risk. We examined whether changes in THBMD after 12- and 18 months of treatment are also associated with fracture risk reduction. We used individual patient data (n = 122 235 participants) from 22 randomised, placebo-controlled, double-blind trials of osteoporosis medications. We calculated the difference in mean percent change in THBMD (active-placebo) at 12, 18, and 24 months using data available for each trial. We determined the treatment-related fracture reductions for the entire follow-up period, using logistic regression for radiologic vertebral fractures and Cox regression for hip, non-vertebral, "all" (combination of non-vertebral, clinical vertebral, and radiologic vertebral) fractures, and all clinical fractures (combination of non-vertebral and clinical vertebral). We performed meta-regression to estimate the study-level association (r2 and 95% confidence interval) between treatment-related differences in THBMD changes for each BMD measurement interval and fracture risk reduction. The meta-regression revealed that for vertebral fractures, the r2 (95% confidence interval) was 0.59 (0.19, 0.75), 0.69 (0.32, 0.82), and 0.73 (0.33, 0.84) for 12, 18 and 24 months, respectively. Similar patterns were observed for hip: r2 = 0.27 (0.00, 0.54), 0.39 (0.02, 0.63), and 0.41 (0.02, 0.65); non-vertebral: r2 = 0.27 (0.01, 0.52), 0.49 (0.10, 0.69), and 0.53 (0.11, 0.72); all fractures: r2 = 0.44 (0.10, 0.64), 0.63 (0.24, 0.77), and 0.66 (0.25, 0.80); and all clinical fractures: r2 = 0.46 (0.11, 0.65), 0.64 (0.26, 0.78), and 0.71 (0.32, 0.83), for 12-, 18- and 24-month changes in THBMD, respectively. These findings demonstrate that treatment-related THBMD changes at 12, 18 and 24 months are associated with fracture risk reductions across trials. 
We conclude that BMD measurement intervals as short as 12 months could be used to assess fracture efficacy, but the association is stronger with longer BMD measurement intervals.


In this study, we looked at how changes in hip bone density over time relate to the risk of fractures in people taking osteoporosis medications. We analysed data from over 122 000 participants across 22 different clinical trials. We found that the increase in bone density measured after 12, 18, and 24 months of treatment was linked to the risk of fractures. Specifically, greater improvements in bone density were associated with fewer fractures in the spine, hips, and other bones. Using statistical methods, we calculated the strength of this association. We discovered that the later we measured bone mineral density in people taking the medication, the stronger the link between improved bone density and reduced fracture risk became. Our findings suggest that bone density measurements after 12 months of treatment could help predict how well a medication will prevent fractures. However, the best predictions came from bone density changes measured over longer periods.
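The strength-of-association figures above are r² values from a study-level meta-regression: each trial contributes one point (treatment-related BMD difference, treatment effect on fracture risk), and r² measures how much of the variation in fracture effects the BMD differences explain. A minimal unweighted sketch of that idea with hypothetical trial-level values (not the SABRE data; a real analysis would weight trials by the precision of their estimates):

```python
# Study-level meta-regression sketch: regress each trial's treatment effect
# (log odds ratio for fracture) on the treatment-related difference in mean
# percent THBMD change, then report r^2. Data below are hypothetical.
trials = [(1.0, -0.20), (2.5, -0.45), (3.2, -0.60), (4.1, -0.70), (5.0, -0.95)]

def meta_regression_r2(points):
    """Unweighted least-squares fit y = a + b*x; returns (slope, intercept, r2)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    b = sxy / sxx                      # slope: log OR per % BMD difference
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in points)
    ss_tot = sum((y - my) ** 2 for _, y in points)
    return b, a, 1.0 - ss_res / ss_tot

slope, intercept, r2 = meta_regression_r2(trials)
print(f"slope={slope:.3f} per % BMD, r^2={r2:.2f}")
```

A published analysis would typically use inverse-variance weights and report a confidence interval around r², as the abstract does.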

2.
Ann Intern Med ; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39186785

ABSTRACT

BACKGROUND: Pelvic floor yoga has been recommended as a complementary treatment strategy for urinary incontinence (UI) in women, but evidence of its efficacy is lacking. OBJECTIVE: To evaluate the effects of a therapeutic pelvic floor yoga program versus a nonspecific physical conditioning program on UI in women. DESIGN: Randomized trial. (ClinicalTrials.gov: NCT03672461). SETTING: Three study sites in California, United States. PARTICIPANTS: Ambulatory women aged 45 years or older reporting daily urgency-, stress-, or mixed-type UI. INTERVENTION: Twelve-week program of twice-weekly group instruction and once-weekly self-directed practice of pelvic floor-specific Hatha yoga techniques (pelvic yoga) versus equivalent-time instruction and practice of general skeletal muscle stretching and strengthening exercises (physical conditioning). MEASUREMENTS: Total and type-specific UI frequency assessed by 3-day voiding diaries. RESULTS: Among the 240 randomly assigned women (age range, 45 to 90 years), mean baseline UI frequency was 3.4 episodes per day (SD, 2.2), including 1.9 urgency-type episodes per day (SD, 1.9) and 1.4 stress-type episodes per day (SD, 1.7). Over a 12-week time period, total UI frequency (primary outcome) decreased by an average of 2.3 episodes per day with pelvic yoga and 1.9 episodes per day with physical conditioning (between-group difference of -0.3 episodes per day [95% CI, -0.7 to 0.0]). Urgency-type UI frequency decreased by 1.2 episodes per day in the pelvic yoga group and 1.0 episode per day in the physical conditioning group (between-group difference of -0.3 episodes per day [CI, -0.5 to 0.0]). Reductions in stress-type UI frequency did not differ between groups (-0.1 episodes per day [CI, -0.3 to 0.3]). LIMITATION: No comparison to no treatment or other clinical UI treatments; conversion to videoconference-based intervention instruction during the COVID-19 pandemic. 
CONCLUSION: A 12-week pelvic yoga program was not superior to a general muscle stretching and strengthening program in reducing clinically important UI in midlife and older women with daily UI. PRIMARY FUNDING SOURCE: National Institutes of Health.

3.
J Gen Intern Med ; 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39172193

ABSTRACT

BACKGROUND: Previous literature has explored the relationship between television viewing and cardiovascular disease (CVD) in adults; however, there remains a paucity of longitudinal data describing how young adult television viewing relates to premature CVD events. OBJECTIVE: To ascertain the relationship between level and annualized changes in television viewing from young adulthood to middle age and the incidence of premature CVD events before age 60. DESIGN: The Coronary Artery Risk Development in Young Adults (CARDIA) study, a prospective community-based cohort with over 30 years of follow-up (1985-present). PARTICIPANTS: Black and White men and women who were 18-30 years old at baseline (1985-1986). MAIN MEASURES: Independent variables were individualized television viewing trajectories developed using linear mixed models. Dependent variables were fatal and nonfatal coronary heart disease (CHD), heart failure, and stroke outcomes, analyzed separately and as a combined CVD event outcome. KEY RESULTS: Among 4318 included participants, every 1-h increase in daily hours of television viewing at age 23 was associated with higher odds of incident CHD (adjusted odds ratio [AOR] 1.26, 95% confidence interval [CI] 1.06-1.49) and incident CVD events (AOR 1.16, 95% CI 1.03-1.32). Each 1-h/year annualized increase in daily television viewing was associated with higher odds of CHD incidence (AOR 1.55, 95% CI 1.06-2.25), stroke incidence (AOR 1.58, 95% CI 1.02-2.46), and CVD incidence (AOR 1.32, 95% CI 1.03-1.69). Race and sex modified the association between television viewing level at age 23 and CHD, heart failure, and stroke, with White men most consistently having significant associations. CONCLUSIONS: In this prospective cohort study, greater television viewing in young adulthood and annual increases in television viewing across midlife were associated with incident premature CVD events, particularly CHD.
Young adulthood as well as behaviors across midlife may be important periods to promote healthy television viewing behavior patterns.
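The adjusted odds ratios above are exponentiated logistic regression coefficients, with the 95% CI obtained from the coefficient's standard error. As a sketch, the coefficient and standard error below are hypothetical values chosen to roughly reproduce the reported CHD estimate; they are not the actual CARDIA model outputs:

```python
# Converting a logistic regression coefficient (log odds) and its standard
# error into an adjusted odds ratio with a 95% confidence interval.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, lower, upper) for a logistic coefficient and its SE."""
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical coefficient for +1 h/day of television viewing at age 23
or_est, lo95, hi95 = odds_ratio_ci(beta=0.231, se=0.087)
print(f"AOR {or_est:.2f}, 95% CI {lo95:.2f}-{hi95:.2f}")  # AOR 1.26, 95% CI 1.06-1.49
```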

4.
Article in English | MEDLINE | ID: mdl-38994585

ABSTRACT

CONTEXT: Impaired bone microarchitecture, assessed by high-resolution peripheral quantitative computed tomography (HR-pQCT), may contribute to bone fragility in type 2 diabetes (T2DM), but data on men are lacking. OBJECTIVE: To investigate the association between T2DM and HR-pQCT parameters in older men. METHODS: HR-pQCT scans were acquired on 1794 participants in the Osteoporotic Fractures in Men (MrOS) study. T2DM was ascertained by self-report or medication use. Linear regression models, adjusted for age, race, BMI, limb length, clinic site, and oral corticosteroid use, were used to compare HR-pQCT parameters by diabetes status. RESULTS: Among 1777 men, 290 had T2DM (mean age 84.4 years). Men with T2DM had smaller total cross-sectional area (Tt.Ar) at the distal tibia (p = 0.028) and diaphyseal tibia (p = 0.025), and smaller cortical area at the distal (p = 0.009) and diaphyseal tibia (p = 0.023). Trabecular indices and cortical porosity were similar between T2DM and non-T2DM. Among men with T2DM, in a model including HbA1c, diabetes duration, and insulin use, diabetes duration ≥10 years, compared with <10 years, was significantly associated with higher cortical porosity but with higher trabecular thickness at the distal radius. Insulin use was significantly associated with lower cortical area and thickness at the distal radius and diaphyseal tibia and lower failure load at all three scan sites. Lower cortical area, cortical thickness, total BMD, cortical BMD, and failure load of the distal sites were associated with increased risk of incident non-vertebral fracture in T2DM. CONCLUSIONS: Older men with T2DM have smaller bone size compared to non-T2DM, which may contribute to diabetic skeletal fragility. Longer diabetes duration was associated with higher cortical porosity, and insulin use with cortical bone deficits and lower failure load.

5.
Neurology ; 103(1): e209510, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38865677

ABSTRACT

BACKGROUND AND OBJECTIVES: The nature of associations between depressive symptoms and cognition early in the life course remains unclear, and racial differences in these associations are not well characterized. The aim of this study was to examine the relationship between trajectories of depressive symptoms over 20 years, beginning in young adulthood, and cognitive function in middle age among Black and White adults. METHODS: We used prospective data from participants of the Coronary Artery Risk Development in Young Adults Study. Depressive symptoms were measured at 5 study visits between 1990 and 2010 using the Center for Epidemiologic Studies Depression scale. We used latent class group-based modeling to identify 4 trajectories: "persistently low," "persistently medium," "medium decreasing," and "high increasing" depressive symptoms. In 2015, cognitive function was measured using the Digit Symbol Substitution Test (DSST), Stroop test (reverse coded), and Rey Auditory-Verbal Learning Test (RAVLT). We excluded participants who missed the cognitive battery or had no depressive symptom measurements, resulting in a total of 3,117 participants. All cognitive tests were standardized, and linear regression was used to relate depressive trajectories with 2015 cognitive functions. RESULTS: The mean [SD] baseline age was 30.1 [3.6] years, and 57% were female. The associations between depressive symptoms and cognition significantly differed by race (p < 0.05).
Among Black individuals, compared with having "persistently low," having "medium decreasing," "persistently medium," or "high increasing" depressive symptoms was associated with worse verbal memory, processing speed, and executive function scores (RAVLT persistently medium vs low: β = -0.30, 95% CI -0.48 to -0.12; and high increasing vs low: β = -0.49, 95% CI -0.70 to -0.27; DSST persistently medium vs low: β = -0.28, 95% CI -0.47 to -0.09; and high increasing vs low: β = -0.64, 95% CI -0.87 to -0.42; Stroop persistently medium vs low: β = -0.46, 95% CI -0.70 to -0.23; and high increasing vs low: β = -0.76, 95% CI -1.04 to -0.47). Associations were slightly weaker among White individuals, but we still found that having "high increasing" depressive symptoms was associated with worse verbal memory and processing speed scores (high increasing vs low: β = -0.38, 95% CI -0.61 to -0.15; and β = -0.40, 95% CI -0.63 to -0.18, respectively). DISCUSSION: Prolonged exposure to elevated depressive symptoms beginning in young adulthood may result in worse cognitive function over midlife. This association was stronger among Black adults.
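The β values above are in standard-deviation units because each cognitive test score was standardized before regression; a 1-unit difference therefore means one SD of test performance. A minimal sketch of that standardization, with hypothetical recall scores rather than actual RAVLT data:

```python
# Convert raw test scores to z-scores (mean 0, SD 1) so that regression
# coefficients are expressed in standard-deviation units.
def standardize(scores):
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    sd = var ** 0.5
    return [(s - mean) / sd for s in scores]

z = standardize([8, 10, 11, 12, 14])  # hypothetical word-recall counts
print([round(v, 2) for v in z])
```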


Subject(s)
Depression , Humans , Female , Male , Depression/epidemiology , Adult , White People , Middle Aged , Cognition/physiology , Neuropsychological Tests , Prospective Studies , Black or African American , Young Adult , Longitudinal Studies , Cognitive Dysfunction/etiology , Cognitive Dysfunction/epidemiology
6.
Schizophr Bull ; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842724

ABSTRACT

BACKGROUND AND HYPOTHESIS: In the United States, women with schizophrenia face challenges in receiving gynecologic care, but little is known about how cervical cancer screening rates vary across time or states in a publicly insured population. We hypothesized that women Medicaid beneficiaries with schizophrenia would be less likely to receive cervical cancer screening across the United States compared with a control population, and that women with schizophrenia and other markers of vulnerability would be least likely to receive screening. STUDY DESIGN: This retrospective cohort study used US Medicaid administrative data from across 44 states between 2002 and 2012 and examined differences in cervical cancer screening test rates among 283 950 female Medicaid beneficiaries with schizophrenia and a frequency-matched control group without serious mental illness, matched on age and race/ethnicity. Among women with schizophrenia, multivariable logistic regression estimated the odds of receiving cervical cancer screening using individual sociodemographics, comorbid conditions, and health care service utilization. STUDY RESULTS: Compared to the control group, women with schizophrenia were less likely to receive cervical cancer screening (OR = 0.76; 95% CI 0.75-0.77). Among women with schizophrenia, nonwhite populations, younger women, urban dwellers, those with substance use disorders, anxiety, and depression and those connected to primary care were more likely to complete screening. CONCLUSIONS: Cervical cancer screening rates among US women Medicaid beneficiaries with schizophrenia were suboptimal. To address cervical cancer care disparities for this population, interventions are needed to prioritize women with schizophrenia who are less engaged with the health care system or who reside in rural areas.

7.
J Bone Miner Res ; 39(7): 867-876, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-38691441

ABSTRACT

Some osteoporosis drug trials have suggested that treatment is more effective in those with low BMD measured by DXA. This study used data from a large set of randomized controlled trials (RCTs) to determine whether the anti-fracture efficacy of treatments differs according to baseline BMD. We used individual patient data from 25 RCTs (103 086 subjects) of osteoporosis medications collected as part of the FNIH-ASBMR SABRE project. Participants were stratified into FN BMD T-score subgroups (≤ -2.5, > -2.5). We used Cox proportional hazard regression to estimate treatment effect for clinical fracture outcomes and logistic regression for the radiographic vertebral fracture outcome. We also performed analyses based on BMD quintiles. Overall, 42% had a FN BMD T-score ≤ -2.5. Treatment with anti-osteoporosis drugs led to significant reductions in fractures in both T-score ≤ -2.5 and > -2.5 subgroups. Compared to those with FN BMD T-score > -2.5, the risk reduction for each fracture outcome was greater in those with T-score ≤ -2.5, but only the all-fracture outcome reached statistical significance (interaction P = .001). Results were similar when limited to bisphosphonate trials. In the quintile analysis, there was significant anti-fracture efficacy across all quintiles for vertebral fractures, with greater effects on fracture risk reduction for non-vertebral, all, and all clinical fractures in the lower BMD quintiles (all interaction P ≤ .03). In summary, anti-osteoporotic medications reduced the risk of fractures regardless of baseline BMD. Significant fracture risk reduction with treatment for 4 of the 5 fracture endpoints was seen in participants with T-scores above -2.5, though effects tended to be larger and more significant in those with baseline T-scores ≤ -2.5.


It is important to know whether our treatments for osteoporosis are effective at reducing the risk of fracture no matter what the BMD before starting treatment. This study used data from many clinical trials to determine whether the anti-fracture efficacy of treatments differs according to baseline BMD. We found that anti-osteoporotic medications reduced the risk of fractures regardless of baseline BMD, though effects tended to be larger and more significant in those with lower BMD scores.
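The subgroups in this study are defined by femoral neck T-scores: the number of young-adult standard deviations a measured BMD sits below the young-adult reference mean, with ≤ -2.5 defining osteoporosis. A sketch of that calculation; the reference mean and SD below are illustrative assumptions, not necessarily the reference values used in these trials:

```python
# T-score = (measured BMD - young-adult mean BMD) / young-adult SD.
# Reference values here are illustrative (roughly plausible for female
# femoral neck BMD in g/cm^2), not the study's actual reference data.
def t_score(bmd, young_adult_mean=0.858, young_adult_sd=0.120):
    return (bmd - young_adult_mean) / young_adult_sd

print(round(t_score(0.558), 2))  # with these references, 0.558 g/cm^2 -> T = -2.5
```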


Subject(s)
Bone Density , Humans , Bone Density/drug effects , Female , Male , Aged , Middle Aged , Risk Factors , Fractures, Bone/drug therapy , Bone Density Conservation Agents/therapeutic use , Bone Density Conservation Agents/pharmacology , Randomized Controlled Trials as Topic , Spinal Fractures/drug therapy , Spinal Fractures/diagnostic imaging , Osteoporosis/drug therapy
8.
Article in English | MEDLINE | ID: mdl-38661855

ABSTRACT

People with schizophrenia are at increased risk for contracting HIV and face higher mortality rates compared with the general population. Viral suppression is key to HIV care, yet little is known about this metric among people with HIV and schizophrenia. A chart review was conducted among people with HIV/AIDS and schizophrenia living in San Francisco who had received inpatient mental health services between 2010 and 2016. Demographic, laboratory, medication, encounter, and discharge data were collected and compared with all people living with HIV in San Francisco (PLWH-SF). Among 153 people living with HIV and comorbid schizophrenia, 77% were virally suppressed, compared to 67% for all PLWH-SF. Viral suppression among people with comorbid HIV and schizophrenia living in San Francisco thus appears higher than among PLWH-SF overall. Further research is needed to confirm the association and the mechanisms behind better treatment outcomes for people living with HIV and comorbid schizophrenia.


Subject(s)
HIV Infections , Schizophrenia , Humans , San Francisco/epidemiology , Schizophrenia/epidemiology , HIV Infections/epidemiology , HIV Infections/drug therapy , HIV Infections/complications , Male , Female , Retrospective Studies , Adult , Middle Aged , Inpatients/statistics & numerical data , Inpatients/psychology , Comorbidity , Viral Load
9.
Contraception ; 135: 110465, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38636583

ABSTRACT

OBJECTIVES: To explore the relevance of pregnancy intention as a screen for contraceptive needs among postpartum individuals. STUDY DESIGN: We surveyed 234 postpartum individuals to assess the alignment between pregnancy intentions in the next year and current desire to prevent pregnancy. RESULTS: Most individuals (87%) desired pregnancy prevention now, including 73% of individuals who desired or were ambivalent about pregnancy in the next year. CONCLUSION: A majority of individuals considering pregnancy in the next year desired pregnancy prevention now. Directly assessing current desire to prevent pregnancy may be more specific for contraceptive needs in postpartum individuals. IMPLICATIONS: Our ability to ensure that all individuals who want to prevent pregnancy have access to contraception depends on the use of effective screening questions. These findings prompt consideration of broader clinical implementation of screening for desire to prevent pregnancy in lieu of questions about pregnancy intention in the next year.


Subject(s)
Contraception , Intention , Postpartum Period , Humans , Female , Pregnancy , Adult , Young Adult , Contraception/methods , Surveys and Questionnaires , Contraception Behavior , Family Planning Services , Adolescent , Needs Assessment
10.
J Bone Miner Res ; 39(5): 544-550, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38501786

ABSTRACT

There is a common belief that antiosteoporosis medications are less effective in older adults. This study used data from randomized controlled trials (RCTs) to determine whether the anti-fracture efficacy of treatments and their effects on BMD differ in people ≥70 compared to those <70 yr. We used individual patient data from 23 RCTs of osteoporosis medications collected as part of the FNIH-ASBMR SABRE project. We assessed the following fractures: radiographic vertebral, non-vertebral, hip, all clinical, and all fractures. We used Cox proportional hazard regression to estimate treatment effect for clinical fracture outcomes, logistic regression for the radiographic vertebral fracture outcome, and linear regression to estimate treatment effect on 24-mo change in hip and spine BMD in each age subgroup. The analysis included 123 164 participants (99% female), 43% of whom were ≥70 yr. Treatment with anti-osteoporosis drugs significantly and similarly reduced fractures in both subgroups (eg, odds ratio [OR] = 0.47 and 0.51 for vertebral fractures in those below and above 70 yr, interaction P = .19; hazard ratio [HR] for all fractures: 0.72 vs 0.70, interaction P = .20). Results were similar when limited to bisphosphonate trials, with the exception of hip fracture risk reduction, which was somewhat greater in those <70 (HR = 0.44) vs ≥70 (HR = 0.79) yr (interaction P = .02). Allocation to anti-osteoporotic drugs resulted in significantly greater increases in hip and spine BMD at 24 mo in those ≥70 compared to those <70 yr. In summary, anti-osteoporotic medications similarly reduced the risk of fractures regardless of age, and the few small differences in fracture risk reduction by age were of uncertain clinical significance.


Medications used for osteoporosis are often believed to be less effective in older adults. This study used data from clinical trials to determine whether these medications work equally well in reducing the risk of fractures in people ≥70 compared to those <70 yr. The analysis included 123 164 participants with data from 23 trials. Treatment with anti-osteoporosis drugs significantly reduced fractures in both groups in a similar way. BMD increased more in the older group.


Subject(s)
Bone Density , Humans , Female , Aged , Male , Bone Density/drug effects , Middle Aged , Randomized Controlled Trials as Topic , Age Factors , Fractures, Bone/drug therapy , Treatment Outcome , Osteoporosis/drug therapy , Aged, 80 and over , Bone Density Conservation Agents/therapeutic use , Bone Density Conservation Agents/pharmacology
11.
Sci Rep ; 14(1): 3304, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38332308

ABSTRACT

Previous studies relying on alcohol sales, alcohol-related injuries, and surveys have suggested that alcohol consumption increased during the COVID-19 pandemic. We sought to leverage over 1 million Breath Alcohol Concentration (BrAC) measurements from Bluetooth-enabled breathalyzers to conduct an objective and longitudinal assessment of alcohol use during the pandemic. Serial BrAC measurements revealed a decrease in drinking between January 1, 2020 and March 30, 2020, an increase between March 30, 2020 and May 25, 2020, a statistically insignificant decrease between May 25, 2020 and January 1, 2021, and an increase again between January 1, 2021 and June 4, 2021. No statistically significant relationships between shelter-in-place orders and alcohol consumption were detected. These findings demonstrate the complex relationship between the pandemic and alcohol consumption, and may inform assessments of the long-term health effects of the drinking patterns observed.


Subject(s)
COVID-19 , Pandemics , Humans , Alcohol Drinking/epidemiology , Longitudinal Studies , COVID-19/epidemiology , Cohort Studies
12.
Article in English | MEDLINE | ID: mdl-38198798

ABSTRACT

CONTEXT: Prolonged bisphosphonate (BP) treatment for osteoporosis prevents hip and other fractures but causes atypical femoral fractures (AFF). OBJECTIVE: To establish the relationship between patterns of BP use and the risk of AFF and hip fractures. Other potential risk factors for AFF were also examined. DESIGN: Population-based case-cohort study. SETTING: The Danish National Healthcare system maintains longitudinal records of medication use, healthcare utilization, and x-ray images. PARTICIPANTS: Among all 1.9 million Danish adults ≥50, those with subtrochanteric or femoral shaft fractures between 2010 and 2015 (n = 4,973) were identified and compared to a random sample (n = 37,021). PREDICTORS: Bisphosphonate use was collected from 1995 to 2015. MAIN OUTCOME MEASURES: Fracture radiographs (n = 4,769) were reviewed by blinded study radiologists to identify AFFs (n = 181) using established criteria. Traditional hip fractures in the random sample (n = 691) were identified by ICD-10. RESULTS: Compared to <1 year of BP use, 5-7 years of use was associated with a 7-fold increase in AFF [adjusted HR = 7.29 (CI: 3.07, 17.30)]; the risk of AFF fell quickly after discontinuation. The 5-year number-needed-to-harm for one AFF was 1,424, while the 5-year number-needed-to-treat to prevent one hip fracture was 56. Glucocorticoid and proton pump inhibitor use were independently associated with increased AFF risk. Thirty-one percent of those with AFF had no BP exposure. CONCLUSIONS: The risk of AFF increases with duration of BP use, but the beneficial effects of BP therapy in adults ≥50 dramatically exceed this increased risk. Nearly one-third of those with AFF have no BP exposure.
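The number-needed-to-treat and number-needed-to-harm figures above are reciprocals of absolute risk differences. A sketch of that arithmetic; the 5-year risks below are illustrative assumptions chosen to land near the reported values, not the Danish registry estimates themselves:

```python
# NNT (benefit) or NNH (harm) = 1 / |absolute risk difference|.
def number_needed(risk_untreated, risk_treated):
    return 1.0 / abs(risk_treated - risk_untreated)

# Hypothetical 5-year risks: hip fracture falls from ~5.4% to ~3.6% on BPs,
# while AFF risk rises by ~0.07 percentage points.
nnt_hip = number_needed(0.054, 0.036)  # ~56 treated to prevent one hip fracture
nnh_aff = number_needed(0.0, 0.0007)   # ~1429 treated per extra AFF (paper: 1,424)
print(round(nnt_hip), round(nnh_aff))
```

The large NNH relative to the NNT is the basis for the abstract's conclusion that the benefits of BP therapy dramatically exceed the AFF risk.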

13.
J Electrocardiol ; 83: 26-29, 2024.
Article in English | MEDLINE | ID: mdl-38295539

ABSTRACT

BACKGROUND: Alcohol consumption is associated with a higher risk of atrial fibrillation (AF), but the acute effects on cardiac electrophysiology in humans remain poorly understood. The HOw ALcohol InDuces Atrial TachYarrhythmias (HOLIDAY) Trial revealed that alcohol shortened pulmonary vein atrial effective refractory periods, but more global electrophysiologic changes gleaned from the surface ECG have not yet been reported. METHODS: This was a secondary analysis of the HOLIDAY Trial. During AF ablation procedures, 100 adults were randomized to intravenous alcohol titrated to 0.08% blood alcohol concentration versus a volume- and osmolarity-matched masked placebo. Intervals measured from 12-lead ECGs were compared between pre-infusion and infusion steady state (20 min). RESULTS: The average age was 60 years and 11% were female. No significant differences in P-wave duration or PR, QRS, or QT intervals were present between alcohol and placebo arms. However, infusion of alcohol was associated with a statistically significant relative shortening of the JT interval (r: -14.73, p = 0.048) after multivariable adjustment. CONCLUSION: Acute exposure to alcohol was associated with a relative reduction in the JT interval, reflecting shortening of ventricular repolarization. These acute changes may reflect a more global shortening of refractoriness, suggesting immediate proarrhythmic effects pertinent to the atria and ventricles.


Subject(s)
Atrial Fibrillation , Electrocardiography , Adult , Female , Humans , Male , Middle Aged , Blood Alcohol Content , Heart Atria , Randomized Controlled Trials as Topic
14.
AIDS ; 38(4): 465-475, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-37861689

ABSTRACT

OBJECTIVE: The aim of this study was to determine whether urine biomarkers of kidney health are associated with subclinical cardiovascular disease among men with and without HIV. DESIGN: A cross-sectional study within the Multicenter AIDS Cohort Study (MACS) among 504 men with and without HIV infection who underwent cardiac computed tomography scans and had urine biomarkers measured within the preceding 2 years. METHODS: Our primary predictors were four urine biomarkers: endothelial dysfunction (albuminuria), proximal tubule dysfunction (alpha-1-microglobulin [A1M]), proximal tubule injury (kidney injury molecule-1 [KIM-1]), and tubulointerstitial fibrosis (pro-collagen-III N-terminal peptide [PIIINP]). These were evaluated for association with coronary artery calcium (CAC) prevalence, CAC extent, total plaque score, and total segment stenosis using multivariable regression. RESULTS: Of the 504 participants, 384 were men with HIV (MWH) and 120 were men without HIV. In models adjusted for sociodemographic factors, cardiovascular disease risk factors, eGFR, and HIV-related factors, each two-fold higher concentration of albuminuria was associated with a greater extent of CAC (1.35-fold higher, 95% confidence interval 1.11-1.65) and segment stenosis (1.08-fold greater, 95% confidence interval 1.01-1.16). Associations were similar between MWH and men without HIV in stratified analyses. The third quartile of A1M showed an association with greater CAC extent, total plaque score, and total segment stenosis, compared with the lowest quartile. CONCLUSION: Worse endothelial and proximal tubule dysfunction, as reflected by higher urine albumin and A1M, were associated with greater CAC extent and coronary artery stenosis.


Subject(s)
Cardiovascular Diseases , Coronary Artery Disease , HIV Infections , Plaque, Atherosclerotic , Male , Humans , Female , HIV Infections/complications , HIV Infections/epidemiology , Coronary Artery Disease/epidemiology , Cardiovascular Diseases/complications , Cohort Studies , Albuminuria , Cross-Sectional Studies , Constriction, Pathologic/complications , Risk Factors , Kidney , Biomarkers
15.
JAMA Intern Med ; 184(1): 54-62, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38010725

ABSTRACT

Importance: Modifiable risk factors are hypothesized to account for 30% to 40% of dementia; yet, few trials have demonstrated that risk-reduction interventions, especially multidomain, are efficacious. Objective: To determine if a personalized, multidomain risk reduction intervention improves cognition and dementia risk profile among older adults. Design, Setting, and Participants: The Systematic Multi-Domain Alzheimer Risk Reduction Trial was a randomized clinical trial with a 2-year personalized, risk-reduction intervention. A total of 172 adults at elevated risk for dementia (age 70-89 years and with ≥2 of 8 targeted risk factors) were recruited from primary care clinics associated with Kaiser Permanente Washington. Data were collected from August 2018 to August 2022 and analyzed from October 2022 to September 2023. Intervention: Participants were randomly assigned to the intervention (personalized risk-reduction goals with health coaching and nurse visits) or to a health education control. Main Outcomes and Measures: The primary outcome was change in a composite modified Neuropsychological Test Battery; preplanned secondary outcomes were change in risk factors and quality of life (QOL). Outcomes were assessed at baseline and 6, 12, 18, and 24 months. Linear mixed models were used to compare, by intention to treat, average treatment effects (ATEs) from baseline over follow-up. The intervention and outcomes were initially in person but then, due to onset of the COVID-19 pandemic, were remote. Results: The 172 total participants had a mean (SD) age of 75.7 (4.8) years, and 108 (62.8%) were women. 
After 2 years, compared with the 90 participants in the control group, the 82 participants assigned to intervention demonstrated larger improvements in the composite cognitive score (ATE of SD, 0.14; 95% CI, 0.03-0.25; P = .02; a 74% improvement compared with the change in the control group), better composite risk factor score (ATE of SD, 0.11; 95% CI, 0.01-0.20; P = .03), and improved QOL (ATE, 0.81 points; 95% CI, -0.21 to 1.84; P = .12). There were no between-group differences in serious adverse events (24 in the intervention group and 23 in the control group; P = .59), but the intervention group had greater treatment-related adverse events such as musculoskeletal pain (14 in the intervention group vs 0 in the control group; P < .001). Conclusions and Relevance: In this randomized clinical trial, a 2-year, personalized, multidomain intervention led to modest improvements in cognition, dementia risk factors, and QOL. Modifiable risk-reduction strategies should be considered for older adults at risk for dementia. Trial Registration: ClinicalTrials.gov Identifier: NCT03683394.


Subject(s)
Dementia , Quality of Life , Humans , Female , Aged , Aged, 80 and over , Male , Pandemics , Cognition , Risk Reduction Behavior , Dementia/prevention & control , Dementia/epidemiology
16.
JACC Clin Electrophysiol ; 10(1): 56-64, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37921790

ABSTRACT

BACKGROUND: Chronic sleep disruption is associated with incident atrial fibrillation (AF), but it is unclear whether poor sleep quality acutely triggers AF. OBJECTIVES: The aim of this study was to characterize the relationship between a given night's sleep quality and the risk of a discrete AF episode. METHODS: Patients with symptomatic paroxysmal AF in the I-STOP-AFIB (Individualized Studies of Triggers of Paroxysmal Atrial Fibrillation) trial reported sleep quality on a daily basis. Participants were also queried daily regarding AF episodes and were provided smartphone-based mobile electrocardiograms (ECGs) (KardiaMobile, AliveCor). RESULTS: Using 15,755 days of data from 419 patients, worse sleep quality on any given night was associated with a 15% greater odds of a self-reported AF episode the next day (OR: 1.15; 95% CI: 1.10-1.20; P < 0.0001) after adjustment for the day of the week. No statistically significant associations between worsening sleep quality and mobile ECG-confirmed AF events were observed (OR: 1.04; 95% CI: 0.95-1.13; P = 0.43), although the substantially smaller number of mobile ECG-confirmed events may have limited statistical power. Poor sleep was also associated with longer self-reported AF episodes, with each progressive category of worsening sleep associated with 16 (95% CI: 12-21; P < 0.001) more minutes of AF the next day. CONCLUSIONS: Poor sleep was associated with an immediately heightened risk for self-reported AF episodes, and a dose-response relationship existed such that progressively worse sleep was associated with longer episodes of AF the next day. These data suggest that sleep quality may be a potentially modifiable trigger relevant to the near-term risk of a discrete AF episode.
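The OR of 1.15 above comes from a model adjusted for day of the week. The crude, unadjusted analogue can be sketched from a 2×2 table with a Woolf (log-scale) confidence interval; the counts below are invented for illustration, not study data:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.

    a: exposed with outcome, b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts: days after poor vs. good sleep, AF episode vs. none.
or_, lo, hi = odds_ratio(120, 880, 300, 2700)
```

The interval is built on the log scale because the log odds ratio is approximately normal; exponentiating the bounds returns to the ratio scale.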


Subject(s)
Atrial Fibrillation , Humans , Atrial Fibrillation/epidemiology , Sleep Quality , Electrocardiography
17.
Am J Prev Med ; 66(3): 427-434, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38085195

ABSTRACT

INTRODUCTION: Few studies have longitudinally examined TV viewing trajectories and cardiovascular disease risk factors. The objective of this study was to determine the association between level and annualized changes in young adult TV viewing and the incidence of cardiovascular disease risk factors from young adulthood to middle age. METHODS: In 2023, prospective community-based cohort data of 4,318 Coronary Artery Risk Development in Young Adults study participants (1990-1991 to 2015-2016) were analyzed. Individualized daily TV viewing trajectories for each participant were developed using linear mixed models. RESULTS: Every additional hour of TV viewing at age 23 years was associated with higher odds of incident hypertension (AOR=1.16; 95% CI=1.11, 1.22), diabetes (AOR=1.19; 95% CI=1.11, 1.28), high triglycerides (AOR=1.17; 95% CI=1.08, 1.26), dyslipidemia (AOR=1.10; 95% CI=1.03, 1.16), and obesity (AOR=1.12; 95% CI=1.06, 1.17). In addition, each hourly increase in daily TV viewing was associated with higher annual odds of incident hypertension (AOR=1.26; 95% CI=1.16, 1.37), low high-density lipoprotein cholesterol (AOR=1.15; 95% CI=1.03, 1.30), high triglycerides (AOR=1.32; 95% CI=1.15, 1.51), dyslipidemia (AOR=1.22; 95% CI=1.11, 1.34), and obesity (AOR=1.17; 95% CI=1.07, 1.27) over the follow-up period. CONCLUSIONS: In this prospective cohort study, higher TV viewing in young adulthood and annual increases in TV viewing were associated with incident hypertension, high triglycerides, and obesity. Young adulthood as well as behaviors across midlife may be important time periods to promote healthful TV viewing behavior patterns.
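Because logistic-regression effects for a continuous exposure add on the log-odds scale, a per-hour AOR compounds multiplicatively across hours. A minimal sketch, using the reported per-hour hypertension AOR of 1.16 purely for illustration:

```python
def compounded_odds_ratio(or_per_unit, units):
    # Log-odds effects of a continuous exposure add, so the OR for
    # `units` additional hours is the per-hour OR raised to that power.
    return or_per_unit ** units

# Per the abstract, each extra daily hour of TV at age 23 carried AOR 1.16
# for incident hypertension; three extra hours would then imply roughly:
three_hour_or = compounded_odds_ratio(1.16, 3)
print(round(three_hour_or, 2))  # 1.56
```

This compounding is why even modest per-hour estimates can translate into substantial differences between light and heavy viewers.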


Subject(s)
Cardiovascular Diseases , Dyslipidemias , Hypertension , Young Adult , Humans , Middle Aged , Adult , Prospective Studies , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/etiology , Obesity/epidemiology , Hypertension/epidemiology , Hypertension/etiology , Triglycerides , Television , Risk Factors
18.
J Acquir Immune Defic Syndr ; 95(4): 342-346, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38133589

ABSTRACT

BACKGROUND: People living with HIV have increased risk of cardiovascular disease, but few studies focus on women with HIV (WWH) and few account for the use of multiple substances. SETTING: We recruited WWH from San Francisco shelters, free meal programs, street encampments, and a safety net HIV clinic. METHODS: Between 2016 and 2019, participants completed 6 monthly interviews, specimen collection, and a transthoracic echocardiogram. We assessed associations between 3 echocardiographic indices of cardiac hypertrophy (concentric hypertrophy, concentric remodeling, and eccentric hypertrophy) and study factors, including cardiovascular risk factors, substance use, and HIV-specific factors (CD4+ count, viral load, HIV medication). RESULTS: Among 62 participants, the average age was 53 years and 70% were ethnic minority women. Just over 70% had elevated blood pressure. Toxicology-confirmed substance use included tobacco (63%), cannabis (52%), cocaine (51%), methamphetamine (29%), and alcohol (26%). Concentric hypertrophy was detected in 26% of participants. It was positively associated with cocaine use [adjusted relative risk (aRR) = 32.5, P < 0.01] and negatively associated with cannabis use (aRR = 0.07, P < 0.01). Concentric remodeling was detected in 40% of participants. It was positively associated with cocaine use (aRR = 11.2, P < 0.01) and negatively associated with cannabis use (aRR = 0.17, P = 0.02). Eccentric hypertrophy was not significantly associated with factors studied here. CONCLUSIONS: Routine evaluation of stimulant use as a contributing factor to cardiovascular risk may improve risk assessment in WWH. Whether cannabis use mitigates the impact of cocaine use on structural heart disease among WWH merits further investigation.
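The aRRs above come from adjusted models; the crude analogue is a relative risk from raw counts with a Katz log-scale interval. A sketch with invented counts (not study data):

```python
import math

def relative_risk(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Crude relative risk with a Katz (log-scale) 95% confidence interval."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se_log = math.sqrt(
        1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp
    )
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Invented counts: concentric hypertrophy among cocaine users vs. non-users.
rr, lo, hi = relative_risk(12, 32, 4, 30)
```

With a sample of 62, intervals like this are wide, which is consistent with the large point estimates reported above.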


Subject(s)
Cocaine-Related Disorders , Cocaine , HIV Infections , Heart Diseases , Substance-Related Disorders , Humans , Female , Middle Aged , Ethnicity , HIV Infections/complications , Minority Groups , Substance-Related Disorders/complications , Heart Diseases/epidemiology , Hypertrophy
19.
Heart Rhythm ; 21(4): 370-377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38142832

ABSTRACT

BACKGROUND: Cannabis use is increasing worldwide. While prior studies have reported an association between cannabis use and a higher risk of atrial fibrillation (AF), most were cross-sectional and generally relied on diagnostic coding to identify cannabis users, which may not be representative of the typical recreational cannabis user. OBJECTIVE: The purpose of this study was to examine the association between recreational cannabis use and lifetime AF risk. METHODS: We evaluated the AF risk of participants of the UK Biobank cohort who completed the cannabis use lifestyle questionnaire. Cannabis exposure was categorized as "Occasional Use" for less than 100 times used, "Frequent Use" for more than 100 times used, and "Never" users. AF events were identified using International Classification of Diseases codes. Cox models were used to estimate the hazard ratios (HRs) between cannabis use and incident AF and were subsequently adjusted for age, sex, race, alcohol, coffee, smoking, education, and baseline cardiovascular comorbidities. RESULTS: A total of 150,554 participants (mean age 63.4 ± 7.7 years; 86,487 (57.4%) female; and 33,442 (22.2%) using cannabis at least once) were followed for a mean period of 6.1 ± 0.6 years. After multivariable adjustment, there were no statistically significant differences in incident AF among occasional users (HR 0.98; 95% confidence interval 0.89-1.08) nor frequent users (HR 1.03; 95% confidence interval 0.81-1.32) as compared with never users. CONCLUSION: In a large prospective cohort study, there was no evidence that cannabis use was associated with a higher risk of incident AF. An evaluation of cannabis ingestion methods and quantification was not possible using the current data set.
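The HRs above come from adjusted Cox models; under an additional assumption of constant hazards, a crude person-time incidence rate ratio gives a rough unadjusted analogue. Event counts and person-years below are invented for illustration:

```python
import math

def rate_ratio(events_exp, py_exp, events_unexp, py_unexp, z=1.96):
    """Crude incidence rate ratio with a log-scale 95% confidence interval."""
    irr = (events_exp / py_exp) / (events_unexp / py_unexp)
    se_log = math.sqrt(1 / events_exp + 1 / events_unexp)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, lo, hi

# Invented numbers: AF events and person-years, ever vs. never cannabis users.
irr, lo, hi = rate_ratio(900, 200_000, 3_200, 715_000)
```

An interval straddling 1.0, as here, mirrors the null finding reported in the abstract.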


Subject(s)
Atrial Fibrillation , Cannabis , Humans , Female , Middle Aged , Aged , Male , Atrial Fibrillation/epidemiology , Atrial Fibrillation/etiology , Prospective Studies , Risk Factors , Incidence
20.
Article in English | MEDLINE | ID: mdl-37835100

ABSTRACT

Stimulant use among unstably housed individuals is associated with increased risks of psychiatric co-morbidity, violence, HIV transmission, and overdose. Due to a lack of highly effective treatments, evidence-based policies targeting the prevention of stimulant use disorder are of critical importance. However, little empirical evidence exists on risks associated with initiating or returning to stimulant use among at-risk populations. In a longitudinal cohort of unstably housed women in San Francisco (2016-2019), self-reported data on stimulant use, housing status, and mental health were collected monthly for up to 6 months, and factors associated with initiating stimulants after a period of non-use were identified through logistic regression. Among 245 participants, 42 (17.1%) started using cocaine and 46 (18.8%) started using methamphetamine. In analyses adjusting for demographics and socio-structural exposures over the preceding month, experiencing street homelessness was associated with initiating cocaine use (AOR: 2.10; 95% CI: 1.04, 4.25) and sheltered homelessness with initiating methamphetamine use (AOR: 2.57; 95% CI: 1.37, 4.79). Other factors, including race, income, unmet subsistence needs, mental health, and treatment adherence, did not reach levels of significance, suggesting the paramount importance of policies directed toward improving access to permanent supportive housing to prevent stimulant use among unstably housed women.
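Logistic-regression software reports coefficients on the log-odds scale; AORs like those above are their exponentials. A sketch of that conversion (the SE of 0.36 is an assumption chosen only to roughly reproduce the reported street-homelessness CI):

```python
import math

def aor_from_coef(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its SE to an
    adjusted odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta = ln(2.10) with an assumed SE of 0.36 roughly reproduces the
# reported street-homelessness AOR of 2.10 (95% CI ~1.04-4.25).
aor, lo, hi = aor_from_coef(math.log(2.10), 0.36)
```

Because the lower bound sits just above 1.0, the estimate is significant at the conventional 0.05 level, but only narrowly.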


Subject(s)
Cocaine , HIV Infections , Methamphetamine , Substance-Related Disorders , Humans , Female , HIV Infections/epidemiology , Housing Instability , Substance-Related Disorders/epidemiology , Housing