Results 1 - 20 of 24
1.
medRxiv ; 2023 Dec 09.
Article in English | MEDLINE | ID: mdl-38105937

ABSTRACT

Background: Although cannabis legalization is associated with increases in self-reported cannabis use, biological measures of cannabis use are needed to address potential bias introduced by improved self-reporting of cannabis use in states enacting medical cannabis laws (MCL) and recreational cannabis laws (RCL). Objective: To quantify the role of MCL and RCL enactment in cannabis-positive urine drug screen (UDS) prevalence among Veterans Health Administration (VHA) emergency department (ED) patients from 2008 to 2019. Design: Staggered-adoption difference-in-difference analyses were used to estimate the role of MCL and RCL in cannabis-positive UDS data, fitting adjusted linear binomial regression models to estimate the association between MCL and RCL enactment and the prevalence of cannabis-positive UDS. Participants: VHA-enrolled veterans aged 18-75 years with ≥1 ED visit in a given year from 2008 to 2019. Main Measures: Receipt of ≥1 cannabis-positive UDS during an ED visit was analyzed. Key Results: From 2008 to 2019, adjusted cannabis-positive UDS prevalence increased from 16.4% to 25.6% in states with no cannabis law, 16.6% to 27.6% in MCL-only enacting states, and 18.2% to 33.8% in RCL-enacting states. MCL-only and MCL/RCL enactment were associated with 0.8% (95% CI, 0.4-1.0) and 2.9% (95% CI, 2.5-3.3) absolute increases in cannabis-positive UDS, respectively. Significant effect sizes were found for MCL and RCL, such that 7.0% and 18.5% of the total increase in cannabis-positive UDS prevalence in MCL-only and RCL states, respectively, could be attributed to MCLs and RCLs. Conclusions: In this study of VHA ED patients, MCL and RCL enactment played a significant role in the overall increases in cannabis-positive UDS. This increase in a biological measure of cannabis use reduces concerns that previously documented increases in self-reported cannabis use from surveys are due solely to changes in patients' willingness to report use as it becomes more legal.
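As an illustration of the analytic approach this abstract describes (a staggered-adoption difference-in-difference design fitted as an adjusted linear binomial regression, so that coefficients are absolute changes in prevalence), a minimal Python sketch follows. All file and column names (e.g., cannabis_pos_uds, mcl_active) are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch, assuming a patient/state/year analytic file with hypothetical
# column names. An identity-link binomial GLM yields absolute (percentage-point)
# prevalence differences, mirroring the adjusted linear binomial model described.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ed_visits_analytic_file.csv")  # hypothetical input

model = smf.glm(
    "cannabis_pos_uds ~ mcl_active + rcl_active + C(state) + C(year)"
    " + C(age_group) + C(sex) + C(race_ethnicity)",
    data=df,
    # identity link keeps coefficients on the absolute-risk (prevalence) scale;
    # note that identity-link binomial models can fail to converge on some data
    family=sm.families.Binomial(link=sm.families.links.Identity()),
)
result = model.fit()
print(result.params[["mcl_active", "rcl_active"]])  # absolute prevalence changes
```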

2.
Am J Drug Alcohol Abuse ; 49(5): 618-629, 2023 09 03.
Article in English | MEDLINE | ID: mdl-37791817

ABSTRACT

Background: Opioid use disorder (OUD) continues to be a major public health problem in the US, and innovative medication strategies are needed. The extended-release injectable formulation of naltrexone (ER-NTX), an opioid receptor antagonist, is an effective treatment for OUD, but the need for an opioid-free period during the induction phase of treatment is a barrier to treatment success, particularly in the outpatient setting. Lofexidine, an alpha-2-adrenergic agonist, is an effective treatment for opioid withdrawal. Objectives: To evaluate the feasibility, safety, and tolerability of lofexidine for facilitating induction onto ER-NTX in the management of OUD. Methods: In an open-label, uncontrolled, 10-week outpatient clinical trial, 20 adults (four women) with OUD were treated with a fixed-flexible dosing strategy (maximum 0.54 mg 4×/daily) of lofexidine for up to 10 days to manage opioid withdrawal prior to receiving ER-NTX. The COVID-19 pandemic resulted in a modification of the study methods after enrollment of the first 10 participants, who attended all visits in person; the second group of 10 participants attended most induction-period visits remotely. Results: Overall, 10 of the 20 participants (50%) achieved the primary outcome by receiving the first ER-NTX injection. Rates of induction success did not differ by the presence of fentanyl or by remote visit attendance, although the small sample size provided limited statistical power. Six of the 20 participants (30%) initiated on lofexidine required dose adjustments. There were no study-related serious adverse events. Conclusions: This study provides preliminary evidence supporting the feasibility of inducting individuals with OUD onto ER-NTX using lofexidine.


Subject(s)
Opioid-Related Disorders , Substance Withdrawal Syndrome , Adult , Female , Humans , Naltrexone/therapeutic use , Analgesics, Opioid/therapeutic use , Pandemics , Opioid-Related Disorders/drug therapy , Narcotic Antagonists/therapeutic use , Substance Withdrawal Syndrome/drug therapy , Delayed-Action Preparations/therapeutic use
3.
Lancet Psychiatry ; 10(11): 877-886, 2023 11.
Article in English | MEDLINE | ID: mdl-37837985

ABSTRACT

BACKGROUND: Cannabis use disorder is associated with considerable comorbidity and impairment in functioning, and prevalence is increasing among adults with chronic pain. We aimed to assess the effect of introduction of medical cannabis laws (MCL) and recreational cannabis laws (RCL) on the increase in cannabis use disorder among patients in the US Veterans Health Administration (VHA). METHODS: Data from patients with one or more primary care, emergency, or mental health visit to the VHA in 2005-19 were analysed using 15 repeated cross-sectional VHA electronic health record datasets (ie, one dataset per year). Patients in hospice or palliative care were excluded. Patients were stratified as having chronic pain or not using an American Pain Society taxonomy of painful medical conditions. We used staggered-adoption difference-in-difference analyses to estimate the role of MCL and RCL enactment in the increases in prevalence of diagnosed cannabis use disorder and associations with presence of chronic pain, accounting for the year that state laws were enacted. We did this by fitting a linear binomial regression model stratified by pain, with time-varying cannabis law status, fixed effects for state, categorical year, time-varying state-level sociodemographic covariates, and patient covariates (age group [18-34 years, 35-64 years, and 65-75 years], sex, and race and ethnicity). FINDINGS: Between 2005 and 2019, 3 234 382-4 579 994 patients were included per year. Among patients without pain in 2005, 5·1% were female, mean age was 58·3 (SD 12·6) years, and 75·7%, 15·6%, and 3·6% were White, Black, and Hispanic or Latino, respectively. In 2019, 9·3% were female, mean age was 56·7 (SD 15·2) years, and 68·1%, 18·2%, and 6·5% were White, Black, and Hispanic or Latino, respectively. Among patients with pain in 2005, 7·1% were female, mean age was 57·2 (SD 11·4) years, and 74·0%, 17·8%, and 3·9% were White, Black, and Hispanic or Latino, respectively. In 2019, 12·4% were female, mean age was 57·2 (SD 13·8) years, and 65·3%, 21·9%, and 7·0% were White, Black, and Hispanic or Latino, respectively. Among patients with chronic pain, enacting MCL led to a 0·135% (95% CI 0·118-0·153) absolute increase in cannabis use disorder prevalence, with 8·4% of the total increase in MCL-enacting states attributable to MCL. Enacting RCL led to a 0·188% (0·160-0·217) absolute increase in cannabis use disorder prevalence, with 11·5% of the total increase in RCL-enacting states attributable to RCL. In patients without chronic pain, enacting MCL and RCL led to smaller absolute increases in cannabis use disorder prevalence (MCL: 0·037% [0·027-0·048], 5·7% attributable to MCL; RCL: 0·042% [0·023-0·060], 6·0% attributable to RCL). Overall, associations of MCL and RCL with cannabis use disorder were greater in patients with chronic pain than in patients without chronic pain. INTERPRETATION: Increasing cannabis use disorder prevalence among patients with chronic pain following state legalisation is a public health concern, especially among older age groups. Given cannabis commercialisation and widespread public beliefs about its efficacy, clinical monitoring of cannabis use and discussion of the risk of cannabis use disorder among patients with chronic pain is warranted. FUNDING: NIDA grant R01DA048860, New York State Psychiatric Institute, and the VA Centers of Excellence in Substance Addiction Treatment and Education.


Subject(s)
Cannabis , Chronic Pain , Marijuana Abuse , Medical Marijuana , Adult , Humans , Female , United States/epidemiology , Aged , Middle Aged , Adolescent , Young Adult , Male , Cross-Sectional Studies , Marijuana Abuse/epidemiology , Chronic Pain/epidemiology , Veterans Health , Medical Marijuana/therapeutic use
4.
PLOS Glob Public Health ; 3(6): e0001971, 2023.
Article in English | MEDLINE | ID: mdl-37315095

ABSTRACT

BACKGROUND AND OBJECTIVE: Estimating the contribution of risk factors to mortality due to COVID-19 is particularly important in settings with low vaccination coverage and limited public health and clinical resources. Very few studies of risk factors for COVID-19 mortality have used high-quality, individual-level data from low- and middle-income countries (LMICs). We examined the contribution of demographic, socioeconomic, and clinical risk factors to COVID-19 mortality in Bangladesh, a lower-middle-income country in South Asia. METHODS: We used data from 290,488 lab-confirmed COVID-19 patients who participated in a telehealth service in Bangladesh between May 2020 and June 2021, linked with COVID-19 death data from a national database, to study the risk factors associated with mortality. Multivariable logistic regression models were used to estimate the association between risk factors and mortality. We used classification and regression trees to identify the risk factors that are most important for clinical decision-making. FINDINGS: This study is one of the largest prospective cohort studies of COVID-19 mortality in an LMIC, covering 36% of all lab-confirmed COVID-19 cases in the country during the study period. We found that being male, being very young or elderly, having low socioeconomic status, having chronic kidney or liver disease, and being infected during the later pandemic period were significantly associated with a higher risk of mortality from COVID-19. Males had 1.15 times higher odds (95% Confidence Interval, CI: 1.09, 1.22) of death compared to females. Compared to the reference age group (20-24 year olds), the odds ratio of mortality increased monotonically with age, ranging from 1.35 (95% CI: 1.05, 1.73) for the 30-34 year age group to 21.6 (95% CI: 17.08, 27.38) for the 75-79 year age group. For children 0-4 years old, the odds of mortality were 3.93 (95% CI: 2.74, 5.64) times higher than for 20-24 year olds. Other significant predictors were severe symptoms of COVID-19 such as breathing difficulty, fever, and diarrhea. Patients who were assessed by a physician as having a severe episode of COVID-19 based on the telehealth interview had 12.43 (95% CI: 11.04, 13.99) times higher odds of mortality compared to those assessed as having a mild episode. The finding that the telehealth doctors' assessment of disease severity was highly predictive of subsequent COVID-19 mortality underscores the feasibility and value of the telehealth service. CONCLUSIONS: Our findings confirm the universality of certain COVID-19 risk factors, such as gender and age, while highlighting other risk factors that appear to be more (or less) relevant in the context of Bangladesh. These findings on the demographic, socioeconomic, and clinical risk factors for COVID-19 mortality can help guide public health and clinical decision-making. Harnessing the benefits of the telehealth system and optimizing care for those most at risk of mortality, particularly in the context of an LMIC, are the key takeaways from this study.
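The two analyses named in this abstract, multivariable logistic regression and classification and regression trees, can be sketched briefly. The snippet below is illustrative only; the dataset path, column names, and tree depth are assumptions, not the study's actual specification.

```python
# Hedged sketch: adjusted odds of mortality via logistic regression, plus a shallow
# CART tree for interpretable risk splits. All column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("telehealth_covid_cohort.csv")  # hypothetical input

logit = smf.logit(
    "died ~ C(sex) + C(age_group) + C(ses) + chronic_kidney + chronic_liver"
    " + breathing_difficulty + fever + diarrhea + C(pandemic_period)",
    data=df,
).fit()
print(np.exp(logit.params).round(2))  # odds ratios for each risk factor

features = ["age", "male", "low_ses", "chronic_kidney", "chronic_liver",
            "breathing_difficulty", "fever", "diarrhea", "severe_assessment"]
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced", random_state=0)
tree.fit(df[features], df["died"])
print(export_text(tree, feature_names=features))  # human-readable decision rules
```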

5.
Am J Clin Nutr ; 118(1): 273-282, 2023 07.
Article in English | MEDLINE | ID: mdl-37244291

ABSTRACT

BACKGROUND: Maintenance of cognitive abilities is of critical importance to older adults, yet few effective strategies to slow cognitive decline currently exist. Multivitamin supplementation is used to promote general health; it is unclear whether it favorably affects cognition in older age. OBJECTIVES: To examine the effect of daily multivitamin/multimineral supplementation on memory in older adults. METHODS: The COcoa Supplement and Multivitamin Outcomes Study Web (COSMOS-Web) ancillary study (NCT04582617) included 3562 older adults. Participants were randomly assigned to a daily multivitamin supplement (Centrum Silver) or placebo and evaluated annually with an Internet-based battery of neuropsychological tests for 3 y. The prespecified primary outcome measure was change in episodic memory, operationally defined as immediate recall performance on the ModRey test, after 1 y of intervention. Secondary outcome measures included changes in episodic memory over 3 y of follow-up and changes in performance on neuropsychological tasks of novel object recognition and executive function over 3 y. RESULTS: Compared with placebo, participants randomly assigned to multivitamin supplementation had significantly better ModRey immediate recall at 1 y, the primary endpoint (t(5889) = 2.25, P = 0.025), as well as across the 3 y of follow-up on average (t(5889) = 2.54, P = 0.011). Multivitamin supplementation had no significant effects on secondary outcomes. Based on cross-sectional analysis of the association between age and performance on the ModRey, we estimated that the effect of the multivitamin intervention improved memory performance above placebo by the equivalent of 3.1 y of age-related memory change. CONCLUSIONS: Daily multivitamin supplementation, compared with placebo, improves memory in older adults. Multivitamin supplementation holds promise as a safe and accessible approach to maintaining cognitive health in older age. This trial was registered at clinicaltrials.gov as NCT04582617.


Subject(s)
Dietary Supplements , Vitamins , Humans , Aged , Cross-Sectional Studies , Double-Blind Method , Vitamins/pharmacology , Vitamins/therapeutic use , Cognition
6.
Proc Natl Acad Sci U S A ; 120(23): e2216932120, 2023 06 06.
Article in English | MEDLINE | ID: mdl-37252983

ABSTRACT

Dietary flavanols are food constituents found in certain fruits and vegetables that have been linked to cognitive aging. Previous studies suggested that consumption of dietary flavanols might specifically be associated with the hippocampal-dependent memory component of cognitive aging and that memory benefits of a flavanol intervention might depend on habitual diet quality. Here, we tested these hypotheses in the context of a large-scale study of 3,562 older adults, who were randomly assigned to a 3-y intervention of cocoa extract (500 mg of cocoa flavanols per day) or a placebo [(COcoa Supplement and Multivitamin Outcomes Study) COSMOS-Web, NCT04582617]. Using the alternative Healthy Eating Index in all participants and a urine-based biomarker of flavanol intake in a subset of participants [n = 1,361], we show that habitual flavanol consumption and diet quality at baseline are positively and selectively correlated with hippocampal-dependent memory. While the prespecified primary end point testing for an intervention-related improvement in memory in all participants after 1 y was not statistically significant, the flavanol intervention restored memory among participants in lower tertiles of habitual diet quality or habitual flavanol consumption. Increases in the flavanol biomarker over the course of the trial were associated with improving memory. Collectively, our results allow dietary flavanols to be considered in the context of a depletion-repletion paradigm and suggest that low flavanol consumption can act as a driver of the hippocampal-dependent component of cognitive aging.


Subject(s)
Cacao , Diet , Humans , Aged , Dietary Supplements , Polyphenols , Biomarkers , Double-Blind Method
7.
Drug Alcohol Depend ; 247: 109865, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37094488

ABSTRACT

BACKGROUND: In 2021, while overdose (OD) deaths were at their highest in recorded history, it is estimated that >80% of ODs do not result in a fatality. While several case studies have indicated that opioid-related ODs can result in cognitive impairment, the possible association has not yet been systematically investigated. METHODS: 78 participants with a history of opioid use disorder (OUD) who reported experiencing an OD in the past year (n=35) or denied a lifetime history of OD (n=43) completed this study. Participants completed cognitive assessments including the Test of Premorbid Functioning (TOPF) and the NIH Toolbox Cognition Battery (NIHTB-CB). Comparisons were made between those who experienced an opioid-related OD in the past year and those who denied a lifetime OD history, while controlling for factors including age, premorbid functioning, and number of prior ODs. RESULTS: When comparing those who experienced an opioid-related OD within the past year to those without a history of OD, uncorrected standard scores were generally comparable; however, differences emerged in the multivariable model. Specifically, compared to those without a history of OD, those who experienced a past-year OD evidenced significantly lower total cognition composite scores (coef. = -7.112; P=0.004), lower crystallized cognition composite scores (coef. = -4.194; P=0.009), and lower fluid cognition composite scores (coef. = -7.879; P=0.031). CONCLUSIONS: Findings revealed that opioid-related ODs may be associated with, or contribute to, reduced cognition. The extent of the impairment appears contingent upon individuals' premorbid intellectual functioning and the cumulative number of past ODs. While statistically significant, the clinical significance may be limited given that performance differences (~4-8 points) were not particularly robust. More rigorous investigation is warranted, and future studies must also account for the many other variables possibly contributing to cognitive impairment.


Subject(s)
Cognitive Dysfunction , Drug Overdose , Opiate Overdose , Humans , Analgesics, Opioid/adverse effects , Pilot Projects , Opiate Overdose/drug therapy , Drug Overdose/drug therapy , Cognitive Dysfunction/chemically induced , Cognitive Dysfunction/diagnosis , Neuropsychological Tests
8.
JAMA Psychiatry ; 80(4): 380-388, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36857036

ABSTRACT

Importance: Cannabis use disorder (CUD) is increasing among US adults. Few national studies have addressed the role of medical cannabis laws (MCLs) and recreational cannabis laws (RCLs) in these increases, particularly in patient populations with high rates of CUD risk factors. Objective: To quantify the role of MCL and RCL enactment in the increases in diagnosed CUD prevalence among Veterans Health Administration (VHA) patients from 2005 to 2019. Design, Setting, and Participants: Staggered-adoption difference-in-difference analyses were used to estimate the role of MCL and RCL in the increases in prevalence of CUD diagnoses, fitting a linear binomial regression model with fixed effects for state, categorical year, time-varying cannabis law status, state-level sociodemographic covariates, and patient age group, sex, and race and ethnicity. Patients aged 18 to 75 years with 1 or more VHA primary care, emergency department, or mental health visit and no hospice/palliative care within a given calendar year were included. Time-varying yearly state control covariates were state/year rates from American Community Survey data: percentage male, Black, Hispanic, White, 18 years or older, unemployed, income below poverty threshold, and yearly median household income. Analysis took place between February and December 2022. Main Outcomes and Measures: As preplanned, International Classification of Diseases, Clinical Modification, ninth and tenth revisions, CUD diagnoses from electronic health records were analyzed. Results: The number of individuals analyzed ranged from 3 234 382 in 2005 to 4 579 994 in 2019. Patients were largely male (94.1% in 2005 and 89.0% in 2019) and White (75.0% in 2005 and 66.6% in 2019), with a mean (SD) age of 57.0 (14.4) years. From 2005 to 2019, adjusted CUD prevalence increased from 1.38% to 2.25% in states with no cannabis laws (no CLs), 1.38% to 2.54% in MCL-only enacting states, and 1.39% to 2.56% in RCL-enacting states. Difference-in-difference results indicated that MCL-only enactment was associated with a 0.05% (95% CI, 0.05-0.06) absolute increase in CUD prevalence, ie, that 4.7% of the total increase in CUD prevalence in MCL-only enacting states could be attributed to MCLs, while RCL enactment was associated with a 0.12% (95% CI, 0.10-0.13) absolute increase in CUD prevalence, ie, that 9.8% of the total increase in CUD prevalence in RCL-enacting states could be attributed to RCLs. The role of RCL in the increases in CUD prevalence was greatest in patients aged 65 to 75 years, with an absolute increase of 0.15% (95% CI, 0.13-0.17) in CUD prevalence associated with RCLs, ie, 18.6% of the total increase in CUD prevalence in that age group. Conclusions and Relevance: In this study of VHA patients, MCL and RCL enactment played a significant role in the overall increases in CUD prevalence, particularly in older patients. However, consistent with general population studies, effect sizes were relatively small, suggesting that, cumulatively, laws affected cannabis attitudes diffusely across the country or that other factors played a larger role in the overall increases in adult CUD. Results underscore the need to screen for cannabis use and CUD and to treat CUD when it is present.


Subject(s)
Cannabis , Hallucinogens , Marijuana Abuse , Medical Marijuana , Substance-Related Disorders , Adult , Humans , Male , United States , Aged , Marijuana Abuse/epidemiology , Veterans Health , Substance-Related Disorders/epidemiology , Medical Marijuana/therapeutic use , Hallucinogens/therapeutic use
9.
Neuropsychology ; 37(3): 284-300, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35786960

ABSTRACT

OBJECTIVE: Cross-national work on neurocognitive testing has been characterized by inconsistent findings, suggesting the need for improved harmonization. Here, we describe a prospective harmonization approach in an ongoing global collaborative study. METHOD: Visuospatial N-Back, Tower of London (ToL), Stop Signal task (SST), Risk Aversion (RA), and Intertemporal Choice (ITC) tasks were administered to 221 individuals from Brazil, India, the Netherlands, South Africa, and the USA. Prospective harmonization methods were employed to ensure procedural similarity of task implementation and processing of derived task measures across sites. Generalized linear models tested for between-site differences controlling for sex, age, education, and socioeconomic status (SES). Associations with these covariates were also examined and tested for differences by site with site-by-covariate interactions. RESULTS: The Netherlands site performed more accurately on N-Back and ToL than the other sites, except for the USA site on the N-Back. The Netherlands and the USA sites performed faster than the other three sites during the go events in the SST. Finally, the Netherlands site also exhibited a higher tolerance for delay discounting than other sites on the ITC, and the India site showed more risk aversion than other sites on the RA task. However, effect size differences across sites on the five tasks were generally small (i.e., partial eta-squared < 0.05) after dropping the Netherlands (on ToL, N-Back, ITC, and SST tasks) and India (on the RA task). Across tasks, regardless of site, the N-Back (sex, age, education, and SES), ToL (sex, age, and SES), SST (age), and ITC (SES) showed associations with covariates. CONCLUSIONS: Four out of the five sites showed only small between-site differences for each task. Nevertheless, despite our extensive prospective harmonization steps, task score performance deviated from the other sites in the Netherlands site (on four tasks) and the India site (on one task). Because the procedural methods were standardized across sites, and our analyses were adjusted for covariates, the differences found in cognitive performance may indicate selection sampling bias due to unmeasured confounders. Future studies should follow similar cross-site prospective harmonization procedures when assessing neurocognition and consider measuring other possible confounding variables for additional statistical control. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Social Class , Humans , Prospective Studies , Longitudinal Studies , Educational Status , Neuropsychological Tests
10.
Drug Alcohol Depend Rep ; 2: 100016, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36845891

ABSTRACT

Background: Studies of oxytocin (OT) treatment in drug addiction have suggested potential therapeutic benefits. There is a paucity of clinical trials of oxytocin in cocaine use disorders. Method: This was a 6-week randomized, double-blind, outpatient clinical trial investigating the effect of daily intranasal oxytocin (24 IU) on cocaine use by patients with cocaine use disorder. After a 7-day inpatient abstinence induction stage, patients were randomized to intranasal oxytocin or intranasal placebo. During the outpatient phase, patients were required to present to the research staff 3 times a week for witnessed randomized medication administration, to provide a urine sample for qualitative toxicology, and to complete mandatory assessments, including the Timeline Followback. For the interim days, patients were given an "at-home" bottle that was weighed at each clinic visit to monitor compliance. Results: Neither intranasal placebo (n = 11) nor intranasal oxytocin (n = 15) induced at least 3 weeks of continuous abstinence. However, from week 3, the odds of weekly abstinence increased from 4.61 (95% CI = 1.05, 20.3) to 15.0 (CI = 1.18, 190.2) by week 6 for the intranasal oxytocin group (t = 2.12, p = 0.037), though there was no significant group difference overall in the odds of abstinence over time (F1,69 = 1.73, p = 0.19). More patients on intranasal oxytocin dropped out (p = 0.0005). Conclusions: Intranasal oxytocin increased the odds of weekly abstinence in cocaine use disorder patients after 2 weeks compared to placebo, but was associated with a higher dropout rate. (ClinicalTrials.gov NCT02255357, registered 10/2014).
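One way to model the weekly abstinence odds reported here is a generalized estimating equations (GEE) logistic model with a treatment-by-week interaction; the sketch below is only an approximation of that idea, not the trial's actual analysis, and every variable name is hypothetical.

```python
# Illustrative GEE sketch (not the authors' exact model): repeated weekly abstinence
# outcomes per participant, with an arm-by-week interaction testing whether the odds
# of abstinence diverge between oxytocin and placebo over time. Names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_abstinence_long.csv")  # one row per participant-week

gee = smf.gee(
    "abstinent ~ week * C(arm)",              # arm: oxytocin vs placebo
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),  # within-participant correlation
)
print(gee.fit().summary())
```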

11.
Psychol Med ; 52(13): 2441-2449, 2022 10.
Article in English | MEDLINE | ID: mdl-33213541

ABSTRACT

BACKGROUND: Treatment for major depressive disorder (MDD) is imprecise and often involves trial and error to determine the most effective approach. To facilitate optimal treatment selection and inform timely adjustment, the current study investigated whether neurocognitive variables could predict antidepressant response in a treatment-specific manner. METHODS: In the two-stage Establishing Moderators and Biosignatures of Antidepressant Response for Clinical Care (EMBARC) trial, outpatients with non-psychotic recurrent MDD were first randomized to an 8-week course of sertraline (a selective serotonin reuptake inhibitor) or placebo. Behavioral measures of reward responsiveness, cognitive control, verbal fluency, psychomotor speed, and cognitive processing speed were collected at baseline and week 1. Treatment responders then continued on another 8-week course of the same medication, whereas non-responders to sertraline or placebo were crossed over under double-blinded conditions to bupropion (a noradrenaline/dopamine reuptake inhibitor) or sertraline, respectively. Hamilton Rating Scale for Depression scores were also assessed at baseline and weeks 8 and 16. RESULTS: Greater improvements in psychomotor and cognitive processing speeds within the first week, as well as better pretreatment performance in these domains, were specifically associated with a higher likelihood of response to placebo. Moreover, better reward responsiveness, poorer cognitive control, and greater verbal fluency were associated with a greater likelihood of response to bupropion in patients who previously failed to respond to sertraline. CONCLUSION: These exploratory results warrant further scrutiny, but demonstrate that quick and non-invasive behavioral tests may have substantial clinical value in predicting antidepressant treatment response.


Subject(s)
Depressive Disorder, Major , Sertraline , Humans , Sertraline/therapeutic use , Bupropion/therapeutic use , Depressive Disorder, Major/psychology , Treatment Outcome , Double-Blind Method , Antidepressive Agents/therapeutic use
12.
Biol Psychol ; 160: 108040, 2021 03.
Article in English | MEDLINE | ID: mdl-33556452

ABSTRACT

In a multigenerational study of families at risk for depression, individuals with a lifetime history of depression had: 1) abnormal perceptual asymmetry (PA; smaller left ear/right hemisphere [RH] advantage) in a dichotic emotion recognition task, and 2) reduced RH late positive potential (P3RH) during an emotional hemifield task. We used standardized difference scores for processing auditory (PA sad-neutral) and visual (P3RH negative-neutral) stimuli for 112 participants (52 men) in a logistic regression to predict history of depression, anxiety or comorbidity of both. Whereas comorbidity was separately predicted by reduced PA (OR = 0.527, p = .042) or P3RH (OR = 0.457, p = .013) alone, an interaction between PA and P3RH (OR = 2.499, p = .011) predicted depressive disorder. Follow-up analyses revealed increased probability of depression at low (lack of emotional differentiation) and high (heightened reactivity to negative stimuli) levels of both predictors. Findings suggest that reduced or heightened right-lateralized emotional responsivity to negative stimuli may be uniquely associated with depression.
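The interaction-based prediction described here (PA and P3RH difference scores jointly predicting depression) maps onto a standard logistic regression with a product term. The sketch below uses hypothetical variable names and is meant only to show the form of such a model, not the study's actual code.

```python
# Minimal sketch of a logistic regression with standardized PA and P3RH difference
# scores and their interaction predicting lifetime depression. Variable names are
# hypothetical placeholders for the measures described in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("family_risk_sample.csv")  # hypothetical input

# z-score the two electrophysiological difference scores before forming the product
for col in ["pa_sad_minus_neutral", "p3rh_neg_minus_neutral"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

fit = smf.logit(
    "lifetime_mdd ~ pa_sad_minus_neutral_z * p3rh_neg_minus_neutral_z",
    data=df,
).fit()
print(fit.summary())  # exp(coef) of the product term gives the interaction odds ratio
```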


Subject(s)
Depression , Functional Laterality , Anxiety/epidemiology , Brain , Comorbidity , Depression/epidemiology , Emotions , Humans , Male
13.
Neuroimage Clin ; 14: 692-707, 2017.
Article in English | MEDLINE | ID: mdl-28393011

ABSTRACT

Behavioral and electrophysiologic evidence suggests that major depression (MDD) involves right parietotemporal dysfunction, a region activated by arousing affective stimuli. Building on prior event-related potential (ERP) findings (Kayser et al. 2016 NeuroImage 142:337-350), this study examined whether these abnormalities also characterize individuals at clinical high risk for MDD. We systematically explored the impact of family risk status and personal history of depression and anxiety on three distinct stages of emotional processing comprising the late positive potential (LPP). ERPs (72 channels) were recorded from 74 high and 53 low risk individuals (age 13-59 years, 58 male) during a visual half-field paradigm using highly-controlled pictures of cosmetic surgery patients showing disordered (negative) or healed (neutral) facial areas before or after treatment. Reference-free current source density (CSD) transformations of ERP waveforms were quantified by temporal principal components analysis (tPCA). Component scores of prominent CSD-tPCA factors sensitive to emotional content were analyzed via permutation tests and repeated measures ANOVA for mixed factorial designs with unstructured covariance matrix, including gender, age and clinical covariates. Factor-based distributed inverse solutions provided descriptive estimates of emotional brain activations at group level corresponding to hierarchical activations along ventral visual processing stream. Risk status affected emotional responsivity (increased positivity to negative-than-neutral stimuli) overlapping early N2 sink (peak latency 212 ms), P3 source (385 ms), and a late centroparietal source (630 ms). High risk individuals had reduced right-greater-than-left emotional lateralization involving occipitotemporal cortex (N2 sink) and bilaterally reduced emotional effects involving posterior cingulate (P3 source) and inferior temporal cortex (630 ms) when compared to those at low risk. While the early emotional effects were enhanced for left hemifield (right hemisphere) presentations, hemifield modulations did not differ between risk groups, suggesting top-down rather than bottom-up effects of risk. Groups did not differ in their stimulus valence or arousal ratings. Similar effects were seen for individuals with a lifetime history of depression or anxiety disorder in comparison to those without. However, there was no evidence that risk status and history of MDD or anxiety disorder interacted in their impact on emotional responsivity, suggesting largely independent attenuation of attentional resource allocation to enhance perceptual processing of motivationally salient stimuli. These findings further suggest that a deficit in motivated attention preceding conscious awareness may be a marker of risk for depression.


Subject(s)
Depression/complications , Emotions/physiology , Evoked Potentials/physiology , Functional Laterality/physiology , Motivation/physiology , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Middle Aged , Photic Stimulation , Principal Component Analysis , Visual Fields/physiology , Young Adult
14.
Psychophysiology ; 54(1): 34-50, 2017 01.
Article in English | MEDLINE | ID: mdl-28000259

ABSTRACT

Growing evidence suggests that loudness dependency of auditory evoked potentials (LDAEP) and resting EEG alpha and theta may be biological markers for predicting response to antidepressants. In spite of this promise, little is known about the joint reliability of these markers, and thus their clinical applicability. New standardized procedures were developed to improve the compatibility of data acquired with different EEG platforms, and used to examine test-retest reliability for the three electrophysiological measures selected for a multisite project-Establishing Moderators and Biosignatures of Antidepressant Response for Clinical Care (EMBARC). Thirty-nine healthy controls across four clinical research sites were tested in two sessions separated by about 1 week. Resting EEG (eyes-open and eyes-closed conditions) was recorded and LDAEP measured using binaural tones (1000 Hz, 40 ms) at five intensities (60-100 dB SPL). Principal components analysis of current source density waveforms reduced volume conduction and provided reference-free measures of resting EEG alpha and N1 dipole activity to tones from auditory cortex. Low-resolution electromagnetic tomography (LORETA) extracted resting theta current density measures corresponding to rostral anterior cingulate (rACC), which has been implicated in treatment response. There were no significant differences in posterior alpha, N1 dipole, or rACC theta across sessions. Test-retest reliability was .84 for alpha, .87 for N1 dipole, and .70 for theta rACC current density. The demonstration of good-to-excellent reliability for these measures provides a template for future EEG/ERP studies from multiple testing sites, and an important step for evaluating them as biomarkers for predicting treatment response.
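Test-retest reliability of the kind reported here is commonly summarized with a Pearson correlation or an intraclass correlation across the two sessions. The sketch below shows both, using hypothetical column names and the third-party pingouin package for the ICC; it is not the study's actual pipeline.

```python
# Hedged sketch of a two-session test-retest reliability check. Column names and the
# use of the pingouin package are assumptions for illustration only.
import pandas as pd
from scipy.stats import pearsonr
import pingouin as pg

wide = pd.read_csv("eeg_measures_wide.csv")  # one row per participant, hypothetical

r, p = pearsonr(wide["alpha_session1"], wide["alpha_session2"])
print(f"Pearson test-retest r = {r:.2f} (p = {p:.3f})")

# Intraclass correlation expects long format: participant, session, value
long = wide.melt(id_vars="subject",
                 value_vars=["alpha_session1", "alpha_session2"],
                 var_name="session", value_name="alpha")
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="session", ratings="alpha")
print(icc[["Type", "ICC", "CI95%"]])
```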


Subject(s)
Alpha Rhythm , Antidepressive Agents/therapeutic use , Cerebral Cortex/physiology , Electroencephalography/methods , Evoked Potentials, Auditory , Theta Rhythm , Acoustic Stimulation , Adult , Auditory Cortex/physiology , Biomarkers , Female , Gyrus Cinguli/physiology , Humans , Male , Principal Component Analysis , Reproducibility of Results , Signal Processing, Computer-Assisted
15.
Neuroimage ; 142: 337-350, 2016 Nov 15.
Article in English | MEDLINE | ID: mdl-27263509

ABSTRACT

Event-related potential (ERP) studies have provided evidence for an allocation of attentional resources to enhance perceptual processing of motivationally salient stimuli. Emotional modulation affects several consecutive components associated with stages of affective-cognitive processing, beginning as early as 100-200ms after stimulus onset. In agreement with the notion that the right parietotemporal region is critically involved during the perception of arousing affective stimuli, some ERP studies have reported asymmetric emotional ERP effects. However, it is difficult to separate emotional from non-emotional effects because differences in stimulus content unrelated to affective salience or task demands may also be associated with lateralized function or promote cognitive processing. Other concerns pertain to the operational definition and statistical independence of ERP component measures, their dependence on an EEG reference, and spatial smearing due to volume conduction, all of which impede the identification of distinct scalp activation patterns associated with affective processing. Building on prior research using a visual half-field paradigm with highly controlled emotional stimuli (pictures of cosmetic surgery patients showing disordered [negative] or healed [neutral] facial areas before or after treatment), 72-channel ERPs recorded from 152 individuals (ages 13-68years; 81 female) were transformed into reference-free current source density (CSD) waveforms and submitted to temporal principal components analysis (PCA) to identify their underlying neuronal generator patterns. Using both nonparametric randomization tests and repeated measures ANOVA, robust effects of emotional content were found over parietooccipital regions for CSD factors corresponding to N2 sink (212ms peak latency), P3 source (385ms) and a late centroparietal source (630ms), all indicative of greater positivity for negative than neutral stimuli. For the N2 sink, emotional effects were right-lateralized and modulated by hemifield, with larger amplitude and asymmetry for left hemifield (right hemisphere) presentations. For all three factors, more positive amplitudes at parietooccipital sites were associated with increased ratings of negative valence and greater arousal. Distributed inverse solutions of the CSD-PCA-based emotional effects implicated a sequence of maximal activations in right occipitotemporal cortex, bilateral posterior cingulate cortex, and bilateral inferior temporal cortex. These findings are consistent with hierarchical activations of the ventral visual pathway reflecting subsequent processing stages in response to motivationally salient stimuli.
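As a rough illustration of the CSD-plus-temporal-PCA pipeline described above, the sketch below applies MNE-Python's current source density transform to per-subject evoked responses and then runs an unrotated PCA over time points. The published pipeline used covariance-based PCA with unrestricted Varimax rotation, which is omitted here, and all file names are hypothetical.

```python
# Hedged sketch of CSD transformation followed by a temporal PCA of ERP waveforms.
# The Varimax rotation used in the published pipeline is omitted; file names are
# hypothetical, and the CSD step assumes electrode positions (a montage) are present.
import numpy as np
import mne
from sklearn.decomposition import PCA

fnames = ["sub01-ave.fif", "sub02-ave.fif"]  # hypothetical per-subject evoked files
evokeds = [
    mne.preprocessing.compute_current_source_density(mne.read_evokeds(f, condition=0))
    for f in fnames
]

# Stack waveforms as (subjects x channels, time points): rows are observations,
# columns are time samples, so PCA components are temporal "factors".
data = np.vstack([ev.data for ev in evokeds])

pca = PCA(n_components=10)
scores = pca.fit_transform(data)          # factor scores per channel/subject waveform
print(pca.explained_variance_ratio_.round(3))
```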


Subject(s)
Attention/physiology , Cerebral Cortex/physiology , Electroencephalography/methods , Emotions/physiology , Evoked Potentials/physiology , Functional Laterality/physiology , Motivation/physiology , Pattern Recognition, Visual/physiology , Signal Processing, Computer-Assisted , Adolescent , Adult , Aged , Female , Humans , Male , Middle Aged , Principal Component Analysis , Young Adult
16.
Psychiatry Res ; 228(3): 606-11, 2015 Aug 30.
Article in English | MEDLINE | ID: mdl-26162656

ABSTRACT

A prior study (Bruder, G.E., Stewart, J.W., Mercier, M.A., Agosti, V., Leite, P., Donovan, S., Quitkin, F.M., 1997. Outcome of cognitive-behavioral therapy for depression: relation of hemispheric dominance for verbal processing. Journal of Abnormal Psychology 106, 138-144.) found left hemisphere advantage for verbal dichotic listening was predictive of clinical response to cognitive behavioral therapy (CBT) for depression. This study aimed to confirm this finding and to examine the value of neuropsychological tests, which have shown promise for predicting antidepressant response. Twenty depressed patients who subsequently completed 14 weeks of CBT and 74 healthy adults were tested on a Dichotic Fused Words Test (DFWT). Patients were also tested on the National Adult Reading Test to estimate IQ, and word fluency, choice RT, and Stroop neuropsychological tests. Left hemisphere advantage on the DFWT was more than twice as large in CBT responders as in non-responders, and was associated with improvement in depression following treatment. There was no difference between responders and non-responders on neuropsychological tests. The results support the hypothesis that the ability of individuals with strong left hemisphere dominance to recruit frontal and temporal cortical regions involved in verbal dichotic listening predicts CBT response. The large effect size, sensitivity and specificity of DFWT predictions suggest the potential value of this brief and inexpensive test as an indicator of whether a patient will benefit from CBT for depression.


Subject(s)
Cognitive Behavioral Therapy , Depressive Disorder, Major/psychology , Depressive Disorder, Major/therapy , Dominance, Cerebral , Speech Perception , Adult , Aged , Dichotic Listening Tests , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Treatment Outcome , Young Adult
17.
Clin Neurophysiol ; 125(3): 484-90, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24095153

ABSTRACT

OBJECTIVE: EEG topographies may be distorted by electrode bridges, typically caused by electrolyte spreading between adjacent electrodes. We therefore sought to determine the prevalence of electrode bridging and its potential impact on the EEG literature. METHODS: Five publicly-available EEG datasets were evaluated for evidence of bridging using a new screening method that employs the temporal variance of pairwise difference waveforms (electrical distance). Distinctive characteristics of electrical distance frequency distributions were used to develop an algorithm to identify electrode bridges in datasets with different montages (22-64 channels) and noise properties. RESULTS: The extent of bridging varied substantially across datasets: 54% of EEG recording sessions contained an electrode bridge, and the mean percentage of bridged electrodes in a montage was as high as 18% in one of the datasets. Furthermore, over 40% of the recording channels were bridged in 9 of 203 sessions. These findings were independently validated by visual inspection. CONCLUSIONS: The new algorithm conveniently, efficiently, and reliably identified electrode bridges across different datasets and recording conditions. Electrode bridging may constitute a substantial problem for some datasets. SIGNIFICANCE: Given the extent of the electrode bridging across datasets, this problem may be more widespread than commonly thought. However, when used as an automatic screening routine, the new algorithm will prevent pitfalls stemming from unrecognized electrode bridges.
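The screening idea described here (the temporal variance of pairwise difference waveforms, or "electrical distance", drops toward zero for bridged electrode pairs) is easy to sketch. The snippet below uses a simple fixed cutoff in place of the paper's distribution-based algorithm, and the cutoff value is an assumption that would need tuning per dataset.

```python
# Minimal numpy sketch of the electrical-distance screen: near-zero temporal variance
# of a channel-pair difference waveform suggests an electrolyte bridge. The fixed
# cutoff below is illustrative only and stands in for the paper's distribution-based
# algorithm.
import numpy as np

def electrical_distances(eeg):
    """eeg: (n_channels, n_samples) array; returns pairwise difference-waveform variances."""
    n = eeg.shape[0]
    dist = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = np.var(eeg[i] - eeg[j])
    return dist

def flag_bridges(eeg, cutoff=0.5):
    """Return channel index pairs whose electrical distance falls below the cutoff."""
    dist = electrical_distances(eeg)
    i, j = np.where(np.triu(dist < cutoff, k=1))
    return list(zip(i.tolist(), j.tolist()))

# Quick self-check with simulated data: channel 3 is a near-copy of channel 2.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(8, 5000))
eeg[3] = eeg[2] + rng.normal(scale=0.05, size=5000)
print(flag_bridges(eeg))  # expected: [(2, 3)]
```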


Subject(s)
Databases, Factual , Electroencephalography , Algorithms , Brain Mapping , Electrodes , Humans
18.
Int J Psychophysiol ; 91(2): 104-20, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24333745

ABSTRACT

Prior research suggests that event-related potentials (ERP) obtained during active and passive auditory paradigms, which have demonstrated abnormal neurocognitive function in schizophrenia, may provide helpful tools in predicting transition to psychosis. In addition to ERP measures, reduced modulations of EEG alpha, reflecting top-down control required to inhibit irrelevant information, have revealed attentional deficits in schizophrenia and its prodromal stage. Employing a three-stimulus novelty oddball task, nose-referenced 48-channel ERPs were recorded from 22 clinical high-risk (CHR) patients and 20 healthy controls detecting target tones (12% probability, 500Hz; button press) among nontargets (76%, 350Hz) and novel sounds (12%). After current source density (CSD) transformation of EEG epochs (-200 to 1000ms), event-related spectral perturbations were obtained for each site up to 30Hz and 800ms after stimulus onset, and simplified by unrestricted time-frequency (TF) principal components analysis (PCA). Alpha event-related desynchronization (ERD) as measured by TF factor 610-9 (spectral peak latency at 610ms and 9Hz; 31.9% variance) was prominent over right posterior regions for targets, and markedly reduced in CHR patients compared to controls, particularly in three patients who later developed psychosis. In contrast, low-frequency event-related synchronization (ERS) distinctly linked to novels (260-1; 16.0%; mid-frontal) and N1 sink across conditions (130-1; 3.4%; centro-temporoparietal) did not differ between groups. Analogous time-domain CSD-ERP measures (temporal PCA), consisting of N1 sink, novelty mismatch negativity (MMN), novelty vertex source, novelty P3, P3b, and frontal response negativity, were robust and closely comparable between groups. Novelty MMN at FCz was, however, absent in the three converters. In agreement with prior findings, alpha ERD and MMN may hold particular promise for predicting transition to psychosis among CHR patients.


Subject(s)
Alpha Rhythm/physiology , Brain/physiopathology , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Prodromal Symptoms , Psychotic Disorders/physiopathology , Adolescent , Adult , Auditory Perception/physiology , Child , Female , Humans , Male , Risk , Time Factors , Young Adult
19.
Int J Psychophysiol ; 90(2): 190-206, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23856353

ABSTRACT

Smell identification deficits (SIDs) are relatively specific to schizophrenia and its negative symptoms, and may predict transition to psychosis in clinical high-risk (CHR) individuals. Moreover, event-related potentials (ERPs) to odors are reduced in schizophrenia. This study examined whether CHR patients show SIDs and abnormal olfactory N1 and P2 potentials. ERPs (49 channels) were recorded from 21 CHR and 20 healthy participants (13 males/group; ages 13-27 years) during an odor detection task using three concentrations of hydrogen sulfide (H2S) or blank air presented unilaterally by a constant-flow olfactometer. Neuronal generator patterns underlying olfactory ERPs were identified and measured by principal components analysis (unrestricted Varimax) of reference-free current source densities (CSD). Replicating previous findings, CSD waveforms to H2S stimuli were characterized by an early N1 sink (345 ms, lateral-temporal) and a late P2 source (600 ms, mid-frontocentroparietal). N1 and P2 varied monotonically with odor intensity (strong > medium > weak) and did not differ across groups. Patients and controls also showed comparable odor detection and had normal odor identification and thresholds (Sniffin' Sticks). However, olfactory ERPs strongly reflected differences in odor intensity and detection in controls, but these associations were substantially weaker in patients. Moreover, severity of negative symptoms in patients was associated with reduced olfactory ERPs and poorer odor detection, identification and thresholds. Three patients who developed psychosis had poorer odor detection and thresholds, and marked reductions of N1 and P2. Thus, despite the lack of overall group differences, olfactory measures may be of utility in predicting transition to psychosis among CHR patients.


Subject(s)
Odorants , Olfaction Disorders/diagnosis , Olfaction Disorders/etiology , Psychotic Disorders/complications , Reaction Time/physiology , Smell/physiology , Adolescent , Adult , Brain Mapping , Child , Cross-Sectional Studies , Evoked Potentials , Female , Humans , Male , Sensitivity and Specificity , Taste Threshold , Young Adult
20.
Neurology ; 79(19): 1951-60, 2012 Nov 06.
Article in English | MEDLINE | ID: mdl-23035068

ABSTRACT

OBJECTIVE: Generalized periodic discharges are increasingly recognized on continuous EEG monitoring, but their relationship to seizures and prognosis remains unclear. METHODS: All adults with generalized periodic discharges from 1996 to 2006 were matched 1:1 to controls by age, etiology, and level of consciousness. Overall, 200 patients with generalized periodic discharges were matched to 200 controls. RESULTS: Mean age was 66 years (range 18-96); 56% were comatose. Presenting illnesses included acute brain injury (44%), acute systemic illness (38%), cardiac arrest (15%), and epilepsy (3%). A total of 46% of patients with generalized periodic discharges had a seizure during their hospital stay (almost half were focal), vs 34% of controls (p = 0.014). Convulsive seizures were seen in a third of both groups. A total of 27% of patients with generalized periodic discharges had nonconvulsive seizures, vs 8% of controls (p < 0.001); 22% of patients with generalized periodic discharges had nonconvulsive status epilepticus, vs 7% of controls (p < 0.001). In both groups, approximately half died or were in a vegetative state, one-third had severe disability, and one-fifth had moderate to no disability. Excluding cardiac arrest patients, generalized periodic discharges were associated with increased mortality on univariate analysis (36.8% vs 26.9%; p = 0.049). Multivariate predictors of worse outcome were cardiac arrest, coma, nonconvulsive status epilepticus, and sepsis, but not generalized periodic discharges. CONCLUSION: Generalized periodic discharges were strongly associated with nonconvulsive seizures and nonconvulsive status epilepticus. While nonconvulsive status epilepticus was independently associated with worse outcome, generalized periodic discharges were not after matching for age, etiology, and level of consciousness.


Subject(s)
Critical Illness , Seizures/physiopathology , Adolescent , Adult , Aged , Aged, 80 and over , Case-Control Studies , Electroencephalography , Female , Glasgow Coma Scale , Humans , Inpatients , Male , Middle Aged , Predictive Value of Tests , Retrospective Studies , Seizures/etiology , Statistics, Nonparametric , Young Adult