1.
Neuroimage ; 275: 120166, 2023 07 15.
Article in English | MEDLINE | ID: mdl-37178821

ABSTRACT

BACKGROUND: Neural activation during reward processing is thought to underlie critical behavioral changes that take place during the transition to adolescence (e.g., learning, risk-taking). Though the literature on the neural basis of reward processing in adolescence is booming, important gaps remain. First, more information is needed regarding changes in functional neuroanatomy in early adolescence. Another gap is understanding whether sensitivity to different aspects of the incentive (e.g., magnitude and valence) changes during the transition into adolescence. We used fMRI data from a large sample of preadolescent children to characterize neural responses to incentive valence vs. magnitude during anticipation and feedback, and their change over a period of two years. METHODS: Data were taken from the Adolescent Brain Cognitive Development℠ (ABCD®) study, release 3.0. Children completed the Monetary Incentive Delay task at baseline (ages 9-10) and at the year 2 follow-up (ages 11-12). Based on data from two sites (N = 491), we identified activation-based regions of interest (ROIs; e.g., striatum, prefrontal regions) that were sensitive to trial type (win $5, win $0.20, neutral, lose $0.20, lose $5) during the anticipation and feedback phases. Then, in an independent subsample (N = 1470), we examined whether these ROIs were sensitive to valence and magnitude and whether that sensitivity changed over two years. RESULTS: Our results show that most ROIs involved in reward processing (including the striatum, prefrontal cortex, and insula) are specialized, i.e., mainly sensitive to either incentive valence or magnitude, and this sensitivity was consistent over the 2-year period. The effect sizes of time and its interactions (0.002 ≤ η² ≤ 0.02) were significantly smaller than the effect size of trial type (0.06 ≤ η² ≤ 0.30). Interestingly, specialization was moderated by reward processing phase but was stable across development.
Biological sex and pubertal status differences were few and inconsistent. Developmental changes were mostly evident during success feedback, where neural reactivity increased over time. CONCLUSIONS: Our results suggest sub-specialization to valence vs. magnitude within many ROIs of the reward circuitry. Additionally, in line with theoretical models of adolescent development, our results suggest that the ability to benefit from success increases from pre- to early adolescence. These findings can inform educators and clinicians and facilitate empirical research of typical and atypical motivational behaviors during a critical time of development.
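The valence-versus-magnitude decomposition tested in this study can be illustrated with a minimal sketch (illustrative only; the coding below is an assumption for exposition, not the study's actual regressor construction):

```python
# Decompose the five MID trial types listed in the abstract into
# separate valence and magnitude factors.
TRIAL_TYPES = {
    "win_large": 5.00,
    "win_small": 0.20,
    "neutral": 0.00,
    "lose_small": -0.20,
    "lose_large": -5.00,
}

def valence(amount):
    """+1 for wins, -1 for losses, 0 for neutral trials."""
    return (amount > 0) - (amount < 0)

def magnitude(amount):
    """Unsigned incentive size, ignoring valence."""
    return abs(amount)

# Each trial type maps onto a (valence, magnitude) pair; an ROI
# "specialized" for valence would track the first factor, while one
# specialized for magnitude would track the second.
factors = {t: (valence(a), magnitude(a)) for t, a in TRIAL_TYPES.items()}
```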


Subject(s)
Motivation , Reward , Child , Humans , Brain/physiology , Brain Mapping , Corpus Striatum/physiology , Magnetic Resonance Imaging , Prefrontal Cortex
2.
Neuropsychopharmacology ; 47(7): 1339-1349, 2022 06.
Article in English | MEDLINE | ID: mdl-35017672

ABSTRACT

Prediction errors (PEs) are a keystone of computational neuroscience. Their association with midbrain neural firing has been confirmed across species and has inspired the construction of artificial intelligence that can outperform humans. However, there is still much to learn. Here, we leverage the wealth of human PE data acquired in the functional neuroimaging setting in service of a deeper understanding, using a multilevel kernel density analysis (MKDA) meta-analysis. Studies were identified with Google Scholar, and we included studies with healthy adult participants, published between 1999 and 2018, that reported activation coordinates corresponding to PEs. Across 264 PE studies spanning reward, punishment, action, cognition, and perception, and consistent with domain-general theoretical models of prediction error, we found midbrain PE signals during cognitive and reward learning tasks, and an insula PE signal for perceptual, social, cognitive, and reward prediction errors. There was also evidence for domain-specific error signals: in the visual hierarchy during visual perception, and in the dorsomedial prefrontal cortex during social inference. We assessed bias following prior neuroimaging meta-analyses and used family-wise error correction for multiple comparisons. This organization of computation by region will be invaluable in building and testing mechanistic models of cognitive function and dysfunction in machines, humans, and other animals. Limitations include small sample sizes and ROI masking in some included studies, which we addressed by weighting each study by sample size and by directly comparing whole-brain vs. ROI-based results.
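The density computation at the heart of an MKDA-style meta-analysis can be sketched in a few lines (a simplified toy version, assuming spherical kernels in voxel units; the function names and uniform weighting are illustrative, not the authors' implementation):

```python
import numpy as np

def study_indicator(peaks, shape, radius):
    """Binary map: 1 within `radius` voxels of any peak a study reports."""
    grid = np.indices(shape).reshape(len(shape), -1).T  # all voxel coords
    ind = np.zeros(shape)
    for p in peaks:
        near = np.linalg.norm(grid - np.asarray(p), axis=1) <= radius
        ind[near.reshape(shape)] = 1.0
    return ind

def mkda_density(studies, shape, radius=2, weights=None):
    """Weighted proportion of studies reporting a peak near each voxel.

    `studies` is a list of peak-coordinate lists, one per study; weights
    (e.g., by sample size, as the abstract describes) default to uniform.
    """
    if weights is None:
        weights = [1.0] * len(studies)
    maps = [w * study_indicator(p, shape, radius)
            for w, p in zip(weights, studies)]
    return np.sum(maps, axis=0) / sum(weights)
```

In the full method, the resulting density map is then thresholded against a null distribution from randomly relocated peaks; that step is omitted here.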


Subject(s)
Artificial Intelligence , Motivation , Cognition/physiology , Humans , Magnetic Resonance Imaging , Perception , Reward
3.
Front Hum Neurosci ; 15: 615313, 2021.
Article in English | MEDLINE | ID: mdl-33679345

ABSTRACT

Compared to our understanding of the positive prediction error signals that arise from unexpected reward outcomes, less is known about the neural circuitry in humans that drives negative prediction errors during omission of expected rewards. While classical learning theories such as Rescorla-Wagner or temporal difference learning suggest that both types of prediction errors result from a simple subtraction, recent evidence suggests that different brain regions provide input to dopamine neurons, each contributing specific components of this prediction error computation. Here, we focus on the brain regions responding to negative prediction error signals, which animal studies have well established to involve a distinct pathway through the lateral habenula. We examine the activity of this pathway in humans using a conditioned inhibition paradigm with high-resolution functional MRI. First, participants learned to associate a sensory stimulus with reward delivery. Then, reward delivery was omitted whenever this stimulus was presented simultaneously with a different sensory stimulus, the conditioned inhibitor (CI). Both reward presentation and the reward-predictive cue activated midbrain dopamine regions, the insula, and the orbitofrontal cortex. While we found habenula activity for the CI at an uncorrected threshold, consistent with our predictions, it did not survive correction for multiple comparisons and awaits further replication. Additionally, the pallidum and putamen regions of the basal ganglia showed modulations of activity for the inhibitor that did not survive the corrected threshold.
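The "simple subtraction" that Rescorla-Wagner and temporal-difference accounts share can be written out explicitly (an illustrative sketch of the textbook update rules, not the study's analysis code):

```python
def rescorla_wagner_update(V, reward, alpha=0.1):
    """Rescorla-Wagner: the prediction error is reward minus expectation."""
    delta = reward - V  # negative when an expected reward is omitted
    return V + alpha * delta, delta

def td_error(reward, V_next, V, gamma=0.95):
    """Temporal-difference form of the same subtraction, with bootstrapping
    from the value of the next state."""
    return reward + gamma * V_next - V

# Conditioned inhibition: a cue fully predicts reward (V = 1.0), but the
# reward is omitted when the conditioned inhibitor accompanies the cue.
V_new, delta = rescorla_wagner_update(V=1.0, reward=0.0)
# delta is -1.0: a negative prediction error at the time of omission
```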

4.
J Abnorm Psychol ; 129(6): 544-555, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32757599

ABSTRACT

In this brief review, we describe current computational models of drug use and addiction that fall into two broad categories: mathematically based models that rely on computational theories, and brain-based models that link computations to brain areas or circuits. Across categories, many are models of learning and decision-making, which may be compromised in addiction. Several mathematical models take predictive coding approaches, focusing on Bayesian prediction error. Other models focus on learning processes and (traditional) prediction error. Brain-based models have incorporated the prefrontal cortex, basal ganglia, and the dopamine system, based on the effects of drugs on dopamine, motivation, and executive control circuits. Several models specifically describe how behavioral control may transition from habitual to goal-directed systems, consistent with computational accounts of compromised "model-based" control. Some brain-based models have linked this to the transition of behavioral control from ventral to dorsal striatum. Overall, we propose that while computational models capture some aspects of addiction and have advanced our thinking, most have focused on the effects of drug use rather than on addiction per se, most have not been tested on or supported by human data, and few capture multiple stages and symptoms of addiction. We conclude by suggesting a path forward for computational models of addiction. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Behavior, Addictive/physiopathology , Brain/physiopathology , Computer Simulation , Models, Neurological , Substance-Related Disorders/physiopathology , Humans , Learning , Motivation
5.
Psychol Rev ; 127(6): 972-1021, 2020 11.
Article in English | MEDLINE | ID: mdl-32525345

ABSTRACT

We describe a neurobiologically informed computational model of phasic dopamine signaling that accounts for a wide range of findings, including many considered inconsistent with the simple reward prediction error (RPE) formalism. The central feature of this PVLV framework is a distinction between a primary value (PV) system for anticipating primary rewards (unconditioned stimuli [USs]) and a learned value (LV) system for learning about stimuli associated with such rewards (conditioned stimuli [CSs]). The LV system represents the amygdala, which drives phasic bursting in midbrain dopamine areas, while the PV system represents the ventral striatum, which drives shunting inhibition of dopamine for expected USs (via direct inhibitory projections) and phasic pausing of dopamine firing when expected USs are omitted (via the lateral habenula). Our model accounts for data supporting the separability of these systems, including individual differences in CS-based (sign-tracking) versus US-based (goal-tracking) learning. Both systems use competing opponent-processing pathways representing evidence for and against specific USs, which can explain data dissociating the processes involved in acquisition versus extinction conditioning. Further, opponent processing proved critical in accounting for the full range of conditioned inhibition phenomena and for the closely related paradigm of second-order conditioning. Finally, we show how additional separable pathways representing aversive USs, while largely mirroring those for appetitive USs, also have important differences from the positive valence case, allowing the model to account for several important phenomena in aversive conditioning. Overall, accounting for all of these phenomena strongly constrains the model, providing a well-validated framework for understanding phasic dopamine signaling. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
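The division of labor between the PV and LV systems can be caricatured in a few lines (a toy sketch of the qualitative logic described above, not the PVLV model itself; the function names and signatures are invented):

```python
def da_at_cs(lv_value):
    """Phasic burst at CS onset, driven by the learned-value (LV/amygdala)
    system; a CS with no learned value produces no burst."""
    return max(lv_value, 0.0)

def da_at_us(us_magnitude, pv_expectation):
    """Dopamine at US time: the primary-value (PV/ventral striatum) system
    shunts the response to an expected US, and omission of an expected US
    yields a pause (negative signal, via the lateral habenula)."""
    return us_magnitude - pv_expectation

# After learning: a burst at the CS, a shunted (near-zero) response to the
# fully expected US, and a pause when the expected US is omitted.
burst = da_at_cs(1.0)         # 1.0
shunted = da_at_us(1.0, 1.0)  # 0.0
pause = da_at_us(0.0, 1.0)    # -1.0
```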


Subject(s)
Dopamine , Models, Neurological , Reward , Amygdala/physiology , Conditioning, Classical , Conditioning, Psychological , Humans , Learning