Results 1 - 20 of 139
1.
Cell ; 186(3): 560-576.e17, 2023 02 02.
Article in English | MEDLINE | ID: mdl-36693374

ABSTRACT

Downward social mobility is a well-known mental risk factor for depression, but its neural mechanism remains elusive. Here, by forcing mice to lose against their subordinates in a non-violent social contest, we lower their social ranks stably and induce depressive-like behaviors. These rank-decline-associated depressive-like behaviors can be reversed by regaining social status. In vivo fiber photometry and single-unit electrophysiological recording show that forced loss, but not natural loss, generates negative reward prediction error (RPE). Through the lateral hypothalamus, the RPE strongly activates the brain's anti-reward center, the lateral habenula (LHb). LHb activation inhibits the medial prefrontal cortex (mPFC) that controls social competitiveness and reinforces retreats in contests. These results reveal the core neural mechanisms mutually promoting social status loss and depressive behaviors. The intertwined neuronal signaling controlling mPFC and LHb activities provides a mechanistic foundation for the crosstalk between social mobility and psychological disorder, unveiling a promising target for intervention.


Subject(s)
Habenula, Social Status, Mice, Animals, Reward, Social Behavior, Habenula/physiology, Depression
2.
Proc Natl Acad Sci U S A ; 121(20): e2316658121, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38717856

ABSTRACT

Individual survival and evolutionary selection require biological organisms to maximize reward. Economic choice theories define the necessary and sufficient conditions, and neuronal signals of decision variables provide mechanistic explanations. Reinforcement learning (RL) formalisms use predictions, actions, and policies to maximize reward. Midbrain dopamine neurons code reward prediction errors (RPE) of subjective reward value suitable for RL. Electrical and optogenetic self-stimulation experiments demonstrate that monkeys and rodents repeat behaviors that result in dopamine excitation. Dopamine excitations reflect positive RPEs that increase reward predictions via RL; against increasing predictions, obtaining similar dopamine RPE signals again requires better rewards than before. The positive RPEs drive predictions higher again and thus advance a recursive reward-RPE-prediction iteration toward better and better rewards. Agents also avoid dopamine inhibitions that lower reward prediction via RL, which allows smaller rewards than before to elicit positive dopamine RPE signals and resume the iteration toward better rewards. In this way, dopamine RPE signals serve a causal mechanism that attracts agents via RL to the best rewards. The mechanism improves daily life and benefits evolutionary selection but may also induce restlessness and greed.
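The recursive reward-RPE-prediction iteration described above can be sketched with a minimal Rescorla-Wagner-style update. This is an illustrative sketch only, not the authors' model; the function name, learning rate, and reward values are assumptions.

```python
def rpe_update(value, reward, alpha=0.5):
    """One learning step: return (new_value, rpe).

    Illustrative parameters; alpha is an assumed learning rate.
    """
    rpe = reward - value          # reward prediction error
    return value + alpha * rpe, rpe

# Repeatedly obtaining the same reward shrinks the RPE toward zero:
# against the risen prediction, a similarly large positive RPE would
# now require a better reward than before.
value = 0.0
for _ in range(10):
    value, rpe = rpe_update(value, reward=1.0)
```

After ten identical rewards the prediction has nearly converged on the reward value, so the residual RPE is close to zero, which is the driving force of the iteration toward better rewards described in the abstract.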


Subject(s)
Dopamine, Dopaminergic Neurons, Reward, Animals, Dopamine/metabolism, Dopaminergic Neurons/physiology, Dopaminergic Neurons/metabolism, Humans, Reinforcement, Psychology
3.
J Neurosci ; 44(26)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38806248

ABSTRACT

Coordinated multijoint limb and digit movements-"manual dexterity"-underlie both specialized skills (e.g., playing the piano) and more mundane tasks (e.g., tying shoelaces). Impairments in dexterous skill cause significant disability, as occurs with motor cortical injury, Parkinson's disease, and a range of other pathologies. Clinical observations, as well as basic investigations, suggest that corticostriatal circuits play a critical role in learning and performing dexterous skills. Furthermore, dopaminergic signaling in these regions is implicated in synaptic plasticity and motor learning. Nonetheless, the role of striatal dopamine signaling in skilled motor learning remains poorly understood. Here, we use fiber photometry paired with a genetically encoded dopamine sensor to investigate striatal dopamine release in both male and female mice as they learn and perform a skilled reaching task. Dopamine rapidly increases during a skilled reach and peaks near pellet consumption. In the dorsolateral striatum, dopamine dynamics are faster than in the dorsomedial and ventral striatum. Across training, as reaching performance improves, dopamine signaling shifts from pellet consumption to cues that predict pellet availability, particularly in medial and ventral areas of the striatum. Furthermore, performance prediction errors are present across the striatum, with reduced dopamine release after an unsuccessful reach. These findings show that dopamine dynamics during skilled motor behaviors change with learning and are differentially regulated across striatal subregions.


Subject(s)
Corpus Striatum, Dopamine, Learning, Motor Skills, Animals, Dopamine/metabolism, Male, Mice, Female, Corpus Striatum/metabolism, Corpus Striatum/physiology, Learning/physiology, Motor Skills/physiology, Mice, Inbred C57BL
4.
J Neurosci ; 43(10): 1714-1730, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36669886

ABSTRACT

In reinforcement learning (RL), animals choose by assigning values to options and learn by updating these values from reward outcomes. This framework has been instrumental in identifying fundamental learning variables and their neuronal implementations. However, canonical RL models do not explain how reward values are constructed from biologically critical intrinsic reward components, such as nutrients. From an ecological perspective, animals should adapt their foraging choices in dynamic environments to acquire nutrients that are essential for survival. Here, to advance the biological and ecological validity of RL models, we investigated how (male) monkeys adapt their choices to obtain preferred nutrient rewards under varying reward probabilities. We found that the nutrient composition of rewards strongly influenced learning and choices. Preferences of the animals for specific nutrients (sugar, fat) affected how they adapted to changing reward probabilities; the history of recent rewards influenced the choices of the monkeys more strongly if these rewards contained their preferred nutrients (nutrient-specific reward history). The monkeys also chose preferred nutrients even when they were associated with lower reward probability. A nutrient-sensitive RL model captured these processes; it updated the values of individual sugar and fat components of expected rewards based on experience and integrated them into subjective values that explained the choices of the monkeys. Nutrient-specific reward prediction errors guided this value-updating process. Our results identify nutrients as important reward components that guide learning and choice by influencing the subjective value of choice options. Extending RL models with nutrient-value functions may enhance their biological validity and uncover nutrient-specific learning and decision variables. SIGNIFICANCE STATEMENT RL is an influential framework that formalizes how animals learn from experienced rewards. 
Although reward is a foundational concept in RL theory, canonical RL models cannot explain how learning depends on specific reward properties, such as nutrients. Intuitively, learning should be sensitive to the nutrient components of the reward to benefit health and survival. Here, we show that the nutrient (fat, sugar) composition of rewards affects how the monkeys choose and learn in an RL paradigm and that key learning variables including reward history and reward prediction error should be modified with nutrient-specific components to account for the choice behavior observed in the monkeys. By incorporating biologically critical nutrient rewards into the RL framework, our findings help advance the ecological validity of RL models.
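A nutrient-sensitive value update of the general kind described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the learning rate, weights, and function names are assumptions.

```python
def update_nutrient_values(values, outcome, alpha=0.3):
    """Update separate nutrient value components (e.g. sugar, fat)
    from one reward outcome, each via its own prediction error."""
    return {k: v + alpha * (outcome[k] - v) for k, v in values.items()}

def subjective_value(values, weights):
    """Integrate nutrient components into one subjective value,
    weighted by the animal's (assumed) nutrient preferences."""
    return sum(weights[k] * values[k] for k in values)

# One trial: a high-sugar reward is experienced, and the integrated
# subjective value reflects a sugar-preferring weighting.
values = {"sugar": 0.0, "fat": 0.0}
values = update_nutrient_values(values, {"sugar": 1.0, "fat": 0.2})
sv = subjective_value(values, {"sugar": 0.7, "fat": 0.3})
```

The key design choice mirrored here is that each nutrient carries its own prediction error, and choice is driven by the weighted integration, which is why a preferred nutrient can dominate even at lower reward probability.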


Subject(s)
Reinforcement, Psychology, Reward, Animals, Male, Haplorhini, Neurons/physiology, Nutrients, Choice Behavior/physiology
5.
Psychol Med ; 54(2): 256-266, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37161677

ABSTRACT

BACKGROUND: The incidence of adolescent depressive disorder has been skyrocketing globally in recent decades, yet the causes, and the decision deficits depression incurs, have yet to be well examined. Using an instrumental learning task, the current study aims to investigate the extent to which learning behavior deviates from that observed in healthy adolescent controls and to track the underlying mechanistic channel for such a deviation. METHODS: We recruited a group of adolescents with major depression and age-matched healthy control subjects to carry out the learning task with either gain or loss outcomes and applied a reinforcement learning model that dissociates valence (positive v. negative) of reward prediction error and selection (chosen v. unchosen). RESULTS: The results demonstrated that adolescent depressive patients performed significantly less well than the control group. Learning rates suggested that the optimistic bias that generally characterizes healthy adolescent subjects was absent in the depressive adolescent patients. Moreover, depressed adolescents exhibited an increased pessimistic bias for the counterfactual outcome. Lastly, individual-difference analysis suggested that these observed biases, which significantly deviated from those observed in normal controls, were linked with the severity of depressive symptoms as measured by HAMD scores. CONCLUSIONS: By leveraging an incentivized instrumental learning task with computational modeling within a reinforcement learning framework, the current study reveals a mechanistic decision-making deficit in adolescent depressive disorder. These findings, which have implications for the identification of behavioral markers in depression, could support the clinical evaluation, including both diagnosis and prognosis, of this disorder.
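A model that dissociates valence (positive v. negative) and selection (chosen v. unchosen), as described above, can be sketched with four separate learning rates. This is an illustrative sketch under assumptions; the rate values and function names are hypothetical, not the study's fitted parameters.

```python
def asymmetric_update(q, outcome, chosen, rates):
    """Update chosen/unchosen option values with valence- and
    selection-specific learning rates.

    rates: dict keyed by (role, valence), role in {"chosen", "unchosen"},
    valence in {"pos", "neg"}. Outcomes for unchosen options model
    counterfactual feedback.
    """
    new_q = dict(q)
    for opt, r in outcome.items():
        role = "chosen" if opt == chosen else "unchosen"
        rpe = r - q[opt]
        valence = "pos" if rpe >= 0 else "neg"
        new_q[opt] = q[opt] + rates[(role, valence)] * rpe
    return new_q

# An "optimistic" agent: faster learning from positive chosen outcomes.
rates = {("chosen", "pos"): 0.6, ("chosen", "neg"): 0.3,
         ("unchosen", "pos"): 0.2, ("unchosen", "neg"): 0.4}
q = asymmetric_update({"A": 0.0, "B": 0.0},
                      outcome={"A": 1.0, "B": 1.0}, chosen="A", rates=rates)
```

In this framework, an absent optimistic bias corresponds to equal positive and negative rates for chosen outcomes, and a pessimistic counterfactual bias to an elevated ("unchosen", "neg") rate.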


Subject(s)
Depressive Disorder, Major, Learning, Humans, Adolescent, Reinforcement, Psychology, Reward, Conditioning, Operant
6.
J Neurosci ; 42(40): 7648-7658, 2022 10 05.
Article in English | MEDLINE | ID: mdl-36096671

ABSTRACT

Humans routinely learn the value of actions by updating their expectations based on past outcomes - a process driven by reward prediction errors (RPEs). Importantly, however, implementing a course of action also requires the investment of effort. Recent work has revealed a close link between the neural signals involved in effort exertion and those underpinning reward-based learning, but the behavioral relationship between these two functions remains unclear. Across two experiments, we tested healthy male and female human participants (N = 140) on a reinforcement learning task in which they registered their responses by applying physical force to a pair of hand-held dynamometers. We examined the effect of effort on learning by systematically manipulating the amount of force required to register a response during the task. Our key finding, replicated across both experiments, was that greater effort increased learning rates following positive outcomes and decreased them following negative outcomes, which corresponded to a differential effect of effort in boosting positive RPEs and blunting negative RPEs. Interestingly, this effect was most pronounced in individuals who were more averse to effort in the first place, raising the possibility that the investment of effort may have an adaptive effect on learning in those less motivated to exert it. By integrating principles of reinforcement learning with neuroeconomic approaches to value-based decision-making, we show that the very act of investing effort modulates one's capacity to learn, and demonstrate how these functions may operate within a common computational framework. SIGNIFICANCE STATEMENT Recent work suggests that learning and effort may share common neurophysiological substrates. This raises the possibility that the very act of investing effort influences learning. Here, we tested whether effort modulates teaching signals in a reinforcement learning paradigm. 
Our results showed that effort resulted in more efficient learning from positive outcomes and less efficient learning from negative outcomes. Interestingly, this effect varied across individuals, and was more pronounced in those who were more averse to investing effort in the first place. These data highlight the importance of motivational factors in a common framework of reward-based learning, which integrates the computational principles of reinforcement learning with those of value-based decision-making.
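The behavioral pattern above, effort boosting positive RPEs and blunting negative RPEs, can be captured in a small sketch. The multiplicative scaling form and all parameter values are hypothetical assumptions, not the authors' fitted model.

```python
def effort_modulated_update(value, reward, effort, alpha=0.4, k=0.5):
    """One RL update in which exerted effort (0..1) amplifies positive
    RPEs and attenuates negative RPEs; k is an assumed effort gain."""
    rpe = reward - value
    gain = (1 + k * effort) if rpe >= 0 else (1 - k * effort)
    return value + alpha * gain * rpe

# Positive outcome: high effort produces a larger value update.
after_pos_high = effort_modulated_update(0.5, 1.0, effort=1.0)  # 0.8
after_pos_low = effort_modulated_update(0.5, 1.0, effort=0.0)   # 0.7
# Negative outcome: high effort blunts the update (value drops less).
after_neg_high = effort_modulated_update(0.5, 0.0, effort=1.0)  # 0.4
after_neg_low = effort_modulated_update(0.5, 0.0, effort=0.0)   # 0.3
```

Because the same gain parameter acts in opposite directions by valence, this sketch reproduces the reported asymmetry with a single effort-sensitivity term.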


Subject(s)
Decision Making, Reinforcement, Psychology, Humans, Male, Female, Decision Making/physiology, Reward, Motivation, Affect
7.
Hum Brain Mapp ; 44(12): 4545-4560, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37334979

ABSTRACT

The question of how the brain represents reward prediction errors is central to reinforcement learning and adaptive, goal-directed behavior. Previous studies have revealed prediction error representations in multiple electrophysiological signatures, but it remains elusive whether these electrophysiological correlates underlying prediction errors are sensitive to valence (in a signed form) or to salience (in an unsigned form). One possible reason concerns the loose correspondence between objective probability and subjective prediction resulting from the optimistic bias, that is, the tendency to overestimate the likelihood of encountering positive future events. In the present electroencephalography (EEG) study, we approached this question by directly measuring participants' idiosyncratic, trial-to-trial prediction errors elicited by subjective and objective probabilities across two experiments. We adopted monetary gain and loss feedback in Experiment 1 and positive and negative feedback as communicated by the same zero-value feedback in Experiment 2. We provided electrophysiological evidence in time and time-frequency domains supporting both reward and salience prediction error signals. Moreover, we showed that these electrophysiological signatures were highly flexible and sensitive to an optimistic bias and various forms of salience. Our findings shed new light on multiple presentations of prediction error in the human brain, which differ in format and functional role.


Subject(s)
Electroencephalography, Evoked Potentials, Humans, Evoked Potentials/physiology, Reward, Reinforcement, Psychology, Brain/physiology
8.
Psychol Med ; : 1-11, 2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36752156

ABSTRACT

BACKGROUND: Prior evidence indicates that negative symptom severity and cognitive deficits, in people with schizophrenia (PSZ), relate to measures of reward-seeking and loss-avoidance behavior (implicating the ventral striatum/VS), as well as uncertainty-driven exploration (reliant on rostrolateral prefrontal cortex/rlPFC). While neural correlates of reward-seeking and loss-avoidance have been examined in PSZ, neural correlates of uncertainty-driven exploration have not. Understanding the neural correlates of uncertainty-driven exploration is an important next step that could reveal insights into how this mechanism of cognitive and negative symptoms manifests at a neural level. METHODS: We acquired fMRI data from 29 PSZ and 36 controls performing the Temporal Utility Integration decision-making task. Computational analyses estimated parameters corresponding to learning rates for both positive and negative reward prediction errors (RPEs) and the degree to which participants relied on representations of relative uncertainty. Trial-wise estimates of expected value, certainty, and RPEs were generated to model fMRI data. RESULTS: Behaviorally, PSZ demonstrated reduced reward-seeking behavior compared to controls, and negative symptoms were positively correlated with loss-avoidance behavior. This finding of a bias toward loss-avoidance learning in PSZ is consistent with previous work. Surprisingly, neither behavioral measures of exploration nor neural correlates of uncertainty in the rlPFC differed significantly between groups. However, we showed that trial-wise estimates of relative uncertainty in the rlPFC distinguished participants who engaged in exploratory behavior from those who did not. rlPFC activation was positively associated with intellectual function. CONCLUSIONS: These results further elucidate the nature of reinforcement learning and decision-making in PSZ and healthy volunteers.

9.
Psychol Med ; 53(15): 7106-7115, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36987680

ABSTRACT

BACKGROUND: A leading theory of the negative symptoms of schizophrenia is that they reflect reduced responsiveness to rewarding stimuli. This proposal has been linked to abnormal (reduced) dopamine function in the disorder, because phasic release of dopamine is known to code for reward prediction error (RPE). Nevertheless, few functional imaging studies have examined if patients with negative symptoms show reduced RPE-associated activations. METHODS: Matched groups of DSM-5 schizophrenia patients with high negative symptom scores (HNS, N = 27) or absent negative symptoms (ANS, N = 27) and healthy controls (HC, N = 30) underwent fMRI scanning while they performed a probabilistic monetary reward task designed to generate a measure of RPE. RESULTS: In the HC, whole-brain analysis revealed that RPE was positively associated with activation in the ventral striatum, the putamen, and areas of the lateral prefrontal cortex and orbitofrontal cortex, among other regions. Group comparison revealed no activation differences between the healthy controls and the ANS patients. However, compared to the ANS patients, the HNS patients showed regions of significantly reduced activation in the left ventrolateral and dorsolateral prefrontal cortex, and in the right lingual and fusiform gyrus. HNS and ANS patients showed no activation differences in ventral striatal or midbrain regions-of-interest (ROIs), but the HNS patients showed reduced activation in a left orbitofrontal cortex ROI. CONCLUSIONS: The findings do not suggest that a generalized reduction of RPE signalling underlies negative symptoms. Instead, they point to a more circumscribed dysfunction in the lateral frontal and possibly the orbitofrontal cortex.


Subject(s)
Schizophrenia, Humans, Schizophrenia/diagnostic imaging, Dopamine, Reward, Brain/diagnostic imaging, Frontal Lobe, Magnetic Resonance Imaging
10.
Psychol Med ; : 1-10, 2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36754994

ABSTRACT

BACKGROUND: Mood instability and risk-taking are hallmarks of borderline personality disorder (BPD). Schema modes are combinations of self-reflective evaluations, negative emotional states, and destructive coping strategies common in BPD. When activated, they can push patients with BPD into emotional turmoil and a dissociative state of mind. Our knowledge of the underlying neurocognitive mechanisms driving these changes is incomplete. We hypothesized that in patients with BPD, affective instability is more influenced by reward expectation, outcomes, and reward prediction errors (RPEs) during risky decision-making than in healthy controls. Additionally, we expected that these alterations would be related to schema modes. METHODS: Thirty-two patients with BPD and thirty-one healthy controls were recruited. We used an established behavioral paradigm to measure mood fluctuations during risky decision-making. The impact of expectations and RPEs on momentary mood was quantified by a computational model, and its parameters were estimated with hierarchical Bayesian analysis. Model parameters were compared using High-Density Intervals. RESULTS: We found that model parameters capturing the influence of RPE and Certain Rewards on mood were significantly higher in patients with BPD than in controls. These model parameters correlated significantly with schema modes, but not with depression severity. CONCLUSIONS: BPD is coupled with altered associations between mood fluctuation and reward processing under uncertainty. Our findings seem to be BPD-specific, as they stand in contrast with the correlates of depressive symptoms. Future studies should establish the clinical utility of these alterations, such as predicting or assessing therapeutic response in BPD.
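Computational models of momentary mood in this family (cf. Rutledge-style models) typically express mood as a decaying weighted sum of recent certain rewards, expectations, and RPEs. The sketch below is illustrative only; the weights, decay constant, and function name are assumptions, not the study's model.

```python
def momentary_mood(history, w_cr=0.3, w_ev=0.2, w_rpe=0.5, decay=0.7):
    """Momentary mood as a decaying sum over recent trials.

    history: list of (certain_reward, expected_value, rpe) tuples,
    most recent trial last. Weights and decay are assumed values.
    """
    mood = 0.0
    for lag, (cr, ev, rpe) in enumerate(reversed(history)):
        d = decay ** lag                 # recent trials weigh more
        mood += d * (w_cr * cr + w_ev * ev + w_rpe * rpe)
    return mood

# Two trials: an expectation + surprise trial, then a certain reward.
mood = momentary_mood([(0.0, 0.5, 0.5), (1.0, 0.0, 0.0)])
```

In this framing, the group difference reported above would appear as larger fitted weights on the RPE and certain-reward terms for patients with BPD than for controls.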

11.
Cereb Cortex ; 32(15): 3318-3330, 2022 07 21.
Article in English | MEDLINE | ID: mdl-34921602

ABSTRACT

Despite its omnipresence in everyday interactions and its importance for mental health, mood and its neuronal underpinnings are poorly understood. Computational models can help identify parameters affecting self-reported mood during mood induction tasks. Here, we test if computationally modeled dynamics of self-reported mood during monetary gambling can be used to identify trial-by-trial variations in neuronal activity. To this end, we shifted mood in healthy (N = 24) and depressed (N = 30) adolescents by delivering individually tailored reward prediction errors while recording magnetoencephalography (MEG) data. Following a pre-registered analysis, we hypothesized that the expectation component of mood would be predictive of beta-gamma oscillatory power (25-40 Hz). We also hypothesized that trial variations in the source-localized responses to reward feedback would be predicted by mood and by its reward prediction error component. Through our multilevel statistical analysis, we found confirmatory evidence that beta-gamma power is positively related to reward expectation during mood shifts, with localized sources in the posterior cingulate cortex. We also confirmed reward prediction error to be predictive of trial-level variations in the response of the paracentral lobule. To our knowledge, this is the first study to harness computational models of mood to relate mood fluctuations to variations in neural oscillations with MEG.


Subject(s)
Gambling, Magnetoencephalography, Adolescent, Affect/physiology, Gyrus Cinguli, Humans, Reward
12.
J Neurosci ; 41(8): 1716-1726, 2021 02 24.
Article in English | MEDLINE | ID: mdl-33334870

ABSTRACT

Recent behavioral evidence implicates reward prediction errors (RPEs) as a key factor in the acquisition of episodic memory. Yet, important neural predictions related to the role of RPEs in episodic memory acquisition remain to be tested. Humans (both sexes) performed a novel variable-choice task where we experimentally manipulated RPEs and found support for key neural predictions with fMRI. Our results show that in line with previous behavioral observations, episodic memory accuracy increases with the magnitude of signed (i.e., better/worse-than-expected) RPEs (SRPEs). Neurally, we observe that SRPEs are encoded in the ventral striatum (VS). Crucially, we demonstrate through mediation analysis that activation in the VS mediates the experimental manipulation of SRPEs on episodic memory accuracy. In particular, SRPE-based responses in the VS (during learning) predict the strength of subsequent episodic memory (during recollection). Furthermore, functional connectivity between task-relevant processing areas (i.e., face-selective areas) and hippocampus and ventral striatum increased as a function of RPE value (during learning), suggesting a central role of these areas in episodic memory formation. Our results consolidate reinforcement learning theory and striatal RPEs as key factors subtending the formation of episodic memory. SIGNIFICANCE STATEMENT Recent behavioral research has shown that reward prediction errors (RPEs), a key concept of reinforcement learning theory, are crucial to the formation of episodic memories. In this study, we reveal the neural underpinnings of this process. Using fMRI, we show that signed RPEs (SRPEs) are encoded in the ventral striatum (VS), and crucially, that SRPE VS activity is responsible for the subsequent recollection accuracy of one-shot learned episodic memory associations.


Subject(s)
Learning/physiology, Memory, Episodic, Reinforcement, Psychology, Reward, Ventral Striatum/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
13.
J Neurosci ; 41(23): 5004-5014, 2021 06 09.
Article in English | MEDLINE | ID: mdl-33888609

ABSTRACT

Associating natural rewards with predictive environmental cues is crucial for survival. Dopamine (DA) neurons of the ventral tegmental area (VTA) are thought to play a crucial role in this process by encoding reward prediction errors (RPEs) that have been hypothesized to play a role in associative learning. However, it is unclear whether this signal is still necessary after animals have acquired a cue-reward association. To investigate this, we trained mice to learn a Pavlovian cue-reward association. After learning, mice show robust anticipatory and consummatory licking behavior. As expected, calcium activity of VTA DA neurons goes up for cue presentation as well as reward delivery. Optogenetic inhibition during the moment of reward delivery disrupts learned behavior, even in the continued presence of reward. This effect is more pronounced over trials and persists on the next training day. Moreover, outside of the task, licking behavior and locomotion are unaffected. Similarly to inhibitions during the reward period, we find that inhibiting cue-induced dopamine (DA) signals robustly decreases learned licking behavior, indicating that cue-related DA signals are a potent driver for learned behavior. Overall, we show that inhibition of either of these DA signals directly impairs the expression of learned associative behavior. Thus, continued DA signaling in a learned state is necessary for consolidating Pavlovian associations. SIGNIFICANCE STATEMENT Dopamine (DA) neurons of the ventral tegmental area (VTA) have long been suggested to be necessary for animals to associate environmental cues with the rewards that they predict. Here, we use time-locked optogenetic inhibition of these neurons to show that their activity is directly necessary for performance on a Pavlovian conditioning task, without affecting locomotion per se. These findings provide further support for the direct importance of second-by-second DA neuron activity in associative learning.


Subject(s)
Association Learning/physiology, Conditioning, Classical/physiology, Cues, Dopaminergic Neurons/physiology, Reward, Ventral Tegmental Area/physiology, Animals, Dopamine/metabolism, Male, Mice, Mice, Inbred C57BL
14.
Neuroimage ; 257: 119322, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35577025

ABSTRACT

The feedback-related negativity (FRN) is a well-established electrophysiological correlate of feedback processing. However, there is still an ongoing debate whether the FRN is driven by negative or positive reward prediction errors (RPE), valence of feedback, or mere surprise. Our study disentangles independent contributions of valence, surprise, and RPE on the feedback-related neuronal signal including the FRN and P3 components using the statistical power of a sample of N = 992 healthy individuals. The participants performed a modified time-estimation task, while EEG from 64 scalp electrodes was recorded. Our results show that valence coding is present during the FRN with larger amplitudes for negative feedback. The FRN is further modulated by surprise in a valence-dependent way, being more positive-going for surprising positive outcomes. The P3 was strongly driven by both global and local surprise, with larger amplitudes for unexpected feedback and local deviants. Behavioral adaptations after feedback showed only small associations with the FRN. The results support the theory of the FRN as a representation of a signed RPE. Additionally, our data indicate that surprising positive feedback enhances the EEG response in the time window of the P3. These results corroborate previous findings linking the P3 to the evaluation of PEs in decision-making and learning tasks.


Subject(s)
Evoked Potentials, Feedback, Psychological, Electroencephalography/methods, Evoked Potentials/physiology, Feedback, Feedback, Psychological/physiology, Humans, Reward
15.
Neurobiol Learn Mem ; 193: 107653, 2022 09.
Article in English | MEDLINE | ID: mdl-35772681

ABSTRACT

Classical conditioning is a fundamental learning mechanism in which the Ventral Striatum is generally thought to be the source of inhibition to Ventral Tegmental Area (VTA) Dopamine neurons when a reward is expected. However, recent evidence points to a new candidate: VTA GABA neurons encoding expectation for computing the reward prediction error in the VTA. In this system-level computational model, the VTA GABA signal is hypothesised to be a combination of magnitude and timing, computed in the Pedunculopontine nucleus and Ventral Striatum respectively. This dissociation enables the model to explain recent results wherein Ventral Striatum lesions affected the temporal expectation of the reward but the magnitude of the reward was intact. The model also exhibits other features of classical conditioning, namely progressively decreasing firing for early rewards closer to the actual reward, twin peaks of VTA dopamine during training, and cancellation of US dopamine after training.
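The magnitude/timing dissociation described above can be caricatured in a few lines: a GABA-carried expectation equal to a learned magnitude gated by a timing weight, subtracted from the delivered reward. This is purely illustrative (not the authors' model); every name and value is an assumption.

```python
def gaba_expectation(magnitude, timing_weight):
    """Expectation signal: learned reward magnitude (Pedunculopontine-
    style component) gated by a timing weight (Ventral Striatum-style
    component). Both roles are assumptions for illustration."""
    return magnitude * timing_weight

def dopamine_rpe(reward, magnitude, timing_weight):
    """DA response sketched as delivered reward minus the
    GABA-carried expectation."""
    return reward - gaba_expectation(magnitude, timing_weight)

# At the expected time (timing_weight = 1) a fully predicted reward is
# cancelled; with the timing signal removed (timing_weight = 0, as after
# a Ventral Striatum lesion) the same reward evokes a full response even
# though the learned magnitude is intact.
cancelled = dopamine_rpe(reward=1.0, magnitude=1.0, timing_weight=1.0)
uncancelled = dopamine_rpe(reward=1.0, magnitude=1.0, timing_weight=0.0)
```

The point of the sketch is only the factorization: lesioning the timing component disrupts when expectation is expressed without erasing how much reward is expected.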


Subject(s)
Conditioning, Classical, Ventral Tegmental Area, Conditioning, Classical/physiology, Dopamine, Dopaminergic Neurons/physiology, GABAergic Neurons, Reward, Ventral Tegmental Area/physiology, gamma-Aminobutyric Acid
16.
Psychol Med ; 52(11): 2124-2133, 2022 08.
Article in English | MEDLINE | ID: mdl-33143778

ABSTRACT

BACKGROUND: Internet gaming disorder (IGD) is a type of behavioural addiction. One of the key features of addiction is that excessive exposure to addictive objects (e.g. drugs) reduces the sensitivity of the brain reward system to daily rewards (e.g. money). This is thought to be mediated via signals expressed as dopaminergic reward prediction error (RPE). Emerging evidence highlights blunted RPE signals in drug addictions. However, no study has examined whether IGD also involves the alterations in RPE signals observed in other types of addiction. METHODS: To fill this gap, we used functional magnetic resonance imaging data from 45 IGD and 42 healthy controls (HCs) during a reward-related prediction-error task and utilised a psychophysiological interaction (PPI) analysis to characterise the underlying neural correlates of RPE and related functional connectivity. RESULTS: Relative to HCs, IGD individuals showed impaired reinforcement learning and blunted RPE signals in multiple regions of the brain reward system, including the right caudate, left orbitofrontal cortex (OFC), and right dorsolateral prefrontal cortex (DLPFC). Moreover, the PPI analysis revealed a pattern of hyperconnectivity between the right caudate, right putamen, bilateral DLPFC, and right dorsal anterior cingulate cortex (dACC) in the IGD group. Finally, linear regression suggested that the connection between the right DLPFC and right dACC could significantly predict the variation of RPE signals in the left OFC. CONCLUSIONS: These results highlight disrupted RPE signalling and hyperconnectivity between regions of the brain reward system in IGD. Reinforcement learning deficits may be crucial underlying characteristics of IGD pathophysiology.


Subject(s)
Brain Mapping, Internet Addiction Disorder, Humans, Brain/diagnostic imaging, Brain Mapping/methods, Internet, Internet Addiction Disorder/diagnostic imaging, Magnetic Resonance Imaging, Neural Pathways, Reward
17.
Cereb Cortex ; 31(11): 5006-5014, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34023899

ABSTRACT

Cognitive architectures tasked with swiftly and adaptively processing biologically important events are likely to classify these on two central axes: motivational salience, that is, those events' importance and unexpectedness, and motivational value, the utility they hold, relative to that expected. Because of its temporal precision, electroencephalography provides an opportunity to resolve processes associated with these two axes. A focus of attention for the last two decades has been the feedback-related negativity (FRN), a frontocentral component occurring 240-340 ms after valenced events that are not fully predicted. Both motivational salience and value are present in such events and competing claims have been made for which of these is encoded by the FRN. The present study suggests that motivational value, in the form of a reward prediction error, is the primary determinant of the FRN in active contexts, while in both passive and active contexts, a weaker and earlier overlapping motivational salience component may be present.


Subject(s)
Evoked Potentials, Reward, Affect, Electroencephalography, Psychological Feedback
18.
Cereb Cortex ; 31(2): 1060-1076, 2021 01 05.
Article in English | MEDLINE | ID: mdl-32995836

ABSTRACT

Feedback-related negativity (FRN) is believed to encode reward prediction error (RPE), a term describing whether the outcome is better or worse than expected. However, some studies suggest that it may reflect unsigned prediction error (UPE) instead. Some disagreement remains as to whether FRN is sensitive to the interaction of outcome valence and prediction error (PE) or merely responsive to the absolute size of PE. Moreover, few studies have compared FRN in appetitive and aversive domains to clarify the valence effect or examine PE's quantitative modulation. To investigate the impact of valence and parametrical PE on FRN, we varied the prediction and feedback magnitudes within a probabilistic learning task in valence (gain and loss domains, Experiment 1) and non-valence contexts (pure digits, Experiment 2). Experiment 3 was identical to Experiment 1 except that some blocks emphasized outcome valence, while others highlighted predictive accuracy. Experiments 1 and 2 revealed a UPE encoder; Experiment 3 found an RPE encoder when valence was emphasized and a UPE encoder when predictive accuracy was highlighted. In this investigation, we demonstrate that FRN is sensitive to outcome valence and expectancy violation, exhibiting a preferential response depending on the dimension that is emphasized.
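The two candidate codes contrasted above differ only in whether the sign of the error is kept. A minimal sketch, with illustrative values rather than the task's actual magnitudes:

```python
# Signed reward prediction error (RPE) vs. unsigned prediction error (UPE):
# an RPE encoder distinguishes better-than-expected from worse-than-expected
# outcomes; a UPE encoder responds only to the size of the surprise.

def prediction_errors(outcome, expected):
    rpe = outcome - expected       # signed: positive if better than expected
    upe = abs(outcome - expected)  # unsigned: magnitude of the violation
    return rpe, upe

# An unexpected loss and an unexpected gain of equal size:
print(prediction_errors(-1.0, 0.0))  # (-1.0, 1.0)
print(prediction_errors(+1.0, 0.0))  # (1.0, 1.0)
```

The two example outcomes yield opposite RPEs but identical UPEs, which is exactly the contrast that lets the experiments above decide which quantity the FRN tracks.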


Subject(s)
Brain/physiology, Discrimination Learning/physiology, Evoked Potentials/physiology, Physiological Feedback/physiology, Motivation/physiology, Probability Learning, Adolescent, Electroencephalography/methods, Female, Humans, Male, Photic Stimulation/methods, Principal Component Analysis/methods, Random Allocation, Young Adult
19.
Eur J Neurosci ; 53(11): 3768-3790, 2021 06.
Article in English | MEDLINE | ID: mdl-33840120

ABSTRACT

Difficulty in cessation of drinking, smoking, or gambling has been widely recognized. Conventional theories proposed relative dominance of habitual over goal-directed control, but human studies have not convincingly supported them. Referring to the recently suggested "successor representation (SR)" of states that enables partially goal-directed control, we propose a dopamine-related mechanism that makes resistance to habitual reward-obtaining particularly difficult. We considered that long-standing behavior towards a certain reward without resisting temptation can (but not always) lead to a formation of rigid dimension-reduced SR based on the goal state, which cannot be updated. Then, in our model assuming such rigid reduced SR, whereas no reward prediction error (RPE) is generated at the goal while no resistance is made, a sustained large positive RPE is generated upon goal reaching once the person starts resisting temptation. Such sustained RPE is somewhat similar to the hypothesized sustained fictitious RPE caused by drug-induced dopamine. In contrast, if rigid reduced SR is not formed and states are represented individually as in simple reinforcement learning models, no sustained RPE is generated at the goal. Formation of rigid reduced SR also attenuates the resistance-dependent decrease in the value of the cue for behavior, makes subsequent introduction of punishment after the goal ineffective, and potentially enhances the propensity of nonresistance through the influence of RPEs via the spiral striatum-midbrain circuit. These results suggest that formation of rigid reduced SR makes cessation of habitual reward-obtaining particularly difficult and can thus be a mechanism for addiction, common to substance and nonsubstance reward.
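The successor representation invoked above can be sketched in a few lines: the SR matrix M holds the expected discounted future occupancies of each state under the current policy, and state values are read out linearly from rewards as V(s) = Σ M[s][s'] R(s'). The chain, discount factor, and rewards below are illustrative toy values, not the paper's full model:

```python
# Minimal successor representation (SR) sketch. For a fixed policy with
# transition matrix T, the SR is M = sum over k of gamma^k * T^k, i.e. the
# expected discounted future occupancy of each state.

def sr_matrix(T, gamma=0.9, horizon=100):
    """Accumulate the discounted power series of T up to a finite horizon."""
    n = len(T)
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in M]   # running value of T^k
    g = 1.0
    for _ in range(horizon):
        P = [[sum(P[i][k] * T[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        g *= gamma
        for i in range(n):
            for j in range(n):
                M[i][j] += g * P[i][j]
    return M

def sr_value(M, rewards, s):
    """Value read-out: V(s) = sum over s' of M[s][s'] * R(s')."""
    return sum(m * r for m, r in zip(M[s], rewards))

# Three-state chain ending in a goal state with reward 1:
T = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
M = sr_matrix(T)
print([sr_value(M, [0, 0, 1], s) for s in range(3)])  # values rise toward the goal
```

Because values are computed from M rather than learned state by state, a change in the reward can propagate immediately, which is the partially goal-directed property the abstract relies on; the paper's "rigid reduced SR" corresponds to an M that has collapsed onto the goal dimension and can no longer be updated.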


Subject(s)
Corpus Striatum, Reward, Dopamine, Humans, Motivation, Reinforcement (Psychology)
20.
Addict Biol ; 26(1): e12873, 2021 01.
Article in English | MEDLINE | ID: mdl-31975507

ABSTRACT

Previous studies suggest that individuals with substance use disorder have abnormally large responses to unexpected outcomes (reward prediction errors [RPEs]). However, there is much less information on RPE in individuals at risk of alcohol misuse, prior to the neurobiological adaptations that might result from sustained alcohol use. Here, participants (mean age 23.77 years, range 18-32 years) performed the electrophysiological monetary incentive delay task. This task involved responding to a target stimulus following reward incentive cues to win, or avoid losing, the cued reward while brain activity was recorded with 64-channel EEG. The Alcohol Use Disorders Identification Test (AUDIT) was used to quantify at-risk alcohol use, yielding high (n = 22, mean AUDIT score: 13.82) and low (n = 22, mean AUDIT score: 5.77) alcohol use groups. Trial-by-trial RPEs were estimated using a Rescorla-Wagner reinforcement model based on behavioral data. A single-trial analysis revealed that the feedback-related negativity (FRN) and feedback P3 (fb-P3) event-related potential components were significantly modulated by RPEs, with increased RPE-related fb-P3 amplitude in the high alcohol use group. Next, the mean amplitudes of ERPs elicited by positive and negative RPEs were compared between groups. High alcohol use participants had attenuated FRN amplitude relative to low alcohol use participants for both positive and negative RPEs, but enhanced fb-P3 for both. These results, showing differences in RPE processing in an at-risk group, suggest that RPE is a potential vulnerability marker for alcohol use disorder.
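The Rescorla-Wagner estimation of trial-by-trial RPEs mentioned above reduces to a simple update rule: the prediction error on each trial is the outcome minus the current expectation, and the expectation then moves toward the outcome by a learning-rate fraction of that error. A minimal sketch, with an illustrative learning rate rather than the study's fitted value:

```python
# Rescorla-Wagner estimation of trial-by-trial RPEs from a sequence of
# outcomes; the resulting RPE series can serve as a single-trial ERP
# regressor. The learning rate here is illustrative.

def rescorla_wagner_rpes(outcomes, alpha=0.2, v0=0.0):
    """Return the signed prediction error on each trial."""
    v, rpes = v0, []
    for r in outcomes:
        rpe = r - v          # outcome minus current expectation
        rpes.append(rpe)
        v += alpha * rpe     # expectation tracks recent outcomes
    return rpes

print(rescorla_wagner_rpes([1, 0, 1, 1]))
```

Regressing single-trial FRN or fb-P3 amplitudes on such an RPE series is what lets the analysis above separate the signed RPE modulation from mere outcome delivery.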


Subject(s)
Alcoholism/diagnosis, Reward, Adolescent, Adult, Psychological Anticipation/physiology, Brain/physiology, Cues (Psychology), Evoked Potentials/physiology, Psychological Feedback/physiology, Female, Humans, Male, Motivation, Young Adult