Results 1 - 20 of 32
1.
J Neurosci ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969504

ABSTRACT

Dopamine release in the nucleus accumbens core (NAcC) is generally considered a proxy for phasic firing of dopamine neurons in the ventral tegmental area (VTADA). Thus, dopamine release in NAcC is hypothesized to reflect a unitary role in reward prediction error signalling. However, recent studies have revealed more diverse roles of dopamine neurons, supporting the emerging idea that dopamine regulates learning differently in distinct circuits. To understand whether the NAcC might regulate a unique component of learning, we recorded dopamine release in NAcC while male rats performed a backward conditioning task, in which a reward is followed by a cue. We used this task because it lets us delineate different components of learning, including sensory-specific inhibitory and general excitatory components. Further, we have shown that VTADA neurons are necessary for both the specific and general components of backward associations. Here, we found that dopamine release in NAcC increased to the reward across learning, while the response to the cue that followed decreased as the cue became more expected. This mirrors the dopamine prediction error signal seen during forward conditioning and cannot be accounted for by temporal-difference reinforcement learning (TDRL). Subsequent tests allowed us to dissociate these learning components and revealed that dopamine release in NAcC reflects the general excitatory component of backward associations, but not their sensory-specific component. These results emphasize the importance of examining the distinct functions of different dopamine projections in reinforcement learning.

Significance Statement: Dopamine regulates reinforcement learning. While this system was previously believed to contribute simple value assignment to reward cues, we now know dopamine plays increasingly diverse roles in reinforcement learning. How these diverse roles are achieved in distinct circuits is not fully understood. By using behavioural tasks that examine distinctive components of learning separately, we reveal that NAcC dopamine release contributes to a unique component of learning. Thus, the present study supports a distinct role of NAcC in reinforcement learning, consistent with the idea that different dopamine systems serve different learning functions. Examining the roles of different dopamine projections is an important approach to identifying the neuronal mechanisms underlying the reinforcement-learning deficits observed in schizophrenia and drug addiction.
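To see why TDRL cannot produce this pattern, consider a toy tabular TD(0) model of a backward-conditioning trial (our illustration with arbitrary parameters, not the paper's analysis): because value propagates only backward in time to predictors, a cue that follows reward acquires no value and should evoke no prediction error at all.

```python
# Toy tabular TD(0) model of a backward-conditioning trial:
# baseline -> reward (r = 1 on leaving baseline) -> cue -> end.
GAMMA, ALPHA, N_TRIALS = 0.9, 0.1, 500
V = {"baseline": 0.0, "reward": 0.0, "cue": 0.0}
# (state, reward received on the transition out of that state)
trial = [("baseline", 1.0), ("reward", 0.0), ("cue", 0.0)]

for _ in range(N_TRIALS):
    for i, (s, r) in enumerate(trial):
        v_next = V[trial[i + 1][0]] if i + 1 < len(trial) else 0.0
        delta = r + GAMMA * v_next - V[s]   # TD prediction error
        V[s] += ALPHA * delta

# V['cue'] stays ~0, so TDRL predicts no dopamine modulation at the
# backward cue, unlike the learning-related changes reported above.
print(V)
```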

2.
J Neurosci ; 41(2): 342-353, 2021 01 13.
Article in English | MEDLINE | ID: mdl-33219006

ABSTRACT

Substance use disorders (SUDs) are characterized by maladaptive behavior. The ability to properly adjust behavior according to changes in environmental contingencies necessitates the interlacing of existing memories with updated information. This can be achieved by assigning learning in different contexts to compartmentalized "states." Though not often framed this way, the maladaptive behavior observed in individuals with SUDs may result from a failure to properly encode states because of drug-induced neural alterations. Previous studies found that the dorsomedial striatum (DMS) is important for behavioral flexibility and state encoding, suggesting the DMS may be an important substrate for these effects. Here, we recorded DMS neural activity in cocaine-experienced male rats during a decision-making task where blocks of trials represented distinct states, to probe whether the encoding of state and state-related information is affected by prior drug exposure. We found that DMS medium spiny neurons (MSNs) and fast-spiking interneurons (FSIs) encoded such information and that prior cocaine experience disrupted the evolution of representations both within trials and across recording sessions. Specifically, DMS MSNs and FSIs from cocaine-experienced rats demonstrated higher classification accuracy of trial-specific rules, defined by response direction and value, compared with those drawn from sucrose-experienced rats, and these overly strengthened trial-type representations were related to slower switching behavior and reaction times. These data show that prior cocaine experience paradoxically increases the encoding of state-specific information and rules in the DMS and suggest a model in which abnormally specific and persistent representation of rules throughout trials in the DMS slows value-based decision-making in well-trained subjects.

SIGNIFICANCE STATEMENT: Substance use disorders (SUDs) may result from a failure to properly encode rules guiding situationally appropriate behavior. The dorsomedial striatum (DMS) is thought to be important for such behavioral flexibility and for encoding that defines the situation or "state." This suggests that the DMS may be an important substrate for the maladaptive behavior observed in SUDs. In the current study, we show that prior cocaine experience results in over-encoding of state-specific information and rules in the DMS, which may impair normal adaptive decision-making in the task, akin to what is observed in SUDs.
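As background on this kind of analysis, the sketch below shows how cross-validated classification accuracy of a trial-specific rule can be computed from population spike counts. It is purely illustrative, with simulated data and our own variable names; the paper's actual decoding pipeline will differ.

```python
# Illustrative decoding sketch: classify trial rule from spike counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 40
X = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)  # spike counts
y = rng.integers(0, 2, n_trials)       # trial rule, e.g. left vs right
X[y == 1] += 0.8                       # inject a rule signal for the demo

# cross-validated accuracy of decoding the trial-specific rule
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")  # higher accuracy = stronger encoding
```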


Subject(s)
Cocaine-Related Disorders/psychology; Cocaine/pharmacology; Decision Making/drug effects; Neostriatum/drug effects; Animals; Choice Behavior/drug effects; Interneurons/drug effects; Male; Neurons/drug effects; Odorants; Psychomotor Performance/drug effects; Rats; Rats, Long-Evans; Reaction Time/drug effects; Reward; Self Administration; Sucrose/pharmacology
3.
Annu Rev Psychol ; 70: 53-76, 2019 01 04.
Article in English | MEDLINE | ID: mdl-30260745

ABSTRACT

Making decisions in environments with few choice options is easy. We select the action that results in the most valued outcome. Making decisions in more complex environments, where the same action can produce different outcomes in different conditions, is much harder. In such circumstances, we propose that accurate action selection relies on top-down control from the prelimbic and orbitofrontal cortices over striatal activity through distinct thalamostriatal circuits. We suggest that the prelimbic cortex exerts direct influence over medium spiny neurons in the dorsomedial striatum to represent the state space relevant to the current environment. Conversely, the orbitofrontal cortex is argued to track a subject's position within that state space, likely through modulation of cholinergic interneurons.


Subject(s)
Cerebral Cortex/physiology; Corpus Striatum/physiology; Decision Making/physiology; Executive Function/physiology; Models, Psychological; Animals; Humans
4.
Neurobiol Learn Mem ; 153(Pt B): 131-136, 2018 09.
Article in English | MEDLINE | ID: mdl-29269085

ABSTRACT

The phasic dopamine error signal is currently argued to be synonymous with the prediction error in Sutton and Barto's (1987, 1998) model-free reinforcement-learning algorithm (Schultz et al., 1997). This theory argues that phasic dopamine reflects a cached-value signal that endows reward-predictive cues with the scalar value inherent in reward. Such an interpretation does not envision a role for dopamine in the more complex cognitive representations between events that underlie many forms of associative learning, restricting the role dopamine can play in learning. The cached-value hypothesis of dopamine makes three concrete predictions about when a phasic dopamine response should be seen and what types of learning this signal should be able to promote. We discuss these predictions in light of recent evidence that we believe provides particularly strong tests of their validity. In doing so, we find that while the phasic dopamine signal conforms to a cached-value account in some circumstances, other evidence demonstrates that this signal is not restricted to a model-free cached-value reinforcement-learning signal. In light of this evidence, we argue that the phasic dopamine signal functions more generally to signal violations of expectancies that drive real-world associations between events.
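For reference, the cached-value prediction error the abstract refers to can be stated in standard textbook notation (the generic TD(0) formulation, not an equation reproduced from the cited papers):

```latex
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad
V(s_t) \leftarrow V(s_t) + \alpha \, \delta_t
```

Here V is the cached scalar value of a state or cue, gamma is the temporal discount factor, and alpha is a learning rate; on the cached-value account, phasic dopamine reports delta_t and can therefore endow cues only with scalar value.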


Subject(s)
Association Learning/physiology; Brain/physiology; Dopamine/physiology; Models, Neurological; Reward; Animals; Humans
5.
Neurobiol Learn Mem ; 131: 201-6, 2016 05.
Article in English | MEDLINE | ID: mdl-27112314

ABSTRACT

Underlying many complex behaviors are simple learned associations that allow humans and animals to anticipate the consequences of their actions. The orbitofrontal cortex and basolateral amygdala are two regions crucial to this process. In this review, we go back to basics and discuss the literature implicating both regions in simple paradigms requiring the development of associations between stimuli and the motivationally significant outcomes they predict. Much of the functional research surrounding this ability has suggested that the orbitofrontal cortex and basolateral amygdala play very similar roles in making these predictions. However, electrophysiological data demonstrate critical differences in the way neurons in these regions respond to predictive cues, revealing a difference in their functional roles. On the basis of these data and theories that have come before, we propose that the basolateral amygdala is integral to updating information about cue-outcome contingencies, whereas the orbitofrontal cortex is critical to forming a wider network of past and present associations that are called upon by the basolateral amygdala to benefit future learning episodes. The tendency for orbitofrontal neurons to encode past and present contingencies in distinct neuronal populations may facilitate its role in the formation of complex, high-dimensional, state-specific associations.


Subject(s)
Association Learning; Basolateral Nuclear Complex/physiology; Prefrontal Cortex/physiology; Animals; Humans
6.
Learn Mem ; 22(6): 289-93, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25979990

ABSTRACT

The prelimbic (PL) cortex is argued to promote conditioned fear expression, at odds with appetitive research implicating this region in attentional processing. Consistent with an attentional account, we report that the effect of prelimbic lesions on fear expression depends on the degree of competition between contextual and discrete cues. Further, when competition from contextual cues was low, we found that PL inactivation resulted in animals expressing fear toward irrelevant discrete cues, an effect selective to inactivation during the learning phase and not during retrieval. These data demonstrate that the prelimbic cortex modulates attention toward cues to preferentially direct fear responding on the basis of their predictive value.


Subject(s)
Attention/physiology; Conditioning, Classical/physiology; Fear/physiology; Prefrontal Cortex/physiology; Animals; Cues; Rats
7.
Cereb Cortex ; 24(4): 1066-74, 2014 Apr.
Article in English | MEDLINE | ID: mdl-23236210

ABSTRACT

Previous research suggests that disruption of activity in the prelimbic (PL) cortex produces deficits in tasks requiring preferential attention toward cues that are good predictors of an event. By manipulating cue predictive power, we clarify this role using Pavlovian conditioning. Experiment 1a showed that pretraining excitotoxic lesions of the PL cortex disrupted the ability of animals to distribute attention across stimuli conditioned in compound. Experiment 1b demonstrated that these lesions did not affect the ability to block learning about a stimulus when it was presented simultaneously with another stimulus that had previously been paired with the outcome. However, in a subsequent test, PL-lesioned animals learnt about this blocked cue faster than sham-lesioned animals when the stimulus alone was paired with reinforcement, suggesting these animals did not down-regulate attention toward the redundant cue during blocking. Experiment 2 tested this hypothesis using an unblocking procedure designed to explicitly reveal a down-regulation of attention during blocking. In this procedure, sham-lesioned animals were shown to down-regulate attention during blocking; PL-lesioned animals did not exhibit this effect. We propose that the observed deficits result from a specific failure to down-regulate attention toward redundant cues, indicating disruption of an attentional process described in Mackintosh's attentional theory (Mackintosh NJ. 1975. Psychol Rev. 82:276).


Subject(s)
Attention/physiology; Cerebral Cortex/physiology; Cues; Down-Regulation; Animals; Cerebral Cortex/injuries; Conditioning, Classical/drug effects; Conditioning, Classical/physiology; Excitatory Amino Acid Agonists/toxicity; Extinction, Psychological/drug effects; Extinction, Psychological/physiology; Male; N-Methylaspartate/toxicity; Rats; Rats, Long-Evans; Reinforcement, Psychology
8.
Trends Cogn Sci ; 28(1): 18-29, 2024 01.
Article in English | MEDLINE | ID: mdl-37758590

ABSTRACT

Despite the physiological complexity of the hypothalamus, its role is typically restricted to the initiation or cessation of innate behaviors. For example, theories of the lateral hypothalamus argue that it is a switch that turns feeding 'on' and 'off', as dictated by higher-order structures that determine when feeding is appropriate. However, recent data demonstrate that the lateral hypothalamus is critical for learning about food-related cues. Furthermore, the lateral hypothalamus opposes learning about information that is neutral or distal to food. This reveals the lateral hypothalamus as a unique arbitrator of learning, capable of shifting behavior toward or away from important events. This has relevance for disorders characterized by changes in this balance, including addiction and schizophrenia. More generally, it suggests that hypothalamic function is more complex than increasing or decreasing innate behaviors.


Subject(s)
Hypothalamic Area, Lateral; Hypothalamus; Humans; Hypothalamic Area, Lateral/physiology; Hypothalamus/physiology; Learning/physiology; Cues; Cognition; Reward
9.
Nat Neurosci ; 27(4): 728-736, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38396258

ABSTRACT

To make adaptive decisions, we build an internal model of the associative relationships in an environment and use it to make predictions and inferences about specific available outcomes. Detailed, identity-specific cue-reward memories are a core feature of such cognitive maps. Here we used fiber photometry, cell-type- and pathway-specific optogenetic manipulation, Pavlovian cue-reward conditioning, and decision-making tests in male and female rats to reveal that ventral tegmental area dopamine (VTADA) projections to the basolateral amygdala (BLA) drive the encoding of identity-specific cue-reward memories. Dopamine is released in the BLA during cue-reward pairing; VTADA→BLA activity is necessary and sufficient to link the identifying features of a reward to a predictive cue but does not assign general incentive properties to the cue or mediate reinforcement. These data reveal a dopaminergic pathway for the learning that supports adaptive decision-making and help explain how VTADA neurons achieve their emerging multifaceted role in learning.


Subject(s)
Basolateral Nuclear Complex; Rats; Male; Female; Animals; Basolateral Nuclear Complex/physiology; Dopamine; Learning/physiology; Reward; Reinforcement, Psychology; Cues
10.
Nat Neurosci ; 27(7): 1253-1259, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38741021

ABSTRACT

Dopamine neurons in the ventral tegmental area support intracranial self-stimulation (ICSS), yet the cognitive representations underlying this phenomenon remain unclear. Here, 20-Hz stimulation of dopamine neurons, which approximates a physiologically relevant prediction error, was not sufficient to support ICSS beyond a continuously reinforced schedule and did not endow cues with general or specific value. However, 50-Hz stimulation of dopamine neurons was sufficient to drive robust ICSS and was represented as a specific reward that motivated behavior. The frequency dependence of this effect is due to the rate (not the number) of action potentials produced by dopamine neurons, which differentially modulates dopamine release downstream.


Subject(s)
Dopaminergic Neurons; Reward; Self Stimulation; Ventral Tegmental Area; Animals; Dopaminergic Neurons/physiology; Self Stimulation/physiology; Male; Ventral Tegmental Area/physiology; Mesencephalon/physiology; Action Potentials/physiology; Cognition/physiology; Electric Stimulation/methods; Macaca mulatta; Dopamine/metabolism
11.
Behav Brain Res ; 417: 113587, 2022 01 24.
Article in English | MEDLINE | ID: mdl-34543677

ABSTRACT

Prior experience changes the way we learn about our environment. Stress predisposes individuals to developing psychological disorders, just as positive experiences protect against this eventuality (Kirkpatrick & Heller, 2014; Koenigs & Grafman, 2009; Pechtel & Pizzagalli, 2011). Yet current models of how the brain processes information often do not consider a role for prior experience. The considerable literature examining how stress impacts the brain is an exception. This research demonstrates that stress can bias the interpretation of ambiguous events toward being aversive in nature, owing to changes in amygdala physiology (Holmes et al., 2013; Perusini et al., 2016; Rau et al., 2005; Shors et al., 1992). This is thought to be an important model for how people develop anxiety disorders, such as post-traumatic stress disorder (PTSD; Rau et al., 2005). However, more recent evidence suggests that experience with reward learning can also change the neural circuits involved in learning about fear (Sharpe et al., 2021). Specifically, the lateral hypothalamus, a region typically thought to be restricted to modulating feeding and reward behavior, can be recruited to encode fear memories after experience with reward learning. This review discusses the literature on how stress and reward change the way we acquire and encode memories for aversive events, offering a testable model of how these regions may interact to promote either adaptive or maladaptive fear memories.


Subject(s)
Amygdala/physiology; Fear/physiology; Hypothalamic Area, Lateral/physiology; Memory/physiology; Reward; Brain/physiology; Humans; Learning/physiology
12.
Curr Biol ; 32(14): 3210-3218.e3, 2022 07 25.
Article in English | MEDLINE | ID: mdl-35752165

ABSTRACT

For over two decades, phasic activity in midbrain dopamine neurons was considered synonymous with the prediction error in temporal-difference reinforcement learning.1-4 Central to this proposal is the notion that reward-predictive stimuli become endowed with the scalar value of predicted rewards. When these cues are subsequently encountered, their predictive value is compared to the value of the actual reward received, allowing for the calculation of prediction errors.5,6 Phasic firing of dopamine neurons was proposed to reflect this computation,1,2 facilitating the backpropagation of value from the predicted reward to the reward-predictive stimulus, thus reducing future prediction errors. There are two critical assumptions of this proposal: (1) that dopamine errors can only facilitate learning about scalar value and not more complex features of predicted rewards, and (2) that the dopamine signal can only be involved in anticipatory cue-reward learning, in which cues or actions precede rewards. Recent work7-15 has challenged the first assumption, demonstrating that phasic dopamine signals across species are involved in learning about more complex features of predicted outcomes, in a manner that transcends this value computation. Here, we tested the validity of the second assumption. Specifically, we examined whether phasic midbrain dopamine activity is necessary for backward conditioning, in which a neutral cue reliably follows a rewarding outcome.16-20 Using a specific Pavlovian-to-instrumental transfer (PIT) procedure,21-23 we show that rats learn both excitatory and inhibitory components of a backward association, and that this association entails knowledge of the specific identities of the reward and cue. We demonstrate that brief optogenetic inhibition of VTADA neurons, timed to the transition between the reward and cue, reduces both of these components of backward conditioning. These findings suggest that VTADA neurons are capable of facilitating associations between contiguously occurring events, regardless of the content of those events. We conclude that these data may be in line with suggestions that the VTADA error acts as a universal teaching signal. This may provide insight into why dopamine function has been implicated in myriad psychological disorders characterized by very distinct reinforcement-learning deficits.
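To make the backpropagation-of-value computation concrete, here is a toy TD sketch (our illustration with arbitrary parameters, not the paper's model) in which the prediction error migrates from reward delivery to cue onset across forward-conditioning trials:

```python
# Toy TD illustration of value backpropagating from reward to cue
# across forward-conditioning trials (cue -> reward, r = 1).
GAMMA, ALPHA = 0.9, 0.2
v_cue = 0.0  # cached value of the reward-predictive cue

for t in range(1, 101):
    # trial onset is unpredictable, so pre-cue value is ~0
    delta_cue = GAMMA * v_cue - 0.0        # error at cue onset
    delta_reward = 1.0 + 0.0 - v_cue       # error at reward delivery
    v_cue += ALPHA * delta_reward          # value moves onto the cue
    if t in (1, 10, 100):
        print(t, round(delta_cue, 2), round(delta_reward, 2))
# Over trials delta_reward -> 0 while delta_cue -> GAMMA:
# the teaching signal shifts from the reward to the cue.
```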


Subject(s)
Dopamine; Reward; Animals; Cues; Dopamine/physiology; Dopaminergic Neurons/physiology; Learning/physiology; Rats; Reinforcement, Psychology
13.
Neuropsychopharmacology ; 47(3): 628-640, 2022 02.
Article in English | MEDLINE | ID: mdl-34588607

ABSTRACT

Schizophrenia is a severe psychiatric disorder affecting 21 million people worldwide. People with schizophrenia suffer from symptoms including psychosis and delusions, apathy, anhedonia, and cognitive deficits. Strikingly, schizophrenia is characterised by a learning paradox: difficulty learning from rewarding events alongside 'overlearning' about irrelevant or neutral information. While dysfunction in dopaminergic signalling has long been linked to the pathophysiology of schizophrenia, a cohesive framework that accounts for this learning paradox remains elusive. Recently, there has been an explosion of new research showing that midbrain dopamine contributes to reinforcement learning in complex ways not previously envisioned. These new data bring new possibilities for how dopamine signalling contributes to the symptomatology of schizophrenia. Building on recent work, we present a new neural framework for how specific dopamine circuits might contribute to this learning paradox in schizophrenia, in the context of models of reinforcement learning. Further, we discuss avenues of preclinical research, using cutting-edge neuroscience techniques, in which aspects of this model may be tested. Ultimately, it is hoped that this review will spur more research utilising specific reinforcement-learning paradigms in preclinical models of schizophrenia, to reconcile seemingly disparate symptomatology and develop more efficient therapeutics.


Subject(s)
Psychotic Disorders; Schizophrenia; Dopamine/physiology; Humans; Psychotic Disorders/psychology; Reinforcement, Psychology; Reward
14.
Nat Neurosci ; 25(8): 1071-1081, 2022 08.
Article in English | MEDLINE | ID: mdl-35902648

ABSTRACT

Studies investigating the neural mechanisms by which associations between cues and predicted outcomes control behavior often use associative learning frameworks to understand the neural control of behavior. These frameworks do not always account for the full range of effects that novelty can have on behavior and future associative learning. Here, in mice, we show that dopamine in the nucleus accumbens core was evoked by novel, neutral stimuli, and that the trajectory of this response over time tracked habituation to these stimuli. Habituation to novel cues before associative learning reduced future associative learning, a phenomenon known as latent inhibition. Crucially, trial-by-trial dopamine response patterns tracked this phenomenon. Optogenetic manipulation of dopamine responses to the cue during the habituation period bidirectionally influenced future associative learning. Thus, dopamine signaling in the nucleus accumbens core has a causal role in novelty-based learning that cannot be predicted from purely associative factors.


Subject(s)
Dopamine; Nucleus Accumbens; Animals; Conditioning, Classical/physiology; Cues; Dopamine/physiology; Memory; Mice; Nucleus Accumbens/physiology
15.
Elife ; 11, 2022 08 23.
Article in English | MEDLINE | ID: mdl-35997072

ABSTRACT

Quantitative descriptions of animal behavior are essential to study the neural substrates of cognitive and emotional processes. Analyses of naturalistic behaviors are often performed by hand or with expensive, inflexible commercial software. Recently, machine learning methods for markerless pose estimation enabled automated tracking of freely moving animals, including in labs with limited coding expertise. However, classifying specific behaviors based on pose data requires additional computational analyses and remains a significant challenge for many groups. We developed BehaviorDEPOT (DEcoding behavior based on POsitional Tracking), a simple, flexible software program that can detect behavior from video timeseries and can analyze the results of experimental assays. BehaviorDEPOT calculates kinematic and postural statistics from keypoint tracking data and creates heuristics that reliably detect behaviors. It requires no programming experience and is applicable to a wide range of behaviors and experimental designs. We provide several hard-coded heuristics. Our freezing detection heuristic achieves above 90% accuracy in videos of mice and rats, including those wearing tethered head-mounts. BehaviorDEPOT also helps researchers develop their own heuristics and incorporate them into the software's graphical interface. Behavioral data is stored framewise for easy alignment with neural data. We demonstrate the immediate utility and flexibility of BehaviorDEPOT using popular assays including fear conditioning, decision-making in a T-maze, open field, elevated plus maze, and novel object exploration.
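The heuristic approach the abstract describes can be illustrated with a minimal freezing detector built from keypoint speed plus a minimum bout duration. This is a sketch in the spirit of the paper, not BehaviorDEPOT's actual implementation; the function name, thresholds, and array layout are all our assumptions.

```python
# Minimal freezing-detection heuristic (illustrative; thresholds invented).
import numpy as np

def detect_freezing(keypoints, fps, vel_thresh=0.5, min_bout_s=1.0):
    """keypoints: (n_frames, n_keypoints, 2) array of tracked x,y positions."""
    # frame-to-frame speed of each keypoint, averaged over the body
    speed = np.linalg.norm(np.diff(keypoints, axis=0), axis=2).mean(axis=1)
    candidate = speed < vel_thresh            # near-motionless frames
    # keep only bouts lasting at least min_bout_s
    freezing = np.zeros_like(candidate)
    min_len = int(min_bout_s * fps)
    start = None
    for i, frozen in enumerate(candidate):
        if frozen and start is None:
            start = i
        elif not frozen and start is not None:
            if i - start >= min_len:
                freezing[start:i] = True
            start = None
    if start is not None and len(candidate) - start >= min_len:
        freezing[start:] = True
    return freezing  # framewise boolean mask, easy to align with neural data
```

Real pipelines would typically add smoothing, per-video calibration, and validation against hand-scored frames.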


Subject(s)
Behavior, Animal; Software; Animals; Biomechanical Phenomena; Machine Learning; Rats
16.
Front Behav Neurosci ; 15: 745388, 2021.
Article in English | MEDLINE | ID: mdl-34671247

ABSTRACT

Higher-order conditioning involves learning causal links between multiple events, which then allows one to make novel inferences. For example, observing a correlation between two events (e.g., a neighbor wearing a particular sports jersey) later helps one make new predictions based on this knowledge (e.g., the neighbor's wife's favorite sports team). This type of learning is important because it allows one to benefit maximally from previous experiences and perform adaptively in complex environments where many things are ambiguous or uncertain. Two laboratory procedures are often used to probe this kind of learning: second-order conditioning (SOC) and sensory preconditioning (SPC). In SOC, we first teach subjects that there is a relationship between a stimulus and an outcome (e.g., a tone that predicts food). Then, an additional stimulus is taught to precede the predictive stimulus (e.g., a light leads to the food-predictive tone). In SPC, this order of training is reversed: the two neutral stimuli (i.e., light and tone) are first paired together, and then the tone is paired separately with food (both designs are schematized below). Interestingly, in both SPC and SOC, humans, rodents, and even insects and other invertebrates will later predict that both the light and the tone are likely to lead to food, even though they only ever experienced the tone directly paired with food. While these processes are procedurally similar, a wealth of research suggests they are associatively and neurobiologically distinct. However, midbrain dopamine, a neurotransmitter long thought to facilitate basic Pavlovian conditioning in a relatively simplistic manner, appears critical for both SOC and SPC. These findings suggest dopamine may contribute to learning in ways that transcend differences in associative and neurological structure. We discuss how research demonstrating that dopamine is critical to both SOC and SPC places it at the center of more complex forms of cognition (e.g., spatial navigation and causal reasoning). Further, we suggest that these more sophisticated learning procedures, coupled with recent advances in recording and manipulating dopamine neurons, represent a new path forward in understanding dopamine's contribution to learning and cognition.
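A compact schematic of the two designs as described above (labels ours):

    SOC: Stage 1: tone -> food    Stage 2: light -> tone    Test: light (predicts food?)
    SPC: Stage 1: light -> tone   Stage 2: tone -> food     Test: light (predicts food?)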

17.
Nat Neurosci ; 24(3): 391-400, 2021 03.
Article in English | MEDLINE | ID: mdl-33589832

ABSTRACT

Experimental research controls for past experience, yet prior experience influences how we learn. Here, we tested whether we could recruit a neural population that usually encodes rewards to encode aversive events. Specifically, we found that GABAergic neurons in the lateral hypothalamus (LH) were not involved in learning about fear in naïve rats. However, if rats had prior experience with rewards, LH GABAergic neurons became important for learning about fear. Interestingly, inhibition of these neurons paradoxically enhanced learning about neutral sensory information, regardless of prior experience, suggesting that LH GABAergic neurons normally oppose learning about irrelevant information. These experiments suggest that prior experience shapes the neural circuits recruited for future learning in a highly specific manner, calling into question the neural boundaries for learning particular types of information that have been drawn from work in naïve subjects.


Subject(s)
Conditioning, Classical/physiology; Fear/physiology; GABAergic Neurons/physiology; Hypothalamic Area, Lateral/physiology; Learning/physiology; Animals; Cues; Female; Male; Neural Pathways/physiology; Rats; Rats, Long-Evans; Rats, Transgenic; Reward
18.
Elife ; 9, 2020 08 24.
Article in English | MEDLINE | ID: mdl-32831173

ABSTRACT

The orbitofrontal cortex (OFC) is necessary for inferring value in tests of model-based reasoning, including in sensory preconditioning. This involvement could be accounted for by representation of value or by representation of broader associative structure. We recently reported neural correlates of such broader associative structure in OFC during the initial phase of sensory preconditioning (Sadacca et al., 2018). Here, we used optogenetic inhibition of OFC to test whether these correlates might be necessary for value inference during later probe testing. We found that inhibition of OFC during cue-cue learning abolished value inference during the probe test, inference subsequently shown in control rats to be sensitive to devaluation of the expected reward. These results demonstrate that OFC must be online during cue-cue learning, consistent with the argument that the correlates previously observed are not simply downstream readouts of sensory processing and instead contribute to building the associative model supporting later behavior.


Subject(s)
Conditioning, Psychological/physiology; Learning/physiology; Prefrontal Cortex/physiology; Animals; Cues; Female; Male; Optogenetics; Rats; Rats, Long-Evans
19.
Nat Commun ; 11(1): 106, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31913274

ABSTRACT

Dopamine neurons are proposed to signal the reward prediction error in model-free reinforcement learning algorithms. This term represents the unpredicted or 'excess' value of the rewarding event, value that is then added to the intrinsic value of any antecedent cues, contexts or events. To support this proposal, proponents cite evidence that artificially induced dopamine transients cause lasting changes in behavior. Yet these studies do not generally assess learning under conditions where an endogenous prediction error would occur. Here, to address this, we conducted three experiments in which we optogenetically activated dopamine neurons while rats were learning associative relationships, both with and without reward. In each experiment, the antecedent cues failed to acquire value and instead entered into associations with the later events, whether valueless cues or valued rewards. These results show that in learning situations appropriate for the appearance of a prediction error, dopamine transients support associative, rather than model-free, learning.


Subject(s)
Dopamine/metabolism; Dopaminergic Neurons/physiology; Learning; Animals; Behavior, Animal; Conditioning, Classical; Cues; Female; Male; Models, Neurological; Rats; Reward
20.
Nat Neurosci ; 23(2): 176-178, 2020 02.
Article in English | MEDLINE | ID: mdl-31959935

ABSTRACT

Reward-evoked dopamine transients are well established as prediction errors. However, the central tenet of temporal-difference accounts, that similar transients evoked by reward-predictive cues also function as errors, remains untested. In the present communication we addressed this by showing that optogenetically shunting dopamine activity at the start of a reward-predicting cue prevents second-order conditioning without affecting blocking. These results indicate that cue-evoked transients function as temporal-difference prediction errors rather than reward predictions.
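The tenet being tested can be stated compactly in generic TD notation (our paraphrase, not the paper's own equations): at the onset of a trained cue X, the cue-evoked transient should itself be a prediction error,

```latex
\delta_{\mathrm{onset}} = \gamma V(X) - V(\mathrm{pre}) \approx \gamma V(X) > 0
```

so it should be able to train value into a preceding neutral cue A via V(A) <- V(A) + alpha * delta_onset, which is exactly what second-order conditioning measures; shunting the transient should therefore abolish second-order conditioning while leaving the learned prediction V(X), which supports blocking, intact.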


Subject(s)
Association Learning/physiology; Brain/physiology; Dopamine/metabolism; Animals; Conditioning, Operant/physiology; Cues; Dopaminergic Neurons/physiology; Rats; Rats, Long-Evans; Rats, Transgenic; Reward