Results 1 - 20 of 174
1.
Nature ; 590(7847): 606-611, 2021 02.
Article in English | MEDLINE | ID: mdl-33361819

ABSTRACT

How do we learn about what to learn about? Specifically, how do the neural elements in our brain generalize what has been learned in one situation to recognize the common structure of, and speed learning in, other similar situations? We know this happens because we become better at solving new problems (learning and deploying schemas; refs. 1-5) through experience. However, we have little insight into this process. Here we show that using prior knowledge to facilitate learning is accompanied by the evolution of a neural schema in the orbitofrontal cortex. Single units were recorded from rats deploying a schema to learn a succession of odour-sequence problems. With learning, orbitofrontal cortex ensembles converged onto a low-dimensional neural code across both problems and subjects; this neural code represented the common structure of the problems, and its evolution accelerated across their learning. These results demonstrate the formation and use of a schema in a prefrontal brain region to support a complex cognitive operation. Our results not only reveal a role for the orbitofrontal cortex in learning but also have implications for using ensemble analyses to tap into complex cognitive functions.


Subject(s)
Learning/physiology, Models, Neurological, Prefrontal Cortex/physiology, Acceleration, Animals, Cognition/physiology, Logic, Male, Neurons/physiology, Odorants/analysis, Prefrontal Cortex/cytology, Rats, Rats, Long-Evans, Reward
2.
J Neurosci ; 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39122558

ABSTRACT

The orbitofrontal cortex (OFC) is crucial for tracking various aspects of expected outcomes, thereby helping to guide choices and support learning. Our previous study showed that the effects of reward timing and size on the activity of single units in OFC were dissociable when these attributes were manipulated independently (Roesch et al., 2006). However, in real-life decision-making scenarios, outcome features often change simultaneously, so here we investigated how OFC neurons in male rats integrate information about the timing and identity (flavor) of reward and respond to changes in these features, depending on whether the features were changed simultaneously or separately. We found that a substantial number of OFC neurons fired differentially to immediate versus delayed reward and to the different reward flavors. However, contrary to the previous study, selectivity for timing was strongly correlated with selectivity for identity. Taken together with the previous research, these results suggest that when reward features are correlated, OFC tends to "pack" them into unitary constructs, whereas when they are independent, OFC tends to "crack" them into separate constructs. Furthermore, we found that when both reward timing and flavor were changed, reward-responsive OFC neurons showed unique activity patterns preceding and during the omission of an expected reward. Interestingly, this OFC activity is similar to, and slightly precedes, the ventral tegmental area dopamine (VTA DA) activity observed in a previous study (Takahashi et al., 2023), consistent with a role for OFC in providing predictive information to VTA DA neurons.

SIGNIFICANCE STATEMENT: Although multiple features of outcomes can change simultaneously in real-life decision-making scenarios, how OFC neurons integrate information about reward timing and identity, and how they respond to changes in these features, has remained unexplored. Here we found that OFC neurons integrate information about reward timing and identity when the two features change simultaneously. Combined with prior research (Roesch et al., 2006), these findings suggest that OFC tends to integrate correlated features into unitary constructs while segregating independent features into separate constructs. Additionally, we observed distinct activity patterns in reward-responsive neurons preceding the omission of an expected reward when both identity and timing changed. This implies that OFC might convey predictions to VTA that track reward timing separately for each reward identity.

3.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38346894

ABSTRACT

When rats are given discrete choices between social interactions with a peer and opioid or psychostimulant drugs, they choose social interaction, even after extensive drug self-administration experience. Studies show that, like drug and nondrug food reinforcers, social interaction is an operant reinforcer and induces dopamine release. However, these studies were conducted with same-sex peers. We examined whether peer sex influences operant social interaction and what roles the estrous cycle and striatal dopamine play in same- versus opposite-sex social interaction. We trained male and female rats (n = 13 responders/12 peers) to lever-press (fixed-ratio 1 [FR1] schedule) for 15 s of access to a same- or opposite-sex peer for 16 d (8 d/sex) while tracking the females' estrous cycle. Next, we transfected GRAB-DA2m and implanted optic fibers into the nucleus accumbens (NAc) core and dorsomedial striatum (DMS). We then retrained the rats for 15 s social interaction (FR1 schedule) for 16 d (8 d/sex) and recorded striatal dopamine during operant responding for a peer for 8 d (4 d/sex). Finally, we assessed economic demand by manipulating FR requirements for a peer (10 d/sex). In male, but not female, rats, operant responding was higher for the opposite-sex peer. Females' estrous cycle fluctuations had no effect on operant social interaction. Striatal dopamine signals for operant social interaction were dependent on the peer's sex and striatal region (NAc core vs. DMS). Results indicate that estrous cycle fluctuations did not influence operant social interaction and that NAc core and DMS dopamine activity reflect sex-dependent features of volitional social interaction.


Subject(s)
Conditioning, Operant, Dopamine, Rats, Animals, Male, Female, Dopamine/pharmacology, Social Interaction, Corpus Striatum, Dopamine Uptake Inhibitors/pharmacology, Nucleus Accumbens
4.
Neurobiol Learn Mem ; 207: 107869, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38042330

ABSTRACT

The orbitofrontal cortex (OFC) is often proposed to function as a value integrator; however, alternative accounts focus on its role in representing associative structures that specify the probability and sensory identity of future outcomes. These two accounts make different predictions about how this area should respond to conditioned inhibitors of reward, since in the former, neural activity should reflect the negative value of the inhibitor, whereas in the latter, it should track the estimated probability of a future reward based on all cues present. Here, we assessed these predictions by recording from small groups of neurons in the lateral OFC of rats during training in a conditioned inhibition design. Rats showed negative summation when the inhibitor was compounded with a novel excitor, suggesting that they learned to respond to the conditioned inhibitor appropriately. Against this backdrop, we found unit and population responses that scaled with expected reward value on excitor + inhibitor compound trials. However, the responses of these neurons did not differentiate between the conditioned inhibitor and a neutral cue when both were presented in isolation. Further, when the ensemble patterns were analyzed, activity to the conditioned inhibitor did not classify according to putative negative value. Instead, it classified with a same-modality neutral cue when presented alone and as a unique item when presented in compound with a novel excitor. This pattern of results supports the notion that OFC encodes a model of the causal structure of the environment rather than either the modality or the value of cues.


Subject(s)
Conditioning, Classical, Neurons, Rats, Animals, Neurons/physiology, Conditioning, Classical/physiology, Prefrontal Cortex/physiology, Learning, Reward, Cues
5.
PLoS Biol ; 18(1): e3000578, 2020 01.
Article in English | MEDLINE | ID: mdl-31961854

ABSTRACT

Internal representations of relationships between events in the external world can be utilized to infer outcomes when direct experience is lacking. This process is thought to involve the orbitofrontal cortex (OFC) and hippocampus (HPC), but there is little evidence regarding the relative role of these areas and their interactions in inference. Here, we used a sensory preconditioning task and pattern-based neuroimaging to study this question. We found that associations among value-neutral cues were acquired in both regions during preconditioning but that value-related information was only represented in the OFC at the time of the probe test. Importantly, inference was accompanied by representations of associated cues and inferred outcomes in the OFC, as well as by increased HPC-OFC connectivity. These findings suggest that the OFC and HPC represent only partially overlapping information and that interactions between the two regions support model-based inference.


Subject(s)
Conditioning, Psychological/physiology, Hippocampus/physiology, Models, Psychological, Neural Pathways/physiology, Prefrontal Cortex/physiology, Adult, Cues, Female, Hippocampus/cytology, Humans, Male, Prefrontal Cortex/cytology, Probability Learning, Recognition, Psychology/physiology, Young Adult
6.
PLoS Comput Biol ; 18(3): e1009897, 2022 03.
Article in English | MEDLINE | ID: mdl-35333867

ABSTRACT

There is no single way to represent a task. Indeed, despite experiencing the same task events and contingencies, different subjects may form distinct task representations. As experimenters, we often assume that subjects represent the task as we envision it. However, such a representation cannot be taken for granted, especially in animal experiments, where we cannot deliver explicit instruction regarding the structure of the task. Here, we tested how rats represent an odor-guided choice task in which two odor cues indicated which of two responses would lead to reward, whereas a third odor indicated free choice between the two responses. A parsimonious task representation would allow animals to learn from the forced trials which option is better to choose on the free-choice trials. However, animals may not necessarily generalize across odors in this way. We fit reinforcement-learning models that use different task representations to the trial-by-trial choice behavior of individual rats performing this task and quantified the degree to which each animal used the more parsimonious representation, generalizing across trial types. Model comparison revealed that most rats did not acquire this representation despite extensive experience. Our results demonstrate the importance of formally testing possible task representations that can afford the observed behavior, rather than assuming that animals' task representations abide by the generative task structure that governs the experimental design.
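
As a rough illustration of the model-comparison logic described in this abstract (not the authors' actual code, trial format, or parameterization), the sketch below fits a simple Q-learning model to hypothetical trial-by-trial choices under two candidate task representations, one that generalizes response values from forced to free-choice trials and one that keeps them separate, and compares the fits by BIC. All names, parameter bounds, and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical trial format: (trial_type, choice, reward), where trial_type is
# "forced_left", "forced_right", or "free"; choice is 0 (left) or 1 (right).

def neg_log_likelihood(params, trials, generalize):
    """Q-learning likelihood under one of two task representations.

    generalize=True  -> one Q-value per response, shared across forced and
                        free trials (the 'parsimonious' representation).
    generalize=False -> free-choice trials keep their own Q-values, so nothing
                        learned on forced trials transfers to free choices.
    """
    alpha, beta = params
    q = {}          # keyed by (state, action), default value 0.5
    nll = 0.0
    for trial_type, choice, reward in trials:
        state = "any" if generalize else trial_type
        q_l = q.get((state, 0), 0.5)
        q_r = q.get((state, 1), 0.5)
        p_r = 1.0 / (1.0 + np.exp(-beta * (q_r - q_l)))   # softmax over 2 options
        if trial_type == "free":                          # only free trials are chosen
            nll -= np.log(p_r if choice == 1 else 1.0 - p_r)
        q_c = q.get((state, choice), 0.5)                 # prediction-error update
        q[(state, choice)] = q_c + alpha * (reward - q_c)
    return nll

def fit_and_compare(trials):
    """Fit both representations; lower BIC = better account of the choices."""
    n_free = sum(1 for t in trials if t[0] == "free")
    results = {}
    for label, generalize in [("generalizing", True), ("separate-states", False)]:
        fit = minimize(neg_log_likelihood, x0=[0.2, 3.0],
                       args=(trials, generalize),
                       bounds=[(0.01, 1.0), (0.1, 20.0)])
        results[label] = 2 * fit.fun + 2 * np.log(max(n_free, 1))  # 2 free params
    return results

# Toy data: forced trials teach that 'right' pays off; the simulated rat then
# picks right on ~80% of free-choice trials.
rng = np.random.default_rng(0)
forced = [("forced_right", 1, 1), ("forced_left", 0, 0)] * 30
free = [("free", int(c), int(c)) for c in (rng.random(20) < 0.8)]
print(fit_and_compare(forced + free))
```

Under this toy scheme, a lower BIC for the generalizing variant would indicate that forced-trial learning transfers to free choices, which is the parsimonious representation the study tests against alternatives.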


Subject(s)
Odorants, Reward, Animals, Cues, Generalization, Psychological, Humans, Rats, Reinforcement, Psychology
7.
J Neurosci ; 41(9): 1941-1951, 2021 03 03.
Article in English | MEDLINE | ID: mdl-33446521

ABSTRACT

Animals can categorize the environment into "states," defined by unique sets of available action-outcome contingencies in different contexts. Doing so helps them choose appropriate actions and make accurate outcome predictions when in each given state. State maps have been hypothesized to be held in the orbitofrontal cortex (OFC), an area implicated in decision-making and encoding information about outcome predictions. Here we recorded neural activity in OFC in 6 male rats to test state representations. Rats were trained on an odor-guided choice task consisting of five trial blocks containing distinct sets of action-outcome contingencies, constituting states, with unsignaled transitions between them. OFC neural ensembles were analyzed using decoding algorithms. Results indicate that the vast majority of OFC neurons contributed to representations of the current state at any point in time, independent of odor cues and reward delivery, even at the level of individual neurons. Across state transitions, these representations gradually integrated evidence for the new state; the rate at which this integration happened in the prechoice part of the trial was related to how quickly the rats' choices adapted to the new state. Finally, OFC representations of outcome predictions, often thought to be the primary function of OFC, were dependent on the accuracy of OFC state representations.SIGNIFICANCE STATEMENT A prominent hypothesis proposes that orbitofrontal cortex (OFC) tracks current location in a "cognitive map" of state space. Here we tested this idea in detail by analyzing neural activity recorded in OFC of rats performing a task consisting of a series of states, each defined by a set of available action-outcome contingencies. Results show that most OFC neurons contribute to state representations and that these representations are related to the rats' decision-making and OFC reward predictions. These findings suggest new interpretations of emotional dysregulation in pathologies, such as addiction, which have long been known to be related to OFC dysfunction.
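
As a minimal sketch of the kind of ensemble decoding analysis mentioned in this abstract (not the authors' pipeline; the data, dimensions, and decoder choice here are made up for illustration), the snippet below trains a cross-validated multinomial classifier to decode a trial's block ("state") from a synthetic trials-by-neurons firing-rate matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an ensemble recording: per-trial firing rates plus a
# label giving which of 5 blocks ("states") each trial came from.
rng = np.random.default_rng(1)
n_trials, n_neurons, n_states = 500, 40, 5
states = rng.integers(0, n_states, size=n_trials)
tuning = rng.normal(0, 1, size=(n_states, n_neurons))        # state-specific mean rates
rates = tuning[states] + rng.normal(0, 1.5, size=(n_trials, n_neurons))  # trial noise

# Cross-validated classification accuracy; chance is 1/n_states = 0.20.
decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, rates, states, cv=5).mean()
print(f"decoded state, mean CV accuracy: {acc:.2f} (chance 0.20)")
```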


Subject(s)
Choice Behavior/physiology, Neurons/physiology, Prefrontal Cortex/physiology, Reward, Animals, Male, Rats, Rats, Long-Evans
8.
J Neurosci ; 41(32): 6933-6945, 2021 08 11.
Article in English | MEDLINE | ID: mdl-34210776

ABSTRACT

The orbitofrontal cortex (OFC) and hippocampus share striking cognitive and functional similarities. As a result, both structures have been proposed to encode "cognitive maps" that provide useful scaffolds for planning complex behaviors. However, while this function has been exemplified by spatial coding in neurons of hippocampal regions, particularly place and grid cells, spatial representations in the OFC have been investigated far less. Here we sought to address this by recording OFC neurons from male rats engaged in an open-field foraging task like that originally developed to characterize place fields in rodent hippocampal neurons. Single-unit activity was recorded as rats searched for food pellets scattered randomly throughout a large enclosure. In some sessions, particular flavors of food occurred more frequently in particular parts of the enclosure; in others, only a single flavor was used. OFC neurons showed spatially localized firing fields in both conditions, and representations changed between flavored and unflavored foraging periods in a manner reminiscent of remapping in the hippocampus. Compared with hippocampal recordings taken under similar behavioral conditions, OFC spatial representations were less temporally reliable, and there was no significant evidence of grid tuning in OFC neurons. These data confirm that OFC neurons show spatial firing fields in a large, two-dimensional environment in a manner similar to hippocampus. Consistent with the focus of the OFC on biological meaning and goals, spatial coding was weaker than in hippocampus and influenced by outcome identity.

SIGNIFICANCE STATEMENT: The orbitofrontal cortex (OFC) and hippocampus have both been proposed to encode "cognitive maps" that provide useful scaffolds for planning complex behaviors. This function is exemplified by place and grid cells identified in hippocampus, the activity of which maps spatial environments. The current study directly demonstrates very similar, though not identical, spatial representations in OFC neurons, confirming that OFC, like hippocampus, can represent a spatial map under the appropriate experimental conditions.


Subject(s)
Neurons/physiology, Prefrontal Cortex/physiology, Spatial Behavior/physiology, Animals, Behavior, Animal/physiology, Brain Mapping/methods, Electrocorticography, Male, Rats, Rats, Long-Evans
9.
J Neurosci ; 41(2): 342-353, 2021 01 13.
Article in English | MEDLINE | ID: mdl-33219006

ABSTRACT

Substance use disorders (SUDs) are characterized by maladaptive behavior. The ability to properly adjust behavior according to changes in environmental contingencies necessitates the interlacing of existing memories with updated information. This can be achieved by assigning learning in different contexts to compartmentalized "states." Though not often framed this way, the maladaptive behavior observed in individuals with SUDs may result from a failure to properly encode states because of drug-induced neural alterations. Previous studies found that the dorsomedial striatum (DMS) is important for behavioral flexibility and state encoding, suggesting the DMS may be an important substrate for these effects. Here, we recorded DMS neural activity in cocaine-experienced male rats during a decision-making task where blocks of trials represented distinct states to probe whether the encoding of state and state-related information is affected by prior drug exposure. We found that DMS medium spiny neurons (MSNs) and fast-spiking interneurons (FSIs) encoded such information and that prior cocaine experience disrupted the evolution of representations both within trials and across recording sessions. Specifically, DMS MSNs and FSIs from cocaine-experienced rats demonstrated higher classification accuracy of trial-specific rules, defined by response direction and value, compared with those drawn from sucrose-experienced rats, and these overly strengthened trial-type representations were related to slower switching behavior and reaction times. These data show that prior cocaine experience paradoxically increases the encoding of state-specific information and rules in the DMS and suggest a model in which abnormally specific and persistent representation of rules throughout trials in DMS slows value-based decision-making in well-trained subjects.

SIGNIFICANCE STATEMENT: Substance use disorders (SUDs) may result from a failure to properly encode rules guiding situationally appropriate behavior. The dorsomedial striatum (DMS) is thought to be important for such behavioral flexibility and encoding that defines the situation or "state." This suggests that the DMS may be an important substrate for the maladaptive behavior observed in SUDs. In the current study, we show that prior cocaine experience results in over-encoding of state-specific information and rules in the DMS, which may impair normal adaptive decision-making in the task, akin to what is observed in SUDs.


Subject(s)
Cocaine-Related Disorders/psychology, Cocaine/pharmacology, Decision Making/drug effects, Neostriatum/drug effects, Animals, Choice Behavior/drug effects, Interneurons/drug effects, Male, Neurons/drug effects, Odorants, Psychomotor Performance/drug effects, Rats, Rats, Long-Evans, Reaction Time/drug effects, Reward, Self Administration, Sucrose/pharmacology
11.
J Neurosci ; 40(45): 8726-8733, 2020 11 04.
Article in English | MEDLINE | ID: mdl-33051355

ABSTRACT

When direct experience is unavailable, animals and humans can imagine or infer the future to guide decisions. Behavior based on direct experience versus inference may recruit partially distinct brain circuits. In rodents, the orbitofrontal cortex (OFC) contains neural signatures of inferred outcomes, and OFC is necessary for behavior that requires inference but not for responding driven by direct experience. In humans, OFC activity is also correlated with inferred outcomes, but it is unclear whether OFC activity is required for inference-based behavior. To test this, we used noninvasive network-based continuous theta burst stimulation (cTBS) in human subjects (male and female) to target lateral OFC networks in the context of a sensory preconditioning task that was designed to isolate inference-based behavior from responding that can be based on direct experience alone. We show that, relative to sham, cTBS targeting this network impairs reward-related behavior in conditions in which outcome expectations have to be mentally inferred. In contrast, OFC-targeted stimulation does not impair behavior that can be based on previously experienced stimulus-outcome associations. These findings suggest that activity in the targeted OFC network supports decision-making when outcomes have to be mentally simulated, providing converging cross-species evidence for a critical role of OFC in model-based but not model-free control of behavior.

SIGNIFICANCE STATEMENT: It is widely accepted that the orbitofrontal cortex (OFC) is important for decision-making. However, it is less clear how exactly this region contributes to behavior. Here we test the hypothesis that the human OFC is only required for decision-making when future outcomes have to be mentally simulated, but not when direct experience with stimulus-outcome associations is available. We show that targeting OFC network activity in humans using network-based continuous theta burst stimulation selectively impairs behavior that requires inference but does not affect responding that can be based solely on direct experience. These results are in line with previous findings in animals and suggest a critical role for human OFC in model-based but not model-free behavior.


Subject(s)
Anticipation, Psychological/physiology, Decision Making/physiology, Nerve Net/physiology, Prefrontal Cortex/physiology, Transcranial Magnetic Stimulation/methods, Adult, Conditioning, Psychological, Cues, Female, Humans, Magnetic Resonance Imaging, Male, Nerve Net/diagnostic imaging, Odorants, Photic Stimulation, Prefrontal Cortex/diagnostic imaging, Reward, Sensation/physiology, Theta Rhythm/physiology, Young Adult
12.
Nat Rev Neurosci ; 17(8): 513-23, 2016 08.
Article in English | MEDLINE | ID: mdl-27256552

ABSTRACT

The hippocampus and the orbitofrontal cortex (OFC) both have important roles in cognitive processes such as learning, memory and decision making. Nevertheless, research on the OFC and hippocampus has proceeded largely independently, and little consideration has been given to the importance of interactions between these structures. Here, evidence is reviewed that the hippocampus and OFC encode parallel, but interactive, cognitive 'maps' that capture complex relationships between cues, actions, outcomes and other features of the environment. A better understanding of the interactions between the OFC and hippocampus is important for understanding the neural bases of flexible, goal-directed decision making.


Subject(s)
Cognition/physiology, Decision Making/physiology, Hippocampus/physiology, Learning/physiology, Memory/physiology, Prefrontal Cortex/physiology, Animals, Humans
13.
PLoS Biol ; 16(9): e2004015, 2018 09.
Article in English | MEDLINE | ID: mdl-30256785

ABSTRACT

Recent computational models of sign tracking (ST) and goal tracking (GT) have accounted for observations that dopamine (DA) is not necessary for all forms of learning and have provided a set of predictions to further their validity. Among these, a central prediction is that manipulating the intertrial interval (ITI) during autoshaping should change the relative ST-GT proportion as well as DA phasic responses. Here, we tested these predictions and found that lengthening the ITI increased ST (i.e., behavioral engagement with conditioned stimuli, CS) as well as cue-induced phasic DA release. Importantly, DA release was also present at the time of reward delivery, even after learning, and DA release was correlated with time spent in the food cup during the ITI. During conditioning with shorter ITIs, GT was prominent (i.e., engagement with the food cup), and DA release responded to the CS but was absent at the time of reward delivery after learning. Hence, shorter ITIs restored the classical DA reward prediction error (RPE) pattern. These results validate the computational hypotheses, opening new perspectives on the understanding of individual differences in Pavlovian conditioning and DA signaling.
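
For readers unfamiliar with the "classical RPE pattern" referenced here, the following toy temporal-difference simulation (a standard TD(0) learner, not the model evaluated in the paper; the trial timeline and parameters are arbitrary) shows how, with training, the prediction error migrates from the time of reward delivery to the time of the cue.

```python
import numpy as np

# Minimal TD(0) sketch: each trial is a short sequence of time steps beginning
# with an unpredictable cue (so the pre-cue value is taken to be 0) and ending
# with reward 5 steps later. The prediction error delta is a stand-in for the
# phasic DA response.
n_steps, reward_t = 8, 5
alpha, gamma = 0.1, 0.98
V = np.zeros(n_steps + 1)            # value of each within-trial time step

def run_trial(V, learn=True):
    deltas = np.zeros(n_steps + 1)
    deltas[0] = gamma * V[0] - 0.0   # cue onset: previous (ITI) value assumed 0
    for t in range(n_steps):
        r = 1.0 if t == reward_t else 0.0
        delta = r + gamma * V[t + 1] - V[t]
        if learn:
            V[t] += alpha * delta
        deltas[t + 1] = delta
    return deltas

early = run_trial(V, learn=False)    # before any learning
for _ in range(1000):                # extensive training
    run_trial(V)
late = run_trial(V, learn=False)     # after learning

print("RPE at cue    early/late:", round(early[0], 2), round(late[0], 2))
print("RPE at reward early/late:", round(early[reward_t + 1], 2), round(late[reward_t + 1], 2))
```

After training, the error at reward delivery falls to roughly zero while an error appears at the cue, which is the pattern the abstract describes for short-ITI conditioning; the long-ITI result (persistent DA at reward delivery) is what departs from this simple account.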


Subject(s)
Dopamine/metabolism, Models, Biological, Reward, Animals, Conditioning, Classical, Goals, Male, Rats, Sprague-Dawley
14.
Annu Rev Psychol ; 70: 53-76, 2019 01 04.
Article in English | MEDLINE | ID: mdl-30260745

ABSTRACT

Making decisions in environments with few choice options is easy. We select the action that results in the most valued outcome. Making decisions in more complex environments, where the same action can produce different outcomes in different conditions, is much harder. In such circumstances, we propose that accurate action selection relies on top-down control from the prelimbic and orbitofrontal cortices over striatal activity through distinct thalamostriatal circuits. We suggest that the prelimbic cortex exerts direct influence over medium spiny neurons in the dorsomedial striatum to represent the state space relevant to the current environment. Conversely, the orbitofrontal cortex is argued to track a subject's position within that state space, likely through modulation of cholinergic interneurons.


Subject(s)
Cerebral Cortex/physiology, Corpus Striatum/physiology, Decision Making/physiology, Executive Function/physiology, Models, Psychological, Animals, Humans
15.
J Neurosci ; 38(41): 8822-8830, 2018 10 10.
Article in English | MEDLINE | ID: mdl-30181136

ABSTRACT

Prediction errors are critical for associative learning. In the brain, these errors are thought to be signaled, in part, by midbrain dopamine neurons. However, although there is substantial direct evidence that brief increases in the firing of these neurons can mimic positive prediction errors, there is less evidence that brief pauses mimic negative errors. Although pauses in the firing of midbrain dopamine neurons can substitute for missing negative prediction errors to drive extinction, it has been suggested that this effect might be attributable to changes in salience rather than the operation of this signal as a negative prediction error. Here we address this concern by showing that the same pattern of inhibition will create a cue that meets the classic definition of a conditioned inhibitor: it suppresses responding in a summation test and slows learning in a retardation test. Importantly, these classic criteria were designed to rule out explanations founded on attention or salience; thus the results cannot be explained in this manner. We also show that this pattern of behavior is not produced by a single, prolonged, ramped period of inhibition, suggesting that it is the precisely timed, sudden change, and not the duration, that conveys the teaching signal.

SIGNIFICANCE STATEMENT: Here we show that brief pauses in the firing of midbrain dopamine neurons are sufficient to produce a cue that meets the classic criteria defining a conditioned inhibitor, that is, a cue that predicts the omission of a reward. These criteria were developed to distinguish actual learning from salience or attentional effects; thus these results formally show that brief pauses in the firing of dopamine neurons can serve as key teaching signals in the brain. Interestingly, this was not true for gradual, prolonged pauses, suggesting that it is the dynamic change in firing that serves as the teaching signal.


Subject(s)
Conditioning, Classical/physiology, Dopaminergic Neurons/physiology, Reward, Ventral Tegmental Area/physiology, Action Potentials, Animals, Attention/physiology, Behavior, Animal, Female, Male, Rats, Transgenic
16.
Proc Biol Sci ; 285(1891)2018 11 21.
Article in English | MEDLINE | ID: mdl-30464063

ABSTRACT

Midbrain dopamine neurons are commonly thought to report a reward prediction error (RPE), as hypothesized by reinforcement learning (RL) theory. While this theory has been highly successful, several lines of evidence suggest that dopamine activity also encodes sensory prediction errors unrelated to reward. Here, we develop a new theory of dopamine function that embraces a broader conceptualization of prediction errors. By signalling errors in both sensory and reward predictions, dopamine supports a form of RL that lies between model-based and model-free algorithms. This account remains consistent with current canon regarding the correspondence between dopamine transients and RPEs, while also accounting for new data suggesting a role for these signals in phenomena such as sensory preconditioning and identity unblocking, which ostensibly draw upon knowledge beyond reward predictions.


Subject(s)
Dopamine/metabolism, Dopaminergic Neurons/physiology, Learning/physiology, Algorithms, Animals, Signal Transduction/physiology
17.
Neurobiol Learn Mem ; 153(Pt B): 131-136, 2018 09.
Article in English | MEDLINE | ID: mdl-29269085

ABSTRACT

The phasic dopamine error signal is currently argued to be synonymous with the prediction error in Sutton and Barto's (1987, 1998) model-free reinforcement learning algorithm (Schultz et al., 1997). This theory argues that phasic dopamine reflects a cached-value signal that endows reward-predictive cues with the scalar value inherent in reward. Such an interpretation does not envision a role for dopamine in the more complex cognitive representations between events that underlie many forms of associative learning, restricting the role dopamine can play in learning. The cached-value hypothesis of dopamine makes three concrete predictions about when a phasic dopamine response should be seen and what types of learning this signal should be able to promote. We discuss these predictions in light of recent evidence that we believe provides particularly strong tests of their validity. In doing so, we find that while the phasic dopamine signal conforms to a cached-value account in some circumstances, other evidence demonstrates that this signal is not restricted to a model-free cached-value reinforcement learning signal. In light of this evidence, we argue that the phasic dopamine signal functions more generally to signal violations of expectancies to drive real-world associations between events.
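
For reference, the model-free ("cached-value") prediction error discussed here is conventionally written, in standard temporal-difference notation (not notation taken from this article), as:

```latex
\delta_t = r_t + \gamma\, V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t
```

where r_t is the reward received, V is the cached state value, gamma is the discount factor, and alpha is the learning rate; on this account, a phasic dopamine burst corresponds to a positive delta and a pause to a negative delta.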


Subject(s)
Association Learning/physiology, Brain/physiology, Dopamine/physiology, Models, Neurological, Reward, Animals, Humans
18.
Neurobiol Learn Mem ; 153(Pt B): 137-143, 2018 09.
Article in English | MEDLINE | ID: mdl-29408053

ABSTRACT

Neurons in the orbitofrontal cortex (OFC) fire in anticipation of and during rewards. Such firing has been suggested to encode reward predictions and to account in some way for the role of this area in adaptive behavior and learning. However, it has also been reported that neural activity in OFC reflects reward prediction errors, which might drive learning directly. Here we addressed this question by analyzing the firing of OFC neurons recorded in an odor discrimination task in which rats were trained to sample odor cues and respond left or right on each trial for reward. Neurons were recorded across blocks of trials in which we switched either the number or the flavor of the reward delivered in each well. Previously, we have described how neurons in this dataset fired to the predictive cues (Stalnaker et al., 2014); here we focused on the firing in anticipation of and just after delivery of each drop of reward, looking specifically for differences in firing based on whether the reward number or flavor was unexpected or expected. Unlike dopamine neurons recorded in this setting, which exhibited phasic error-like responses after surprising changes in either reward number or reward flavor (Takahashi et al., 2017), OFC neurons showed no such error correlates and instead fired in a way that reflected reward predictions.


Subject(s)
Action Potentials/physiology, Learning/physiology, Neurons/physiology, Prefrontal Cortex/physiology, Reward, Animals, Dopaminergic Neurons/physiology, Male, Neurons/cytology, Prefrontal Cortex/cytology, Rats, Rats, Long-Evans
19.
J Neurosci ; 36(23): 6242-57, 2016 06 08.
Article in English | MEDLINE | ID: mdl-27277802

ABSTRACT

When conditions change, organisms need to learn about the changed conditions without interfering with what they already know. To do so, they can assign the new learning to a new "state" and the old learning to a previous state. This state assignment is fundamental to behavioral flexibility. Cholinergic interneurons (CINs) in the dorsomedial striatum (DMS) are necessary for associative information to be compartmentalized in this way, but the mechanism by which they do so is unknown. Here we addressed this question by recording putative CINs from the DMS in rats performing a task consisting of a series of trial blocks, or states, that required the recall and application of contradictory associative information. We found that individual CINs in the DMS represented the current state throughout each trial. These state correlates were not observed in dorsolateral striatal CINs recorded in the same rats. Notably, DMS CIN ensembles tracked rats' beliefs about the current state such that, when states were miscoded, rats tended to make suboptimal choices reflecting the miscoding. State information held by the DMS CINs also depended completely on the orbitofrontal cortex, an area that has been proposed to signal environmental states. These results suggest that CINs set the stage for recalling associative information relevant to the current environment by maintaining a real-time representation of the current state. Such a role has novel implications for understanding the neural basis of a variety of psychiatric diseases, such as addiction or anxiety disorders, in which patients generalize inappropriately (or fail to generalize) between different environments.

SIGNIFICANCE STATEMENT: Striatal cholinergic interneurons (CINs) are thought to be identical to tonically active neurons. These neurons have long been thought to have an important influence on striatal processing during reward-related learning. Recently, a more specific function for striatal CINs has been suggested, which is that they are necessary for striatal learning to be compartmentalized into different states as the state of the environment changes. Here we report that putative CINs appear to track rats' beliefs about which environmental state is current. We further show that this property of CINs depends on orbitofrontal cortex input and is correlated with choices made by rats. These findings could provide new insight into neuropsychiatric diseases that involve improper generalization between different contexts.


Subject(s)
Association Learning/physiology, Cholinergic Neurons/physiology, Interneurons/physiology, Neostriatum/cytology, Prefrontal Cortex/cytology, Action Potentials/drug effects, Action Potentials/physiology, Analysis of Variance, Animals, Choice Behavior/drug effects, Choice Behavior/physiology, Cholinergic Agents/pharmacology, Cholinergic Neurons/drug effects, Cholinergic Neurons/metabolism, Functional Laterality, Green Fluorescent Proteins/genetics, Green Fluorescent Proteins/metabolism, Interneurons/drug effects, Interneurons/metabolism, Male, Mental Recall/physiology, Neostriatum/injuries, Neural Pathways/physiology, Prefrontal Cortex/injuries, Prefrontal Cortex/physiology, Rats, Rats, Long-Evans, Transduction, Genetic
20.
J Neurosci ; 36(32): 8416-24, 2016 08 10.
Article in English | MEDLINE | ID: mdl-27511013

ABSTRACT

The orbitofrontal cortex (OFC) has been broadly implicated in the ability to use the current value of expected outcomes to guide behavior. Although value correlates have been prominently reported in lateral OFC, they are more often associated with more medial areas. Further, recent studies in primates have suggested a dissociation in which the lateral OFC is involved in credit assignment and representation of reward identity and more medial areas are critical to representing value. Previously, we used unblocking to test more specifically what information about outcomes is represented by OFC neurons in rats; consistent with the proposed dichotomy between the lateral and medial OFC, we found relatively little linear value coding in the lateral OFC (Lopatina et al., 2015). Here we have repeated this experiment, recording in the medial OFC, to test whether such value signals might be found there. Neurons were recorded in an unblocking task as rats learned about cues that signaled either more, less, or the same amount of reward. We found that medial OFC neurons acquired responses to these cues; however, these responses did not signal different reward values across cues. Surprisingly, we found that cells developed responses to cues predicting a change, particularly a decrease, in reward value. This is consistent with a special role for medial OFC in representing current value to support devaluation/revaluation-sensitive changes in behavior.

SIGNIFICANCE STATEMENT: This study uniquely examines encoding in rodent mOFC at the single-unit level in response to cues that predict more, less, or no change in reward during training in a Pavlovian unblocking task. It finds more cells responding to change-predictive cues and stronger activity in response to cues predicting less reward.


Subject(s)
Conditioning, Operant/physiology, Cues, Neurons/physiology, Prefrontal Cortex/cytology, Reward, Action Potentials/physiology, Animals, Male, Odorants, Rats, Rats, Long-Evans