Results 1 - 20 of 25
1.
Cell ; 182(1): 112-126.e18, 2020 07 09.
Article in English | MEDLINE | ID: mdl-32504542

ABSTRACT

Every decision we make is accompanied by a sense of confidence about its likely outcome. This sense informs subsequent behavior, such as investing more (whether time, effort, or money) when reward is more certain. A neural representation of confidence should originate from a statistical computation and predict confidence-guided behavior. An additional requirement for confidence representations to support metacognition is abstraction: they should emerge irrespective of the source of information and inform multiple confidence-guided behaviors. It is unknown whether neural confidence signals meet these criteria. Here, we show that single orbitofrontal cortex neurons in rats encode statistical decision confidence irrespective of the sensory modality, olfactory or auditory, used to make a choice. The activity of these neurons also predicts two confidence-guided behaviors: trial-by-trial time investment and cross-trial choice strategy updating. Orbitofrontal cortex thus represents decision confidence consistent with a metacognitive process that is useful for mediating confidence-guided economic decisions.


Subject(s)
Behavior/physiology , Prefrontal Cortex/physiology , Animals , Choice Behavior/physiology , Decision Making , Models, Biological , Neurons/physiology , Rats, Long-Evans , Sensation/physiology , Task Performance and Analysis , Time Factors
2.
Cell ; 166(6): 1564-1571.e6, 2016 Sep 08.
Article in English | MEDLINE | ID: mdl-27610576

ABSTRACT

Optogenetic studies in mice have revealed new relationships between well-defined neurons and brain functions. However, there are currently no means to achieve the same cell-type specificity in monkeys, which possess an expanded behavioral repertoire and closer anatomical homology to humans. Here, we present a resource for cell-type-specific channelrhodopsin expression in Rhesus monkeys and apply this technique to modulate dopamine activity and monkey choice behavior. These data show that two viral vectors label dopamine neurons with greater than 95% specificity. Infected neurons were activated by light pulses, indicating functional expression. The addition of optical stimulation to reward outcomes promoted the learning of reward-predicting stimuli at the neuronal and behavioral level. Together, these results demonstrate the feasibility of effective and selective stimulation of dopamine neurons in non-human primates and a resource that could be applied to other cell types in the monkey brain.


Subject(s)
Choice Behavior/physiology , Dopaminergic Neurons/metabolism , Optogenetics/methods , Animals , Dependovirus/genetics , Dopamine/metabolism , Gene Expression Regulation , Genetic Vectors/genetics , Macaca mulatta , Promoter Regions, Genetic/genetics , Rhodopsin/genetics
3.
PLoS Comput Biol ; 20(4): e1011516, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38626219

ABSTRACT

When facing an unfamiliar environment, animals need to explore to gain new knowledge about which actions provide reward, but also put the newly acquired knowledge to use as quickly as possible. Optimal reinforcement learning strategies should therefore assess the uncertainties of these action-reward associations and utilise them to inform decision making. We propose a novel model whereby direct and indirect striatal pathways act together to estimate both the mean and variance of reward distributions, and mesolimbic dopaminergic neurons provide transient novelty signals, facilitating effective uncertainty-driven exploration. We utilised electrophysiological recording data to verify our model of the basal ganglia, and we fitted exploration strategies derived from the neural model to data from behavioural experiments. We also compared the performance of directed exploration strategies inspired by our basal ganglia model with other exploration algorithms including classic variants of upper confidence bound (UCB) strategy in simulation. The exploration strategies inspired by the basal ganglia model can achieve overall superior performance in simulation, and we found qualitatively similar results in fitting model to behavioural data compared with the fitting of more idealised normative models with less implementation level detail. Overall, our results suggest that transient dopamine levels in the basal ganglia that encode novelty could contribute to an uncertainty representation which efficiently drives exploration in reinforcement learning.


Subject(s)
Basal Ganglia , Dopamine , Models, Neurological , Reward , Dopamine/metabolism , Dopamine/physiology , Uncertainty , Animals , Basal Ganglia/physiology , Exploratory Behavior/physiology , Reinforcement, Psychology , Dopaminergic Neurons/physiology , Computational Biology , Computer Simulation , Male , Algorithms , Decision Making/physiology , Behavior, Animal/physiology , Rats
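The upper confidence bound (UCB) strategy that the abstract above uses as a comparison baseline can be sketched as a classic UCB1 bandit algorithm. This is a generic textbook implementation, not the authors' basal-ganglia-inspired model; the arm count, reward probabilities, exploration constant, and trial number below are illustrative assumptions.

```python
import math
import random

def ucb1_choose(counts, means, t, c=2.0):
    """Pick the arm maximizing mean reward + exploration bonus (UCB1).

    counts: times each arm was pulled; means: running mean rewards;
    t: total pulls so far; c: exploration constant (assumed value).
    Arms that have never been pulled are tried first.
    """
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    scores = [means[a] + math.sqrt(c * math.log(t) / counts[a])
              for a in range(len(counts))]
    return max(range(len(counts)), key=lambda a: scores[a])

def run_bandit(probs, n_trials=5000, seed=0):
    """Simulate a Bernoulli bandit task with UCB1 action selection."""
    rng = random.Random(seed)
    k = len(probs)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, n_trials + 1):
        arm = ucb1_choose(counts, means, t)
        r = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean
    return counts

counts = run_bandit([0.2, 0.5, 0.8])
# After enough trials, the best arm (index 2) receives most of the pulls.
```

The uncertainty bonus shrinks as an arm is sampled, which is exactly the kind of uncertainty-driven exploration the abstract proposes the striatal pathways could implement.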
4.
J Neurosci ; 41(34): 7197-7205, 2021 08 25.
Article in English | MEDLINE | ID: mdl-34253628

ABSTRACT

The striatum plays critical roles in visually-guided decision-making and receives dense axonal projections from midbrain dopamine neurons. However, the roles of striatal dopamine in visual decision-making are poorly understood. We trained male and female mice to perform a visual decision task with asymmetric reward payoff, and we recorded the activity of dopamine axons innervating striatum. Dopamine axons in the dorsomedial striatum (DMS) responded to contralateral visual stimuli and contralateral rewarded actions. Neural responses to contralateral stimuli could not be explained by orienting behavior such as eye movements. Moreover, these contralateral stimulus responses persisted in sessions where the animals were instructed to not move to obtain reward, further indicating that these signals are stimulus-related. Lastly, we show that DMS dopamine signals were qualitatively different from dopamine signals in the ventral striatum (VS), which responded to both ipsilateral and contralateral stimuli, conforming to canonical prediction error signaling under sensory uncertainty. Thus, during visual decisions, DMS dopamine encodes visual stimuli and rewarded actions in a lateralized fashion, and could facilitate associations between specific visual stimuli and actions.

SIGNIFICANCE STATEMENT While the striatum is central to goal-directed behavior, the precise roles of its rich dopaminergic innervation in perceptual decision-making are poorly understood. We found that in a visual decision task, dopamine axons in the dorsomedial striatum (DMS) signaled stimuli presented contralaterally to the recorded hemisphere, as well as the onset of rewarded actions. Stimulus-evoked signals persisted in a no-movement task variant. We distinguish the patterns of these signals from those in the ventral striatum (VS). Our results contribute to the characterization of region-specific dopaminergic signaling in the striatum and highlight a role in stimulus-action association learning.


Subject(s)
Association Learning/physiology , Axons/physiology , Choice Behavior/physiology , Corpus Striatum/physiology , Dopaminergic Neurons/physiology , Photic Stimulation , Reward , Animals , Corpus Striatum/cytology , Dominance, Cerebral , Dopamine/physiology , Eye Movements/physiology , Female , Male , Mice , Mice, Inbred C57BL , Nerve Fibers/ultrastructure
5.
Cereb Cortex ; 29(5): 2196-2210, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30796825

ABSTRACT

Cortical activity is organized across multiple spatial and temporal scales. Most research on the dynamics of neuronal spiking is concerned with timescales of 1 ms-1 s, and little is known about spiking dynamics on timescales of tens of seconds and minutes. Here, we used frequency domain analyses to study the structure of individual neurons' spiking activity and its coupling to local population rate and to arousal level across 0.01-100 Hz frequency range. In mouse medial prefrontal cortex, the spiking dynamics of individual neurons could be quantitatively captured by a combination of interspike interval and firing rate power spectrum distributions. The relative strength of coherence with local population often differed across timescales: a neuron strongly coupled to population rate on fast timescales could be weakly coupled on slow timescales, and vice versa. On slow but not fast timescales, a substantial proportion of neurons showed firing anticorrelated with the population. Infraslow firing rate changes were largely determined by arousal rather than by local factors, which could explain the timescale dependence of individual neurons' population coupling strength. These observations demonstrate how neurons simultaneously partake in fast local dynamics, and slow brain-wide dynamics, extending our understanding of infraslow cortical activity beyond the mesoscale resolution of fMRI.


Subject(s)
Action Potentials/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Animals , Female , Male , Mice, Inbred C57BL , Models, Neurological , Signal Processing, Computer-Assisted , Time Factors
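The frequency-domain approach described in the abstract above can be sketched minimally: bin a spike train into a firing-rate signal and estimate its power spectrum with an FFT. The bin size, recording duration, and the synthetic 1 Hz-modulated Poisson spike train below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def rate_power_spectrum(spike_times, duration, bin_size=0.05):
    """Bin spikes into a rate signal and return (freqs, power)."""
    n_bins = int(duration / bin_size)
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0, duration))
    rate = counts / bin_size
    rate = rate - rate.mean()                       # remove the DC component
    power = np.abs(np.fft.rfft(rate)) ** 2 / n_bins
    freqs = np.fft.rfftfreq(n_bins, d=bin_size)
    return freqs, power

# Synthetic example: Poisson spikes whose rate is modulated at 1 Hz.
rng = np.random.default_rng(0)
duration, dt = 200.0, 0.001
t = np.arange(0, duration, dt)
lam = 10.0 * (1 + 0.8 * np.sin(2 * np.pi * 1.0 * t))  # spikes/s
spikes = t[rng.random(t.size) < lam * dt]              # thinning
freqs, power = rate_power_spectrum(spikes, duration)
peak_freq = freqs[1:][np.argmax(power[1:])]            # skip the 0 Hz bin
# peak_freq should sit near the 1 Hz modulation frequency
```

With a 0.05 s bin, frequencies up to 10 Hz are resolvable; the study's 0.01-100 Hz range would require combining spectra computed at several binning resolutions.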
6.
Proc Natl Acad Sci U S A ; 111(6): 2343-8, 2014 Feb 11.
Article in English | MEDLINE | ID: mdl-24453218

ABSTRACT

Prediction error signals enable us to learn through experience. These experiences include economic choices between different rewards that vary along multiple dimensions. Therefore, an ideal way to reinforce economic choice is to encode a prediction error that reflects the subjective value integrated across these reward dimensions. Previous studies demonstrated that dopamine prediction error responses reflect the value of singular reward attributes that include magnitude, probability, and delay. Obviously, preferences between rewards that vary along one dimension are completely determined by the manipulated variable. However, it is unknown whether dopamine prediction error responses reflect the subjective value integrated from different reward dimensions. Here, we measured the preferences between rewards that varied along multiple dimensions, and as such could not be ranked according to objective metrics. Monkeys chose between rewards that differed in amount, risk, and type. Because their choices were complete and transitive, the monkeys chose "as if" they integrated different rewards and attributes into a common scale of value. The prediction error responses of single dopamine neurons reflected the integrated subjective value inferred from the choices, rather than the singular reward attributes. Specifically, amount, risk, and reward type modulated dopamine responses exactly to the extent that they influenced economic choices, even when rewards were vastly different, such as liquid and food. This prediction error response could provide a direct updating signal for economic values.


Subject(s)
Dopamine/physiology , Reward , Animals , Macaca mulatta , Male , Neurons/physiology , Probability
7.
J Neurosci ; 35(7): 3146-54, 2015 Feb 18.
Article in English | MEDLINE | ID: mdl-25698750

ABSTRACT

Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing.


Subject(s)
Choice Behavior/physiology , Probability , Reward , Risk-Taking , Animals , Conditioning, Classical , Cues , Games, Experimental , Macaca mulatta , Male
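The inverted-S probability weighting function described in the abstract above has several standard parameterizations; the one-parameter form of Tversky and Kahneman (1992) is shown here as an illustration. The abstract does not specify which functional form the authors fit, so this form and the gamma value are assumptions.

```python
def weight(p, gamma=0.6):
    """One-parameter probability weighting function (Tversky & Kahneman, 1992).

    For gamma < 1 this produces the classic inverted-S shape:
    low probabilities are overweighted, high probabilities underweighted.
    """
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Inverted-S signature: w(p) > p for small p, w(p) < p for large p.
low, high = weight(0.05), weight(0.95)
```

Fitting gamma to choices between gambles and safe rewards, as the study does, amounts to finding the curvature that best explains where the animal's indifference points deviate from objective expected value.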
8.
Proc Natl Acad Sci U S A ; 107(17): 7981-6, 2010 Apr 27.
Article in English | MEDLINE | ID: mdl-20385799

ABSTRACT

We investigated connections between the physiology of rat barrel cortex neurons and the sensation of vibration in humans. One set of experiments measured neuronal responses in anesthetized rats to trains of whisker deflections, each train characterized either by constant amplitude across all deflections or by variable amplitude ("amplitude noise"). Firing rate and firing synchrony were, on average, boosted by the presence of noise. However, neurons were not uniform in their responses to noise. Barrel cortex neurons have been categorized as regular-spiking units (putative excitatory neurons) and fast-spiking units (putative inhibitory neurons). Among regular-spiking units, amplitude noise caused a higher firing rate and increased cross-neuron synchrony. Among fast-spiking units, noise had the opposite effect: It led to a lower firing rate and decreased cross-neuron synchrony. This finding suggests that amplitude noise affects the interaction between inhibitory and excitatory neurons. From these physiological effects, we expected that noise would lead to an increase in the perceived intensity of a vibration. We tested this notion using psychophysical measurements in humans. As predicted, subjects overestimated the intensity of noisy vibrations. Thus the physiological mechanisms present in barrel cortex also appear to be at work in the human tactile system, where they affect vibration perception.


Subject(s)
Sensory Receptor Cells/physiology , Somatosensory Cortex/physiology , Touch Perception/physiology , Touch/physiology , Vibrissae/physiology , Animals , Humans , Physical Stimulation , Psychophysics , Rats , Vibration
9.
J Vis ; 12(2), 2012 Feb 21.
Article in English | MEDLINE | ID: mdl-22353777

ABSTRACT

Traditionally, the perceived size of negative afterimages has been examined in relation to E. Emmert's law (1881), a size-distance equation that states that changes in perceived size of an afterimage are a function of the distance of the surface on which it is projected. Here, we present evidence that the size of an afterimage is also modulated by its surrounding context. We employed a new version of the Ebbinghaus-Titchener illusion with flickering surrounding stimuli and a static inner target that generated a vivid afterimage of the latter but not the former. Observers were asked to give an initial manual estimate of the size of the inner target during the adaptation phase followed by another manual estimate of the size of the afterimage during the test phase. Manual estimates were affected by the size-contrast illusion both when the surrounding contextual elements were present during afterimage induction and when the surrounding elements were absent during the viewing of the afterimage (Experiment 1). Such a modulation in perceived size, however, did not occur when observers viewed only the flickering surrounding context for a prolonged period of time and then estimated the size of a static target presented on the monitor afterward, demonstrating that flickering stimuli by themselves did not produce any aftereffect on perceived size (Experiment 2). Furthermore, in a final experiment, we showed that the modulation observed in the test phase of Experiment 1 was not due to memory of the manual estimates that had been performed during the adaptation phase (Experiment 3). These findings provide clear evidence for the role of high-level cognitive processes on the perceived size of an afterimage beyond the retinal level. Thus, although retinal stimulation is required to induce an afterimage, post-retinal factors influence its perceived size.


Subject(s)
Afterimage/physiology , Contrast Sensitivity/physiology , Illusions/physiology , Size Perception/physiology , Adolescent , Adult , Female , Follow-Up Studies , Humans , Male , Photic Stimulation/methods , Young Adult
10.
Cell Rep ; 41(2): 111470, 2022 10 11.
Article in English | MEDLINE | ID: mdl-36223748

ABSTRACT

Goal-directed navigation requires learning to accurately estimate location and select optimal actions in each location. Midbrain dopamine neurons are involved in reward value learning and have been linked to reward location learning. They are therefore ideally placed to provide teaching signals for goal-directed navigation. By imaging dopamine neural activity as mice learned to actively navigate a closed-loop virtual reality corridor to obtain reward, we observe phasic and pre-reward ramping dopamine activity, which are modulated by learning stage and task engagement. A Q-learning model incorporating position inference recapitulates our results, displaying prediction errors resembling phasic and ramping dopamine neural activity. The model predicts that ramping is followed by improved task performance, which we confirm in our experimental data, indicating that the dopamine ramp may have a teaching effect. Our results suggest that midbrain dopamine neurons encode phasic and ramping reward prediction error signals to improve goal-directed navigation.


Subject(s)
Dopamine , Dopaminergic Neurons , Animals , Dopamine/physiology , Goals , Mesencephalon/physiology , Mice , Reward
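The learning dynamics the abstract above describes can be illustrated with a minimal tabular sketch: temporal-difference learning over positions along a linear corridor with reward at the end. With training, the prediction error at reward shrinks while value backs up toward earlier positions, giving a ramp toward the reward location. Track length, learning rate, and discount are illustrative assumptions; the paper's Q-learning model additionally infers position from noisy sensory input, which is omitted here.

```python
def train_td(n_states=10, n_episodes=200, alpha=0.1, gamma=0.9):
    """TD(0) value learning on a linear track with reward at the end.

    Returns learned values plus the reward prediction error (RPE)
    at the rewarded step of the first and last episodes.
    """
    V = [0.0] * n_states
    first_rpe = last_rpe = None
    for ep in range(n_episodes):
        for s in range(n_states):                      # traverse the track
            reward = 1.0 if s == n_states - 1 else 0.0
            v_next = V[s + 1] if s + 1 < n_states else 0.0
            rpe = reward + gamma * v_next - V[s]       # prediction error
            V[s] += alpha * rpe
            if s == n_states - 1:
                if ep == 0:
                    first_rpe = rpe
                last_rpe = rpe
    return V, first_rpe, last_rpe

V, first_rpe, last_rpe = train_td()
# With training the reward becomes predicted: the RPE at reward shrinks,
# and value ramps upward along the track toward the reward location.
```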
11.
12.
Neuron ; 105(1): 4-6, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31951527

ABSTRACT

Fundamental research into early circuits of the neocortex provides insight into the etiology of mental illness. In this issue of Neuron, Chini et al. (2020) probe the consequences of combined genetic and environmental perturbation on emergent network activity in the prefrontal cortex, identifying a window for possible intervention.


Subject(s)
Cognitive Dysfunction , Neocortex , Animals , Mice , Neurons , Prefrontal Cortex
13.
Elife ; 9, 2020 04 15.
Article in English | MEDLINE | ID: mdl-32286227

ABSTRACT

Learning from successes and failures often improves the quality of subsequent decisions. Past outcomes, however, should not influence purely perceptual decisions after task acquisition is complete since these are designed so that only sensory evidence determines the correct choice. Yet, numerous studies report that outcomes can bias perceptual decisions, causing spurious changes in choice behavior without improving accuracy. Here we show that the effects of reward on perceptual decisions are principled: past rewards bias future choices specifically when previous choice was difficult and hence decision confidence was low. We identified this phenomenon in six datasets from four laboratories, across mice, rats, and humans, and sensory modalities from olfaction and audition to vision. We show that this choice-updating strategy can be explained by reinforcement learning models incorporating statistical decision confidence into their teaching signals. Thus, reinforcement learning mechanisms are continually engaged to produce systematic adjustments of choices even in well-learned perceptual decisions in order to optimize behavior in an uncertain world.


Subject(s)
Bias , Decision Making/physiology , Reinforcement, Psychology , Animals , Choice Behavior , Hearing , Humans , Mice , Rats , Smell , Vision, Ocular
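One way to cast the abstract's central claim in code, as a hedged sketch rather than the authors' fitted model: treat statistical decision confidence as the predicted probability of being correct, so the teaching signal (outcome minus confidence) is large after difficult, low-confidence trials and small after easy ones. The learning rate and values below are illustrative.

```python
def confidence_rpe(outcome, confidence):
    """Teaching signal graded by statistical decision confidence.

    outcome: 1.0 for reward, 0.0 otherwise.
    confidence: predicted probability that the choice was correct (0-1).
    """
    return outcome - confidence

def update_choice_value(value, outcome, confidence, alpha=0.2):
    """Shift a choice's value by the confidence-graded prediction error."""
    return value + alpha * confidence_rpe(outcome, confidence)

# A rewarded hard trial (low confidence) moves value more than a
# rewarded easy trial (high confidence), biasing the next choice.
hard = update_choice_value(0.5, outcome=1.0, confidence=0.55)
easy = update_choice_value(0.5, outcome=1.0, confidence=0.95)
```

This reproduces the signature the study reports: past rewards bias future choices specifically when the previous choice was difficult and confidence was low.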
14.
Neuron ; 105(4): 700-711.e6, 2020 02 19.
Article in English | MEDLINE | ID: mdl-31859030

ABSTRACT

Deciding between stimuli requires combining their learned value with one's sensory confidence. We trained mice in a visual task that probes this combination. Mouse choices reflected not only present confidence and past rewards but also past confidence. Their behavior conformed to a model that combines signal detection with reinforcement learning. In the model, the predicted value of the chosen option is the product of sensory confidence and learned value. We found precise correlates of this variable in the pre-outcome activity of midbrain dopamine neurons and of medial prefrontal cortical neurons. However, only the latter played a causal role: inactivating medial prefrontal cortex before outcome strengthened learning from the outcome. Dopamine neurons played a causal role only after outcome, when they encoded reward prediction errors graded by confidence, influencing subsequent choices. These results reveal neural signals that combine reward value with sensory confidence and guide subsequent learning.


Subject(s)
Choice Behavior/physiology , Dopaminergic Neurons/metabolism , Learning/physiology , Prefrontal Cortex/metabolism , Reward , Animals , Dopaminergic Neurons/chemistry , Male , Mice , Mice, Inbred C57BL , Mice, Transgenic , Optogenetics/methods , Prefrontal Cortex/chemistry
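The model structure the abstract above describes, combining signal detection with reinforcement learning, can be sketched as: the pre-outcome predicted value of the chosen option is the product of sensory confidence and learned value, and the post-outcome prediction error is graded by that product. This is a simplified illustration of the stated relationship, not the paper's full fitted model; the learning rate and values are assumptions.

```python
def predicted_value(confidence, learned_value):
    """Pre-outcome expected value of the chosen option: the product of
    sensory confidence (probability the choice is correct) and the
    learned value of that option."""
    return confidence * learned_value

def outcome_update(learned_value, reward, confidence, alpha=0.1):
    """Post-outcome learning: the RPE is graded by confidence."""
    rpe = reward - predicted_value(confidence, learned_value)
    return learned_value + alpha * rpe, rpe

v, rpe = outcome_update(learned_value=1.0, reward=1.0, confidence=0.6)
# Lower confidence at outcome time yields a larger positive RPE when
# rewarded, so uncertain-but-correct trials drive more learning.
```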
15.
Cereb Cortex ; 18(5): 1085-93, 2008 May.
Article in English | MEDLINE | ID: mdl-17712164

ABSTRACT

Sensory stimuli under natural conditions often consist of a temporally irregular sequence of events, contrasting with the periodic sequences commonly used as stimuli in the laboratory. These experiments compared the responses of neurons in rat barrel cortex with trains of whisker movements with different frequencies; each train possessed either a periodic or an irregular, "noisy" temporal structure. Periodic stimulus trains were composed of a sequence of 21 whisker deflections separated by 20 equal interdeflection intervals (IDIs). Noisy trains were matched for mean IDI but included intervals shorter and longer than the mean IDI. Cortical responses were equivalent for periodic and noisy stimuli for frequencies up to 10 Hz. Above 10 Hz, temporal noise led to a larger response magnitude, and this effect was amplified as deflection frequency increased. Noise also caused a sharpening of the temporal precision of response to the individual deflections of the stimulus train. Cortical neurons thus appear to be "tuned" to respond in a different way to stimuli characterized by temporal unpredictability. As a consequence, perceptual judgments that depend on somatosensory cortical firing rate may be affected by the presence of temporal noise.


Subject(s)
Artifacts , Neurons/physiology , Somatosensory Cortex/cytology , Somatosensory Cortex/physiology , Vibrissae/physiology , Action Potentials/physiology , Animals , Discrimination, Psychological/physiology , Electrodes, Implanted , Male , Models, Neurological , Neural Inhibition/physiology , Rats , Rats, Wistar , Vibrissae/innervation
16.
Can J Exp Psychol ; 62(2): 101-9, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18572987

ABSTRACT

The visual system can complete coloured surfaces from stimulus fragments, inducing the subjective perception of a colour-spread figure. Negative afterimages of these induced colours were first reported by S. Shimojo, Y. Kamitani, and S. Nishida (2001). Two experiments were conducted to examine the effect of attention on the duration of these afterimages. The results showed that shifting attention to the colour-spread figure during the adaptation phase weakened the subsequent afterimage. On the basis of previous findings that the duration of these afterimages is correlated with the strength of perceptual filling-in (grouping) among local inducers during the adaptation phase, it is proposed that attention weakens perceptual filling-in during the adaptation phase and thereby prevents the stimulus from being segmented into an illusory figure.


Subject(s)
Adaptation, Psychological , Afterimage , Attention , Color Perception , Cues , Humans
17.
Curr Opin Neurobiol ; 43: 139-148, 2017 04.
Article in English | MEDLINE | ID: mdl-28390863

ABSTRACT

The phasic dopamine reward prediction error response is a major brain signal underlying learning, approach and decision making. This dopamine response consists of two components that reflect, initially, stimulus detection from physical impact and, subsequently, reward valuation; dopamine activations by punishers reflect physical impact rather than aversiveness. The dopamine reward signal is distinct from earlier reported and recently confirmed phasic changes with behavioural activation. Optogenetic activation of dopamine neurones in monkeys causes value learning and biases economic choices. The dopamine reward signal conforms to formal economic utility and thus constitutes a utility prediction error signal. In these combined ways, the dopamine reward prediction error signal constitutes a potential neuronal substrate for the crucial economic decision variable of utility.


Subject(s)
Behavior/physiology , Dopamine/metabolism , Dopaminergic Neurons/physiology , Animals , Brain/physiology , Decision Making/physiology , Learning/physiology , Reward
18.
Curr Biol ; 27(6): 821-832, 2017 Mar 20.
Article in English | MEDLINE | ID: mdl-28285994

ABSTRACT

Central to the organization of behavior is the ability to predict the values of outcomes to guide choices. The accuracy of such predictions is honed by a teaching signal that indicates how incorrect a prediction was ("reward prediction error," RPE). In several reinforcement learning contexts, such as Pavlovian conditioning and decisions guided by reward history, this RPE signal is provided by midbrain dopamine neurons. In many situations, however, the stimuli predictive of outcomes are perceptually ambiguous. Perceptual uncertainty is known to influence choices, but it has been unclear whether or how dopamine neurons factor it into their teaching signal. To cope with uncertainty, we extended a reinforcement learning model with a belief state about the perceptually ambiguous stimulus; this model generates an estimate of the probability of choice correctness, termed decision confidence. We show that dopamine responses in monkeys performing a perceptually ambiguous decision task comply with the model's predictions. Consequently, dopamine responses did not simply reflect a stimulus' average expected reward value but were predictive of the trial-to-trial fluctuations in perceptual accuracy. These confidence-dependent dopamine responses emerged prior to monkeys' choice initiation, raising the possibility that dopamine impacts impending decisions, in addition to encoding a post-decision teaching signal. Finally, by manipulating reward size, we found that dopamine neurons reflect both the upcoming reward size and the confidence in achieving it. Together, our results show that dopamine responses convey teaching signals that are also appropriate for perceptual decisions.


Subject(s)
Choice Behavior , Decision Making , Dopaminergic Neurons/physiology , Macaca/physiology , Mesencephalon/physiology , Perception , Reinforcement, Psychology , Animals , Dopamine/physiology , Macaca/psychology , Male , Models, Animal , Reward
19.
Cell Rep ; 20(10): 2513-2524, 2017 Sep 05.
Article in English | MEDLINE | ID: mdl-28877482

ABSTRACT

Research in neuroscience increasingly relies on the mouse, a mammalian species that affords unparalleled genetic tractability and brain atlases. Here, we introduce high-yield methods for probing mouse visual decisions. Mice are head-fixed, facilitating repeatable visual stimulation, eye tracking, and brain access. They turn a steering wheel to make two alternative choices, forced or unforced. Learning is rapid thanks to intuitive coupling of stimuli to wheel position. The mouse decisions deliver high-quality psychometric curves for detection and discrimination and conform to the predictions of a simple probabilistic observer model. The task is readily paired with two-photon imaging of cortical activity. Optogenetic inactivation reveals that the task requires mice to use their visual cortex. Mice are motivated to perform the task by fluid reward or optogenetic stimulation of dopamine neurons. This stimulation elicits a larger number of trials and faster learning. These methods provide a platform to accurately probe mouse vision and its neural basis.


Subject(s)
Choice Behavior/physiology , Dopaminergic Neurons/metabolism , Psychophysics/methods , Visual Cortex/metabolism , Visual Cortex/physiology , Animals , Female , Male , Mice , Photic Stimulation
20.
J Comp Neurol ; 524(8): 1699-711, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-26272220

ABSTRACT

Rewards are defined by their behavioral functions in learning (positive reinforcement), approach behavior, economic choices, and emotions. Dopamine neurons respond to rewards with two components, similar to higher order sensory and cognitive neurons. The initial, rapid, unselective dopamine detection component reports all salient environmental events irrespective of their reward association. It is highly sensitive to factors related to reward and thus detects a maximal number of potential rewards. It also senses aversive stimuli but reports their physical impact rather than their aversiveness. The second response component processes reward value accurately and starts early enough to prevent confusion with unrewarded stimuli and objects. It codes reward value as a numeric, quantitative utility prediction error, consistent with formal concepts of economic decision theory. Thus, the dopamine reward signal is fast, highly sensitive and appropriate for driving and updating economic decisions.


Subject(s)
Brain/physiology , Dopaminergic Neurons/physiology , Reward , Animals , Choice Behavior/physiology , Dopamine/metabolism , Humans , Learning/physiology