Results 1 - 20 of 146
1.
J Neurosci ; 44(24)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38670805

ABSTRACT

Reinforcement learning is a theoretical framework that describes how agents learn to select options that maximize rewards and minimize punishments over time. We often make choices, however, to obtain symbolic reinforcers (e.g., money, points) that are later exchanged for primary reinforcers (e.g., food, drink). Although symbolic reinforcers are ubiquitous in daily life and widely used in laboratory tasks because they can be motivating, the mechanisms by which they become motivating are less well understood. In the present study, we examined how monkeys learn to make choices that maximize fluid rewards through reinforcement with tokens. The question addressed here is how the value of a state, which is a function of multiple task features (e.g., the current number of accumulated tokens, the choice options, the task epoch, and the number of trials since the last delivery of primary reinforcer), drives motivation. We constructed a Markov decision process model that computes the value of task states given task features and correlated these values with the motivational state of the animal. Fixation times, choice reaction times, and abort frequency were all significantly related to the values of task states during the tokens task (n = 5 monkeys, three males and two females). Furthermore, the model makes predictions for how neural responses could change on a moment-by-moment basis relative to changes in state value. Together, this task and model allow us to capture learning and behavior related to symbolic reinforcement.
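
For illustration, a minimal Python sketch of the style of model the abstract describes: value iteration over a drastically simplified token task whose only state variable is the number of accumulated tokens. The cash-out threshold, discount factor, and token probability are assumed values, not the study's parameters.

    import numpy as np

    def token_task_values(n_tokens=6, gamma=0.9, p_token=0.75, tol=1e-8):
        # V[s] = value of holding s tokens, s = 0 .. n_tokens-1; reaching
        # n_tokens tokens cashes out for one unit of primary reward and resets to 0.
        V = np.zeros(n_tokens)
        while True:
            V_new = np.empty_like(V)
            for s in range(n_tokens):
                if s + 1 == n_tokens:
                    earn = p_token * (1.0 + gamma * V[0])   # cash-out, then reset
                else:
                    earn = p_token * gamma * V[s + 1]        # one more token, no juice yet
                miss = (1.0 - p_token) * gamma * V[s]        # no token earned, count kept
                V_new[s] = earn + miss
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new

    print(np.round(token_task_values(), 3))   # state value climbs as the cash-out approaches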


Subjects
Choice Behavior , Macaca mulatta , Motivation , Reinforcement, Psychology , Reward , Animals , Motivation/physiology , Male , Choice Behavior/physiology , Reaction Time/physiology , Markov Chains , Female
2.
J Neurosci ; 44(5)2024 01 31.
Article in English | MEDLINE | ID: mdl-38296647

ABSTRACT

Deciding whether to forgo immediate rewards or explore new opportunities is a key component of flexible behavior and is critical for the survival of the species. Although previous studies have shown that different cortical and subcortical areas, including the amygdala and ventral striatum (VS), are implicated in representing the immediate (exploitative) and future (explorative) value of choices, the effect of the motor system used to make choices has not been examined. Here, we tested male rhesus macaques with amygdala or VS lesions on two versions of a three-arm bandit task where choices were registered with either a saccade or an arm movement. In both tasks we presented the monkeys with explore-exploit tradeoffs by periodically replacing familiar options with novel options that had unknown reward probabilities. We found that monkeys explored more with saccades but showed better learning with arm movements. VS lesions caused the monkeys to be more explorative with arm movements and less explorative with saccades, although this may have been due to an overall decrease in performance. VS lesions affected the monkeys' ability to learn novel stimulus-reward associations in both tasks, while after amygdala lesions this effect was stronger when choices were made with saccades. Further, on average, VS and amygdala lesions reduced the monkeys' ability to choose better options only when choices were made with a saccade. These results show that learning reward value associations to manage explore-exploit behaviors is motor system dependent and further define the contributions of amygdala and VS to reinforcement learning.


Subjects
Choice Behavior , Ventral Striatum , Animals , Male , Macaca mulatta , Reinforcement, Psychology , Amygdala , Reward
3.
Proc Natl Acad Sci U S A ; 119(22): e2121331119, 2022 05 31.
Article in English | MEDLINE | ID: mdl-35622896

ABSTRACT

Adolescent development is characterized by an improvement in multiple cognitive processes. While performance on cognitive operations improves during this period, the ability to learn new skills quickly (for example, a new language) decreases. During this time, there is substantial pruning of excitatory synapses in cortex and specifically in prefrontal cortex. We have trained a series of recurrent neural networks to solve a working memory task and a reinforcement learning (RL) task. Performance on both of these tasks is known to improve during adolescence. After training, we pruned the networks by removing weak synapses. Pruning was done incrementally, and the networks were retrained during pruning. We found that pruned networks trained on the working memory task were more resistant to distraction. The pruned RL networks were able to produce more accurate value estimates and also make optimal choices more consistently. Both results are consistent with developmental improvements on these tasks. Pruned networks, however, learned some, but not all, new problems more slowly. Thus, improvements in task performance can come at the cost of flexibility. Our results show that overproduction and subsequent pruning of synapses is a computationally advantageous approach to building a competent brain.
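
A minimal sketch of the incremental magnitude-pruning loop the abstract describes, with an assumed pruning schedule; retrain is a hypothetical placeholder for the task-specific optimization the actual networks undergo.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 1.0 / np.sqrt(100), size=(100, 100))  # recurrent weight matrix
    mask = np.ones_like(W, dtype=bool)                         # True = synapse still present

    def retrain(W, mask, n_steps=100):
        # Placeholder: in the actual models this would run gradient descent on the
        # working-memory or RL task while pruned entries are held at zero.
        return W * mask

    prune_fraction_per_step = 0.05        # assumed schedule: remove ~5% of surviving synapses
    for step in range(10):
        # rank the surviving synapses by absolute weight and remove the weakest
        alive = np.abs(W[mask])
        threshold = np.quantile(alive, prune_fraction_per_step)
        mask &= np.abs(W) > threshold
        W = retrain(W, mask)              # interleave pruning with retraining
        print(f"step {step}: {mask.mean():.2%} of synapses remain")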


Subjects
Adolescent Development , Memory, Short-Term , Neural Networks, Computer , Neuronal Plasticity , Reinforcement, Psychology , Synapses , Adolescent , Humans , Memory, Short-Term/physiology , Models, Neurological , Prefrontal Cortex/physiology
4.
J Neurosci ; 43(50): 8723-8732, 2023 12 13.
Article in English | MEDLINE | ID: mdl-37848282

ABSTRACT

Adolescence is an important developmental period, during which substantial changes occur in brain function and behavior. Several aspects of executive function, including response inhibition, improve during this period. Correspondingly, structural imaging studies have documented consistent decreases in cortical and subcortical gray matter volume, and postmortem histologic studies have found substantial (∼40%) decreases in excitatory synapses in prefrontal cortex. Recent computational modeling work suggests that the change in synaptic density underlies improvements in task performance. These models also predict changes in neural dynamics related to the depth of attractor basins, where deeper basins can underlie better task performance. In this study, we analyzed task-related neural dynamics in a large cohort of longitudinally followed subjects (male and female) spanning early to late adolescence. We found that age correlated positively with behavioral performance in the Eriksen Flanker task. Older subjects were also characterized by deeper attractor basins around task-related evoked EEG potentials during specific cognitive operations. Thus, consistent with computational models examining the effects of excitatory synaptic pruning, older adolescents showed stronger attractor dynamics during task performance. SIGNIFICANCE STATEMENT: There are well-documented changes in brain and behavior during adolescent development. However, there are few mechanistic theories that link changes in the brain to changes in behavior. Here, we tested a hypothesis, put forward on the basis of computational modeling, that pruning of excitatory synapses in cortex during adolescence changes neural dynamics. We found, consistent with the hypothesis, that variability around event-related potentials shows faster decay dynamics in older adolescent subjects. The faster decay dynamics are consistent with the hypothesis that synaptic pruning during adolescent development leads to stronger attractor basins in task-related neural activity.


Subjects
Adolescent Development , Brain , Adolescent , Humans , Male , Female , Aged , Brain/physiology , Prefrontal Cortex , Executive Function , Gray Matter
5.
PLoS Comput Biol ; 19(1): e1010873, 2023 01.
Article in English | MEDLINE | ID: mdl-36716320

ABSTRACT

Choice impulsivity is characterized by the choice of immediate, smaller reward options over future, larger reward options, and is often thought to be associated with negative life outcomes. However, some environments make future rewards more uncertain, and in these environments impulsive choices can be beneficial. Here we examined the conditions under which impulsive vs. non-impulsive decision strategies would be advantageous. We used Markov Decision Processes (MDPs) to model three common decision-making tasks: Temporal Discounting, Information Sampling, and an Explore-Exploit task. We manipulated environmental variables to create circumstances where future outcomes were relatively uncertain. We then manipulated the discount factor of an MDP agent, which affects the value of immediate versus future rewards, to model impulsive and non-impulsive behavior. This allowed us to examine the performance of impulsive and non-impulsive agents in more or less predictable environments. In Temporal Discounting, we manipulated the transition probability to delayed rewards and found that the agent with the lower discount factor (i.e. the impulsive agent) collected more average reward than the agent with a higher discount factor (the non-impulsive agent) by selecting immediate reward options when the probability of receiving the future reward was low. In the Information Sampling task, we manipulated the amount of information obtained with each sample. When sampling led to small information gains, the impulsive MDP agent collected more average reward than the non-impulsive agent. Third, in the Explore-Exploit task, we manipulated the substitution rate for novel options. When the substitution rate was high, the impulsive agent again performed better than the non-impulsive agent, as it explored the novel options less and instead exploited options with known reward values. The results of these analyses show that impulsivity can be advantageous in environments that are unexpectedly uncertain.
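
A minimal sketch of how a discount factor and the probability of actually reaching a delayed reward jointly determine which option an MDP-style agent prefers; the reward sizes, delay, and parameter values are assumed for illustration and are not the paper's.

    def option_values(gamma, p_transition, r_small=1.0, r_large=3.0, delay=4):
        """Value of taking the immediate reward vs. waiting for the delayed one."""
        v_immediate = r_small
        # at each of `delay` steps the transition toward the delayed reward survives
        # with probability p_transition; otherwise the delayed reward is lost
        v_delayed = (gamma * p_transition) ** delay * r_large
        return v_immediate, v_delayed

    for gamma, label in [(0.6, "impulsive"), (0.95, "non-impulsive")]:
        for p in (0.5, 0.95):
            v_now, v_later = option_values(gamma, p)
            choice = "immediate" if v_now > v_later else "delayed"
            print(f"{label:14s} gamma={gamma:.2f} p={p:.2f} -> chooses {choice}")

When the transition probability is low, both agents take the immediate reward, so the low discount factor costs nothing; only in reliable environments does the non-impulsive agent's patience pay off.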


Subjects
Impulsive Behavior , Reward , Uncertainty , Probability , Markov Chains , Choice Behavior
6.
Cogn Dev ; 66, 2023.
Article in English | MEDLINE | ID: mdl-37033205

ABSTRACT

Previous research showed that uncertain, stochastic feedback drastically reduces children's performance. Here, 145 children aged 7 to 11 years learned sets of sequences of four left-right button presses, with each press followed by a red/green signal. On 15% of trials the feedback was randomly false; after each such trial, children received a verbal debrief stating either (1) that the feedback was a mistake or (2) that it was a lie, or (3) they received a reassuring comment on the 85% of trials with correct feedback. The control group received no verbal debrief. In the stochastic condition, children reflected more on previous trials than with 100% correct feedback. Verbal debriefs helped children to overcome the deterioration seen in the first two repetitions. The mistake debrief allowed the false feedback to be discarded and was therefore the most helpful comment. Lie debriefs yielded the most reflection on previous experience. Reassurance comments were not as effective.

7.
J Neurosci ; 41(6): 1301-1316, 2021 02 10.
Article in English | MEDLINE | ID: mdl-33303679

ABSTRACT

Spatial selective listening and auditory choice underlie important processes including attending to a speaker at a cocktail party and knowing how (or whether) to respond. To examine task encoding and the relative timing of potential neural substrates underlying these behaviors, we developed a spatial selective detection paradigm for monkeys, and recorded activity in primary auditory cortex (AC), dorsolateral prefrontal cortex (dlPFC), and the basolateral amygdala (BLA). A comparison of neural responses among these three areas showed that, as expected, AC encoded the side of the cue and target characteristics before dlPFC and BLA. Interestingly, AC also encoded the choice of the monkey before dlPFC and around the time of BLA. Generally, BLA showed weak responses to all task features except the choice. Decoding analyses suggested that errors followed from a failure to encode the target stimulus in both AC and dlPFC, but again, these differences arose earlier in AC. The similarities between AC and dlPFC responses were abolished during passive sensory stimulation with identical trial conditions, suggesting that the robust sensory encoding in dlPFC is contextually gated. Thus, counter to a strictly PFC-driven decision process, in this spatial selective listening task AC neural activity represents the sensory and decision information before dlPFC. Unlike in the visual domain, in this auditory task, the BLA does not appear to be robustly involved in selective spatial processing. SIGNIFICANCE STATEMENT: We examined neural correlates of an auditory spatial selective listening task by recording single-neuron activity in behaving monkeys from the amygdala, dorsolateral prefrontal cortex, and auditory cortex. We found that auditory cortex coded spatial cues and choice-related activity before dorsolateral prefrontal cortex or the amygdala. Auditory cortex also had robust delay period activity. Therefore, we found that auditory cortex could support the neural computations that underlie the behavioral processes in the task.


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Basolateral Nuclear Complex/physiology , Decision Making/physiology , Psychomotor Performance/physiology , Acoustic Stimulation/methods , Animals , Auditory Cortex/diagnostic imaging , Basolateral Nuclear Complex/diagnostic imaging , Macaca mulatta , Male , Photic Stimulation/methods , Prefrontal Cortex/diagnostic imaging , Prefrontal Cortex/physiology
8.
J Cogn Neurosci ; 34(8): 1307-1325, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35579977

ABSTRACT

To effectively behave within ever-changing environments, biological agents must learn and act at varying hierarchical levels such that a complex task may be broken down into more tractable subtasks. Hierarchical reinforcement learning (HRL) is a computational framework that provides an understanding of this process by combining sequential actions into one temporally extended unit called an option. However, there are still open questions within the HRL framework, including how options are formed and how HRL mechanisms might be realized within the brain. In this review, we propose that the existing human motor sequence literature can aid in understanding both of these questions. We give specific emphasis to visuomotor sequence learning tasks such as the discrete sequence production task and the M × N (M steps × N sets) task to understand how hierarchical learning and behavior manifest across sequential action tasks as well as how the dorsal cortical-subcortical circuitry could support this kind of behavior. This review highlights how motor chunks within a motor sequence can function as HRL options. Furthermore, we aim to merge findings from motor sequence literature with reinforcement learning perspectives to inform experimental design in each respective subfield.


Subjects
Deep Learning , Brain , Humans , Learning , Reinforcement, Psychology
9.
Exp Brain Res ; 240(9): 2241-2253, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35852565

ABSTRACT

Some patients with Parkinson's disease (PD) experience impulse control disorders (ICDs), characterized by deficient voluntary control over impulses, drives, or temptations regarding excessive hedonic behavior. The present study aimed to better understand the neural basis of impulsive, risky decision making in PD patients with ICDs by disentangling potential dysfunctions in decision and outcome mechanisms. We collected fMRI data from 20 patients with ICDs and 28 without ICDs performing an information gathering task. Patients viewed sequences of bead colors drawn from hidden urns and were instructed to infer the majority bead color in each urn. With each new bead, they could choose to either seek more evidence by drawing another bead (draw choice) or make an urn-inference (urn choice followed by feedback). We manipulated risk via the probability of bead color splits (80/20 vs. 60/40) and potential loss following an incorrect inference ($10 vs. $0). Patients also completed the Barratt Impulsiveness Scale (BIS) to assess impulsivity. Patients with ICDs showed greater urn choice-specific activation in the right middle frontal gyrus, overlapping the dorsal premotor cortex. Across all patients, fewer draw choices (i.e., more impulsivity) were associated with greater activation during both decision making and outcome processing in a variety of frontal and parietal areas, cerebellum, and bilateral striatum. Our findings demonstrate that ICDs in PD are associated with differences in neural processing of risk-related information and outcomes, implicating both reward and sensorimotor dopaminergic pathways.
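
A minimal sketch of Bayesian evidence accumulation in a beads-style task like the one described above; the bead sequence, split, and confidence threshold are hypothetical, and the study's actual analysis is not reproduced here.

    def posterior_majority(n_red, n_blue, split=0.8):
        """P(majority colour is red | beads seen) for an 80/20 (or 60/40) split."""
        like_red = split ** n_red * (1 - split) ** n_blue
        like_blue = (1 - split) ** n_red * split ** n_blue
        return like_red / (like_red + like_blue)

    beads = "rrbrr"          # hypothetical observed draw sequence
    threshold = 0.9          # assumed confidence criterion; a lower value means fewer draws
    n_red = n_blue = 0
    for i, b in enumerate(beads, start=1):
        n_red += b == "r"
        n_blue += b == "b"
        p = posterior_majority(n_red, n_blue, split=0.8)
        print(f"draw {i}: P(red majority) = {p:.3f}")
        if max(p, 1 - p) >= threshold:           # confident enough: make the urn choice
            print("urn choice:", "red" if p > 0.5 else "blue")
            break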


Subjects
Disruptive, Impulse Control, and Conduct Disorders , Parkinson Disease , Decision Making/physiology , Disruptive, Impulse Control, and Conduct Disorders/complications , Disruptive, Impulse Control, and Conduct Disorders/etiology , Humans , Impulsive Behavior/physiology , Reward
10.
Cereb Cortex ; 31(1): 529-546, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32954409

ABSTRACT

The neural systems that underlie reinforcement learning (RL) allow animals to adapt to changes in their environment. In the present study, we examined the hypothesis that the amygdala would have a preferential role in learning the values of visual objects. We compared a group of monkeys (Macaca mulatta) with amygdala lesions to a group of unoperated controls on a two-armed bandit reversal learning task. The task had two conditions. In the What condition, the animals had to learn to select a visual object, independent of its location; in the Where condition, they had to learn to saccade to a location, independent of the object at that location. In both conditions, choice-outcome mappings reversed in the middle of the block. We found that monkeys with amygdala lesions had learning deficits in both conditions. Monkeys with amygdala lesions did not have deficits in learning to reverse choice-outcome mappings. Rather, amygdala lesions caused the monkeys to become overly sensitive to negative feedback, which impaired their ability to consistently select the more highly valued action or object. These results imply that the amygdala is generally necessary for RL.


Subjects
Amygdala/injuries , Behavior, Animal/physiology , Choice Behavior/physiology , Reversal Learning/physiology , Reward , Amygdala/physiology , Animals , Macaca mulatta , Psychomotor Performance/physiology
11.
J Neurosci ; 40(12): 2553-2561, 2020 03 18.
Article in English | MEDLINE | ID: mdl-32060169

ABSTRACT

Reinforcement learning (RL) refers to the behavioral process of learning to obtain reward and avoid punishment. An important component of RL is managing explore-exploit tradeoffs, which refers to the problem of choosing between exploiting options with known values and exploring unfamiliar options. We examined correlates of this tradeoff, as well as other RL-related variables, in orbitofrontal cortex (OFC) while three male monkeys performed a three-armed bandit learning task. During the task, novel choice options periodically replaced familiar options. The values of the novel options were unknown, and the monkeys had to explore them to see if they were better than other currently available options. The identity of the chosen stimulus and the reward outcome were strongly encoded in the responses of single OFC neurons. These two variables define the states and state transitions in our model that are relevant to decision-making. The chosen value of the option and the relative value of exploring that option were encoded at intermediate levels. We also found that OFC value coding was stimulus specific, as opposed to coding value independent of the identity of the option. The location of the option and the value of the current environment were encoded at low levels. Therefore, we found encoding of the variables relevant to learning and managing explore-exploit tradeoffs in OFC. These results are consistent with findings in the ventral striatum and amygdala and show that this monosynaptically connected network plays an important role in learning based on the immediate and future consequences of choices. SIGNIFICANCE STATEMENT: Orbitofrontal cortex (OFC) has been implicated in representing the expected values of choices. Here we extend these results and show that OFC also encodes information relevant to managing explore-exploit tradeoffs. Specifically, OFC encodes an exploration bonus, which characterizes the relative value of exploring novel choice options. OFC also strongly encodes the identity of the chosen stimulus, and reward outcomes, which are necessary for computing the value of novel and familiar options.
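
A minimal sketch of a bandit learner whose choices combine learned values with an exploration bonus for rarely sampled options, the kind of quantity the abstract reports being encoded in OFC; the reward probabilities, learning rate, and bonus scale are assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    true_p = np.array([0.2, 0.5, 0.8])       # hypothetical reward probabilities of three arms
    Q = np.zeros(3)                           # learned value estimates
    counts = np.zeros(3)                      # how often each arm has been sampled
    alpha, bonus_weight = 0.1, 0.5            # assumed learning rate and bonus scale

    for t in range(300):
        exploration_bonus = bonus_weight / np.sqrt(counts + 1)   # larger for rarely tried arms
        choice = int(np.argmax(Q + exploration_bonus))
        reward = float(rng.random() < true_p[choice])
        counts[choice] += 1
        Q[choice] += alpha * (reward - Q[choice])                 # delta-rule value update

    print("estimated values:", np.round(Q, 2))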


Subjects
Exploratory Behavior/physiology , Prefrontal Cortex/physiology , Amygdala/cytology , Amygdala/physiology , Animals , Choice Behavior/physiology , Conditioning, Operant , Learning/physiology , Macaca mulatta , Male , Neurons/physiology , Prefrontal Cortex/cytology , Psychomotor Performance/physiology , Punishment , Reward , Ventral Striatum/cytology , Ventral Striatum/physiology
12.
J Neurosci ; 40(8): 1668-1678, 2020 02 19.
Article in English | MEDLINE | ID: mdl-31941667

ABSTRACT

Understanding the neural code requires understanding how populations of neurons code information. Theoretical models predict that information may be limited by correlated noise in large neural populations. Nevertheless, analyses based on tens of neurons have failed to find evidence of saturation. Moreover, some studies have shown that noise correlations can be very small, and therefore may not affect information coding. To determine whether information-limiting correlations exist, we implanted eight Utah arrays in prefrontal cortex (PFC; area 46) of two male macaque monkeys, recording >500 neurons simultaneously. We estimated information in PFC about saccades as a function of ensemble size. Noise correlations were, on average, small (∼10⁻³). However, information scaled strongly sublinearly with ensemble size. After shuffling trials, destroying noise correlations, information was a linear function of ensemble size. Thus, we provide evidence for the existence of information-limiting noise correlations in large populations of PFC neurons. SIGNIFICANCE STATEMENT: Recent theoretical work has shown that even small correlations can limit information if they are "differential correlations," which are difficult to measure directly. However, they can be detected through decoding analyses on recordings from a large number of neurons over a large number of trials. We have achieved both by collecting neural activity in dorsal-lateral prefrontal cortex of macaques using eight microelectrode arrays (768 electrodes), from which we were able to compute accurate information estimates. We show, for the first time, strong evidence for information-limiting correlations. Despite pairwise correlations being small (on the order of 10⁻³), they affect information coding in populations on the order of 100s of neurons.
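
A minimal simulation sketch of the logic of the decoding analysis: when noise contains a small component aligned with the signal direction ("differential correlations"), decoding accuracy saturates with ensemble size, and shuffling trials within each condition removes the saturation. The simulated data, noise levels, and simple readout are illustrative assumptions, not the recordings or the authors' decoder.

    import numpy as np

    rng = np.random.default_rng(2)
    n_neurons, n_trials = 400, 400
    signal = rng.normal(size=n_neurons)             # tuning difference between the two saccades
    labels = np.repeat([0, 1], n_trials // 2)
    shared = rng.normal(size=(n_trials, 1))         # trial-wise noise shared along the signal axis
    X = (labels[:, None] * signal                   # class means differ by the signal vector
         + 0.5 * shared * signal                    # "differential" (information-limiting) noise
         + 3.0 * rng.normal(size=(n_trials, n_neurons)))   # private noise

    def decode_accuracy(X, labels, size, shuffle=False):
        idx = rng.choice(X.shape[1], size, replace=False)
        Xs = X[:, idx].copy()
        if shuffle:                                  # destroy noise correlations within each class
            for c in (0, 1):
                rows = np.where(labels == c)[0]
                for j in range(size):
                    Xs[rows, j] = Xs[rng.permutation(rows), j]
        mu0, mu1 = Xs[labels == 0].mean(0), Xs[labels == 1].mean(0)
        w = mu1 - mu0                                # simple nearest-class-mean linear readout
        pred = (Xs @ w > (mu0 + mu1) @ w / 2).astype(int)
        return (pred == labels).mean()

    for size in (10, 50, 100, 200, 400):
        print(size,
              round(decode_accuracy(X, labels, size), 3),                  # saturates
              round(decode_accuracy(X, labels, size, shuffle=True), 3))    # keeps improving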


Subjects
Models, Neurological , Nerve Net/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Action Potentials/physiology , Animals , Macaca mulatta , Male , Microelectrodes , Photic Stimulation , Saccades/physiology
13.
J Neurophysiol ; 126(4): 1289-1309, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34379536

ABSTRACT

The connectivity among architectonically defined areas of the frontal, parietal, and temporal cortex of the macaque has been extensively mapped through tract-tracing methods. To investigate the statistical organization underlying this connectivity, and identify its underlying architecture, we performed a hierarchical cluster analysis on 69 cortical areas based on their anatomically defined inputs. We identified 10 frontal, four parietal, and five temporal hierarchically related sets of areas (clusters), defined by unique sets of inputs and typically composed of anatomically contiguous areas. Across the cortex, clusters that share functional properties were linked by dominant information processing circuits in a topographically organized manner that reflects the organization of the main fiber bundles in the cortex. This led to a dorsal-ventral subdivision of the frontal cortex, where dorsal and ventral clusters showed privileged connectivity with parietal and temporal areas, respectively. Ventrally, temporofrontal circuits encode information to discriminate objects in the environment, their value, emotional properties, and functions such as memory and spatial navigation. Dorsal parietofrontal circuits encode information for selecting, generating, and monitoring appropriate actions based on visual-spatial and somatosensory information. This organization may reflect evolutionary antecedents, in which the vertebrate pallium, which is the ancestral cortex, was defined by a ventral and lateral olfactory region and a medial hippocampal region. NEW & NOTEWORTHY: The study of cortical connectivity is crucial for understanding brain function and disease. We show that temporofrontal and parietofrontal networks in the macaque can be described in terms of circuits among clusters of areas that share similar inputs and functional properties. The resulting overall architecture described a dual subdivision of the frontal cortex, consistent with the main cortical fiber bundles and an evolutionary trend that underlies the organization of the cortex in the macaque.
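
A minimal sketch of hierarchical clustering of areas by their input profiles, the style of analysis described above, using toy data in place of the tract-tracing database; the area labels, input matrix, distance metric, and linkage method are assumptions for illustration.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(3)
    areas = [f"area_{i:02d}" for i in range(12)]          # hypothetical area labels
    inputs = rng.integers(0, 2, size=(12, 40))            # rows: areas, cols: presence of each input

    # distance between two areas = dissimilarity of their binary input profiles
    dist = pdist(inputs.astype(bool), metric="jaccard")
    Z = linkage(dist, method="average")                   # agglomerative hierarchical clustering
    cluster_ids = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
    for area, cid in zip(areas, cluster_ids):
        print(area, "-> cluster", cid)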


Subjects
Frontal Lobe , Macaca , Nerve Net , Parietal Lobe , Temporal Lobe , Animals , Cluster Analysis , Frontal Lobe/anatomy & histology , Frontal Lobe/physiology , Macaca/anatomy & histology , Macaca/physiology , Nerve Net/anatomy & histology , Nerve Net/physiology , Parietal Lobe/anatomy & histology , Parietal Lobe/physiology , Temporal Lobe/anatomy & histology , Temporal Lobe/physiology
14.
PLoS Comput Biol ; 16(4): e1007514, 2020 04.
Article in English | MEDLINE | ID: mdl-32330126

ABSTRACT

Learning leads to changes in population patterns of neural activity. In this study we wanted to examine how these changes in patterns of activity affect the dimensionality of neural responses and information about choices. We addressed these questions by carrying out high-channel-count recordings in dorsal-lateral prefrontal cortex (dlPFC; 768 electrodes) while monkeys performed a two-armed bandit reinforcement learning task. The high-channel-count recordings allowed us to study population coding while monkeys learned choices between actions or objects. We found that the dimensionality of neural population activity was higher across blocks in which animals learned the values of novel pairs of objects than across blocks in which they learned the values of actions. The increase in dimensionality with learning in object blocks was related to less shared information across blocks (that is, patterns of neural activity that were less similar) compared with learning in action blocks. Furthermore, these differences emerged with learning, and were not a simple function of the choice of a visual image or action. Therefore, learning the values of novel objects increases the dimensionality of neural representations in dlPFC.
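
The abstract does not specify the dimensionality estimator, so the sketch below uses one common choice, the participation ratio of the covariance eigenvalue spectrum, applied to simulated activity; this is an assumption for illustration, not the paper's method.

    import numpy as np

    def participation_ratio(X):
        """X: trials x neurons. Returns the effective dimensionality of the activity."""
        eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
        eigvals = np.clip(eigvals, 0, None)
        return eigvals.sum() ** 2 / (eigvals ** 2).sum()

    rng = np.random.default_rng(4)
    low_d = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 100))   # activity confined to ~3 dims
    high_d = rng.normal(size=(500, 100))                            # activity spread over many dims
    print("low-D activity :", round(participation_ratio(low_d), 1))
    print("high-D activity:", round(participation_ratio(high_d), 1))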


Subjects
Brain Mapping , Learning/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Algorithms , Animals , Electrodes , Eye Movements , Image Processing, Computer-Assisted , Light , Macaca , Male , Microelectrodes , Photic Stimulation , Reinforcement, Psychology , Reward , Saccades
15.
Proc Natl Acad Sci U S A ; 115(52): E12398-E12406, 2018 12 26.
Article in English | MEDLINE | ID: mdl-30545910

ABSTRACT

Adaptive behavior requires animals to learn from experience. Ideally, learning should both promote choices that lead to rewards and reduce choices that lead to losses. Because the ventral striatum (VS) contains neurons that respond to aversive stimuli and aversive stimuli can drive dopamine release in the VS, it is possible that the VS contributes to learning about aversive outcomes, including losses. However, other work suggests that the VS may play a specific role in learning to choose among rewards, with other systems mediating learning from aversive outcomes. To examine the role of the VS in learning from gains and losses, we compared the performance of macaque monkeys with VS lesions and unoperated controls on a reinforcement learning task. In the task, the monkeys gained or lost tokens, which were periodically cashed out for juice, as outcomes for choices. They learned over trials to choose cues associated with gains, and not choose cues associated with losses. We found that monkeys with VS lesions had a deficit in learning to choose between cues that differed in reward magnitude. By contrast, monkeys with VS lesions performed as well as controls when choices involved a potential loss. We also fit reinforcement learning models to the behavior and compared learning rates between groups. Relative to controls, the monkeys with VS lesions had reduced learning rates for gain cues. Therefore, in this task, the VS plays a specific role in learning to choose between rewarding options.
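
A minimal sketch of a delta-rule value update with separate learning rates for gain and loss outcomes, the kind of reinforcement-learning model whose learning rates the study compares across groups; all parameter values and the simulated cues are hypothetical, and reducing the gain learning rate mimics the selective deficit described above.

    import numpy as np

    def simulate_learning(alpha_gain, alpha_loss, n_trials=100, p=0.75, seed=0):
        # Learn values for a gain-predicting cue (+1 token) and a loss-predicting cue (-1 token).
        rng = np.random.default_rng(seed)
        q_gain, q_loss = 0.0, 0.0
        for _ in range(n_trials):
            gain_outcome = 1.0 if rng.random() < p else 0.0
            loss_outcome = -1.0 if rng.random() < p else 0.0
            # learning rate depends on whether the obtained outcome was a gain or not
            q_gain += (alpha_gain if gain_outcome > 0 else alpha_loss) * (gain_outcome - q_gain)
            q_loss += (alpha_gain if loss_outcome > 0 else alpha_loss) * (loss_outcome - q_loss)
        return q_gain, q_loss

    print("control      :", np.round(simulate_learning(alpha_gain=0.30, alpha_loss=0.30), 2))
    print("reduced gain :", np.round(simulate_learning(alpha_gain=0.05, alpha_loss=0.30), 2))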


Subjects
Choice Behavior/physiology , Learning/physiology , Ventral Striatum/physiology , Animals , Dopamine/physiology , Macaca mulatta/metabolism , Neurons/physiology , Reaction Time/physiology , Reinforcement, Psychology , Reward , Ventral Striatum/injuries
16.
J Cogn Neurosci ; 31(7): 1054-1064, 2019 07.
Article in English | MEDLINE | ID: mdl-30883292

ABSTRACT

The mismatch negativity (MMN) is an ERP component seen in response to unexpected "novel" stimuli, such as in an auditory oddball task. The MMN is of wide interest and application, but the neural responses that generate it are poorly understood. This is in part due to differences in design and focus between animal and human oddball paradigms. For example, one of the main explanatory models, the "predictive error hypothesis", posits differences in timing and selectivity between signals carried in auditory and prefrontal cortex (PFC). However, these predictions have not been fully tested because (1) noninvasive techniques used in humans lack the combined spatial and temporal precision necessary for these comparisons and (2) single-neuron studies in animal models, which combine necessary spatial and temporal precision, have not focused on higher order contributions to novelty signals. In addition, accounts of the MMN traditionally do not address contributions from subcortical areas known to be involved in novelty detection, such as the amygdala. To better constrain hypotheses and to address methodological gaps between human and animal studies, we recorded single neuron activity from the auditory cortex, dorsolateral PFC, and basolateral amygdala of two macaque monkeys during an auditory oddball paradigm modeled after that used in humans. Consistent with predictions of the predictive error hypothesis, novelty signals in PFC were generally later than in auditory cortex and were abstracted from stimulus-specific effects seen in auditory cortex. However, we found signals in amygdala that were comparable in magnitude and timing to those in PFC, and both prefrontal and amygdala signals were generally much weaker than those in auditory cortex. These observations place useful quantitative constraints on putative generators of the auditory oddball-based MMN and additionally indicate that there are subcortical areas, such as the amygdala, that may be involved in novelty detection in an auditory oddball paradigm.


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Basolateral Nuclear Complex/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Acoustic Stimulation , Animals , Macaca mulatta , Male
17.
J Neurophysiol ; 122(4): 1530-1537, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31166811

ABSTRACT

The perception of emotionally arousing scenes modulates neural activity in ventral visual areas via reentrant signals from the amygdala. The orbitofrontal cortex (OFC) shares dense interconnections with amygdala and has been strongly implicated in emotional stimulus processing in primates, but our understanding of the functional contribution of this region to emotional perception in humans is poorly defined. In this study we acquired targeted rapid functional imaging from lateral OFC, amygdala, and fusiform gyrus (FG) over multiple scanning sessions (resulting in over 1,000 trials per participant) in an effort to define the activation amplitude and directional connectivity among these regions during naturalistic scene perception. All regions of interest showed enhanced activation during emotionally arousing, compared with neutral scenes. In addition, we identified bidirectional connectivity between amygdala, FG, and OFC in the great majority of individual subjects, suggesting that human emotional perception is implemented in part via nonhierarchical causal interactions across these three regions. NEW & NOTEWORTHY: Due to the practical limitations of noninvasive recording methodologies, there is a scarcity of data regarding the interactions of human amygdala and orbitofrontal cortex (OFC). Using rapid functional MRI sampling and directional connectivity, we found that the human amygdala influences emotional perception via distinct interactions with late-stage ventral visual cortex and OFC, in addition to distinct interactions between OFC and fusiform gyrus. Future efforts may leverage these patterns of directional connectivity to noninvasively distinguish clinical groups from controls with respect to network causal hierarchy.


Subjects
Amygdala/physiology , Emotions/physiology , Prefrontal Cortex/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neural Pathways/physiology , Young Adult
18.
Cogn Psychol ; 111: 1-14, 2019 06.
Article in English | MEDLINE | ID: mdl-30826584

ABSTRACT

In realistic and challenging decision contexts, people may show biases that prevent them from choosing their favored options. For example, astronomer Johannes Kepler famously interviewed several candidate fiancées sequentially, but was rejected when attempting to return to a previous candidate. Similarly, we examined human performance on searches for attractive faces through fixed-length sequences by adapting optimal stopping computational theory developed from behavioral ecology and economics. Although economics studies have repeatedly found that participants sample too few options before choosing the best-ranked number from a series, we instead found overlong searches with many sequences ending without choice. Participants employed irrationally high choice thresholds, compared to the more lax, realistic standards of a Bayesian ideal observer, which achieved better-ranked faces. We consider several computational accounts and find that participants most resemble a Bayesian model that decides based on altered attractiveness values. These values may produce starkly different biases in the facial attractiveness domain than in other decision domains.
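
A minimal sketch of backward-induction acceptance thresholds for an ideal observer searching a fixed-length sequence with no return to passed-over options, assuming option values drawn i.i.d. from Uniform(0, 1) rather than the attractiveness ratings used in the study.

    def ideal_observer_thresholds(n_options):
        """thresholds[i] = minimum value worth accepting at position i (0-based)."""
        V = 0.5                               # expected payoff when only one option remains
        thresholds = [0.0]                    # the final option must be taken
        for _ in range(n_options - 1):
            thresholds.append(V)              # accept the current draw only if it beats continuing
            V = (1.0 + V ** 2) / 2.0          # value of still having one more option ahead
        return list(reversed(thresholds))

    print([round(t, 3) for t in ideal_observer_thresholds(8)])
    # thresholds fall as the sequence runs out; holding them high throughout, as participants
    # tended to do, produces overlong searches and sequences that end without a choice
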


Subjects
Choice Behavior , Decision Making , Facial Expression , Physical Appearance, Body/physiology , Bias , Female , Humans , Male , Models, Statistical
19.
J Neurosci ; 37(17): 4552-4564, 2017 04 26.
Article in English | MEDLINE | ID: mdl-28336572

ABSTRACT

The neural underpinnings of rhythmic behavior, including music and dance, have been studied using the synchronization-continuation task (SCT), where subjects initially tap in synchrony with an isochronous metronome and then keep tapping at a similar rate via an internal beat mechanism. Here, we provide behavioral and neural evidence that supports a resetting drift-diffusion model (DDM) during SCT. Behaviorally, we show the model replicates the linear relation between the mean and standard deviation of the intervals produced by monkeys in SCT. We then show that neural populations in the medial premotor cortex (MPC) contain an accurate trial-by-trial representation of the elapsed time between taps. Interestingly, the autocorrelation structure of the elapsed-time representation is consistent with a DDM. These results indicate that MPC has an orderly representation of time with features characteristic of concatenated DDMs and that this population signal can be used to orchestrate the rhythmic structure of the internally timed elements of SCT. SIGNIFICANCE STATEMENT: The present study used behavioral data, ensemble recordings from medial premotor cortex (MPC) in macaque monkeys, and computational modeling to establish evidence in favor of a class of drift-diffusion models of rhythmic timing during a synchronization-continuation tapping task (SCT). The linear relation between the mean and standard deviation of the intervals produced by monkeys in SCT is replicated by the model. Populations of MPC cells faithfully represent the elapsed time between taps, and there is a significant trial-by-trial relation between decoded times and the timing behavior of the monkeys. Notably, the neural decoding properties, including their autocorrelation structure, are consistent with a set of drift-diffusion models that are arranged sequentially and that reset at each SCT tap.
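
A minimal sketch of a resetting drift-diffusion timer; with trial-to-trial drift variability (an assumed parameterization, not the paper's fitted model), the standard deviation of the produced intervals grows approximately linearly with their mean.

    import numpy as np

    rng = np.random.default_rng(6)

    def produce_intervals(target_ms, n_taps=500, bound=1.0, dt=2.0,
                          drift_cv=0.1, diffusion=0.002):
        # Each interval is the first-passage time of a noisy accumulator to the bound.
        mean_drift = bound / target_ms                 # reaches the bound at the target, on average
        intervals = []
        for _ in range(n_taps):
            drift = mean_drift * (1.0 + drift_cv * rng.normal())   # trial-to-trial drift noise
            x, t = 0.0, 0.0
            while x < bound:
                x += drift * dt + diffusion * np.sqrt(dt) * rng.normal()
                t += dt
            intervals.append(t)                        # tap produced; accumulator resets to zero
        return np.asarray(intervals)

    for target in (450, 650, 850, 1000):
        iv = produce_intervals(target)
        print(target, round(float(iv.mean())), round(float(iv.std()), 1))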


Subjects
Cortical Synchronization/physiology , Motor Cortex/physiology , Periodicity , Acoustic Stimulation , Algorithms , Animals , Behavior, Animal/physiology , Macaca mulatta , Male , Models, Neurological , Motor Cortex/cytology , Neurons/physiology , Psychomotor Performance/physiology , Reaction Time
20.
J Neurosci ; 37(8): 2186-2202, 2017 02 22.
Article in English | MEDLINE | ID: mdl-28123082

ABSTRACT

Orbitofrontal cortex (OFC), medial frontal cortex (MFC), and amygdala mediate stimulus-reward learning, but the mechanisms through which they interact are unclear. Here, we investigated how neurons in macaque OFC and MFC signaled rewards and the stimuli that predicted them during learning with and without amygdala input. Macaques performed a task that required them to evaluate two stimuli and then choose one to receive the reward associated with that option. Four main findings emerged. First, amygdala lesions slowed the acquisition and use of stimulus-reward associations. Further analyses indicated that this impairment was due, at least in part, to ineffective use of negative feedback to guide subsequent decisions. Second, the activity of neurons in OFC and MFC rapidly evolved to encode the amount of reward associated with each stimulus. Third, amygdalectomy reduced encoding of stimulus-reward associations during the evaluation of different stimuli. Encoding of anticipated and received rewards after choices were made was not altered. Fourth, amygdala lesions led to an increase in the proportion of neurons in MFC, but not OFC, that encoded the instrumental response that monkeys made on each trial. These correlated changes in behavior and neural activity after amygdala lesions strongly suggest that the amygdala contributes to the ability to learn stimulus-reward associations rapidly by shaping encoding within OFC and MFC. SIGNIFICANCE STATEMENT: Altered functional interactions among orbital frontal cortex (OFC), medial frontal cortex (MFC), and amygdala are thought to underlie several psychiatric conditions, many related to reward learning. Here, we investigated the causal contribution of the amygdala to the development of neuronal activity in macaque OFC and MFC related to rewards and the stimuli that predict them during learning. Without amygdala inputs, neurons in both OFC and MFC showed decreased encoding of stimulus-reward associations. MFC also showed increased encoding of the instrumental responses that monkeys made on each trial. Behaviorally, changes in neural activity were accompanied by slower stimulus-reward learning. The findings suggest that interactions among amygdala, OFC, and MFC contribute to learning about stimuli that predict rewards.


Subjects
Amygdala/physiology , Learning/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Reward , Action Potentials/physiology , Amygdala/cytology , Amygdala/diagnostic imaging , Amygdala/injuries , Analysis of Variance , Animals , Choice Behavior , Discrimination Learning/physiology , Excitatory Amino Acid Agonists/toxicity , Ibotenic Acid/toxicity , Macaca mulatta , Magnetic Resonance Imaging , Male , N-Methylaspartate/toxicity , Prefrontal Cortex/cytology , Prefrontal Cortex/diagnostic imaging , Reaction Time/physiology , Time Factors