Results 1 - 9 of 9

1.
Behav Brain Sci ; 47: e94, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770870

ABSTRACT

We link Ivancovsky et al.'s novelty-seeking model (NSM) to computational models of intrinsically motivated behavior and learning. We argue that dissociating different forms of curiosity, creativity, and memory based on the involvement of distinct intrinsic motivations (e.g., surprise and novelty) is essential to empirically test the conceptual claims of the NSM.


Subject(s)
Creativity , Exploratory Behavior , Motivation , Humans , Exploratory Behavior/physiology , Models, Psychological , Learning/physiology , Memory/physiology , Computer Simulation
2.
Cell Rep ; 43(1): 113618, 2024 01 23.
Article in English | MEDLINE | ID: mdl-38150365

ABSTRACT

Goal-directed behaviors involve coordinated activity in many cortical areas, but whether the encoding of task variables is distributed across areas or is more specifically represented in distinct areas remains unclear. Here, we compared representations of sensory, motor, and decision information in the whisker primary somatosensory cortex, medial prefrontal cortex, and tongue-jaw primary motor cortex in mice trained to lick in response to a whisker stimulus with mice that were not taught this association. Irrespective of learning, properties of the sensory stimulus were best encoded in the sensory cortex, whereas fine movement kinematics were best represented in the motor cortex. However, movement initiation and the decision to lick in response to the whisker stimulus were represented in all three areas, with decision neurons in the medial prefrontal cortex being more selective, showing minimal sensory responses in miss trials and motor responses during spontaneous licks. Our results reconcile previous studies indicating highly specific vs. highly distributed sensorimotor processing.


Subject(s)
Neocortex , Somatosensory Cortex , Mice , Animals , Somatosensory Cortex/physiology , Goals , Parietal Lobe , Neurons , Vibrissae/physiology
3.
Trends Neurosci ; 46(12): 1054-1066, 2023 12.
Article in English | MEDLINE | ID: mdl-37925342

ABSTRACT

Curiosity refers to the intrinsic desire of humans and animals to explore the unknown, even when there is no apparent reason to do so. Thus far, no single, widely accepted definition or framework for curiosity has emerged, but there is growing consensus that curious behavior is not goal-directed but related to seeking or reacting to information. In this review, we take a phenomenological approach and group behavioral and neurophysiological studies that meet these criteria into three categories according to the type of information seeking observed. We then review recent computational models of curiosity from the field of machine learning and discuss how they make it possible to integrate different types of information seeking into one theoretical framework. Combining behavioral and neurophysiological studies with computational modeling will be instrumental in demystifying the notion of curiosity.


Subject(s)
Exploratory Behavior , Neurosciences , Humans , Animals , Exploratory Behavior/physiology , Motivation , Computer Simulation
4.
Curr Opin Neurobiol ; 82: 102758, 2023 10.
Article in English | MEDLINE | ID: mdl-37619425

ABSTRACT

Notions of surprise and novelty have been used in various experimental and theoretical studies across multiple brain areas and species. However, 'surprise' and 'novelty' refer to different quantities in different studies, which raises concerns about whether these studies indeed relate to the same functionalities and mechanisms in the brain. Here, we address these concerns through a systematic investigation of how different aspects of surprise and novelty relate to different brain functions and physiological signals. We review recent classifications of definitions proposed for surprise and novelty along with links to experimental observations. We show that computational modeling and quantifiable definitions enable novel interpretations of previous findings and form a foundation for future theoretical and experimental studies.
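
For orientation, three quantifiable definitions that recur in this literature are written out below; the notation is ours and the list is illustrative, not the review's own classification.

```latex
% Common quantifiable definitions from the literature (notation ours, not the review's).
\[
  \text{Shannon surprise:}\quad
  S_{\mathrm{Sh}}(y_t) = -\log P\!\left(y_t \mid \text{belief}_{t-1}\right)
\]
\[
  \text{Bayesian surprise:}\quad
  S_{\mathrm{B}}(y_t) = D_{\mathrm{KL}}\!\left(\,P(\theta \mid y_{1:t}) \,\big\|\, P(\theta \mid y_{1:t-1})\,\right)
\]
\[
  \text{Count-based novelty:}\quad
  N(s_t) \propto \frac{1}{\sqrt{C(s_t)}}, \qquad C(s_t) = \text{number of prior encounters with state } s_t
\]
```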


Subject(s)
Brain , Computer Simulation
5.
Neuroimage ; 246: 118780, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34875383

ABSTRACT

Learning how to reach a reward over long series of actions is a remarkable capability of humans, and is potentially guided by multiple parallel learning modules. Current brain imaging of learning modules is limited by (i) simple experimental paradigms, (ii) entanglement of brain signals of different learning modules, and (iii) a limited number of computational models considered as candidates for explaining behavior. Here, we address these three limitations: (i) we introduce a complex sequential decision-making task with surprising events that allows us to (ii) dissociate correlates of reward prediction errors from those of surprise in functional magnetic resonance imaging (fMRI); and (iii) we test behavior against a large repertoire of model-free, model-based, and hybrid reinforcement learning algorithms, including a novel surprise-modulated actor-critic algorithm. In our algorithm, surprise is extracted from a state prediction error, derived from an approximate Bayesian approach to learning the world-model. Surprise then modulates the learning rate of a model-free actor, which itself learns via the reward prediction error from model-free value estimation by the critic. We find that action choices are well explained by a pure model-free policy gradient, but reaction times and neural data are not. We identify signatures of both model-free and surprise-based learning signals in blood oxygen level dependent (BOLD) responses, supporting the existence of multiple parallel learning modules in the brain. Our results extend previous fMRI findings to a multi-step setting and emphasize the role of policy gradient and surprise signalling in human learning.
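
For illustration, a minimal Python sketch of a surprise-modulated actor-critic update of the kind described above; the logarithmic surprise measure, the saturating learning-rate modulation, and all parameter names are our assumptions rather than the published algorithm.

```python
# Minimal sketch (not the authors' code) of one surprise-modulated actor-critic
# step, assuming discrete states/actions and a tabular world-model.
import numpy as np

def surprise_modulated_actor_critic_step(
        theta, V, world_model, s, a, r, s_next,
        alpha_actor=0.1, alpha_critic=0.1, kappa=1.0, discount=0.9):
    # Surprise from a state prediction error: how unlikely was s_next
    # under the current world-model? (Exact functional form is an assumption.)
    surprise = -np.log(world_model[s, a, s_next] + 1e-12)

    # Surprise modulates the learning rate of the model-free actor.
    modulation = surprise / (surprise + kappa)      # squashed into [0, 1)

    # Critic: model-free value estimation and reward prediction error.
    delta = r + discount * V[s_next] - V[s]
    V[s] += alpha_critic * delta

    # Actor: policy-gradient step on the chosen action, scaled by the
    # reward prediction error and the surprise modulation.
    logits = theta[s]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta[s] += alpha_actor * modulation * delta * grad_log_pi
    return theta, V
```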


Subject(s)
Brain/physiology , Decision Making/physiology , Functional Neuroimaging/methods , Learning/physiology , Magnetic Resonance Imaging/methods , Adult , Brain/diagnostic imaging , Female , Humans , Male , Models, Biological , Reinforcement, Psychology , Young Adult
6.
Neuron ; 109(13): 2183-2201.e9, 2021 07 07.
Article in English | MEDLINE | ID: mdl-34077741

ABSTRACT

The neuronal mechanisms generating a delayed motor response initiated by a sensory cue remain elusive. Here, we tracked the precise sequence of cortical activity in mice transforming a brief whisker stimulus into delayed licking using wide-field calcium imaging, multiregion high-density electrophysiology, and time-resolved optogenetic manipulation. Rapid activity evoked by whisker deflection acquired two prominent features for task performance: (1) an enhanced excitation of secondary whisker motor cortex, suggesting its important role connecting whisker sensory processing to lick motor planning; and (2) a transient reduction of activity in orofacial sensorimotor cortex, which contributed to suppressing premature licking. Subsequent widespread cortical activity during the delay period largely correlated with anticipatory movements, but when these were accounted for, a focal sustained activity remained in frontal cortex, which was causally essential for licking in the response period. Our results demonstrate key cortical nodes for motor plan generation and timely execution in delayed goal-directed licking.


Subject(s)
Behavior, Animal , Neurons/physiology , Psychomotor Performance/physiology , Sensorimotor Cortex/physiology , Touch Perception/physiology , Animals , Female , Male , Mice, Inbred C57BL , Mice, Transgenic , Neural Pathways/physiology , Optogenetics
7.
PLoS Comput Biol ; 17(6): e1009070, 2021 06.
Article in English | MEDLINE | ID: mdl-34081705

ABSTRACT

Classic reinforcement learning (RL) theories cannot explain human behavior in the absence of external reward or when the environment changes. Here, we employ a deep sequential decision-making paradigm with sparse reward and abrupt environmental changes. To explain the behavior of human participants in these environments, we show that RL theories need to include surprise and novelty, each with a distinct role. While novelty drives exploration before the first encounter of a reward, surprise increases the rate of learning of a world-model as well as of model-free action-values. Even though the world-model is available for model-based RL, we find that human decisions are dominated by model-free action choices. The world-model is only marginally used for planning, but it is important to detect surprising events. Our theory predicts human action choices with high probability and allows us to dissociate surprise, novelty, and reward in EEG signals.
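
A toy sketch of the two distinct roles described above, assuming a count-based novelty bonus and a transition-model surprise; the concrete functional forms are placeholders, not the fitted model.

```python
# Hedged sketch: novelty drives exploration as an intrinsic bonus on the reward,
# while surprise scales the learning rate of both the world-model T and the
# model-free action-values Q. All forms and parameters are assumptions.
import numpy as np

def hybrid_update(Q, T, counts, s, a, r, s_next,
                  base_lr=0.1, novelty_weight=1.0, discount=0.95):
    # Count-based novelty of the newly visited state.
    counts[s_next] += 1
    novelty = 1.0 / np.sqrt(counts[s_next])

    # Surprise of the observed transition under the current world-model.
    surprise = -np.log(T[s, a, s_next] + 1e-12)

    # Surprise speeds up learning of both the world-model and the Q-values.
    lr = min(1.0, base_lr * (1.0 + surprise))

    # World-model update toward the observed transition.
    target = np.zeros(T.shape[-1])
    target[s_next] = 1.0
    T[s, a] += lr * (target - T[s, a])

    # Model-free Q-update; novelty enters as an intrinsic bonus on the reward.
    td_error = (r + novelty_weight * novelty
                + discount * Q[s_next].max() - Q[s, a])
    Q[s, a] += lr * td_error
    return Q, T, counts
```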


Subject(s)
Adaptation, Psychological , Exploratory Behavior , Models, Psychological , Reinforcement, Psychology , Algorithms , Choice Behavior/physiology , Computational Biology , Decision Making/physiology , Electroencephalography/statistics & numerical data , Exploratory Behavior/physiology , Humans , Learning/physiology , Models, Neurological , Reward
8.
Neural Comput ; 33(2): 269-340, 2021 02.
Article in English | MEDLINE | ID: mdl-33400898

ABSTRACT

Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with the new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.
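
A hedged reading of the quantities named above, in our own notation; the explicit "integrate versus forget" update in the second display is one plausible form of the surprise-modulated trade-off, not a quotation of the paper.

```latex
% Bayes Factor Surprise vs. Shannon Surprise (notation ours).
% S_BF compares how well the prior belief \pi^{(0)} and the current belief
% \pi^{(t)} predict the newest observation y_{t+1}.
\[
  S_{\mathrm{BF}}(y_{t+1}) \;=\;
  \frac{P\!\left(y_{t+1};\, \pi^{(0)}\right)}{P\!\left(y_{t+1};\, \pi^{(t)}\right)},
  \qquad
  S_{\mathrm{Sh}}(y_{t+1}) \;=\; -\log P\!\left(y_{t+1};\, \pi^{(t)}\right).
\]
% A large S_BF (the prior explains the data better than the current belief)
% pushes the learner toward forgetting; one plausible surprise-modulated update
% interpolates between integrating the new observation and resetting to the prior:
\[
  \pi^{(t+1)}(\theta) \;\propto\;
  (1-\gamma_t)\,\underbrace{P\!\left(y_{t+1}\mid\theta\right)\pi^{(t)}(\theta)}_{\text{integrate}}
  \;+\;
  \gamma_t\,\underbrace{P\!\left(y_{t+1}\mid\theta\right)\pi^{(0)}(\theta)}_{\text{forget}},
  \qquad
  \gamma_t = \frac{m\,S_{\mathrm{BF}}}{1 + m\,S_{\mathrm{BF}}},
\]
% with m a constant reflecting the assumed change probability of the environment.
```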


Subject(s)
Algorithms , Behavior/physiology , Computer Simulation , Learning/physiology , Reinforcement, Psychology , Animals , Bayes Theorem , Humans
9.
Neuroimage ; 196: 302-317, 2019 08 01.
Article in English | MEDLINE | ID: mdl-30980899

ABSTRACT

Having to survive in a continuously changing environment has driven the human brain to actively predict the future state of its surroundings. Oddball tasks are experiments designed to study this predictive nature of the brain. Detailed mathematical models have been constructed to explain the brain's perception in these tasks. These models treat the subject as an ideal observer who abstracts a hypothesis from the previous stimuli and estimates its hyperparameters in order to make the next prediction. The corresponding prediction error is assumed to reflect the brain's subjective surprise. While earlier work has approached this problem by proposing encoding models, we investigated the reverse model: if the surprise of the stimuli is assumed to cause the observer's surprise, it must be possible to decode the surprise of each stimulus, for every single subject, from their neural responses alone, i.e., to tell how unexpected a specific stimulus was for them. Employing machine learning tools, we developed a surprise decoding model for binary oddball tasks. We constructed our model using the ideal observer proposed by Meyniel et al. in 2016 and applied it to three datasets: one with visual, one with auditory, and one with both visual and auditory stimuli. We demonstrated that our decoding model performs well for both sensory modalities, with or without the subject's motor response.
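
One generic way to set up such a per-subject surprise decoder is sketched below; the simple Bernoulli ideal observer stands in for the Meyniel et al. observer, and ridge regression with cross-validation is an assumed choice rather than the paper's pipeline.

```python
# Illustrative sketch only: a generic per-subject surprise decoder for a binary
# oddball sequence. The ideal observer, feature extraction, and regressor are
# assumptions, not the published method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def ideal_observer_surprise(binary_seq, prior=(1.0, 1.0)):
    """Shannon surprise of each stimulus under a simple Bernoulli ideal observer
    with a Beta prior (a stand-in for a Meyniel-style observer)."""
    a, b = prior
    surprise = []
    for x in binary_seq:
        p1 = a / (a + b)                      # predicted probability of stimulus 1
        p = p1 if x == 1 else 1.0 - p1
        surprise.append(-np.log(p))
        a, b = a + x, b + (1 - x)             # Bayesian update of the belief
    return np.array(surprise)

def decode_surprise(neural_features, binary_seq):
    """neural_features: (n_trials, n_features) array of per-trial neural responses."""
    y = ideal_observer_surprise(binary_seq)
    model = Ridge(alpha=1.0)
    # Cross-validated fit: how well do neural responses predict trial-wise surprise?
    return cross_val_score(model, neural_features, y, cv=5, scoring="r2")
```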


Subject(s)
Auditory Perception/physiology , Brain/physiology , Models, Neurological , Visual Perception/physiology , Acoustic Stimulation , Adult , Bayes Theorem , Female , Humans , Machine Learning , Male , Neuropsychological Tests , Photic Stimulation , Young Adult