Results 1 - 20 of 110
1.
Behav Brain Sci ; 47: e94, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770870

ABSTRACT

We link Ivancovsky et al.'s novelty-seeking model (NSM) to computational models of intrinsically motivated behavior and learning. We argue that dissociating different forms of curiosity, creativity, and memory based on the involvement of distinct intrinsic motivations (e.g., surprise and novelty) is essential to empirically test the conceptual claims of the NSM.


Subject(s)
Creativity, Exploratory Behavior, Motivation, Humans, Exploratory Behavior/physiology, Psychological Models, Learning/physiology, Memory/physiology, Computer Simulation
2.
PLoS Comput Biol ; 20(2): e1011839, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38377112

ABSTRACT

In humans and animals, surprise is a physiological reaction to an unexpected event, but how surprise can be linked to plausible models of neuronal activity is an open problem. We propose a self-supervised spiking neural network model in which a surprise signal is extracted from an increase in neural activity after an imbalance of excitation and inhibition. The surprise signal modulates synaptic plasticity via a three-factor learning rule that increases plasticity at moments of surprise. The surprise signal remains small when transitions between sensory events follow a previously learned rule but increases immediately after a rule switch. In a spiking network with several modules, previously learned rules are protected against overwriting, as long as the number of modules is larger than the total number of rules, making a step towards solving the stability-plasticity dilemma in neuroscience. Our model relates the subjective notion of surprise to specific predictions at the circuit level.


Subject(s)
Psychological Inhibition, Neurosciences, Animals, Humans, Learning, Neural Networks (Computer), Neuronal Plasticity
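The surprise-gated three-factor rule described in this abstract can be sketched in a few lines. The version below is a toy rate-based illustration, not the paper's spiking model: the activity-based surprise readout and all function names are assumptions made for this sketch.

```python
import numpy as np

def surprise_signal(activity, baseline):
    """Toy surprise readout: relative increase of population activity above
    baseline, standing in for the excitation/inhibition imbalance in the model."""
    return max(0.0, (activity - baseline) / baseline)

def three_factor_update(w, pre, post, surprise, eta=0.01):
    """Three-factor rule: a Hebbian pre/post co-activity term gated by a
    global surprise factor that boosts plasticity at surprising moments."""
    return w + eta * (1.0 + surprise) * np.outer(post, pre)
```

With `surprise = 0` this reduces to a plain Hebbian update; a surprise event transiently increases the effective learning rate, which is the qualitative behavior the abstract describes.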
3.
PLoS Comput Biol ; 20(2): e1011844, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38346073

ABSTRACT

Cortical populations of neurons develop sparse representations adapted to the statistics of the environment. To learn efficient population codes, synaptic plasticity mechanisms must differentiate relevant latent features from spurious input correlations, which are omnipresent in cortical networks. Here, we develop a theory for sparse coding and synaptic plasticity that is invariant to second-order correlations in the input. Going beyond classical Hebbian learning, our learning objective explains the functional form of observed excitatory plasticity mechanisms, showing how Hebbian long-term depression (LTD) cancels the sensitivity to second-order correlations so that receptive fields become aligned with features hidden in higher-order statistics. Invariance to second-order correlations enhances the versatility of biologically realistic learning models, supporting optimal decoding from noisy inputs and sparse population coding from spatially correlated stimuli. In a spiking model with triplet spike-timing-dependent plasticity (STDP), we show that individual neurons can learn localized oriented receptive fields, circumventing the need for input preprocessing, such as whitening, or population-level lateral inhibition. The theory advances our understanding of local unsupervised learning in cortical circuits, offers new interpretations of the Bienenstock-Cooper-Munro and triplet STDP models, and assigns a specific functional role to synaptic LTD mechanisms in pyramidal neurons.


Subject(s)
Neuronal Plasticity, Neurons, Action Potentials/physiology, Neuronal Plasticity/physiology, Neurons/physiology, Pyramidal Cells/physiology, Neurological Models
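Since the abstract offers a new interpretation of the Bienenstock-Cooper-Munro (BCM) model, the classical rate-based BCM rule is useful context. This is the textbook rule with its threshold-dependent LTD term, not the paper's correlation-invariant objective:

```python
import numpy as np

def bcm_update(w, x, theta, eta=0.01):
    """Classical BCM rate rule: with postsynaptic rate y = w.x, inputs are
    potentiated when y > theta and depressed (LTD) when y < theta."""
    y = float(w @ x)
    return w + eta * y * (y - theta) * x
```

The LTD branch (y below theta) is the component to which the paper assigns the functional role of canceling sensitivity to second-order input correlations.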
5.
Cell Rep ; 43(1): 113618, 2024 01 23.
Article in English | MEDLINE | ID: mdl-38150365

ABSTRACT

Goal-directed behaviors involve coordinated activity in many cortical areas, but whether the encoding of task variables is distributed across areas or is more specifically represented in distinct areas remains unclear. Here, we compared representations of sensory, motor, and decision information in the whisker primary somatosensory cortex, medial prefrontal cortex, and tongue-jaw primary motor cortex in mice trained to lick in response to a whisker stimulus with mice that were not taught this association. Irrespective of learning, properties of the sensory stimulus were best encoded in the sensory cortex, whereas fine movement kinematics were best represented in the motor cortex. However, movement initiation and the decision to lick in response to the whisker stimulus were represented in all three areas, with decision neurons in the medial prefrontal cortex being more selective, showing minimal sensory responses in miss trials and motor responses during spontaneous licks. Our results reconcile previous studies indicating highly specific vs. highly distributed sensorimotor processing.


Subject(s)
Neocortex, Somatosensory Cortex, Mice, Animals, Somatosensory Cortex/physiology, Goals, Parietal Lobe, Neurons, Vibrissae/physiology
6.
PLoS Comput Biol ; 19(12): e1011727, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38117859

ABSTRACT

Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, here we develop a modelling approach to provide a mechanistic understanding of how hippocampal neural assemblies evolve differently, depending on the frequency of presentation of the stimuli. For this, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, thus creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between the memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently from each other (i.e. creating orthogonal representations that do not share neurons, thus avoiding interference). Importantly, connections between neurons of assemblies that are not further stimulated become labile so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.


Subject(s)
Learning, Reinforcement (Psychology), Learning/physiology, Mental Recall/physiology, Neurons/physiology, Hippocampus/physiology, Neuronal Plasticity/physiology, Neurological Models
7.
Trends Neurosci ; 46(12): 1054-1066, 2023 12.
Article in English | MEDLINE | ID: mdl-37925342

ABSTRACT

Curiosity refers to the intrinsic desire of humans and animals to explore the unknown, even when there is no apparent reason to do so. Thus far, no single, widely accepted definition or framework for curiosity has emerged, but there is growing consensus that curious behavior is not goal-directed but related to seeking or reacting to information. In this review, we take a phenomenological approach and group behavioral and neurophysiological studies which meet these criteria into three categories according to the type of information seeking observed. We then review recent computational models of curiosity from the field of machine learning and discuss how they enable integrating different types of information seeking into one theoretical framework. Combinations of behavioral and neurophysiological studies along with computational modeling will be instrumental in demystifying the notion of curiosity.


Subject(s)
Exploratory Behavior, Neurosciences, Humans, Animals, Exploratory Behavior/physiology, Motivation, Computer Simulation
8.
Neural Netw ; 168: 74-88, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37742533

ABSTRACT

Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch, or converting deep artificial neural networks to SNNs without loss of performance, has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network, with or without convolutional layers, batch normalization, and max pooling layers, was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero drop in accuracy on CIFAR10, CIFAR100, and the ImageNet-like data sets Places365 and PASS. More generally, our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.


Subject(s)
Artificial Intelligence, Neural Networks (Computer), Neurons/physiology
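The core intuition behind the exact ReLU-to-SNN mapping, that the timing of a single spike can carry a ReLU activation losslessly, can be illustrated with a toy time-to-first-spike code. The constant `T_MAX` and the linear encoding below are simplifying assumptions for illustration, not the paper's actual construction:

```python
import numpy as np

T_MAX = 10.0  # hypothetical length of the coding window per layer

def relu(z):
    return np.maximum(z, 0.0)

def to_spike_time(a):
    """Encode a non-negative activation as a time-to-first-spike:
    larger activations fire earlier (activations above T_MAX saturate)."""
    return T_MAX - np.minimum(a, T_MAX)

def from_spike_time(t):
    """Decode the activation carried by a spike time."""
    return T_MAX - t
```

Within the coding window the round trip is lossless, which is the sense in which one spike per neuron suffices to reproduce the ReLU network's computation.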
9.
Curr Opin Neurobiol ; 82: 102758, 2023 10.
Article in English | MEDLINE | ID: mdl-37619425

ABSTRACT

Notions of surprise and novelty have been used in various experimental and theoretical studies across multiple brain areas and species. However, 'surprise' and 'novelty' refer to different quantities in different studies, which raises concerns about whether these studies indeed relate to the same functionalities and mechanisms in the brain. Here, we address these concerns through a systematic investigation of how different aspects of surprise and novelty relate to different brain functions and physiological signals. We review recent classifications of definitions proposed for surprise and novelty along with links to experimental observations. We show that computational modeling and quantifiable definitions enable novel interpretations of previous findings and form a foundation for future theoretical and experimental studies.


Subject(s)
Brain, Computer Simulation
10.
Nat Commun ; 14(1): 2979, 2023 05 23.
Article in English | MEDLINE | ID: mdl-37221167

ABSTRACT

Birds of the crow family adapt food-caching strategies to anticipated needs at the time of cache recovery and rely on memory of the what, where and when of previous caching events to recover their hidden food. It is unclear if this behavior can be explained by simple associative learning or if it relies on higher cognitive processes like mental time-travel. We present a computational model and propose a neural implementation of food-caching behavior. The model has hunger variables for motivational control, reward-modulated update of retrieval and caching policies and an associative neural network for remembering caching events with a memory consolidation mechanism for flexible decoding of the age of a memory. Our methodology of formalizing experimental protocols is transferable to other domains and facilitates model evaluation and experiment design. Here, we show that memory-augmented, associative reinforcement learning without mental time-travel is sufficient to explain the results of 28 behavioral experiments with food-caching birds.


Subject(s)
Birds, Crows, Animals, Classical Conditioning, Food, Computer Simulation
11.
Neuroimage ; 246: 118780, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34875383

ABSTRACT

Learning how to reach a reward over long series of actions is a remarkable capability of humans, and potentially guided by multiple parallel learning modules. Current brain imaging of learning modules is limited by (i) simple experimental paradigms, (ii) entanglement of brain signals of different learning modules, and (iii) a limited number of computational models considered as candidates for explaining behavior. Here, we address these three limitations and (i) introduce a complex sequential decision making task with surprising events that allows us to (ii) dissociate correlates of reward prediction errors from those of surprise in functional magnetic resonance imaging (fMRI); and (iii) we test behavior against a large repertoire of model-free, model-based, and hybrid reinforcement learning algorithms, including a novel surprise-modulated actor-critic algorithm. Surprise, derived from an approximate Bayesian approach for learning the world-model, is extracted in our algorithm from a state prediction error. Surprise is then used to modulate the learning rate of a model-free actor, which itself learns via the reward prediction error from model-free value estimation by the critic. We find that action choices are well explained by pure model-free policy gradient, but reaction times and neural data are not. We identify signatures of both model-free and surprise-based learning signals in blood oxygen level dependent (BOLD) responses, supporting the existence of multiple parallel learning modules in the brain. Our results extend previous fMRI findings to a multi-step setting and emphasize the role of policy gradient and surprise signalling in human learning.


Subject(s)
Brain/physiology, Decision Making/physiology, Functional Neuroimaging/methods, Learning/physiology, Magnetic Resonance Imaging/methods, Adult, Brain/diagnostic imaging, Female, Humans, Male, Biological Models, Reinforcement (Psychology), Young Adult
12.
PLoS Comput Biol ; 17(12): e1009691, 2021 12.
Article in English | MEDLINE | ID: mdl-34968383

ABSTRACT

Assemblies of neurons, called concept cells, encode acquired concepts in the human medial temporal lobe. Concept cells that are shared between two assemblies have been hypothesized to encode associations between concepts. Here we test this hypothesis in a computational model of attractor neural networks. We find that, for concepts encoded in sparse neural assemblies, there is a minimal fraction cmin of neurons shared between assemblies below which associations cannot be reliably implemented, and a maximal fraction cmax of shared neurons above which single concepts can no longer be retrieved. In the presence of a periodically modulated background signal, such as hippocampal oscillations, recall takes the form of association chains reminiscent of those postulated by theories of free recall of words. Predictions of an iterative overlap-generating model match experimental data on the number of concepts to which a neuron responds.


Subject(s)
Memory/physiology, Neurological Models, Neurons/cytology, Computational Biology, Hippocampus/cytology, Hippocampus/physiology, Humans, Nerve Net/cytology, Nerve Net/physiology, Temporal Lobe/cytology, Temporal Lobe/physiology
13.
Neuron ; 109(13): 2183-2201.e9, 2021 07 07.
Article in English | MEDLINE | ID: mdl-34077741

ABSTRACT

The neuronal mechanisms generating a delayed motor response initiated by a sensory cue remain elusive. Here, we tracked the precise sequence of cortical activity in mice transforming a brief whisker stimulus into delayed licking using wide-field calcium imaging, multiregion high-density electrophysiology, and time-resolved optogenetic manipulation. Rapid activity evoked by whisker deflection acquired two prominent features for task performance: (1) an enhanced excitation of secondary whisker motor cortex, suggesting its important role connecting whisker sensory processing to lick motor planning; and (2) a transient reduction of activity in orofacial sensorimotor cortex, which contributed to suppressing premature licking. Subsequent widespread cortical activity during the delay period largely correlated with anticipatory movements, but when these were accounted for, a focal sustained activity remained in frontal cortex, which was causally essential for licking in the response period. Our results demonstrate key cortical nodes for motor plan generation and timely execution in delayed goal-directed licking.


Subject(s)
Animal Behavior, Neurons/physiology, Psychomotor Performance/physiology, Sensorimotor Cortex/physiology, Touch Perception/physiology, Animals, Female, Male, Inbred C57BL Mice, Transgenic Mice, Neural Pathways/physiology, Optogenetics
14.
Elife ; 10, 2021 06 17.
Article in English | MEDLINE | ID: mdl-34137370

ABSTRACT

In adult dentate gyrus neurogenesis, the link between maturation of newborn neurons and their function, such as behavioral pattern separation, has remained puzzling. By analyzing a theoretical model, we show that the switch from excitation to inhibition of the GABAergic input onto maturing newborn cells is crucial for their proper functional integration. When the GABAergic input is excitatory, cooperativity drives the growth of synapses such that newborn cells become sensitive to stimuli similar to those that activate mature cells. When GABAergic input switches to inhibitory, competition pushes the configuration of synapses onto newborn cells toward stimuli that are different from previously stored ones. This enables the maturing newborn cells to code for concepts that are novel, yet similar to familiar ones. Our theory of newborn cell maturation explains both how adult-born dentate granule cells integrate into the preexisting network and why they promote separation of similar but not distinct patterns.


Subject(s)
Dentate Gyrus, Neurological Models, Neurogenesis/physiology, Animals, Newborn Animals/physiology, Dentate Gyrus/cytology, Dentate Gyrus/growth & development, GABAergic Neurons/cytology, GABAergic Neurons/physiology, Interneurons/cytology, Interneurons/physiology, Nerve Net/cytology, Nerve Net/physiology, Rodentia, Synapses/physiology
15.
PLoS Comput Biol ; 17(6): e1009070, 2021 06.
Article in English | MEDLINE | ID: mdl-34081705

ABSTRACT

Classic reinforcement learning (RL) theories cannot explain human behavior in the absence of external reward or when the environment changes. Here, we employ a deep sequential decision-making paradigm with sparse reward and abrupt environmental changes. To explain the behavior of human participants in these environments, we show that RL theories need to include surprise and novelty, each with a distinct role. While novelty drives exploration before the first encounter of a reward, surprise increases the rate of learning of a world-model as well as of model-free action-values. Even though the world-model is available for model-based RL, we find that human decisions are dominated by model-free action choices. The world-model is only marginally used for planning, but it is important to detect surprising events. Our theory predicts human action choices with high probability and allows us to dissociate surprise, novelty, and reward in EEG signals.


Subject(s)
Psychological Adaptation, Exploratory Behavior, Psychological Models, Reinforcement (Psychology), Algorithms, Choice Behavior/physiology, Computational Biology, Decision Making/physiology, Electroencephalography/statistics & numerical data, Exploratory Behavior/physiology, Humans, Learning/physiology, Neurological Models, Reward
16.
Neural Comput ; 33(2): 269-340, 2021 02.
Article in English | MEDLINE | ID: mdl-33400898

ABSTRACT

Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with the new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.


Subject(s)
Algorithms, Behavior/physiology, Computer Simulation, Learning/physiology, Reinforcement (Psychology), Animals, Bayes Theorem, Humans
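For a concrete reading of the Bayes Factor Surprise described in this abstract, here is a minimal sketch for a Gaussian observation model, assuming the prior and current beliefs are each summarized by a mean with known variance — a drastic simplification of the paper's hierarchical model:

```python
import math

def gaussian_pdf(y, mu, sigma=1.0):
    """Density of a Gaussian with mean mu and standard deviation sigma at y."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_factor_surprise(y, prior_mu, current_mu, sigma=1.0):
    """Bayes Factor Surprise: the likelihood of observation y under the prior
    belief divided by its likelihood under the current belief. Values above 1
    mean the observation favors the prior, signaling a possible change."""
    return gaussian_pdf(y, prior_mu, sigma) / gaussian_pdf(y, current_mu, sigma)
```

In the surprise-modulated algorithms the paper derives, a large value of this ratio increases the weight given to forgetting old observations relative to integrating them.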
17.
Front Synaptic Neurosci ; 12: 585539, 2020.
Article in English | MEDLINE | ID: mdl-33224033

ABSTRACT

Experiments have shown that the same stimulation pattern that causes long-term potentiation at proximal synapses will induce long-term depression at distal ones. To understand these and other surprising observations, we use a phenomenological model of Hebbian plasticity at the location of the synapse. Our model describes the Hebbian condition of joint activity of pre- and postsynaptic neurons in a compact form as the interaction of the glutamate trace left by a presynaptic spike with the time course of the postsynaptic voltage. Instead of simulating the voltage, we test the model using experimentally recorded dendritic voltage traces from hippocampus and neocortex. We find that the time course of the voltage in the neighborhood of a stimulated synapse is a reliable predictor of whether a stimulated synapse undergoes potentiation, depression, or no change. Our computational model can explain how seemingly paradoxical outcomes of synaptic potentiation and depression experiments arise, depending on the dendritic location of the synapse and the frequency or timing of the stimulation.

18.
PLoS Comput Biol ; 16(4): e1007640, 2020 04.
Article in English | MEDLINE | ID: mdl-32271761

ABSTRACT

This is a PLOS Computational Biology Education paper. The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Because a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often consider the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Because the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here, we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature, and propose ways forward to constrain the metric.


Subject(s)
Biophysics/methods, Brain/diagnostic imaging, Brain/physiology, Computational Biology/methods, Algorithms, Computer Simulation, Humans, Computer-Assisted Image Processing, Kinetics, Mathematics, Neural Networks (Computer), Neurosciences/methods
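The dependence of the steepest direction on the metric can be made explicit in a few lines. Under a metric M, minimizing the linearized loss over the M-unit ball gives a direction proportional to -M^{-1} grad — a standard result the review builds on; the function name below is our own:

```python
import numpy as np

def steepest_direction(grad, metric):
    """Steepest-descent direction under metric M: minimizing grad.d subject
    to d^T M d = 1 gives d proportional to -M^{-1} grad."""
    return -np.linalg.solve(metric, grad)
```

With the Euclidean metric (M = I) this reduces to the familiar negative gradient; an equally admissible non-Euclidean metric yields a different direction, and hence different predicted dynamics for the same cost function — which is exactly why the choice of metric must be constrained.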
19.
J Math Neurosci ; 10(1): 5, 2020 Apr 06.
Article in English | MEDLINE | ID: mdl-32253526

ABSTRACT

Coarse-graining microscopic models of biological neural networks to obtain mesoscopic models of neural activities is an essential step towards multi-scale models of the brain. Here, we extend a recent theory for mesoscopic population dynamics with static synapses to the case of dynamic synapses exhibiting short-term plasticity (STP). The extended theory offers an approximate mean-field dynamics for the synaptic input currents arising from populations of spiking neurons and synapses undergoing Tsodyks-Markram STP. The approximate mean-field dynamics accounts for both finite number of synapses and correlation between the two synaptic variables of the model (utilization and available resources) and its numerical implementation is simple. Comparisons with Monte Carlo simulations of the microscopic model show that in both feedforward and recurrent networks, the mesoscopic mean-field model accurately reproduces the first- and second-order statistics of the total synaptic input into a postsynaptic neuron and accounts for stochastic switches between Up and Down states and for population spikes. The extended mesoscopic population theory of spiking neural networks with STP may be useful for a systematic reduction of detailed biophysical models of cortical microcircuits to numerically efficient and mathematically tractable mean-field models.
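The two synaptic variables of the Tsodyks-Markram model referenced above (utilization u and available resources x) follow simple between-spike decay and at-spike jump rules. A minimal deterministic, event-based sketch follows; the parameter values are illustrative, and this omits the finite-synapse-number fluctuations that the mesoscopic theory accounts for:

```python
import math

def tsodyks_markram(spike_times, U=0.2, tau_rec=0.5, tau_facil=1.0):
    """Deterministic Tsodyks-Markram dynamics at presynaptic spike times (s):
    between spikes, x recovers toward 1 (tau_rec) and u decays toward U
    (tau_facil); at each spike, u jumps by U*(1-u) and a fraction u*x of the
    resources is released. Returns the released fraction per spike."""
    u, x = U, 1.0
    t_last = None
    released = []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)   # resource recovery
            u = U + (u - U) * math.exp(-dt / tau_facil)      # facilitation decay
        u = u + U * (1.0 - u)   # facilitation jump at the spike
        r = u * x               # fraction of resources released
        x -= r
        released.append(r)
        t_last = t
    return released
```

Depending on the parameters, successive releases grow (facilitating regime: small U, slow tau_facil) or shrink (depressing regime: large U, slow tau_rec), reproducing the two classical short-term plasticity behaviors.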

20.
Front Comput Neurosci ; 13: 78, 2019.
Article in English | MEDLINE | ID: mdl-31798436

ABSTRACT

Synaptic changes induced by neural activity need to be consolidated to maintain memory over a timescale of hours. In experiments, synaptic consolidation can be induced by repeating a stimulation protocol several times, and the effectiveness of consolidation depends crucially on the repetition frequency of the stimulations. We address the question: is there an understandable reason why induction protocols with repetitions at some frequency work better than sustained protocols, even though the accumulated stimulation strength might be exactly the same in both cases? In real synapses, plasticity occurs on multiple timescales, from seconds (induction), to several minutes (the early phase of long-term potentiation), to hours and days (the late phase of synaptic consolidation). We use a simplified mathematical model with just two timescales to elucidate the above question in a purified setting. Our mathematical results show that, even in such a simple model, the repetition frequency of stimulation plays an important role in the successful induction and stabilization of potentiation.
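A deliberately crude two-variable sketch of the two-timescale idea: a fast "early-phase" variable decays quickly, and a slow "late-phase" variable grows only while the fast one exceeds a threshold, so only stimulations repeated fast enough to summate above threshold trigger consolidation. The threshold, rates, and the Euler discretization are invented for illustration and are not fit to the paper's model:

```python
def consolidate(pulse_steps, n_steps, dt=1.0, tau_fast=50.0, tau_slow=5000.0,
                theta=1.5, gain=0.01):
    """Toy two-timescale consolidation: each pulse bumps w_fast by 1; w_fast
    decays with tau_fast, and w_slow integrates a constant drive only while
    w_fast exceeds the threshold theta."""
    w_fast, w_slow = 0.0, 0.0
    pulses = set(pulse_steps)
    for i in range(n_steps):
        if i in pulses:
            w_fast += 1.0                       # stimulation pulse
        w_fast -= dt * w_fast / tau_fast        # fast (early-phase) decay
        drive = gain if w_fast > theta else 0.0  # consolidation trigger
        w_slow += dt * (drive - w_slow / tau_slow)
    return w_slow
```

Three pulses in quick succession push w_fast above threshold and leave a lasting w_slow, while the same three pulses spread far apart never cross threshold and leave no late-phase trace — the same accumulated stimulation, different outcome.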
