Results 1 - 2 of 2
1.
bioRxiv; 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38464188

ABSTRACT

In this study, we develop a novel recurrent neural network (RNN) model of prefrontal cortex that predicts sensory inputs, actions, and outcomes at the next time step. Synaptic weights in the model are adjusted to minimize sequence prediction error, adapting a deep learning rule similar to those of large language models. The model, called Sequence Prediction Error Learning (SPEL), is a simple RNN that predicts world state at the next time step, but it differs from standard RNNs by feeding its own prediction errors from the previous state predictions back into the hidden units of the network. We show that the time course of sequence prediction errors generated by the model closely matched the activity time courses of populations of neurons in macaque prefrontal cortex. Hidden units in the model responded to combinations of task variables and exhibited sensitivity to changing stimulus probability in ways that closely resembled monkey prefrontal neurons. Moreover, the model generated prolonged response times to infrequent, unexpected events, as monkeys do. The results suggest that prefrontal cortex may generate internal models of the temporal structure of the world even during tasks that do not explicitly depend on temporal expectation, using a sequence prediction error minimization learning rule to do so. As such, the SPEL model provides a unified, general-purpose theoretical framework for modeling the lateral prefrontal cortex.
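
To make the architecture concrete, the following is a minimal PyTorch sketch written from the description in this abstract, not the authors' released code; the names (SPELCell, state_dim, hidden_dim), the toy sizes, and the mean-squared-error objective are all assumptions for illustration. The defining feature is that the previous step's prediction error is fed back into the hidden units alongside the current input.

import torch
import torch.nn as nn

class SPELCell(nn.Module):
    """Simple RNN that predicts the next world state and feeds its own
    prediction error back into the hidden units (hypothetical sketch)."""

    def __init__(self, state_dim: int, hidden_dim: int):
        super().__init__()
        # Hidden units see the current state AND the previous prediction
        # error, which doubles the input width relative to a plain RNN.
        self.rnn = nn.RNNCell(2 * state_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, state_dim)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, time, state_dim); each step concatenates the
        # sensory input, action, and outcome into one vector.
        batch, T, dim = states.shape
        h = states.new_zeros(batch, self.rnn.hidden_size)
        err = states.new_zeros(batch, dim)      # no error before step 0
        preds = []
        for t in range(T - 1):
            h = self.rnn(torch.cat([states[:, t], err], dim=-1), h)
            pred = self.readout(h)              # predicted state at t + 1
            err = states[:, t + 1] - pred       # sequence prediction error
            preds.append(pred)
        return torch.stack(preds, dim=1)        # (batch, T - 1, state_dim)

# Weights are adjusted to minimize sequence prediction error, here with MSE.
model = SPELCell(state_dim=12, hidden_dim=64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 50, 12)                      # toy (batch, time, state) data
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x[:, 1:])
loss.backward()
opt.step()

Because the error signal is an explicit input, backpropagation through time also flows through the error-feedback pathway, which is what distinguishes this sketch from a standard next-step-prediction RNN.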

2.
PLoS Comput Biol; 16(11): e1008342, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33141824

ABSTRACT

To survive in a complicated and ever-changing environment, the brain must make flexible and adaptive responses. To achieve this, it needs to understand the contingencies between its sensory inputs, actions, and rewards. This is analogous to the statistical inference extensively studied in natural language processing (NLP), where recent developments in recurrent neural networks have found many successes. We ask whether these neural networks, gated recurrent unit (GRU) networks in particular, reflect how the brain solves the contingency problem. We therefore build a GRU network framework inspired by the statistical learning approach of NLP and test it with four exemplar behavioral tasks previously used in empirical studies. The network models are trained to predict future events based on past events, both comprising sensory, action, and reward events. We show that the networks successfully reproduce animal and human behavior: they generalize beyond their training, perform Bayesian inference in novel conditions, and adapt their choices when event contingencies vary. Importantly, units in the networks encode task variables and exhibit activity patterns that match previous neurophysiology findings. Our results suggest that a neural network approach based on statistical sequence learning may reflect the brain's computational principle underlying flexible and adaptive behaviors, and may serve as a useful approach to understanding the brain.
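
In the same spirit as the sketch above, the following is a minimal PyTorch illustration of the kind of GRU next-event predictor this abstract describes, under stated assumptions: EventGRU, the event vocabulary size, the embedding-based input coding, and the cross-entropy objective are illustrative, not taken from the paper. The network reads a sequence of discrete sensory, action, and reward events and is trained to predict the event at the next time step, in the spirit of language modeling.

import torch
import torch.nn as nn

class EventGRU(nn.Module):
    """GRU that predicts the next discrete event from past events
    (hypothetical sketch of the framework the abstract describes)."""

    def __init__(self, n_events: int, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_events, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_events)

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (batch, time) integer codes for sensory, action,
        # and reward events; returns logits over the next event.
        h, _ = self.gru(self.embed(events))
        return self.head(h)

# Trained with cross-entropy, so the network learns the statistics of
# event contingencies in the spirit of a language model.
n_events = 10
model = EventGRU(n_events)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = torch.randint(0, n_events, (8, 40))       # toy event sequences
opt.zero_grad()
logits = model(seq[:, :-1])                     # inputs: events 0..T-2
loss = nn.functional.cross_entropy(
    logits.reshape(-1, n_events), seq[:, 1:].reshape(-1))
loss.backward()
opt.step()

After training, the softmax over the logits can be read as the network's estimate of event probabilities, which is how a model of this kind could exhibit Bayesian-inference-like behavior in novel conditions.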


Subject(s)
Decision Making, Learning, Neural Networks, Computer, Animals, Bayes Theorem, Brain/physiology, Computational Biology, Computer Simulation, Humans, Models, Neurological, Models, Statistical, Natural Language Processing, Reinforcement, Psychology, Reward, Task Performance and Analysis