Results 1 - 5 of 5
1.
Nature; 557(7705): 429-433, 2018 May.
Article in English | MEDLINE | ID: mdl-29743670

ABSTRACT

Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
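A minimal sketch of the first stage described above, assuming a standard supervised path-integration setup: a recurrent network receives egocentric velocity input and is trained to predict a place-cell encoding of position, with grid-like units reported to emerge in a regularized linear bottleneck. Layer sizes, the dropout rate, and the target encoding are illustrative assumptions, not the paper's exact configuration; the second stage, in which the grid code feeds a deep reinforcement-learning agent, is beyond this sketch.

```python
# Hypothetical sketch of supervised path integration (not the authors' code).
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    def __init__(self, n_place=256, n_hidden=128, n_bottleneck=512):
        super().__init__()
        # Input per time step: speed, sin(heading), cos(heading).
        self.rnn = nn.LSTM(input_size=3, hidden_size=n_hidden, batch_first=True)
        # Linear bottleneck with dropout: the layer where grid-like
        # representations are reported to emerge.
        self.bottleneck = nn.Linear(n_hidden, n_bottleneck)
        self.dropout = nn.Dropout(0.5)
        self.place_head = nn.Linear(n_bottleneck, n_place)

    def forward(self, velocities):            # velocities: (batch, time, 3)
        h, _ = self.rnn(velocities)           # (batch, time, n_hidden)
        g = self.dropout(self.bottleneck(h))  # putative grid code
        return self.place_head(g), g          # place-cell logits, grid code

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
vel = torch.randn(32, 100, 3)                 # placeholder trajectories
# Soft place-cell targets for the true position at each step (placeholder).
target = torch.softmax(torch.randn(32, 100, 256), dim=-1)

opt.zero_grad()
logits, grid_code = model(vel)
loss = -(target * torch.log_softmax(logits, dim=-1)).sum(-1).mean()
loss.backward()
opt.step()
```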


Subject(s)
Biomimetics/methods, Machine Learning, Neural Networks (Computer), Spatial Navigation, Animals, Entorhinal Cortex/cytology, Entorhinal Cortex/physiology, Environment, Grid Cells/physiology, Humans
2.
Proc Natl Acad Sci U S A; 114(13): 3521-3526, 2017 Mar 28.
Article in English | MEDLINE | ID: mdl-28292907

ABSTRACT

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now, neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
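A minimal sketch of the mechanism the abstract describes (published as elastic weight consolidation): after training on task A, learning on task B is penalized for moving weights that were important for A, with importance estimated by a diagonal Fisher information approximation. The function names and the penalty strength `lam` are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of an EWC-style quadratic penalty (illustrative only).
import torch

def fisher_diagonal(model, loader, loss_fn):
    """Estimate per-parameter importance on task A as the mean squared
    gradient of the task-A loss (a diagonal Fisher approximation)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / len(loader) for n, f in fisher.items()}

def ewc_penalty(model, fisher, theta_star, lam=1000.0):
    """0.5 * lam * sum_i F_i * (theta_i - theta_star_i)^2."""
    penalty = sum((fisher[n] * (p - theta_star[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * lam * penalty

# After task A, snapshot the weights:
#   theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}
# Then train task B with:
#   loss = task_b_loss + ewc_penalty(model, fisher, theta_star)
```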


Subject(s)
Neural Networks (Computer), Algorithms, Artificial Intelligence, Computer Simulation, Humans, Learning, Memory, Mental Recall
4.
Trends Cogn Sci; 24(12): 1028-1040, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33158755

ABSTRACT

Artificial intelligence research has seen enormous progress over the past few decades, but it predominantly relies on fixed datasets and stationary environments. Continual learning is an increasingly relevant area of study that asks how artificial systems might learn sequentially, as biological systems do, from a continuous stream of correlated data. In the present review, we relate continual learning to the learning dynamics of neural networks, highlighting its potential to considerably improve data efficiency. We further consider the many new biologically inspired approaches that have emerged in recent years, focusing on those that utilize regularization, modularity, memory, and meta-learning, and highlight some of the most promising and impactful directions.
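As one concrete example of the memory-based approaches such reviews survey, here is a sketch of experience replay with a fixed-size buffer filled by reservoir sampling, so that each training batch mixes new data with a uniform sample of the past stream. The capacity and mixing scheme are illustrative assumptions, not a method prescribed by the paper.

```python
# Hedged sketch of a reservoir-sampled replay buffer for continual learning.
import random

class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.items = []
        self.seen = 0  # total examples offered, drives reservoir sampling

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            # Each example ever seen is kept with probability capacity/seen,
            # giving a uniform sample over the whole stream.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

# Training loop (outline): for each incoming batch, train on the union of
# the batch and buffer.sample(len(batch)), then buffer.add() each example.
```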


Subject(s)
Artificial Intelligence, Neural Networks (Computer), Humans, Learning, Memory
5.
Neural Netw; 24(2): 199-207, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21036537

ABSTRACT

Neurodynamical models of working memory (WM) should provide mechanisms for storing, maintaining, retrieving, and deleting information. Many models address only a subset of these aspects. Here we present a rather simple WM model in which all of these performance modes are trained into a recurrent neural network (RNN) of the echo state network (ESN) type. The model is demonstrated on a bracket-level parsing task with a stream of rich and noisy graphical script input. In terms of nonlinear dynamics, memory states correspond, intuitively, to attractors in an input-driven system. As a supplementary contribution, the article proposes a rigorous formal framework to describe such attractors, generalizing from the standard definition of attractors in autonomous (input-free) dynamical systems.
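A minimal sketch of the underlying ESN machinery, assuming the standard formulation: a fixed random recurrent reservoir driven by the input, with only a linear readout trained, here by ridge regression. The sizes, spectral radius, and regularizer are illustrative; the paper's training of the store/maintain/retrieve/delete WM modes is not reproduced.

```python
# Hedged sketch of a standard echo state network (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 10, 300, 2

# Fixed reservoir: scale recurrent weights to spectral radius < 1 so the
# network has the echo state property (fading memory of past inputs).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):                 # inputs: (T, n_in)
    states, x = [], np.zeros(n_res)
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)      # leaky integration omitted
        states.append(x.copy())
    return np.array(states)                # (T, n_res)

# Train only the readout, with ridge regression on the collected states.
U = rng.uniform(-1, 1, (500, n_in))        # placeholder input stream
Y = rng.uniform(-1, 1, (500, n_out))       # placeholder targets
X = run_reservoir(U)
ridge = 1e-4
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T
prediction = X @ W_out.T                   # readout over the same run
```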


Subject(s)
Short-Term Memory, Neurological Models, Neural Networks (Computer), Short-Term Memory/physiology, Neurons/physiology, Random Allocation