IEEE Trans Neural Netw Learn Syst ; 29(6): 2259-2270, 2018 06.
Article in English | MEDLINE | ID: mdl-28436902

ABSTRACT

The reinforcement learning (RL) paradigm allows agents to solve tasks through trial-and-error learning. To be capable of efficient, long-term learning, RL agents should be able to apply knowledge gained in the past to new tasks they may encounter in the future. The ability to predict actions' consequences may facilitate such knowledge transfer. We consider here domains where an RL agent has access to two kinds of information: agent-centric information with constant semantics across tasks, and environment-centric information, which is necessary to solve the task, but with semantics that differ between tasks. For example, in robot navigation, environment-centric information may include the robot's geographic location, while agent-centric information may include sensor readings of various nearby obstacles. We propose that these situations provide an opportunity for a very natural style of knowledge transfer, in which the agent learns to predict actions' environmental consequences using agent-centric information. These predictions contain important information about the affordances and dangers present in a novel environment, and can effectively transfer knowledge from agent-centric to environment-centric learning systems. Using several example problems including spatial navigation and network routing, we show that our knowledge transfer approach can allow faster and lower cost learning than existing alternatives.
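The transfer idea in the abstract can be illustrated with a toy sketch (not the paper's implementation): an agent in a gridworld observes both its position (environment-centric; semantics change between grids) and a local sensor reading of adjacent obstacles (agent-centric; semantics are constant). A predictor keyed on the sensor reading and action, learned by random exploration in one grid, then predicts which actions are blocked in a grid it has never seen. All names (`sense`, `learn_predictor`, `make_grid`) are invented for this example.

```python
import random

# Four actions: left, right, down, up (dx, dy offsets).
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def make_grid(w, h, walls):
    """Grid as a dict mapping (x, y) -> 1 if wall, 0 if free; off-grid is a wall."""
    grid = {(x, y): 0 for x in range(w) for y in range(h)}
    for cell in walls:
        grid[cell] = 1
    return grid

def sense(grid, x, y):
    """Agent-centric observation: blocked-status of the four adjacent cells.
    Its meaning is the same in every grid, unlike the (x, y) coordinates."""
    return tuple(grid.get((x + dx, y + dy), 1) for dx, dy in MOVES)

def learn_predictor(grid, steps=2000, seed=0):
    """Random exploration; learn a map (sensor reading, action) -> blocked?"""
    rng = random.Random(seed)
    pred = {}
    x, y = 0, 0
    for _ in range(steps):
        a = rng.randrange(4)
        s = sense(grid, x, y)
        dx, dy = MOVES[a]
        blocked = grid.get((x + dx, y + dy), 1) == 1
        pred[(s, a)] = blocked          # record the action's consequence
        if not blocked:
            x, y = x + dx, y + dy
    return pred

def transfer_accuracy(pred, grid, w, h):
    """Evaluate the predictor on every free cell of a *new* grid,
    counting only sensor/action pairs seen during training."""
    correct = total = 0
    for x in range(w):
        for y in range(h):
            if grid[(x, y)]:
                continue
            s = sense(grid, x, y)
            for a, (dx, dy) in enumerate(MOVES):
                if (s, a) in pred:
                    total += 1
                    actual = grid.get((x + dx, y + dy), 1) == 1
                    correct += pred[(s, a)] == actual
    return correct, total
```

Because the sensor's semantics are constant across tasks, the learned consequence model carries over to a grid with a completely different wall layout, whereas a table keyed on (x, y) would not; this is the agent-centric-to-environment-centric transfer the abstract describes, reduced to its simplest form.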


Subject(s)
Algorithms; Knowledge; Neural Networks, Computer; Reinforcement, Psychology; Transfer, Psychology/physiology; Computer Simulation; Humans; Predictive Value of Tests; Spatial Navigation