Generalization Enhancement of Visual Reinforcement Learning through Internal States.
Yang, Hanlin; Zhu, William; Zhu, Xianchao.
Affiliation
  • Yang H; Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China.
  • Zhu W; Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China.
  • Zhu X; School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou 450001, China.
Sensors (Basel); 24(14), 2024 Jul 12.
Article in En | MEDLINE | ID: mdl-39065911
ABSTRACT
Visual reinforcement learning is important in various practical applications, such as video games, robotic manipulation, and autonomous navigation. However, a major challenge in visual reinforcement learning is generalization to unseen environments, that is, how agents manage environments with previously unseen backgrounds. This issue stems mainly from the high unpredictability inherent in high-dimensional observation spaces. To deal with this problem, techniques including domain randomization and data augmentation have been explored; nevertheless, these methods still cannot attain satisfactory results. This paper proposes a new method named Internal States Simulation Auxiliary (ISSA), which uses internal states to improve generalization in visual reinforcement learning tasks. Our method involves two agents, a teacher agent and a student agent: the teacher agent can directly access the environment's internal states and is used to facilitate the student agent's training, while the student agent receives initial guidance from the teacher agent and subsequently continues to learn independently. From another perspective, our method can be divided into two phases, a transfer learning phase and a traditional visual reinforcement learning phase. In the first phase, the teacher agent interacts with the environment and imparts knowledge to the vision-based student agent. With the guidance of the teacher agent, the student agent is able to discover more effective visual representations that address the high unpredictability of high-dimensional observation spaces. In the second phase, the student agent learns autonomously from the visual information in the environment and ultimately becomes a vision-based reinforcement learning agent with enhanced generalization. The effectiveness of our method is evaluated on the DMControl Generalization Benchmark and on DrawerWorld with texture distortions. Preliminary results indicate that our method significantly improves generalization ability and performance in complex continuous control tasks.
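The two-phase teacher-student procedure described above lends itself to a short illustration. Below is a minimal sketch of that idea in PyTorch, assuming a privileged low-dimensional internal state for the teacher and rendered image frames for the student; the network sizes, the imitation (MSE) loss, and the random tensors standing in for environment data are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a two-phase teacher-student setup in the spirit of ISSA.
# All dimensions, the imitation loss, and the placeholder data are assumptions.
import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM, ACTION_DIM, IMG_CHANNELS = 12, 4, 3

class TeacherPolicy(nn.Module):
    """Acts on the environment's internal (low-dimensional) states."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM))
    def forward(self, state):
        return torch.tanh(self.net(state))

class StudentPolicy(nn.Module):
    """Acts on raw image observations via a small convolutional encoder."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(), nn.Flatten())
        self.head = nn.LazyLinear(ACTION_DIM)
    def forward(self, image):
        return torch.tanh(self.head(self.encoder(image)))

teacher, student = TeacherPolicy(), StudentPolicy()
opt = optim.Adam(student.parameters(), lr=3e-4)

# Phase 1 (transfer learning): the student imitates the teacher's actions on
# paired (internal state, rendered image) observations of the same time steps.
for step in range(100):
    states = torch.randn(32, STATE_DIM)             # stand-in for internal states
    images = torch.randn(32, IMG_CHANNELS, 84, 84)  # stand-in for rendered frames
    with torch.no_grad():
        target_actions = teacher(states)
    loss = nn.functional.mse_loss(student(images), target_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2 (traditional visual RL): the student continues training on images
# alone with a standard reinforcement learning objective (omitted for brevity).

In this reading, the "guidance" of the first phase amounts to distilling the teacher's state-based behavior into the image-based student, after which the student is trained as an ordinary visual reinforcement learning agent.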
Full text: 1 Database: MEDLINE Language: En Publication year: 2024 Document type: Article