Optimistic sequential multi-agent reinforcement learning with motivational communication.
Huang, Anqi; Wang, Yongli; Zhou, Xiaoliang; Zou, Haochen; Dong, Xu; Che, Xun.
Affiliation
  • Huang A; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China. Electronic address: anqihuang@njust.edu.cn.
  • Wang Y; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China. Electronic address: yongliwang@njust.edu.cn.
  • Zhou X; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
  • Zou H; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
  • Dong X; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
  • Che X; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
Neural Netw; 179: 106547, 2024 Jul 22.
Article in En | MEDLINE | ID: mdl-39068677
ABSTRACT
Centralized Training with Decentralized Execution (CTDE) is a prevalent paradigm in fully cooperative Multi-Agent Reinforcement Learning (MARL). Existing algorithms often encounter two major problems: independent strategies tend to underestimate the potential value of actions, leading to convergence to sub-optimal Nash Equilibria (NE); and some communication paradigms add complexity to the learning process, making it harder to focus on the essential elements of the messages. To address these challenges, we propose a novel method called Optimistic Sequential Soft Actor Critic with Motivational Communication (OSSMC). The key idea of OSSMC is to use a greedy-driven approach to explore the potential value of individual policies, yielding optimistic Q-values that serve as an upper bound on the Q-value of the current policy. We then integrate a sequential update mechanism with the optimistic Q-values, aiming to ensure monotonic improvement during joint policy optimization. Moreover, we equip each agent with a motivational communication module that disseminates motivational messages to promote cooperative behaviors. Finally, we employ a value regularization strategy from the Soft Actor Critic (SAC) method to maximize entropy and improve exploration. The performance of OSSMC was rigorously evaluated against a series of challenging benchmarks. Empirical results demonstrate that OSSMC not only surpasses current baseline algorithms but also converges more rapidly.
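
The sketch below illustrates two ingredients named in the abstract: an optimistic Q-value built as an upper bound over a critic ensemble, and a SAC-style actor objective with an entropy term. It is not the authors' implementation; the ensemble-disagreement bonus (beta), the class and function names (Critic, GaussianPolicy, optimistic_q, actor_loss), and all hyperparameters are illustrative assumptions, and the paper's sequential update mechanism and motivational communication modules are not reproduced here.

import torch
import torch.nn as nn

# Illustrative sketch only, not the paper's released code. Assumes a
# continuous-action agent with an ensemble of critics; the "optimistic
# Q-value" is approximated as the ensemble mean plus a scaled disagreement
# term, which upper-bounds the pessimistic (min) estimate used in plain SAC.

class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def sample(self, obs):
        h = self.body(obs)
        dist = torch.distributions.Normal(self.mu(h), self.log_std(h).exp())
        act = dist.rsample()                              # reparameterized sample
        log_prob = dist.log_prob(act).sum(-1, keepdim=True)
        return act, log_prob

def optimistic_q(critics, obs, act, beta=1.0):
    # Optimism bonus: ensemble mean plus beta-scaled ensemble std (assumption).
    qs = torch.stack([c(obs, act) for c in critics], dim=0)
    return qs.mean(dim=0) + beta * qs.std(dim=0)

def actor_loss(critics, policy, obs, alpha=0.2, beta=1.0):
    # SAC-style objective: maximize optimistic Q plus alpha-weighted entropy,
    # i.e. minimize alpha * log_prob - Q_optimistic.
    act, log_prob = policy.sample(obs)
    return (alpha * log_prob - optimistic_q(critics, obs, act, beta)).mean()

# Minimal usage example with random data.
obs_dim, act_dim, batch = 8, 2, 32
critics = [Critic(obs_dim, act_dim) for _ in range(2)]
policy = GaussianPolicy(obs_dim, act_dim)
loss = actor_loss(critics, policy, torch.randn(batch, obs_dim))
loss.backward()

In the actual method, each agent's update would additionally condition on received motivational messages and be applied in a fixed agent order (the sequential update), which this standalone snippet omits.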
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Neural Netw Journal subject: NEUROLOGIA Year: 2024 Document type: Article