Results 1 - 12 of 12

1.
Article in English | MEDLINE | ID: mdl-38619961

ABSTRACT

Deep reinforcement learning (DRL) benefits from the representation power of deep neural networks (NNs) to approximate the value function and policy during learning. Batch reinforcement learning (BRL) benefits from stable training and data efficiency with a fixed representation and enjoys solid theoretical analysis. This work proposes least-squares deep policy gradient (LSDPG), a hybrid approach that combines least-squares reinforcement learning (RL) with online DRL to achieve the best of both worlds. LSDPG uses a shared network so that the policy (actor) and value function (critic) share useful features, and it learns the policy, value function, and representation separately. First, LSDPG views the critic network as a linear combination of representation features weighted by the last layer and performs policy evaluation with regularized least-squares temporal difference (LSTD) methods. Second, arbitrary policy gradient algorithms can be applied to improve the policy. Third, an auxiliary task periodically distills the features of the critic into the representation. Unlike most DRL methods, where the critic is used in a nonstationary setting, i.e., the policy being evaluated keeps changing, the critic in LSDPG works on a stationary problem in each iteration of the critic update. We prove that, under some conditions, the critic converges to the regularized TD fixed point of the current policy, and the actor converges to a locally optimal policy. Experimental results on the challenging Procgen benchmark demonstrate the improved sample efficiency of LSDPG over proximal policy optimization (PPO) and phasic policy gradient (PPG).
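A minimal numerical sketch of the regularized LSTD policy-evaluation step described above, assuming the critic's penultimate-layer features have already been extracted into a matrix; the function name `regularized_lstd` and the toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def regularized_lstd(phi, phi_next, rewards, gamma=0.99, reg=1e-3):
    """Solve the regularized LSTD system (A + reg*I) w = b, where
    A = Phi^T (Phi - gamma * Phi') and b = Phi^T r."""
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ rewards
    return np.linalg.solve(A + reg * np.eye(phi.shape[1]), b)

# Toy usage: random features standing in for the shared representation.
rng = np.random.default_rng(0)
phi = rng.normal(size=(256, 16))        # features of visited states
phi_next = rng.normal(size=(256, 16))   # features of successor states
rewards = rng.normal(size=256)
w = regularized_lstd(phi, phi_next, rewards)   # last-layer critic weights
values = phi @ w                               # V(s) = phi(s)^T w
```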

2.
Article in English | MEDLINE | ID: mdl-39190528

ABSTRACT

For on-policy reinforcement learning (RL), discretizing the action space for continuous control can easily express multiple modes and is straightforward to optimize. However, without considering the inherent ordering between the discrete atomic actions, the explosion in the number of discrete actions can introduce undesired properties and induce a higher variance for the policy gradient (PG) estimator. In this article, we introduce a straightforward architecture that addresses this issue by constraining the discrete policy to be unimodal using Poisson probability distributions. This unimodal architecture can better exploit the continuity of the underlying continuous action space through explicit unimodal probability distributions. We conduct extensive experiments showing that a discrete policy with a unimodal probability distribution provides significantly faster convergence and higher performance for on-policy RL algorithms in challenging control tasks, especially in highly complex tasks such as Humanoid. We also provide a theoretical analysis of the variance of the PG estimator, which suggests that our carefully designed unimodal discrete policy retains a lower variance and yields a stable learning process.
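The sketch below illustrates one way such a unimodal discrete policy could be parameterized: a single positive rate, which in practice would be produced by the policy network, shapes a truncated Poisson distribution over the action bins. The helper name and the truncation-plus-renormalization choice are assumptions, not the authors' exact design.

```python
import numpy as np
from scipy.special import gammaln

def unimodal_discrete_policy(rate, num_bins):
    """Probabilities over num_bins atomic actions shaped by a truncated
    Poisson(rate) pmf, which is unimodal by construction."""
    k = np.arange(num_bins)
    log_pmf = k * np.log(rate) - rate - gammaln(k + 1)
    pmf = np.exp(log_pmf - log_pmf.max())   # numerically stable exponentiation
    return pmf / pmf.sum()                  # renormalize after truncation

probs = unimodal_discrete_policy(rate=3.2, num_bins=11)
action_bin = np.random.default_rng(0).choice(11, p=probs)
```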

3.
Article in English | MEDLINE | ID: mdl-37672377

ABSTRACT

Communication-based multiagent reinforcement learning (MARL) has shown promising results in promoting cooperation by enabling agents to exchange information. However, existing methods have limitations in large-scale multiagent systems due to high information redundancy, and they tend to overlook the unstable training process caused by an online-trained communication protocol. In this work, we propose a novel method called neighboring variational information flow (NVIF), which enhances communication among neighboring agents by providing them with the maximum information set (MIS), containing more information than existing methods offer. NVIF compresses the MIS into a compact latent state while adopting neighboring communication. To stabilize the overall training process, we introduce a two-stage training mechanism: we first pretrain the NVIF module on a randomly sampled offline dataset to obtain a task-agnostic and stable communication protocol, and then use the pretrained protocol for online policy training with RL algorithms. Our theoretical analysis indicates that NVIF-proximal policy optimization (PPO), which combines NVIF with PPO, can promote cooperation with agent-specific rewards. Experimental results demonstrate the superiority of our method in both heterogeneous and homogeneous settings, and additional experiments demonstrate its potential for multitask learning.
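As a rough illustration of the compression step, the sketch below encodes concatenated neighbor messages (standing in for the maximum information set) into a compact latent state with a reparameterized Gaussian; the module name, the sizes, and the single-layer encoder are assumptions rather than the NVIF architecture.

```python
import torch
import torch.nn as nn

class NeighborEncoder(nn.Module):
    """Variational encoder that compresses neighboring messages into a
    compact latent state and returns the KL term used for pretraining."""
    def __init__(self, msg_dim, num_neighbors, latent_dim):
        super().__init__()
        self.net = nn.Linear(msg_dim * num_neighbors, 2 * latent_dim)

    def forward(self, neighbor_msgs):               # (batch, neighbors, msg_dim)
        mu, log_var = self.net(neighbor_msgs.flatten(1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterize
        kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1).mean()
        return z, kl

enc = NeighborEncoder(msg_dim=16, num_neighbors=4, latent_dim=8)
z, kl = enc(torch.randn(32, 4, 16))   # stage 1: pretrain on offline data
```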

4.
IEEE Trans Cybern; 53(10): 6443-6455, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35749334

ABSTRACT

In single-agent Markov decision processes, an agent can optimize its policy based on its interaction with the environment. In multiplayer Markov games (MGs), however, the interaction is nonstationary due to the behaviors of other players, so the agent has no fixed optimization objective. The challenge becomes finding equilibrium policies for all players. In this research, we treat the evolution of player policies as a dynamical process and propose a novel learning scheme for the Nash equilibrium. The core idea is to evolve each player's policy according to not just its current in-game performance but an aggregation of its performance over history. We show that for a variety of MGs, players in our learning scheme provably converge to a point that approximates the Nash equilibrium. Combined with neural networks, we develop an empirical policy optimization algorithm that is implemented in a reinforcement-learning framework and runs in a distributed way, with each player optimizing its policy based on its own observations. We use two numerical examples to validate the convergence property on small-scale MGs and a Pong example to show the potential on large games.
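A toy sketch of responding to aggregated historical performance rather than instantaneous payoffs, on a rock-paper-scissors matrix game; the smoothed (softmax) response and the temperature are assumptions, and the scheme shown is standard smooth fictitious play rather than the paper's algorithm.

```python
import numpy as np

# Rock-paper-scissors payoff for player 1 (player 2 receives the negative).
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])

def softmax(x, temp=0.5):
    e = np.exp((x - x.max()) / temp)
    return e / e.sum()

p1, p2 = np.array([0.6, 0.2, 0.2]), np.array([0.2, 0.5, 0.3])
avg1, avg2 = np.zeros(3), np.zeros(3)
for t in range(1, 5001):
    u1, u2 = A @ p2, -A.T @ p1               # per-action payoffs this round
    avg1 += (u1 - avg1) / t                  # aggregate performance over history
    avg2 += (u2 - avg2) / t
    p1, p2 = softmax(avg1), softmax(avg2)    # respond to the averages, not the last round

print(np.round(p1, 3), np.round(p2, 3))      # both drift toward the uniform mixed strategy
```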

5.
IEEE Trans Neural Netw Learn Syst; 34(8): 3966-3978, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34723813

ABSTRACT

Communicating with each other in a distributed manner and behaving as a group are essential for agents in multi-agent reinforcement learning. However, real-world multi-agent systems suffer from restrictions of limited-bandwidth communication. If the bandwidth is fully occupied, some agents cannot send messages to others promptly, causing decision delays and impairing cooperation. Recent related work has started to address the problem but still falls short of minimizing the consumption of communication resources. In this article, we propose an event-triggered communication network (ETCNet) to enhance communication efficiency in multi-agent systems by communicating only when necessary. For different task requirements, two paradigms of the ETCNet framework, the event-triggered sending network (ETSNet) and the event-triggered receiving network (ETRNet), are proposed for learning efficient sending and receiving protocols, respectively. Leveraging information theory, the limited bandwidth is translated into the penalty threshold of an event-triggered strategy, which determines whether an agent participates in communication at each step. The design of the event-triggered strategy is then formulated as a constrained Markov decision problem, and reinforcement learning finds the feasible and optimal communication protocol that satisfies the limited-bandwidth constraint. Experiments on typical multi-agent tasks demonstrate that ETCNet outperforms other methods in reducing bandwidth occupancy while largely preserving the cooperative performance of the multi-agent system.
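A bare-bones sketch of the event-triggered idea: an agent occupies the channel only when its intended message has drifted from the last transmitted one by more than a threshold. The class name, the norm-based trigger, and the drifting toy signal are assumptions; ETCNet learns its trigger and protocol with RL rather than using a fixed rule.

```python
import numpy as np

class EventTriggeredSender:
    """Transmit only when the new message deviates enough from the last
    transmitted one; otherwise receivers keep using the old message."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = None

    def maybe_send(self, message):
        if self.last_sent is None or np.linalg.norm(message - self.last_sent) > self.threshold:
            self.last_sent = message.copy()
            return message, True         # channel occupied this step
        return self.last_sent, False     # stay silent, reuse old message

rng = np.random.default_rng(0)
sender = EventTriggeredSender(threshold=0.5)
sends = 0
for step in range(100):
    msg = np.full(4, np.sin(0.05 * step)) + 0.02 * rng.normal(size=4)  # slowly drifting signal
    _, transmitted = sender.maybe_send(msg)
    sends += transmitted
print(f"channel occupied on {sends}/100 steps")
```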

6.
IEEE Trans Neural Netw Learn Syst; 34(4): 2093-2104, 2023 Apr.
Article in English | MEDLINE | ID: mdl-34460404

ABSTRACT

Multiagent reinforcement learning methods such as VDN, QMIX, and QTRAN, which adopt the centralized training with decentralized execution (CTDE) framework, have shown promising results in cooperation and competition. However, in some multiagent scenarios, the number of agents and the size of the action set actually vary over time. We call these unshaped scenarios, in which the methods mentioned above fail to perform satisfactorily. In this article, we propose a new method, called Unshaped Networks for Multiagent Systems (UNMAS), that adapts to changes in the number of agents and the size of the action set. We propose a self-weighting mixing network to factorize the joint action-value. Its adaptation to changes in the number of agents is attributed to the nonlinear mapping from each agent's Q-value to the joint action-value with individual weights. In addition, to address changes in the action set, each agent constructs an individual action-value network composed of two streams that evaluate the constant environment-oriented subset and the varying unit-oriented subset. We evaluate UNMAS on various StarCraft II micromanagement scenarios and compare the results with several state-of-the-art MARL algorithms. The superiority of UNMAS is demonstrated by the highest winning rates, especially on the most difficult scenario, 3s5z_vs_3s6z, where the agents learn to perform effective cooperative behaviors while other MARL algorithms fail. Animated demonstrations and source code are provided at https://sites.google.com/view/unmas.
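A minimal sketch of the self-weighting factorization described above: each agent's chosen-action Q-value is scaled by an individually generated non-negative weight and summed, so the mixer is agnostic to the number of agents. The module name, the single linear layer, and the per-agent observation input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfWeightingMixer(nn.Module):
    """Map per-agent Q-values to the joint action-value with individual
    non-negative weights, independent of how many agents are present."""
    def __init__(self, obs_dim):
        super().__init__()
        self.weight_net = nn.Sequential(nn.Linear(obs_dim, 1), nn.Softplus())

    def forward(self, agent_qs, agent_obs):
        # agent_qs: (batch, n_agents); agent_obs: (batch, n_agents, obs_dim)
        w = self.weight_net(agent_obs).squeeze(-1)   # (batch, n_agents)
        return (w * agent_qs).sum(dim=1)             # joint Q_tot

mixer = SelfWeightingMixer(obs_dim=8)
q_tot_5 = mixer(torch.randn(32, 5), torch.randn(32, 5, 8))   # 5 agents
q_tot_7 = mixer(torch.randn(32, 7), torch.randn(32, 7, 8))   # same mixer, 7 agents
```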

7.
IEEE Trans Neural Netw Learn Syst; 33(3): 1228-1241, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33306474

ABSTRACT

The Nash equilibrium is an important concept in game theory. It describes the least exploitability of one player by any opponent. We combine game theory, dynamic programming, and recent deep reinforcement learning (DRL) techniques to learn the Nash equilibrium policy online for two-player zero-sum Markov games (TZMGs). The problem is first formulated as a Bellman minimax equation, and generalized policy iteration (GPI) provides a double-loop iterative way to find the equilibrium. Then, neural networks are introduced to approximate Q functions for large-scale problems, and an online minimax Q network learning algorithm is proposed to train the network from observations. Experience replay, dueling network, and double Q-learning are applied to improve the learning process. The contributions are twofold: 1) DRL techniques are combined with GPI to find the TZMG Nash equilibrium for the first time and 2) the convergence of the online learning algorithm with a lookup table and experience replay is proven, and the proof is not only useful for TZMGs but also instructive for single-agent Markov decision problems. Experiments on different examples validate the effectiveness of the proposed algorithm on TZMG problems.
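The tabular core of a minimax backup can be sketched as follows: the stage-game value at the successor state is obtained by solving a zero-sum matrix game as a linear program, and that value drives a Q-learning update. This is a plain tabular sketch under standard assumptions, not the proposed deep network algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M for the maximizing row player."""
    n_rows, n_cols = M.shape
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                         # maximize v
    A_ub = np.hstack([-M.T, np.ones((n_cols, 1))])       # v <= sum_i x_i * M[i, j]
    b_ub = np.zeros(n_cols)
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])   # x on the simplex
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, np.array([1.0]), bounds=bounds)
    return res.x[-1]

# One tabular minimax backup for joint action (a, o) observed in state s:
# Q[s, a, o] += alpha * (r + gamma * matrix_game_value(Q[s_next]) - Q[s, a, o])
print(matrix_game_value(np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])))  # ~0
```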

8.
IEEE Trans Neural Netw Learn Syst; 33(10): 5830-5844, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33861716

ABSTRACT

We study the decentralized event-driven control problem of nonlinear dynamical systems with mismatched interconnections and asymmetric input constraints. To begin with, by introducing a discounted cost function for each auxiliary subsystem, we transform the decentralized event-driven constrained control problem into a group of nonlinear H2-constrained optimal control problems. Then, we develop the event-driven Hamilton-Jacobi-Bellman equations (ED-HJBEs), which arise in the nonlinear H2-constrained optimal control problems. Meanwhile, we demonstrate that the solutions of the ED-HJBEs together keep the overall system stable in the sense of uniform ultimate boundedness (UUB). To solve the ED-HJBEs, we build a critic-only architecture under the framework of adaptive critic designs. The architecture employs only critic neural networks and updates their weight vectors via the gradient descent method. After that, based on the Lyapunov approach, we prove that the UUB stability of all signals in the closed-loop auxiliary subsystems is assured. Finally, simulations of an illustrative nonlinear interconnected plant are provided to validate the present designs.
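To make the critic-only, gradient-descent flavor concrete, here is a heavily simplified toy (scalar linear plant, quadratic cost, no event trigger, no interconnection, no input constraint) in which a single critic weight is tuned by descending the squared HJB residual; everything here is an assumed illustration, not the paper's decentralized design.

```python
import numpy as np

# Plant dx/dt = a*x + b*u, cost = integral of x^2 + u^2.
# Critic V(x) = w * x^2, control u(x) = -0.5 * b * dV/dx = -b * w * x.
a, b, lr = -1.0, 1.0, 1e-3
w = 1.0                                     # critic weight to be tuned
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.uniform(-2.0, 2.0)              # sampled operating point
    residual = x**2 * (1.0 + 2.0 * a * w - (b * w)**2)   # HJB residual e(x; w)
    d_res_dw = x**2 * (2.0 * a - 2.0 * b**2 * w)         # de/dw with u eliminated
    w -= lr * residual * d_res_dw           # gradient descent on 0.5 * e^2
print(w)   # approaches the Riccati value (a + np.sqrt(a**2 + b**2)) / b**2, about 0.414
```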

9.
IEEE Trans Cybern; 48(2): 500-509, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28092589

ABSTRACT

Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with the inequality constraints appearing in many control problems, and they can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to be SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L2-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.
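As a small concrete instance of the SOS-to-SDP reduction mentioned above, the check below verifies that p(x) = x^4 - 2x^2 + 1 is a sum of squares by finding a positive semidefinite Gram matrix. This standalone feasibility problem (written with cvxpy, assuming an SDP-capable solver such as SCS is installed) is only a toy, not the paper's policy-iteration loop.

```python
import cvxpy as cp
import numpy as np

# Find Q >= 0 with z^T Q z = x^4 - 2x^2 + 1 for the monomial basis z = [1, x, x^2].
Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 1.0,                   # constant term
    2 * Q[0, 1] == 0.0,               # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -2.0,    # coefficient of x^2
    2 * Q[1, 2] == 0.0,               # coefficient of x^3
    Q[2, 2] == 1.0,                   # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                    # "optimal" means the polynomial is SOS
print(np.round(Q.value, 3))           # one valid Gram matrix, e.g. for (x^2 - 1)^2
```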

10.
IEEE Trans Neural Netw Learn Syst; 28(3): 714-725, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27249839

ABSTRACT

H∞ control is a powerful method for solving the disturbance attenuation problems that occur in some control systems, and the design of such controllers relies on solving a zero-sum game (ZSG). In practical applications, however, the exact dynamics are mostly unknown, and identifying the dynamics produces errors that are detrimental to control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG using only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. The control policy, disturbance policy, and value function are approximated by neural networks (NNs) under a critic-actor-disturber structure, and the NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be reused, which is highly efficient. Simulation results demonstrate its feasibility for solving the unknown nonlinear ZSG, and compared with other algorithms it saves a significant amount of online measurement time.
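The least-squares weight solve named in the abstract can be sketched generically as a regularized batch regression over data gathered online; the function name, the regularization, and the synthetic data below are assumptions standing in for the critic-actor-disturber weight updates.

```python
import numpy as np

def solve_weights(Phi, targets, reg=1e-6):
    """Batch least-squares solve for NN output-layer weights: Phi stacks
    regressors measured online, targets the corresponding cost terms."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(n), Phi.T @ targets)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 6))                         # reusable online measurements
targets = Phi @ rng.normal(size=6) + 0.01 * rng.normal(size=200)
w_critic = solve_weights(Phi, targets)                  # same data can be reused in later sweeps
```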

11.
IEEE Trans Cybern; 46(3): 854-865, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26529794

ABSTRACT

In this paper, an approximate online equilibrium solution is developed for N-player nonzero-sum (NZS) game systems with completely unknown dynamics. First, a model identifier based on a three-layer neural network (NN) is established to reconstruct the unknown NZS game systems. The identifier weight vector is updated using the experience replay technique, which relaxes the traditional persistence of excitation condition to a simplified condition on recorded data. Then, a single-network adaptive dynamic programming (ADP) with experience replay algorithm is proposed for each player to solve the coupled nonlinear Hamilton-Jacobi (HJ) equations, where only the critic NN weight vectors need to be tuned for each player. The feedback Nash equilibrium is provided by the solution of the coupled HJ equations. Based on the experience replay technique, a novel critic NN weight-tuning law is proposed to guarantee the stability of the closed-loop system and the convergence of the value functions. Furthermore, a Lyapunov-based stability analysis shows that the uniform ultimate boundedness of the closed-loop system is achieved. Finally, two simulation examples are given to verify the effectiveness of the proposed control scheme.
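A generic sketch of the experience-replay idea used for the weight tuning: each update mixes the gradient from the current measurement with gradients re-evaluated on recorded data, so excitation only needs to hold over the stored samples. The least-squares stand-in for the tuning-law gradient and all names below are assumptions.

```python
import numpy as np

def ls_grad(w, sample):
    """Squared-error gradient for one sample (phi, y), standing in for the
    critic weight-tuning gradient."""
    phi, y = sample
    return phi * (phi @ w - y)

def replay_tuning_step(w, current, recorded, lr=0.1):
    """Combine the current-sample gradient with gradients on recorded data."""
    g = ls_grad(w, current)
    for sample in recorded:
        g = g + ls_grad(w, sample)
    return w - lr * g / (1 + len(recorded))

rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
recorded = [(phi, float(phi @ w_true)) for phi in rng.normal(size=(8, 4))]
w = np.zeros(4)
for t in range(2000):
    w = replay_tuning_step(w, recorded[t % 8], recorded)   # reuse stored data at every step
```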

12.
IEEE Trans Neural Netw Learn Syst; 26(2): 346-356, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25474812

ABSTRACT

In this paper, the first probably approximately correct (PAC) algorithm for continuous deterministic systems that does not rely on any knowledge of the system dynamics is proposed. It combines the state aggregation technique with the efficient exploration principle and makes high use of online observed samples. We use a grid to partition the continuous state space into cells in which samples are stored. A near-upper Q operator is defined to produce a near-upper Q function from the samples in each cell, and the corresponding greedy policy effectively balances exploration and exploitation. Through rigorous analysis, we prove a polynomial bound on the number of time steps in which the algorithm executes nonoptimal actions. After finitely many steps, the final policy is near optimal in the PAC framework. The implementation requires no knowledge of the system and has low computational complexity. Simulation studies confirm that it performs better than other similar PAC algorithms.
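A minimal sketch of the state-aggregation bookkeeping: the continuous state space is cut into grid cells and every (cell, action) value starts at an optimistic upper bound, so the greedy policy is driven to explore unvisited cells. The class, the bounds, and the grid sizes are illustrative assumptions, and the near-upper Q operator itself is omitted.

```python
import numpy as np

class GridQ:
    """Grid-aggregated action values with optimistic initialization."""
    def __init__(self, lows, highs, bins, n_actions, q_upper):
        self.lows, self.highs = np.asarray(lows), np.asarray(highs)
        self.bins = np.asarray(bins)
        self.q = np.full((*bins, n_actions), q_upper)   # near-upper initial values

    def cell(self, state):
        ratio = (np.asarray(state) - self.lows) / (self.highs - self.lows)
        return tuple(np.clip((ratio * self.bins).astype(int), 0, self.bins - 1))

    def greedy_action(self, state):
        return int(np.argmax(self.q[self.cell(state)]))

table = GridQ(lows=[-1.0, -1.0], highs=[1.0, 1.0], bins=[10, 10],
              n_actions=3, q_upper=10.0)
a = table.greedy_action([0.3, -0.7])    # unvisited cells keep their optimistic value
```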
