Results 1 - 4 of 4
1.
Front Comput Neurosci ; 18: 1393025, 2024.
Article in English | MEDLINE | ID: mdl-38741707

ABSTRACT

In recent years, with the rapid development of network applications and the increasing demand for high-quality network service, quality-of-service (QoS) routing has emerged as a critical network technology. Machine learning techniques, particularly reinforcement learning and graph neural networks, have garnered significant attention for this problem. However, existing reinforcement learning methods do not study the causal impact of agent actions on the interactive environment, and graph neural networks fail to effectively represent link features, which are pivotal for routing optimization. Therefore, this study quantifies the causal influence between the intelligent agent and the interactive environment using causal inference techniques, guiding the agent to explore the action space more efficiently. Simultaneously, a graph neural network is employed to embed node and link features, and a reward function is designed that comprehensively considers network performance metrics and causality relevance. A centralized reinforcement learning method is proposed to effectively achieve QoS-aware routing in Software-Defined Networking (SDN). Finally, experiments are conducted in a network simulation environment, and metrics such as packet loss, delay, and throughput all outperform the baseline.
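The reward design described above could be sketched as a weighted combination of normalized network metrics plus a causality-relevance term. This is a minimal illustration only; the weights, the normalization, and the function name are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a QoS-aware reward combining network performance
# metrics (delay, packet loss, throughput) with a causality-relevance term,
# as the abstract describes. All weights and the [0, 1] normalization of
# inputs are illustrative assumptions.

def qos_reward(delay, packet_loss, throughput, causal_influence,
               w_delay=0.3, w_loss=0.3, w_tput=0.3, w_causal=0.1):
    """Scalar reward: lower delay/loss and higher throughput and causal
    influence are better. All inputs are assumed normalized to [0, 1]."""
    perf = (w_delay * (1.0 - delay)          # penalize delay
            + w_loss * (1.0 - packet_loss)   # penalize packet loss
            + w_tput * throughput)           # reward throughput
    return perf + w_causal * causal_influence
```

A reward shaped this way lets the causality term steer exploration without dominating the raw QoS objectives, since its weight is kept small relative to the performance terms.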

2.
IEEE Trans Cybern ; 54(3): 1639-1649, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37018707

ABSTRACT

This article is concerned with the convergence property and error-bound analysis of value iteration (VI) adaptive dynamic programming for continuous-time (CT) nonlinear systems. The size relationship between the total value function and the single integral step cost is described by a contraction assumption. Under this assumption, the convergence of VI is proved for an arbitrary positive semidefinite initial condition. Moreover, the accumulated effects of the approximation errors generated in each iteration are taken into consideration when approximators are used to implement the algorithm. Based on the contraction assumption, an error-bound condition is proposed that ensures the approximated iterative results converge to a neighborhood of the optimum, and the relation between the optimal solution and the approximated iterative results is also derived. To make the contraction assumption more concrete, an estimation method is proposed to derive a conservative value for it. Finally, three simulation cases are given to validate the theoretical results.
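The core convergence claim, that VI iterates converge from any positive semidefinite initial value function when the Bellman backup is a contraction, can be illustrated on a toy tabular problem. This stand-in is discrete-time and finite-state, unlike the paper's continuous-time setting; the MDP data and names are made up.

```python
import numpy as np

# Minimal tabular value-iteration sketch: with discount gamma < 1 the
# Bellman operator is a contraction, so iterates converge from any
# nonnegative starting value function. A toy stand-in for the paper's
# continuous-time formulation, not its actual algorithm.

def value_iteration(P, c, gamma=0.9, iters=200):
    """P: (A, S, S) transition probabilities, c: (A, S) stage costs.
    Returns the iterated cost-to-go vector V of length S."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)            # arbitrary PSD initialization
    for _ in range(iters):
        Q = c + gamma * P @ V         # (A, S): Bellman backup per action
        V = Q.min(axis=0)             # greedy (cost-minimizing) update
    return V
```

For a single absorbing action with unit cost and gamma = 0.5, the fixed point is 1 / (1 - 0.5) = 2 in every state, which the iteration reaches regardless of the starting V.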

3.
IEEE Trans Neural Netw Learn Syst ; 29(6): 2112-2126, 2018 06.
Article in English | MEDLINE | ID: mdl-29771665

ABSTRACT

Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of the sliding-mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. An ADP algorithm based on a single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee that the sliding-mode dynamics are stable in the sense of uniform ultimate boundedness. Simulation results are presented to verify the feasibility of the proposed control scheme.
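A single-critic scheme approximates the value function as a weighted sum of basis features, V(x) ≈ wᵀφ(x), and tunes the weights against a Bellman residual. The sketch below uses a semi-gradient temporal-difference step on a scalar toy system; the quadratic basis, learning rate, and function names are assumptions, not the paper's Lyapunov-based tuning law.

```python
import numpy as np

# Hedged sketch of a single-critic approximation V(x) = w . phi(x), with
# weights adjusted by a semi-gradient temporal-difference step on the
# residual  delta = cost + gamma * V(x_next) - V(x).
# Basis, step size, and discounting are illustrative assumptions.

def phi(x):
    # Quadratic polynomial basis for a scalar state (an assumption).
    return np.array([x**2, x**4])

def critic_update(w, x, x_next, cost, lr=0.05, gamma=0.95):
    """One critic step: move V(x) toward the target cost + gamma*V(x_next)."""
    delta = cost + gamma * phi(x_next) @ w - phi(x) @ w
    return w + lr * delta * phi(x)    # semi-gradient descent on 0.5*delta**2
```

Repeating the update on a fixed transition drives the residual toward zero, which mirrors (in a crude discrete form) the NN weight-error convergence the abstract establishes via Lyapunov analysis.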

4.
IEEE Trans Cybern ; 47(10): 3331-3340, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28113535

ABSTRACT

In this paper, we investigate nonzero-sum games for a class of discrete-time (DT) nonlinear systems using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of the proposed PI scheme is to use the iterative ADP algorithm to obtain iterative control policies that not only ensure system stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning techniques to formulate and handle multiplayer DT nonzero-sum games. First, we design three actor-critic algorithms, one offline and two online, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms, and the corresponding stability analysis is provided via Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of the proposed approach.
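The per-player improvement step in such a PI scheme can be caricatured as alternating best responses: each player minimizes its own cost with the other's policy held fixed. The toy below is a static two-player matrix game with made-up cost matrices, far simpler than the paper's dynamic multiplayer setting with actor-critic neural networks.

```python
import numpy as np

# Toy illustration of the policy-iteration idea for a two-player
# nonzero-sum game: players alternately best-respond to each other's
# current action until neither changes. Cost matrices are invented;
# the paper handles dynamic games, not static matrix games.

def best_response_iteration(C1, C2, iters=20):
    """C1[i, j], C2[i, j]: costs to players 1 and 2 when they play (i, j).
    Returns the action pair reached after alternating improvements."""
    a1, a2 = 0, 0
    for _ in range(iters):
        a1 = int(np.argmin(C1[:, a2]))   # player 1 improves vs fixed a2
        a2 = int(np.argmin(C2[a1, :]))   # player 2 improves vs fixed a1
    return a1, a2
```

When the game has a pure Nash equilibrium reachable by best responses, the loop settles there: at the fixed point, no player can lower its own cost unilaterally, which is the equilibrium property the iterative policies in the abstract are designed to attain.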
