Results 1 - 20 of 23
1.
Neural Netw ; 176: 106337, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38688071

ABSTRACT

Motivated by complex and diverse practical applications, this paper explores a new neurodynamic approach (NA) for solving nonsmooth interval-valued optimization problems (IVOPs) constrained by an interval partial order and more general sets. On the one hand, to deal with the uncertainty of interval-valued information, an LU-optimality condition for IVOPs is established in a deterministic form. On the other hand, based on the penalty method and an adaptive controller, the interval partial order constraint and the set constraint are penalized through a single adaptive parameter, which ensures the feasibility of the states while keeping the solution-space dimension low and avoiding the estimation of exact penalty parameters. Through nonsmooth analysis and Lyapunov theory, the proposed adaptive penalty-based neurodynamic approach (APNA) is proven to converge to an LU-solution of the considered IVOPs. Finally, the feasibility of the proposed APNA is illustrated by numerical simulations and an investment decision-making problem.
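
The abstract does not spell out the APNA dynamics, so the following Python sketch only illustrates the adaptive-penalty mechanism it describes: an Euler-discretized flow whose single penalty parameter grows while the constraint is violated, so no exact penalty parameter has to be estimated in advance. The objective, constraint, and adaptation law are placeholders, not the paper's model.

```python
import numpy as np

# Toy problem: minimize f(x) = 0.5*||x - 1||^2  s.t.  g(x) = sum(x) - 1 <= 0.
# (Illustrative stand-ins; the paper's IVOPs involve interval-valued data.)
def f_grad(x):
    return x - 1.0

def g(x):
    return np.sum(x) - 1.0

def g_grad(x):
    return np.ones_like(x)

x = np.zeros(3)      # network state
sigma = 0.0          # single adaptive penalty parameter, no manual tuning
dt = 1e-2
for _ in range(20000):
    viol = max(g(x), 0.0)
    sigma += dt * viol                      # grows only while infeasible
    pen = sigma * g_grad(x) if viol > 0 else 0.0
    x -= dt * (f_grad(x) + pen)
print(x, g(x))       # x approaches (1/3, 1/3, 1/3), the constrained minimizer
```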


Subject(s)
Algorithms , Computer Simulation , Neural Networks, Computer , Nonlinear Dynamics , Humans , Decision Making/physiology
2.
Neural Netw ; 171: 145-158, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38091759

ABSTRACT

A nonconvex distributed optimization problem involving nonconvex objective functions and inequality constraints over an undirected multi-agent network is considered. Each agent communicates with its neighbors while accessing only its own local information (i.e., its constraint and objective function). To overcome the challenge posed by the nonconvexity of the objective function, a collective neurodynamic penalty approach in the framework of particle swarm optimization is proposed. The state solution of each neurodynamic penalty model converges to the set of critical points of the nonconvex distributed optimization problem. Furthermore, each neural network conducts an accurate local search within the constraints using its individual neurodynamic model. By exploiting both the locally and globally best-known solution information, and by incrementally improving solution quality over iterations, the globally optimal solution of the nonconvex distributed optimization problem can be found. Simulations and an application are presented to demonstrate the effectiveness and feasibility of the approach.
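
As a hedged illustration of embedding neurodynamic local search in a particle swarm framework, the sketch below alternates a short gradient flow per particle with a standard pull toward personal and global bests. The objective and all coefficients are invented for the example; the paper's setting is distributed, with local constraints per agent.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):        # nonconvex toy objective (Rastrigin)
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

def grad(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

n_particles, dim, dt = 12, 2, 1e-3
X = rng.uniform(-5, 5, (n_particles, dim))
pbest = X.copy()
gbest = min(X, key=f).copy()
for _ in range(200):                      # outer swarm rounds
    for i in range(n_particles):          # inner neurodynamic local search
        for _ in range(100):
            X[i] -= dt * grad(X[i])
        if f(X[i]) < f(pbest[i]):
            pbest[i] = X[i]
    gbest = min([gbest] + [p.copy() for p in pbest], key=f)
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    X += 0.5 * r1 * (pbest - X) + 0.5 * r2 * (gbest - X)   # swarm pull
print(gbest, f(gbest))                    # best critical point found
```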


Subject(s)
Algorithms , Neural Networks, Computer , Computer Simulation
3.
Neural Netw ; 171: 73-84, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38091766

ABSTRACT

This paper addresses a distributed time-varying optimization problem with inequality constraints based on multi-agent systems over switching communication graphs. To handle the time-varying inequality constraints, an exact penalty method and a smoothing technique are employed. A Hessian-based distributed control protocol is then presented that seeks the time-varying optimal solution of the distributed time-varying optimization problem using only local information and interaction. It is shown that all agents not only achieve finite-time consensus but also eventually track the time-varying global optimal target. Compared with existing distributed optimization protocols, the proposed control protocol applies to more general distributed time-varying optimization problems and converges efficiently. Finally, numerical examples and an experiment on moving-target tracking with an unmanned aerial vehicle (UAV) illustrate the effectiveness of the proposed control protocol.
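
The abstract does not reproduce the protocol, so the sketch below shows only the centralized prediction-correction idea behind Hessian-based tracking, on an unconstrained problem whose Hessian is the identity. The target trajectory a(t) and gain alpha are assumptions for the example.

```python
import numpy as np

def a(t):        # time-varying target: the optimizer of f(x,t) is x*(t) = a(t)
    return np.array([np.sin(t), np.cos(t)])

def a_dot(t):
    return np.array([np.cos(t), -np.sin(t)])

# For f(x,t) = 0.5*||x - a(t)||^2: grad = x - a(t), Hessian = I, and the
# prediction-correction flow is xdot = -H^{-1}(alpha*grad + d(grad)/dt).
alpha, dt = 5.0, 1e-3
x, t = np.zeros(2), 0.0
for _ in range(10000):
    xdot = -alpha * (x - a(t)) + a_dot(t)   # correction + prediction
    x, t = x + dt * xdot, t + dt
print(np.linalg.norm(x - a(t)))             # tracking error decays
```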


Subject(s)
Aircraft , Communication , Consensus , Time Pressure
4.
Article in English | MEDLINE | ID: mdl-37948148

ABSTRACT

This article proposes new theoretical results on the multiple Mittag-Leffler stability of almost periodic solutions (APOs) for fractional-order delayed neural networks (FDNNs) with nonlinear and nonmonotonic activation functions. Benefiting from the geometric structure of the activation function, the considered FDNNs have multiple APOs with local Mittag-Leffler stability under given algebraic inequality conditions. To solve these algebraic inequality conditions, especially in high-dimensional cases, a distributed optimization (DOP) model and a corresponding neurodynamic solving approach are employed. The conclusions in this article generalize the multiple stability of integer- or fractional-order NNs. Moreover, the DOP approach alleviates the excessive consumption of computational resources incurred when using the LMI toolbox to handle high-dimensional complex NNs. Finally, a simulation example is presented to confirm the accuracy of the theoretical conclusions, and an experimental example of associative memories is shown.

5.
Neural Netw ; 166: 595-608, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37586259

ABSTRACT

In this paper, N-cluster games with coupling and private constraints are studied, where each player's cost function is nonsmooth and depends on the actions of all players. To seek the generalized Nash equilibrium (GNE) of these nonsmooth N-cluster games, a distributed seeking neurodynamic approach with a two-time-scale structure is proposed. An adaptive leader-following consensus technique is adopted to dynamically adjust parameters according to the degree of consensus violation, so that each player quickly obtains accurate estimates of the other players' actions, which facilitates the evaluation of its own cost. Owing to the structure of the approach, based on primal-dual and adaptive penalty methods, the players' actions are driven into the constraint sets while the GNE is sought. As a result, the neurodynamic approach is completely distributed, and prior estimation of penalty parameters is avoided. Finally, two engineering examples, a power-system game and a company capacity-allocation problem, verify the effectiveness and feasibility of the neurodynamic approach.


Subject(s)
Algorithms , Consensus
6.
Article in English | MEDLINE | ID: mdl-37310826

ABSTRACT

In this article, an adaptive neurodynamic approach over multi-agent systems is designed to solve nonsmooth distributed resource allocation problems (DRAPs) with affine-coupled equality constraints, coupled inequality constraints, and private set constraints. That is, agents track the optimal allocation that minimizes the team cost under these more general constraints. Among the considered constraints, the multiple coupled constraints are handled by introducing auxiliary variables that drive the Lagrange multipliers to consensus. Furthermore, to address the private set constraints, an adaptive controller is proposed with the aid of the penalty method, thus avoiding the disclosure of global information. The convergence of this neurodynamic approach is analyzed via Lyapunov stability theory. In addition, to reduce the communication burden, the proposed neurodynamic approach is extended with an event-triggered mechanism; the convergence property is again established, and the Zeno phenomenon is excluded. Finally, a numerical example and a simplified problem on a virtual 5G system demonstrate the effectiveness of the proposed neurodynamic approaches.
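
A minimal sketch of the event-triggered idea on a plain consensus dynamic: each agent rebroadcasts its state only when it drifts from its last broadcast value by more than an exponentially decaying threshold, a common device for ruling out Zeno behavior. The graph, threshold, and dynamics are illustrative assumptions, not the paper's model.

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)       # 4-node ring
L = np.diag(A.sum(1)) - A                 # graph Laplacian
x = np.array([4.0, -1.0, 2.0, -3.0])
xhat = x.copy()                           # last broadcast values
dt, c0, lam, events = 1e-2, 0.5, 0.5, 0
for k in range(2000):
    x -= dt * (L @ xhat)                  # agents use broadcast info only
    thresh = c0 * np.exp(-lam * k * dt)   # decaying trigger threshold
    trig = np.abs(x - xhat) > thresh      # per-agent trigger condition
    xhat[trig] = x[trig]                  # broadcast only on trigger
    events += int(trig.sum())
print(x, events)                          # near-consensus with few broadcasts
```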

7.
Neural Netw ; 146: 161-173, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34864224

ABSTRACT

Based on the theory of inertial systems, a second-order accelerated neurodynamic approach is designed to solve a distributed convex optimization problem with inequality and set constraints. Most existing approaches for distributed convex optimization problems are first-order, and it is usually hard to analyze the convergence rate of their state solutions. Owing to the control design for acceleration, second-order neurodynamic approaches can often achieve a faster convergence rate. Moreover, existing second-order approaches are mostly designed for unconstrained distributed convex optimization problems and are not suitable for constrained ones. It is proved that the state solution of the designed neurodynamic approach converges to the optimal solution of the considered distributed convex optimization problem, and that an error function measuring the performance of the approach converges superquadratically. Several numerical examples show the effectiveness of the presented second-order accelerated neurodynamic approach.
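
A minimal sketch of the inertial idea on an unconstrained quadratic: the heavy-ball flow x'' + a*x' + grad f(x) = 0, Euler-discretized. The damping coefficient and objective are assumptions; the paper's approach additionally handles inequality and set constraints in a distributed fashion.

```python
import numpy as np

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])     # f(x) = 0.5 * x^T Q x, minimizer x* = 0

a, dt = 2.0, 1e-2              # damping and step size
x, v = np.array([5.0, -4.0]), np.zeros(2)
for _ in range(5000):
    v += dt * (-a * v - Q @ x)     # x'' = -a*x' - grad f(x)
    x += dt * v
print(x)                           # converges to x* = 0
```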


Subject(s)
Neural Networks, Computer , Computer Simulation
8.
Neural Netw ; 147: 1-9, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34953297

ABSTRACT

As two important types of generalized convex functions, pseudoconvex and quasiconvex functions appear in many practical optimization problems. The lack of convexity poses some difficulties in solving pseudoconvex optimization with quasiconvex constraint functions. In this paper, we propose a one-layer recurrent neural network for solving such problems. We prove that the state of the proposed neural network is convergent from the feasible region to an optimal solution of the given optimization problem. We show that the proposed neural network has several advantages over the existing neural networks for pseudoconvex optimization. Specifically, the proposed neural network is applicable to optimization problems with quasiconvex inequality constraints as well as affine equality constraints. In addition, parameter matrix inversion is avoided and some assumptions on the objective function and inequality constraints in existing results are relaxed. We demonstrate the superior performance and characteristics of the proposed neural network with simulation results in three numerical examples.


Subject(s)
Algorithms , Neural Networks, Computer , Computer Simulation
9.
Neural Netw ; 143: 52-65, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34087529

ABSTRACT

The distributed optimization problem (DOP) over multi-agent systems, which can be described as minimizing the sum of the agents' local objective functions, has recently attracted widespread attention owing to its applications in diverse domains. In this paper, inspired by the penalty method and the subgradient descent method, a continuous-time neurodynamic approach is proposed for solving a DOP with inequality and set constraints. The state of the continuous-time neurodynamic approach exists globally and converges to an optimal solution of the considered DOP. Comparisons reveal that the proposed neurodynamic approach can not only solve more general convex DOPs, but also has a lower solution-space dimension. Additionally, a discretization of the neurodynamic approach is introduced for convenience of practical implementation; the iteration sequence of the discrete-time method also converges to an optimal solution of the DOP from any initial point. The effectiveness of the neurodynamic approach is verified by simulation examples and an application to an L1-norm minimization problem.
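
As a centralized toy instance of the penalty-plus-subgradient idea and the L1-norm application mentioned above, the sketch below iterates a diminishing-step subgradient method on the exact penalty ||x||_1 + rho*||Ax - b||_1. The data, penalty weight, and step rule are assumptions; the paper's method is distributed over multiple agents.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))
b = A @ np.array([0.0, 2.0, 0.0, 0.0, -1.0, 0.0])   # sparse ground truth

x, rho = np.zeros(6), 20.0
for k in range(1, 200001):
    step = 1.0 / (rho * np.sqrt(k))                 # diminishing step size
    sub = np.sign(x) + rho * A.T @ np.sign(A @ x - b)   # penalty subgradient
    x -= step * sub
print(np.round(x, 3), np.linalg.norm(A @ x - b))    # near-feasible, sparse-leaning
```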


Subject(s)
Neural Networks, Computer , Computer Simulation
10.
Chaos ; 30(3): 033110, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32237793

ABSTRACT

This paper investigates the exponential bipartite synchronization of a general class of delayed signed networks with multi-links by using an aperiodically intermittent control strategy. The main result is a set of sufficient conditions for bipartite synchronization that depend on the network's topology, control gain, and the maximum proportion of rest time. An application to Chua's circuits is then considered, and some numerical simulation results are presented.
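
A toy illustration of aperiodically intermittent control for bipartite synchronization: linear agents coupled through a signed Laplacian, with the coupling active only for an aperiodic fraction of each unit time window. The signed graph and schedule are invented for the example; the paper treats delayed multi-link networks and applies the result to Chua's circuits.

```python
import numpy as np

W = np.array([[ 0,  1, -1],
              [ 1,  0, -1],
              [-1, -1,  0]], float)      # structurally balanced signed graph
Ls = np.diag(np.abs(W).sum(1)) - W       # signed Laplacian
x = np.array([3.0, -2.0, 5.0])
dt, t = 1e-3, 0.0
widths = [0.9, 0.6, 0.8, 0.7, 0.9, 0.6, 0.8, 0.7, 0.9, 0.6]  # aperiodic on-times
for _ in range(10000):
    k = int(t)
    if (t - k) < widths[k % len(widths)]:   # control on; rest otherwise
        x -= dt * (Ls @ x)
    t += dt
print(x)   # first two states agree; the third converges to the opposite value
```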

11.
Neural Netw ; 124: 180-192, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32007718

ABSTRACT

This paper presents a new neurodynamic approach for solving constrained pseudoconvex optimization problems under more general assumptions. The proposed neural network is equipped with a hard comparator function and a piecewise linear function, which make the state solution not only stay in the feasible region but also converge to an optimal solution of the constrained pseudoconvex optimization problem. Compared with related existing results, the neurodynamic approach here enjoys global convergence and a lower solution-space dimension. Moreover, the approach does not depend on additional assumptions such as boundedness of the feasible region, lower boundedness of the objective function over the feasible region, or coercivity of the objective function. Finally, numerical illustrations and simulation results on a support vector regression problem show the good performance and viability of the proposed neurodynamic approach.


Subject(s)
Neural Networks, Computer , Computer Simulation
12.
IEEE Trans Neural Netw Learn Syst ; 31(6): 1914-1926, 2020 06.
Article in English | MEDLINE | ID: mdl-31395559

ABSTRACT

This paper presents a multistability analysis of almost periodic state solutions for memristive Cohen-Grossberg neural networks (MCGNNs) with both distributed and discrete delays. The activation function of the considered MCGNNs is generalized to be nonmonotonic and non-piecewise-linear. It is shown that an n-neuron MCGNN has (K+1)^n locally exponentially stable almost periodic solutions, where the natural number K depends on the geometric structure of the considered activation function. Compared with previous related works, the number of almost periodic state solutions is substantially increased. The obtained conclusions also apply to the multistability of equilibrium points and periodic solutions of MCGNNs. Moreover, enlarged attraction basins of the attractors are estimated based on the original partition. Comparisons and convincing numerical examples substantiate the superiority and efficiency of the obtained results.

13.
Neural Netw ; 119: 46-56, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31376637

ABSTRACT

In this paper, a generalized neural network with a novel auxiliary function is proposed to solve a distributed nondifferentiable optimization problem over a multi-agent network. The constructed auxiliary function ensures that the state solution of the proposed neural network is bounded and enters the inequality constraint set in finite time. Furthermore, the proposed neural network is shown to reach consensus and ultimately converge to the optimal solution under several mild assumptions. Compared with existing methods, the proposed neural network has a simple structure with a small number of state variables, and it does not rely on the projection operator method for constrained distributed optimization. Finally, two numerical simulations and an application in a power system illustrate the characteristics and practicability of the presented neural network.


Subject(s)
Algorithms , Computer Simulation , Neural Networks, Computer
14.
Neural Netw ; 109: 147-158, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30419480

ABSTRACT

This paper presents a neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints. The proposed neural network is endowed with a time-varying auxiliary function, which guarantees that the state of the neural network enters the feasible region in finite time and remains there thereafter. Moreover, from any initial point, the state is shown to converge to the critical point set when the objective function is generally nonconvex; when the objective function is pseudoconvex (or convex), the state is proved to converge globally to an optimal solution of the considered optimization problem. Compared with other neural networks for related optimization problems, the proposed neural network has good convergence and does not depend on additional assumptions such as boundedness of the inequality-feasible region, a sufficiently large penalty parameter, or lower boundedness of the objective function over the equality-feasible region. Finally, some numerical examples and an application in real-time data reconciliation demonstrate the good performance of the proposed neural network.


Subject(s)
Computer Simulation , Neural Networks, Computer , Nonlinear Dynamics , Algorithms , Socioeconomic Factors
15.
IEEE Trans Cybern ; 49(11): 3946-3956, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30059329

ABSTRACT

Complex-variable pseudoconvex optimization arises widely in scientific and engineering optimization problems. A neurodynamic approach is proposed in this paper for complex-variable pseudoconvex optimization problems subject to bound and linear equality constraints. An efficient penalty function is introduced to guarantee the boundedness of the state of the presented neural network and to make the state enter the feasible region of the considered optimization problem in finite time and stay there thereafter. The state is also shown to converge to an optimal point of the considered optimization problem. Compared with other neurodynamic approaches, the presented neural network does not need any penalty parameters and has lower model complexity. Furthermore, some additional assumptions required by existing related neural networks, such as the assumption that the objective function is lower bounded over the equality constraint set, are removed. Finally, some numerical examples and an application in beamforming formulation are provided.
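
As a hedged illustration of complex-variable gradient dynamics via Wirtinger calculus (the standard machinery behind such models), the sketch below runs the flow z' = -dF/d(conj z) for a complex least-squares objective. The unconstrained objective is an assumption; the paper's network additionally enforces bound and linear equality constraints through its penalty function.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# F(z) = ||Az - b||^2 is real-valued; its Wirtinger gradient w.r.t. conj(z)
# is A^H (Az - b), so the flow zdot = -A^H (Az - b) decreases F.
z, dt = np.zeros(3, complex), 1e-2
for _ in range(5000):
    z -= dt * (A.conj().T @ (A @ z - b))
print(np.linalg.norm(A @ z - b))      # settles at the least-squares residual
```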

16.
Neural Netw ; 101: 1-14, 2018 May.
Article in English | MEDLINE | ID: mdl-29471133

ABSTRACT

In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function that does not depend on any information about the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness, and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to converge to the feasible region in finite time and subsequently to the optimal solution set of the related optimization problem. In particular, the convergence of the state to an exact optimal solution is also considered. Numerical examples with simulation results show the efficiency and good characteristics of the proposed network. In addition, a preliminary theoretical analysis and an application of the proposed network to a wider class of dynamic portfolio optimization problems are included.


Subject(s)
Neural Networks, Computer , Computer Simulation
17.
IEEE Trans Neural Netw Learn Syst ; 29(3): 534-544, 2018 03.
Article in English | MEDLINE | ID: mdl-28026786

ABSTRACT

In this paper, based on calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization problems. It is proved that, for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution of the constrained complex-variable convex optimization problem. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has lower model complexity and better convergence. Some numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.

18.
IEEE Trans Neural Netw Learn Syst ; 28(11): 2580-2591, 2017 11.
Article in English | MEDLINE | ID: mdl-28113639

ABSTRACT

This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program with complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one among the optimal solutions of multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network converges to an equilibrium point of the neural network, which is exactly the optimal solution of the convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed to deliver the exact optimal solution of any convex BQP problem. Moreover, it is proved that, for bilevel linear programming, the proposed neural network converges to an equilibrium point in finite time. Finally, three numerical examples substantiate the efficacy of the proposed approach.
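
The reduction described above can be traced on a tiny hand-built bilevel QP (not from the paper). Replacing the lower level min_y 0.5*y^2 - x*y s.t. y >= 0 by its KKT system y - x - lam = 0, 0 <= lam, y >= 0, lam*y = 0 gives an MPCC; each complementarity pattern then yields one convex subproblem, and the global solution is the minimal branch.

```python
# Upper level: minimize F(x, y) = (x - 3)^2 + (y - 2)^2 over the MPCC.
def F(x, y):
    return (x - 3.0) ** 2 + (y - 2.0) ** 2

branches = []
# Branch 1: lam = 0, y >= 0  =>  y = x with x >= 0.
x1 = max(2.5, 0.0)          # argmin of (x-3)^2 + (x-2)^2 subject to x >= 0
branches.append((F(x1, x1), x1, x1))
# Branch 2: y = 0, lam >= 0  =>  lam = -x >= 0, i.e. x <= 0.
x2 = min(3.0, 0.0)          # argmin of (x-3)^2 + (0-2)^2 subject to x <= 0
branches.append((F(x2, 0.0), x2, 0.0))

best = min(branches)        # minimal optimum across the convex subproblems
print(best)                 # -> (0.5, 2.5, 2.5): global solution x* = y* = 2.5
```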

19.
IEEE Trans Cybern ; 47(10): 3063-3074, 2017 Oct.
Article in English | MEDLINE | ID: mdl-27244757

ABSTRACT

Pseudoconvex optimization, an important class of nonconvex optimization, plays an important role in scientific and engineering applications. In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems with equality and inequality constraints. It is proved that, from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter, and that the state converges to an optimal solution of the related problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not need penalty parameters and has better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, with detailed comparisons against known related conclusions. In the end, some numerical examples illustrate the performance of the proposed neural network.


Subject(s)
Neural Networks, Computer , Algorithms , Nonlinear Dynamics
20.
Neural Netw ; 84: 113-124, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27718390

ABSTRACT

This paper presents a neurodynamic approach with a recurrent neural network for solving convex optimization problems with general constraints. It is proved that, for any initial point, the state of the proposed neural network reaches the constraint set in finite time and finally converges to an optimal solution of the convex optimization problem. In contrast to existing related neural networks, the convergence rate of the state of the proposed neural network can be calculated quantitatively via the Lojasiewicz exponent under some mild assumptions. As applications, we explicitly estimate some Lojasiewicz exponents to show the convergence rate of the state of the proposed neural network for solving convex quadratic optimization problems. Some numerical examples demonstrate the effectiveness of the proposed neural network.
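
A standard worked instance of the rate calculation (not specific to this paper): for a strongly convex quadratic the Lojasiewicz exponent is 1/2, which turns the gradient flow's energy decay into an explicit exponential rate.

```latex
% For f(x) = \tfrac12 x^\top Q x with Q \succ 0 and f^* = 0:
\|\nabla f(x)\|^2 = x^\top Q^2 x \;\ge\; \lambda_{\min}(Q)\, x^\top Q x
                  = 2\lambda_{\min}(Q)\, f(x),
% i.e. the Lojasiewicz inequality |f(x) - f^*|^{1/2} \le C \|\nabla f(x)\|
% holds with exponent 1/2 and C = 1/\sqrt{2\lambda_{\min}(Q)}. Along the
% flow \dot{x} = -\nabla f(x) this gives
\frac{d}{dt} f(x(t)) = -\|\nabla f(x(t))\|^2 \le -2\lambda_{\min}(Q)\, f(x(t))
\;\Rightarrow\; f(x(t)) \le f(x(0))\, e^{-2\lambda_{\min}(Q)\, t}.
```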


Subject(s)
Neural Networks, Computer , Problem Solving , Algorithms , Computer Simulation