Results 1 - 9 of 9
1.
Neural Netw; 179: 106548, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39128274

ABSTRACT

This paper proposes a novel fractional-order memristive Hopfield neural network (HNN) to address the traveling salesman problem (TSP). The fractional-order memristive HNN can efficiently converge to a globally optimal solution, whereas a conventional HNN tends to become stuck at a local minimum when solving the TSP. Incorporating fractional-order calculus and memristors gives the system long-term memory and complex chaotic characteristics, resulting in faster convergence and shorter average tour distances on the TSP. Moreover, a novel chaotic optimization algorithm based on the fractional-order memristive HNN is designed to handle the mutual constraint between convergence accuracy and convergence speed; it circumvents random search and reduces the rate of invalid solutions. Numerical simulations demonstrate the effectiveness and merits of the proposed algorithm. Furthermore, field-programmable gate array (FPGA) technology is used to implement the proposed neural network.


Subjects
Algorithms, Computer Simulation, Neural Networks (Computer), Nonlinear Dynamics
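
The long-term memory that fractional-order calculus brings to such a network can be illustrated with a Grünwald-Letnikov (GL) discretization, in which every state update depends on the whole trajectory rather than only on the previous state. The Python sketch below uses an illustrative toy system, not the paper's memristive model or its chaotic optimization algorithm.

import numpy as np

def gl_weights(alpha, n):
    # w_j = (-1)^j * binom(alpha, j), via the standard recursion
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_flow(f, x0, alpha=0.9, h=0.01, steps=1000):
    # explicit Grunwald-Letnikov scheme for D^alpha x = f(x):
    #   x_n = h^alpha * f(x_{n-1}) - sum_{j=1}^{n} w_j * x_{n-j}
    # (for alpha = 1 this reduces to the ordinary Euler method)
    x = [np.asarray(x0, dtype=float)]
    w = gl_weights(alpha, steps)
    for n in range(1, steps + 1):
        history = sum(w[j] * x[n - j] for j in range(1, n + 1))
        x.append(h ** alpha * f(x[-1]) - history)
    return np.array(x)

# toy Hopfield-like dynamics  D^alpha x = -x + tanh(W x)
W = np.array([[0.0, 0.5], [0.5, 0.0]])
traj = fractional_flow(lambda x: -x + np.tanh(W @ x), x0=[1.0, -0.5])
print(traj[-1])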
2.
Article in English | MEDLINE | ID: mdl-39141464

ABSTRACT

This article presents a novel proximal gradient neurodynamic network (PGNN) for solving composite optimization problems (COPs). The time-varying coefficients of the proposed PGNN can be chosen flexibly to accelerate network convergence. Built on the PGNN and a sliding-mode control technique, the proposed time-varying fixed-time proximal gradient neurodynamic network (TVFxPGNN) achieves fixed-time stability with a settling time that is independent of the initial value. It is further shown that fixed-time convergence can still be obtained when the strict convexity condition is relaxed to the Polyak-Lojasiewicz condition. In addition, the proposed TVFxPGNN is applied to sparse optimization problems with the log-sum function. Furthermore, a field-programmable gate array (FPGA) circuit framework for the time-varying fixed-time PGNN is implemented, and the practicality of the proposed FPGA circuit is verified through an example simulation in Vivado 2019.1. Simulation and signal-recovery experiments demonstrate the effectiveness and superiority of the proposed PGNN.
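
A minimal sketch of the underlying proximal gradient neurodynamic flow, assuming a quadratic smooth term, an l1 regularizer in place of the paper's log-sum function, an illustrative time-varying gain rho(t), and simple Euler integration; the sliding-mode term and the fixed-time analysis of the TVFxPGNN are not reproduced here.

import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pgnn_flow(A, b, mu, h=0.005, steps=4000, rho=lambda t: 1.0 + t):
    lam = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L for f(x) = 0.5*||Ax - b||^2
    x = np.zeros(A.shape[1])
    for k in range(steps):
        t = k * h
        grad = A.T @ (A @ x - b)               # gradient of the smooth term
        target = soft_threshold(x - lam * grad, lam * mu)
        x = x + h * rho(t) * (target - x)      # Euler step of the scaled flow
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:5] = 1.0
b = A @ x_true
print(np.round(pgnn_flow(A, b, mu=0.1), 2)[:10])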

3.
Neural Netw; 174: 106247, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38518707

ABSTRACT

In this paper, we propose a novel neurodynamic approach with predefined-time stability for solving mixed variational inequality problems. Our approach introduces an adjustable time parameter, which enhances flexibility and applicability compared with conventional fixed-time stability methods. Under certain conditions, the proposed approach converges to a unique solution within a predefined time, which sets it apart from fixed-time and finite-time stability approaches. Furthermore, our approach can be extended to a wide range of mathematical optimization problems, including variational inequalities, nonlinear complementarity problems, sparse signal recovery problems, and Nash equilibrium seeking problems in noncooperative games. We provide numerical simulations to validate the theoretical derivations and showcase the effectiveness and feasibility of the proposed method.


Subjects
Algorithms, Neural Networks (Computer)
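
For context, a commonly used sufficient Lyapunov condition from the predefined-time stability literature (the abstract does not specify the paper's exact construction, so this is illustrative only): if there exist a radially unbounded Lyapunov function V, a constant T_c > 0, and an exponent 0 < p <= 1 such that

\dot V(x(t)) \le -\frac{1}{p\,T_c}\, e^{V(x(t))^{p}}\, V(x(t))^{\,1-p},

then separation of variables gives a settling time T(x_0) \le T_c\,\bigl(1 - e^{-V(x_0)^{p}}\bigr) < T_c for every initial condition, so T_c is a user-prescribed bound on the settling time; this is the kind of adjustable time parameter the abstract refers to.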
4.
Article in English | MEDLINE | ID: mdl-37956013

ABSTRACT

This article investigates a class of systems of nonlinear equations (SNEs). Three distributed neurodynamic models (DNMs), namely a two-layer model (DNM-I) and two single-layer models (DNM-II and DNM-III), are proposed to search for an exact solution of such a system or a solution in the least-squares sense. Combining a dynamic positive-definite matrix with the primal-dual method, DNM-I is designed and proved to be globally convergent. To obtain a more concise model, DNM-II is developed from the dynamic positive-definite matrix together with a time-varying gain and an activation function, and it also enjoys global convergence. To inherit DNM-II's concise structure and improved convergence, DNM-III is proposed with the aid of a time-varying gain and an activation function, and this model possesses global fixed-time consensus and convergence. For the smooth case, DNM-III's global exponential convergence is demonstrated under the Polyak-Lojasiewicz (PL) condition. For the nonsmooth case, its global finite-time convergence is proved under the Kurdyka-Lojasiewicz (KL) condition. Finally, the proposed DNMs are applied to quadratic programming (QP), and numerical examples illustrate the effectiveness and advantages of the proposed models.
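
A centralized, single-agent sketch of the least-squares idea only; the distributed structure, consensus terms, time-varying gains, and activation functions of DNM-I/II/III are not reproduced. The flow below descends 0.5*||F(x)||^2 to find an exact or least-squares solution of F(x) = 0.

import numpy as np

def least_squares_flow(F, J, x0, h=0.01, steps=2000):
    # Euler discretization of the gradient flow  x' = -J(x)^T F(x),
    # which descends the residual  0.5 * ||F(x)||^2
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * J(x).T @ F(x)
    return x

# toy SNE:  x0^2 + x1 - 3 = 0,  x0 + x1^2 - 5 = 0  (one root is (1, 2))
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(least_squares_flow(F, J, x0=[1.0, 1.0]))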

5.
Article in English | MEDLINE | ID: mdl-37819816

ABSTRACT

This article proposes two novel projection neural networks (PNNs) with fixed-time (FxT) convergence for solving variational inequality problems (VIPs). The remarkable features of the proposed PNNs are their FxT convergence and more accurate upper bounds on the settling time for arbitrary initial conditions. The robustness of the proposed PNNs under bounded noise is further studied. In addition, the proposed PNNs are applied to absolute value equations (AVEs), noncooperative games, and sparse signal reconstruction problems (SSRPs). The upper bounds of the settling time for the proposed PNNs are tighter than those of existing neural networks. The effectiveness and advantages of the proposed PNNs are confirmed by numerical examples.
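
The settling-time bounds referred to above are typically of the Polyakov type. A standard fixed-time Lyapunov condition (illustrative; not necessarily the exact inequality used in the article): if along the trajectories

\dot V \le -\alpha V^{p} - \beta V^{q}, \qquad \alpha,\beta > 0,\; 0 < p < 1 < q,

then the settling time satisfies, independently of the initial condition,

T \le \frac{1}{\alpha(1-p)} + \frac{1}{\beta(q-1)},

and sharpening bounds of this kind is the sense in which the proposed PNNs claim more accurate upper bounds.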

6.
Neural Netw; 165: 971-981, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37454612

ABSTRACT

This paper proposes three novel accelerated inverse-free neurodynamic approaches for solving absolute value equations (AVEs). The first two are finite-time converging approaches, and the third is a fixed-time converging approach. It is shown that the first two neurodynamic approaches converge to the solution of the concerned AVEs in finite time, while, under some mild conditions, the third converges to the solution in fixed time. It is also shown that the settling time of the proposed fixed-time converging approach has a uniform upper bound for all initial conditions, whereas the settling times of the proposed finite-time converging approaches depend on the initial conditions. All of the proposed neurodynamic approaches have the advantage of being robust against bounded vanishing perturbations. The theoretical results are validated by a numerical example and an application to boundary value problems.


Subjects
Neural Networks (Computer)
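
A naive, inverse-free baseline for the AVE  A x - |x| = b  (an assumption-laden illustration only, not the accelerated finite-/fixed-time neurodynamic approaches of the paper): a gradient flow on the squared residual, which needs no matrix inversion.

import numpy as np

def ave_flow(A, b, x0, h=0.01, steps=5000):
    # gradient flow on 0.5*||A x - |x| - b||^2; no matrix inversion required
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        r = A @ x - np.abs(x) - b
        x = x - h * (A - np.diag(np.sign(x))).T @ r
    return x

# when all singular values of A exceed 1, the AVE has a unique solution
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
print(ave_flow(A, b, x0=np.zeros(2)))   # approaches [1, -2]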
7.
IEEE Trans Neural Netw Learn Syst; 34(10): 7500-7514, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35143401

ABSTRACT

This article proposes a novel fixed-time converging proximal neurodynamic network (FXPNN) built on a proximal operator to deal with equilibrium problems (EPs). A distinctive feature of the proposed FXPNN is its better transient performance compared with most existing proximal neurodynamic networks. It is shown that the FXPNN converges to the solution of the corresponding EP in fixed time under some mild conditions. It is also shown that the settling time of the FXPNN is independent of the initial conditions and that the fixed-time interval can be prescribed, unlike existing results with asymptotic or exponential convergence. Moreover, the proposed FXPNN is applied to solve composite optimization problems (COPs), l1-regularized least-squares problems, mixed variational inequalities (MVIs), and variational inequalities (VIs). It is further shown, for the case of COPs, that fixed-time convergence can be established via the Polyak-Lojasiewicz condition, which is a relaxation of the more demanding convexity condition. Finally, numerical examples are presented to validate the effectiveness and advantages of the proposed neurodynamic network.
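
The Polyak-Lojasiewicz (PL) condition invoked above: a differentiable function f with infimum f^* satisfies PL with modulus \mu > 0 if

\frac{1}{2}\,\bigl\|\nabla f(x)\bigr\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr) \quad \text{for all } x.

Every strongly convex function satisfies PL, but so do many nonconvex ones, which is why it is a relaxation of convexity; for instance, under PL the plain gradient flow \dot x = -\nabla f(x) already yields f(x(t)) - f^{*} \le e^{-2\mu t}\,\bigl(f(x_0) - f^{*}\bigr).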

8.
IEEE Trans Cybern; 52(12): 12942-12953, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34347618

ABSTRACT

This article proposes a novel fixed-time converging forward-backward-forward neurodynamic network (FXFNN) to deal with mixed variational inequalities (MVIs). A distinctive feature of the FXFNN is its fast, fixed-time convergence, in contrast to the conventional forward-backward-forward neurodynamic network and the projected neurodynamic network. It is shown that the solution of the proposed FXFNN exists uniquely and converges to the unique solution of the corresponding MVI in fixed time under some mild conditions. It is also shown that the fixed-time convergence result obtained for the FXFNN is independent of the initial conditions, unlike most existing asymptotic and exponential convergence results. Furthermore, the proposed FXFNN is applied to sparse recovery problems, variational inequalities, nonlinear complementarity problems, and min-max problems. Finally, numerical and experimental examples are presented to validate the effectiveness of the proposed neurodynamic network.


Subjects
Neural Networks (Computer)
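
A discrete-time sketch of the classical forward-backward-forward (Tseng) splitting on which the FXFNN builds, assuming F(x) = Q x + c with Q positive semidefinite and g = mu*||.||_1; the fixed-time gains of the FXFNN are not reproduced, so this plain iteration converges only asymptotically.

import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def tseng_fbf(Q, c, mu, steps=3000):
    lam = 0.9 / np.linalg.norm(Q, 2)                 # step size below 1/Lip(F)
    x = np.zeros(len(c))
    for _ in range(steps):
        Fx = Q @ x + c
        y = soft_threshold(x - lam * Fx, lam * mu)   # forward-backward step
        x = y - lam * (Q @ y + c - Fx)               # correcting forward step
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, 1.0])
print(tseng_fbf(Q, c, mu=0.1))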
9.
Neural Netw; 138: 1-9, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33610091

ABSTRACT

This paper proposes a proximal neurodynamic model (PNDM), based on the proximal operator, for solving inverse mixed variational inequalities (IMVIs). It is shown that the PNDM has a unique continuous solution under the condition of Lipschitz continuity (L-continuity). It is also shown that the equilibrium point of the proposed PNDM is asymptotically stable or exponentially stable under some mild conditions. Finally, three numerical examples are presented to illustrate the effectiveness of the proposed PNDM.


Subjects
Neural Networks (Computer)
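
For reference, the proximal operator on which the PNDM is built is

\operatorname{prox}_{\lambda g}(v) \;=\; \arg\min_{u}\Bigl\{\, g(u) + \tfrac{1}{2\lambda}\,\|u - v\|^{2} \Bigr\}, \qquad \lambda > 0,

and a typical proximal neurodynamic flow for a (forward) mixed variational inequality with operator F takes the form \dot x = \operatorname{prox}_{\lambda g}\bigl(x - \lambda F(x)\bigr) - x; whether the paper's model for the inverse problem takes exactly this form is not stated in the abstract.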