Results: 1 - 4 of 4
1.
Neural Netw ; 174: 106247, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38518707

ABSTRACT

In this paper, we propose a novel neurodynamic approach with predefined-time stability for solving mixed variational inequality problems. Our approach introduces an adjustable time parameter, which makes it more flexible and broadly applicable than conventional fixed-time stability methods. Under certain conditions, the proposed approach converges to the unique solution within a predefined time, which sets it apart from fixed-time and finite-time stability approaches. Furthermore, the approach extends to a wide range of mathematical optimization problems, including variational inequalities, nonlinear complementarity problems, sparse signal recovery, and Nash equilibrium seeking in noncooperative games. We provide numerical simulations that validate the theoretical derivation and demonstrate the effectiveness and feasibility of the proposed method.
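The core construction behind such methods can be illustrated with the classical projection neural network, the fixed-gain precursor of predefined-time dynamics: its equilibria coincide with the solutions of the variational inequality. The following is a minimal Python sketch under illustrative assumptions (the affine operator F, the box constraints, and the gain lam are not taken from the paper):

import numpy as np

# Monotone operator F(x) = Ax + b (A positive definite) and a box constraint.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 0.5])
lo, hi = -1.0, 1.0                           # feasible box [lo, hi]^2

def F(x):
    return A @ x + b

def proj(x):
    return np.clip(x, lo, hi)                # projection onto the box

# Projection neural network: dx/dt = lam * (proj(x - F(x)) - x).
# Equilibria satisfy proj(x - F(x)) = x, i.e., they solve the variational
# inequality F(x*)^T (x - x*) >= 0 for all x in the box.
x = np.array([0.9, -0.9])                    # arbitrary initial state
lam, dt = 1.0, 0.01                          # fixed gain and Euler step (assumed)
for _ in range(5000):
    x = x + dt * lam * (proj(x - F(x)) - x)

print("approximate solution:", x)
print("residual:", np.linalg.norm(proj(x - F(x)) - x))

A predefined-time variant would replace the constant gain lam with a time-varying gain driven by the adjustable time parameter, forcing the residual to zero before the prescribed deadline.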


Subjects
Algorithms, Neural Networks (Computer)
2.
Neural Comput ; 27(4): 982-1004, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25602771

ABSTRACT

This letter presents a stability analysis for two steepest-descent algorithms with momentum for quadratic functions. The corresponding locally optimal parameters of Torii and Hagan (2002) and Zhang (2013) are extended to globally optimal parameters; that is, the optimal learning rates and the optimal momentum factors that yield the fastest convergence are obtained simultaneously.
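For context, the classical heavy-ball optimum for a quadratic whose Hessian eigenvalues lie in [mu, L] is the Polyak pair alpha = 4/(sqrt(L) + sqrt(mu))^2 and beta = ((sqrt(L) - sqrt(mu))/(sqrt(L) + sqrt(mu)))^2. The sketch below runs these globally optimal parameters on a random quadratic; it illustrates the flavor of the result rather than the letter's two specific algorithms:

import numpy as np

# Quadratic f(x) = 0.5 x^T Q x with Hessian eigenvalues in [mu, L].
rng = np.random.default_rng(0)
eigs = np.linspace(1.0, 100.0, 10)           # mu = 1, L = 100 (illustrative)
U, _ = np.linalg.qr(rng.normal(size=(10, 10)))
Q = U @ np.diag(eigs) @ U.T
mu, L = eigs[0], eigs[-1]

# Polyak's globally optimal heavy-ball parameters for quadratics.
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x_prev = x = rng.normal(size=10)
for _ in range(200):
    x, x_prev = x - alpha * (Q @ x) + beta * (x - x_prev), x

# The iterates contract asymptotically like ((sqrt(L)-sqrt(mu))/(sqrt(L)+sqrt(mu)))^k.
print("||x_200|| =", np.linalg.norm(x))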

3.
Neural Comput ; 25(5): 1277-301, 2013 May.
Article in English | MEDLINE | ID: mdl-23470121

ABSTRACT

Two steepest-descent algorithms with momentum for quadratic functions are considered. For a given learning rate, necessary and sufficient conditions for the semistability of the steepest-descent algorithms with momentum are established. Moreover, the optimal momentum factors that yield the fastest convergence are obtained.
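The per-eigenvalue analysis behind such conditions can be reproduced numerically: for heavy-ball iterations on a quadratic, each Hessian eigenvalue lam contributes a 2x2 companion matrix, and stability and speed are governed by the largest spectral radius over the spectrum. A minimal sketch, with the learning rate and the eigenvalue set chosen for illustration only:

import numpy as np

def rho(alpha, beta, lam):
    # Spectral radius of the heavy-ball iteration matrix for one Hessian
    # eigenvalue lam: [x_{k+1}; x_k] = M [x_k; x_{k-1}].
    M = np.array([[1.0 + beta - alpha * lam, -beta],
                  [1.0, 0.0]])
    return np.abs(np.linalg.eigvals(M)).max()

alpha = 0.01                          # given, fixed learning rate (assumed)
spectrum = [1.0, 10.0, 100.0]         # illustrative Hessian eigenvalues

# Worst-case contraction factor over the spectrum as a function of beta.
betas = np.linspace(0.0, 0.999, 1000)
worst = [max(rho(alpha, b, lam) for lam in spectrum) for b in betas]
best = betas[int(np.argmin(worst))]
print("optimal momentum for alpha = %.3f: beta ~ %.3f" % (alpha, best))
# For quadratics this matches the known closed form
# beta* = max((1 - sqrt(alpha*mu))^2, (1 - sqrt(alpha*L))^2) = 0.81 here.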


Subjects
Algorithms, Neural Networks (Computer)
4.
IEEE Trans Neural Netw ; 17(2): 522-5, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16566479

ABSTRACT

A gradient method with momentum for two-layer feedforward neural networks is considered. The learning rate is set to a constant and the momentum factor is an adaptive variable. Both weak and strong convergence results are proved, as well as convergence rates for the error function and for the weights. Compared with existing convergence results, ours are more general, since we do not require the error function to be quadratic.
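The paper's exact adaptive rule is not reproduced here, but a common choice in this convergence literature ties the momentum coefficient to the current gradient norm, so the momentum term vanishes near a stationary point. A minimal sketch under that assumption, with a toy two-layer network:

import numpy as np

rng = np.random.default_rng(1)

# Toy regression data and a two-layer network y_hat = V @ sigmoid(W @ x).
X = rng.normal(size=(2, 50))                  # 50 samples, 2 features
y = np.sin(X[0]) + 0.5 * X[1]
W = 0.1 * rng.normal(size=(4, 2))             # hidden-layer weights
V = 0.1 * rng.normal(size=(1, 4))             # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads(W, V):
    H = sigmoid(W @ X)                        # hidden activations
    err = V @ H - y                           # output error, shape (1, 50)
    gV = err @ H.T / X.shape[1]
    gW = ((V.T @ err) * H * (1 - H)) @ X.T / X.shape[1]
    return gW, gV, 0.5 * np.mean(err ** 2)

eta = 0.5                                     # constant learning rate (assumed)
dW, dV = np.zeros_like(W), np.zeros_like(V)
for _ in range(2000):
    gW, gV, E = grads(W, V)
    # Adaptive momentum factor: scaled by the gradient norm (an assumed
    # rule, not necessarily the one analyzed in the paper).
    mu = 0.9 * min(1.0, np.linalg.norm(gW) + np.linalg.norm(gV))
    dW = -eta * gW + mu * dW
    dV = -eta * gV + mu * dV
    W, V = W + dW, V + dV

print("final training error:", E)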


Subjects
Algorithms, Models, Theoretical, Neural Networks (Computer), Numerical Analysis, Computer-Assisted, Pattern Recognition, Automated/methods, Artificial Intelligence, Computer Simulation