Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.
Neural Netw 2012 Sep; 33: 21-31.
Article in English | MEDLINE | ID: mdl-22561006
The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised neural network learning: data-set error and network complexity. The neural network is described as a dynamic system with error and complexity as its state variables, and learning is presented as the process of controlling a learning trajectory in the resulting state space. To control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals, and formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and the states along a trajectory can be assessed individually against an additional objective function.
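The following is a minimal, illustrative sketch of the general idea described in the abstract, not the authors' algorithm: learning is steered in an (error, complexity) state space by a switching law that drives a sliding surface toward zero. All concrete choices here (a linear model, squared weight norm as the complexity measure, the surface `s = error - c_ref * complexity`, and the gains `eta`, `k`, `c_ref`) are assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data set (assumption: any regression data works here).
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = 0.1 * rng.normal(size=3)  # "network" weights (linear model for brevity)

def error(w):
    """Data-set error: mean squared error."""
    r = X @ w - y
    return float(r @ r) / len(y)

def complexity(w):
    """Complexity proxy: squared weight norm (an illustrative choice)."""
    return float(w @ w)

# State-space target: drive the sliding surface s = error - c_ref * complexity
# toward zero by adapting the trade-off gain lam with a bounded switching law.
c_ref = 0.5        # hypothetical trajectory parameter
eta, k = 0.05, 0.02  # step size and switching gain (illustrative values)
lam = 0.0            # trade-off gain between the two objectives

for step in range(500):
    # Gradient step on the scalarized objective error + lam * complexity.
    g = 2.0 * X.T @ (X @ w - y) / len(y) + 2.0 * lam * w
    w -= eta * g
    # Sliding-mode-style switching: raise the complexity penalty when the
    # state is above the surface (s > 0), lower it otherwise, within bounds.
    s = error(w) - c_ref * complexity(w)
    lam = float(np.clip(lam + k * np.sign(s), 0.0, 10.0))
```

The switching on `sign(s)` is what makes the trade-off gain, rather than a fixed regularization constant, steer the state toward the prescribed surface; keeping the gain clipped to a bounded interval mirrors, loosely, the paper's requirement that sliding mode gains stay within their convergence intervals.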
Collections: 01-internacional
Database: MEDLINE
Main subject: Neural Networks, Computer / Learning
Study type: Prognostic study
Language: English
Journal: Neural Netw
Journal subject: Neurology
Publication year: 2012
Document type: Article
Country of affiliation: Brazil