Results 1 - 3 of 3
1.
Network ; 10(1): 1-13, 1999 Feb.
Article in English | MEDLINE | ID: mdl-10372759

ABSTRACT

Optimization of perceptron neural network classifiers requires a robust optimization algorithm. In general, the best network is selected after a number of optimization trials. An effective optimization algorithm generates good weight-vector solutions in a few optimization trial runs owing to its inherent ability to escape local minima, whereas a less effective algorithm requires a larger number of trial runs. Repetitive training and testing is a tedious process, so an effective algorithm is desirable to reduce training time and improve the quality of the set of available weight-vector solutions. We present leap-frog as a robust optimization algorithm for training neural networks. In this paper the dynamic principles of leap-frog are described, together with experiments that show its ability to generate reliable weight-vector solutions. Performance histograms are used to compare leap-frog with a variable-metric method, a conjugate-gradient method with modified restarts, and a constrained-momentum-based algorithm. Results indicate that leap-frog performs better in terms of classification error than the other three algorithms on two distinctly different test problems.
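The abstract describes leap-frog (Snyman's dynamic trajectory method) only at a high level. A minimal sketch of the underlying idea, assuming the standard formulation in which the weight vector is treated as a unit-mass particle accelerated by the negative gradient, with kinetic energy bled off whenever the objective rises, might look like the following. The function names, step size, and damping rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def leapfrog_minimize(f, grad, w0, dt=0.01, max_iter=5000, tol=1e-8):
    """Sketch of a leap-frog-style dynamic minimizer.

    The weight vector w is treated as a unit-mass particle with
    acceleration a = -grad(w); positions and velocities are advanced
    with a leapfrog integration step. When a step increases the
    objective, the velocity is halved -- a crude stand-in for the
    energy-extraction "interference" rules of the full method.
    """
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    f_prev = f(w)
    for _ in range(max_iter):
        a = -grad(w)                 # force = negative gradient
        v = v + a * dt               # velocity update
        w_new = w + v * dt           # position update
        f_new = f(w_new)
        if f_new > f_prev:           # moving uphill: bleed off energy
            v = 0.5 * v
        w, f_prev = w_new, f_new
        if np.linalg.norm(a) < tol:  # near-stationary point: stop
            break
    return w

# Toy usage: minimize the Rosenbrock function.
f = lambda w: (1 - w[0])**2 + 100 * (w[1] - w[0]**2)**2
g = lambda w: np.array([
    -2 * (1 - w[0]) - 400 * w[0] * (w[1] - w[0]**2),
    200 * (w[1] - w[0]**2),
])
print(leapfrog_minimize(f, g, [-1.5, 2.0]))
```

Because the particle carries momentum, it can coast through shallow local minima rather than settling into the first one it meets, which is the escape property the abstract appeals to.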


Subjects
Algorithms; Neural Networks, Computer; Models, Theoretical; Teaching/methods
2.
IEEE Trans Neural Netw ; 4(5): 794-802, 1993.
Article in English | MEDLINE | ID: mdl-18276509

ABSTRACT

The ability of neural net classifiers to deal with a priori information is investigated. For this purpose, backpropagation classifiers are trained on data from known distributions with variable a priori probabilities, and their performance on separate test sets is evaluated. It is found that backpropagation employs a priori information in a slightly suboptimal fashion, but that this has no serious consequences for classifier performance. Furthermore, the inferior generalization that results when an excessive number of network parameters is used can be (partially) ascribed to this suboptimality.
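The reasoning behind this kind of experiment is that a classifier trained with a cross-entropy (or squared-error) loss approximates the posterior P(class | x), which already folds in the class priors, so a well-trained network should shift its decision boundary as the priors change. A minimal sketch of such an experiment, assuming two one-dimensional Gaussian classes and a logistic model trained by gradient descent (the logistic model stands in for the multilayer nets used in the paper; all names and parameters here are illustrative), could look like this.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, prior1, mu0=-1.0, mu1=1.0, sigma=1.0):
    """Two Gaussian classes with a controllable prior P(y=1) = prior1."""
    y = (rng.random(n) < prior1).astype(float)
    x = np.where(y == 1, mu1, mu0) + sigma * rng.standard_normal(n)
    return x, y

def train_logistic(x, y, lr=0.1, epochs=2000):
    """Logistic model p(y=1|x) = sigmoid(a*x + b), batch gradient descent."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        err = p - y                      # gradient factor of cross-entropy loss
        a -= lr * np.mean(err * x)
        b -= lr * np.mean(err)
    return a, b

prior1 = 0.8                             # skewed a priori probability
x, y = make_data(5000, prior1)
a, b = train_logistic(x, y)

# For equal-variance Gaussians the Bayes-optimal posterior is also a
# sigmoid: slope (mu1 - mu0) / sigma^2, intercept including the log
# prior odds log(prior1 / (1 - prior1)).
bayes_a = 2.0
bayes_b = -0.5 * (1.0**2 - (-1.0)**2) + np.log(prior1 / (1 - prior1))

print(f"learned: slope={a:.2f}, intercept={b:.2f}")
print(f"bayes  : slope={bayes_a:.2f}, intercept={bayes_b:.2f}")
```

Comparing the learned intercept against the Bayes-optimal one shows how faithfully the trained model has absorbed the prior; in the paper's terms, a systematic gap between the two would indicate that training uses the a priori information suboptimally.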
