Neural Netw. 11(2): 215-34, 1998 Mar.
Article in English | MEDLINE | ID: mdl-12662833

ABSTRACT

Modular neural networks use a single gating neuron to select among the outputs of a collection of agent neurons. Expectation-maximization (EM) algorithms provide one way of training modular neural networks to approximate non-linear functionals. This paper introduces a hybrid interior-point (HIP) algorithm for training modular networks. The HIP algorithm combines an interior-point linear programming (LP) algorithm with a Newton-Raphson iteration in a way that preserves the computational efficiency of interior-point LP methods. The algorithm is formally proven to converge asymptotically to locally optimal networks, with a total computational cost that scales polynomially with problem size. Simulation experiments show that the HIP algorithm produces networks whose average approximation error is lower than that of EM-trained networks. These results also demonstrate that the computational cost of the HIP algorithm scales at a slower rate than that of the EM procedure, and that for small networks the total computational costs of the two methods are comparable.
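
The abstract describes the architecture HIP operates on but not the training procedure itself, so the following is a minimal sketch of the modular-network forward pass only: a gating neuron scores a pool of agent neurons and hard-selects one output per input. The tanh agents, the linear gate, the layer sizes, and all variable names are illustrative assumptions, not the paper's parameterization.

```python
# Sketch of a modular network: a gating neuron scores a pool of "agent"
# neurons and selects one agent's output per input. Agent/gate forms and
# sizes are assumptions for illustration; HIP training is not shown.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_agents = 3, 4

# Each agent neuron: a tanh unit with its own weights and bias.
W_agent = rng.normal(size=(n_agents, n_in))
b_agent = rng.normal(size=n_agents)

# Gating parameters: produce one score per agent for a given input.
W_gate = rng.normal(size=(n_agents, n_in))
b_gate = rng.normal(size=n_agents)

def forward(x):
    """Return the output of the agent the gate selects for input x."""
    agent_out = np.tanh(W_agent @ x + b_agent)   # all agent outputs
    gate_scores = W_gate @ x + b_gate            # gating scores
    k = int(np.argmax(gate_scores))              # hard selection
    return agent_out[k], k

x = rng.normal(size=n_in)
y, chosen = forward(x)
print(f"agent {chosen} selected, output {y:+.4f}")
```

Under EM training, the gate's selection is typically treated as a latent variable to be re-estimated each iteration; per the abstract, the HIP algorithm instead fits such networks by combining interior-point LP steps with Newton-Raphson iterations.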
