On the optimality of neural-network approximation using incremental algorithms.
Meir, R; Maiorov, V E.
Affiliation
  • Meir R; Department of Electrical Engineering, Technion, Haifa 32000, Israel.
IEEE Trans Neural Netw ; 11(2): 323-37, 2000.
Article in En | MEDLINE | ID: mdl-18249764
ABSTRACT
The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the L2 norm, we compute upper bounds on the approximation error, where the error is measured in the Lq norm, 1 ≤ q ≤ ∞. These results extend previous work, applicable in the case q = 2, and provide an explicit algorithm that achieves the derived approximation error rate. In the range q ≤ 2, near-optimal rates of convergence are demonstrated. A gap remains, however, with respect to a recently established lower bound in the case q > 2, although the rates achieved are provably better than those obtained by optimal linear approximation. Extensions of the results from the L2 norm to Lp are also discussed. A further interesting conclusion from our results is that no loss of generality is suffered by using networks with positive hidden-to-output weights. Moreover, explicit bounds on the size of the hidden-to-output weights are established, which are sufficient to guarantee the stated convergence rates.
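
To make the notion of an incremental algorithm concrete, the following is a minimal sketch of a stagewise greedy fit: one sigmoidal hidden unit is added per step, previously added units are never revisited, and every hidden-to-output weight is clipped to be non-negative, echoing the abstract's remark about positive hidden-to-output weights. This is an illustration only, not the construction analyzed in the paper; the random candidate search, the sigmoid activation, and all names (incremental_fit, n_units, n_candidates) are assumptions made for the demo.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def incremental_fit(x, y, n_units=25, n_candidates=300):
    """Approximate y(x) in L2 by adding one greedily fitted unit per step."""
    f = np.zeros_like(y)                 # current approximation
    units = []                           # fitted (w, b, c) triples, c >= 0
    for _ in range(n_units):
        residual = y - f
        best = None
        for _ in range(n_candidates):    # random search over candidate (w, b)
            w = rng.normal(scale=4.0)
            b = rng.normal(scale=4.0)
            g = sigmoid(w * x + b)
            # Least-squares output weight for this unit, clipped at zero so
            # the hidden-to-output weight stays non-negative.
            c = max(0.0, float(g @ residual) / float(g @ g))
            err = float(np.mean((residual - c * g) ** 2))
            if best is None or err < best[0]:
                best = (err, w, b, c)
        _, w, b, c = best
        units.append((w, b, c))
        f = f + c * sigmoid(w * x + b)   # earlier units are left unchanged
    return f, units

# Toy usage: approximate a smooth target on [-1, 1] and report the L2 error.
x = np.linspace(-1.0, 1.0, 400)
y = np.sin(3 * x) + 0.5 * x ** 2
f, units = incremental_fit(x, y)
print("L2 error:", np.sqrt(np.mean((y - f) ** 2)))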

Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Language: En | Journal: IEEE Trans Neural Netw | Journal subject: Medical Informatics | Year: 2000 | Document type: Article | Affiliation country: Israel