Connectivity and performance tradeoffs in the cascade correlation learning architecture.
Phatak, D S; Koren, I.
Affiliation
  • Phatak DS; Dept. of Electr. Eng., State Univ. of New York, Binghamton, NY.
IEEE Trans Neural Netw ; 5(6): 930-5, 1994.
Article in En | MEDLINE | ID: mdl-18267867
ABSTRACT
Cascade correlation is a flexible, efficient, and fast algorithm for supervised learning. It incrementally builds the network by adding hidden units one at a time until the desired input/output mapping is achieved, connecting all previously installed units to each new unit being added. Consequently, each new unit in effect adds a new layer, and the fan-in of the hidden and output units keeps increasing as more units are added. The resulting structure can be hard to implement in VLSI, because the connections are irregular and the fan-in is unbounded. Moreover, the depth, or propagation delay, through the resulting network is directly proportional to the number of units and can be excessive. We have modified the algorithm to generate networks with restricted fan-in and small depth (propagation delay) by controlling the connectivity. Our results reveal a tradeoff between connectivity and other performance attributes such as depth, total number of independent parameters, and learning time.
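The following Python sketch illustrates the structural point of the abstract, not the authors' actual method: in standard cascade correlation, hidden unit k receives connections from all inputs plus all k previously installed units, so fan-in grows without bound and each unit deepens the network by one layer. The max_fan_in parameter and the "connect to the shallowest earlier units first" policy below are illustrative assumptions used to show how capping connectivity also caps depth; the paper's own connectivity-control scheme may differ.

    # Sketch contrasting unrestricted cascade correlation connectivity
    # with a hypothetical fan-in-limited variant (illustrative only).
    # Assumes every hidden unit keeps all input connections and spends
    # any remaining fan-in budget on the shallowest earlier hidden units.

    def cascade_fan_in_and_depth(n_inputs, n_hidden, max_fan_in=None):
        """Return (fan-in of each hidden unit, depth of the hidden cascade).

        max_fan_in=None reproduces standard cascade correlation: unit k
        connects to all n_inputs inputs and all k earlier hidden units,
        so fan-in = n_inputs + k and depth = n_hidden.
        """
        fan_ins, depths = [], []
        for k in range(n_hidden):
            candidates = n_inputs + k          # all possible incoming links
            if max_fan_in is None:
                fan_in = candidates            # unrestricted: connect to all
                depth = k + 1                  # each unit adds one layer
            else:
                fan_in = min(candidates, max_fan_in)
                n_links = max(0, fan_in - n_inputs)   # hidden-unit links kept
                # depth = 1 + depth of the deepest unit actually connected;
                # picking the shallowest n_links earlier units keeps it small
                depth = (sorted(depths)[n_links - 1] + 1) if n_links else 1
            fan_ins.append(fan_in)
            depths.append(depth)
        return fan_ins, (max(depths) if depths else 0)

    if __name__ == "__main__":
        f, d = cascade_fan_in_and_depth(n_inputs=4, n_hidden=8)
        print("unrestricted:", f, "depth", d)   # fan-ins 4..11, depth 8
        f, d = cascade_fan_in_and_depth(n_inputs=4, n_hidden=8, max_fan_in=6)
        print("restricted:  ", f, "depth", d)   # fan-ins capped at 6, depth 3

Running the example shows the tradeoff the abstract describes: with fan-in capped at 6, the eight hidden units settle at depth 3 instead of 8, at the cost of fewer independent connections (and hence, per the paper, potentially more units or longer learning).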
Database: MEDLINE Language: En Year of publication: 1994 Document type: Article