Sparse RNNs can support high-capacity classification.
Turcu, Denis; Abbott, L F.
Affiliation
  • Turcu D; The Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Department of Neuroscience, Columbia University, New York, New York, United States of America.
  • Abbott LF; The Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Department of Neuroscience, Columbia University, New York, New York, United States of America.
PLoS Comput Biol; 18(12): e1010759, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36516226
ABSTRACT
Feedforward network models performing classification tasks rely on highly convergent output units that collect the information passed on by preceding layers. Although convergent output-unit-like neurons may exist in some biological neural circuits, notably the cerebellar cortex, neocortical circuits do not exhibit any obvious candidates for this role; instead, they are highly recurrent. We investigate whether a sparsely connected recurrent neural network (RNN) can perform classification in a distributed manner without ever bringing all of the relevant information to a single convergence site. Our model is based on a sparse RNN that performs classification dynamically. Specifically, the interconnections of the RNN are trained to resonantly amplify the magnitude of responses to some external inputs but not others. The amplified and non-amplified responses then form the basis for binary classification. Furthermore, the network acts as an evidence accumulator and maintains its decision even after the input is turned off. Despite highly sparse connectivity, learned recurrent connections allow input information to flow to every neuron of the RNN, providing the basis for distributed computation. In this arrangement, the minimum number of synapses per neuron required to reach maximum memory capacity scales only logarithmically with network size. The model is robust to various types of noise, works with different activation and loss functions and with both backpropagation- and Hebbian-based learning rules. The RNN can also be constructed with a split excitation-inhibition architecture with little reduction in performance.
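To make the mechanism concrete, here is a minimal sketch (not the authors' code or model) of amplitude-based classification in a sparse linear RNN. A sparse random background plus a low-rank component stands in for the trained connectivity, creating a near-unit-eigenvalue mode that resonantly amplifies and accumulates one input pattern while the other decays; the class is then read out from overall response magnitude rather than from a convergent output unit. The network size, sparsity level, eigenvalues, threshold, and the dense low-rank shortcut are all illustrative assumptions; in the paper this structure is learned within genuinely sparse connections.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200   # recurrent units (assumed size)
p = 0.05  # connection probability of the sparse background (assumed)

# Two orthonormal input patterns: "+" should be amplified, "-" should decay.
u_plus = rng.standard_normal(N)
u_plus /= np.linalg.norm(u_plus)
u_minus = rng.standard_normal(N)
u_minus -= (u_plus @ u_minus) * u_plus   # orthogonalize against u_plus
u_minus /= np.linalg.norm(u_minus)

# Sparse random background plus a low-rank stand-in for the learned
# structure: a slow mode (eigenvalue ~0.98) along u_plus and a
# fast-decaying mode (eigenvalue ~-0.5) along u_minus.
mask = rng.random((N, N)) < p
W = 0.05 * mask * rng.standard_normal((N, N)) / np.sqrt(p * N)
W += 0.98 * np.outer(u_plus, u_plus) - 0.5 * np.outer(u_minus, u_minus)

def response_magnitude(u, t_on=30, t_off=20):
    """Drive the network with input u, then let it run freely."""
    r = np.zeros(N)
    for _ in range(t_on):
        r = W @ r + u   # input on: the resonant mode accumulates evidence
    for _ in range(t_off):
        r = W @ r       # input off: the slow mode keeps the decision alive
    return np.linalg.norm(r)

theta = 5.0  # amplitude threshold between the two classes (assumed)
for label, u in [("amplified (+)", u_plus), ("non-amplified (-)", u_minus)]:
    amp = response_magnitude(u)
    print(f"{label}: |r| = {amp:.2f} -> class {'+' if amp > theta else '-'}")
```

Because the slow mode's eigenvalue sits just below one, the amplified response both grows while the input is on (evidence accumulation) and persists for many steps after the input is removed, mirroring the decision-maintenance property described in the abstract.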
Subject(s)

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Neural Networks, Computer / Learning Study type: Prognostic_studies Language: En Journal: PLoS Comput Biol Journal subject: BIOLOGY / MEDICAL INFORMATICS Year: 2022 Document type: Article Country of affiliation: United States
