Neural networks learn highly selective representations in order to overcome the superposition catastrophe.
Psychol Rev
; 121(2): 248-61, 2014 Apr.
Article in English
| MEDLINE
| ID: mdl-24564411
ABSTRACT
A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the coactivation of multiple "things" (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the coactivation of nonselective codes often results in a blend pattern that is ambiguous: the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and that the number of localist codes scales with the level of superposition. Given that many cortical systems are required to coactivate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex.
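The ambiguity the abstract describes can be illustrated with a toy example. The codebooks below are hypothetical and not taken from the paper: four "words" are given either distributed codes (each word activates two of four units) or localist one-hot codes, and two codes are coactivated by element-wise OR. With the distributed codes, the blend of A and B is indistinguishable from the blend of C and D; with localist codes, every blend is uniquely decodable.

```python
import itertools

# Hypothetical 4-unit distributed codes (illustrative only):
# each "word" activates two of the four units.
distributed = {
    "A": (1, 1, 0, 0),
    "B": (0, 0, 1, 1),
    "C": (1, 0, 1, 0),
    "D": (0, 1, 0, 1),
}

# Localist (one-hot) codes for the same four words.
localist = {
    "A": (1, 0, 0, 0),
    "B": (0, 1, 0, 0),
    "C": (0, 0, 1, 0),
    "D": (0, 0, 0, 1),
}

def superpose(code_a, code_b):
    """Coactivate two codes as a simple blend (element-wise OR)."""
    return tuple(int(x or y) for x, y in zip(code_a, code_b))

def pairs_matching(blend, codebook):
    """All unordered word pairs whose superposition equals the blend."""
    return [
        frozenset(pair)
        for pair in itertools.combinations(codebook, 2)
        if superpose(codebook[pair[0]], codebook[pair[1]]) == blend
    ]

# Distributed codes: A+B blends to (1,1,1,1), but so does C+D -> ambiguous.
print(pairs_matching(superpose(distributed["A"], distributed["B"]), distributed))

# Localist codes: the blend (1,1,0,0) uniquely identifies {A, B}.
print(pairs_matching(superpose(localist["A"], localist["B"]), localist))
```

This is only a sketch of the superposition constraint itself, not of the recurrent network used in the paper; it shows why a readout faced with a blend of distributed patterns cannot always recover which items produced it.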
Full text:
1
Collection:
01-internacional
Database:
MEDLINE
Main subject:
Cerebral Cortex
/
Neural Networks, Computer
Limits:
Humans
Language:
En
Journal:
Psychol Rev
Year:
2014
Document type:
Article
Country of publication:
United States