A mathematical theory of relational generalization in transitive inference.
Lippl, Samuel; Kay, Kenneth; Jensen, Greg; Ferrera, Vincent P; Abbott, L F.
Affiliations
  • Lippl S; Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027.
  • Kay K; Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, NY 10027.
  • Jensen G; Department of Neuroscience, Columbia University Medical Center, New York, NY 10032.
  • Ferrera VP; Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027.
  • Abbott LF; Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, NY 10027.
Proc Natl Acad Sci U S A ; 121(28): e2314511121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38968113
ABSTRACT
Humans and animals routinely infer relations between different items or events and generalize these relations to novel combinations of items. This allows them to respond appropriately to radically novel circumstances and is fundamental to advanced cognition. However, how learning systems (including the brain) can implement the necessary inductive biases has been unclear. We investigated transitive inference (TI), a classic relational task paradigm in which subjects must learn a relation (A > B and B > C) and generalize it to new combinations of items (A > C). Through mathematical analysis, we found that a broad range of biologically relevant learning models (e.g., gradient flow or ridge regression) perform TI successfully and recapitulate signature behavioral patterns long observed in living subjects. First, we found that models with item-wise additive representations automatically encode transitive relations. Second, for more general representations, a single scalar "conjunctivity factor" determines model behavior on TI and, further, the principle of norm minimization (a standard statistical inductive bias) enables models with fixed, partly conjunctive representations to generalize transitively. Finally, neural networks in the "rich regime," which enables representation learning and improves generalization on many tasks, unexpectedly show poor generalization and anomalous behavior on TI. We find that such networks implement a form of norm minimization (over hidden weights) that yields a local encoding mechanism lacking transitivity. Our findings show how minimal statistical learning principles give rise to a classical relational inductive bias (transitivity), explain empirically observed behaviors, and establish a formal approach to understanding the neural basis of relational abstraction.
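As a rough illustration of the abstract's central contrast, the sketch below (not the authors' code; the item count, the one-hot difference and pair-indicator encodings, and the use of the pseudoinverse as the ridge-regression limit are all illustrative assumptions) trains a minimum-norm linear model on adjacent premise pairs only. With an item-wise additive representation, the learned weights recover a linear rank and generalize transitively to all novel pairs; with a fully conjunctive (pair-specific) representation, novel pairs share no features with training and predictions fall to chance.

```python
# Minimal sketch: transitive inference via minimum-norm least squares
# on additive vs. fully conjunctive pair representations.
import numpy as np

n_items = 7  # items ranked A > B > ... > G (index 0 is highest)

def additive_features(i, j):
    # Pair (i, j) encoded as the difference of one-hot item codes:
    # an item-wise additive representation.
    x = np.zeros(n_items)
    x[i] += 1.0
    x[j] -= 1.0
    return x

def conjunctive_features(i, j):
    # Each ordered pair gets its own dedicated unit:
    # a fully conjunctive representation.
    x = np.zeros(n_items * n_items)
    x[i * n_items + j] = 1.0
    return x

def fit_min_norm(X, y):
    # Minimum-norm least-squares solution (ridge regression as lambda -> 0),
    # the norm-minimization inductive bias discussed in the abstract.
    return np.linalg.pinv(X) @ y

def novel_pair_accuracy(feat):
    # Train only on adjacent premise pairs, presented in both orders.
    premises = [(i, i + 1) for i in range(n_items - 1)]
    X = np.array([feat(i, j) for i, j in premises] +
                 [feat(j, i) for i, j in premises])
    y = np.array([1.0] * len(premises) + [-1.0] * len(premises))  # +1: left item ranks higher
    w = fit_min_norm(X, y)
    # Test on all non-adjacent (never-trained) pairs; a response is correct
    # if the sign of the prediction matches the underlying rank order.
    tests = [(i, j) for i in range(n_items) for j in range(n_items)
             if abs(i - j) > 1]
    correct = sum((feat(i, j) @ w > 0) == (i < j) for i, j in tests)
    return correct / len(tests)

print("additive accuracy on novel pairs:   ", novel_pair_accuracy(additive_features))
print("conjunctive accuracy on novel pairs:", novel_pair_accuracy(conjunctive_features))
```

Under these assumptions the additive model scores 1.0 on novel pairs, since the minimum-norm weights assign each item a linearly ordered value, while the conjunctive model predicts exactly 0 for every novel pair and so performs at chance.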

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Generalization, Psychological Limits: Humans Language: English Journal: Proc Natl Acad Sci U S A Year: 2024 Document type: Article
