Explainable neural networks that simulate reasoning.
Blazek, Paul J; Lin, Milo M.
  • Blazek PJ; Green Center for Systems Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA.
  • Lin MM; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA.
Nat Comput Sci; 1(9): 607-618, 2021 Sep.
Article in English | MEDLINE | ID: mdl-38217134
ABSTRACT
The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.
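The abstract states that ENNs are trained by a non-gradient-based algorithm but does not describe the procedure itself. Purely as a point of reference for what "non-gradient-based training" can mean, the following is a minimal sketch of one generic gradient-free approach (random hill climbing on the weights of a toy network). It is not the ENN construction from the paper; the network, task, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: gradient-free training of a tiny network by random
# hill climbing. This is NOT the ENN algorithm of Blazek & Lin (2021);
# it only illustrates fitting weights without computing gradients.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a 2-2-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)                   # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def loss(params):
    p = forward(params, X).ravel()
    return np.mean((p - y) ** 2)               # mean squared error

# Small random initialization.
params = [rng.normal(0, 1, (2, 2)), np.zeros(2),
          rng.normal(0, 1, (2, 1)), np.zeros(1)]
best = loss(params)

# Hill climbing: perturb all weights; keep the change only if loss improves.
for step in range(20000):
    trial = [p + rng.normal(0, 0.1, p.shape) for p in params]
    trial_loss = loss(trial)
    if trial_loss < best:
        params, best = trial, trial_loss

print(f"final loss: {best:.4f}")
print("predictions:", forward(params, X).ravel().round(2))
```

Unlike such stochastic search, the paper's algorithm is designed so that the resulting network's information processing is intrinsically explainable; this sketch shares only the property of avoiding gradient computation.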

Full text: 1 | Database: MEDLINE | Language: English | Year: 2021 | Document type: Article