Learning probabilistic neural representations with randomly connected circuits.
Maoz, Ori; Tkacik, Gasper; Esteki, Mohamad Saleh; Kiani, Roozbeh; Schneidman, Elad.
Affiliation
  • Maoz O; Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel.
  • Tkacik G; Institute of Science and Technology Austria, Klosterneuburg A-3400, Austria.
  • Esteki MS; Center for Neural Science, New York University, New York, NY 10003.
  • Kiani R; Center for Neural Science, New York University, New York, NY 10003; roozbeh@nyu.edu.
  • Schneidman E; Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel; elad.schneidman@weizmann.ac.il.
Proc Natl Acad Sci U S A ; 117(40): 25066-25073, 2020 10 06.
Article in English | MEDLINE | ID: mdl-32948691
The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model's performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.
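The abstract does not give the model's equations, but the idea it describes — scoring binary spike patterns through fixed sparse random projections with threshold nonlinearities, whose output weights are learned by matching feature statistics to data with the help of sampling noise — can be sketched as follows. This is a hedged illustration only: the population size, projection count, sparsity, thresholds, sampler, and learning rule below are all assumptions chosen for a runnable toy, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20    # neurons (illustrative; the paper records >100)
M = 150   # number of random projections (assumed)
p = 0.15  # connection sparsity (assumed)

# Fixed sparse random connectivity a_ij and per-projection thresholds theta_i.
A = (rng.random((M, N)) < p).astype(float)
theta = np.maximum(np.round(A.sum(axis=1) / 2), 1.0)

def h(x):
    # Binary threshold features: h_i(x) = 1 if the i-th random sum crosses theta_i.
    return (A @ x >= theta).astype(float)

def log_p_unnorm(x, lam):
    # Unnormalized log-likelihood: log p(x) = sum_i lam_i * h_i(x) - log Z.
    return float(lam @ h(x))

def metropolis_sample(lam, n_steps=2000):
    # Single-site Metropolis sampler over binary patterns; a crude stand-in
    # for the intrinsic circuit noise the abstract says the learning rule uses.
    x = rng.integers(0, 2, N).astype(float)
    samples = []
    for t in range(n_steps):
        j = rng.integers(N)
        x_new = x.copy()
        x_new[j] = 1.0 - x_new[j]
        if np.log(rng.random()) < log_p_unnorm(x_new, lam) - log_p_unnorm(x, lam):
            x = x_new
        if t >= n_steps // 2:  # discard burn-in
            samples.append(x.copy())
    return np.array(samples)

def fit(data, n_iter=20, lr=0.5):
    # Maximum-entropy-style learning: nudge each lam_i so the model's mean
    # feature activation matches the data's (a local, per-projection update).
    lam = np.zeros(M)
    data_mean = np.mean([h(x) for x in data], axis=0)
    for _ in range(n_iter):
        model_mean = np.mean([h(x) for x in metropolis_sample(lam)], axis=0)
        lam += lr * (data_mean - model_mean)
    return lam
```

After fitting to sparse spike-like patterns, `log_p_unnorm` assigns higher scores to patterns resembling the training data than to implausibly dense ones — the surprise/likelihood estimate the abstract refers to, up to the normalization constant.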

Full text: 1 Database: MEDLINE Main subject: Brain / Action Potentials / Neural Networks, Computer / Neurons Study type: Clinical_trials Limit: Humans Language: En Journal: Proc Natl Acad Sci U S A Publication year: 2020 Document type: Article Country of affiliation: Israel