Autoencoder networks extract latent variables and encode these variables in their connectomes.
Farrell, Matthew; Recanatesi, Stefano; Reid, R Clay; Mihalas, Stefan; Shea-Brown, Eric.
Affiliation
  • Farrell M; Applied Mathematics Department, University of Washington, Seattle, WA, United States of America; Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America. Electronic address: msf9@uw.edu.
  • Recanatesi S; Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America.
  • Reid RC; Allen Institute for Brain Science, Seattle, WA, United States of America.
  • Mihalas S; Allen Institute for Brain Science, Seattle, WA, United States of America.
  • Shea-Brown E; Applied Mathematics Department, University of Washington, Seattle, WA, United States of America; Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America; Allen Institute for Brain Science, Seattle, WA, United States of America.
Neural Netw; 141: 330-343, 2021 Sep.
Article in En | MEDLINE | ID: mdl-33957382
ABSTRACT
Advances in electron microscopy and data processing techniques are leading to increasingly large and complete microscale connectomes. At the same time, advances in artificial neural networks have produced model systems that perform comparably rich computations with perfectly specified connectivity. This raises an exciting scientific opportunity for the study of both biological and artificial neural networks: inferring the underlying circuit function from the structure of their connectivity. A potential roadblock, however, is that, even with well-constrained neural dynamics, there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. In this setting there is, in general, substantial ambiguity in the weights that can produce the same circuit function, because largely arbitrary changes to the input weights can be undone by applying the inverse modifications to the output weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights by using nonlinear dimensionality reduction methods.
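The ambiguity described in the abstract is easiest to see in the linear case: applying any invertible transformation to the encoder weights and its inverse to the decoder weights leaves the circuit's input-output map unchanged, so many distinct connectomes implement the same function. The sketch below is a minimal NumPy illustration of this degeneracy, not the authors' code; the layer sizes, random weights, and transformation G are assumptions made purely for demonstration.

# Minimal sketch (assumed setup, not the paper's implementation) of weight
# ambiguity in a single-hidden-layer linear autoencoder: replacing the encoder
# weights W_in by G @ W_in and the decoder weights W_out by W_out @ inv(G),
# for any invertible G, leaves the input -> reconstruction map unchanged.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 20, 5              # input and hidden-layer sizes (assumed)
X = rng.normal(size=(100, n_in))    # a batch of inputs

# Original encoder/decoder weights of a linear autoencoder.
W_in = rng.normal(size=(n_hidden, n_in))
W_out = rng.normal(size=(n_in, n_hidden))

# An arbitrary invertible transformation of the hidden layer.
G = rng.normal(size=(n_hidden, n_hidden))

# Modified connectome: encoder transformed, decoder transformed by the inverse.
W_in_mod = G @ W_in
W_out_mod = W_out @ np.linalg.inv(G)

# Both connectomes compute the same input-output function.
Y_orig = X @ W_in.T @ W_out.T
Y_mod = X @ W_in_mod.T @ W_out_mod.T
print(np.allclose(Y_orig, Y_mod))   # True: function alone does not pin down the weights

In the nonlinear, regularized setting analyzed in the paper, this symmetry is broken by the connectivity regularization, which is what makes the trained weights informative about the latent variables underlying the inputs.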
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Connectome Study type: Prognostic_studies Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year of publication: 2021 Document type: Article