Interpretable deep learning for deconvolutional analysis of neural signals.
Tolooshams, Bahareh; Matias, Sara; Wu, Hao; Temereanca, Simona; Uchida, Naoshige; Murthy, Venkatesh N; Masset, Paul; Ba, Demba.
Affiliation
  • Tolooshams B; Center for Brain Science, Harvard University, Cambridge, MA 02138.
  • Matias S; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138.
  • Wu H; Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125.
  • Temereanca S; Center for Brain Science, Harvard University, Cambridge, MA 02138.
  • Uchida N; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138.
  • Murthy VN; Center for Brain Science, Harvard University, Cambridge, MA 02138.
  • Masset P; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138.
  • Ba D; Carney Institute for Brain Science, Brown University, Providence, RI 02906.
bioRxiv; 2024 Jan 23.
Article in En | MEDLINE | ID: mdl-38260512
ABSTRACT
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and function. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the responses of neurons in the piriform cortex. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural dynamics.
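The decomposition described in the abstract rests on a convolutional sparse coding problem solved by an unrolled iterative algorithm: each network layer corresponds to one optimization step, so the learned weights are the kernels of the generative model rather than opaque parameters. The sketch below illustrates this general construction with an unrolled ISTA for 1D sparse deconvolution in PyTorch; the class name, fixed step size, soft-thresholding nonlinearity, and hyperparameters are illustrative assumptions and do not reproduce the authors' DUNL implementation.

# Minimal sketch of algorithm unrolling for 1D sparse deconvolution
# (illustrative, not the DUNL code). Generative model: y ~ sum_k h_k * x_k
# with sparse codes x_k; the kernels h_k are the interpretable weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledDeconv(nn.Module):
    def __init__(self, n_kernels=3, kernel_len=25, n_iters=20, lam=0.1, step=0.1):
        super().__init__()
        # Convolutional dictionary: one temporal kernel per latent event type.
        self.W = nn.Parameter(0.1 * torch.randn(n_kernels, 1, kernel_len))
        self.n_iters, self.lam, self.step = n_iters, lam, step

    def decode(self, x):
        # Synthesis operator H: reconstruct the signal from sparse codes.
        return F.conv_transpose1d(x, self.W)

    def forward(self, y):
        # y: (batch, 1, T) single-trial signal; codes live on a shorter grid.
        t_code = y.shape[-1] - self.W.shape[-1] + 1
        x = torch.zeros(y.shape[0], self.W.shape[0], t_code, device=y.device)
        for _ in range(self.n_iters):                # each unrolled layer = one ISTA step
            grad = F.conv1d(self.decode(x) - y, self.W)   # adjoint conv: H^T (Hx - y)
            x = x - self.step * grad                      # fixed step size (simplification)
            x = torch.relu(x.abs() - self.step * self.lam) * torch.sign(x)  # soft threshold
        return x, self.decode(x)                     # sparse codes and reconstruction

# Toy usage: the kernels are learned by backpropagating a reconstruction loss.
model = UnrolledDeconv()
y = torch.randn(8, 1, 500)
codes, y_hat = model(y)
loss = F.mse_loss(y_hat, y)
loss.backward()

Because the forward pass is a truncated optimizer rather than a generic stack of layers, the learned kernels can be read directly as stimulus-locked response motifs, which is the sense in which the unrolled network is interpretable.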

Full text: 1 Database: MEDLINE Language: En Publication year: 2024 Document type: Article
