Results 1 - 5 of 5

1.
Cell ; 184(14): 3731-3747.e21, 2021 07 08.
Article in English | MEDLINE | ID: mdl-34214470

ABSTRACT

In motor neuroscience, state changes are hypothesized to time-lock neural assemblies coordinating complex movements, but evidence for this remains slender. We tested whether a discrete change from more autonomous to coherent spiking underlies skilled movement by imaging cerebellar Purkinje neuron complex spikes in mice making targeted forelimb reaches. As mice learned the task, millimeter-scale spatiotemporally coherent spiking emerged ipsilateral to the reaching forelimb, and consistent neural synchronization became predictive of kinematic stereotypy. Before reach onset, spiking switched from more disordered to internally time-locked concerted spiking and silence. Optogenetic manipulations of cerebellar feedback to the inferior olive bidirectionally modulated neural synchronization and reaching direction. A simple model explained the reorganization of spiking during reaching as reflecting a discrete bifurcation in olivary network dynamics. These findings argue that to prepare learned movements, olivo-cerebellar circuits enter a self-regulated, synchronized state promoting motor coordination. State changes facilitating behavioral transitions may generalize across neural systems.
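The abstract's claim of a discrete bifurcation into synchrony can be made concrete with a generic coupled-oscillator model. The sketch below is not the paper's model of olivary dynamics; it is a minimal mean-field Kuramoto network (all parameter values are illustrative assumptions) whose order parameter jumps from incoherence to synchrony once the coupling K crosses a critical value, the same qualitative transition the abstract describes.

```python
# Minimal mean-field Kuramoto sketch (NOT the paper's olivary model):
# N coupled phase oscillators with heterogeneous natural frequencies.
# Sweeping the coupling K past a critical value (~1.6 for unit-variance
# Gaussian frequencies) produces a sharp incoherent-to-synchronized
# transition, a bifurcation to collective coherence.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 200, 0.01, 4000
omega = rng.normal(0.0, 1.0, N)  # natural frequencies

def simulate(K):
    """Return the final order parameter r in [0, 1] for coupling K."""
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()          # mean phase vector
        # each oscillator is pulled toward the population mean phase
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())   # r: 0 incoherent, 1 locked

for K in [0.5, 1.0, 1.5, 2.0, 3.0]:
    print(f"K={K:.1f}  order parameter r={simulate(K):.2f}")
# r stays near 0 below the critical coupling and rises steeply above it.
```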


Subject(s)
Movement/physiology; Nerve Net/physiology; Action Potentials/physiology; Animals; Calcium/metabolism; Cerebellum/physiology; Cortical Synchronization; Forelimb/physiology; Interneurons/physiology; Learning; Mice, Inbred C57BL; Mice, Transgenic; Models, Neurological; Motor Activity/physiology; Olivary Nucleus/physiology; Optogenetics; Purkinje Cells/physiology; Stereotyped Behavior; Task Performance and Analysis
2.
J Neurosci ; 42(48): 8960-8979, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36241385

ABSTRACT

Detecting object boundaries is crucial for recognition, but how the process unfolds in visual cortex remains unknown. To study the problem faced by a hypothetical boundary cell, and to predict how cortical circuitry could produce a boundary cell from a population of conventional "simple cells," we labeled 30,000 natural image patches and used Bayes' rule to help determine how a simple cell should influence a nearby boundary cell depending on its relative offset in receptive field position and orientation. We identified the following three basic types of cell-cell interactions: rising and falling interactions with a range of slopes and saturation rates, and nonmonotonic (bump-shaped) interactions with varying modes and amplitudes. Using simple models, we show that a ubiquitous cortical circuit motif consisting of direct excitation and indirect inhibition-a compound effect we call "incitation"-can produce the entire spectrum of simple cell-boundary cell interactions found in our dataset. Moreover, we show that the synaptic weights that parameterize an incitation circuit can be learned by a single-layer "delta" rule. We conclude that incitatory interconnections are a generally useful computing mechanism that the cortex may exploit to help solve difficult natural classification problems.

SIGNIFICANCE STATEMENT

Simple cells in primary visual cortex (V1) respond to oriented edges and have long been supposed to detect object boundaries, yet the prevailing model of a simple cell-a divisively normalized linear filter-is a surprisingly poor natural boundary detector. To understand why, we analyzed image statistics on and off object boundaries, allowing us to characterize the neural-style computations needed to perform well at this difficult natural classification task. We show that a simple circuit motif known to exist in V1 is capable of extracting high-quality boundary probability signals from local populations of simple cells. Our findings suggest a new, more general way of conceptualizing cell-cell interconnections in the cortex.
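The abstract's final learning claim, that a single-layer "delta" rule suffices to fit the interaction weights, is easy to illustrate. The sketch below is a hedged toy: the simple-cell responses are synthetic stand-ins (the exponential response model and the ten "aligned" cells are assumptions, not the paper's labeled-patch dataset), and it shows only the delta-rule weight update, not the incitation circuit itself.

```python
# Single-layer "delta rule" toy: learn weights mapping a population of
# synthetic simple-cell responses to a boundary / no-boundary label.
# The response model and the 10 "aligned" cells are assumptions for
# illustration, not the paper's labeled natural-image dataset.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_patches = 50, 5000

labels = rng.integers(0, 2, n_patches)               # 1 = boundary present
gain = np.where(np.arange(n_cells) < 10, 1.0, 0.0)   # aligned cells respond more
x = rng.exponential(1.0, (n_patches, n_cells)) + labels[:, None] * gain

w, b, lr = np.zeros(n_cells), 0.0, 0.05
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(10):
    for xi, yi in zip(x, labels):
        p = sigmoid(w @ xi + b)    # predicted boundary probability
        err = yi - p               # delta rule: error times input
        w += lr * err * xi
        b += lr * err

acc = ((sigmoid(x @ w + b) > 0.5) == labels).mean()
print(f"training accuracy: {acc:.2f}")
# Aligned cells acquire positive (excitatory-like) weights; the rest
# hover near zero, a crude analogue of learned interaction weights.
```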


Subject(s)
Visual Cortex; Bayes Theorem; Recognition, Psychology; Learning; Cell Communication
3.
PLoS Comput Biol ; 15(5): e1006892, 2019 05.
Article in English | MEDLINE | ID: mdl-31050662

ABSTRACT

In order to record the stream of autobiographical information that defines our unique personal history, our brains must form durable memories from single brief exposures to the patterned stimuli that impinge on them continuously throughout life. However, little is known about the computational strategies or neural mechanisms that underlie the brain's ability to perform this type of "online" learning. Based on increasing evidence that dendrites act as both signaling and learning units in the brain, we developed an analytical model that relates online recognition memory capacity to roughly a dozen dendritic, network, pattern, and task-related parameters. We used the model to determine what dendrite size maximizes storage capacity under varying assumptions about pattern density and noise level. We show that over a several-fold range of both of these parameters, and over multiple orders of magnitude of memory size, capacity is maximized when dendrites contain a few hundred synapses-roughly the natural number found in memory-related areas of the brain. Thus, in comparison to entire neurons, dendrites increase storage capacity by providing a larger number of better-sized learning units. Our model provides the first normative theory that explains how dendrites increase the brain's capacity for online learning; predicts which combinations of parameter settings we should expect to find in the brain under normal operating conditions; leads to novel interpretations of an array of existing experimental results; and provides a tool for understanding which changes associated with neurological disorders, aging, or stress are most likely to produce memory deficits-knowledge that could eventually help in the design of improved clinical treatments for memory loss.
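As a concrete, if toy, illustration of dendrites acting as fixed-size learning units for one-shot ("online") familiarity memory, the sketch below uses assumed parameters (the random wiring, pattern density, and winner-take-all storage gate are all illustrative choices, not the paper's analytical model); it shows the quantity being optimized, recognition of once-seen patterns, as a function of synapses per dendrite.

```python
# Toy one-shot ("online") familiarity memory built from dendritic
# subunits. Wiring, pattern density, and the storage gate are assumed
# for illustration; this is not the paper's analytical capacity model.
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_dendrites, syn_per_dendrite = 2000, 50, 300
pattern_density = 0.05   # fraction of inputs active in each pattern

# fixed random wiring: which input axons contact which dendrite
wiring = np.stack([rng.choice(n_inputs, syn_per_dendrite, replace=False)
                   for _ in range(n_dendrites)])
weights = np.zeros((n_dendrites, syn_per_dendrite))

def drive(pattern):
    """Peak dendritic drive evoked by a binary input pattern."""
    return (weights * pattern[wiring]).sum(axis=1).max()

def store(pattern):
    """One-shot Hebbian storage on the most strongly driven dendrite."""
    d = pattern[wiring].sum(axis=1).argmax()        # simple storage gate
    weights[d] = np.maximum(weights[d], pattern[wiring[d]])

patterns = (rng.random((200, n_inputs)) < pattern_density).astype(float)
stored, novel = patterns[:100], patterns[100:]
for p in stored:
    store(p)

hit = np.mean([drive(p) for p in stored])
fa = np.mean([drive(p) for p in novel])
print(f"mean peak drive: stored {hit:.1f} vs novel {fa:.1f}")
# Stored patterns re-evoke clearly larger drive; repeating the sweep for
# different syn_per_dendrite probes how dendrite size shapes capacity.
```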


Subject(s)
Dendrites/physiology; Memory/physiology; Recognition, Psychology/physiology; Animals; Brain/physiology; Computer Simulation; Dendrites/metabolism; Humans; Learning/physiology; Models, Neurological; Neural Networks, Computer; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology
4.
Phys Rev E ; 108(5-1): 054129, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38115511

ABSTRACT

Across many disciplines, from neuroscience and genomics to machine learning, atmospheric science, and finance, the problems of denoising large data matrices to recover hidden signals obscured by noise, and of estimating the structure of those signals, are of fundamental importance. A key to solving these problems lies in understanding how the singular value structure of a signal is deformed by noise. This question has been thoroughly studied in the well-known spiked matrix model, in which data matrices originate from low-rank signal matrices perturbed by additive noise matrices, in an asymptotic limit where matrix size tends to infinity but the signal rank remains finite. We first show, strikingly, that the singular value structure of large finite matrices (of size ∼1000) with even moderate-rank signals, as low as 10, is not accurately predicted by the finite-rank theory, thereby limiting the application of this theory to real data. To address these deficiencies, we analytically compute how the singular values and vectors of an arbitrary high-rank signal matrix are deformed by additive noise. We focus on an asymptotic limit corresponding to an extensive spike model, in which both the signal rank and the size of the data matrix tend to infinity at a constant ratio. We map out the phase diagram of the singular value structure of the extensive spike model as a joint function of signal strength and rank. We further exploit these analytics to derive optimal rotationally invariant denoisers to recover the hidden high-rank signal from the data, as well as optimal invariant estimators of the signal covariance structure. Our extensive-rank results yield several conceptual differences compared to the finite-rank case: (1) as signal strength increases, the singular value spectrum does not directly transition from a unimodal bulk phase to a disconnected phase, but instead there is a bimodal connected regime separating them; (2) the signal singular vectors can be partially estimated even in the unimodal bulk regime, and thus the transitions in the data singular value spectrum do not coincide with a detectability threshold for the signal singular vectors, unlike in the finite-rank theory; (3) signal singular values interact nontrivially to generate data singular values in the extensive-rank model, whereas they are noninteracting in the finite-rank theory; and (4) as a result, the more sophisticated data denoisers and signal covariance estimators we derive, which take into account these nontrivial extensive-rank interactions, significantly outperform their simpler, noninteracting, finite-rank counterparts, even on data matrices of only moderate rank. Overall, our results provide fundamental theory governing how high-dimensional signals are deformed by additive noise, together with practical formulas for optimal denoising and covariance estimation.
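The finite-rank baseline the abstract argues against is easy to reproduce numerically. For a rank-1 spike of strength θ in an n×n matrix with i.i.d. N(0, 1/n) noise, the classic finite-rank result predicts a top data singular value of θ + 1/θ when θ > 1 and the bulk edge 2 otherwise. The sketch below checks this, then prints a moderate-rank case where, per the abstract, the noninteracting prediction begins to miss (the sizes and strengths used are illustrative assumptions).

```python
# Finite-rank spiked-matrix baseline (illustrative sizes; not the
# paper's extensive-rank theory). Data M = theta * U V^T + X with X
# n-by-n, entries i.i.d. N(0, 1/n). Classic prediction for rank 1:
# top singular value -> theta + 1/theta if theta > 1, else bulk edge 2.
import numpy as np

rng = np.random.default_rng(3)
n = 1000

def spiked_svals(theta, rank, k):
    """Top k singular values of a rank-`rank` spike plus noise."""
    u = np.linalg.qr(rng.normal(size=(n, rank)))[0]  # orthonormal columns
    v = np.linalg.qr(rng.normal(size=(n, rank)))[0]
    M = theta * (u @ v.T) + rng.normal(size=(n, n)) / np.sqrt(n)
    return np.linalg.svd(M, compute_uv=False)[:k]

for theta in [0.5, 1.5, 3.0]:
    pred = theta + 1.0 / theta if theta > 1 else 2.0
    s = spiked_svals(theta, rank=1, k=1)[0]
    print(f"rank 1, theta={theta}: empirical {s:.3f} vs prediction {pred:.3f}")

# Moderate rank: the noninteracting prediction (ten equal copies of
# theta + 1/theta) starts to miss, which is the abstract's opening point.
print("rank 10, theta=1.5:", np.round(spiked_svals(1.5, 10, 10), 3))
print("noninteracting prediction:", round(1.5 + 1 / 1.5, 3))
```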

5.
Neuron ; 111(1): 121-137.e13, 2023 01 04.
Article in English | MEDLINE | ID: mdl-36306779

ABSTRACT

The discovery of entorhinal grid cells has generated considerable interest in how and why hexagonal firing fields might emerge in a generic manner from neural circuits, and what their computational significance might be. Here, we forge a link between the problem of path integration and the existence of hexagonal grids, by demonstrating that such grids arise in neural networks trained to path integrate under simple biologically plausible constraints. Moreover, we develop a unifying theory for why hexagonal grids are ubiquitous in path-integrator circuits. Such trained networks also yield powerful mechanistic hypotheses, exhibiting realistic levels of biological variability not captured by hand-designed models. We furthermore develop methods to analyze the connectome and activity maps of our networks to elucidate fundamental mechanisms underlying path integration. These methods provide a road map to go from connectomic and physiological measurements to conceptual understanding in a manner that could generalize to other settings.
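A minimal version of the training setup the abstract describes, an RNN optimized to path integrate, can be sketched in a few lines. The code below is a toy (the architecture, scales, and training schedule are assumptions, and it omits the biological constraints the paper says are needed for grid cells to emerge); it shows only the velocity-in, position-out task.

```python
# Toy path-integration task in PyTorch (architecture and scales are
# assumptions; the paper's biological constraints are omitted): an RNN
# receives 2-D velocity and is trained to output integrated position.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, batch, hidden = 20, 64, 128

rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
readout = nn.Linear(hidden, 2)   # decode 2-D position from hidden state
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()),
                       lr=1e-3)

for step in range(2001):
    vel = 0.1 * torch.randn(batch, T, 2)   # random velocity stream
    pos = torch.cumsum(vel, dim=1)         # ground-truth integrated path
    h, _ = rnn(vel)
    loss = ((readout(h) - pos) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step}: position MSE {loss.item():.4f}")
# A trained network carries a position code in its hidden units; per the
# paper, adding simple biological constraints makes grid-like tuning emerge.
```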


Subject(s)
Grid Cells; Grid Cells/physiology; Entorhinal Cortex/physiology; Models, Neurological; Neural Networks, Computer; Computer Systems