Results 1 - 20 of 57
2.
Neural Comput ; 36(1): 151-174, 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38052080

ABSTRACT

In this work, we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance traveled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction among the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuous-time model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase-space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents that cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD. Understanding the limiting dynamics of SGD, and its dependence on various important hyperparameters like batch size, learning rate, and momentum, can serve as a basis for future work that can turn these insights into algorithmic gains.
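The setting analyzed above is easy to sketch numerically. Below is a minimal, illustrative simulation (not the paper's code): SGD with momentum on linear regression, fitting a power-law exponent to the distance traveled in parameter space long after the loss has converged. All sizes and hyperparameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression: loss L(w) = 0.5 * ||X w - y||^2 / n
n, d = 500, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

lr, beta, batch = 0.05, 0.9, 16   # learning rate, momentum, batch size (illustrative)
w, v = np.zeros(d), np.zeros(d)
w_ref, steps, dists = None, [], []

for t in range(20001):
    idx = rng.integers(0, n, batch)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
    v = beta * v - lr * grad
    w = w + v
    if t == 5000:                    # reference point taken after convergence
        w_ref = w.copy()
    elif t > 5000 and t % 100 == 0:
        steps.append(t - 5000)
        dists.append(np.linalg.norm(w - w_ref))

# Exponent c of the power law  distance ~ (number of updates)^c
c = np.polyfit(np.log(steps), np.log(dists), 1)[0]
print(round(c, 2))
```

In this quadratic toy the drift eventually saturates at the scale of the stationary distribution, so the fitted exponent is only a rough analogue of the anomalous diffusion measured in deep networks.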

3.
Phys Rev E ; 108(5-1): 054129, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38115511

ABSTRACT

Across many disciplines, from neuroscience and genomics to machine learning, atmospheric science, and finance, the problems of denoising large data matrices to recover hidden signals obscured by noise, and of estimating the structure of these signals, are of fundamental importance. A key to solving these problems lies in understanding how the singular value structure of a signal is deformed by noise. This question has been thoroughly studied in the well-known spiked matrix model, in which data matrices originate from low-rank signal matrices perturbed by additive noise matrices, in an asymptotic limit where matrix size tends to infinity but the signal rank remains finite. We first show, strikingly, that the singular value structure of large finite matrices (of size ∼1000) with even moderate-rank signals, as low as 10, is not accurately predicted by the finite-rank theory, thereby limiting the application of this theory to real data. To address these deficiencies, we analytically compute how the singular values and vectors of an arbitrary high-rank signal matrix are deformed by additive noise. We focus on an asymptotic limit corresponding to an extensive spike model, in which both the signal rank and the size of the data matrix tend to infinity at a constant ratio. We map out the phase diagram of the singular value structure of the extensive spike model as a joint function of signal strength and rank. We further exploit these analytics to derive optimal rotationally invariant denoisers to recover the hidden high-rank signal from the data, as well as optimal invariant estimators of the signal covariance structure. Our extensive-rank results yield several conceptual differences compared to the finite-rank case: (1) as signal strength increases, the singular value spectrum does not directly transition from a unimodal bulk phase to a disconnected phase, but instead there is a bimodal connected regime separating them; (2) the signal singular vectors can be partially estimated even in the unimodal bulk regime, and thus the transitions in the data singular value spectrum do not coincide with a detectability threshold for the signal singular vectors, unlike in the finite-rank theory; (3) signal singular values interact nontrivially to generate data singular values in the extensive-rank model, whereas they are noninteracting in the finite-rank theory; and (4) as a result, the more sophisticated data denoisers and signal covariance estimators we derive, which take into account these nontrivial extensive-rank interactions, significantly outperform their simpler, noninteracting, finite-rank counterparts, even on data matrices of only moderate rank. Overall, our results provide fundamental theory governing how high-dimensional signals are deformed by additive noise, together with practical formulas for optimal denoising and covariance estimation.
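The finite-rank baseline this abstract contrasts against can be checked in a few lines. The sketch below (illustrative, not from the paper) builds a rank-1 spiked square matrix with i.i.d. noise of variance 1/N and compares the top data singular value to the classical finite-rank (BBP) prediction s + 1/s for signal strength s > 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-1 spike of strength s in a square matrix with noise variance 1/N.
# For s > 1 the finite-rank theory predicts a top singular value near s + 1/s.
N, s = 1000, 3.0
u = rng.standard_normal(N); u /= np.linalg.norm(u)
v = rng.standard_normal(N); v /= np.linalg.norm(v)
A = s * np.outer(u, v) + rng.standard_normal((N, N)) / np.sqrt(N)

top = np.linalg.svd(A, compute_uv=False)[0]
bbp = s + 1.0 / s
print(round(top, 2), round(bbp, 2))
```

The paper's point is precisely that this clean agreement degrades once the signal rank itself grows with N.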

4.
Phys Rev E ; 108(1-1): 014403, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37583173

ABSTRACT

We combine stochastic thermodynamics, large deviation theory, and information theory to derive fundamental limits on the accuracy with which single-cell receptors can estimate external concentrations. As expected, if the estimation is performed by an ideal observer of the entire trajectory of receptor states, then no energy-consuming nonequilibrium receptor that can be divided into bound and unbound states can outperform an equilibrium two-state receptor. However, when the estimation is performed by a simple observer that measures the fraction of time the receptor is bound, we derive a fundamental limit on the accuracy of general nonequilibrium receptors as a function of energy consumption. We further derive and exploit explicit formulas to numerically estimate a Pareto-optimal tradeoff between accuracy and energy. We find this tradeoff can be achieved by nonuniform ring receptors with a number of states that necessarily increases with energy. Our results yield a thermodynamic uncertainty relation for the time a physical system spends in a pool of states and generalize the classic Berg-Purcell limit [H. C. Berg and E. M. Purcell, Biophys. J. 20, 193 (1977)] on cellular sensing along multiple dimensions.
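The equilibrium two-state receptor and the simple bound-fraction observer can be simulated directly. A minimal sketch with illustrative rates: dwell times are exponential, the bound fraction p = c·k_on / (c·k_on + k_off) is measured over a long window, and the concentration estimate inverts that relation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Equilibrium two-state receptor: binds at rate k_on * c, unbinds at k_off.
# The "simple observer" measures the fraction of time bound over a window T
# and inverts p = c*k_on / (c*k_on + k_off) to estimate the concentration c.
k_on, k_off, c = 1.0, 1.0, 2.0   # illustrative rates and true concentration
T = 5000.0

t, bound, t_bound = 0.0, False, 0.0
while t < T:
    rate = k_off if bound else k_on * c
    dwell = min(rng.exponential(1.0 / rate), T - t)
    if bound:
        t_bound += dwell
    t += dwell
    bound = not bound

p_hat = t_bound / T
c_hat = k_off * p_hat / (k_on * (1.0 - p_hat))
print(round(c_hat, 2))
```

Shrinking T inflates the estimator's variance, which is the regime where the Berg-Purcell-style limits discussed above bite.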

5.
Neuron ; 111(17): 2742-2755.e4, 2023 09 06.
Article in English | MEDLINE | ID: mdl-37451264

ABSTRACT

Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model's internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.


Subject(s)
Neurological Models , Retina , Retina/physiology , Neurons/physiology , Interneurons/physiology
6.
bioRxiv ; 2023 Oct 28.
Article in English | MEDLINE | ID: mdl-37292703

ABSTRACT

The ability of the brain to discriminate among visual stimuli is constrained by their retinal representations. Previous studies of visual discriminability have been limited either to low-dimensional artificial stimuli or to purely theoretical considerations without a realistic encoding model. Here we propose a novel framework for understanding the stimulus discriminability achieved by retinal representations of naturalistic stimuli, using the methods of information geometry. To model the joint probability distribution of neural responses conditioned on the stimulus, we created a stochastic encoding model of a population of salamander retinal ganglion cells based on a three-layer convolutional neural network model. This model accurately captured not only the mean response to natural scenes but also a variety of second-order statistics. With the model and the proposed theory, we computed the Fisher information metric over stimuli to study the most discriminable stimulus directions. We found that the most discriminable stimulus varied substantially across stimuli, allowing an examination of the relationship between the most discriminable stimulus and the current stimulus. By examining responses generated by the most discriminable stimuli, we further found that the most discriminative response mode is often aligned with the most stochastic mode. This finding carries the important implication that under natural scenes, retinal noise correlations are information-limiting rather than increasing information transmission, as has been previously speculated. We additionally observed that sensitivity saturates less in the population than for single cells and that, as a function of firing rate, Fisher information varies less than sensitivity. We conclude that under natural scenes, population coding benefits from complementary coding and helps to equalize the information carried by different firing rates, which may facilitate decoding of the stimulus under principles of information maximization.
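For conditionally Poisson responses, the Fisher information metric used above reduces to F(s) = J(s)ᵀ diag(1/f(s)) J(s), where f is the firing-rate vector and J its Jacobian in stimulus space. A small sketch with a toy random softplus encoder standing in for the CNN encoding model (all sizes illustrative); the most discriminable direction is the top eigenvector of F.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy encoding model: 50 Poisson neurons with random softplus tuning to a
# 5-dimensional stimulus (a stand-in for the fitted CNN encoding model).
n_neurons, dim = 50, 5
W = rng.standard_normal((n_neurons, dim))
b = 2.0

def rates(s):
    return np.logaddexp(0.0, W @ s + b)   # softplus keeps rates positive

def fisher(s, eps=1e-5):
    # Poisson Fisher information matrix F = J^T diag(1/f) J,
    # with the Jacobian J computed by central finite differences.
    J = np.stack([(rates(s + eps * e) - rates(s - eps * e)) / (2 * eps)
                  for e in np.eye(dim)], axis=1)
    return J.T @ (J / rates(s)[:, None])

s0 = rng.standard_normal(dim)
F = fisher(s0)
evals, evecs = np.linalg.eigh(F)
most_discriminable = evecs[:, -1]   # stimulus direction of maximal Fisher info
print(round(evals[-1], 2))
```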

7.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities, like game playing and language, that are especially well developed or uniquely human to those capabilities, inherited from over 500 million years of evolution, that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subject(s)
Artificial Intelligence , Neurosciences , Animals , Humans
8.
Cell ; 186(1): 178-193.e15, 2023 01 05.
Article in English | MEDLINE | ID: mdl-36608653

ABSTRACT

The hypothalamus regulates innate social behaviors, including mating and aggression. These behaviors can be evoked by optogenetic stimulation of specific neuronal subpopulations within the medial preoptic area (MPOA) and the ventrolateral ventromedial hypothalamus (VMHvl), respectively. Here, we perform dynamical systems modeling of population neuronal activity in these nuclei during social behaviors. In VMHvl, unsupervised analysis identified a dominant dimension of neural activity with a large time constant (>50 s), generating an approximate line attractor in neural state space. Progression of the neural trajectory along this attractor was correlated with an escalation of agonistic behavior, suggesting that it may encode a scalable state of aggressiveness. Consistent with this, individual differences in the magnitude of the integration dimension time constant were strongly correlated with differences in aggressiveness. In contrast, approximate line attractors were not observed in MPOA during mating; instead, neurons with fast dynamics were tuned to specific actions. Thus, different hypothalamic nuclei employ distinct neural population codes to represent similar social behaviors.


Subject(s)
Animal Sexual Behavior , Ventromedial Hypothalamic Nucleus , Animals , Animal Sexual Behavior/physiology , Ventromedial Hypothalamic Nucleus/physiology , Hypothalamus/physiology , Aggression/physiology , Social Behavior
9.
Neuron ; 111(1): 121-137.e13, 2023 01 04.
Article in English | MEDLINE | ID: mdl-36306779

ABSTRACT

The discovery of entorhinal grid cells has generated considerable interest in how and why hexagonal firing fields might emerge in a generic manner from neural circuits, and what their computational significance might be. Here, we forge a link between the problem of path integration and the existence of hexagonal grids, by demonstrating that such grids arise in neural networks trained to path integrate under simple biologically plausible constraints. Moreover, we develop a unifying theory for why hexagonal grids are ubiquitous in path-integrator circuits. Such trained networks also yield powerful mechanistic hypotheses, exhibiting realistic levels of biological variability not captured by hand-designed models. We furthermore develop methods to analyze the connectome and activity maps of our networks to elucidate fundamental mechanisms underlying path integration. These methods provide a road map to go from connectomic and physiological measurements to conceptual understanding in a manner that could generalize to other settings.


Subject(s)
Grid Cells , Grid Cells/physiology , Entorhinal Cortex/physiology , Neurological Models , Neural Networks (Computer) , Computer Systems
10.
PLoS Comput Biol ; 18(10): e1010593, 2022 10.
Article in English | MEDLINE | ID: mdl-36251693

ABSTRACT

Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly coupled neurons to achieve errors that scale as 1/√N. But more interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise, and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the buildup of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
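The classical baseline at the start of this abstract is easy to verify numerically: averaging N independently noisy neurons gives a readout error scaling as 1/√N (the balanced predictive-coding networks discussed above improve this to 1/N). An illustrative check:

```python
import numpy as np

rng = np.random.default_rng(4)

# Each trial decodes a zero signal as the mean of N unit-variance neurons;
# the readout error should scale as N**(-1/2).
Ns = [16, 64, 256, 1024, 4096]
errs = []
for N in Ns:
    trials = 4000
    readout = rng.standard_normal((trials, N)).mean(axis=1)
    errs.append(readout.std())

slope = np.polyfit(np.log(Ns), np.log(errs), 1)[0]   # expect about -0.5
print(round(slope, 2))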


Asunto(s)
Modelos Neurológicos , Red Nerviosa , Potenciales de Acción/fisiología , Red Nerviosa/fisiología , Redes Neurales de la Computación , Neuronas/fisiología , Transmisión Sináptica/fisiología
11.
Proc Natl Acad Sci U S A ; 119(43): e2205791119, 2022 10 25.
Artículo en Inglés | MEDLINE | ID: mdl-36264834

Asunto(s)
Algoritmos
12.
Proc Natl Acad Sci U S A ; 119(43): e2200800119, 2022 10 25.
Artículo en Inglés | MEDLINE | ID: mdl-36251997

RESUMEN

Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.


Asunto(s)
Formación de Concepto , Redes Neurales de la Computación , Animales , Humanos , Aprendizaje/fisiología , Macaca , Plásticos , Primates , Vías Visuales/fisiología
13.
PLoS Comput Biol ; 18(9): e1010418, 2022 09.
Artículo en Inglés | MEDLINE | ID: mdl-36121844

RESUMEN

We introduce a novel, biologically plausible local learning rule that provably increases the robustness of neural dynamics to noise in nonlinear recurrent neural networks with homogeneous nonlinearities. Our learning rule achieves higher noise robustness without sacrificing performance on the task and without requiring any knowledge of the particular task. The plasticity dynamics-an integrable dynamical system operating on the weights of the network-maintains a multiplicity of conserved quantities, most notably the network's entire temporal map of input to output trajectories. The outcome of our learning rule is a synaptic balancing between the incoming and outgoing synapses of every neuron. This synaptic balancing rule is consistent with many known aspects of experimentally observed heterosynaptic plasticity, and moreover makes new experimentally testable predictions relating plasticity at the incoming and outgoing synapses of individual neurons. Overall, this work provides a novel, practical local learning rule that exactly preserves overall network function and, in doing so, provides new conceptual bridges between the disparate worlds of the neurobiology of heterosynaptic plasticity, the engineering of regularized noise-robust networks, and the mathematics of integrable Lax dynamical systems.


Asunto(s)
Modelos Neurológicos , Análisis y Desempeño de Tareas , Potenciales de Acción/fisiología , Aprendizaje/fisiología , Redes Neurales de la Computación , Plasticidad Neuronal/fisiología , Sinapsis/fisiología
14.
Neural Comput ; 34(8): 1652-1675, 2022 07 14.
Artículo en Inglés | MEDLINE | ID: mdl-35798321

RESUMEN

The computational role of the abundant feedback connections in the ventral visual stream is unclear, enabling humans and nonhuman primates to effortlessly recognize objects across a multitude of viewing conditions. Prior studies have augmented feedforward convolutional neural networks (CNNs) with recurrent connections to study their role in visual processing; however, often these recurrent networks are optimized directly on neural data or the comparative metrics used are undefined for standard feedforward networks that lack these connections. In this work, we develop task-optimized convolutional recurrent (ConvRNN) network models that more correctly mimic the timing and gross neuroanatomy of the ventral pathway. Properly chosen intermediate-depth ConvRNN circuit architectures, which incorporate mechanisms of feedforward bypassing and recurrent gating, can achieve high performance on a core recognition task, comparable to that of much deeper feedforward networks. We then develop methods that allow us to compare both CNNs and ConvRNNs to finely grained measurements of primate categorization behavior and neural response trajectories across thousands of stimuli. We find that high-performing ConvRNNs provide a better match to these data than feedforward networks of any depth, predicting the precise timings at which each stimulus is behaviorally decoded from neural activation patterns. Moreover, these ConvRNN circuits consistently produce quantitatively accurate predictions of neural dynamics from V4 and IT across the entire stimulus presentation. In fact, we find that the highest-performing ConvRNNs, which best match neural and behavioral data, also achieve a strong Pareto trade-off between task performance and overall network size. Taken together, our results suggest the functional purpose of recurrence in the ventral pathway is to fit a high-performing network in cortex, attaining computational power through temporal rather than spatial complexity.


Asunto(s)
Análisis y Desempeño de Tareas , Percepción Visual , Animales , Humanos , Macaca mulatta/fisiología , Redes Neurales de la Computación , Reconocimiento Visual de Modelos/fisiología , Reconocimiento en Psicología/fisiología , Vías Visuales/fisiología , Percepción Visual/fisiología
15.
Nat Commun ; 13(1): 4276, 2022 07 25.
Artículo en Inglés | MEDLINE | ID: mdl-35879320

RESUMEN

Neurons in the CA1 area of the mouse hippocampus encode the position of the animal in an environment. However, given the variability in individual neurons responses, the accuracy of this code is still poorly understood. It was proposed that downstream areas could achieve high spatial accuracy by integrating the activity of thousands of neurons, but theoretical studies point to shared fluctuations in the firing rate as a potential limitation. Using high-throughput calcium imaging in freely moving mice, we demonstrated the limiting factors in the accuracy of the CA1 spatial code. We found that noise correlations in the hippocampus bound the estimation error of spatial coding to ~10 cm (the size of a mouse). Maximal accuracy was obtained using approximately [300-1400] neurons, depending on the animal. These findings reveal intrinsic limits in the brain's representations of space and suggest that single neurons downstream of the hippocampus can extract maximal spatial information from several hundred inputs.


Asunto(s)
Hipocampo , Neuronas , Potenciales de Acción/fisiología , Animales , Hipocampo/fisiología , Ratones , Neuronas/fisiología
16.
Nature ; 605(7911): 713-721, 2022 05.
Artículo en Inglés | MEDLINE | ID: mdl-35589841

RESUMEN

Reliable sensory discrimination must arise from high-fidelity neural representations and communication between brain areas. However, how neocortical sensory processing overcomes the substantial variability of neuronal sensory responses remains undetermined1-6. Here we imaged neuronal activity in eight neocortical areas concurrently and over five days in mice performing a visual discrimination task, yielding longitudinal recordings of more than 21,000 neurons. Analyses revealed a sequence of events across the neocortex starting from a resting state, to early stages of perception, and through the formation of a task response. At rest, the neocortex had one pattern of functional connections, identified through sets of areas that shared activity cofluctuations7,8. Within about 200 ms after the onset of the sensory stimulus, such connections rearranged, with different areas sharing cofluctuations and task-related information. During this short-lived state (approximately 300 ms duration), both inter-area sensory data transmission and the redundancy of sensory encoding peaked, reflecting a transient increase in correlated fluctuations among task-related neurons. By around 0.5 s after stimulus onset, the visual representation reached a more stable form, the structure of which was robust to the prominent, day-to-day variations in the responses of individual cells. About 1 s into stimulus presentation, a global fluctuation mode conveyed the upcoming response of the mouse to every area examined and was orthogonal to modes carrying sensory data. Overall, the neocortex supports sensory performance through brief elevations in sensory coding redundancy near the start of perception, neural population codes that are robust to cellular variability, and widespread inter-area fluctuation modes that transmit sensory data and task responses in non-interfering channels.


Asunto(s)
Neocórtex , Percepción Visual , Animales , Discriminación en Psicología/fisiología , Ratones , Neocórtex/fisiología , Neuronas/fisiología , Reproducibilidad de los Resultados , Percepción Visual/fisiología
17.
Cell Rep ; 37(6): 109972, 2021 11 09.
Artículo en Inglés | MEDLINE | ID: mdl-34758304

RESUMEN

Cortical function relies on the balanced activation of excitatory and inhibitory neurons. However, little is known about the organization and dynamics of shaft excitatory synapses onto cortical inhibitory interneurons. Here, we use the excitatory postsynaptic marker PSD-95, fluorescently labeled at endogenous levels, as a proxy for excitatory synapses onto layer 2/3 pyramidal neurons and parvalbumin-positive (PV+) interneurons in the barrel cortex of adult mice. Longitudinal in vivo imaging under baseline conditions reveals that, although synaptic weights in both neuronal types are log-normally distributed, synapses onto PV+ neurons are less heterogeneous and more stable. Markov model analyses suggest that the synaptic weight distribution is set intrinsically by ongoing cell-type-specific dynamics, and substantial changes are due to accumulated gradual changes. Synaptic weight dynamics are multiplicative, i.e., changes scale with weights, although PV+ synapses also exhibit an additive component. These results reveal that cell-type-specific processes govern cortical synaptic strengths and dynamics.


Asunto(s)
Homólogo 4 de la Proteína Discs Large/fisiología , Potenciales Postsinápticos Excitadores/fisiología , Interneuronas/fisiología , Inhibición Neural , Parvalbúminas/metabolismo , Células Piramidales/fisiología , Sinapsis/fisiología , Animales , Femenino , Masculino , Ratones , Ratones Endogámicos C57BL , Ratones Noqueados , Plasticidad Neuronal
18.
Nat Commun ; 12(1): 5721, 2021 10 06.
Artículo en Inglés | MEDLINE | ID: mdl-34615862

RESUMEN

The intertwined processes of learning and evolution in complex environmental niches have resulted in a remarkable diversity of morphological forms. Moreover, many aspects of animal intelligence are deeply embodied in these evolved morphologies. However, the principles governing relations between environmental complexity, evolved morphology, and the learnability of intelligent control, remain elusive, because performing large-scale in silico experiments on evolution and learning is challenging. Here, we introduce Deep Evolutionary Reinforcement Learning (DERL): a computational framework which can evolve diverse agent morphologies to learn challenging locomotion and manipulation tasks in complex environments. Leveraging DERL we demonstrate several relations between environmental complexity, morphological intelligence and the learnability of control. First, environmental complexity fosters the evolution of morphological intelligence as quantified by the ability of a morphology to facilitate the learning of novel tasks. Second, we demonstrate a morphological Baldwin effect i.e., in our simulations evolution rapidly selects morphologies that learn faster, thereby enabling behaviors learned late in the lifetime of early ancestors to be expressed early in the descendants lifetime. Third, we suggest a mechanistic basis for the above relationships through the evolution of morphologies that are more physically stable and energy efficient, and can therefore facilitate learning and control.


Asunto(s)
Evolución Biológica , Aprendizaje Profundo , Recompensa , Animales , Simulación por Computador
19.
Cell Rep ; 36(10): 109669, 2021 09 07.
Artículo en Inglés | MEDLINE | ID: mdl-34496249

RESUMEN

During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.


Asunto(s)
Potenciales de Acción/fisiología , Corteza Entorrinal/fisiología , Giro del Cíngulo/fisiología , Percepción Visual/fisiología , Animales , Neuronas/fisiología , Corteza Visual Primaria/fisiología , Estudios Retrospectivos , Navegación Espacial/fisiología
20.
Cell ; 184(14): 3731-3747.e21, 2021 07 08.
Artículo en Inglés | MEDLINE | ID: mdl-34214470

RESUMEN

In motor neuroscience, state changes are hypothesized to time-lock neural assemblies coordinating complex movements, but evidence for this remains slender. We tested whether a discrete change from more autonomous to coherent spiking underlies skilled movement by imaging cerebellar Purkinje neuron complex spikes in mice making targeted forelimb-reaches. As mice learned the task, millimeter-scale spatiotemporally coherent spiking emerged ipsilateral to the reaching forelimb, and consistent neural synchronization became predictive of kinematic stereotypy. Before reach onset, spiking switched from more disordered to internally time-locked concerted spiking and silence. Optogenetic manipulations of cerebellar feedback to the inferior olive bi-directionally modulated neural synchronization and reaching direction. A simple model explained the reorganization of spiking during reaching as reflecting a discrete bifurcation in olivary network dynamics. These findings argue that to prepare learned movements, olivo-cerebellar circuits enter a self-regulated, synchronized state promoting motor coordination. State changes facilitating behavioral transitions may generalize across neural systems.


Asunto(s)
Movimiento/fisiología , Red Nerviosa/fisiología , Potenciales de Acción/fisiología , Animales , Calcio/metabolismo , Cerebelo/fisiología , Sincronización Cortical , Miembro Anterior/fisiología , Interneuronas/fisiología , Aprendizaje , Ratones Endogámicos C57BL , Ratones Transgénicos , Modelos Neurológicos , Actividad Motora/fisiología , Núcleo Olivar/fisiología , Optogenética , Células de Purkinje/fisiología , Conducta Estereotipada , Análisis y Desempeño de Tareas
SELECCIÓN DE REFERENCIAS
DETALLE DE LA BÚSQUEDA