Results 1 - 10 of 10
1.
Nat Neurosci; 27(6): 1199-1210, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38710876

ABSTRACT

Recent work has argued that large-scale neural recordings are often well described by patterns of coactivation across neurons. Yet the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here we argue that task-relevant variability in neural data can also cofluctuate over trials or time, defining distinct 'covariability classes' that may co-occur within the same dataset. To demix these covariability classes, we develop sliceTCA (slice tensor component analysis), a new unsupervised dimensionality reduction method for neural data tensors. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multiregion recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
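The key object in sliceTCA is a "slice rank one" component: a loading vector along one tensor mode, broadcast against a shared matrix over the other two modes. A minimal numpy sketch of what two such covariability classes look like on a synthetic neurons x trials x time tensor (illustrative only; all sizes are placeholders and this is not the authors' implementation or fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 8, 5, 20  # neurons, trials, time bins (arbitrary sizes)

# Neuron-slicing component: a loading vector over neurons, broadcast
# against a shared trial-by-time pattern (trial/time covariability).
u_neuron = rng.random(N)
pattern_kt = rng.random((K, T))
comp_neuron = np.einsum('n,kt->nkt', u_neuron, pattern_kt)

# Trial-slicing component: a loading vector over trials, broadcast
# against a shared neuron-by-time pattern (e.g. a stereotyped sequence
# that varies in amplitude across trials).
u_trial = rng.random(K)
pattern_nt = rng.random((N, T))
comp_trial = np.einsum('k,nt->nkt', u_trial, pattern_nt)

# A data tensor mixing two covariability classes; sliceTCA's goal is
# to demix such components from real recordings.
X = comp_neuron + comp_trial
```

Note that a single slice component can already carry high-dimensional structure (the full K x T or N x T matrix), which is what distinguishes this decomposition from a fixed low-dimensional subspace model.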


Subject(s)
Neurons; Animals; Mice; Neurons/physiology; Motor Cortex/physiology; Models, Neurological
2.
Curr Opin Neurobiol; 83: 102759, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37708653

ABSTRACT

While neural plasticity has long been studied as the basis of learning, the growth of large-scale neural recording techniques provides a unique opportunity to study how learning-induced activity changes are coordinated across neurons within the same circuit. These distributed changes can be understood through an evolution of the geometry of neural manifolds and latent dynamics underlying new computations. In parallel, studies of multi-task and continual learning in artificial neural networks hint at a tradeoff between non-interference and compositionality as guiding principles to understand how neural circuits flexibly support multiple behaviors. In this review, we highlight recent findings from both biological and artificial circuits that together form a new framework for understanding task learning at the population level.


Subject(s)
Learning; Neural Networks, Computer; Learning/physiology; Neurons/physiology; Neuronal Plasticity/physiology
3.
Nat Neurosci; 24(8): 1142-1150, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34168340

ABSTRACT

In classical theories of cerebellar cortex, high-dimensional sensorimotor representations are used to separate neuronal activity patterns, improving associative learning and motor performance. Recent experimental studies suggest that cerebellar granule cell (GrC) population activity is low-dimensional. To examine sensorimotor representations from the point of view of downstream Purkinje cell 'decoders', we used three-dimensional acousto-optic lens two-photon microscopy to record from hundreds of GrC axons. Here we show that GrC axon population activity is high dimensional and distributed with little fine-scale spatial structure during spontaneous behaviors. Moreover, distinct behavioral states are represented along orthogonal dimensions in neuronal activity space. These results suggest that the cerebellar cortex supports high-dimensional representations and segregates behavioral state-dependent computations into orthogonal subspaces, as reported in the neocortex. Our findings match the predictions of cerebellar pattern separation theories and suggest that the cerebellum and neocortex use population codes with common features, despite their vastly different circuit structures.


Subject(s)
Axons/physiology; Cerebellum/physiology; Animals; Behavior, Animal/physiology; Female; Imaging, Three-Dimensional/methods; Locomotion/physiology; Male; Mice; Mice, Transgenic
4.
Nat Neurosci; 24(7): 903-904, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34131326

Subject(s)
Decision Making
5.
Neuron; 103(3): 395-411.e5, 2019 Aug 7.
Article in English | MEDLINE | ID: mdl-31201122

ABSTRACT

Computational models are powerful tools for exploring the properties of complex biological systems. In neuroscience, data-driven models of neural circuits that span multiple scales are increasingly being used to understand brain function in health and disease. But their adoption and reuse has been limited by the specialist knowledge required to evaluate and use them. To address this, we have developed Open Source Brain, a platform for sharing, viewing, analyzing, and simulating standardized models from different brain regions and species. Model structure and parameters can be automatically visualized and their dynamical properties explored through browser-based simulations. Infrastructure and tools for collaborative interaction, development, and testing are also provided. We demonstrate how existing components can be reused by constructing new models of inhibition-stabilized cortical networks that match recent experimental results. These features of Open Source Brain improve the accessibility, transparency, and reproducibility of models and facilitate their reuse by the wider community.


Subject(s)
Brain/physiology; Computational Biology/standards; Computer Simulation; Models, Neurological; Neurons/physiology; Brain/cytology; Computational Biology/methods; Humans; Internet; Neural Networks, Computer; Online Systems
6.
Neuron; 101(4): 584-602, 2019 Feb 20.
Article in English | MEDLINE | ID: mdl-30790539

ABSTRACT

When animals interact with complex environments, their neural circuits must separate overlapping patterns of activity that represent sensory and motor information. Pattern separation is thought to be a key function of several brain regions, including the cerebellar cortex, insect mushroom body, and dentate gyrus. However, recent findings have questioned long-held ideas on how these circuits perform this fundamental computation. Here, we re-evaluate the functional and structural mechanisms underlying pattern separation. We argue that the dimensionality of the space available for population codes representing sensory and motor information provides a common framework for understanding pattern separation. We then discuss how these three circuits use different strategies to separate activity patterns and facilitate associative learning in the presence of trial-to-trial variability.


Subject(s)
Psychomotor Performance; Visual Perception; Animals; Hippocampus/physiology; Models, Neurological; Sensorimotor Cortex/physiology
7.
J Neurosci; 38(29): 6442-6444, 2018 Jul 18.
Article in English | MEDLINE | ID: mdl-30021764
8.
Entropy (Basel); 20(7), 2018 Jun 23.
Article in English | MEDLINE | ID: mdl-33265579

ABSTRACT

Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting "Reliable Moment" model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.

9.
Nat Commun; 8(1): 1116, 2017 Oct 24.
Article in English | MEDLINE | ID: mdl-29061964

ABSTRACT

Pattern separation is a fundamental function of the brain. The divergent feedforward networks thought to underlie this computation are widespread, yet exhibit remarkably similar sparse synaptic connectivity. Marr-Albus theory postulates that such networks separate overlapping activity patterns by mapping them onto larger numbers of sparsely active neurons. But spatial correlations in synaptic input and those introduced by network connectivity are likely to compromise performance. To investigate the structural and functional determinants of pattern separation we built models of the cerebellar input layer with spatially correlated input patterns, and systematically varied their synaptic connectivity. Performance was quantified by the learning speed of a classifier trained on either the input or output patterns. Our results show that sparse synaptic connectivity is essential for separating spatially correlated input patterns over a wide range of network activity, and that expansion and correlations, rather than sparse activity, are the major determinants of pattern separation.
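The Marr-Albus expansion described here, mapping correlated inputs onto a larger population through sparse feedforward wiring with a firing threshold, can be sketched in a few lines of numpy. All sizes, the fan-in of 4, and the target sparsity are placeholder assumptions, and the threshold rule is a simplification of the paper's models:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 50, 500  # mossy fibers -> granule cells (10x expansion)
fan_in = 4             # sparse connectivity: 4 inputs per output cell

# Two spatially correlated input patterns (shared component + noise).
base = rng.random(n_in)
x1 = base + 0.3 * rng.random(n_in)
x2 = base + 0.3 * rng.random(n_in)

# Sparse random feedforward weights: each output cell samples fan_in inputs.
W = np.zeros((n_out, n_in))
for i in range(n_out):
    W[i, rng.choice(n_in, size=fan_in, replace=False)] = rng.random(fan_in)

def expand(x, sparsity=0.1):
    """Threshold the summed drive so only the top `sparsity` fraction
    of output units fire (a simple stand-in for inhibitory control)."""
    drive = W @ x
    theta = np.quantile(drive, 1 - sparsity)
    return (drive > theta).astype(float)

y1, y2 = expand(x1), expand(x2)

def overlap(a, b):
    """Normalized dot product: 1 for identical patterns."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Compare input overlap with output overlap; with sparse wiring the
# expanded representations are typically easier to separate.
print(overlap(x1, x2), overlap(y1, y2))
```

The abstract's quantitative claim, that expansion and (de)correlation rather than sparse activity per se drive separability, was measured via classifier learning speed, which this sketch does not reproduce.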


Subject(s)
Nerve Net/physiology , Neural Pathways/physiology , Neurons/physiology , Synaptic Transmission , Animals , Axons/metabolism , Brain/physiology , Cerebellum/physiology , Dendrites/physiology , Humans , Imaging, Three-Dimensional , Learning , Models, Neurological , Models, Statistical , Neural Networks, Computer , Normal Distribution , Synapses/physiology
10.
Neural Comput; 25(7): 1768-1806, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23607560

ABSTRACT

Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarifying their origin and relationship in random, feedforward networks of McCulloch-Pitts neurons. Two key parameters are the thresholds at which neurons fire spikes and the overall level of feedforward connectivity. When neurons have low thresholds, we show that there is always a connectivity for which the properties in question all occur, that is, these networks preserve overall firing rates from layer to layer and produce broad distributions of activity in each layer. This fails to occur, however, when neurons have high thresholds. A key tool in explaining this difference is the eigenstructure of the resulting mean-field Markov chain, as this reveals which activity modes will be preserved from layer to layer. We extend our analysis from purely excitatory networks to more complex models that include inhibition and local noise, and find that both of these features extend the parameter ranges over which networks produce the properties of interest.
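The mean-field Markov chain mentioned in the abstract can be constructed directly: given k active neurons in one layer, each next-layer neuron fires if at least theta of its (binomially sampled) inputs are active, so the next layer's active count is again binomial. A small sketch with placeholder parameters (N, p, theta are arbitrary choices, not the paper's values):

```python
import numpy as np
from math import comb

N = 20       # neurons per layer
p = 0.5      # feedforward connection probability
theta = 2    # firing threshold (active inputs needed to spike)

def binom_pmf(n, q, j):
    """P(Binomial(n, q) = j)."""
    return comb(n, j) * q**j * (1 - q)**(n - j)

def fire_prob(k):
    """P(a next-layer neuron fires | k active presynaptic neurons):
    at least theta of its Binomial(k, p) active inputs."""
    return sum(binom_pmf(k, p, j) for j in range(theta, k + 1))

# Mean-field Markov chain over the number of active neurons per layer:
# P[k, m] = P(m active in next layer | k active in this layer).
# Rows are probability distributions (they sum to 1).
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    f = fire_prob(k)
    for m in range(N + 1):
        P[k, m] = binom_pmf(N, f, m)

# The eigenstructure of P reveals which activity modes persist from
# layer to layer, the diagnostic used in the abstract.
eigvals = np.linalg.eigvals(P)
```

With theta > 0 the silent state (k = 0) is absorbing, so the chain always has a unit eigenvalue; the interesting question analyzed in the paper is how the remaining spectrum depends on thresholds and connectivity.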


Subject(s)
Action Potentials/physiology; Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Neurons/physiology; Synaptic Transmission/physiology; Animals; Brain; Computer Simulation; Humans; Markov Chains; Neural Inhibition; Time Factors