Results 1 - 20 of 44
1.
Trends Cogn Sci ; 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38553340

ABSTRACT

One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.

2.
Elife ; 12: 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37970945

ABSTRACT

Grouping sets of sounds into relevant categories is an important cognitive ability that enables the association of stimuli with appropriate goal-directed behavioral responses. In perceptual tasks, the primary auditory cortex (A1) assumes a prominent role by concurrently encoding both sound sensory features and task-related variables. Here, we sought to explore the role of A1 in the initiation of sound categorization, shedding light on its involvement in this cognitive process. We trained ferrets to discriminate click trains of different rates in a Go/No-Go delayed categorization task and recorded neural activity during both active behavior and passive exposure to the same sounds. Purely categorical response components were extracted and analyzed separately from sensory responses to reveal their contributions to the overall population response throughout the trials. We found that categorical activity emerged during sound presentation in the population average and was present in both active behavioral and passive states. However, upon task engagement, categorical responses to the No-Go category became suppressed in the population code, leading to an asymmetrical representation of the Go stimuli relative to the No-Go sounds and pre-stimulus baseline. The population code underwent an abrupt change at stimulus offset, with sustained responses after the Go sounds during the delay period. Notably, the categorical responses observed during the stimulus period exhibited a significant correlation with those extracted from the delay epoch, suggesting an early involvement of A1 in stimulus categorization.


Subject(s)
Auditory Cortex , Auditory Perception , Animals , Auditory Perception/physiology , Auditory Cortex/physiology , Ferrets , Sound , Behavior, Animal/physiology , Acoustic Stimulation
3.
Nat Commun ; 14(1): 6837, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37884507

ABSTRACT

Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within primary auditory cortex (A1) but controlled by top-down inputs from the prelimbic region of the medial prefrontal cortex (mPFC), can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
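The gating mechanism sketched in this abstract can be caricatured in a few lines. Below is a toy illustration (hand-set gains and a hypothetical readout, not the paper's fitted model): two populations each respond to one stimulus, and a context signal sets their gains so that only the contextually relevant stimulus reaches a shared readout.

```python
import numpy as np

def gated_readout(stim_a, stim_b, context):
    """Toy population gating: context selects which population's
    response is passed to the readout (illustrative only)."""
    # In context "A" the A-preferring population has gain 1, the other 0
    gains = np.array([1.0, 0.0]) if context == "A" else np.array([0.0, 1.0])
    population_responses = np.array([stim_a, stim_b])
    return float(gains @ population_responses)
```

In context A the readout tracks stimulus A and ignores B, and vice versa; in the paper the gains are not hand-set but emerge from context-modulated populations in trained low-rank RNNs.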


Subject(s)
Auditory Cortex , Brain , Rats , Animals , Acoustic Stimulation/methods , Prefrontal Cortex
4.
ArXiv ; 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37693175

ABSTRACT

One major challenge of neuroscience is finding interesting structures in seemingly disorganized neural activity. Often these structures have computational implications that help to understand the functional role of a particular brain area. Here we outline a unified approach to characterize these structures by inspecting the representational geometry and the modularity properties of the recorded activity, and show that this approach can also reveal structures in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent works on model networks performing three classes of computations.

5.
PLoS Comput Biol ; 19(8): e1011315, 2023 08.
Article in English | MEDLINE | ID: mdl-37549194

ABSTRACT

Recurrent network models are instrumental in investigating how behaviorally-relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints, and in particular represent individual neurons in terms of abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
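A rough sketch of the construction described here: add a rank-one term to random excitatory-inhibitory connectivity and simulate leaky integrate-and-fire neurons, reading out a latent variable as the projection of spiking activity on the rank-one vector. All parameters (network size, weights, thresholds) are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
NE = int(0.8 * N)  # 80% excitatory, 20% inhibitory

# Random excitatory-inhibitory background connectivity (weights in mV)
chi = np.zeros((N, N))
chi[:, :NE] = rng.binomial(1, 0.1, (N, NE)) * 0.4
chi[:, NE:] = -rng.binomial(1, 0.1, (N, N - NE)) * 1.6

# Rank-one structure added on top (illustrative vectors m, n)
m = rng.standard_normal(N)
n = rng.standard_normal(N)
J = chi + np.outer(m, n) / N

# Leaky integrate-and-fire simulation (Euler scheme, time in ms)
dt, tau, v_th, v_reset, mu = 0.1, 20.0, 20.0, 10.0, 22.0
T = 2000
v = rng.uniform(v_reset, v_th, N)
spike_counts = np.zeros(N)
kappa = []  # latent variable: projection of spiking activity on m
for _ in range(T):
    spiking = v >= v_th
    spike_counts += spiking
    v[spiking] = v_reset
    # leak toward external drive mu, plus recurrent input from spikes
    v += dt / tau * (-v + mu) + J @ spiking
    kappa.append(m @ spiking / N)

rates = spike_counts / (T * dt) * 1000  # firing rates in spikes/s
```

Comparing the trajectory of `kappa` against the mean-field prediction of the equivalent rate network is the kind of analysis the paper carries out.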


Subject(s)
Models, Neurological , Neurons , Action Potentials/physiology , Neurons/physiology , Nerve Net/physiology
6.
Neuron ; 111(5): 739-753.e8, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36640766

ABSTRACT

Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.


Subject(s)
Brain , Neural Networks, Computer
7.
PLoS Comput Biol ; 19(1): e1010855, 2023 01.
Article in English | MEDLINE | ID: mdl-36689488

ABSTRACT

How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics and the statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
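A minimal numerical check of the mapping described above, under assumed parameters (not the paper's): for a two-population network whose connectivity is population-wise means plus independent noise, the outlier eigenvalue of the full matrix is well approximated by the dominant eigenvalue of the 2x2 reduced matrix built from the mean connectivity, i.e. the rank-two approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 800
NE = NI = N // 2

# Mean connectivity between the two populations (E, I); illustrative values
M = np.array([[2.0, -1.0],
              [1.5, -0.5]])

# Full connectivity: population-wise means (scaled by 1/N) plus iid noise
Jmean = np.block([
    [np.full((NE, NE), M[0, 0]), np.full((NE, NI), M[0, 1])],
    [np.full((NI, NE), M[1, 0]), np.full((NI, NI), M[1, 1])],
]) / N
g = 0.2
J = Jmean + g / np.sqrt(N) * rng.standard_normal((N, N))

# Low-rank prediction: outlier eigenvalues of J are approximated by the
# eigenvalues of the reduced matrix M scaled by the population fractions
A = M @ np.diag([NE / N, NI / N])
predicted = np.linalg.eigvals(A).real.max()  # dominant predicted outlier
outlier = np.linalg.eigvals(J).real.max()    # measured outlier of full J
```

The noise contributes a bulk of eigenvalues of radius roughly `g`, while the mean structure contributes isolated outliers; the paper extends this picture to reciprocal motifs via perturbation theory.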


Subject(s)
Neural Networks, Computer , Neurons , Neurons/physiology , Homeostasis , Normal Distribution , Models, Neurological , Nerve Net/physiology
8.
Curr Biol ; 33(4): 622-638.e7, 2023 02 27.
Article in English | MEDLINE | ID: mdl-36657448

ABSTRACT

The strategies found by animals facing a new task are determined both by individual experience and by structural priors evolved to leverage the statistics of natural environments. Rats quickly learn to capitalize on the trial sequence correlations of two-alternative forced choice (2AFC) tasks after correct trials but consistently deviate from optimal behavior after error trials. To understand this outcome-dependent gating, we first show that recurrent neural networks (RNNs) trained in the same 2AFC task outperform rats as they can readily learn to use across-trial information both after correct and error trials. We hypothesize that, although RNNs can optimize their behavior in the 2AFC task without any a priori restrictions, rats' strategy is constrained by a structural prior adapted to a natural environment in which rewarded and non-rewarded actions provide largely asymmetric information. When pre-training RNNs in a more ecological task with more than two possible choices, networks develop a strategy by which they gate off the across-trial evidence after errors, mimicking rats' behavior. Population analyses show that the pre-trained networks form an accurate representation of the sequence statistics independently of the outcome in the previous trial. After error trials, gating is implemented by a change in the network dynamics that temporarily decouples the categorization of the stimulus from the across-trial accumulated evidence. Our results suggest that the rats' suboptimal behavior reflects the influence of a structural prior that reacts to errors by isolating the network decision dynamics from the context, ultimately constraining the performance in a 2AFC laboratory task.


Subject(s)
Learning , Neural Networks, Computer , Rats , Animals , Behavior, Animal , Choice Behavior
9.
PLoS Comput Biol ; 18(8): e1010426, 2022 08.
Article in English | MEDLINE | ID: mdl-35944030

ABSTRACT

Neural population dynamics are often highly coordinated, allowing task-related computations to be understood as neural trajectories through low-dimensional subspaces. How the network connectivity and input structure give rise to such activity can be investigated with the aid of low-rank recurrent neural networks, a recently-developed class of computational models which offer a rich theoretical framework linking the underlying connectivity structure to emergent low-dimensional dynamics. This framework has so far relied on the assumption of all-to-all connectivity, yet cortical networks are known to be highly sparse. Here we investigate the dynamics of low-rank recurrent networks in which the connections are randomly sparsified, which makes the network connectivity formally full-rank. We first analyse the impact of sparsity on the eigenvalue spectrum of low-rank connectivity matrices, and use this to examine the implications for the dynamics. We find that in the presence of sparsity, the eigenspectra in the complex plane consist of a continuous bulk and isolated outliers, a form analogous to the eigenspectra of connectivity matrices composed of a low-rank and a full-rank random component. This analogy allows us to characterise distinct dynamical regimes of the sparsified low-rank network as a function of key network parameters. Altogether, we find that the low-dimensional dynamics induced by low-rank connectivity structure are preserved even at high levels of sparsity, and can therefore support rich and robust computations even in networks sparsified to a biologically-realistic extent.
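The eigenspectrum structure described above is easy to reproduce in a minimal sketch (illustrative parameters, not the paper's): take a rank-one connectivity matrix with a known outlier eigenvalue, randomly sparsify it while rescaling to preserve the mean, and observe that the spectrum splits into a continuous bulk plus an isolated outlier close to the original one.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 500

# Rank-one connectivity with outlier eigenvalue theta = (n . m) / N
m = rng.standard_normal(N)
n = 2.0 * m                       # correlated vectors -> theta close to 2
J_lowrank = np.outer(m, n) / N
theta = n @ m / N

# Randomly sparsify: keep each connection with probability p and rescale
# by 1/p, so the mean connectivity (and hence the outlier) is preserved
p = 0.2
mask = rng.random((N, N)) < p
J_sparse = J_lowrank * mask / p

eigs = np.linalg.eigvals(J_sparse)
idx = np.argsort(np.abs(eigs))
outlier = eigs[idx[-1]]                  # isolated outlier
bulk_radius = np.abs(eigs[idx[-2]])      # largest non-outlier modulus
```

The sparsified matrix is formally full-rank, yet the outlier (which sets the low-dimensional dynamics) survives, which is the mechanism behind the robustness result in the abstract.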


Subject(s)
Models, Neurological , Neural Networks, Computer , Population Dynamics
10.
Neural Comput ; 34(9): 1871-1892, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35896161

ABSTRACT

A large body of work has suggested that neural populations exhibit low-dimensional dynamics during behavior. However, there are a variety of different approaches for modeling low-dimensional neural population activity. One approach involves latent linear dynamical system (LDS) models, in which population activity is described by a projection of low-dimensional latent variables with linear dynamics. A second approach involves low-rank recurrent neural networks (RNNs), in which population activity arises directly from a low-dimensional projection of past activity. Although these two modeling approaches have strong similarities, they arise in different contexts and tend to have different domains of application. Here we examine the precise relationship between latent LDS models and linear low-rank RNNs. When can one model class be converted to the other, and vice versa? We show that latent LDS models can only be converted to RNNs in specific limit cases, due to the non-Markovian property of latent LDS models. Conversely, we show that linear RNNs can be mapped onto LDS models, with latent dimensionality at most twice the rank of the RNN. A surprising consequence of our results is that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.
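The key structural fact behind the RNN-to-LDS direction can be checked numerically in a few lines (a sketch with assumed dimensions, not the paper's derivation): in an autonomous rank-R linear RNN, the trajectory is confined after one step to the R-dimensional column space of the connectivity factor, and the projection z_t = V^T x_t obeys a linear latent dynamical system.

```python
import numpy as np

rng = np.random.default_rng(2)
N, R, T = 50, 2, 30

# Rank-R linear RNN: x_{t+1} = W x_t with W = U V^T
U = rng.standard_normal((N, R))
V = rng.standard_normal((N, R))
W = U @ V.T
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # keep dynamics stable

x = rng.standard_normal(N)
states = []
for _ in range(T):
    x = W @ x          # after the first step, x lies in the column space of U
    states.append(x.copy())
X = np.array(states)   # trajectory matrix, shape (T, N)

# Numerical rank of the trajectory is at most R
sv = np.linalg.svd(X, compute_uv=False)

# Equivalent latent description: z_t = V^T x_t evolves as z_{t+1} = (V^T U) z_t,
# an LDS with latent dimensionality R (at most 2R once inputs are included,
# which is the bound derived in the paper)
```

Here the observed activity is fully determined by an R-dimensional latent state, which is the sense in which a linear low-rank RNN maps onto a latent LDS.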


Subject(s)
Neural Networks, Computer , Linear Models
11.
Nat Neurosci ; 25(6): 783-794, 2022 06.
Article in English | MEDLINE | ID: mdl-35668174

ABSTRACT

Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input-output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.


Subject(s)
Models, Neurological , Neurons , Neurons/physiology
12.
Neural Comput ; 33(6): 1572-1615, 2021 05 13.
Article in English | MEDLINE | ID: mdl-34496384

ABSTRACT

An emerging paradigm proposes that neural computations can be understood at the level of dynamic systems that govern low-dimensional trajectories of collective neural activity. How the connectivity structure of a network determines the emergent dynamical system, however, remains to be clarified. Here we consider a novel class of models, gaussian-mixture, low-rank recurrent networks in which the rank of the connectivity matrix and the number of statistically defined populations are independent hyperparameters. We show that the resulting collective dynamics form a dynamical system, where the rank sets the dimensionality and the population structure shapes the dynamics. In particular, the collective dynamics can be described in terms of a simplified effective circuit of interacting latent variables. While having a single global population strongly restricts the possible dynamics, we demonstrate that if the number of populations is large enough, a rank R network can approximate any R-dimensional dynamical system.
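The "simplified effective circuit of interacting latent variables" can be illustrated in the rank-one case (illustrative parameters, single population): the full N-dimensional network dx/dt = -x + J tanh(x) with J = m n^T / N reduces to a one-dimensional latent equation d kappa/dt = -kappa + <n_i tanh(kappa m_i)>, and the two agree.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000

# Rank-one connectivity; the overlap <n_i m_i> sets the effective feedback
# on the latent variable kappa (values chosen to place it above threshold)
m = rng.standard_normal(N)
n = 2.0 * m + rng.standard_normal(N)
J = np.outer(m, n) / N

dt, steps = 0.1, 300
x = 0.5 * m            # full network state, initialized along m
kappa = 0.5            # reduced latent variable
for _ in range(steps):
    x = x + dt * (-x + J @ np.tanh(x))
    # effective one-dimensional circuit: d kappa/dt = -kappa + <n tanh(kappa m)>
    kappa = kappa + dt * (-kappa + np.mean(n * np.tanh(kappa * m)))

kappa_full = (m @ x) / (m @ m)   # latent read out from the full network
```

With rank R and multiple populations the reduction yields R coupled latent equations whose gains depend on the population statistics, which is the universal-approximation mechanism the abstract refers to.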

13.
Curr Opin Neurobiol ; 70: 113-120, 2021 10.
Article in English | MEDLINE | ID: mdl-34537579

ABSTRACT

The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here, we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that the intrinsic dimensionality reflects information about the latent variables encoded in collective activity while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing more specifically various hypotheses on the computational principles reflected through intrinsic and embedding dimensionality.
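One common operationalization of the quantities discussed above (the review surveys several definitions; this sketch uses synthetic data and the participation ratio, an assumption on my part rather than the review's single recommended measure): activity generated by a few latent variables embedded linearly in a large population has low linear dimensionality, recoverable from the covariance spectrum.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_latent, n_samples = 100, 3, 5000

# Three latent variables embedded linearly (orthonormal map) in 100-d activity
Q, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_latent)))
Z = rng.standard_normal((n_samples, n_latent))
X = Z @ Q.T

C = np.cov(X.T)
lam = np.linalg.eigvalsh(C)

# Participation ratio: (sum lambda)^2 / sum lambda^2, a soft count of the
# number of covariance eigenvalues that carry variance
pr = lam.sum() ** 2 / (lam ** 2).sum()

# Linear (embedding) dimensionality: eigenvalues above numerical noise
emb_dim = int((lam > 1e-10 * lam.max()).sum())
```

With noiseless linear embedding the two measures agree; nonlinear (curved) manifolds are precisely where intrinsic and embedding dimensionality come apart, which is the distinction the review develops.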


Subject(s)
Neural Networks, Computer
14.
Elife ; 10: 2021 03 24.
Article in English | MEDLINE | ID: mdl-33759763

ABSTRACT

Across sensory systems, complex spatio-temporal patterns of neural activity arise following the onset (ON) and offset (OFF) of stimuli. While ON responses have been widely studied, the mechanisms generating OFF responses in cortical areas have so far not been fully elucidated. We examine here the hypothesis that OFF responses are single-cell signatures of recurrent interactions at the network level. To test this hypothesis, we performed population analyses of two-photon calcium recordings in the auditory cortex of awake mice listening to auditory stimuli, and compared them to linear single-cell and network models. While the single-cell model explained some prominent features of the data, it could not capture the structure across stimuli and trials. In contrast, the network model accounted for the low-dimensional organization of population responses and their global structure across stimuli, where distinct stimuli activated mostly orthogonal dimensions in the neural state-space.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Calcium/physiology , Hearing/physiology , Neurons/physiology , Photons , Acoustic Stimulation , Animals , Mice
16.
Nat Commun ; 11(1): 3176, 2020 06 18.
Article in English | MEDLINE | ID: mdl-32555158

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

17.
Elife ; 92020 03 09.
Article in English | MEDLINE | ID: mdl-32149602

ABSTRACT

Depending on environmental demands, humans can learn and exploit multiple concurrent sets of stimulus-response associations. Mechanisms underlying the learning of such task-sets remain unknown. Here we investigate the hypothesis that task-set learning relies on unsupervised chunking of stimulus-response associations that occur in temporal proximity. We examine behavioral and neural data from a task-set learning experiment using a network model. We first show that task-set learning can be achieved provided the timescale of chunking is slower than the timescale of stimulus-response learning. Fitting the model to behavioral data on a subject-by-subject basis confirmed this expectation and led to specific predictions linking chunking and task-set retrieval that were borne out by behavioral performance and reaction times. Comparing the model activity with BOLD signal allowed us to identify neural correlates of task-set retrieval in a functional network involving ventral and dorsal prefrontal cortex, with the dorsal system preferentially engaged when retrievals are used to improve performance.


Subject(s)
Learning , Memory , Prefrontal Cortex/physiology , Brain Mapping , Humans , Magnetic Resonance Imaging , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Oxygen/blood , Prefrontal Cortex/diagnostic imaging , Reaction Time , Unsupervised Machine Learning
18.
PLoS Comput Biol ; 16(2): e1007655, 2020 02.
Article in English | MEDLINE | ID: mdl-32053594

ABSTRACT

Following a stimulus, the neural response typically strongly varies in time and across neurons before settling to a steady-state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit amplified transient responses and are mapped onto output states during the dynamics. These two classes are simply distinguished based on the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a sub-class of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly-connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific orthogonal output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
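The spectral criterion mentioned above can be demonstrated on the smallest possible example (a two-unit feedforward motif with an illustrative weight, not a network from the paper): the connectivity itself has only zero eigenvalues, yet the symmetric part has an eigenvalue above 1, and an input along the amplified direction produces a transient whose norm grows before decaying.

```python
import numpy as np

# Feedforward (non-normal) connectivity: unit 1 drives unit 2 with weight w
w = 4.0
W = np.array([[0.0, 0.0],
              [w,   0.0]])

# Criterion: the dynamics dx/dt = -x + W x transiently amplify some inputs
# iff the largest eigenvalue of the symmetric part (W + W^T)/2 exceeds 1.
# Here that eigenvalue is w/2 = 2, while W itself is nilpotent (eigenvalues 0).
sym_eig = np.linalg.eigvalsh((W + W.T) / 2).max()

# Simulate from an input along the amplified direction (unit 1)
dt, T = 0.01, 500
x = np.array([1.0, 0.0])
norms = []
for _ in range(T):
    x = x + dt * (-x + W @ x)
    norms.append(np.linalg.norm(x))
peak = max(norms)   # transient grows above the initial norm of 1
```

Unit 1 decays while feeding unit 2, whose activity peaks around one time constant before everything relaxes to zero; in the paper this transient maps a specific input onto an orthogonal output state.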


Subject(s)
Computational Biology/methods , Neural Networks, Computer , Software , Computer Simulation , Humans , Linear Models , Models, Neurological , Neurons/physiology , Normal Distribution , Synapses
19.
Cell Rep ; 30(3): 630-641.e5, 2020 01 21.
Article in English | MEDLINE | ID: mdl-31968242

ABSTRACT

In the neocortex, synaptic inhibition shapes all forms of spontaneous and sensory evoked activity. Importantly, inhibitory transmission is highly plastic, but the functional role of inhibitory synaptic plasticity is unknown. In the mouse barrel cortex, activation of layer (L) 2/3 pyramidal neurons (PNs) elicits strong feedforward inhibition (FFI) onto L5 PNs. We find that FFI involving parvalbumin (PV)-expressing cells is strongly potentiated by postsynaptic PN burst firing. FFI plasticity modifies the PN excitation-to-inhibition (E/I) ratio, strongly modulates PN gain, and alters information transfer across cortical layers. Moreover, our LTPi-inducing protocol modifies firing of L5 PNs and alters the temporal association of PN spikes to γ-oscillations both in vitro and in vivo. All of these effects are captured by unbalancing the E/I ratio in a feedforward inhibition circuit model. Altogether, our results indicate that activity-dependent modulation of perisomatic inhibitory strength effectively influences the participation of single principal cortical neurons in cognition-relevant network activity.


Subject(s)
Neocortex/physiology , Neural Inhibition/physiology , Neuronal Plasticity/physiology , Synapses/physiology , Action Potentials/physiology , Action Potentials/radiation effects , Animals , Female , Gamma Rhythm/radiation effects , Light , Long-Term Potentiation/physiology , Long-Term Potentiation/radiation effects , Mice, Inbred C57BL , Models, Neurological , Neural Inhibition/radiation effects , Neuronal Plasticity/radiation effects , Pyramidal Cells/physiology , Pyramidal Cells/radiation effects , Synapses/radiation effects , Time Factors , gamma-Aminobutyric Acid/metabolism
20.
Nat Commun ; 10(1): 4933, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31666513

ABSTRACT

The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity.


Subject(s)
Action Potentials/physiology , Neurons/physiology , Animals , Computer Simulation , Interneurons/physiology , Likelihood Functions , Mice , Patch-Clamp Techniques , Pyramidal Cells/physiology , Rats , Reproducibility of Results , Visual Cortex