Results 1 - 20 of 44
1.
PLoS Comput Biol ; 19(1): e1010855, 2023 01.
Article in English | MEDLINE | ID: mdl-36689488

ABSTRACT

How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. 
Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
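The core step of the method, reading an approximate low-rank structure off the dominant (outlier) eigenvalues of the connectivity matrix, can be illustrated numerically. The sketch below is not the paper's multi-population construction; it uses a single rank-one mean component plus an unstructured random part, with all parameters chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 400, 0.5  # network size, random-bulk radius (illustrative values)

# Connectivity = structured rank-one part + unstructured random part.
m = rng.normal(loc=1.0, size=N)
n = rng.normal(loc=1.0, size=N)
J = np.outer(m, n) / N + rng.normal(scale=g / np.sqrt(N), size=(N, N))

# The random part contributes a bulk of eigenvalues of radius ~g; the
# structured part contributes an isolated outlier that perturbation
# theory predicts from the low-rank component alone: theta = n.m / N.
eigvals = np.linalg.eigvals(J)
outlier = eigvals[np.argmax(np.abs(eigvals))]
theta = (n @ m) / N

print(abs(outlier - theta))  # small: the dominant eigenvalue is well
                             # approximated by the low-rank structure
```

Because the outlier sits outside the bulk, the associated eigenvector pair carries the low-rank structure, which is what the paper exploits at the level of full multi-population statistics.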


Subject(s)
Neural Networks, Computer; Neurons; Neurons/physiology; Homeostasis; Normal Distribution; Models, Neurological; Nerve Net/physiology
2.
PLoS Comput Biol ; 19(8): e1011315, 2023 08.
Article in English | MEDLINE | ID: mdl-37549194

ABSTRACT

Recurrent network models are instrumental in investigating how behaviorally-relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models however lack some fundamental biological constraints, and in particular represent individual neurons in terms of abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.


Subject(s)
Models, Neurological; Neurons; Action Potentials/physiology; Neurons/physiology; Nerve Net/physiology
3.
PLoS Comput Biol ; 18(8): e1010426, 2022 08.
Article in English | MEDLINE | ID: mdl-35944030

ABSTRACT

Neural population dynamics are often highly coordinated, allowing task-related computations to be understood as neural trajectories through low-dimensional subspaces. How the network connectivity and input structure give rise to such activity can be investigated with the aid of low-rank recurrent neural networks, a recently-developed class of computational models which offer a rich theoretical framework linking the underlying connectivity structure to emergent low-dimensional dynamics. This framework has so far relied on the assumption of all-to-all connectivity, yet cortical networks are known to be highly sparse. Here we investigate the dynamics of low-rank recurrent networks in which the connections are randomly sparsified, which makes the network connectivity formally full-rank. We first analyse the impact of sparsity on the eigenvalue spectrum of low-rank connectivity matrices, and use this to examine the implications for the dynamics. We find that in the presence of sparsity, the eigenspectra in the complex plane consist of a continuous bulk and isolated outliers, a form analogous to the eigenspectra of connectivity matrices composed of a low-rank and a full-rank random component. This analogy allows us to characterise distinct dynamical regimes of the sparsified low-rank network as a function of key network parameters. Altogether, we find that the low-dimensional dynamics induced by low-rank connectivity structure are preserved even at high levels of sparsity, and can therefore support rich and robust computations even in networks sparsified to a biologically-realistic extent.
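The "continuous bulk plus isolated outliers" picture is easy to reproduce in a toy example. The following is a minimal sketch under stated assumptions (rank-one dense connectivity, Bernoulli sparsification rescaled by 1/p to preserve the mean; all values illustrative, not the paper's parameterization):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 500, 0.2  # network size; keep only 20% of connections

# Dense rank-one connectivity, then random sparsification.
m = rng.normal(loc=1.0, size=N)
n = rng.normal(loc=1.0, size=N)
J_dense = np.outer(m, n) / N
mask = rng.random((N, N)) < p
J_sparse = J_dense * mask / p  # 1/p rescaling preserves the mean connectivity

# Sparsification makes the matrix formally full-rank, but its spectrum
# splits into a continuous bulk plus an isolated outlier near n.m / N,
# just like a low-rank + full-rank-random matrix.
eig = np.linalg.eigvals(J_sparse)
idx = np.argsort(-np.abs(eig))
outlier, bulk_edge = eig[idx[0]], abs(eig[idx[1]])

print(outlier.real, bulk_edge)  # outlier well separated from the bulk
```

The clear separation between the outlier and the bulk edge is what preserves the low-dimensional dynamics at high sparsity.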


Subject(s)
Models, Neurological; Neural Networks, Computer; Population Dynamics
4.
Neural Comput ; 34(9): 1871-1892, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35896161

ABSTRACT

A large body of work has suggested that neural populations exhibit low-dimensional dynamics during behavior. However, there are a variety of different approaches for modeling low-dimensional neural population activity. One approach involves latent linear dynamical system (LDS) models, in which population activity is described by a projection of low-dimensional latent variables with linear dynamics. A second approach involves low-rank recurrent neural networks (RNNs), in which population activity arises directly from a low-dimensional projection of past activity. Although these two modeling approaches have strong similarities, they arise in different contexts and tend to have different domains of application. Here we examine the precise relationship between latent LDS models and linear low-rank RNNs. When can one model class be converted to the other, and vice versa? We show that latent LDS models can only be converted to RNNs in specific limit cases, due to the non-Markovian property of latent LDS models. Conversely, we show that linear RNNs can be mapped onto LDS models, with latent dimensionality at most twice the rank of the RNN. A surprising consequence of our results is that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.
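One direction of this correspondence can be checked in a few lines for the autonomous linear case: projecting the state of a rank-one linear RNN onto its connectivity vector yields a latent variable that obeys a closed one-dimensional LDS. A minimal sketch (Euler discretization; `m`, `n`, and all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 300, 0.1, 50

# Rank-one linear RNN, Euler-discretized: x <- x + dt * (-x + (m n^T / N) x)
m = rng.normal(size=N)
n = rng.normal(size=N)
x = rng.normal(size=N)

theta = (n @ m) / N  # effective self-coupling of the latent variable
y = (n @ x) / N      # latent variable: projection of the state on n
for _ in range(T):
    x = x + dt * (-x + m * ((n @ x) / N))  # never forms the N x N matrix
    y = y + dt * (-y + theta * y)          # the matched 1-D latent LDS

# The 1-D LDS tracks the full network's latent exactly (up to float error).
print(abs((n @ x) / N - y))
```

With inputs and observation noise the mapping is less trivial, which is where the factor-of-two bound on latent dimensionality discussed in the abstract comes in.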


Subject(s)
Neural Networks, Computer; Linear Models
5.
Neural Comput ; 33(6): 1572-1615, 2021 05 13.
Article in English | MEDLINE | ID: mdl-34496384

ABSTRACT

An emerging paradigm proposes that neural computations can be understood at the level of dynamical systems that govern low-dimensional trajectories of collective neural activity. How the connectivity structure of a network determines the emergent dynamical system, however, remains to be clarified. Here we consider a novel class of models, gaussian-mixture low-rank recurrent networks, in which the rank of the connectivity matrix and the number of statistically defined populations are independent hyperparameters. We show that the resulting collective dynamics form a dynamical system, where the rank sets the dimensionality and the population structure shapes the dynamics. In particular, the collective dynamics can be described in terms of a simplified effective circuit of interacting latent variables. While having a single global population strongly restricts the possible dynamics, we demonstrate that if the number of populations is large enough, a rank-R network can approximate any R-dimensional dynamical system.
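The "rank sets the dimensionality" part of this statement can be illustrated with the simplest member of the class: a rank-one network with a single gaussian population. The sketch below uses illustrative parameters (in particular, a strong m-n overlap chosen so that nonzero fixed points exist); it shows the state collapsing onto the one-dimensional subspace spanned by m, so a single latent variable describes the collective dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt = 2000, 0.1

# Rank-one network: x' = -x + m * kappa, with kappa = (n . tanh(x)) / N.
# Strong m-n overlap (illustrative) puts the network in a bistable regime.
m = rng.normal(size=N)
n = 2.5 * m + rng.normal(size=N)

x = 2.0 * m + rng.normal(size=N)  # start near the basin of a nonzero fixed point
for _ in range(1000):
    kappa = (n @ np.tanh(x)) / N
    x = x + dt * (-x + m * kappa)

# Rank sets the dimensionality: the component of x orthogonal to m decays,
# so the collective state is described by the single latent variable kappa.
corr = (x @ m) / (np.linalg.norm(x) * np.linalg.norm(m))
print(corr, kappa)  # corr ~ 1, kappa converges to a nonzero fixed point
```

The effective circuit mentioned in the abstract is the closed equation for kappa; the population structure enters through the joint statistics of m and n.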

6.
PLoS Comput Biol ; 16(2): e1007655, 2020 02.
Article in English | MEDLINE | ID: mdl-32053594

ABSTRACT

Following a stimulus, the neural response typically strongly varies in time and across neurons before settling to a steady-state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit amplified transient responses and are mapped onto output states during the dynamics. These two classes are simply distinguished based on the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a sub-class of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly-connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific orthogonal output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
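The spectral criterion mentioned above (for dynamics dx/dt = -x + Wx, transient amplification is possible precisely when the symmetric part (W + W^T)/2 has an eigenvalue larger than one) can be checked on the smallest non-normal example, a two-unit feedforward motif. The weights and integration parameters below are illustrative:

```python
import numpy as np

# Non-normal feedforward motif: unit 1 drives unit 2 with weight w.
w = 4.0
W = np.array([[0.0, 0.0],
              [w,   0.0]])

# Criterion: transient amplification is possible iff the symmetric part
# of the connectivity, (W + W.T) / 2, has an eigenvalue > 1.
lam_s = np.linalg.eigvalsh((W + W.T) / 2).max()
print(lam_s)  # w/2 = 2.0, above 1, so transients can be amplified

# Integrate dx/dt = -x + W x from x(0) = (1, 0). Both eigenvalues of
# -I + W equal -1, so the fixed point is stable, yet the norm of the
# state transiently grows above its initial value before decaying.
dt, x = 0.001, np.array([1.0, 0.0])
peak = np.linalg.norm(x)
for _ in range(5000):
    x = x + dt * (-x + W @ x)
    peak = max(peak, np.linalg.norm(x))
print(peak)  # > 1: amplified transient despite a stable spectrum
```

Here the amplified input direction is unit 1 and the readout is unit 2, a two-dimensional caricature of the input-to-orthogonal-output trajectories built in the paper.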


Asunto(s)
Biología Computacional/métodos , Redes Neurales de la Computación , Programas Informáticos , Simulación por Computador , Humanos , Modelos Lineales , Modelos Neurológicos , Neuronas/fisiología , Distribución Normal , Sinapsis
7.
J Neurosci ; 39(19): 3676-3686, 2019 05 08.
Article in English | MEDLINE | ID: mdl-30842247

ABSTRACT

Stimulation and functional imaging studies have revealed the existence of a large network of cortical regions involved in the regulation of heart rate. However, very little is known about the link between cortical neural firing and cardiac-cycle duration (CCD). Here, we analyze single-unit and multiunit data obtained in humans at rest, and show that firing rate covaries with CCD in 16.7% of the sample (25 of 150). The link between firing rate and CCD was most prevalent in the anterior medial temporal lobe (entorhinal and perirhinal cortices, anterior hippocampus, and amygdala), where 36% (18 of 50) of the units show the effect, and to a lesser extent in the mid-to-anterior cingulate cortex (11.1%, 5 of 45). The variance in firing rate explained by CCD ranged from 0.5 to 11%. Several lines of analysis indicate that neural firing influences CCD, rather than the other way around, and that neural firing affects CCD through vagally mediated mechanisms in most cases. These results show that part of the spontaneous fluctuations in firing rate can be attributed to the cortical control of the cardiac cycle. The fine tuning of the regulation of CCD represents a novel physiological factor accounting for spontaneous variance in firing rate. It remains to be determined whether the "noise" introduced in firing rate by the regulation of CCD is detrimental or beneficial to the cognitive information processing carried out in the parahippocampal and cingulate regions. SIGNIFICANCE STATEMENT: Fluctuations in heart rate are known to be under the control of cortical structures, but spontaneous fluctuations in cortical firing rate, or "noise," have seldom been related to heart rate. Here, we analyze unit activity in humans at rest and show that spontaneous fluctuations in neural firing in the medial temporal lobe, as well as in the mid-to-anterior cingulate cortex, influence heart rate. This phenomenon was particularly pronounced in the entorhinal and perirhinal cortices, where it could be observed in one in three neurons. Our results show that part of spontaneous firing rate variability in regions best known for their cognitive role in spatial navigation and memory corresponds to precise physiological regulations.


Asunto(s)
Potenciales de Acción/fisiología , Giro del Cíngulo/fisiología , Frecuencia Cardíaca/fisiología , Neuronas/fisiología , Giro Parahipocampal/fisiología , Descanso/fisiología , Adulto , Epilepsia Refractaria/diagnóstico , Epilepsia Refractaria/fisiopatología , Electrocardiografía/métodos , Femenino , Giro del Cíngulo/citología , Humanos , Masculino , Giro Parahipocampal/citología
9.
PLoS Comput Biol ; 15(3): e1006893, 2019 03.
Article in English | MEDLINE | ID: mdl-30897092

ABSTRACT

Neural activity in awake behaving animals exhibits a vast range of timescales that can be several fold larger than the membrane time constant of individual neurons. Two types of mechanisms have been proposed to explain this conundrum. One possibility is that large timescales are generated by a network mechanism based on positive feedback, but this hypothesis requires fine-tuning of the strength or structure of the synaptic connections. A second possibility is that large timescales in the neural dynamics are inherited from large timescales of underlying biophysical processes, two prominent candidates being intrinsic adaptive ionic currents and synaptic transmission. How the timescales of adaptation or synaptic transmission influence the timescale of the network dynamics has however not been fully explored. To address this question, here we analyze large networks of randomly connected excitatory and inhibitory units with additional degrees of freedom that correspond to adaptation or synaptic filtering. We determine the fixed points of the systems, their stability to perturbations and the corresponding dynamical timescales. Furthermore, we apply dynamical mean field theory to study the temporal statistics of the activity in the fluctuating regime, and examine how the adaptation and synaptic timescales transfer from individual units to the whole population. Our overarching finding is that synaptic filtering and adaptation in single neurons have very different effects at the network level. Unexpectedly, the macroscopic network dynamics do not inherit the large timescale present in adaptive currents. In contrast, the timescales of network activity increase proportionally to the time constant of the synaptic filter. 
Altogether, our study demonstrates that the timescales of different biophysical processes have different effects on the network level, so that the slow processes within individual neurons do not necessarily induce slow activity in large recurrent neural networks.


Asunto(s)
Modelos Neurológicos , Red Nerviosa/fisiología , Sinapsis/fisiología , Transmisión Sináptica/fisiología , Animales , Biología Computacional , Neuronas/fisiología
10.
Neural Comput ; 31(6): 1139-1182, 2019 06.
Article in English | MEDLINE | ID: mdl-30979353

ABSTRACT

Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been achieved, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown. Here we focus on a simple yet underexplored computational setup: a feedback architecture trained to associate a stationary output to a stationary input. As a starting point, we derive an approximate analytical description of global dynamics in trained networks, which assumes uncorrelated connectivity weights in the feedback and in the random bulk. The resulting mean-field theory suggests that the task admits several classes of solutions, which imply different stability properties. Different classes are characterized in terms of the geometrical arrangement of the readout with respect to the input vectors, defined in the high-dimensional space spanned by the network population. We find that such an approximate theoretical approach can be used to understand how standard training techniques implement the input-output task in finite-size feedback networks. In particular, our simplified description captures the local and the global stability properties of the target solution, and thus predicts training performance.


Asunto(s)
Aprendizaje Automático , Modelos Teóricos , Redes Neurales de la Computación , Algoritmos
11.
PLoS Comput Biol ; 13(4): e1005498, 2017 04.
Article in English | MEDLINE | ID: mdl-28437436

ABSTRACT

Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected rate units. How this type of intrinsically generated fluctuation appears in more realistic networks of spiking neurons has been a long-standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly-connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In the presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations; for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons.


Asunto(s)
Potenciales de Acción/fisiología , Modelos Neurológicos , Red Nerviosa/fisiología , Neuronas/fisiología , Biología Computacional
12.
J Neurosci ; 36(44): 11238-11258, 2016 11 02.
Article in English | MEDLINE | ID: mdl-27807166

ABSTRACT

Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity. SIGNIFICANCE STATEMENT: Synaptic plasticity, the change in efficacy of connections between neurons, is thought to underlie learning and memory. The dominant paradigm posits that the precise timing of neural action potentials (APs) is central for plasticity induction. This concept is based on experiments using highly regular and stereotyped patterns of APs, in stark contrast with natural neuronal activity. Using synaptic plasticity models, we investigated how irregular, in vivo-like activity shapes synaptic plasticity. We found that synaptic changes induced by precise timing of APs are much weaker than suggested by regular stimulation protocols, and can be equivalently induced by modest variations of the AP rate alone. Our results call into question the dominant role of precise AP timing for plasticity in natural conditions.


Asunto(s)
Potenciales de Acción/fisiología , Modelos Neurológicos , Plasticidad Neuronal/fisiología , Neuronas/fisiología , Sinapsis/fisiología , Transmisión Sináptica/fisiología , Animales , Simulación por Computador , Humanos , Modelos Estadísticos , Red Nerviosa/fisiología
13.
J Neurosci ; 35(18): 7056-68, 2015 May 06.
Article in English | MEDLINE | ID: mdl-25948257

ABSTRACT

The attenuation of neuronal voltage responses to high-frequency current inputs by the membrane capacitance is believed to limit single-cell bandwidth. However, neuronal populations subject to stochastic fluctuations can follow inputs beyond this limit. We investigated this apparent paradox theoretically and experimentally using Purkinje cells in the cerebellum, a motor structure that benefits from rapid information transfer. We analyzed the modulation of firing in response to the somatic injection of sinusoidal currents. Computational modeling suggested that, instead of decreasing with frequency, modulation amplitude can increase up to high frequencies because of cellular morphology. Electrophysiological measurements in adult rat slices confirmed this prediction and displayed a marked resonance at 200 Hz. We elucidated the underlying mechanism, showing that the two-compartment morphology of the Purkinje cell, interacting with a simple spiking mechanism and dendritic fluctuations, is sufficient to create high-frequency signal amplification. This mechanism, which we term morphology-induced resonance, is selective for somatic inputs, which in the Purkinje cell are exclusively inhibitory. The resonance sensitizes Purkinje cells in the frequency range of population oscillations observed in vivo.


Asunto(s)
Potenciales de Acción/fisiología , Neuronas/fisiología , Células de Purkinje/fisiología , Animales , Cerebelo/citología , Cerebelo/fisiología , Masculino , Ratas , Ratas Wistar
14.
J Physiol ; 594(10): 2729-49, 2016 05 15.
Article in English | MEDLINE | ID: mdl-26918702

ABSTRACT

KEY POINTS: We performed extracellular recordings of interneuron-Purkinje cell pairs in vivo. A single interneuron produces a substantial, short-lasting inhibition of Purkinje cells. Feed-forward inhibition is associated with characteristic asymmetric cross-correlograms. In vivo, Purkinje cell spikes only depend on the most recent synaptic activity. ABSTRACT: Cerebellar molecular layer interneurons are considered to control the firing rate and spike timing of Purkinje cells. However, interactions between these cell types are largely unexplored in vivo. Using tetrodes, we performed simultaneous extracellular recordings of neighbouring Purkinje cells and molecular layer interneurons, presumably basket cells, in adult rats in vivo. The high levels of afferent synaptic activity encountered in vivo yield irregular spiking and reveal discharge patterns characteristic of feed-forward inhibition, thus suggesting an overlap of the afferent excitatory inputs between Purkinje cells and basket cells. Under conditions of intense background synaptic inputs, interneuron spikes exert a short-lasting inhibitory effect, delaying the following Purkinje cell spike by an amount remarkably independent of the Purkinje cell firing cycle. This effect can be explained by the short memory time of the Purkinje cell potential as a result of the intense incoming synaptic activity. Finally, we found little evidence for any involvement of the interneurons that we recorded in the cerebellar high-frequency oscillations that promote Purkinje cell synchrony. The rapid interactions between interneurons and Purkinje cells might be of particular importance in fine motor control because the inhibitory action of interneurons on Purkinje cells leads to deep cerebellar nuclear disinhibition and hence increased cerebellar output.


Asunto(s)
Corteza Cerebelosa/fisiología , Interneuronas/fisiología , Inhibición Neural/fisiología , Células de Purkinje/fisiología , Potenciales de Acción/fisiología , Animales , Corteza Cerebelosa/citología , Masculino , Técnicas de Cultivo de Órganos , Ratas , Ratas Wistar , Factores de Tiempo
15.
PLoS Comput Biol ; 9(10): e1003301, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24204236

ABSTRACT

Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.


Asunto(s)
Potenciales de Acción/fisiología , Simulación por Computador , Modelos Neurológicos , Red Nerviosa/fisiología , Algoritmos , Biología Computacional , Neuronas/fisiología
16.
Trends Cogn Sci ; 28(7): 677-690, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38553340

ABSTRACT

One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.


Asunto(s)
Encéfalo , Modelos Neurológicos , Red Nerviosa , Humanos , Encéfalo/fisiología , Red Nerviosa/fisiología , Animales , Conectoma
17.
ArXiv ; 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37693175

ABSTRACT

One major challenge of neuroscience is finding interesting structures in seemingly disorganized neural activity. Often these structures have computational implications that help to understand the functional role of a particular brain area. Here we outline a unified approach to characterize these structures by inspecting the representational geometry and the modularity properties of the recorded activity, and show that this approach can also reveal structures in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent works on model networks performing three classes of computations.

18.
Elife ; 122023 Nov 16.
Article in English | MEDLINE | ID: mdl-37970945

ABSTRACT

Grouping sets of sounds into relevant categories is an important cognitive ability that enables the association of stimuli with appropriate goal-directed behavioral responses. In perceptual tasks, the primary auditory cortex (A1) assumes a prominent role by concurrently encoding both sound sensory features and task-related variables. Here, we sought to explore the role of A1 in the initiation of sound categorization, shedding light on its involvement in this cognitive process. We trained ferrets to discriminate click trains of different rates in a Go/No-Go delayed categorization task and recorded neural activity during both active behavior and passive exposure to the same sounds. Purely categorical response components were extracted and analyzed separately from sensory responses to reveal their contributions to the overall population response throughout the trials. We found that categorical activity emerged during sound presentation in the population average and was present in both active behavioral and passive states. However, upon task engagement, categorical responses to the No-Go category became suppressed in the population code, leading to an asymmetrical representation of the Go stimuli relative to the No-Go sounds and pre-stimulus baseline. The population code underwent an abrupt change at stimulus offset, with sustained responses after the Go sounds during the delay period. Notably, the categorical responses observed during the stimulus period exhibited a significant correlation with those extracted from the delay epoch, suggesting an early involvement of A1 in stimulus categorization.


Asunto(s)
Corteza Auditiva , Percepción Auditiva , Animales , Percepción Auditiva/fisiología , Corteza Auditiva/fisiología , Hurones , Sonido , Conducta Animal/fisiología , Estimulación Acústica
19.
Neuron ; 111(5): 739-753.e8, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36640766

ABSTRACT

Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.


Asunto(s)
Encéfalo , Redes Neurales de la Computación
20.
Curr Biol ; 33(4): 622-638.e7, 2023 02 27.
Article in English | MEDLINE | ID: mdl-36657448

ABSTRACT

The strategies found by animals facing a new task are determined both by individual experience and by structural priors evolved to leverage the statistics of natural environments. Rats quickly learn to capitalize on the trial sequence correlations of two-alternative forced choice (2AFC) tasks after correct trials but consistently deviate from optimal behavior after error trials. To understand this outcome-dependent gating, we first show that recurrent neural networks (RNNs) trained in the same 2AFC task outperform rats as they can readily learn to use across-trial information both after correct and error trials. We hypothesize that, although RNNs can optimize their behavior in the 2AFC task without any a priori restrictions, rats' strategy is constrained by a structural prior adapted to a natural environment in which rewarded and non-rewarded actions provide largely asymmetric information. When pre-training RNNs in a more ecological task with more than two possible choices, networks develop a strategy by which they gate off the across-trial evidence after errors, mimicking rats' behavior. Population analyses show that the pre-trained networks form an accurate representation of the sequence statistics independently of the outcome in the previous trial. After error trials, gating is implemented by a change in the network dynamics that temporarily decouple the categorization of the stimulus from the across-trial accumulated evidence. Our results suggest that the rats' suboptimal behavior reflects the influence of a structural prior that reacts to errors by isolating the network decision dynamics from the context, ultimately constraining the performance in a 2AFC laboratory task.


Subject(s)
Learning; Neural Networks, Computer; Rats; Animals; Behavior, Animal; Choice Behavior