Results 1 - 20 of 26
1.
Cell ; 185(19): 3568-3587.e27, 2022 09 15.
Article in English | MEDLINE | ID: mdl-36113428

ABSTRACT

Computational analysis of cellular activity has developed largely independently of modern transcriptomic cell typology, but integrating these approaches may be essential for full insight into cellular-level mechanisms underlying brain function and dysfunction. Applying this approach to the habenula (a structure with diverse, intermingled molecular, anatomical, and computational features), we identified encoding of reward-predictive cues and reward outcomes in distinct genetically defined neural populations, including TH+ cells and Tac1+ cells. Data from genetically targeted recordings were used to train an optimized nonlinear dynamical systems model and revealed activity dynamics consistent with a line attractor. High-density, cell-type-specific electrophysiological recordings and optogenetic perturbation provided supporting evidence for this model. Reverse-engineering predicted how Tac1+ cells might integrate reward history, which was complemented by in vivo experimentation. This integrated approach describes a process by which data-driven computational models of population activity can generate and frame actionable hypotheses for cell-type-specific investigation in biological systems.


Subject(s)
Habenula, Reward, Population Dynamics
2.
Annu Rev Neurosci ; 43: 249-275, 2020 07 08.
Article in English | MEDLINE | ID: mdl-32640928

ABSTRACT

Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.


Subject(s)
Brain/physiology, Computational Biology, Deep Learning, Nerve Net/physiology, Animals, Computational Biology/methods, Humans, Neurons/physiology, Population Dynamics
3.
PLoS Comput Biol ; 19(1): e1010784, 2023 01.
Article in English | MEDLINE | ID: mdl-36607933

ABSTRACT

The relationship between neuronal activity and computations embodied by it remains an open question. We develop a novel methodology that condenses observed neuronal activity into a quantitatively accurate, simple, and interpretable model and validate it on diverse systems and scales from single neurons in C. elegans to fMRI in humans. The model treats neuronal activity as collections of interlocking 1-dimensional trajectories. Despite their simplicity, these models accurately predict future neuronal activity and future decisions made by human participants. Moreover, the structure formed by interconnected trajectories, a scaffold, is closely related to the computational strategy of the system. We use these scaffolds to compare the computational strategy of primates and artificial systems trained on the same task to identify specific conditions under which the artificial agent learns the same strategy as the primate. The computational strategy extracted using our methodology predicts specific errors on novel stimuli. These results show that our methodology is a powerful tool for studying the relationship between computation and neuronal activity across diverse systems.


Subject(s)
Caenorhabditis elegans, Models, Neurological, Animals, Humans, Caenorhabditis elegans/physiology, Neurons/physiology, Primates
4.
Neural Comput ; 34(8): 1652-1675, 2022 07 14.
Article in English | MEDLINE | ID: mdl-35798321

ABSTRACT

The ventral visual stream enables humans and nonhuman primates to effortlessly recognize objects across a multitude of viewing conditions, yet the computational role of its abundant feedback connections remains unclear. Prior studies have augmented feedforward convolutional neural networks (CNNs) with recurrent connections to study their role in visual processing; however, often these recurrent networks are optimized directly on neural data, or the comparative metrics used are undefined for standard feedforward networks that lack these connections. In this work, we develop task-optimized convolutional recurrent (ConvRNN) network models that more closely mimic the timing and gross neuroanatomy of the ventral pathway. Properly chosen intermediate-depth ConvRNN circuit architectures, which incorporate mechanisms of feedforward bypassing and recurrent gating, can achieve high performance on a core recognition task, comparable to that of much deeper feedforward networks. We then develop methods that allow us to compare both CNNs and ConvRNNs to fine-grained measurements of primate categorization behavior and neural response trajectories across thousands of stimuli. We find that high-performing ConvRNNs provide a better match to these data than feedforward networks of any depth, predicting the precise timings at which each stimulus is behaviorally decoded from neural activation patterns. Moreover, these ConvRNN circuits consistently produce quantitatively accurate predictions of neural dynamics from V4 and IT across the entire stimulus presentation. In fact, we find that the highest-performing ConvRNNs, which best match neural and behavioral data, also achieve a strong Pareto trade-off between task performance and overall network size. Taken together, our results suggest the functional purpose of recurrence in the ventral pathway is to fit a high-performing network in cortex, attaining computational power through temporal rather than spatial complexity.


Subject(s)
Task Performance and Analysis, Visual Perception, Animals, Humans, Macaca mulatta/physiology, Neural Networks, Computer, Pattern Recognition, Visual/physiology, Recognition, Psychology/physiology, Visual Pathways/physiology, Visual Perception/physiology
5.
Nat Methods ; 15(10): 805-815, 2018 10.
Article in English | MEDLINE | ID: mdl-30224673

ABSTRACT

Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems, a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, latent factor analysis via dynamical systems accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics.


Subject(s)
Action Potentials, Algorithms, Models, Neurological, Motor Cortex/physiology, Neurons/physiology, Animals, Humans, Male, Middle Aged, Population Dynamics, Primates
6.
Nature ; 503(7474): 78-84, 2013 Nov 07.
Article in English | MEDLINE | ID: mdl-24201281

ABSTRACT

Prefrontal cortex is thought to have a fundamental role in flexible, context-dependent behaviour, but the exact nature of the computations underlying this role remains largely unknown. In particular, individual prefrontal neurons often generate remarkably complex responses that defy deep understanding of their contribution to behaviour. Here we study prefrontal cortex activity in macaque monkeys trained to flexibly select and integrate noisy sensory inputs towards a choice. We find that the observed complexity and functional roles of single neurons are readily understood in the framework of a dynamical process unfolding at the level of the population. The population dynamics can be reproduced by a trained recurrent neural network, which suggests a previously unknown mechanism for selection and integration of task-relevant inputs. This mechanism indicates that selection and integration are two aspects of a single dynamical process unfolding within the same prefrontal circuits, and potentially provides a novel, general framework for understanding context-dependent computations.


Subject(s)
Macaca mulatta/physiology, Models, Neurological, Prefrontal Cortex/physiology, Animals, Choice Behavior/physiology, Discrimination Learning, Male, Nerve Net/cytology, Nerve Net/physiology, Neurons/physiology, Prefrontal Cortex/cytology
7.
Nat Neurosci ; 27(7): 1349-1363, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38982201

ABSTRACT

Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.


Subject(s)
Neural Networks, Computer, Animals, Models, Neurological, Neurons/physiology, Learning/physiology, Algorithms, Nerve Net/physiology
8.
Neural Comput ; 25(3): 626-49, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23272922

ABSTRACT

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.


Subject(s)
Artificial Intelligence, Neural Networks, Computer
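The optimization described in the abstract above (finding fixed and slow points of a trained RNN by descending a speed function) can be sketched in a few lines. This is not the authors' code; it is a minimal, hypothetical illustration for a discrete-time vanilla RNN, where the objective q(x) = ½‖F(x) − x‖² is zero exactly at fixed points and small near slow points:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 16
# Weakly coupled random network: with small gain, the origin is a stable fixed point.
W = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)

def F(x):
    # One update step of a discrete-time vanilla RNN (no external input).
    return np.tanh(W @ x)

def q(x):
    # Speed function: zero exactly at fixed points, small near slow points.
    d = F(x) - x
    return 0.5 * d @ d

# Start from a state the network might visit and descend q.
x0 = rng.standard_normal(N)
res = minimize(q, x0, method="L-BFGS-B")
x_star = res.x
speed = np.linalg.norm(F(x_star) - x_star)
# Linearizing F around x_star (its Jacobian) then characterizes the local dynamics.
```

In practice one restarts this optimization from many states visited during task trials to collect the full set of fixed and slow points; the eigenvalues of the Jacobian at each point classify it as stable, unstable, or a saddle.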
9.
Neuron ; 111(5): 631-649.e10, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36630961

ABSTRACT

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.


Subject(s)
Neural Networks, Computer, Neurons, Action Potentials/physiology, Neurons/physiology, Nerve Net/physiology, Models, Neurological
10.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subject(s)
Artificial Intelligence, Neurosciences, Animals, Humans
11.
Curr Opin Neurobiol ; 58: 229-238, 2019 10.
Article in English | MEDLINE | ID: mdl-31670073

ABSTRACT

With the increasing acquisition of large-scale neural recordings comes the challenge of inferring the computations they perform and understanding how these give rise to behavior. Here, we review emerging conceptual and technological advances that begin to address this challenge, garnering insights from both biological and artificial neural networks. We argue that neural data should be recorded during rich behavioral tasks, to model cognitive processes and estimate latent behavioral variables. Careful quantification of animal movements can also provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress. Artificial neural networks (ANNs) could serve as artificial model organisms to connect neural dynamics and rich behavioral data. ANNs have already begun to reveal how a wide range of different behaviors can be implemented, generating hypotheses about how observed neural activity might drive behavior and explaining diversity in behavioral strategies.


Subject(s)
Cognition, Neural Networks, Computer, Animals, Brain, Movement
12.
Adv Neural Inf Process Syst ; 2019: 15629-15641, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782422

ABSTRACT

Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
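The CCA-based comparison this abstract refers to can be illustrated with a small sketch. This is not the study's code; `cca_similarity` and the toy response matrices are invented for illustration. Canonical correlations between two (time × units) response matrices are the singular values of the product of orthonormal bases for their column spaces:

```python
import numpy as np

def cca_similarity(X, Y):
    """Mean canonical correlation between two (time x units) response matrices."""
    X = X - X.mean(axis=0)      # center each unit's response
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)     # orthonormal basis for each column space
    Qy, _ = np.linalg.qr(Y)
    # Singular values of Qx^T Qy are the canonical correlations.
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(s.mean())

rng = np.random.default_rng(1)
T, N = 200, 10
A = rng.standard_normal((T, N))
B = A @ rng.standard_normal((N, N))   # same subspace, different geometry
C = rng.standard_normal((T, N))       # unrelated activity
```

Because CCA is invariant to invertible linear maps, `B` scores as nearly identical to `A` even though its representational geometry differs, which is one way to see why the abstract's cautionary note about geometry-sensitive similarity measures matters.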

13.
Adv Neural Inf Process Syst ; 32: 15696-15705, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782423

ABSTRACT

Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human interpretable computations can arise across a range of recurrent networks.
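The line-attractor signature described above (one eigenvalue of the linearized dynamics near 1, the rest contracting) can be shown on a toy linear system. This is an invented example, not the trained networks from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# Build x_{t+1} = W x_t with one eigenvalue at 1 (the attractor line), rest contracting.
eigvals_true = np.array([1.0, 0.6, 0.3, 0.1])
P = rng.standard_normal((4, 4))
W = P @ np.diag(eigvals_true) @ np.linalg.inv(P)

# Linearization around the fixed point at the origin is W itself; inspect its spectrum.
eigs = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
n_marginal = int(np.sum(np.abs(eigs - 1.0) < 1e-6))  # count marginal (|lambda| ~ 1) modes

# Dynamics collapse onto the attractor line: iterate until the state stops moving.
x = rng.standard_normal(4)
for _ in range(60):
    x = W @ x
x_inf = x  # lies (numerically) on the eigenvalue-1 line, where W x = x
```

In the sentiment networks the same signature appears only approximately: a slow, nearly marginal mode along which, roughly speaking, evidence for positive versus negative sentiment is integrated.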

14.
J Neurosci ; 27(13): 3383-7, 2007 Mar 28.
Article in English | MEDLINE | ID: mdl-17392454

ABSTRACT

It is still poorly understood how epileptiform events can recruit cortical circuits. Moreover, the speed of propagation of epileptiform discharges in vivo and in vitro can vary over several orders of magnitude (0.1-100 mm/s), a range difficult to explain by a single mechanism. We previously showed how epileptiform spread in neocortical slices is opposed by a powerful feedforward inhibition ahead of the ictal wave. When this feedforward inhibition is intact, epileptiform activity spreads very slowly (approximately 100 µm/s). We now investigate whether changes in this inhibitory restraint can also explain much faster propagation velocities. We made use of a very characteristic pattern of evolution of ictal activity in the zero magnesium (0 Mg2+) model of epilepsy. With each successive ictal event, the number of preictal inhibitory barrages dropped, and in parallel with this change, the propagation velocity increased. There was a highly significant correlation (p < 0.001) between the two measures over a 1000-fold range of velocities, indicating that feedforward inhibition was the prime determinant of the speed of epileptiform propagation. We propose that the speed of propagation is set by the extent of the recruitment steps, which in turn is set by how successfully the feedforward inhibitory restraint contains the excitatory drive. Thus, a single mechanism could account for the wide range of propagation velocities of epileptiform events observed in vitro and in vivo.


Subject(s)
Cerebral Cortex/physiopathology, Epilepsy/physiopathology, Pyramidal Cells/physiopathology, Animals, Brain Mapping, Cerebral Cortex/pathology, Electroencephalography, Epilepsy/diagnosis, Feedback, Physiological, In Vitro Techniques, Mice, Mice, Inbred C57BL, Microscopy, Confocal, Patch-Clamp Techniques
15.
Neuron ; 98(5): 873-875, 2018 06 06.
Article in English | MEDLINE | ID: mdl-29879388

ABSTRACT

Population dynamics is emerging as a language for understanding high-dimensional neural recordings. Remington et al. (2018) explore how inputs to frontal cortex modulate neural dynamics in order to implement a computation of interest.


Subject(s)
Frontal Lobe, Neurons
16.
J Neurosci ; 26(48): 12447-55, 2006 Nov 29.
Article in English | MEDLINE | ID: mdl-17135406

ABSTRACT

What regulates the spread of activity through cortical circuits? We present here data indicating a pivotal role for a vetoing inhibition restraining modules of pyramidal neurons. We combined fast calcium imaging of network activity with whole-cell recordings to examine epileptiform propagation in mouse neocortical slices. Epileptiform activity was induced by washing Mg2+ ions out of the slice. Pyramidal cells receive barrages of inhibitory inputs in advance of the epileptiform wave. The inhibitory barrages are effectively nullified at low doses of picrotoxin (2.5-5 microM). When present, however, these inhibitory barrages occlude an intense excitatory synaptic drive that would normally exceed action potential threshold by approximately a factor of 10. Despite this level of excitation, the inhibitory barrages suppress firing, thereby limiting further neuronal recruitment to the ictal event. Pyramidal neurons are recruited to the epileptiform event once the inhibitory restraint fails and are recruited in spatially clustered populations (150-250 microm diameter). The recruitment of the cells within a given module is virtually simultaneous, and thus epileptiform events progress in intermittent (0.5-1 Hz) steps across the cortical network. We propose that the interneurons that supply the vetoing inhibition define these modular circuit territories.


Subject(s)
Action Potentials/physiology, Epilepsy/physiopathology, Neocortex/physiology, Neural Inhibition/physiology, Action Potentials/drug effects, Animals, Electric Stimulation/methods, GABA Modulators/pharmacology, GABA-A Receptor Agonists, GABA-A Receptor Antagonists, In Vitro Techniques, Mice, Mice, Inbred C57BL, Neocortex/drug effects, Neural Inhibition/drug effects, Receptors, GABA-A/physiology
17.
eNeuro ; 3(4)2016.
Article in English | MEDLINE | ID: mdl-27761519

ABSTRACT

Neural activity in monkey motor cortex (M1) and dorsal premotor cortex (PMd) can reflect a chosen movement well before that movement begins. The pattern of neural activity then changes profoundly just before movement onset. We considered the prediction, derived from formal considerations, that the transition from preparation to movement might be accompanied by a large overall change in the neural state that reflects when movement is made rather than which movement is made. Specifically, we examined "components" of the population response: time-varying patterns of activity from which each neuron's response is approximately composed. Amid the response complexity of individual M1 and PMd neurons, we identified robust response components that were "condition-invariant": their magnitude and time course were nearly identical regardless of reach direction or path. These condition-invariant response components occupied dimensions orthogonal to those occupied by the "tuned" response components. The largest condition-invariant component was much larger than any of the tuned components; i.e., it explained more of the structure in individual-neuron responses. This condition-invariant response component underwent a rapid change before movement onset. The timing of that change predicted most of the trial-by-trial variance in reaction time. Thus, although individual M1 and PMd neurons essentially always reflected which movement was made, the largest component of the population response reflected movement timing rather than movement type.


Subject(s)
Motor Activity/physiology, Motor Cortex/physiology, Neurons/physiology, Action Potentials, Animals, Arm/physiology, Electromyography, Macaca mulatta, Male, Microelectrodes, Muscle, Skeletal/physiology, Neuropsychological Tests, Reaction Time, Time Factors
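The decomposition into condition-invariant and tuned components that this abstract describes can be sketched on synthetic data. All signals and weights below are invented for illustration: averaging trial-averaged responses across conditions isolates a shared "when" signal, and subtracting that average leaves the condition-tuned "which" signal:

```python
import numpy as np

rng = np.random.default_rng(3)
T, C, N = 100, 8, 30                       # time points, reach conditions, neurons
t = np.linspace(0.0, 1.0, T)

# Hypothetical population: a large shared pre-movement ramp ("when") plus
# smaller condition-tuned signals ("which").
cis = 1.0 / (1.0 + np.exp(-(t - 0.5) / 0.05))   # rapid ramp, identical for all conditions
w_cis = rng.standard_normal(N)
w_tune = 0.3 * rng.standard_normal((C, N))
w_tune -= w_tune.mean(axis=0)              # tuned weights average to zero across conditions
X = (cis[:, None] * w_cis[None, :])[None, :, :] \
    + np.sin(2.0 * np.pi * t)[None, :, None] * w_tune[:, None, :]   # (C, T, N)

# Averaging across conditions isolates the condition-invariant component.
X_mean = X.mean(axis=0)                    # (T, N): condition-invariant part
X_tuned = X - X_mean[None, :, :]           # residual condition-tuned part
u, s, vt = np.linalg.svd(X_mean, full_matrices=False)
recovered = u[:, 0]                        # dominant condition-invariant time course
r = abs(np.corrcoef(recovered, cis)[0, 1]) # how well it matches the planted ramp
```

The study's analysis is richer (it identifies condition-invariant dimensions orthogonal to the tuned dimensions and relates the timing of the component to trial-by-trial reaction time), but this captures the core averaging-based decomposition.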
18.
Nat Commun ; 7: 13749, 2016 12 13.
Article in English | MEDLINE | ID: mdl-27958268

ABSTRACT

A major hurdle to clinical translation of brain-machine interfaces (BMIs) is that current decoders, which are trained from a small quantity of recent data, become ineffective when neural recording conditions subsequently change. We tested whether a decoder could be made more robust to future neural variability by training it to handle a variety of recording conditions sampled from months of previously collected data as well as synthetic training data perturbations. We developed a new multiplicative recurrent neural network BMI decoder that successfully learned a large variety of neural-to-kinematic mappings and became more robust with larger training data sets. Here we demonstrate that when tested with a non-human primate preclinical BMI model, this decoder is robust under conditions that disabled a state-of-the-art Kalman filter-based decoder. These results validate a new BMI strategy in which accumulated data history is effectively harnessed, and may facilitate reliable BMI use by reducing decoder retraining downtime.


Subject(s)
Brain-Computer Interfaces, Nerve Net, Animals, Brain Mapping, Macaca mulatta, Male
19.
Nat Neurosci ; 18(7): 1025-33, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26075643

ABSTRACT

It remains an open question how neural responses in motor cortex relate to movement. We explored the hypothesis that motor cortex reflects dynamics appropriate for generating temporally patterned outgoing commands. To formalize this hypothesis, we trained recurrent neural networks to reproduce the muscle activity of reaching monkeys. Models had to infer dynamics that could transform simple inputs into temporally and spatially complex patterns of muscle activity. Analysis of trained models revealed that the natural dynamical solution was a low-dimensional oscillator that generated the necessary multiphasic commands. This solution closely resembled, at both the single-neuron and population levels, what was observed in neural recordings from the same monkeys. Notably, data and simulations agreed only when models were optimized to find simple solutions. An appealing interpretation is that the empirically observed dynamics of motor cortex may reflect a simple solution to the problem of generating temporally patterned descending commands.


Subject(s)
Motor Activity/physiology, Motor Cortex/physiology, Muscle, Skeletal/physiology, Nerve Net/physiology, Neural Networks, Computer, Neurons/physiology, Animals, Electromyography, Electrophysiological Phenomena, Haplorhini
20.
Curr Opin Neurobiol ; 25: 156-63, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24509098

ABSTRACT

Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to aid in this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.


Subject(s)
Models, Neurological, Nerve Net/physiology, Neural Networks, Computer, Prefrontal Cortex/physiology, Animals, Humans