Results 1 - 20 of 25
1.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus away from capabilities that are especially well developed or uniquely human, like game playing and language, and toward capabilities, inherited from over 500 million years of evolution, that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subject(s)
Artificial Intelligence; Neurosciences; Animals; Humans
2.
Neuron ; 111(5): 631-649.e10, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36630961

ABSTRACT

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.


Subject(s)
Neural Networks, Computer; Neurons; Action Potentials/physiology; Neurons/physiology; Nerve Net/physiology; Models, Neurological
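
To make the factor-based starting point of this framework concrete, here is a minimal, illustrative sketch of the preliminary step such a pipeline relies on: smoothing binned spike trains into rate estimates and condensing them into low-dimensional population factors with PCA. All names, shapes, and parameters are invented for illustration; the paper's actual procedure for training spiking networks from factors is considerably more involved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import PCA

# Fabricated binned spike counts: (n_trials, n_timebins, n_neurons).
rng = np.random.default_rng(0)
spikes = rng.poisson(0.05, size=(100, 500, 80)).astype(float)

# Smooth each neuron's spike train over time to estimate firing rates.
rates = gaussian_filter1d(spikes, sigma=20, axis=1)

# Trial-average, then condense the population into a few factors via PCA.
mean_rates = rates.mean(axis=0)            # (n_timebins, n_neurons)
pca = PCA(n_components=10)
factors = pca.fit_transform(mean_rates)    # (n_timebins, 10) population factors
print(f"variance captured: {pca.explained_variance_ratio_.sum():.2f}")
```
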
3.
PLoS Comput Biol ; 19(1): e1010784, 2023 01.
Article in English | MEDLINE | ID: mdl-36607933

ABSTRACT

The relationship between neuronal activity and the computations embodied by it remains an open question. We develop a novel methodology that condenses observed neuronal activity into a quantitatively accurate, simple, and interpretable model, and we validate it on diverse systems and scales, from single neurons in C. elegans to fMRI in humans. The model treats neuronal activity as collections of interlocking one-dimensional trajectories. Despite their simplicity, these models accurately predict future neuronal activity and future decisions made by human participants. Moreover, the structure formed by interconnected trajectories, termed a scaffold, is closely related to the computational strategy of the system. We use these scaffolds to compare the computational strategies of primates and artificial systems trained on the same task and to identify specific conditions under which the artificial agent learns the same strategy as the primate. The computational strategy extracted using our methodology predicts specific errors on novel stimuli. These results show that our methodology is a powerful tool for studying the relationship between computation and neuronal activity across diverse systems.


Subject(s)
Caenorhabditis elegans; Models, Neurological; Animals; Humans; Caenorhabditis elegans/physiology; Neurons/physiology; Primates
4.
Cell ; 185(19): 3568-3587.e27, 2022 09 15.
Article in English | MEDLINE | ID: mdl-36113428

ABSTRACT

Computational analysis of cellular activity has developed largely independently of modern transcriptomic cell typology, but integrating these approaches may be essential for full insight into cellular-level mechanisms underlying brain function and dysfunction. Applying this approach to the habenula (a structure with diverse, intermingled molecular, anatomical, and computational features), we identified encoding of reward-predictive cues and reward outcomes in distinct genetically defined neural populations, including TH+ cells and Tac1+ cells. Data from genetically targeted recordings were used to train an optimized nonlinear dynamical systems model and revealed activity dynamics consistent with a line attractor. High-density, cell-type-specific electrophysiological recordings and optogenetic perturbation provided supporting evidence for this model. Reverse-engineering predicted how Tac1+ cells might integrate reward history, which was complemented by in vivo experimentation. This integrated approach describes a process by which data-driven computational models of population activity can generate and frame actionable hypotheses for cell-type-specific investigation in biological systems.


Subject(s)
Habenula; Reward; Population Dynamics
5.
Neural Comput ; 34(8): 1652-1675, 2022 07 14.
Article in English | MEDLINE | ID: mdl-35798321

ABSTRACT

The ventral visual stream enables humans and nonhuman primates to effortlessly recognize objects across a multitude of viewing conditions, yet the computational role of its abundant feedback connections remains unclear. Prior studies have augmented feedforward convolutional neural networks (CNNs) with recurrent connections to study their role in visual processing; however, these recurrent networks are often optimized directly on neural data, or the comparative metrics used are undefined for standard feedforward networks that lack such connections. In this work, we develop task-optimized convolutional recurrent (ConvRNN) network models that more closely mimic the timing and gross neuroanatomy of the ventral pathway. Properly chosen intermediate-depth ConvRNN circuit architectures, which incorporate mechanisms of feedforward bypassing and recurrent gating, can achieve high performance on a core recognition task, comparable to that of much deeper feedforward networks. We then develop methods that allow us to compare both CNNs and ConvRNNs to fine-grained measurements of primate categorization behavior and neural response trajectories across thousands of stimuli. We find that high-performing ConvRNNs provide a better match to these data than feedforward networks of any depth, predicting the precise timings at which each stimulus is behaviorally decoded from neural activation patterns. Moreover, these ConvRNN circuits consistently produce quantitatively accurate predictions of neural dynamics from V4 and IT across the entire stimulus presentation. In fact, we find that the highest-performing ConvRNNs, which best match neural and behavioral data, also achieve a strong Pareto trade-off between task performance and overall network size. Taken together, our results suggest that the functional purpose of recurrence in the ventral pathway is to fit a high-performing network in cortex, attaining computational power through temporal rather than spatial complexity.


Subject(s)
Task Performance and Analysis; Visual Perception; Animals; Humans; Macaca mulatta/physiology; Neural Networks, Computer; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Visual Pathways/physiology; Visual Perception/physiology
6.
Annu Rev Neurosci ; 43: 249-275, 2020 07 08.
Article in English | MEDLINE | ID: mdl-32640928

ABSTRACT

Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.


Subject(s)
Brain/physiology; Computational Biology; Deep Learning; Nerve Net/physiology; Animals; Computational Biology/methods; Humans; Neurons/physiology; Population Dynamics
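
As a concrete taste of the analytical tools such a primer covers, the sketch below linearizes a toy rate network around a fixed point by computing a numerical Jacobian and inspecting its eigenvalues. The network and all parameters are invented for illustration.

```python
import numpy as np

# Toy rate dynamics: dx/dt = f(x) = -x + W @ tanh(x), with arbitrary weights.
rng = np.random.default_rng(1)
N = 50
W = rng.normal(0, 1.2 / np.sqrt(N), size=(N, N))

def f(x):
    return -x + W @ np.tanh(x)

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x."""
    J = np.zeros((x.size, x.size))
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

x_star = np.zeros(N)            # x = 0 is a fixed point because tanh(0) = 0
eigvals = np.linalg.eigvals(numerical_jacobian(f, x_star))

# Near the fixed point, dx/dt is approximately J (x - x*): eigenvalues with
# positive real part mark locally unstable directions of the dynamics.
print(f"{(eigvals.real > 0).sum()} unstable directions out of {N}")
```
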
7.
Curr Opin Neurobiol ; 58: 229-238, 2019 10.
Article in English | MEDLINE | ID: mdl-31670073

ABSTRACT

With the increasing acquisition of large-scale neural recordings comes the challenge of inferring the computations they perform and understanding how these give rise to behavior. Here, we review emerging conceptual and technological advances that begin to address this challenge, garnering insights from both biological and artificial neural networks. We argue that neural data should be recorded during rich behavioral tasks, to model cognitive processes and estimate latent behavioral variables. Careful quantification of animal movements can also provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress. Artificial neural networks (ANNs) could serve as artificial model organisms to connect neural dynamics and rich behavioral data. ANNs have already begun to reveal how a wide range of different behaviors can be implemented, generating hypotheses about how observed neural activity might drive behavior and explaining diversity in behavioral strategies.


Subject(s)
Cognition; Neural Networks, Computer; Animals; Brain; Movement
8.
Adv Neural Inf Process Syst ; 2019: 15629-15641, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782422

ABSTRACT

Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or, alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks, and characterize their nonlinear dynamics. We find that the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
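
For readers unfamiliar with the comparison step, here is an illustrative sketch of CCA-based similarity between two networks' hidden-state trajectories, using scikit-learn. The two representations below are random stand-ins sharing a planted low-dimensional structure, not trained networks.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hidden states from two networks on the same task, flattened across trials
# and timepoints to (n_samples, n_units). Random stand-ins for illustration.
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 5))                       # shared structure
H1 = latent @ rng.normal(size=(5, 64)) + 0.1 * rng.normal(size=(2000, 64))
H2 = latent @ rng.normal(size=(5, 96)) + 0.1 * rng.normal(size=(2000, 96))

cca = CCA(n_components=5, max_iter=1000)
A, B = cca.fit_transform(H1, H2)

# Mean canonical correlation: a geometry-sensitive similarity score of the
# kind the abstract cautions about.
corrs = [np.corrcoef(A[:, i], B[:, i])[0, 1] for i in range(5)]
print(f"mean CCA correlation: {np.mean(corrs):.3f}")
```
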

9.
Adv Neural Inf Process Syst ; 32: 15696-15705, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782423

ABSTRACT

Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of recurrent networks.

10.
Nat Methods ; 15(10): 805-815, 2018 10.
Article in English | MEDLINE | ID: mdl-30224673

ABSTRACT

Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems (LFADS), a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, LFADS accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics.


Subject(s)
Action Potentials; Algorithms; Models, Neurological; Motor Cortex/physiology; Neurons/physiology; Animals; Humans; Male; Middle Aged; Population Dynamics; Primates
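
The sequential-autoencoder idea can be conveyed with a drastically simplified PyTorch sketch (this is not the published LFADS implementation, which adds inferred inputs, variational inference, and more): an RNN encoder compresses a trial of spikes into an initial condition, an RNN generator unrolls latent dynamics from it, and training maximizes a Poisson likelihood of the observed spikes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqAutoencoder(nn.Module):
    """Toy LFADS-flavored model: encoder -> initial condition -> generator -> rates."""
    def __init__(self, n_neurons, enc_dim=64, gen_dim=64, factor_dim=8):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, enc_dim, bidirectional=True, batch_first=True)
        self.to_g0 = nn.Linear(2 * enc_dim, gen_dim)
        self.generator = nn.GRU(1, gen_dim, batch_first=True)  # input-free: fed zeros
        self.to_factors = nn.Linear(gen_dim, factor_dim)
        self.to_logrates = nn.Linear(factor_dim, n_neurons)

    def forward(self, spikes):                   # spikes: (batch, time, neurons)
        _, h = self.encoder(spikes)              # h: (2, batch, enc_dim)
        g0 = self.to_g0(torch.cat([h[0], h[1]], dim=-1)).unsqueeze(0)
        zeros = spikes.new_zeros(spikes.shape[0], spikes.shape[1], 1)
        g, _ = self.generator(zeros, g0)         # unroll latent dynamics from g0
        factors = self.to_factors(g)             # low-dimensional latent factors
        return self.to_logrates(factors)         # log firing rates per neuron

model = SeqAutoencoder(n_neurons=80)
spikes = torch.poisson(0.1 * torch.ones(16, 100, 80))   # fake binned spike counts
logrates = model(spikes)
loss = F.poisson_nll_loss(logrates, spikes, log_input=True)
loss.backward()
```
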
11.
Neuron ; 98(5): 873-875, 2018 06 06.
Article in English | MEDLINE | ID: mdl-29879388

ABSTRACT

Population dynamics is emerging as a language for understanding high-dimensional neural recordings. Remington et al. (2018) explore how inputs to frontal cortex modulate neural dynamics in order to implement a computation of interest.


Subject(s)
Frontal Lobe; Neurons
13.
Nat Commun ; 7: 13749, 2016 12 13.
Article in English | MEDLINE | ID: mdl-27958268

ABSTRACT

A major hurdle to clinical translation of brain-machine interfaces (BMIs) is that current decoders, which are trained from a small quantity of recent data, become ineffective when neural recording conditions subsequently change. We tested whether a decoder could be made more robust to future neural variability by training it to handle a variety of recording conditions sampled from months of previously collected data as well as synthetic training data perturbations. We developed a new multiplicative recurrent neural network BMI decoder that successfully learned a large variety of neural-to-kinematic mappings and became more robust with larger training data sets. Here we demonstrate that when tested with a non-human primate preclinical BMI model, this decoder is robust under conditions that disabled a state-of-the-art Kalman filter-based decoder. These results validate a new BMI strategy in which accumulated data history is effectively harnessed, and may facilitate reliable BMI use by reducing decoder retraining downtime.


Subject(s)
Brain-Computer Interfaces; Nerve Net; Animals; Brain Mapping; Macaca mulatta; Male
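
The "synthetic training data perturbations" idea can be illustrated with a simple augmentation: randomly dropping and rescaling channels of the neural input while training a decoder. The perturbation types and parameters below are illustrative choices, not the paper's exact recipe.

```python
import numpy as np

def perturb_neural_data(x, rng, p_drop=0.1, gain_sd=0.2):
    """Simulate recording-condition changes on x of shape (time, channels)."""
    x = x.copy()
    n_ch = x.shape[1]
    # Randomly silence some channels, mimicking lost units or electrodes.
    x[:, rng.random(n_ch) < p_drop] = 0.0
    # Randomly rescale the rest, mimicking gain and tuning drift.
    x *= rng.normal(1.0, gain_sd, size=n_ch)
    return x

rng = np.random.default_rng(0)
trial = rng.poisson(2.0, size=(300, 96)).astype(float)  # fake binned counts

# A decoder trained on many perturbed copies (plus months of past sessions)
# sees a far wider range of recording conditions than any single day provides.
augmented_trials = [perturb_neural_data(trial, rng) for _ in range(8)]
```
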
14.
eNeuro ; 3(4)2016.
Article in English | MEDLINE | ID: mdl-27761519

ABSTRACT

Neural activity in monkey motor cortex (M1) and dorsal premotor cortex (PMd) can reflect a chosen movement well before that movement begins. The pattern of neural activity then changes profoundly just before movement onset. We considered the prediction, derived from formal considerations, that the transition from preparation to movement might be accompanied by a large overall change in the neural state that reflects when movement is made rather than which movement is made. Specifically, we examined "components" of the population response: time-varying patterns of activity from which each neuron's response is approximately composed. Amid the response complexity of individual M1 and PMd neurons, we identified robust response components that were "condition-invariant": their magnitude and time course were nearly identical regardless of reach direction or path. These condition-invariant response components occupied dimensions orthogonal to those occupied by the "tuned" response components. The largest condition-invariant component was much larger than any of the tuned components; i.e., it explained more of the structure in individual-neuron responses. This condition-invariant response component underwent a rapid change before movement onset. The timing of that change predicted most of the trial-by-trial variance in reaction time. Thus, although individual M1 and PMd neurons essentially always reflected which movement was made, the largest component of the population response reflected movement timing rather than movement type.


Subject(s)
Motor Activity/physiology; Motor Cortex/physiology; Neurons/physiology; Action Potentials; Animals; Arm/physiology; Electromyography; Macaca mulatta; Male; Microelectrodes; Muscle, Skeletal/physiology; Neuropsychological Tests; Reaction Time; Time Factors
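
One simple way to isolate a condition-invariant component, sketched below on fabricated data, is to average the trial-averaged population response across reach conditions and take its dominant principal component. The published analysis is more careful, so treat this only as an illustration of the decomposition.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fabricated trial-averaged rates: (n_conditions, n_timebins, n_neurons).
rng = np.random.default_rng(0)
rates = rng.normal(size=(27, 200, 150))

# Condition-invariant part: the response averaged across all conditions.
ci = rates.mean(axis=0)                     # (time, neurons)
tuned = rates - ci                          # residual that differs by condition

# Dominant condition-invariant dimension of the population response.
ci_dim = PCA(n_components=1).fit(ci).components_[0]

# Projecting every condition onto that dimension: for a truly
# condition-invariant component, these traces nearly overlap.
projections = rates @ ci_dim                # (conditions, time)
print(f"spread across conditions: {projections.std(axis=0).mean():.3f}")
```
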
15.
Nat Neurosci ; 18(7): 1025-33, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26075643

ABSTRACT

It remains an open question how neural responses in motor cortex relate to movement. We explored the hypothesis that motor cortex reflects dynamics appropriate for generating temporally patterned outgoing commands. To formalize this hypothesis, we trained recurrent neural networks to reproduce the muscle activity of reaching monkeys. Models had to infer dynamics that could transform simple inputs into temporally and spatially complex patterns of muscle activity. Analysis of trained models revealed that the natural dynamical solution was a low-dimensional oscillator that generated the necessary multiphasic commands. This solution closely resembled, at both the single-neuron and population levels, what was observed in neural recordings from the same monkeys. Notably, data and simulations agreed only when models were optimized to find simple solutions. An appealing interpretation is that the empirically observed dynamics of motor cortex may reflect a simple solution to the problem of generating temporally patterned descending commands.


Subject(s)
Motor Activity/physiology; Motor Cortex/physiology; Muscle, Skeletal/physiology; Nerve Net/physiology; Neural Networks, Computer; Neurons/physiology; Animals; Electromyography; Electrophysiological Phenomena; Haplorhini
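
A minimal PyTorch sketch of this modeling recipe: train a recurrent network to map a simple condition cue onto multiphasic target patterns standing in for recorded EMG, with a penalty on hidden activity playing the role of the "simple solutions" constraint. All data and dimensions are fabricated for illustration.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
n_cond, T, n_muscles = 8, 120, 6

# Fabricated multiphasic "EMG" targets, one pattern per reach condition.
t = torch.linspace(0, 1, T)
phases = torch.rand(n_cond, 1, n_muscles)
emg = (torch.sin(2 * math.pi * (3 * t[None, :, None] + phases))
       * torch.exp(-((t[None, :, None] - 0.5) ** 2) / 0.05))

# Simple inputs: a one-hot condition cue held constant over the trial.
inputs = torch.eye(n_cond).unsqueeze(1).repeat(1, T, 1)   # (cond, time, cond)

rnn = nn.RNN(n_cond, 100, batch_first=True, nonlinearity='tanh')
readout = nn.Linear(100, n_muscles)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

for step in range(2000):
    h, _ = rnn(inputs)                   # h: (cond, time, 100) hidden rates
    pred = readout(h)
    # The activity penalty stands in for the regularization toward simple
    # solutions that the abstract reports was needed to match the data.
    loss = ((pred - emg) ** 2).mean() + 1e-4 * (h ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```
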
16.
Curr Opin Neurobiol ; 25: 156-63, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24509098

ABSTRACT

Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to aid in addressing this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.


Subject(s)
Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Prefrontal Cortex/physiology; Animals; Humans
17.
Nature ; 503(7474): 78-84, 2013 Nov 07.
Article in English | MEDLINE | ID: mdl-24201281

ABSTRACT

Prefrontal cortex is thought to have a fundamental role in flexible, context-dependent behaviour, but the exact nature of the computations underlying this role remains largely unknown. In particular, individual prefrontal neurons often generate remarkably complex responses that defy deep understanding of their contribution to behaviour. Here we study prefrontal cortex activity in macaque monkeys trained to flexibly select and integrate noisy sensory inputs towards a choice. We find that the observed complexity and functional roles of single neurons are readily understood in the framework of a dynamical process unfolding at the level of the population. The population dynamics can be reproduced by a trained recurrent neural network, which suggests a previously unknown mechanism for selection and integration of task-relevant inputs. This mechanism indicates that selection and integration are two aspects of a single dynamical process unfolding within the same prefrontal circuits, and potentially provides a novel, general framework for understanding context-dependent computations.


Subject(s)
Macaca mulatta/physiology; Models, Neurological; Prefrontal Cortex/physiology; Animals; Choice Behavior/physiology; Discrimination Learning; Male; Nerve Net/cytology; Nerve Net/physiology; Neurons/physiology; Prefrontal Cortex/cytology
18.
Prog Neurobiol ; 103: 214-22, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23438479

ABSTRACT

Working memory is a crucial component of most cognitive tasks. Its neuronal mechanisms are still unclear despite intensive experimental and theoretical explorations. Most theoretical models of working memory assume both time-invariant neural representations and precise connectivity schemes based on the tuning properties of network neurons. A different, more recent class of models assumes randomly connected neurons that have no tuning to any particular task, and bases task performance purely on adjustment of network readout. Intermediate between these schemes are networks that start out random but are trained by a learning scheme. Experimental studies of a delayed vibrotactile discrimination task indicate that some of the neurons in prefrontal cortex are persistently tuned to the frequency of a remembered stimulus, but the majority exhibit more complex relationships to the stimulus that vary considerably across time. We compare three models, ranging from a highly organized line attractor model to a randomly connected network with chaotic activity, with data recorded during this task. The random network does a surprisingly good job of both performing the task and matching certain aspects of the data. The intermediate model, in which an initially random network is partially trained to perform the working memory task by tuning its recurrent and readout connections, provides a better description, although none of the models matches all features of the data. Our results suggest that prefrontal networks may begin in a random state relative to the task and initially rely on modified readout for task performance. With further training, however, more tuned neurons with less time-varying responses should emerge as the networks become more structured.


Subject(s)
Brain/physiology; Discrimination, Psychological/physiology; Memory, Short-Term/physiology; Models, Neurological; Animals; Humans
19.
Neural Comput ; 25(3): 626-49, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23272922

ABSTRACT

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.


Subject(s)
Artificial Intelligence; Neural Networks, Computer
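
The core numerical idea is compact enough to sketch: minimize the scalar speed q(x) = ½‖F(x) − x‖² over hidden states, where F is one step of the network's dynamics; minima with q ≈ 0 are fixed points, and small-but-nonzero minima are slow points. The toy weights below stand in for a trained RNN, and the tolerance is an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 64
W = rng.normal(0, 1.0 / np.sqrt(N), size=(N, N))   # stand-in for trained weights
b = rng.normal(0, 0.1, size=N)

def F(x):
    """One step of the (toy) RNN with no external input."""
    return np.tanh(W @ x + b)

def q(x):
    """Speed of the dynamics; exactly zero at a fixed point."""
    d = F(x) - x
    return 0.5 * d @ d

# Start the optimizer from several random states (in practice, from states
# actually visited by the network while performing its task).
fixed, slow = [], []
for _ in range(20):
    res = minimize(q, rng.normal(0, 0.5, size=N), method='L-BFGS-B')
    (fixed if res.fun < 1e-10 else slow).append(res.x)
print(f"{len(fixed)} fixed points, {len(slow)} slow points")
```

Linearizing F around each point found this way (as in the Jacobian sketch under entry 6) then reveals the local structure the abstract uses to reverse-engineer the network.
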
20.
PLoS One ; 7(5): e37372, 2012.
Article in English | MEDLINE | ID: mdl-22655041

ABSTRACT

Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this "transfer of learning" is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a "self-sensing" network state, and we compare and contrast this with compressed sensing.


Subject(s)
Neural Networks, Computer; Transfer, Psychology; Algorithms; Feedback
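
For a linear readout fed back instantaneously, the transfer step itself reduces to a one-line weight update, sketched below for a toy echo-state setup: since W x + W_fb (w_out x) = (W + W_fb w_out) x, the feedback loop can be absorbed into the recurrent weights. The network construction is a generic stand-in; only the absorption step follows the abstract, and the paper's treatment of training and sparsity is richer than this.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
W = rng.normal(0, 1.5 / np.sqrt(N), size=(N, N))   # recurrent weights (dense here)
w_fb = rng.uniform(-1, 1, size=(N, 1))             # output-to-network feedback

def run(W_rec, x0, w_out=None, steps=300):
    """Iterate the network; if w_out is given, feed z = w_out @ x back via w_fb."""
    x = x0.copy()
    states = []
    for _ in range(steps):
        z = w_out @ x if w_out is not None else np.zeros(1)
        x = np.tanh(W_rec @ x + w_fb @ z)
        states.append(x.copy())
    return np.array(states)

# Stand-in for a readout trained by, e.g., ridge regression on network states.
w_out = rng.normal(0, 1.0 / np.sqrt(N), size=(1, N))

# Transfer of learning: absorb the output-feedback loop into the recurrence.
W_transferred = W + w_fb @ w_out

# The transferred network reproduces the feedback-driven trajectory exactly,
# without needing any output feedback.
x0 = rng.normal(0, 0.5, size=N)
diff = run(W, x0, w_out=w_out) - run(W_transferred, x0)
print(f"max trajectory difference: {np.abs(diff).max():.2e}")
```
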