Results 1 - 20 of 42
1.
Cell ; 183(5): 1147-1148, 2020 11 25.
Article in English | MEDLINE | ID: mdl-33242414

ABSTRACT

Whittington et al. demonstrate how network architectures defined in a spatial context may be useful for inference on different types of relational knowledge. These architectures allow for learning the structure of the environment and then transferring that knowledge to allow prediction of novel transitions.


Subject(s)
Learning , Memory , Hippocampus
2.
Cell ; 182(6): 1372-1376, 2020 09 17.
Article in English | MEDLINE | ID: mdl-32946777

ABSTRACT

Large scientific projects in genomics and astronomy are influential not because they answer any single question but because they enable investigation of continuously arising new questions from the same data-rich sources. Advances in automated mapping of the brain's synaptic connections (connectomics) suggest that the complicated circuits underlying brain function are ripe for analysis. We discuss benefits of mapping a mouse brain at the level of synapses.


Subject(s)
Brain/physiology , Connectome/methods , Nerve Net/physiology , Neurons/physiology , Synapses/physiology , Animals , Mice
3.
Cell ; 175(3): 736-750.e30, 2018 10 18.
Article in English | MEDLINE | ID: mdl-30270041

ABSTRACT

How the topography of neural circuits relates to their function remains unclear. Although topographic maps exist for sensory and motor variables, they are rarely observed for cognitive variables. Using calcium imaging during virtual navigation, we investigated the relationship between the anatomical organization and functional properties of grid cells, which represent a cognitive code for location during navigation. We found a substantial degree of grid cell micro-organization in mouse medial entorhinal cortex: grid cells and modules all clustered anatomically. Within a module, the layout of grid cells was a noisy two-dimensional lattice in which the anatomical distribution of grid cells largely matched their spatial tuning phases. This micro-arrangement of phases demonstrates the existence of a topographical map encoding a cognitive variable in rodents. It contributes to a foundation for evaluating circuit models of the grid cell network and is consistent with continuous attractor models as the mechanism of grid formation.


Subject(s)
Entorhinal Cortex/cytology , Grid Cells/cytology , Animals , Entorhinal Cortex/physiology , Grid Cells/physiology , Male , Mice , Mice, Inbred C57BL , Nerve Net
4.
Annu Rev Neurosci ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38684081

ABSTRACT

The activity patterns of grid cells form distinctively regular triangular lattices over the explored spatial environment and are largely invariant to visual stimuli, animal movement, and environment geometry. These neurons present numerous fascinating challenges to the curious (neuro)scientist: What are the circuit mechanisms responsible for creating spatially periodic activity patterns from the monotonic input-output responses of single neurons? How and why does the brain encode a local, nonperiodic variable (the allocentric position of the animal) with a periodic, nonlocal code? And are grid cells truly specialized for spatial computations, or do they play a broader role in general cognition? We review efforts to uncover the mechanisms and functional properties of grid cells, highlighting recent progress in the experimental validation of mechanistic grid cell models, and discuss the coding properties and functional advantages of the grid code as suggested by continuous attractor network models of grid cells.

5.
Nature ; 630(8017): 704-711, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38867051

ABSTRACT

A cognitive map is a suitably structured representation that enables novel computations using previous experience; for example, planning a new route in a familiar space1. Work in mammals has found direct evidence for such representations in the presence of exogenous sensory inputs in both spatial2,3 and non-spatial domains4-10. Here we tested a foundational postulate of the original cognitive map theory1,11: that cognitive maps support endogenous computations without external input. We recorded from the entorhinal cortex of monkeys in a mental navigation task that required the monkeys to use a joystick to produce one-dimensional vectors between pairs of visual landmarks without seeing the intermediate landmarks. The ability of the monkeys to perform the task and generalize to new pairs indicated that they relied on a structured representation of the landmarks. Task-modulated neurons exhibited periodicity and ramping that matched the temporal structure of the landmarks and showed signatures of continuous attractor networks12,13. A continuous attractor network model of path integration14 augmented with a Hebbian-like learning mechanism provided an explanation of how the system could endogenously recall landmarks. The model also made an unexpected prediction that endogenous landmarks transiently slow path integration, reset the dynamics and thereby reduce variability. This prediction was borne out in a reanalysis of firing rate variability and behaviour. Our findings link the structured patterns of activity in the entorhinal cortex to the endogenous recruitment of a cognitive map during mental navigation.


Subject(s)
Cognition , Entorhinal Cortex , Macaca mulatta , Models, Neurological , Spatial Navigation , Animals , Male , Cognition/physiology , Entorhinal Cortex/physiology , Entorhinal Cortex/cytology , Macaca mulatta/physiology , Neurons/physiology , Spatial Navigation/physiology , Learning/physiology
6.
Nature ; 603(7902): 667-671, 2022 03.
Article in English | MEDLINE | ID: mdl-35296862

ABSTRACT

Most social species self-organize into dominance hierarchies1,2, which decreases aggression and conserves energy3,4, but it is not clear how individuals know their social rank. We have only begun to learn how the brain represents social rank5-9 and guides behaviour on the basis of this representation. The medial prefrontal cortex (mPFC) is involved in social dominance in rodents7,8 and humans10,11. Yet, precisely how the mPFC encodes relative social rank and which circuits mediate this computation is not known. We developed a social competition assay in which mice compete for rewards, as well as a computer vision tool (AlphaTracker) to track multiple, unmarked animals. A hidden Markov model combined with generalized linear models was able to decode social competition behaviour from mPFC ensemble activity. Population dynamics in the mPFC predicted social rank and competitive success. Finally, we demonstrate that mPFC cells that project to the lateral hypothalamus promote dominance behaviour during reward competition. Thus, we reveal a cortico-hypothalamic circuit by which the mPFC exerts top-down modulation of social dominance.


Subject(s)
Hypothalamus , Prefrontal Cortex , Animals , Hypothalamic Area, Lateral , Mice , Reward , Social Behavior
7.
Nature ; 608(7923): 586-592, 2022 08.
Article in English | MEDLINE | ID: mdl-35859170

ABSTRACT

The ability to associate temporally segregated information and assign positive or negative valence to environmental cues is paramount for survival. Studies have shown that different projections from the basolateral amygdala (BLA) are potentiated following reward or punishment learning1-7. However, we do not yet understand how valence-specific information is routed to the BLA neurons with the appropriate downstream projections, nor do we understand how to reconcile the sub-second timescales of synaptic plasticity8-11 with the longer timescales separating the predictive cues from their outcomes. Here we demonstrate that neurotensin (NT)-expressing neurons in the paraventricular nucleus of the thalamus (PVT) projecting to the BLA (PVT-BLA:NT) mediate valence assignment by exerting NT concentration-dependent modulation in BLA during associative learning. We found that optogenetic activation of the PVT-BLA:NT projection promotes reward learning, whereas PVT-BLA projection-specific knockout of the NT gene (Nts) augments punishment learning. Using genetically encoded calcium and NT sensors, we further revealed that both calcium dynamics within the PVT-BLA:NT projection and NT concentrations in the BLA are enhanced after reward learning and reduced after punishment learning. Finally, we showed that CRISPR-mediated knockout of the Nts gene in the PVT-BLA pathway blunts BLA neural dynamics and attenuates the preference for active behavioural strategies to reward and punishment predictive cues. In sum, we have identified NT as a neuropeptide that signals valence in the BLA, and showed that NT is a critical neuromodulator that orchestrates positive and negative valence assignment in amygdala neurons by extending valence-specific plasticity to behaviourally relevant timescales.


Subject(s)
Basolateral Nuclear Complex , Learning , Neural Pathways , Neurotensin , Punishment , Reward , Basolateral Nuclear Complex/cytology , Basolateral Nuclear Complex/physiology , Calcium/metabolism , Cues , Neuronal Plasticity , Neurotensin/metabolism , Optogenetics , Thalamic Nuclei/cytology , Thalamic Nuclei/physiology
8.
Nat Rev Neurosci ; 23(12): 744-766, 2022 12.
Article in English | MEDLINE | ID: mdl-36329249

ABSTRACT

In this Review, we describe the singular success of attractor neural network models in describing how the brain maintains persistent activity states for working memory, corrects errors and integrates noisy cues. We consider the mechanisms by which simple and forgetful units can organize to collectively generate dynamics on the long timescales required for such computations. We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase notable examples of brain systems in which inherently low-dimensional continuous-attractor dynamics have been concretely and rigorously identified. Thus, it is now possible to conclusively state that the brain constructs and uses such systems for computation. Finally, we highlight recent theoretical advances in understanding how the fundamental trade-offs between robustness and capacity and between structure and flexibility can be overcome by reusing and recombining the same set of modular attractors for multiple functions, so they together produce representations that are structurally constrained and robust but exhibit high capacity and are flexible.
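The persistent-activity states described above can be sketched with a minimal rate-model ring attractor. Everything below (network size, cosine weight profile, saturating nonlinearity, all parameter values) is an illustrative choice, not taken from the Review:

```python
import numpy as np

# N rate units on a ring; cosine-tuned excitation minus uniform inhibition.
N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = 10.0 * (np.cos(theta[:, None] - theta[None, :]) - 0.3) / N

def step(r, inp, dt=0.1):
    """One Euler step of saturating rate dynamics."""
    return r + dt * (-r + np.clip(W @ r + inp, 0.0, 1.0))

r = np.zeros(N)
cue = np.maximum(np.cos(theta - np.pi), 0.0)       # transient cue centered at pi
for _ in range(200):
    r = step(r, cue)
for _ in range(2000):                              # cue removed: bump persists
    r = step(r, 0.0)

center = np.angle(np.sum(r * np.exp(1j * theta)))  # population-vector readout
print(r.max(), center)
```

Long after the cue is removed, the bump's peak rate and population-vector angle hold steady near the cued location, which is the persistent "memory" behavior these forgetful units collectively generate.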


Subject(s)
Brain , Neurons , Humans , Neural Networks, Computer , Memory, Short-Term , Models, Neurological
9.
10.
Neural Comput ; 35(11): 1850-1869, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37725708

ABSTRACT

Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.1%). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs) that use task-agnostic predetermined graphs with sparsity as low as 4%. We study LM-RNNs in a multitask learning setting relevant to cognitive systems neuroscience with a commonly used set of tasks, 20-Cog-tasks (Yang et al., 2019). We show through reductio ad absurdum that 20-Cog-tasks can be solved by a small pool of separated autapses that we can mechanistically analyze and understand. Thus, these tasks fall short of the goal of inducing complex recurrent dynamics and modular structure in RNNs. We next contribute a new cognitive multitask battery, Mod-Cog, consisting of up to 132 tasks that expands by about seven-fold the number of tasks and task complexity of 20-Cog-tasks. Importantly, while autapses can solve the simple 20-Cog-tasks, the expanded task set requires richer neural architectures and continuous attractor dynamics. On these tasks, we show that LM-RNNs with an optimal sparsity result in faster training and better data efficiency than fully connected networks.
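The locality-masking idea lends itself to a short sketch: place units on a 2-D sheet and keep only recurrent weights between units within a wiring radius. The sheet size, radius, and weight scale below are hypothetical stand-ins, not the paper's settings:

```python
import numpy as np

side = 20                                   # 20 x 20 sheet -> 400 units
xs, ys = np.meshgrid(np.arange(side), np.arange(side))
pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
radius = 2.5                                # wiring radius on the sheet
mask = (dist <= radius) & (dist > 0)        # local edges only, no self-loops

rng = np.random.default_rng(0)
W = rng.normal(size=(side**2, side**2)) / np.sqrt(mask.sum(axis=1).mean())
W *= mask                                   # predetermined, task-agnostic sparsity

print(f"connection density: {mask.mean():.1%}")
```

The mask is fixed before any training and never depends on the task, so learning would only adjust the few percent of weights that survive it.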

11.
Proc Natl Acad Sci U S A ; 117(41): 25505-25516, 2020 10 13.
Article in English | MEDLINE | ID: mdl-33008882

ABSTRACT

An elemental computation in the brain is to identify the best in a set of options and report its value. It is required for inference, decision-making, optimization, action selection, consensus, and foraging. Neural computing is considered powerful because of its parallelism; however, it is unclear whether neurons can perform this max-finding operation in a way that improves upon the prohibitively slow optimal serial max-finding computation (which takes [Formula: see text] time for N noisy candidate options) by a factor of N, the benchmark for parallel computation. Biologically plausible architectures for this task are winner-take-all (WTA) networks, where individual neurons inhibit each other so only those with the largest input remain active. We show that conventional WTA networks fail the parallelism benchmark and, worse, in the presence of noise, altogether fail to produce a winner when N is large. We introduce the nWTA network, in which neurons are equipped with a second nonlinearity that prevents weakly active neurons from contributing inhibition. Without parameter fine-tuning or rescaling as N varies, the nWTA network achieves the parallelism benchmark. The network reproduces experimentally observed phenomena like Hick's law without needing an additional readout stage or adaptive N-dependent thresholds. Our work bridges scales by linking cellular nonlinearities to circuit-level decision-making, establishes that distributed computation saturating the parallelism benchmark is possible in networks of noisy, finite-memory neurons, and shows that Hick's law may be a symptom of near-optimal parallel decision-making with noisy input.
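A toy version of the thresholded-inhibition idea (all parameters hypothetical; the paper analyzes noisy units and the scaling with N): units self-excite, but only units whose activity exceeds kappa contribute to the shared inhibition, playing the role of the second nonlinearity described above.

```python
import numpy as np

b = np.array([0.55, 0.62, 0.70, 0.66, 0.80, 1.00, 0.58, 0.73])  # candidate inputs
alpha, beta, kappa, dt = 1.2, 2.0, 0.4, 0.02

x = np.zeros(len(b))
for _ in range(5000):
    inhib = beta * np.sum(x[x > kappa])     # only strongly active units inhibit
    x += dt * (-x + np.maximum(b + alpha * x - inhib, 0.0))

winner = int(np.argmax(x))
print(winner, int(np.sum(x > kappa)))       # single winner = largest input
```

Weakly active units never cross kappa, so they are silenced without ever polluting the inhibitory pool; the unit with the largest input is the only one left active.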


Subject(s)
Decision Making/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Nerve Net/physiology , Nonlinear Dynamics
12.
PLoS Comput Biol ; 16(4): e1007796, 2020 04.
Article in English | MEDLINE | ID: mdl-32343687

ABSTRACT

We shed light on the potential of entorhinal grid cells to efficiently encode variables of dimension greater than two, while remaining faithful to empirical data on their low-dimensional structure. Our model constructs representations of high-dimensional inputs through a combination of low-dimensional random projections and "classical" low-dimensional hexagonal grid cell responses. Without reconfiguration of the recurrent circuit, the same system can flexibly encode multiple variables of different dimensions while maximizing the coding range (per dimension) by automatically trading off dimension with an exponentially large coding range. It achieves high efficiency and flexibility by combining two powerful concepts, modularity and mixed selectivity, in what we call "mixed modular coding". In contrast to previously proposed schemes, the model does not require the formation of higher-dimensional grid responses, a cell-inefficient and rigid mechanism. The firing fields observed in flying bats or climbing rats can be generated by neurons that combine activity from multiple grid modules, each representing higher-dimensional spaces according to our model. The idea expands our understanding of grid cells, suggesting that they could implement a general circuit that generates on-demand coding and memory states for variables in high-dimensional vector spaces.
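A hedged sketch of the projection step in such a mixed modular code: each module applies its own random low-dimensional projection to a higher-dimensional variable and represents the result as a 2-D phase. The dimensions, periods, and projection scheme below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, modules = 4, 6
P = rng.standard_normal((modules, 2, dim))          # per-module random projections
periods = np.array([3.1, 4.3, 5.9, 7.3, 8.7, 9.1])  # illustrative module periods

def phases(v):
    """2-D phase of each module's projected input, each component in [0, 1)."""
    proj = np.einsum('mij,j->mi', P, v)
    return (proj / periods[:, None]) % 1.0

a = rng.standard_normal(dim)
b = rng.standard_normal(dim)
print(phases(a).shape)                    # (6, 2): one 2-D phase per module
print(np.allclose(phases(a), phases(b)))  # distinct inputs -> distinct codes
```

Each module only ever maintains a 2-D phase, yet the collection of modular phases distinguishes points of the higher-dimensional variable, which is the sense in which no higher-dimensional grid response is needed.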


Subject(s)
Computational Biology/methods , Grid Cells , Models, Neurological , Animals , Chiroptera , Cognition , Entorhinal Cortex/physiology , Grid Cells/cytology , Grid Cells/physiology , Hippocampus/physiology , Memory , Rats
13.
Proc Natl Acad Sci U S A ; 109(43): 17645-50, 2012 Oct 23.
Article in English | MEDLINE | ID: mdl-23047704

ABSTRACT

Neural noise limits the fidelity of representations in the brain. This limitation has been extensively analyzed for sensory coding. However, in short-term memory and integrator networks, where noise accumulates and can play an even more prominent role, much less is known about how neural noise interacts with neural and network parameters to determine the accuracy of the computation. Here we analytically derive how the stored memory in continuous attractor networks of probabilistically spiking neurons will degrade over time through diffusion. By combining statistical and dynamical approaches, we establish a fundamental limit on the network's ability to maintain a persistent state: The noise-induced drift of the memory state over time within the network is strictly lower-bounded by the accuracy of estimation of the network's instantaneous memory state by an ideal external observer. This result takes the form of an information-diffusion inequality. We derive some unexpected consequences: Despite the persistence time of short-term memory networks, it does not pay to accumulate spikes for longer than the cellular time-constant to read out their contents. For certain neural transfer functions, the conditions for optimal sensory coding coincide with those for optimal storage, implying that short-term memory may be co-localized with sensory representation.
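The diffusive degradation of the stored memory can be caricatured in a few lines; the noise scale and durations below are arbitrary, and this shows only the drift side of the information-diffusion inequality, not the ideal-observer bound:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, steps, sigma = 2000, 400, 0.05

# Unbiased noise kicks the stored value along the attractor at each step.
drift = np.cumsum(sigma * rng.standard_normal((trials, steps)), axis=1)

var_t = drift.var(axis=0)               # across-trial variance vs. elapsed time
slope = np.polyfit(np.arange(1, steps + 1), var_t, 1)[0]
print(slope, sigma**2)
```

The fitted slope matches sigma squared per step: the across-trial variance of the remembered value grows linearly in time, the signature of diffusion along the attractor.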


Subject(s)
Neurons/physiology , Action Potentials , Poisson Distribution , Probability , Stochastic Processes
14.
Neuron ; 112(4): 661-675.e7, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38091984

ABSTRACT

The sensory cortex amplifies relevant features of external stimuli. This sensitivity and selectivity arise through the transformation of inputs by cortical circuitry. We characterize the circuit mechanisms and dynamics of cortical amplification by making large-scale simultaneous measurements of single cells in awake primates and testing computational models. By comparing network activity in both driven and spontaneous states with models, we identify the circuit as operating in a regime of non-normal balanced amplification. Incoming inputs are strongly but transiently amplified by strong recurrent feedback from the disruption of excitatory-inhibitory balance in the network. Strong inhibition rapidly quenches responses, thereby permitting the tracking of time-varying stimuli.
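A two-dimensional caricature of non-normal amplification (the matrix is purely illustrative, not fit to the recordings): both eigenvalues are stable, yet the strong feedforward-like coupling between the two modes transiently amplifies a deposited input before it decays.

```python
import numpy as np

A = np.array([[-1.0, 8.0],
              [ 0.0, -2.0]])    # eigenvalues -1 and -2: stable, but non-normal

x = np.array([0.0, 1.0])        # brief input deposited in the "feeding" mode
dt = 0.001
norms = []
for _ in range(5000):
    x = x + dt * (A @ x)
    norms.append(float(np.linalg.norm(x)))

print(max(norms), norms[-1])    # transient growth well above 1, then decay
```

A normal matrix with the same eigenvalues would only shrink the state; the off-diagonal coupling is what produces the strong-but-transient response, mirroring how balanced amplification boosts inputs yet still tracks time-varying stimuli.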


Subject(s)
Neocortex , Animals , Neocortex/physiology , Primates , Wakefulness , Parietal Lobe , Neurons/physiology , Models, Neurological
15.
Neuron ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39013467

ABSTRACT

Every day, hundreds of thousands of people undergo general anesthesia. One hypothesis is that anesthesia disrupts dynamic stability-the ability of the brain to balance excitability with the need to be stable and controllable. To test this hypothesis, we developed a method for quantifying changes in population-level dynamic stability in complex systems: delayed linear analysis for stability estimation (DeLASE). Propofol was used to transition animals between the awake state and anesthetized unconsciousness. DeLASE was applied to macaque cortex local field potentials (LFPs). We found that neural dynamics were more unstable in unconsciousness compared with the awake state. Cortical trajectories mirrored predictions from destabilized linear systems. We mimicked the effect of propofol in simulated neural networks by increasing inhibitory tone. This in turn destabilized the networks, as observed in the neural data. Our results suggest that anesthesia disrupts dynamical stability that is required for consciousness.
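The stability-estimation idea can be sketched with a toy autoregressive fit on delay-embedded data; this is a stand-in in the spirit of DeLASE, not the authors' implementation, and the two simulated series are labeled only by loose analogy with more- and less-damped regimes:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(a, T=30000):
    """AR(1) surrogate time series with per-step decay a."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a * y[t - 1] + 0.1 * rng.standard_normal()
    return y

def fitted_spectral_radius(y, d=4):
    """Least-squares AR(d) fit; max |eigenvalue| of its companion matrix."""
    X = np.column_stack([y[d - 1 - k:len(y) - 1 - k] for k in range(d)])
    a, *_ = np.linalg.lstsq(X, y[d:], rcond=None)
    C = np.zeros((d, d))
    C[0] = a
    C[1:, :-1] = np.eye(d - 1)
    return np.abs(np.linalg.eigvals(C)).max()

r_stable = fitted_spectral_radius(simulate(0.90))
r_near_unstable = fitted_spectral_radius(simulate(0.99))
print(r_stable, r_near_unstable)   # the second system sits closer to |eig| = 1
```

The fitted dominant eigenvalue recovers each system's true decay, so its distance from the unit circle serves as a data-driven stability estimate of the kind the abstract describes.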

16.
ArXiv ; 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38106458

ABSTRACT

Work on deep learning-based models of grid cells suggests that grid cells generically and robustly arise from optimizing networks to path integrate, i.e., track one's spatial position by integrating self-velocity signals. In previous work [27], we challenged this path integration hypothesis by showing that deep neural networks trained to path integrate almost always do so, but almost never learn grid-like tuning unless separately inserted by researchers via mechanisms unrelated to path integration. In this work, we restate the key evidence substantiating these insights, then address a response to [27] by authors of one of the path integration hypothesis papers [32]. First, we show that the response misinterprets our work, indirectly confirming our points. Second, we evaluate the response's preferred "unified theory for the origin of grid cells" in trained deep path integrators [31, 33, 34] and show that it is at best "occasionally suggestive," not exact or comprehensive. We finish by considering why assessing model quality through prediction of biological neural activity by regression of activity in deep networks [23] can lead to the wrong conclusions.

17.
Curr Biol ; 31(24): R1552-R1555, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34932958

ABSTRACT

Interview with Ila Fiete of the Massachusetts Institute of Technology, who studies the cellular and synaptic processes in the brain that underlie memory and cognition.

18.
Elife ; 10: 2021 05 24.
Article in English | MEDLINE | ID: mdl-34028354

ABSTRACT

What factors constrain the arrangement of the multiple fields of a place cell? By modeling place cells as perceptrons that act on multiscale periodic grid-cell inputs, we analytically enumerate a place cell's repertoire - how many field arrangements it can realize without external cues while its grid inputs are unique - and derive its capacity - the spatial range over which it can achieve any field arrangement. We show that the repertoire is very large and relatively noise-robust. However, the repertoire is a vanishing fraction of all arrangements, while capacity scales only as the sum of the grid periods so field arrangements are constrained over larger distances. Thus, grid-driven place field arrangements define a large response scaffold that is strongly constrained by its structured inputs. Finally, we show that altering grid-place weights to generate an arbitrary new place field strongly affects existing arrangements, which could explain the volatility of the place code.
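A one-dimensional toy of the perceptron-on-grid-inputs setup above (the periods, spatial range, and one-shot readout rule are all illustrative simplifications): cosine "grid" inputs at three periods feed a threshold unit whose weights are matched to a single location.

```python
import numpy as np

periods = [31, 43, 59]                     # illustrative grid periods (bins)
L = 400                                    # spatial range, in the same bins
x = np.arange(L)
# each module contributes one cosine-tuned cell per phase offset
inputs = np.concatenate([
    np.cos(2 * np.pi * (x[None, :] - np.arange(p)[:, None]) / p)
    for p in periods])                     # shape (31 + 43 + 59, L)

# one-shot Hebbian-style weights for a desired field at position 200
w = inputs[:, 200]
drive = w @ inputs                         # perceptron drive at every position
field = drive > 0.9 * drive.max()          # threshold nonlinearity
print(np.flatnonzero(field))               # a single compact field near 200
```

Although every input is periodic, the summed drive peaks only where all modules' phases align, so the thresholded output is a single compact place field; more elaborate weight choices realize the larger repertoires the paper enumerates.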


Subject(s)
Cues , Hippocampus/physiology , Models, Neurological , Place Cells/physiology , Space Perception , Animals , Computer Simulation , Hippocampus/cytology , Humans , Neural Networks, Computer , Neuronal Plasticity , Numerical Analysis, Computer-Assisted
19.
PLoS Comput Biol ; 5(2): e1000291, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19229307

ABSTRACT

Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.


Subject(s)
Entorhinal Cortex/physiology , Models, Neurological , Nerve Net/physiology , Systems Integration , Action Potentials/physiology , Animals , Motion Perception/physiology , Neural Networks, Computer , Neural Pathways , Nonlinear Dynamics , Rats , Space Perception/physiology , Time Factors
20.
Nat Neurosci ; 23(10): 1286-1296, 2020 10.
Article in English | MEDLINE | ID: mdl-32895567

ABSTRACT

Understanding the mechanisms of neural computation and learning will require knowledge of the underlying circuitry. Because it is difficult to directly measure the wiring diagrams of neural circuits, there has long been an interest in estimating them algorithmically from multicell activity recordings. We show that even sophisticated methods, applied to unlimited data from every cell in the circuit, are biased toward inferring connections between unconnected but highly correlated neurons. This failure to 'explain away' connections occurs when there is a mismatch between the true network dynamics and the model used for inference, which is inevitable when modeling the real world. Thus, causal inference suffers when variables are highly correlated, and activity-based estimates of connectivity should be treated with special caution in strongly connected networks. Finally, performing inference on the activity of circuits pushed far out of equilibrium by a simple low-dimensional suppressive drive might ameliorate inference bias.
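The correlation trap motivating these results can be seen with a deliberately naive estimator (the paper, by contrast, shows the bias survives far more sophisticated inference); the parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000
common = rng.standard_normal(T)                  # shared upstream drive
n1 = common + 0.3 * rng.standard_normal(T)       # neuron 1: no link to neuron 2
n2 = common + 0.3 * rng.standard_normal(T)       # neuron 2: no link to neuron 1

c = np.corrcoef(n1, n2)[0, 1]
print(c)   # ~0.9: a correlation threshold would report a spurious edge
```

The two neurons never connect to each other, yet their shared drive makes them nearly perfectly correlated, so any estimator that fails to explain away the common cause reports an edge that does not exist.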


Subject(s)
Action Potentials , Brain/anatomy & histology , Brain/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Animals , Data Analysis , Humans , Neural Pathways/anatomy & histology , Neural Pathways/physiology