Results 1 - 20 of 29
1.
Cell; 175(5): 1213-1227.e18, 2018 Nov 15.
Article in English | MEDLINE | ID: mdl-30318147

ABSTRACT

Neurons use two main schemes to encode information: rate coding (frequency of firing) and temporal coding (timing or pattern of firing). While the importance of rate coding is well established, it remains controversial whether temporal codes alone are sufficient for controlling behavior. Moreover, the molecular mechanisms underlying the generation of specific temporal codes are enigmatic. Here, we show in Drosophila clock neurons that distinct temporal spike patterns, dissociated from changes in firing rate, encode time-dependent arousal and regulate sleep. From a large-scale genetic screen, we identify the molecular pathways mediating the circadian-dependent changes in ionic flux and spike morphology that rhythmically modulate spike timing. Remarkably, the daytime spiking pattern alone is sufficient to drive plasticity in downstream arousal neurons, leading to increased firing of these cells. These findings demonstrate a causal role for temporal coding in behavior and define a form of synaptic plasticity triggered solely by temporal spike patterns.


Subject(s)
Neuronal Plasticity; Sleep/physiology; Action Potentials; Animals; Circadian Clocks/physiology; Drosophila; Drosophila Proteins/antagonists & inhibitors; Drosophila Proteins/genetics; Drosophila Proteins/metabolism; Models, Neurological; Neurons/metabolism; Optogenetics; Potassium Channels/genetics; Potassium Channels/metabolism; Potassium Channels, Calcium-Activated/metabolism; RNA Interference; RNA, Small Interfering/metabolism; Receptors, N-Methyl-D-Aspartate/metabolism; Signal Transduction; Sodium-Potassium-Exchanging ATPase/antagonists & inhibitors; Sodium-Potassium-Exchanging ATPase/genetics; Sodium-Potassium-Exchanging ATPase/metabolism; Synaptic Transmission
2.
PLoS Biol; 16(10): e2006422, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30365484

ABSTRACT

Temporal analysis of sound is fundamental to auditory processing throughout the animal kingdom. Echolocating bats are powerful models for investigating the underlying mechanisms of auditory temporal processing, as they show microsecond precision in discriminating the timing of acoustic events. However, the neural basis for microsecond auditory discrimination in bats has eluded researchers for decades. Combining extracellular recordings in the midbrain inferior colliculus (IC) and mathematical modeling, we show that microsecond precision in registering stimulus events emerges from synchronous neural firing, revealed through the low variability in the latency of stimulus-evoked extracellular field potentials (EFPs, 200-600 Hz). The temporal precision of the EFP increases with the number of neurons firing in synchrony. Moreover, there is a functional relationship between the temporal precision of the EFP and the spectrotemporal features of the echolocation calls. In addition, the EFP can measure the time difference of simulated echolocation call-echo pairs with microsecond precision. We propose that synchronous firing of populations of neurons operates in diverse species to support temporal analysis for auditory localization and complex sound processing.
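The core claim here, that pooling spikes across synchronously firing neurons sharpens event timing, can be illustrated with a toy simulation (a sketch under the generic assumption of independent Gaussian spike-time jitter, not the paper's biophysical model):

    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 200e-6  # assumed per-neuron spike-time jitter of 200 microseconds

    # Estimate the event time as the mean spike time across N synchronous neurons;
    # the standard deviation of the estimate shrinks roughly as sigma / sqrt(N).
    for n in (1, 10, 100, 1000):
        estimates = rng.normal(0.0, sigma, size=(5000, n)).mean(axis=1)
        print(f"N={n:4d} neurons: timing SD = {estimates.std() * 1e6:7.2f} microseconds")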


Subject(s)
Auditory Perception/physiology; Chiroptera/physiology; Time Perception/physiology; Acoustic Stimulation; Animals; Auditory Pathways/physiology; Biophysical Phenomena; Chiroptera/anatomy & histology; Computer Simulation; Echolocation/physiology; Evoked Potentials, Auditory/physiology; Female; Inferior Colliculi/cytology; Inferior Colliculi/physiology; Male; Models, Neurological; Neurons/physiology; Sound Localization/physiology
3.
Annu Rev Neurosci; 35: 267-85, 2012.
Article in English | MEDLINE | ID: mdl-22462545

ABSTRACT

Attractor networks are a popular computational construct used to model different brain systems. These networks allow elegant computations that are thought to represent a number of aspects of brain function. Although there is good reason to believe that the brain displays attractor dynamics, it has proven difficult to test experimentally whether any particular attractor architecture resides in any particular brain circuit. We review models and experimental evidence for three systems in the rat brain that are presumed to be components of the rat's navigational and memory system. Head-direction cells have been modeled as a ring attractor, grid cells as a plane attractor, and place cells both as a plane attractor and as a point attractor. Whereas the models have proven to be extremely useful conceptual tools, the experimental evidence in their favor, although intriguing, is still mostly circumstantial.
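As a concrete instance of the first architecture, here is a minimal rate-model ring attractor, a generic textbook construction rather than any specific model from the review, in which a transient cue creates an activity bump that persists at the cued direction after the cue is removed:

    import numpy as np

    N = 128                                    # neurons tiling head directions 0..2*pi
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
    # Cosine-tuned local excitation minus uniform inhibition; tanh keeps activity bounded.
    W = (6.0 * np.cos(theta[:, None] - theta[None, :]) - 2.0) / N

    r = np.zeros(N)
    dt, tau = 0.01, 0.1
    for step in range(3000):
        cue = np.exp(np.cos(theta - np.pi)) if step < 300 else 0.0  # transient cue at pi
        r += dt / tau * (-r + np.tanh(W @ r + cue))

    print("bump center (rad):", round(float(theta[np.argmax(r)]), 2))  # stays near pi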


Subject(s)
Brain/physiology; Limbic System/physiology; Models, Neurological; Neural Networks, Computer; Animals; Nonlinear Dynamics
4.
PLoS Comput Biol; 15(1): e1006741, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30682012

ABSTRACT

During spatial navigation, the frequency and timing of spikes from spatial neurons including place cells in hippocampus and grid cells in medial entorhinal cortex are temporally organized by continuous theta oscillations (6-11 Hz). The theta rhythm is regulated by subcortical structures including the medial septum, but it is unclear how spatial information from place cells may reciprocally organize subcortical theta-rhythmic activity. Here we recorded single-unit spiking from a constellation of subcortical and hippocampal sites to study spatial modulation of rhythmic spike timing in rats freely exploring an open environment. Our analysis revealed a novel class of neurons that we termed 'phaser cells,' characterized by a symmetric coupling between firing rate and spike theta-phase. Phaser cells encoded space by assigning distinct phases to allocentric isocontour levels of each cell's spatial firing pattern. In our dataset, phaser cells were predominantly located in the lateral septum, but also in the hippocampus, anteroventral thalamus, lateral hypothalamus, and nucleus accumbens. Unlike the unidirectional late-to-early phase precession of place cells, bidirectional phase modulation acted to return phaser cells to the same theta-phase along a given spatial isocontour, including cells that characteristically shifted to later phases at higher firing rates. Our dynamical models of intrinsic theta-bursting neurons demonstrated that experience-independent temporal coding mechanisms can qualitatively explain (1) the spatial rate-phase relationships of phaser cells and (2) the observed temporal segregation of phaser cells according to phase-shift direction. In open-field phaser cell simulations, competitive learning embedded phase-code entrainment maps into the weights of downstream targets, including path integration networks. Bayesian phase decoding revealed error correction capable of resetting path integration at subsecond timescales. Our findings suggest that phaser cells may instantiate a subcortical theta-rhythmic loop of spatial feedback. We outline a framework in which location-dependent synchrony reconciles internal idiothetic processes with the allothetic reference points of sensory experience.
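Schematically (our notation, not the paper's fitted model), a phaser cell's code can be caricatured as a monotonic rate-to-phase map,

    \varphi(\mathbf{x}) \;=\; \varphi_0 \pm k\,\big[\,r(\mathbf{x}) - r_0\,\big] \pmod{2\pi},

so that every iso-rate contour of the spatial firing map r(x) is tagged with its own theta phase; the plus-or-minus sign distinguishes the negative- and positive-shifting subpopulations described above.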


Subject(s)
Models, Neurological; Neurons/physiology; Spatial Navigation/physiology; Animals; Computational Biology; Cortical Synchronization; Hippocampus/physiology; Male; Rats; Rats, Long-Evans; Theta Rhythm/physiology
5.
Biol Cybern; 114(2): 269-284, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32236692

ABSTRACT

Neurobiological theories of spatial cognition developed with respect to recording data from relatively small and/or simplistic environments compared to animals' natural habitats. It has been unclear how to extend theoretical models to large or complex spaces. Complementarily, in autonomous systems technology, applications have been growing for distributed control methods that scale to large numbers of low-footprint mobile platforms. Animals and many-robot groups must solve common problems of navigating complex and uncertain environments. Here, we introduce the NeuroSwarms control framework to investigate whether adaptive, autonomous swarm control of minimal artificial agents can be achieved by direct analogy to neural circuits of rodent spatial cognition. NeuroSwarms analogizes agents to neurons and swarming groups to recurrent networks. We implemented neuron-like agent interactions in which mutually visible agents operate as if they were reciprocally connected place cells in an attractor network. We attributed a phase state to agents to enable patterns of oscillatory synchronization similar to hippocampal models of theta-rhythmic (5-12 Hz) sequence generation. We demonstrate that multi-agent swarming and reward-approach dynamics can be expressed as a mobile form of Hebbian learning and that NeuroSwarms supports a single-entity paradigm that directly informs theoretical models of animal cognition. We present emergent behaviors including phase-organized rings and trajectory sequences that interact with environmental cues and geometry in large, fragmented mazes. Thus, NeuroSwarms is a model artificial spatial system that integrates autonomous control and theoretical neuroscience to potentially uncover common principles to advance both domains.
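A drastically simplified caricature of the agent-as-neuron analogy follows (the visibility radius, gains, and 8 Hz base rhythm are all assumed for illustration; this is not the published controller). Mutually visible agents synchronize their phases Kuramoto-style and attract one another with phase-gated, Hebbian-like coupling:

    import numpy as np

    rng = np.random.default_rng(2)
    n, dt = 30, 0.05
    pos = rng.uniform(0, 10, (n, 2))          # agent positions in a 10x10 arena
    phase = rng.uniform(0, 2 * np.pi, n)      # per-agent oscillator phase (theta analog)
    omega = 2 * np.pi * 8.0                   # 8 Hz base rhythm (assumed)

    for _ in range(400):
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        visible = (d < 3.0) & (d > 0)         # "mutual visibility" within 3 m (assumed)
        # Hebbian-like attraction between visible pairs, gated by phase alignment
        gate = np.cos(phase[:, None] - phase[None, :]) * visible
        vec = pos[None, :, :] - pos[:, None, :]
        pos += dt * 0.2 * (gate[..., None] * vec).sum(axis=1) / n
        # Kuramoto-style phase synchronization among visible agents
        phase += dt * (omega + 1.5 * (np.sin(phase[None, :] - phase[:, None]) * visible).sum(axis=1))

    print("phase order parameter:", round(float(np.abs(np.exp(1j * phase).mean())), 2))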


Subject(s)
Cognition; Theta Rhythm/physiology; Animals; Hippocampus/physiology; Learning; Models, Neurological; Neural Networks, Computer; Reward; Spatial Memory/physiology
6.
Entropy (Basel); 21(3), 2019 Mar 04.
Article in English | MEDLINE | ID: mdl-33266958

ABSTRACT

Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may facilitate the application of information theory to many practical and theoretical problems.
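For reference, the classical Fisher-based asymptotic that the abstract contrasts with (and that breaks down for discrete variables) is

    I(X;\Theta) \;\approx\; H(\Theta) \;+\; \tfrac{1}{2}\,\mathbb{E}_{\theta}\!\left[\,\log\det\frac{J(\theta)}{2\pi e}\,\right],

where J(θ) is the Fisher information matrix of the population response and H(Θ) the stimulus entropy; the divergence-based approximations proposed here address the discrete case that this formula cannot.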

7.
Neural Comput; 30(4): 885-944, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29342399

ABSTRACT

While Shannon's mutual information has widespread applications in many disciplines, for practical applications it is often difficult to calculate its value accurately for high-dimensional variables because of the curse of dimensionality. This article focuses on effective approximation methods for evaluating mutual information in the context of neural population coding. For large but finite neural populations, we derive several information-theoretic asymptotic bounds and approximation formulas that remain valid in high-dimensional spaces. We prove that optimizing the population density distribution based on these approximation formulas is a convex optimization problem that allows efficient numerical solutions. Numerical simulation results confirmed that our asymptotic formulas were highly accurate for approximating mutual information for large neural populations. In special cases, the approximation formulas are exactly equal to the true mutual information. We also discuss techniques of variable transformation and dimensionality reduction to facilitate computation of the approximations.
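For scale, here is the brute-force Monte Carlo baseline that such closed-form approximations aim to replace; this is a sketch with assumed von Mises tuning curves and Poisson spiking, not one of the paper's formulas:

    import numpy as np

    rng = np.random.default_rng(0)
    N, K, T = 50, 64, 0.1                 # neurons, stimulus bins, window (s)
    theta = np.linspace(-np.pi, np.pi, K, endpoint=False)
    prefs = np.linspace(-np.pi, np.pi, N, endpoint=False)
    # Poisson mean counts: von Mises tuning with ~20 Hz peak (assumed toy population)
    lam = 20.0 * T * np.exp(2.0 * (np.cos(theta[:, None] - prefs[None, :]) - 1.0))

    def mi_monte_carlo(n_samples=4000):
        """Estimate I(theta; r) in bits for a uniform stimulus prior."""
        total = 0.0
        for _ in range(n_samples):
            k = rng.integers(K)                       # draw a stimulus
            r = rng.poisson(lam[k])                   # draw a population response
            ll = r @ np.log(lam).T - lam.sum(axis=1)  # log p(r|theta'); r! terms cancel
            ll -= ll.max()
            post = np.exp(ll)
            post /= post.sum()
            total += np.log2(K * post[k] + 1e-300)    # log [p(theta|r) / p(theta)]
        return total / n_samples

    print(f"Monte Carlo estimate: {mi_monte_carlo():.2f} bits")

The cost of this estimator grows quickly with population size and stimulus dimensionality, which is exactly the regime the asymptotic formulas target.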


Subject(s)
Algorithms; Information Theory; Models, Neurological; Neurons/physiology; Computer Simulation; Humans
8.
J Biol Phys; 44(3): 449-469, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29860641

ABSTRACT

We present a theoretical study of model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response is a set of neural spike timings (roughly the instants of successive action potential peaks) that carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by maximum likelihood estimation, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation property of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series with fixed amplitude and frequency but a randomly drawn phase. Various values of amplitude, stimulus component size, and sample size are applied to examine their effects on the identification process. Results are presented in tabular and graphical forms. Finally, to benchmark this work, we compare our results with those of a study involving the same model, nominal parameters, and stimulus structure, and with another study based on different models.
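The likelihood in question has the standard inhomogeneous-Poisson form: for observed spike times t_1, ..., t_K in (0, T] and a network-predicted firing rate \lambda(t \mid w) with parameters w, one maximizes

    \log L(w) \;=\; \sum_{k=1}^{K} \log \lambda(t_k \mid w) \;-\; \int_0^T \lambda(t \mid w)\,dt,

which rewards high predicted rate at the observed spike instants and penalizes predicted rate everywhere else.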


Subject(s)
Action Potentials; Neural Networks, Computer; Sensory Receptor Cells/physiology; Animals; Electric Stimulation; Models, Neurological; Photic Stimulation; Reaction Time
9.
J Neurophysiol; 116(2): 868-91, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27193320

ABSTRACT

The problem of how the hippocampus encodes both spatial and nonspatial information at the cellular network level remains largely unresolved. Spatial memory is widely modeled through the theoretical framework of attractor networks, but standard computational models can only represent spaces that are much smaller than the natural habitat of an animal. We propose that hippocampal networks are built on a basic unit called a "megamap," or a cognitive attractor map in which place cells are flexibly recombined to represent a large space. Its inherent flexibility gives the megamap a huge representational capacity and enables the hippocampus to simultaneously represent multiple learned memories and naturally carry nonspatial information at no additional cost. On the other hand, the megamap is dynamically stable, because the underlying network of place cells robustly encodes any location in a large environment given a weak or incomplete input signal from the upstream entorhinal cortex. Our results suggest a general computational strategy by which a hippocampal network enjoys the stability of attractor dynamics without sacrificing the flexibility needed to represent a complex, changing world.


Subject(s)
Brain Mapping; Hippocampus/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Spatial Behavior/physiology; Animals; Computer Simulation; Conditioning, Operant; Humans; Memory/physiology; Neural Pathways/physiology
10.
Proc Natl Acad Sci U S A; 109(17): 6716-20, 2012 Apr 24.
Article in English | MEDLINE | ID: mdl-22493275

ABSTRACT

Animals are capable of navigation even in the absence of prominent landmark cues. This behavioral demonstration of path integration is supported by the discovery of place cells and other neurons that show path-invariant response properties even in the dark. That is, under suitable conditions, the activity of these neurons depends primarily on the spatial location of the animal regardless of which trajectory it followed to reach that position. Although many models of path integration have been proposed, no known single theoretical framework can formally accommodate their diverse computational mechanisms. Here we derive a set of necessary and sufficient conditions for a general class of systems that performs exact path integration. These conditions include multiplicative modulation by velocity inputs and a path-invariance condition that limits the structure of connections in the underlying neural network. In particular, for a linear system to satisfy the path-invariance condition, the effective synaptic weight matrices under different velocities must commute. Our theory subsumes several existing exact path integration models as special cases. We use entorhinal grid cells as an example to demonstrate that our framework can provide useful guidance for finding unexpected solutions to the path integration problem. This framework may help constrain future experimental and modeling studies pertaining to a broad class of neural integration systems.
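For the linear case stated above, the argument can be compressed into one display (our notation): if the state obeys \dot{\mathbf{x}} = W(\mathbf{v}(t))\,\mathbf{x} and all velocity-conditioned weight matrices commute, then

    \mathbf{x}(T) \;=\; \exp\!\Big(\int_0^T W(\mathbf{v}(t))\,dt\Big)\,\mathbf{x}(0),

and if W depends linearly on \mathbf{v}, the exponent collapses to W(\Delta) with \Delta = \int_0^T \mathbf{v}\,dt the net displacement. The final state then depends only on where the animal ends up, not on the trajectory taken, which is exact path integration.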


Subject(s)
Nervous System Physiological Phenomena; Animals; Humans; Models, Theoretical
11.
Neural Comput; 26(8): 1542-99, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24877729

ABSTRACT

A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrieval in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented.
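The Lyapunov function invoked in the opening sentence is, in its classical continuous-Hopfield form (for symmetric weights w_ij = w_ji, gains g_i, and inputs I_i):

    E \;=\; -\tfrac{1}{2}\sum_{i,j} w_{ij}\,V_i V_j \;+\; \sum_i \int_0^{V_i} g_i^{-1}(v)\,dv \;-\; \sum_i I_i V_i,

which decreases monotonically along trajectories, so every trajectory settles into a fixed point. It is precisely this guarantee that is lost with the asymmetric, sequence-storing connections analyzed here.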


Subject(s)
Memory; Neural Networks, Computer; Periodicity; Algorithms; Association Learning
12.
J Neurosci; 31(45): 16157-76, 2011 Nov 09.
Article in English | MEDLINE | ID: mdl-22072668

ABSTRACT

The rodent septohippocampal system contains "theta cells," which burst rhythmically at 4-12 Hz, but the functional significance of this rhythm remains poorly understood (Buzsáki, 2006). Theta rhythm commonly modulates the spike trains of spatially tuned neurons such as place (O'Keefe and Dostrovsky, 1971), head direction (Tsanov et al., 2011a), grid (Hafting et al., 2005), and border cells (Savelli et al., 2008; Solstad et al., 2008). An "oscillatory interference" theory has hypothesized that some of these spatially tuned neurons may derive their positional firing from phase interference among theta oscillations with frequencies that are modulated by the speed and direction of translational movements (Burgess et al., 2005, 2007). This theory is supported by studies reporting modulation of theta frequency by movement speed (Rivas et al., 1996; Geisler et al., 2007; Jeewajee et al., 2008a), but modulation of theta frequency by movement direction has never been observed. Here we recorded theta cells from hippocampus, medial septum, and anterior thalamus of freely behaving rats. Theta cell burst frequencies varied as the cosine of the rat's movement direction, and this directional tuning was influenced by landmark cues, in agreement with predictions of the oscillatory interference theory. Computer simulations and mathematical analysis demonstrated how a postsynaptic neuron can detect location-dependent synchrony among inputs from such theta cells, and thereby mimic the spatial tuning properties of place, grid, or border cells. These results suggest that theta cells may serve a high-level computational function by encoding a basis set of oscillatory signals that interfere with one another to synthesize spatial memory representations.
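The directional prediction being tested has a compact form in the oscillatory interference framework (Burgess et al., 2007): the burst frequency of a velocity-controlled theta cell is

    f_{burst}(t) \;=\; f_{\theta} \;+\; \beta\, s(t)\,\cos\big(\phi(t) - \phi_{pref}\big),

where f_\theta is the baseline theta frequency, s(t) and \phi(t) are running speed and movement direction, \beta is a positive gain, and \phi_{pref} is the cell's preferred direction; hence the cosine directional tuning reported here.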


Subject(s)
Action Potentials/physiology; Brain/cytology; Neurons/physiology; Orientation; Space Perception/physiology; Theta Rhythm/physiology; Animals; Biophysics; Brain/physiology; Computer Simulation; Cues; Exploratory Behavior; Male; Models, Neurological; Movement/physiology; Rats; Rats, Long-Evans; Video Recording/methods; Wakefulness
13.
Array (N Y); 15, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36213421

ABSTRACT

Dynamical systems models for controlling multi-agent swarms have demonstrated advances toward resilient, decentralized navigation algorithms. We previously introduced the NeuroSwarms controller, in which agent-based interactions were modeled by analogy to neuronal network interactions, including attractor dynamics and phase synchrony, that have been theorized to operate within hippocampal place-cell circuits in navigating rodents. This complexity precludes linear analyses of stability, controllability, and performance typically used to study conventional swarm models. Further, tuning dynamical controllers by manual or grid-based search is often inadequate due to the complexity of objectives, dimensionality of model parameters, and computational costs of simulation-based sampling. Here, we present a framework for tuning dynamical controller models of autonomous multi-agent systems with Bayesian optimization. Our approach utilizes a task-dependent objective function to train Gaussian process surrogate models to achieve adaptive and efficient exploration of a dynamical controller model's parameter space. We demonstrate this approach by studying an objective function selecting for NeuroSwarms behaviors that cooperatively localize and capture spatially distributed rewards under time pressure. We generalized task performance across environments by combining scores for simulations in multiple mazes with distinct geometries. To validate search performance, we compared high-dimensional clustering for high- vs. low-likelihood parameter points by visualizing sample trajectories in 2-dimensional embeddings. Our findings show that adaptive, sample-efficient evaluation of the self-organizing behavioral capacities of complex systems, including dynamical swarm controllers, can accelerate the translation of neuroscientific theory to applied domains.
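A minimal version of the surrogate-model loop looks as follows (scikit-learn Gaussian process with an expected-improvement acquisition; the objective, dimensionality, and bounds are placeholders rather than the NeuroSwarms task):

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(3)

    def objective(x):                      # placeholder for a swarm-simulation score
        return -np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

    dim, n_init, n_iter = 4, 8, 40         # controller-parameter dimension (assumed)
    X = rng.uniform(0, 1, (n_init, dim))   # parameters scaled to the unit cube
    y = np.array([objective(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(0, 1, (2048, dim))               # random candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        best = y.max()
        z = (mu - best) / np.maximum(sd, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        x_next = cand[np.argmax(ei)]                        # most promising parameters
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))

    print("best score:", round(float(y.max()), 4), "at", X[y.argmax()].round(3))

In the setting described above, objective() would be replaced by a full swarm-simulation score combined across mazes with distinct geometries.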

14.
Sci Rep; 12(1): 3210, 2022 Feb 25.
Article in English | MEDLINE | ID: mdl-35217679

ABSTRACT

Insect neural systems are a promising source of inspiration for new navigation algorithms, especially on low size, weight, and power platforms. There have been unprecedented recent neuroscience breakthroughs with Drosophila in behavioral and neural imaging experiments as well as the mapping of detailed connectivity of neural structures. General mechanisms for learning orientation in the central complex (CX) of Drosophila have been investigated previously; however, it is unclear how these underlying mechanisms extend to cases where there is translation through an environment (beyond only rotation), which is critical for navigation in robotic systems. Here, we develop a CX neural connectivity-constrained model that performs sensor fusion, as well as unsupervised learning of visual features for path integration; we demonstrate the viability of this circuit for use in robotic systems in simulated and physical environments. Furthermore, we propose a theoretical understanding of how distributed online unsupervised network weight modification can be leveraged for learning in a trajectory through an environment by minimizing orientation estimation error. Overall, our results may enable a new class of CX-derived low power robotic navigation algorithms and lead to testable predictions to inform future neuroscience experiments.


Subject(s)
Education, Distance; Algorithms; Animals; Drosophila; Insecta; Nervous System
15.
Neural Comput; 23(9): 2242-88, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21671794

ABSTRACT

The stimulus-response relationship of many sensory neurons is nonlinear, but fully quantifying this relationship by a complex nonlinear model may require too much data to be experimentally tractable. Here we present a theoretical study of a general two-stage computational method that may help to significantly reduce the number of stimuli needed to obtain an accurate mathematical description of nonlinear neural responses. Our method of active data collection first adaptively generates stimuli that are optimal for estimating the parameters of competing nonlinear models and then uses these estimates to generate stimuli online that are optimal for discriminating these models. We applied our method to simple hierarchical circuit models, including nonlinear networks built on the spatiotemporal or spectral-temporal receptive fields, and confirmed that collecting data using our two-stage adaptive algorithm was far more effective for estimating and comparing competing nonlinear sensory processing models than standard nonadaptive methods using random stimuli.
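The second, model-discrimination stage can be caricatured in a few lines: among candidate stimuli, present the one on which the competing fitted models disagree most. This sketch uses a generic squared-difference criterion and assumed model forms; the paper's actual designs optimize information-theoretic objectives online:

    import numpy as np

    rng = np.random.default_rng(4)
    w_a, w_b = rng.normal(size=50), rng.normal(size=50)   # assumed fitted parameters

    def model_a(s):                       # competing response model 1: rectified linear
        return np.maximum(s @ w_a, 0.0)

    def model_b(s):                       # competing response model 2: softplus
        return np.log1p(np.exp(s @ w_b))

    candidates = rng.normal(size=(5000, 50))              # candidate stimulus pool
    gap = (model_a(candidates) - model_b(candidates)) ** 2
    s_next = candidates[np.argmax(gap)]                   # most diagnostic stimulus
    print("largest predicted-response gap:", round(float(gap.max()), 2))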


Subject(s)
Algorithms; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Nonlinear Dynamics
16.
Neural Comput; 22(1): 1-47, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19842986

ABSTRACT

It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.
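The special role of those three gain families can be seen in a one-line identity (our reconstruction of the intuition, in simplified notation): for a power-law gain g(x) = x^a, scaling a hidden unit's incoming weights is exactly absorbed by rescaling its outgoing weight,

    g(c\,x) = c^{a}\,g(x) \quad\Longrightarrow\quad (\mathbf{w}_{in},\, w_{out}) \mapsto (c\,\mathbf{w}_{in},\; c^{-a}\,w_{out}),

which leaves the network's input-output map unchanged for every c > 0, giving a continuous family of structurally distinct but functionally equivalent networks. Exponential gains admit the analogous trade between thresholds and output scales (g(x + c) = e^{c}\,g(x)), and logarithmic gains between input scales and downstream biases (g(c\,x) = g(x) + \log c).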


Subject(s)
Central Nervous System/physiology; Nerve Net/physiology; Neural Networks, Computer; Neural Pathways/physiology; Neurons/physiology; Synaptic Transmission/physiology; Action Potentials/physiology; Algorithms; Mathematical Computing; Mathematical Concepts; Models, Neurological; Synapses/physiology
17.
Front Comput Neurosci; 13: 96, 2019.
Article in English | MEDLINE | ID: mdl-32038213

ABSTRACT

Head-direction cells have been found in several areas of the mammalian brain. The firing rate of an ideal head-direction cell reaches its peak value only when the animal's head points in a specific direction, and this preferred direction stays the same regardless of spatial location. In this paper we combine mathematical analytical techniques and numerical simulations to fully analyze the equilibrium states of a generic ring attractor network, which is a widely used modeling framework for the head-direction system. Under specific conditions, all solutions of the ring network are bounded, and there exists a Lyapunov function that guarantees the stability of the network for any given inputs, which may come from multiple sources in the biological system, including self-motion information for inertially based updating and landmark information for calibration. We focus on the first few terms of the Fourier series of the ring network to explicitly solve for all possible equilibrium states, followed by a stability analysis based on small perturbations. In particular, these equilibrium states include the standard single-peaked activity pattern as well as a double-peaked activity pattern, whose existence has not been established experimentally but which has testable implications. To our surprise, we have also found an asymmetric equilibrium activity profile even when the network connectivity is strictly symmetric. Finally, we examine how these different equilibrium solutions depend on the network parameters and obtain the phase diagrams in the parameter space of the ring network.
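A generic form of the ring network under analysis (notation assumed) is

    \tau\,\partial_t u(\theta, t) \;=\; -u(\theta, t) \;+\; \int_{-\pi}^{\pi} w(\theta - \theta')\, f\big(u(\theta', t)\big)\,\frac{d\theta'}{2\pi} \;+\; I(\theta, t), \qquad w(\theta) \;=\; w_0 + w_1 \cos\theta + w_2 \cos 2\theta,

with equilibria sought among low-order Fourier profiles u(\theta) = a_0 + a_1\cos(\theta - \theta_1) + a_2\cos 2(\theta - \theta_2); it is the second harmonic that admits the double-peaked solutions mentioned above.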

18.
J Neurosci; 27(12): 3211-29, 2007 Mar 21.
Article in English | MEDLINE | ID: mdl-17376982

ABSTRACT

The dorsomedial entorhinal cortex (dMEC) of the rat brain contains a remarkable population of spatially tuned neurons called grid cells (Hafting et al., 2005). Each grid cell fires selectively at multiple spatial locations, which are geometrically arranged to form a hexagonal lattice that tiles the surface of the rat's environment. Here, we show that grid fields can combine with one another to form moiré interference patterns, referred to as "moiré grids," that replicate the hexagonal lattice over an infinite range of spatial scales. We propose that dMEC grids are actually moiré grids formed by interference between much smaller "theta grids," which are hypothesized to be the primary source of movement-related theta rhythm in the rat brain. The formation of moiré grids from theta grids obeys two scaling laws, referred to as the length and rotational scaling rules. The length scaling rule appears to account for firing properties of grid cells in layer II of dMEC, whereas the rotational scaling rule can better explain properties of layer III grid cells. Moiré grids built from theta grids can be combined to form yet larger grids and can also be used as basis functions to construct memory representations of spatial locations (place cells) or visual images. Memory representations built from moiré grids are automatically endowed with size invariance by the scaling properties of the moiré grids. We therefore propose that moiré interference between grid fields may constitute an important principle of neural computation underlying the construction of scale-invariant memory representations.
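The two scaling rules follow from standard moiré geometry (conventions assumed): for two grids of common spacing \lambda, one dilated by a factor (1 + \epsilon) or rotated by a small angle \alpha, the interference pattern is itself a hexagonal grid with spacing

    \Lambda_{length} \;=\; \frac{1 + \epsilon}{\epsilon}\,\lambda \qquad\text{and}\qquad \Lambda_{rot} \;=\; \frac{\lambda}{2\sin(\alpha/2)},

so slight dilations or rotations of fine-scale theta grids can generate moiré grids of arbitrarily large spatial period.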


Subject(s)
Memory/physiology; Models, Neurological; Moire Topography/methods; Theta Rhythm/methods; Animals; Computational Biology/methods; Entorhinal Cortex/physiology; Male; Rats; Rats, Long-Evans
19.
Hippocampus; 18(12): 1239-55, 2008.
Article in English | MEDLINE | ID: mdl-19021259

ABSTRACT

As a rat navigates through a familiar environment, its position in space is encoded by firing rates of place cells and grid cells. Oscillatory interference models propose that this positional firing rate code is derived from a phase code, which stores the rat's position as a pattern of phase angles between velocity-modulated theta oscillations. Here we describe a three-stage network model, which formalizes the computational steps that are necessary for converting phase-coded position signals (represented by theta oscillations) into rate-coded position signals (represented by grid cells and place cells). The first stage of the model proposes that the phase-coded position signal is stored and updated by a bank of ring attractors, like those that have previously been hypothesized to perform angular path integration in the head-direction cell system. We show analytically how ring attractors can serve as central pattern generators for producing velocity-modulated theta oscillations, and we propose that such ring attractors may reside in subcortical areas where hippocampal theta rhythm is known to originate. In the second stage of the model, grid fields are formed by oscillatory interference between theta cells residing in different (but not the same) ring attractors. The model's third stage assumes that hippocampal neurons generate Gaussian place fields by computing weighted sums of inputs from a basis set of many grid fields. Here we show that under this assumption, the spatial frequency spectrum of the Gaussian place field defines the vertex spacings of grid cells that must provide input to the place cell. This analysis generates a testable prediction that grid cells with large vertex spacings should send projections to the entire hippocampus, whereas grid cells with smaller vertex spacings may project more selectively to the dorsal hippocampus, where place fields are smallest.
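Stage two rests on the elementary beat identity

    \cos(2\pi f_1 t) + \cos(2\pi f_2 t) \;=\; 2\,\cos\!\big(\pi (f_1 - f_2)\,t\big)\,\cos\!\big(\pi (f_1 + f_2)\,t\big),

so summing two theta oscillations yields a carrier near the mean frequency whose envelope repeats at the difference frequency; when f_1 - f_2 is proportional to the animal's velocity along a preferred direction, the envelope peaks recur at fixed spatial intervals, producing the periodic firing from which grid fields are built.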


Subject(s)
Action Potentials/physiology; Biological Clocks/physiology; Entorhinal Cortex/physiology; Nerve Net/physiology; Neurons/physiology; Theta Rhythm; Animals; Computer Simulation; Entorhinal Cortex/cytology; Hippocampus/cytology; Hippocampus/physiology; Nerve Net/cytology; Neural Pathways/cytology; Neural Pathways/physiology; Normal Distribution; Rats; Space Perception/physiology; Synaptic Transmission/physiology
20.
J Math Neurosci; 8(1): 6, 2018 May 16.
Article in English | MEDLINE | ID: mdl-29767380

ABSTRACT

The theory of attractor neural networks has been influential in our understanding of the neural processes underlying spatial, declarative, and episodic memory. Many theoretical studies focus on the inherent properties of an attractor, such as its structure and capacity. Relatively little is known about how an attractor neural network responds to external inputs, which often carry conflicting information about a stimulus. In this paper we analyze the behavior of an attractor neural network driven by two conflicting external inputs. Our focus is on analyzing the emergent properties of the megamap model, a quasi-continuous attractor network in which place cells are flexibly recombined to represent a large spatial environment. In this model, the system shows a sharp transition from the winner-take-all mode, which is characteristic of standard continuous attractor neural networks, to a combinatorial mode in which the equilibrium activity pattern combines embedded attractor states in response to conflicting external inputs. We derive a numerical test for determining the operational mode of the system a priori. We then derive a linear transformation from the full megamap model with thousands of neurons to a reduced 2-unit model that has similar qualitative behavior. Our analysis of the reduced model and explicit expressions relating the parameters of the reduced model to the megamap elucidate the conditions under which the combinatorial mode emerges and the dynamics in each mode given the relative strength of the attractor network and the relative strength of the two conflicting inputs. Although we focus on a particular attractor network model, we describe a set of conditions under which our analysis can be applied to more general attractor neural networks.
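As a rough sketch of the kind of 2-unit reduction described (our generic construction, not the paper's explicit transformation), let each unit stand for one embedded attractor state receiving one of the two conflicting inputs:

    \tau\,\dot{u}_1 = -u_1 + f\big(J_s u_1 - J_c u_2 + I_1\big), \qquad \tau\,\dot{u}_2 = -u_2 + f\big(J_s u_2 - J_c u_1 + I_2\big),

with J_s the self-reinforcement of an attractor state, J_c the cross-suppression between states, and I_1, I_2 the conflicting input strengths. Strong cross-suppression forces winner-take-all equilibria; weaker cross-suppression permits equilibria with both u_1, u_2 > 0, i.e., the combinatorial mode.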
