Results 1 - 20 of 27
1.
Front Comput Neurosci ; 18: 1276292, 2024.
Article in English | MEDLINE | ID: mdl-38707680

ABSTRACT

Introduction: Recent work on bats flying over long distances has revealed that single hippocampal cells have multiple place fields of different sizes. At the network level, a multi-scale, multi-field place cell code outperforms classical single-scale, single-field place codes, yet the performance boundaries of such a code remain an open question. In particular, it is unknown how general multi-field codes compare to a highly regular grid code, in which cells form distinct modules with different scales.

Methods: In this work, we address the coding properties of theoretical spatial coding models with rigorous analyses of comprehensive simulations. Starting from a multi-scale, multi-field network, we performed evolutionary optimization. The resulting multi-field networks sometimes retained the multi-scale property at the single-cell level but most often converged to a single scale, with all place fields in a given cell having the same size. We compared the results against a single-scale single-field code and a one-dimensional grid code, focusing on two main characteristics: the performance of the code itself and the dynamics of the network generating it.

Results: Our simulation experiments revealed that, under normal conditions, a regular grid code outperforms all other codes with respect to decoding accuracy, achieving a given precision with fewer neurons and fields. In contrast, multi-field codes are more robust against noise and lesions, such as random drop-out of neurons, given that the significantly higher number of fields provides redundancy. Contrary to our expectations, the network dynamics of all models, from the original multi-scale models before optimization to the multi-field models that resulted from optimization, did not maintain activity bumps at their original locations when a position-specific external input was removed.

Discussion: Optimized multi-field codes appear to strike a compromise between a place code and a grid code that reflects a trade-off between accurate positional encoding and robustness. Surprisingly, the recurrent neural network models we implemented and optimized for either multi- or single-scale, multi-field codes did not intrinsically produce a persistent "memory" of attractor states. These models, therefore, were not continuous attractor networks.
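The decoding comparison in the Results can be illustrated with a minimal sketch (not the recurrent-network models used in the study): independent Poisson neurons with sum-of-Gaussians place fields on a 1D track are decoded by maximum likelihood, and the median error is compared between a single-field and a multi-field code. All parameters (track length, cell counts, field widths, rates) are made up for illustration.

```python
# Toy comparison of single-field vs multi-field place codes (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
L = 200.0                                   # track length (cm), assumed
x_grid = np.linspace(0.0, L, 400)           # candidate positions for decoding

def make_tuning(n_cells, n_fields, width, peak_rate):
    """Sum-of-Gaussians tuning curves; field centres drawn uniformly."""
    centres = rng.uniform(0, L, size=(n_cells, n_fields))
    d = x_grid[None, None, :] - centres[:, :, None]
    return peak_rate * np.exp(-0.5 * (d / width) ** 2).sum(axis=1) + 0.1

def decode_error(rates, n_trials=500, dt=0.5):
    """Poisson spike counts at a random true position -> ML position estimate."""
    errs = []
    for _ in range(n_trials):
        true_idx = rng.integers(len(x_grid))
        counts = rng.poisson(rates[:, true_idx] * dt)
        # Poisson log-likelihood over all candidate positions
        loglik = counts @ np.log(rates * dt) - (rates * dt).sum(axis=0)
        errs.append(abs(x_grid[np.argmax(loglik)] - x_grid[true_idx]))
    return float(np.median(errs))

single = make_tuning(n_cells=50, n_fields=1, width=10.0, peak_rate=20.0)
multi  = make_tuning(n_cells=50, n_fields=8, width=5.0,  peak_rate=20.0)
print("median decoding error, single-field code:", decode_error(single))
print("median decoding error, multi-field code :", decode_error(multi))
```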

2.
eNeuro ; 9(5), 2022.
Article in English | MEDLINE | ID: mdl-36216507

ABSTRACT

Dendritic spines are submicron, subcellular compartments whose shape is defined by actin filaments and associated proteins. Accurately mapping the cytoskeleton is a challenge, given the small size of its components. It remains unclear whether the actin-associated structures analyzed in dendritic spines of neurons in vitro apply to dendritic spines of intact, mature neurons in situ. Here, we combined advanced preparative methods with multitilt serial section electron microscopy (EM) tomography and computational analysis to reveal the full three-dimensional (3D) internal architecture of spines in the intact brains of male mice at nanometer resolution. We compared hippocampal (CA1) pyramidal cells and cerebellar Purkinje cells in terms of the length distribution and connectivity of filaments, their branching angles and absolute orientations, and the elementary loops formed by the network. Despite differences in shape and size across spines and between spine heads and necks, the internal organization was remarkably similar in both neuron types and largely homogeneous throughout the spine volume. In the tortuous mesh of highly branched and interconnected filaments, branches exhibited no preferred orientation except in the immediate vicinity of the cell membrane. We found that new filaments preferentially split off from the convex side of a bending filament, consistent with the behavior of Arp2/3-mediated branching of actin under mechanical deformation. Based on the quantitative analysis, the spine cytoskeleton is likely subject to considerable mechanical force in situ.
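One of the quantitative steps mentioned above, measuring branching angles between filaments, can be sketched as follows. The coordinates are hypothetical; the real analysis operates on filament segments traced from the EM tomograms.

```python
# Branching angle between a parent and a daughter filament segment that
# share a branch point (illustrative coordinates, nanometres assumed).
import numpy as np

def branching_angle(branch_point, tip_parent, tip_daughter):
    """Angle (degrees) between parent and daughter filament directions."""
    v1 = np.asarray(tip_parent) - np.asarray(branch_point)
    v2 = np.asarray(tip_daughter) - np.asarray(branch_point)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(branching_angle([0, 0, 0], [50, 10, 0], [35, 40, 5]))
```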


Subject(s)
Actins , Dendritic Spines , Animals , Male , Mice , Dendritic Spines/metabolism , Actins/metabolism , Cytoskeleton/metabolism , Hippocampus/metabolism , Neurons/metabolism
3.
J Neurosci ; 40(23): 4512-4524, 2020 06 03.
Article in English | MEDLINE | ID: mdl-32332120

ABSTRACT

Principal neurons in rodent medial entorhinal cortex (MEC) generate high-frequency bursts during natural behavior. While in vitro studies point to potential mechanisms that could support such burst sequences, it remains unclear whether these mechanisms are effective under in vivo conditions. In this study, we focused on the membrane-potential dynamics immediately following action potentials (APs), as measured in whole-cell recordings from male mice running in virtual corridors (Domnisoru et al., 2013). These afterpotentials consisted either of a hyperpolarization, an extended ramp-like shoulder, or a depolarization reminiscent of depolarizing afterpotentials (DAPs) recorded in vitro in MEC principal neurons. Next, we correlated the afterpotentials with the cells' propensity to fire bursts. All DAP cells with known location resided in Layer II, generated bursts, and their interspike intervals (ISIs) were typically between 5 and 15 ms. The ISI distributions of Layer-II cells without DAPs peaked sharply at around 4 ms and varied only minimally across that group. This dichotomy in burst behavior is explained by cell-group-specific DAP dynamics. The same two groups of bursting neurons also emerged when we clustered extracellular spike-train autocorrelations measured in real 2D arenas (Latuske et al., 2015). Apart from slight variations in grid spacing, no difference in the spatial coding properties of the grid cells across all three groups was discernible. Layer III neurons were only sparsely bursting (SB) and had no DAPs. As various mechanisms for modulating the ion channels underlying DAPs exist, our results suggest that temporal features of MEC activity can be altered while maintaining the cells' overall spatial tuning characteristics.

SIGNIFICANCE STATEMENT: Depolarizing afterpotentials (DAPs) are frequently observed in principal neurons from slice preparations of rodent medial entorhinal cortex (MEC), but their functional role in vivo is unknown. Analyzing whole-cell data from mice running on virtual tracks, we show that DAPs do occur during behavior. Cells with prominent DAPs are found in Layer II; their interspike intervals (ISIs) reflect DAP time-scales. In contrast, neither the rarely bursting cells in Layer III, nor the high-frequency bursters in Layer II, have a DAP. Extracellular recordings from mice exploring real 2D arenas demonstrate that grid cells within these three groups have similar spatial coding properties. We conclude that DAPs shape the temporal response characteristics of principal neurons in MEC with little effect on spatial properties.
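As a small illustration of the ISI-based summary used to separate the cell groups, the sketch below computes inter-spike intervals from a spike-time array and the fraction falling into an assumed 5-15 ms burst window; the spike times are made up.

```python
# ISI distribution and burst-window fraction from a spike train (illustrative).
import numpy as np

def isi_summary(spike_times_s, burst_window_ms=(5.0, 15.0)):
    isis_ms = np.diff(np.sort(spike_times_s)) * 1000.0
    lo, hi = burst_window_ms
    frac_burst = np.mean((isis_ms >= lo) & (isis_ms <= hi))
    return isis_ms, frac_burst

spikes = np.array([0.000, 0.006, 0.013, 0.150, 0.158, 0.600])  # hypothetical
isis, frac = isi_summary(spikes)
print("ISIs (ms):", isis, " fraction in 5-15 ms window:", frac)
```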


Subject(s)
Action Potentials/physiology , Entorhinal Cortex/cytology , Entorhinal Cortex/physiology , Grid Cells/physiology , Animals , Male , Membrane Potentials/physiology , Mice , Mice, Inbred C57BL , Principal Component Analysis/methods
4.
Hippocampus ; 30(4): 367-383, 2020 04.
Article in English | MEDLINE | ID: mdl-32045073

ABSTRACT

Grid cells in medial entorhinal cortex are notoriously variable in their responses, despite the striking hexagonal arrangement of their spatial firing fields. Indeed, when the animal moves through a firing field, grid cells often fire much more vigorously than predicted or do not fire at all. The source of this trial-to-trial variability is not completely understood. By analyzing grid-cell spike trains from mice running in open arenas and on linear tracks, we characterize the phenomenon of "missed" firing fields using the statistical theory of zero inflation. We find that one major cause of grid-cell variability lies in the spatial representation itself: firing fields are not as strongly anchored to spatial location as the averaged grid suggests. In addition, grid fields from different cells drift together from trial to trial, regardless of whether the environment is real or virtual, or whether the animal moves in light or darkness. Spatial realignment across trials sharpens the grid representation, yielding firing fields that are more pronounced and significantly narrower. These findings indicate that ensembles of grid cells encode relative position more reliably than absolute position.
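The zero-inflation analysis can be illustrated with a toy fit: a zero-inflated Poisson model of per-traversal spike counts, estimated by a coarse grid search over the likelihood. This is a sketch, not the estimator used in the study, and the counts are invented.

```python
# Zero-inflated Poisson fit to per-traversal spike counts (illustrative).
import numpy as np
from scipy.stats import poisson

counts = np.array([0, 0, 7, 5, 0, 9, 6, 0, 8, 0, 4, 0])   # spikes per field traversal

def zip_loglik(pi, lam, k):
    """Log-likelihood of a zero-inflated Poisson: P(0) = pi + (1-pi) e^{-lam}."""
    p = (1 - pi) * poisson.pmf(k, lam)
    p = np.where(k == 0, pi + p, p)
    return np.sum(np.log(p))

pis  = np.linspace(0.0, 0.95, 96)
lams = np.linspace(0.1, 15.0, 150)
ll = np.array([[zip_loglik(pi, lam, counts) for lam in lams] for pi in pis])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print("zero-inflation pi =", round(pis[i], 2), " in-field rate lambda =", round(lams[j], 2))
```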


Subject(s)
Action Potentials/physiology , Entorhinal Cortex/cytology , Entorhinal Cortex/physiology , Grid Cells/physiology , Animals , Male , Mice , Mice, Inbred C57BL
5.
J Neurosci ; 39(15): 2847-2859, 2019 04 10.
Article in English | MEDLINE | ID: mdl-30692223

ABSTRACT

Insects and vertebrates harbor specific neurons that encode the animal's head direction (HD) and provide an internal compass for spatial navigation. Each HD cell fires most strongly in one preferred direction. As the animal turns its head, however, HD cells in the rat anterodorsal thalamic nucleus (ADN) and other brain areas already fire before their preferred direction is reached, as if the neurons anticipated the future HD. This phenomenon has been explained at a mechanistic level, but a functional interpretation is still missing. To close this gap, we use a computational approach based on the movement statistics of male rats and a simple model for the neural responses within the ADN HD network. Network activity is read out using population vectors in a biologically plausible manner, so that only past spikes are taken into account. We find that anticipatory firing improves the representation of the present HD by reducing the motion-induced temporal bias inherent in causal decoding. The amount of anticipation observed in ADN enhances the precision of the HD compass read-out by up to 40%. More generally, our theoretical framework predicts that neural integration times not only reflect biophysical constraints, but also the statistics of behaviorally relevant stimuli; in particular, anticipatory tuning should be found wherever neurons encode sensory signals that change gradually in time.

SIGNIFICANCE STATEMENT: Across different brain regions, populations of noisy neurons encode dynamically changing stimuli. Decoding a time-varying stimulus from the population response involves a trade-off: for short read-out times, stimulus estimates are unreliable as the number of stochastic spikes is small; for long read-outs, estimates are biased because they lag behind the true stimulus. We show that optimal decoding of temporally correlated stimuli not only relies on finding the right read-out time window but requires neurons to anticipate future stimulus values. We apply this general framework to the rodent head-direction system and show that the experimentally observed anticipation of future head directions can be explained at a quantitative level from the neuronal tuning properties, network size, and the animal's head-movement statistics.
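The causal population-vector read-out and the effect of anticipatory tuning can be sketched as follows. The tuning parameters, window length, and 40 ms anticipation are assumptions chosen for illustration, not values fitted to ADN data.

```python
# Causal population-vector decoding of head direction (HD) with anticipatory tuning.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 60
pref = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)   # preferred HDs
kappa, peak_rate = 4.0, 40.0                                 # von Mises tuning
dt, window = 0.001, 0.080                                    # 1 ms bins, 80 ms read-out
anticipation = 0.040                                         # 40 ms lead (assumed)

t = np.arange(0.0, 2.0, dt)
true_hd = (2 * np.pi * 0.5 * t) % (2 * np.pi)                # steady turn at 0.5 rev/s

# Each cell responds to the HD `anticipation` seconds in the future.
future_hd = np.interp(t + anticipation, t, np.unwrap(true_hd)) % (2 * np.pi)
rates = peak_rate * np.exp(kappa * (np.cos(future_hd[None, :] - pref[:, None]) - 1))
spikes = rng.poisson(rates * dt)

# Causal population vector: only spikes in the trailing window contribute.
n_win = int(round(window / dt))
kernel = np.ones(n_win)
counts = np.array([np.convolve(s, kernel)[: len(t)] for s in spikes])
pv = (counts * np.exp(1j * pref[:, None])).sum(axis=0)
decoded = np.angle(pv) % (2 * np.pi)

err = np.angle(np.exp(1j * (decoded - true_hd)))
print("mean absolute decoding error (deg):", np.degrees(np.abs(err[n_win:]).mean()))
```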


Subject(s)
Anticipation, Psychological/physiology , Head Movements/physiology , Psychomotor Performance/physiology , Algorithms , Animals , Anterior Thalamic Nuclei/physiology , Computer Simulation , Male , Models, Neurological , Nerve Net/physiology , Orientation/physiology , Rats , Space Perception/physiology , Spatial Navigation
6.
PLoS Comput Biol ; 15(1): e1006666, 2019 01.
Article in English | MEDLINE | ID: mdl-30601804

ABSTRACT

The release of neurotransmitters from synapses obeys complex and stochastic dynamics. Depending on the recent history of synaptic activation, many synapses lower the probability of subsequent neurotransmitter release, a phenomenon known as synaptic depression. Our understanding of how synaptic depression affects information efficacy, however, is limited. Here we propose a mathematically tractable model of both synchronous spike-evoked release and asynchronous release that permits us to quantify the information conveyed by a synapse. The model transitions between discrete states of a communication channel, with the present state depending on many past time steps, emulating the gradual depression and exponential recovery of the synapse. Asynchronous and spontaneous releases play a critical role in shaping the information efficacy of the synapse. We prove that depression can enhance both the information rate and the information rate per unit energy expended, provided that synchronous spike-evoked release depresses less (or recovers faster) than asynchronous release. Furthermore, we explore the theoretical implications of short-term synaptic depression adapting on longer time scales, as part of the phenomenon of metaplasticity. In particular, we show that a synapse can adjust its energy expenditure by changing the dynamics of short-term synaptic depression without affecting the net information conveyed by each successful release. Moreover, the optimal input spike rate is independent of the amplitude or time constant of synaptic depression. We analyze the information efficacy of three types of synapses for which the short-term dynamics of both synchronous and asynchronous release have been experimentally measured. In hippocampal autaptic synapses, the persistence of asynchronous release during depression cannot compensate for the reduction of synchronous release, so that the rate of information transmission declines with synaptic depression. In the calyx of Held, the information rate per release remains constant despite large variations in the measured asynchronous release rate. Lastly, we show that dopamine, by controlling asynchronous release in corticostriatal synapses, increases the synaptic information efficacy in nucleus accumbens.
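The kind of dynamics the model captures, gradual depression after each release and exponential recovery in between, can be sketched as a simple simulation. This is not the paper's analytical channel treatment, and all rate constants are illustrative.

```python
# Release probabilities that depress on each successful release and recover
# exponentially; synchronous and asynchronous release are tracked separately.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.001, 5.0
steps = int(round(T / dt))
spike_rate = 20.0                       # presynaptic input rate (Hz), assumed
p_sync0, p_async0 = 0.5, 0.02           # baseline release probabilities
d_sync, d_async = 0.5, 0.8              # fraction remaining after a release
tau_rec = 0.3                           # recovery time constant (s)

p_sync, p_async = p_sync0, p_async0
n_sync = n_async = 0
for _ in range(steps):
    spike = rng.random() < spike_rate * dt
    released = False
    if spike and rng.random() < p_sync:
        n_sync += 1
        released = True
    elif not spike and rng.random() < p_async:
        n_async += 1
        released = True
    if released:                         # depression: both pathways share resources
        p_sync *= d_sync
        p_async *= d_async
    else:                                # exponential recovery toward baseline
        p_sync += (p_sync0 - p_sync) * dt / tau_rec
        p_async += (p_async0 - p_async) * dt / tau_rec

print("synchronous releases:", n_sync, " asynchronous releases:", n_async)
```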


Subject(s)
Models, Neurological , Neurotransmitter Agents/metabolism , Synapses/metabolism , Action Potentials/physiology , Animals , Computational Biology , Dopamine/metabolism , Hippocampus/cytology , Memory/physiology , Nucleus Accumbens/cytology
7.
Entropy (Basel) ; 21(8), 2019 Aug 02.
Article in English | MEDLINE | ID: mdl-33267470

ABSTRACT

Action potentials (spikes) can trigger the release of a neurotransmitter at chemical synapses between neurons. Such release is stochastic: it occurs only with a certain probability. Moreover, synaptic release can occur independently of an action potential (asynchronous release) and depends on the history of synaptic activity. We focus here on short-term synaptic facilitation, in which a sequence of action potentials can temporarily increase the release probability of the synapse. In contrast to short-term depression, information transmission in facilitating synapses has not yet been quantified. We find rigorous lower and upper bounds for the rate of information transmission in a model of synaptic facilitation. We treat the synapse as a two-state binary asymmetric channel, in which the arrival of an action potential shifts the synapse to a facilitated state, while in the absence of a spike, the synapse returns to its baseline state. The information bounds are functions of both the asynchronous and synchronous release parameters. If synchronous release facilitates more than asynchronous release, the mutual information rate increases. In contrast, short-term facilitation degrades information transmission when the synchronous release probability is intrinsically high. As synaptic release is energetically expensive, we exploit the information bounds to determine the energy-information trade-off in facilitating synapses. We show that, unlike the information rate, the energy-normalized information rate is robust with respect to variations in the strength of facilitation.
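The per-symbol building block of this channel model can be written down directly: the mutual information of a memoryless binary asymmetric channel in which a spike releases with probability q_sync and a silent step still releases with probability q_async. The facilitation dynamics and the paper's exact bounds are not reproduced here; the numbers are illustrative.

```python
# Mutual information of a memoryless binary asymmetric release channel.
import numpy as np

def h2(p):
    """Binary entropy in bits, with 0 log 0 := 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def channel_information(p_spike, q_sync, q_async):
    """I(X;Y) in bits per time step: H(Y) minus the conditional entropy H(Y|X)."""
    p_release = p_spike * q_sync + (1 - p_spike) * q_async
    return h2(p_release) - p_spike * h2(q_sync) - (1 - p_spike) * h2(q_async)

baseline    = channel_information(p_spike=0.05, q_sync=0.3, q_async=0.01)
facilitated = channel_information(p_spike=0.05, q_sync=0.6, q_async=0.02)
print("bits per step, baseline state   :", round(baseline, 4))
print("bits per step, facilitated state:", round(facilitated, 4))
```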

8.
Curr Opin Neurobiol ; 46: 99-108, 2017 10.
Article in English | MEDLINE | ID: mdl-28888183

ABSTRACT

Across the nervous system, neurons often encode circular stimuli using tuning curves that are not sine or cosine functions, but that belong to the richer class of von Mises functions, which are periodic variants of Gaussians. For a population of neurons encoding a single circular variable with such canonical tuning curves, computing a simple population vector is the optimal read-out of the most likely stimulus. We argue that the advantages of population vector read-outs are so compelling that even the neural representation of the outside world's flat Euclidean geometry is curled up into a torus (a circle times a circle), creating the hexagonal activity patterns of mammalian grid cells. Here, the circular scale is not set a priori, so the nervous system can use multiple scales and gain fields to overcome the ambiguity inherent in periodic representations of linear variables. We review the experimental evidence for this framework and discuss its testable predictions and generalizations to more abstract grid-like neural representations.
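A minimal sketch of the read-out described here: von Mises tuning to a circular variable, Poisson spike counts, and the population-vector estimate of the encoded angle. Parameter values are illustrative.

```python
# Population-vector read-out of a circular stimulus from von Mises tuned neurons.
import numpy as np

rng = np.random.default_rng(3)
n = 40
pref = np.linspace(0, 2 * np.pi, n, endpoint=False)    # preferred angles
kappa, peak, dt = 3.0, 30.0, 0.1                        # tuning width, Hz, window (s)

theta_true = 1.3                                        # stimulus angle (rad)
rates = peak * np.exp(kappa * (np.cos(theta_true - pref) - 1.0))
counts = rng.poisson(rates * dt)

# Population vector: spike-count-weighted sum of preferred-direction unit vectors.
theta_hat = np.angle(np.sum(counts * np.exp(1j * pref))) % (2 * np.pi)
print("true angle:", theta_true, " population-vector estimate:", round(theta_hat, 3))
```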


Subject(s)
Models, Neurological , Nervous System Physiological Phenomena , Neurons/physiology , Animals , Humans
9.
Curr Biol ; 27(15): R755-R758, 2017 08 07.
Article in English | MEDLINE | ID: mdl-28787605

ABSTRACT

The firing fields of mammalian grid cells, which map an animal's environment, lie on hexagonal lattices. Three new studies report significant field-to-field differences in the firing rates, a finding with far-reaching consequences for how grid fields form and encode spatial information.


Subject(s)
Grid Cells , Animals , Cognition , Hippocampus , Mammals , Neurons
10.
Neural Comput ; 29(6): 1528-1560, 2017 06.
Article in English | MEDLINE | ID: mdl-28410051

ABSTRACT

Synapses are the communication channels for information transfer between neurons; they are the points at which pulse-like signals are converted into the stochastic release of quantized amounts of chemical neurotransmitter. At many synapses, prior neuronal activity depletes synaptic resources, depressing subsequent spontaneous and spike-evoked release. We analytically compute the information transmission rate of a synaptic release site, which we model as a binary asymmetric channel. Short-term depression is incorporated by assigning the channel a memory of depth one. A successful release, whether spike evoked or spontaneous, decreases the probability of a subsequent release; if no release occurs on the following time step, the release probabilities recover to their default values. We prove that synaptic depression can increase the release site's information rate if spontaneous release is more strongly depressed than spike-evoked release. When depression affects spontaneous and evoked release equally, the information rate must invariably decrease, even when the rate is normalized by the resources used for synaptic transmission. For identical depression levels, we analytically disprove the hypothesis, at least in this simplified model, that synaptic depression serves energy- and information-efficient encoding.

12.
Elife ; 4, 2015 Apr 24.
Article in English | MEDLINE | ID: mdl-25910055

ABSTRACT

Lattices abound in nature, from the crystal structure of minerals to the honeycomb organization of ommatidia in the compound eye of insects. These arrangements provide solutions for optimal packings, efficient resource distribution, and cryptographic protocols. Do lattices also play a role in how the brain represents information? We focus on higher-dimensional stimulus domains, with particular emphasis on neural representations of physical space, and derive which neuronal lattice codes maximize spatial resolution. For mammals navigating on a surface, we show that the hexagonal activity patterns of grid cells are optimal. For species that move freely in three dimensions, a face-centered cubic lattice is best. This prediction could be tested experimentally in flying bats, arboreal monkeys, or marine mammals. More generally, our theory suggests that the brain encodes higher-dimensional sensory or cognitive variables with populations of grid-cell-like neurons whose activity patterns exhibit lattice structures at multiple, nested scales.
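For context on the "optimal packings" mentioned above, the classical packing fractions of the relevant lattices are easy to tabulate (standard textbook values; note that the paper's own optimality criterion is the spatial resolution of the neural code, not packing density).

```python
# Classical circle/sphere packing fractions of the lattices discussed above.
import math

packings = {
    "2D square":              math.pi / 4,                      # ~0.785
    "2D hexagonal":           math.pi / (2 * math.sqrt(3)),     # ~0.907 (optimal in 2D)
    "3D simple cubic":        math.pi / 6,                      # ~0.524
    "3D face-centered cubic": math.pi / (3 * math.sqrt(2)),     # ~0.740 (optimal in 3D)
}
for name, frac in packings.items():
    print(f"{name:24s} {frac:.3f}")
```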


Subject(s)
Adaptation, Biological/physiology , Mammals/psychology , Models, Neurological , Space Perception/physiology , Spatial Navigation/physiology , Animals , Brain Mapping/methods , Species Specificity
13.
Neuron ; 85(3): 590-601, 2015 Feb 04.
Article in English | MEDLINE | ID: mdl-25619656

ABSTRACT

Neuronal dendritic spines have been speculated to function as independent computational units, yet evidence for active electrical computation in spines is scarce. Here we show that strictly local voltage-gated sodium channel (Nav) activation can occur during excitatory postsynaptic potentials in the spines of olfactory bulb granule cells, which we mimic and detect via combined two-photon uncaging of glutamate and calcium imaging in conjunction with whole-cell recordings. We find that local Nav activation boosts calcium entry into spines through high-voltage-activated calcium channels and accelerates postsynaptic somatic depolarization, without affecting NMDA receptor-mediated signaling. Hence, Nav-mediated boosting promotes rapid output from the reciprocal granule cell spine onto the lateral mitral cell dendrite and thus can speed up recurrent inhibition. This striking example of electrical compartmentalization both adds to the understanding of olfactory network processing and broadens the general view of spine function.


Subject(s)
Dendritic Spines/physiology , Excitatory Postsynaptic Potentials/physiology , Olfactory Bulb/physiology , Voltage-Gated Sodium Channel Agonists/pharmacology , Voltage-Gated Sodium Channels/physiology , Animals , Dendritic Spines/drug effects , Female , Male , Olfactory Bulb/drug effects , Organ Culture Techniques , Rats , Rats, Wistar
14.
Sci Adv ; 1(11): e1500816, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26824061

ABSTRACT

Mammalian grid cells fire when an animal crosses the points of an imaginary hexagonal grid tessellating the environment. We show how animals can navigate by reading out a simple population vector of grid cell activity across multiple spatial scales, even though neural activity is intrinsically stochastic. This theory of dead reckoning explains why grid cells are organized into discrete modules within which all cells have the same lattice scale and orientation. The lattice scale changes from module to module and should form a geometric progression with a scale ratio of around 3/2 to minimize the risk of making large-scale errors in spatial localization. Such errors should also occur if intermediate-scale modules are silenced, whereas knocking out the module at the smallest scale will only affect spatial precision. For goal-directed navigation, the allocentric grid cell representation can be readily transformed into the egocentric goal coordinates needed for planning movements. The goal location is set by nonlinear gain fields that act on goal vector cells. This theory predicts neural and behavioral correlates of grid cell readout that transcend the known link between grid cells of the medial entorhinal cortex and place cells of the hippocampus.
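A minimal sketch of a coarse-to-fine read-out of nested modules whose periods form a geometric progression with ratio 3/2, as discussed above. The noise level and module count are assumptions, and the paper's actual population-vector scheme and goal-vector transformation are not modelled.

```python
# Coarse-to-fine position decoding from nested grid modules (1D, illustrative).
import numpy as np

rng = np.random.default_rng(4)
periods = 200.0 / 1.5 ** np.arange(6)       # module periods (cm): 200, 133, ...
phase_noise = 0.03                           # phase error (fraction of a period)

x_true = 137.2                               # true position (cm)
estimate = periods[0] / 2                    # start in the middle of the coarsest period

for lam in periods:
    phase = (x_true / lam + rng.normal(0, phase_noise)) % 1.0   # noisy phase read-out
    # lattice point consistent with this phase that lies closest to the current estimate
    k = np.round((estimate - phase * lam) / lam)
    estimate = phase * lam + k * lam

print("true position:", x_true, " decoded:", round(estimate, 2))
```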

15.
PLoS One ; 9(6): e100638, 2014.
Article in English | MEDLINE | ID: mdl-24959748

ABSTRACT

As a rat moves, grid cells in its entorhinal cortex (EC) discharge at multiple locations of the external world, and the firing fields of each grid cell span a hexagonal lattice. For movements on linear tracks, spikes tend to occur at successively earlier phases of the theta-band filtered local field potential during the traversal of a firing field, a phenomenon termed phase precession. The complex movement patterns observed in two-dimensional (2D) open-field environments may fundamentally alter phase precession. To study this question at the behaviorally relevant single-run level, we analyzed EC spike patterns as a function of the distance traveled by the rat along each trajectory. This analysis revealed that cells across all EC layers fire spikes that phase-precess; indeed, the rate and extent of phase precession were the same across layers, and only the correlation between spike phase and path length was weaker in EC layer III. Both slope and correlation of phase precession were surprisingly similar on linear tracks and in 2D open-field environments despite strong differences in the movement statistics, including running speed. While the phase-precession slope did not correlate with the average running speed, it did depend on specific properties of the animal's path. The longer a curving path through a grid field in a 2D environment, the shallower the rate of phase precession, while runs that grazed a grid field tangentially led to a steeper phase-precession slope than runs through the field center. Oscillatory interference models for grid cells do not reproduce the observed phenomena.
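Single-run phase-precession slopes can be estimated by circular-linear regression; below is one common approach (maximising the mean resultant length over candidate slopes), not necessarily the exact estimator used in the study, applied to made-up spike phases.

```python
# Circular-linear fit of a single-run phase-precession slope (illustrative).
import numpy as np

def phase_precession_slope(distance, phase, slopes=np.linspace(-2.0, 2.0, 4001)):
    """distance: path length within the field (normalised 0..1);
    phase: theta phase of each spike (rad). Returns slope in cycles per field."""
    R = np.abs(np.mean(np.exp(1j * (phase[None, :] - 2 * np.pi
                                    * slopes[:, None] * distance[None, :])), axis=1))
    return slopes[np.argmax(R)]

rng = np.random.default_rng(5)
d   = np.array([0.1, 0.25, 0.4, 0.55, 0.7, 0.85])             # hypothetical run
phi = (2 * np.pi * (0.9 - 0.5 * d) + 0.3 * rng.normal(size=d.size)) % (2 * np.pi)
print("fitted slope (cycles per field):", phase_precession_slope(d, phi))
```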


Subject(s)
Action Potentials , Entorhinal Cortex/physiology , Movement , Neurons/physiology , Algorithms , Animals , Models, Neurological , Rats
16.
Article in English | MEDLINE | ID: mdl-24032870

ABSTRACT

Encoding information about continuous variables using noisy computational units is a challenge; nonetheless, asymptotic theory shows that combining multiple periodic scales for coding can be highly precise despite the corrupting influence of noise [Mathis, Herz, and Stemmler, Phys. Rev. Lett. 109, 018103 (2012)]. Indeed, the cortex seems to use periodic, multiscale grid codes to represent position accurately. Here we show how such codes can be read out without taking the long-term limit; even on short time scales, the precision of such codes scales exponentially in the number N of neurons. Does this finding also hold for neurons that are not firing in a statistically independent fashion? To assess the extent to which biological grid codes are subject to statistical dependences, we first analyze the noise correlations between pairs of grid code neurons in behaving rodents. We find that if the grids of two neurons align and have the same length scale, the noise correlations between the neurons can reach values as high as 0.8. For increasing mismatches between the grids of the two neurons, the noise correlations fall rapidly. Incorporating such correlations into a population coding model reveals that the correlations lessen the resolution, but the exponential scaling of resolution with N is unaffected.
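A minimal sketch of the noise-correlation measure: correlate the trial-to-trial residuals of two cells' spike counts after removing each cell's position-dependent mean. The binning and toy counts below are hypothetical; the study computes this quantity from simultaneously recorded grid cells.

```python
# Noise correlation between two cells: Pearson correlation of count residuals.
import numpy as np

def noise_correlation(counts_a, counts_b, position_bin):
    """counts_*: spike counts per time window; position_bin: spatial bin index
    of each window. Residuals are counts minus the bin-wise mean count."""
    res_a = counts_a.astype(float).copy()
    res_b = counts_b.astype(float).copy()
    for b in np.unique(position_bin):
        sel = position_bin == b
        res_a[sel] -= counts_a[sel].mean()
        res_b[sel] -= counts_b[sel].mean()
    return np.corrcoef(res_a, res_b)[0, 1]

rng = np.random.default_rng(6)
bins = rng.integers(0, 10, size=500)
shared = rng.normal(size=500)                              # shared fluctuation
ca = rng.poisson(5 + bins) + (shared > 0.5)                # toy counts, cell A
cb = rng.poisson(5 + bins) + (shared > 0.5)                # toy counts, cell B
print("noise correlation:", round(noise_correlation(ca, cb, bins), 3))
```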


Subject(s)
Models, Neurological , Nervous System Physiological Phenomena , Nervous System/cytology , Likelihood Functions , Neurons/cytology
17.
PLoS Comput Biol ; 9(7): e1003157, 2013.
Article in English | MEDLINE | ID: mdl-23935475

ABSTRACT

In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components (genetic circuits, biochemical cascades, and ion channels, among others) enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode; roughly 20-60% of the brain's total energy budget is used for signalling, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.


Subject(s)
Nervous System Physiological Phenomena , Action Potentials , Humans , Signal Transduction , Thermodynamics
18.
Phys Rev Lett ; 109(1): 018103, 2012 Jul 06.
Article in English | MEDLINE | ID: mdl-23031134

ABSTRACT

Collective computation is typically polynomial in the number of computational elements, such as transistors or neurons, whether one considers the storage capacity of a memory device or the number of floating-point operations per second of a CPU. However, we show here that the capacity of a computational network to resolve real-valued signals of arbitrary dimensions can be exponential in N, even if the individual elements are noisy and unreliable. Nested, modular codes that achieve such high resolutions mirror the properties of grid cells in vertebrates, which underlie spatial navigation.


Subject(s)
Models, Neurological , Neurons/physiology , Neurons/cytology , Stochastic Processes
19.
Neural Comput ; 24(9): 2280-317, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22594833

ABSTRACT

Rodents use two distinct neuronal coordinate systems to estimate their position: place fields in the hippocampus and grid fields in the entorhinal cortex. Whereas place cells spike at only one particular spatial location, grid cells fire at multiple sites that correspond to the points of an imaginary hexagonal lattice. We study how to best construct place and grid codes, taking the probabilistic nature of neural spiking into account. Which spatial encoding properties of individual neurons confer the highest resolution when decoding the animal's position from the neuronal population response? A priori, estimating a spatial position from a grid code could be ambiguous, as regular periodic lattices possess translational symmetry. The solution to this problem requires lattices for grid cells with different spacings; the spatial resolution crucially depends on choosing the right ratios of these spacings across the population. We compute the expected error in estimating the position in both the asymptotic limit, using Fisher information, and for low spike counts, using maximum likelihood estimation. Achieving high spatial resolution and covering a large range of space in a grid code leads to a trade-off: the best grid code for spatial resolution is built of nested modules with different spatial periods, one inside the other, whereas maximizing the spatial range requires distinct spatial periods that are pairwise incommensurate. Optimizing the spatial resolution predicts two grid cell properties that have been experimentally observed. First, short lattice spacings should outnumber long lattice spacings. Second, the grid code should be self-similar across different lattice spacings, so that the grid field always covers a fixed fraction of the lattice period. If these conditions are satisfied and the spatial "tuning curves" for each neuron span the same range of firing rates, then the resolution of the grid code easily exceeds that of the best possible place code with the same number of neurons.
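The Fisher-information comparison can be illustrated numerically: for independent Poisson neurons observed over a time T, J(x) = T Σ_i f_i'(x)² / f_i(x). The sketch below compares a single-field Gaussian place code with a single periodic module of grid-like tuning; widths, rates, and the period are arbitrary choices, and the periodic code's ambiguity across lattice periods (resolved by the nested modules in the paper) is ignored.

```python
# Local Fisher information of a place code vs a single periodic (grid-like) module.
import numpy as np

L, n, peak, T = 100.0, 20, 20.0, 0.1           # range (cm), cells, Hz, window (s)
x = np.linspace(0, L, 2000)
dx = x[1] - x[0]

centres = np.linspace(0, L, n, endpoint=False)
place = peak * np.exp(-0.5 * ((x[None, :] - centres[:, None]) / 5.0) ** 2) + 0.1

period, kappa = 20.0, 4.0                       # one grid module, for simplicity
phases = np.linspace(0, period, n, endpoint=False)
grid = peak * np.exp(kappa * (np.cos(2 * np.pi * (x[None, :] - phases[:, None]) / period) - 1)) + 0.1

def fisher(f):
    """J(x) = T * sum_i f_i'(x)^2 / f_i(x) for independent Poisson neurons."""
    df = np.gradient(f, dx, axis=1)
    return T * np.sum(df ** 2 / f, axis=0)

print("mean Fisher information, place code:", fisher(place).mean())
print("mean Fisher information, grid code :", fisher(grid).mean())
```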


Subject(s)
Computer Simulation , Models, Neurological , Neurons/physiology , Orientation/physiology , Space Perception/physiology , Animals , Cerebral Cortex/cytology , Hippocampus/cytology , Humans , Neurons/classification , Population Dynamics , Probability
20.
Proc Natl Acad Sci U S A ; 109(16): 6301-6, 2012 Apr 17.
Article in English | MEDLINE | ID: mdl-22474395

ABSTRACT

When a rat moves, grid cells in its entorhinal cortex become active in multiple regions of the external world that form a hexagonal lattice. As the animal traverses one such "firing field," spikes tend to occur at successively earlier theta phases of the local field potential. This phenomenon is called phase precession. Here, we show that spike phases provide 80% more spatial information than spike counts and that they improve position estimates from single neurons down to a few centimeters. To understand what limits the resolution and how variable spike phases are across different field traversals, we analyze spike trains run by run. We find that the multiple firing fields of a grid cell operate as independent elements for encoding physical space. In addition, phase precession is significantly stronger than the pooled-run data suggest. Despite the inherent stochasticity of grid-cell firing, phase precession is therefore a robust phenomenon at the single-trial level, making a theta-phase code for spatial navigation feasible.


Subject(s)
Entorhinal Cortex/physiology , Neurons/physiology , Running/physiology , Space Perception/physiology , Action Potentials/physiology , Algorithms , Animals , Entorhinal Cortex/cytology , Models, Neurological , Nerve Net/physiology , Rats