Results 1-20 of 24
1.
Phys Rev E ; 104(2-1): 024413, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34525545

ABSTRACT

Rhythmic activity has been observed in numerous animal species ranging from insects to humans, and in relation to a wide range of cognitive tasks. Various experimental and theoretical studies have investigated rhythmic activity. The theoretical efforts have mainly been focused on the neuronal dynamics, under the assumption that network connectivity satisfies certain fine-tuning conditions required to generate oscillations. However, it remains unclear how this fine-tuning is achieved. Here we investigated the hypothesis that spike-timing-dependent plasticity (STDP) can provide the underlying mechanism for tuning synaptic connectivity to generate rhythmic activity. We addressed this question in a modeling study, examining STDP dynamics in the framework of a network of excitatory and inhibitory neuronal populations that has been suggested to underlie the generation of oscillations in the gamma range. We derived mean-field Fokker-Planck equations for the synaptic weight dynamics in the limit of slow learning, and drew on this approximation to determine which types of STDP rules drive the system to exhibit rhythmic activity and to demonstrate how the parameters that characterize the plasticity rule govern the rhythmic activity. Finally, we propose a mechanism that can ensure the robustness of self-developing processes in general, and of rhythmogenesis in particular.
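The fine-tuning requirement can be made concrete with a linear stability argument: in a two-population excitatory-inhibitory rate model, oscillations emerge only when the coupling pushes the fixed point into an unstable-focus (Hopf-like) regime. A minimal sketch, with illustrative parameters and a generic linearization rather than the paper's exact network:

```python
import numpy as np

def ei_linear_regime(w_ee, w_ei, w_ie, w_ii, gain=1.0, tau_e=1.0, tau_i=1.0):
    """Classify the fixed point of a linearized excitatory-inhibitory rate
    model (an illustrative sketch, not the paper's exact network). Complex
    eigenvalues with positive real part mark an unstable focus, i.e. the
    oscillatory regime that the connectivity must be tuned into."""
    J = np.array([[(gain * w_ee - 1.0) / tau_e,  -gain * w_ei / tau_e],
                  [ gain * w_ie / tau_i,         (-gain * w_ii - 1.0) / tau_i]])
    lam = np.linalg.eigvals(J)
    if abs(lam[0].imag) < 1e-12:
        return "non-oscillatory"
    return "oscillatory" if lam[0].real > 0 else "damped"
```

In this sketch, strengthening recurrent excitation moves the network from a damped focus to sustained oscillations; the hypothesis in the abstract is that STDP performs this kind of tuning on its own.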

2.
PLoS Comput Biol ; 17(9): e1009353, 2021 09.
Article in English | MEDLINE | ID: mdl-34534208

ABSTRACT

Rats and mice use their whiskers to probe the environment. By rhythmically swiping their whiskers back and forth they can detect the existence of an object, locate it, and identify its texture. Localization can be accomplished by inferring the whisker's position. Rhythmic neurons that track the phase of the whisking cycle encode information about the azimuthal location of the whisker. These neurons are characterized by preferred phases of firing that are narrowly distributed. Consequently, pooling the rhythmic signal from several upstream neurons is expected to result in a much narrower distribution of preferred phases in the downstream population; empirically, however, no such narrowing has been observed. Here, we show how spike-timing-dependent plasticity (STDP) can provide a solution to this conundrum. In a modeling study, we investigated the effect of STDP on the ability of a neural population to transmit rhythmic information downstream. We found that, under a wide range of parameters, STDP facilitated the transfer of rhythmic information even though all the synaptic weights remained dynamic. As a result, the preferred phase of the downstream neuron was not fixed, but rather drifted in time at a velocity that depended on the preferred phase itself, thus inducing a distribution of preferred phases. We further analyzed how the STDP rule governs the distribution of preferred phases in the downstream population. This link between the STDP rule and the distribution of preferred phases constitutes a natural test for our theory.
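The mapping from phase-dependent drift to a distribution of preferred phases can be sketched directly: a phase that drifts around the circle spends time at each location in inverse proportion to the local drift speed, so the stationary density is p(phi) proportional to 1/v(phi). A toy numeric illustration (the velocity profile below is an arbitrary assumption, not the one derived in the paper):

```python
import numpy as np

def phase_density(v, n=1000):
    """Stationary density of preferred phases when each phase drifts around
    the circle at a phase-dependent velocity v(phi) > 0: the time spent near
    phi is inversely proportional to the local speed, so p(phi) ~ 1/v(phi).
    (An illustrative sketch of the argument, with an arbitrary v.)"""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    p = 1.0 / v(phi)
    p /= p.sum() * (2.0 * np.pi / n)           # normalize to a density
    return phi, p

# Drift that slows near phi = pi makes preferred phases pile up there.
phi, p = phase_density(lambda x: 1.5 + np.cos(x))
```

The density peaks exactly where the drift is slowest, which is how a drifting preferred phase ends up inducing a structured, non-uniform distribution across the downstream population.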


Subject(s)
Models, Neurological , Vibrissae/innervation , Vibrissae/physiology , Action Potentials/physiology , Animals , Computational Biology , Computer Simulation , Evoked Potentials, Somatosensory/physiology , Learning/physiology , Mechanoreceptors/physiology , Mice , Motor Neurons/physiology , Neuronal Plasticity/physiology , Rats , Somatosensory Cortex/physiology , Thalamus/physiology
3.
PLoS Comput Biol ; 16(6): e1008000, 2020 06.
Article in English | MEDLINE | ID: mdl-32598350

ABSTRACT

Rhythmic activity has been associated with a wide range of cognitive processes including the encoding of sensory information, navigation, and the transfer of information. Rhythmic activity in the brain has also been suggested to be used for multiplexing information. Multiplexing is the ability to transmit more than one signal via the same channel. Here we focus on frequency-division multiplexing, in which different signals are transmitted in different frequency bands. Recent work showed that spike-timing-dependent plasticity (STDP) can facilitate the transfer of rhythmic activity downstream along the information-processing pathway. However, STDP is also known to generate strong winner-take-all-like competition between subgroups of correlated synaptic inputs. Such competition between different rhythmicity channels, induced by STDP, may prevent the multiplexing of information, raising doubts as to whether STDP is consistent with the idea of multiplexing. This study explores whether STDP can facilitate the multiplexing of information across multiple frequency channels, and if so, under what conditions. We address this question in a modeling study, investigating the STDP dynamics of two populations synapsing downstream onto the same neuron in a feed-forward manner. Each population was assumed to exhibit rhythmic activity, albeit in a different frequency band. Our theory reveals that the winner-take-all-like competition between the two populations is limited, in the sense that different rhythmic populations will not necessarily fully suppress each other. Furthermore, we found that for a wide range of parameters the network converged to a solution in which the downstream neuron responded to both rhythms; yet the synaptic weights themselves did not converge to a fixed point, but rather remained dynamic.
These findings imply that STDP can support the multiplexing of rhythmic information, and demonstrate how functionality (multiplexing of information) can be retained in the face of continuous remodeling of all the synaptic weights. The constraints on the types of STDP rules that can support multiplexing provide a natural test for our theory.
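The outcome described above, a downstream neuron that carries both rhythms as long as neither synaptic group is fully suppressed, can be illustrated by reading out the power of the summed drive in each frequency band. A toy sketch; the weights and frequencies are arbitrary assumptions, not fitted to the model:

```python
import numpy as np

def band_power(signal, freq, dt):
    """Amplitude of the Fourier component of `signal` at `freq` (Hz)."""
    t = np.arange(len(signal)) * dt
    c = np.mean(signal * np.exp(-2j * np.pi * freq * t))
    return 2.0 * abs(c)

# Two presynaptic populations oscillating in different frequency bands drive
# the same downstream neuron; the drive is their weighted sum.
dt = 1e-3
t = np.arange(2000) * dt                       # 2 s at 1 kHz resolution
r_slow = 1.0 + np.cos(2 * np.pi * 10 * t)      # 10 Hz channel
r_fast = 1.0 + np.cos(2 * np.pi * 40 * t)      # 40 Hz channel
w_slow, w_fast = 0.6, 0.4                      # neither weight group suppressed
drive = w_slow * r_slow + w_fast * r_fast
```

Because the competition does not drive either weight group to zero, the downstream drive retains measurable power in both bands: this is frequency-division multiplexing surviving under STDP.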


Subject(s)
Action Potentials , Neuronal Plasticity , Humans , Models, Biological , Neurons/physiology
4.
Curr Opin Neurobiol ; 58: 70-77, 2019 10.
Article in English | MEDLINE | ID: mdl-31408837

ABSTRACT

Rhythmogenesis is the process that develops the capacity for rhythmic activity in a non-rhythmic system. Theoretical work has suggested a wide array of possible mechanisms for rhythmogenesis, ranging from the regulation of cellular properties to top-down control. Here we discuss theories of rhythmogenesis with an emphasis on spike-timing-dependent plasticity. We argue that even though the specifics of different mechanisms vary greatly, they all share certain key features. Namely, rhythmogenesis can be described as a flow on the phase diagram that leads the system into a rhythmic region and stabilizes it on a specific manifold characterized by the desired rhythmic activity. Functionality is retained despite biological diversity by forcing the system onto a specific manifold while allowing fluctuations within that manifold.


Subject(s)
Neurons , Humans
5.
Sci Rep ; 8(1): 13050, 2018 08 29.
Article in English | MEDLINE | ID: mdl-30158555

ABSTRACT

Brain rhythms are widely believed to reflect numerous cognitive processes. Changes in rhythmicity have been associated with pathological states. However, the mechanism underlying these rhythms remains unknown. Here, we present a theoretical analysis of the development of rhythm-generating capabilities in neuronal circuits. We tested the hypothesis that brain rhythms can be acquired via an intrinsic unsupervised learning process of activity-dependent plasticity. Specifically, we focused on spike-timing-dependent plasticity (STDP) of inhibitory synapses. We detail how rhythmicity can develop via STDP under certain conditions, which serve as a natural prediction of the hypothesis. We show how global features of the STDP rule govern and stabilize the resultant rhythmic activity. Finally, we demonstrate how rhythmicity is retained even in the face of synaptic variability. This study suggests a role for inhibitory plasticity beyond homeostatic processes.


Subject(s)
Brain Waves , Brain/physiology , Neuronal Plasticity , Neurons/physiology , Action Potentials , Animals , Humans , Models, Neurological , Time
6.
Front Comput Neurosci ; 12: 12, 2018.
Article in English | MEDLINE | ID: mdl-29556186

ABSTRACT

Noise correlations in neuronal responses can have a strong influence on the information available in large populations. In addition, the impact of the noise-correlation structure on the ability to extract this information may depend on the specific readout algorithm, and hence may affect our understanding of population codes in the brain. Thus, a better understanding of the structure of noise correlations and their interplay with different readout algorithms is required. Here we use eigendecomposition to investigate the structure of noise correlations in populations of about 50-100 simultaneously recorded neurons in the primary visual cortex of anesthetized monkeys, and we relate this structure to the performance of two common decoders: the population vector and the optimal linear estimator. Our analysis reveals a non-trivial correlation structure, in which the eigenvalue spectrum is composed of several distinct large eigenvalues that represent different shared modes of fluctuation extending over most of the population, and a semi-continuous tail. The largest eigenvalue represents a uniform collective mode of fluctuation. The second and third eigenvalues typically show either a clear functional structure (i.e., dependent on the preferred orientation of the neurons) or a spatial structure (i.e., dependent on the physical position of the neurons). We find that the number of shared modes increases with the population size, being roughly 10% of that size. Furthermore, we find that the noise in each of these collective modes grows linearly with the population. This linear growth of correlated noise power can have limiting effects on the utility of averaging neuronal responses across large populations, depending on the readout. Specifically, the collective modes of fluctuation limit the accuracy of the population vector but not of the optimal linear estimator.
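The key scaling fact, that a shared fluctuation mode contributes a correlation-matrix eigenvalue growing linearly with population size, is easy to verify in the simplest case of uniform correlations (a toy matrix, not the recorded data):

```python
import numpy as np

def collective_mode(n, rho):
    """Leading eigenpair of the simplest correlated-noise model: an n-neuron
    correlation matrix with one uniform shared mode,
    C = (1 - rho) * I + rho * ones. (A toy matrix, not the recorded data.)"""
    C = (1.0 - rho) * np.eye(n) + rho * np.ones((n, n))
    vals, vecs = np.linalg.eigh(C)             # ascending eigenvalues
    return vals[-1], vecs[:, -1]
```

For this matrix the top eigenvalue is 1 + (n - 1) * rho with a uniform eigenvector, so the variance of the population average, (1 + (n - 1) * rho) / n, saturates at rho instead of vanishing as n grows. That is the sense in which collective modes limit readouts that average uniformly, such as the population vector, while a readout free to reweight neurons (like the optimal linear estimator) can avoid the shared mode.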

7.
Front Comput Neurosci ; 10: 107, 2016.
Article in English | MEDLINE | ID: mdl-27812332

ABSTRACT

Response latency has been suggested as a possible source of information in the central nervous system when fast decisions are required. The accuracy of latency codes was studied in the past using a simplified readout algorithm termed the temporal winner-take-all (tWTA). The tWTA is a competitive readout algorithm in which populations of neurons with a similar decision preference compete, and the algorithm selects according to the preference of the population that reaches the decision threshold first. It has been shown that this algorithm can account for accurate decisions among a small number of alternatives during short, biologically relevant time periods. However, one of the major points of criticism of latency codes has been that it is unclear how such a readout could be implemented by the central nervous system. Here we show that the solution to this long-standing puzzle may be rather simple. We suggest a mechanism based on a reciprocal-inhibition architecture, similar to that of the conventional winner-take-all, and show that under a wide range of parameters this mechanism is sufficient to implement the tWTA algorithm. We do this by first analyzing a rate-based toy model and demonstrating its ability to discriminate short latency differences between its inputs. We then study the sensitivity of this mechanism to fine-tuning of its initial conditions, and show that it is robust to a wide range of noise levels in the initial conditions. These results are then generalized to a Hodgkin-Huxley-type neuron model, using numerical simulations. Latency codes have also been criticized for requiring a reliable stimulus-onset detection mechanism as a reference for measuring latency. Here we show that this frequent assumption does not hold, and that an additional onset estimator is not needed to trigger this simple tWTA mechanism.
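A minimal version of the rate-based toy model can show the principle: two units receive the same step input with a small relative latency, each inhibits the other, and the unit whose input arrives first reaches threshold while suppressing its competitor. The parameters below are illustrative guesses, not the published values:

```python
import numpy as np

def twta_race(latency, I=2.0, w_inh=3.0, theta=1.0, dt=0.1, tau=10.0, T=400):
    """Temporal winner-take-all via reciprocal inhibition (a rate-based toy
    sketch in the spirit of the paper; parameters are illustrative guesses).
    Both units get the same step input; a positive `latency` (ms) delays
    unit 1, a negative one delays unit 0. Each unit inhibits the other.
    Returns the index of the first unit to cross the decision threshold."""
    onset = np.array([max(-latency, 0.0), max(latency, 0.0)])
    r = np.zeros(2)
    for step in range(T):
        t = step * dt
        inp = I * (t >= onset)                           # delayed step inputs
        drive = np.maximum(inp - w_inh * r[::-1], 0.0)   # rectified drive
        r += dt / tau * (-r + drive)
        if (r >= theta).any():
            return int(np.argmax(r))
    return -1                                            # no decision reached
```

Note that nothing external signals stimulus onset: the earlier-driven unit simply gains a head start and, through the mutual inhibition, keeps its competitor below threshold, which is the claimed implementation of the tWTA without an onset estimator.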

8.
PLoS Comput Biol ; 12(4): e1004878, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27082118

ABSTRACT

Neuronal oscillatory activity has been reported in relation to a wide range of cognitive processes including the encoding of external stimuli, attention, and learning. Although the specific role of these oscillations has yet to be determined, it is clear that neuronal oscillations are abundant in the central nervous system. This raises the question of the origin of these oscillations: are the mechanisms for generating these oscillations genetically hard-wired or can they be acquired via a learning process? Here, we study the conditions under which oscillatory activity emerges through a process of spike timing dependent plasticity (STDP) in a feed-forward architecture. First, we analyze the effect of oscillations on STDP-driven synaptic dynamics of a single synapse, and study how the parameters that characterize the STDP rule and the oscillations affect the resultant synaptic weight. Next, we analyze STDP-driven synaptic dynamics of a pre-synaptic population of neurons onto a single post-synaptic cell. The pre-synaptic neural population is assumed to be oscillating at the same frequency, albeit with different phases, such that the net activity of the pre-synaptic population is constant in time. Thus, in the homogeneous case in which all synapses are equal, the post-synaptic neuron receives constant input and hence does not oscillate. To investigate the transition to oscillatory activity, we develop a mean-field Fokker-Planck approximation of the synaptic dynamics. We analyze the conditions causing the homogeneous solution to lose its stability. The findings show that oscillatory activity appears through a mechanism of spontaneous symmetry breaking. However, in the general case the homogeneous solution is unstable, and the synaptic dynamics does not converge to a different fixed point, but rather to a limit cycle. We show how the temporal structure of the STDP rule determines the stability of the homogeneous solution and the drift velocity of the limit cycle.


Subject(s)
Models, Neurological , Neuronal Plasticity/physiology , Action Potentials/physiology , Algorithms , Animals , Brain/physiology , Computational Biology , Computer Simulation , Feedback, Physiological , Humans , Learning/physiology , Presynaptic Terminals/physiology
9.
J Neurophysiol ; 113(9): 3410-20, 2015 May 01.
Article in English | MEDLINE | ID: mdl-25787960

ABSTRACT

Identifying the properties of correlations in the firing of neocortical neurons is central to our understanding of cortical information processing. It has been generally assumed, by virtue of the columnar organization of the neocortex, that the firing of neurons residing in a certain vertical domain is highly correlated. On the other hand, firing correlations between neurons steeply decline with horizontal distance. Technical difficulties in sampling neurons with sufficient spatial information have precluded the critical evaluation of these notions. We used 128-channel "silicon probes" to examine the spike-count noise correlations during spontaneous activity between multiple neurons with identified laminar position and over large horizontal distances in the anesthetized rat barrel cortex. Eigendecomposition of correlation coefficient matrices revealed that the laminar position of a neuron is a significant determinant of these correlations, such that the fluctuations of layer 5B/6 neurons are in the opposite direction to those of layers 5A and 4. Moreover, we found that within each experiment, the distribution of horizontal, intralaminar spike-count correlation coefficients, up to a distance of ∼1.5 mm, is practically identical to the distribution of vertical correlations. Taken together, these data reveal that the neuron's laminar position crucially affects its role in cortical processing. Moreover, our analyses reveal that this laminar effect extends over several functional columns. We propose that within the cortex the influence of the horizontal elements exists in a dynamic balance with the influence of the vertical domain, and that this balance is modulated with brain states to shape the network's behavior.


Subject(s)
Action Potentials/physiology , Nerve Net/physiology , Neurons/physiology , Somatosensory Cortex/cytology , Somatosensory Cortex/physiology , Afferent Pathways/physiology , Animals , Electricity , Male , Physical Stimulation , Rats , Rats, Wistar , Statistics as Topic , Vibrissae/innervation , Voltage-Sensitive Dye Imaging
10.
PLoS One ; 9(7): e101109, 2014.
Article in English | MEDLINE | ID: mdl-24999634

ABSTRACT

Spike-timing-dependent plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific kernel: the "temporally asymmetric Hebbian" learning rule. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP rule. We represent the STDP as a superposition of two processes: potentiation and depression. This allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate the STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of the STDP structure on the fixed points of the mean-field dynamics. We find a phase transition along the Hebbian-to-anti-Hebbian parameter, from a phase characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details. Namely, the dynamics is affected by the pre-post correlations only via a single number that quantifies their overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory ones. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity: for every STDP rule for an inhibitory synapse there exists an STDP rule for an excitatory synapse such that their dynamics are identical.
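The two central objects of the analysis can be sketched concretely: an STDP kernel written as a superposition of potentiation and depression with a single Hebbian-to-anti-Hebbian parameter, and the overlap of that kernel with the pre-post correlation function, the single number that sets the sign of the feedback. The specific parametrization below is an illustrative assumption, not necessarily the paper's exact form:

```python
import numpy as np

def stdp_kernel(lags, alpha, tau=20.0):
    """STDP kernel as a superposition of potentiation and depression.
    alpha = 0 gives a temporally asymmetric Hebbian kernel (pre-before-post,
    lags > 0, potentiates), alpha = 1 its anti-Hebbian mirror image, and
    intermediate values interpolate. This parametrization is an illustrative
    assumption, not necessarily the paper's exact form."""
    hebb = np.where(lags > 0, np.exp(-lags / tau), -np.exp(lags / tau))
    return (1.0 - 2.0 * alpha) * hebb

def kernel_overlap(alpha, corr, lags):
    """The single number governing the mean weight drift: the overlap of the
    pre-post correlation function with the STDP kernel."""
    step = lags[1] - lags[0]
    return float(np.sum(stdp_kernel(lags, alpha) * corr) * step)

lags = np.linspace(-100.0, 100.0, 4001)        # time lags (ms)
# Causal correlations: the postsynaptic spike tends to follow the presynaptic.
corr = np.where(lags > 0, np.exp(-lags / 10.0), 0.0)
```

With causal correlations the Hebbian end of the family yields a positive overlap (potentiating, positive-feedback-like drift) and the anti-Hebbian end a negative one, illustrating how sliding the single parameter flips the sign of the feedback.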


Subject(s)
Excitatory Postsynaptic Potentials , Inhibitory Postsynaptic Potentials , Learning/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/cytology , Synapses/physiology , Feedback, Physiological
11.
Curr Opin Neurobiol ; 25: 140-8, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24487341

ABSTRACT

Population coding theory aims to provide quantitative tests for hypotheses concerning the neural code. Over the last two decades theory has focused on analyzing the ways in which various parameters that characterize neuronal responses to external stimuli affect the information content of these responses. This article reviews and provides an intuitive explanation for the major effects of noise correlations and neuronal heterogeneity, and discusses their implications for our ability to investigate the neural code. It is argued that to test neural code hypotheses further, additional constraints are required, including relating trial-to-trial variation in neuronal population responses to behavioral decisions and specifying how information is decoded by downstream networks.


Subject(s)
Nerve Net/physiology , Nervous System Physiological Phenomena/physiology , Neurons/physiology , Animals , Humans
12.
PLoS One ; 8(12): e81660, 2013.
Article in English | MEDLINE | ID: mdl-24358120

ABSTRACT

It has been suggested that the considerable noise in single-cell responses to a stimulus can be overcome by pooling information from a large population. Theoretical studies indicated that correlations in trial-to-trial fluctuations in the responses of different neurons may limit the improvement due to pooling. Subsequent theoretical studies have suggested that inherent neuronal diversity, i.e., the heterogeneity of tuning curves and other response properties of neurons preferentially tuned to the same stimulus, can provide a means to overcome this limit. Here we study the effect of spike-count correlations and inherent neuronal heterogeneity on the ability to extract information from large neural populations. We use electrophysiological data from the guinea pig inferior colliculus to capture inherent neuronal heterogeneity and single-cell statistics, and introduce response correlations artificially. To this end, we generate pseudo-population responses based on single-cell recordings of neurons responding to auditory stimuli with varying binaural correlations. Typically, when pseudo-populations are generated from single-cell data, the responses within the population are statistically independent; as a result, the information content of the population increases indefinitely with its size. In contrast, here we apply a simple algorithm that enables us to generate pseudo-population responses with variable spike-count correlations. This enables us to study the effect of neuronal correlations on the accuracy of conventional rate codes. We show that in a homogeneous population, even low-level correlations bound the information content. In contrast, a simple linear readout that takes into account the natural heterogeneity within the neural population, even among neurons preferentially tuned to the same stimulus, can overcome the correlated noise and yields an accuracy that grows linearly with the size of the population.
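The resampling step, generating pseudo-population responses whose marginals come from recorded single cells but whose pairwise spike-count correlation is set by hand, can be done with a rank-remapping (Gaussian-copula) construction. This is a common trick and an assumption here; the paper's own algorithm may differ in detail:

```python
import numpy as np

def correlated_pseudo_population(counts, rho, n_trials, seed=0):
    """Build pseudo-population responses with tunable spike-count correlation
    from independently recorded single cells, via a rank-remapping
    (Gaussian-copula) construction. A common trick and an assumption here;
    the paper's own algorithm may differ in detail.
    counts: list of 1-D arrays of recorded spike counts, one per neuron."""
    rng = np.random.default_rng(seed)
    n = len(counts)
    # Latent Gaussians with uniform pairwise correlation rho.
    shared = rng.standard_normal(n_trials)
    private = rng.standard_normal((n, n_trials))
    z = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private
    pop = np.empty((n, n_trials))
    for i in range(n):
        resampled = rng.choice(counts[i], size=n_trials)   # keep the marginal
        ranks = np.argsort(np.argsort(z[i]))
        pop[i] = np.sort(resampled)[ranks]   # impose the latent trial ranks
    return pop
```

Because each neuron's counts are only reordered across trials, the single-cell statistics are preserved exactly, while the shared latent variable injects the desired level of trial-to-trial correlation.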


Subject(s)
Action Potentials/physiology , Inferior Colliculi/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Animals , Auditory Pathways/physiology , Guinea Pigs , Sound Localization/physiology
13.
PLoS Comput Biol ; 8(6): e1002536, 2012.
Article in English | MEDLINE | ID: mdl-22719237

ABSTRACT

Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically and in a quantitative manner. Here we use a simple 'race to threshold' readout mechanism to quantify the information content of spike time latency of primary visual (V1) cortical cells to stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak tuning of spike latency can provide a reliable onset detector. We find that spike latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time of the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made.
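The "race to threshold" readout itself is simple enough to state in a few lines: pool spikes by preferred orientation and report the orientation whose pool accumulates k spikes first. A minimal sketch with toy spike trains (the labels, times, and threshold are illustrative, not the recorded data):

```python
import numpy as np

def race_to_threshold(spike_times, preferred, k):
    """'Race to threshold' latency readout (a toy sketch of the decoder class
    in the abstract): pool spikes by preferred orientation and report the
    orientation whose pool accumulates k spikes first.
    spike_times: per-cell arrays of spike times; preferred: per-cell labels."""
    finish = {}
    for ori in set(preferred):
        pooled = np.sort(np.concatenate(
            [spike_times[i] for i, p in enumerate(preferred) if p == ori]))
        if len(pooled) >= k:
            finish[ori] = pooled[k - 1]        # time of the k-th pooled spike
    return min(finish, key=finish.get) if finish else None
```

The abstract's scaling observation corresponds to growing k linearly with the number of pooled cells, which keeps the decision time roughly constant, on the order of tens of milliseconds, as the population grows.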


Subject(s)
Models, Neurological , Orientation/physiology , Visual Cortex/physiology , Action Potentials/physiology , Animals , Computational Biology , Evoked Potentials, Visual/physiology , Humans , Macaca fascicularis/physiology , Neurons/physiology , Photic Stimulation , Time Factors , Visual Cortex/cytology
14.
Eur J Neurosci ; 35(3): 436-44, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22288480

ABSTRACT

Archer fish are known for their unique hunting method, where one fish in a group shoots down an insect with a jet of water while all the other fish are observing the prey's motion. To reap its reward, the archer fish must reach the prey before its competitors. This requires fast computation of the direction of motion of the prey, which enables the fish to initiate a turn towards the prey with an accuracy of 99%, at about 100 ms after the prey is shot. We explored the hypothesis that direction-selective retinal ganglion cells may underlie this rapid processing. We quantified the degree of directional selectivity of ganglion cells in the archer fish retina. The cells could be categorized into three groups: sharply (5%), broadly (37%) and non-tuned (58%) directionally selective cells. To relate the electrophysiological data to the behavioral results we studied a computational model and estimated the time required to accumulate sufficient directional information to match the decision accuracy of the fish. The computational model is based on two direction-selective populations that race against each other until one reaches the threshold and drives the decision. We found that this competition model can account for the observed response time at the required accuracy. Thus, our results are consistent with the hypothesis that the fast response behavior of the archer fish relies on retinal identification of movement direction.


Subject(s)
Fishes/anatomy & histology , Fishes/physiology , Motion Perception/physiology , Movement/physiology , Predatory Behavior/physiology , Retinal Ganglion Cells/physiology , Action Potentials/physiology , Animals , Electrophysiology , Reaction Time/physiology
15.
PLoS Comput Biol ; 8(1): e1002334, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22291583

ABSTRACT

It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. One assumes that it results from a stable solution of the recurrent neuronal dynamics. This model can account for a balance of steady state excitation and inhibition without fine tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced. This latter hypothesis thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine tuning required for balancing feed forward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feed forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike timing dependent plasticity of feed forward excitatory and inhibitory synaptic inputs to a single post-synaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates 'negative feedback' that balances excitation and inhibition, which contrasts with the 'positive feedback' of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.


Subject(s)
Neuronal Plasticity/physiology , Neurons/physiology , Synaptic Transmission/physiology , Excitatory Postsynaptic Potentials , Models, Neurological
16.
J Neurosci ; 31(25): 9192-204, 2011 Jun 22.
Article in English | MEDLINE | ID: mdl-21697370

ABSTRACT

First spike latency has been suggested as a source of the information required for fast discrimination tasks. However, the accuracy of such a mechanism has not been analyzed rigorously. Here, we investigate the utility of first spike latency for encoding information about the location of a sound source, based on the responses of inferior colliculus (IC) neurons in the guinea pig to interaural phase differences (IPDs). First spike latencies of many cells in the guinea pig IC show unimodal tuning to stimulus IPD. We investigated the discrimination accuracy of a simple latency code that estimates stimulus IPD from the preferred IPD of the single cell that fired first. Surprisingly, despite being based on only a single spike, the accuracy of the latency code is comparable to that of a conventional rate code computed over the entire response. We show that spontaneous firing limits the capacity of the latency code to accumulate information from large neural populations. This detrimental effect can be overcome by generalizing the latency code to estimate the stimulus IPD from the preferred IPDs of the population of cells that fired the first n spikes. In addition, we show that a good estimate of the neural response time to the stimulus, which can be obtained from the responses of the cells whose response latency is invariant to stimulus identity, limits the detrimental effect of spontaneous firing. Thus, a latency code may provide great improvement in response speed at a small cost to the accuracy of the decision.
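The generalized readout, estimating the stimulus from the preferred IPDs of the cells that fired the first n spikes, reduces to a few lines if the n preferred IPDs are summarized by their circular mean. The circular mean is my summary choice here, and the paper's estimator may combine them differently:

```python
import numpy as np

def latency_readout(first_spike_times, preferred_ipd, n):
    """Latency readout generalized to the first n spikes: estimate the
    stimulus IPD from the preferred IPDs of the cells that fired first,
    combined by a circular mean (my assumption; the paper's estimator may
    combine them differently). n = 1 recovers the single-cell latency code."""
    first_n = np.argsort(first_spike_times)[:n]
    return float(np.angle(np.mean(np.exp(1j * preferred_ipd[first_n]))))
```

Averaging over the first n spikes rather than trusting the single earliest one is what dilutes the influence of spontaneous spikes, whose preferred IPDs are unrelated to the stimulus.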


Subject(s)
Action Potentials/physiology , Guinea Pigs/physiology , Inferior Colliculi/physiology , Neurons/physiology , Sound Localization/physiology , Synaptic Transmission/physiology , Animals , Female , Male
17.
PLoS Comput Biol ; 6(11): e1000977, 2010 Nov 04.
Article in English | MEDLINE | ID: mdl-21079682

ABSTRACT

Traditionally, the information content of the neural response is quantified using statistics of the responses relative to stimulus onset time with the assumption that the brain uses onset time to infer stimulus identity. However, stimulus onset time must also be estimated by the brain, making the utility of such an approach questionable. How can stimulus onset be estimated from the neural responses with sufficient accuracy to ensure reliable stimulus identification? We address this question using the framework of colour coding by the archer fish retinal ganglion cell. We found that stimulus identity, "what", can be estimated from the responses of best single cells with an accuracy comparable to that of the animal's psychophysical estimation. However, to extract this information, an accurate estimation of stimulus onset is essential. We show that stimulus onset time, "when", can be estimated using a linear-nonlinear readout mechanism that requires the response of a population of 100 cells. Thus, stimulus onset time can be estimated using a relatively simple readout. However, large nerve cell populations are required to achieve sufficient accuracy.
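The onset readout can be sketched as a linear-nonlinear cascade: linearly filter the summed population activity, then apply a threshold nonlinearity and take the first crossing as the onset estimate. A toy sketch with a step change in rate; the filter and threshold are arbitrary assumptions, not the fitted readout:

```python
import numpy as np

def ln_onset_detector(pop_rate, kernel, theta):
    """Linear-nonlinear onset estimator (a sketch of the readout class in the
    abstract): linearly filter the summed population activity, then report
    the first index at which the filtered signal crosses the threshold."""
    filtered = np.convolve(pop_rate, kernel, mode="same")
    above = np.flatnonzero(filtered >= theta)
    return int(above[0]) if above.size else None
```

The accuracy of such a detector improves as more cells contribute to `pop_rate`, which is why the abstract finds that a population on the order of a hundred cells is needed before the onset estimate is precise enough to support reliable "what" decoding.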


Subject(s)
Computational Biology/methods , Perciformes/physiology , Photic Stimulation , Retinal Ganglion Cells/physiology , Animals , Models, Animal
18.
PLoS Comput Biol ; 5(5): e1000370, 2009 May.
Article in English | MEDLINE | ID: mdl-19412531

ABSTRACT

Sensory processing is associated with gamma frequency oscillations (30-80 Hz) in sensory cortices. This raises the question whether gamma oscillations can be directly involved in the representation of time-varying stimuli, including stimuli whose time scale is longer than a gamma cycle. We are interested in the ability of the system to reliably distinguish different stimuli while being robust to stimulus variations such as uniform time-warp. We address this issue with a dynamical model of spiking neurons and study the response to an asymmetric sawtooth input current over a range of shape parameters. These parameters describe how fast the input current rises and falls in time. Our network consists of inhibitory and excitatory populations that are sufficient for generating oscillations in the gamma range. The oscillation period is about one-third of the stimulus duration. Embedded in this network is a subpopulation of excitatory cells that respond to the sawtooth stimulus and a subpopulation of cells that respond to an onset cue. The intrinsic gamma oscillations generate a temporally sparse code for the external stimuli. In this code, an excitatory cell may fire a single spike during a gamma cycle, depending on its tuning properties and on the temporal structure of the specific input; the identity of the stimulus is coded by the list of excitatory cells that fire during each cycle. We quantify the properties of this representation in a series of simulations and show that the sparseness of the code makes it robust to uniform warping of the time scale. We find that resetting of the oscillation phase at stimulus onset is important for a reliable representation of the stimulus and that there is a tradeoff between the resolution of the neural representation of the stimulus and robustness to time-warp.


Subject(s)
Models, Neurological , Neural Networks, Computer , Algorithms , Computational Biology , Oscillometry , Speech , Time Factors
19.
PLoS Comput Biol ; 5(2): e1000286, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19229309

ABSTRACT

How can the central nervous system make accurate decisions about external stimuli at short times on the basis of the noisy responses of nerve cell populations? It has been suggested that spike-time latency is the source of fast decisions. Here, we propose a simple and fast readout mechanism, the temporal winner-take-all (tWTA), and undertake a study of its accuracy. The tWTA is studied in the framework of a statistical model for the dynamic response of a nerve cell population to an external stimulus. Each cell is characterized by a preferred stimulus, the unique value of the external stimulus for which it responds fastest. The tWTA estimate for the stimulus is the preferred stimulus of the cell that fired the first spike in the entire population. We then pose the questions: How accurate is the tWTA readout? What are the parameters that govern this accuracy? What are the effects of noise correlations and baseline firing? We find that tWTA sensitivity to the stimulus grows algebraically fast with the number of cells in the population, N, in contrast to the logarithmically slow scaling of the conventional rate-WTA sensitivity with N. Noise correlations in the first-spike times of different cells can limit the accuracy of the tWTA readout, even in the limit of large N, similar to the effect observed in population coding theory. We show that baseline firing also has a detrimental effect on tWTA accuracy. We suggest a generalization of the tWTA, the n-tWTA, which estimates the stimulus by the identity of the group of cells firing the first n spikes, and we show how this simple generalization can overcome the detrimental effect of baseline firing. Thus, the tWTA can provide fast and accurate responses discriminating between a small number of alternatives. High accuracy in estimation of a continuous stimulus can be obtained using the n-tWTA.
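A minimal sketch of the n-tWTA readout follows; the exponential-latency and Gaussian-tuning model below is an illustrative assumption, not the paper's exact statistical model. The estimate is read off the preferred stimuli of the cells that fire the first n spikes, and taking n > 1 averages out both latency noise and spurious "wins" by baseline firing.

```python
import numpy as np

rng = np.random.default_rng(0)

def twta_estimate(stimulus, preferred, n=1, baseline=0.1):
    """n-tWTA readout: each cell's first-spike latency is exponential with a
    firing rate that peaks at its preferred stimulus; the estimate is the
    mean preferred stimulus of the first n cells to fire (n=1: plain tWTA).
    `baseline` is stimulus-independent firing that can win spuriously."""
    rate = baseline + 50.0 * np.exp(-0.5 * ((preferred - stimulus) / 0.5) ** 2)
    latency = rng.exponential(1.0 / rate)      # one first-spike time per cell
    winners = np.argsort(latency)[:n]
    return preferred[winners].mean()

preferred = np.linspace(-3.0, 3.0, 1000)       # preferred stimuli tile the range
trials = [twta_estimate(0.5, preferred, n=20) for _ in range(200)]
# the n-tWTA estimates cluster around the true stimulus value 0.5
```

In this sketch, increasing either the population size N or the spike count n tightens the estimate, while raising `baseline` lets untuned cells win first spikes and degrades the plain (n=1) tWTA most.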


Subject(s)
Action Potentials/physiology , Models, Neurological , Nerve Net/physiology , Models, Statistical , Neural Networks, Computer , Nonlinear Dynamics , Synaptic Transmission/physiology
20.
Neural Comput ; 19(12): 3239-61, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17970652

ABSTRACT

Temporal structure is an inherent property of various sensory inputs and motor outputs of the brain. For example, auditory stimuli are defined by the sound waveform. Temporal structure is also an important feature of certain visual stimuli, for example, the image on the retina of a fly during flight. In many cases, this temporal structure of the stimulus is represented by a time-dependent neuronal activity that is locked to certain features of the stimulus. Here, we study the information capacity of the temporal code. In particular, we are interested in the following questions. First, how does the information content of the code depend on the observation time of the cell's response, and what is the effect of temporal noise correlations on this information capacity? Second, what is the effect on the information content of reading the code with a finite temporal resolution for the neural response? We address these questions in the framework of a statistical model for the neuronal temporal response to a time-varying stimulus in a two-alternative forced-choice paradigm. We show that the information content of the temporal response scales linearly with the overall time of the response, even in the presence of temporal noise correlations. More precisely, we find that positive temporal noise correlations rescale, and thereby decrease, the information content; nevertheless, the information content of the response continues to scale linearly with the observation time. We further show that a finite temporal resolution is sufficient for obtaining most of the information from the cell's response; this finite timescale is related to the response properties of the cell.
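The linear scaling of information with observation time, and the adequacy of a finite read-out resolution, can be illustrated with a Poisson ideal-observer sketch (a simplified, correlation-free stand-in for the paper's model; the rate profiles, bin sizes, and KL-divergence measure below are illustrative assumptions):

```python
import numpy as np

def poisson_kl(m1, m2):
    """KL divergence (nats) between independent Poisson spike counts with
    per-bin means m1 and m2 -- an ideal-observer measure of how well the two
    stimuli can be told apart in a 2AFC task. Independence across bins makes
    it a sum over bins, hence linear in the observation time."""
    return np.sum(m1 * np.log(m1 / m2) - m1 + m2)

dt = 0.001                                   # 1 ms bins
t = np.arange(0.0, 1.0, dt)
r1 = 20 + 10 * np.sin(2 * np.pi * 4 * t)     # stimulus-locked rate profiles (Hz)
r2 = 20 + 10 * np.cos(2 * np.pi * 4 * t)

info_half = poisson_kl(r1[:500] * dt, r2[:500] * dt)
info_full = poisson_kl(r1 * dt, r2 * dt)     # about 2 * info_half: linear in T

# Reading the response at 10 ms instead of 1 ms resolution loses little
# information, because the bins still resolve the 4 Hz rate modulation.
m1 = (r1 * dt).reshape(-1, 10).sum(axis=1)
m2 = (r2 * dt).reshape(-1, 10).sum(axis=1)
info_10ms = poisson_kl(m1, m2)               # slightly below info_full
```

Shrinking the 10 ms bins further recovers the remaining sliver of information, while coarsening them past the modulation period of the rates would discard most of it, which is the tradeoff the abstract associates with finite temporal resolution.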


Subject(s)
Action Potentials/physiology , Auditory Perception/physiology , Brain/physiology , Nerve Net/physiology , Time Perception/physiology , Algorithms , Animals , Humans , Models, Neurological , Models, Statistical , Normal Distribution , Poisson Distribution , Synaptic Transmission/physiology , Time Factors , Vocalization, Animal/physiology