Results 1 - 20 of 21
1.
PLoS Comput Biol; 19(5): e1010989, 2023 May.
Article in English | MEDLINE | ID: mdl-37130121

ABSTRACT

Animals rely on different decision strategies when faced with ambiguous or uncertain cues. Depending on the context, decisions may be biased towards events that were most frequently experienced in the past, or be more explorative. A particular type of decision making central to cognition is sequential memory recall in response to ambiguous cues. A previously developed spiking neuronal network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. In response to an ambiguous cue, the model deterministically recalls the sequence shown most frequently during training. Here, we present an extension of the model enabling a range of different decision strategies. In this model, explorative behavior is generated by supplying neurons with noise. As the model relies on population encoding, uncorrelated noise averages out, and the recall dynamics remain effectively deterministic. In the presence of locally correlated noise, the averaging effect is avoided without impairing the model performance, and without the need for large noise amplitudes. We investigate two forms of correlated noise occurring in nature: shared synaptic background inputs, and random locking of the stimulus to spatiotemporal oscillations in the network activity. Depending on the noise characteristics, the network adopts various recall strategies. This study thereby provides potential mechanisms explaining how the statistics of learned sequences affect decision making, and how decision strategies can be adjusted after learning.
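
The core argument above — uncorrelated noise averages out in a population code, whereas locally correlated noise survives — can be illustrated with a small NumPy sketch. Population size and the shared-variance fraction below are made-up values, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 100, 10_000
for rho in (0.0, 0.2, 0.8):                            # fraction of shared noise variance
    shared = rng.normal(size=n_steps)                  # one source driving all neurons
    private = rng.normal(size=(n_neurons, n_steps))    # independent source per neuron
    noise = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private
    pop_avg = noise.mean(axis=0)                       # population-averaged input
    print(f"rho={rho:.1f}  var(single neuron)={noise.var():.2f}  "
          f"var(population average)={pop_avg.var():.2f}")
# rho=0: the population average shrinks as 1/N; shared noise keeps it at about rho.
```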


Subject(s)
Neural Networks, Computer; Neurons; Animals; Neurons/physiology; Learning/physiology; Memory/physiology; Mental Recall; Models, Neurological; Neuronal Plasticity/physiology; Action Potentials/physiology
2.
PLoS Comput Biol; 18(6): e1010233, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35727857

ABSTRACT

Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
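
The combination of Hebbian growth and rate-based homeostatic control described above can be caricatured in a few lines of rate-based code. This is only a toy sketch with invented parameters and a single sigmoidal output unit; it is not the structural plasticity rule or the spiking network of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre = 50
eta, eta_h = 0.002, 0.05           # Hebbian and homeostatic learning rates (hypothetical)
r_target = 0.2                     # target postsynaptic rate
w = rng.uniform(0.0, 0.1, n_pre)
r_avg = r_target                   # running estimate of the postsynaptic rate

for step in range(20000):
    x = (rng.random(n_pre) < 0.2).astype(float)    # presynaptic activity pattern
    y = 1.0 / (1.0 + np.exp(-(w @ x - 2.0)))       # postsynaptic rate (sigmoidal unit)
    r_avg += 0.01 * (y - r_avg)                    # slow rate estimate used for homeostasis
    w += eta * y * x                               # Hebbian growth for coactive pre/post
    w += eta_h * (r_target - r_avg) * w            # homeostatic scaling towards the target
    w = np.clip(w, 0.0, 1.0)

print(f"mean weight {w.mean():.3f}, time-averaged rate {r_avg:.3f} (target {r_target})")
```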


Subject(s)
Models, Neurological; Neural Networks, Computer; Learning/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology
3.
PLoS Comput Biol; 16(8): e1007790, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32841234

ABSTRACT

The impairment of cognitive function in Alzheimer's disease is clearly correlated to synapse loss. However, the mechanisms underlying this correlation are only poorly understood. Here, we investigate how the loss of excitatory synapses in sparsely connected random networks of spiking excitatory and inhibitory neurons alters their dynamical characteristics. Beyond the effects on the activity statistics, we find that the loss of excitatory synapses on excitatory neurons reduces the network's sensitivity to small perturbations. This decrease in sensitivity can be considered as an indication of a reduction of computational capacity. A full recovery of the network's dynamical characteristics and sensitivity can be achieved by firing rate homeostasis, here implemented by an up-scaling of the remaining excitatory-excitatory synapses. Mean-field analysis reveals that the stability of the linearised network dynamics is, in good approximation, uniquely determined by the firing rate, and thereby explains why firing rate homeostasis preserves not only the firing rate but also the network's sensitivity to small perturbations.
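
The homeostatic compensation studied here — up-scaling the surviving excitatory-excitatory synapses after synapse loss — can be sketched as follows. The weight distribution, loss fraction, and the choice to restore the summed (mean) excitatory drive are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_syn, p_loss = 1000, 0.3
w = rng.normal(1.0, 0.1, n_syn)                 # intact E->E weights (arbitrary units)
alive = rng.random(n_syn) > p_loss              # random synapse loss
w_lesioned = np.where(alive, w, 0.0)

scale = w.sum() / w_lesioned.sum()              # firing-rate homeostasis proxy: restore total drive
w_rescaled = w_lesioned * scale

print("summed E drive  intact: %.1f   after loss: %.1f   after up-scaling: %.1f"
      % (w.sum(), w_lesioned.sum(), w_rescaled.sum()))
```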


Subject(s)
Alzheimer Disease/physiopathology; Models, Neurological; Nerve Net/physiopathology; Synapses/physiology; Homeostasis/physiology; Humans
4.
J Comput Neurosci; 45(2): 103-132, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30146661

ABSTRACT

Capturing the response behavior of spiking neuron models with rate-based models facilitates the investigation of neuronal networks using powerful methods for rate-based network dynamics. To this end, we investigate the responses of two widely used neuron model types, the Izhikevich and augmented multi-adaptive threshold (AMAT) models, to spiking inputs ranging from step responses to natural spike data. We find (i) that linear-nonlinear firing rate models fitted to test data can be used to describe the firing-rate responses of AMAT and Izhikevich spiking neuron models in many cases; (ii) that firing-rate responses are generally too complex to be captured by first-order low-pass filters but require bandpass filters instead; (iii) that linear-nonlinear models capture the response of AMAT models better than that of Izhikevich models; (iv) that the wide range of response types evoked by current-injection experiments collapses to a few response types when neurons are driven by stationary or sinusoidally modulated Poisson input; and (v) that AMAT and Izhikevich models show different responses to spike input despite identical responses to current injections. Together, these findings suggest that rate-based models of network dynamics may capture a wider range of neuronal response properties by incorporating second-order bandpass filters fitted to responses of spiking model neurons. These models may contribute to bringing rate-based network modeling closer to the reality of biological neuronal networks.
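
The model structure referred to above — a linear filter followed by a static nonlinearity, with a band-pass rather than a low-pass filter — can be sketched generically. The kernel shape, time constants, and gain below are invented for illustration; they are not the fitted filters from the study.

```python
import numpy as np

def bandpass_kernel(t, tau_fast, tau_slow, b):
    """Fast positive lobe minus a slower negative lobe: a simple band-pass impulse response."""
    return np.exp(-t / tau_fast) / tau_fast - b * np.exp(-t / tau_slow) / tau_slow

dt = 0.1                                       # ms
t_k = np.arange(0.0, 200.0, dt)
k = bandpass_kernel(t_k, tau_fast=5.0, tau_slow=50.0, b=0.8)

stim = np.zeros(5000)
stim[1000:] = 5.0                              # step in the input rate (arbitrary units)
linear = np.convolve(stim, k, mode="full")[: len(stim)] * dt
rate = np.maximum(2.0 * linear, 0.0)           # static rectifying nonlinearity

print("peak %.2f vs steady state %.2f  (the overshoot is the band-pass signature)"
      % (rate.max(), rate[-1]))
```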


Subject(s)
Action Potentials/physiology; Models, Neurological; Neurons/physiology; Animals; Computer Simulation; Electric Stimulation; Linear Models; Nerve Net; Nonlinear Dynamics
5.
Cereb Cortex; 26(12): 4461-4496, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27797828

ABSTRACT

With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
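
The key idea of the hybrid scheme — network dynamics are simulated with point neurons, while the LFP is computed afterwards from the recorded spikes — is illustrated below in a heavily simplified form: pooled population spike counts convolved with fixed, made-up temporal kernels. This is not the hybridLFPy API and not a substitute for the multicompartment forward model it implements.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.1                                     # ms
n_bins = int(1000.0 / dt)                    # 1 s of activity
t_k = np.arange(0.0, 50.0, dt)               # kernel support (ms)

# hypothetical populations: (neuron count, rate in spikes/s, kernel amplitude, kernel tau in ms)
populations = {"E": (4000, 5.0, -0.02, 10.0), "I": (1000, 15.0, 0.05, 5.0)}

lfp = np.zeros(n_bins)
for n, rate, amp, tau in populations.values():
    spike_count = rng.poisson(n * rate * dt * 1e-3, n_bins)   # pooled spikes per time bin
    kernel = amp * np.exp(-t_k / tau)                         # fixed population-to-LFP kernel
    lfp += np.convolve(spike_count, kernel, mode="full")[:n_bins]

print("toy LFP: mean %.2f, std %.2f (arbitrary units)" % (lfp.mean(), lfp.std()))
```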


Subject(s)
Cerebral Cortex/physiology; Models, Neurological; Neurons/physiology; Animals; Computer Simulation; Humans; Membrane Potentials; Neural Inhibition/physiology; Thalamus/physiology
6.
PLoS Comput Biol; 10(1): e1003428, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24453955

ABSTRACT

Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, correlations affect the power and spatial reach of extracellular signals such as the local-field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increase intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network. This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.
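
Point (2) above — a homogeneous external input adds an unspecific, additive contribution to pairwise correlations — is easy to verify in a toy setting without any recurrence. The numbers are arbitrary; the local fluctuations are simply stand-ins for what the recurrent network would generate.

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps = 200_000
local = rng.normal(size=(2, n_steps))             # stand-in for locally generated fluctuations
for ext_std in (0.0, 0.5, 1.0):
    common = ext_std * rng.normal(size=n_steps)   # homogeneous external drive shared by both units
    x = local + common
    rho = np.corrcoef(x)[0, 1]
    print(f"external std {ext_std:.1f}: measured correlation {rho:.3f}, "
          f"predicted additive contribution {ext_std**2 / (1 + ext_std**2):.3f}")
```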


Subject(s)
Nerve Net; Neurons/physiology; Algorithms; Animals; Computer Simulation; Feedback, Physiological; Haplorhini; Models, Neurological; Neuronal Plasticity; Signal Transduction; Stochastic Processes; Synapses/physiology; Synaptic Transmission
7.
PLoS Comput Biol; 10(11): e1003928, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25393030

ABSTRACT

Power laws, that is, power spectral densities (PSDs) exhibiting 1/f^α behavior for large frequencies f, have been observed both in microscopic (neural membrane potentials and currents) and macroscopic (electroencephalography; EEG) recordings. While complex network behavior has been suggested to be at the root of this phenomenon, we here demonstrate a possible origin of such power laws in the biophysical properties of single neurons described by the standard cable equation. Taking advantage of the analytical tractability of the so-called ball-and-stick neuron model, we derive general expressions for the PSD transfer functions for a set of measures of neuronal activity: the soma membrane current, the current-dipole moment (corresponding to the single-neuron EEG contribution), and the soma membrane potential. These PSD transfer functions relate the PSDs of the respective measurements to the PSDs of the noisy input currents. With homogeneously distributed input currents across the neuronal membrane we find that all PSD transfer functions express asymptotic high-frequency 1/f^α power laws with power-law exponents analytically identified as α∞^I = 1/2 for the soma membrane current, α∞^p = 3/2 for the current-dipole moment, and α∞^V = 2 for the soma membrane potential. Comparison with available data suggests that the apparent power laws observed in the high-frequency end of the PSD spectra may stem from uncorrelated current sources which are homogeneously distributed across the neural membranes and themselves exhibit pink (1/f) noise distributions. While the PSD noise spectra at low frequencies may be dominated by synaptic noise, our findings suggest that the high-frequency power laws may originate in noise from intrinsic ion channels. The significance of this finding goes beyond neuroscience as it demonstrates how 1/f^α power laws with a wide range of values for the power-law exponent α may arise from a simple, linear partial differential equation.
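
To relate the analytical exponents quoted above to measured spectra, one would typically fit a straight line to the high-frequency part of the PSD on a log-log scale. The sketch below only checks that procedure on idealized 1/f^α spectra using the three exponents from the abstract; it does not reproduce the ball-and-stick derivation.

```python
import numpy as np

def highfreq_exponent(freqs, psd, f_min=100.0):
    """Estimate alpha in PSD ~ 1/f^alpha from a log-log linear fit above f_min (Hz)."""
    sel = freqs >= f_min
    slope, _ = np.polyfit(np.log10(freqs[sel]), np.log10(psd[sel]), 1)
    return -slope

freqs = np.logspace(0, 3, 400)                 # 1 Hz .. 1 kHz
for alpha in (0.5, 1.5, 2.0):                  # soma current, dipole moment, soma potential
    psd = freqs ** -alpha                      # idealized asymptotic spectrum
    print(f"true alpha = {alpha}  estimated alpha = {highfreq_exponent(freqs, psd):.2f}")
```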


Subject(s)
Electroencephalography; Membrane Potentials/physiology; Models, Neurological; Neurons/physiology; Computational Biology; Computer Simulation
8.
PLoS Comput Biol; 9(7): e1003137, 2013.
Article in English | MEDLINE | ID: mdl-23874180

ABSTRACT

Despite its century-old use, the interpretation of local field potentials (LFPs), the low-frequency part of electrical signals recorded in the brain, is still debated. In cortex the LFP appears to mainly stem from transmembrane neuronal currents following synaptic input, and obvious questions regarding the 'locality' of the LFP are: What is the size of the signal-generating region, i.e., the spatial reach, around a recording contact? How far does the LFP signal extend outside a synaptically activated neuronal population? And how do the answers depend on the temporal frequency of the LFP signal? Experimental inquiries have given conflicting results, and we here pursue a modeling approach based on a well-established biophysical forward-modeling scheme incorporating detailed reconstructed neuronal morphologies in precise calculations of population LFPs including thousands of neurons. The two key factors determining the frequency dependence of LFP are the spatial decay of the single-neuron LFP contribution and the conversion of synaptic input correlations into correlations between single-neuron LFP contributions. Both factors are seen to give low-pass filtering of the LFP signal power. For uncorrelated input only the first factor is relevant, and here a modest reduction (<50%) in the spatial reach is observed for higher frequencies (>100 Hz) compared to the near-DC ([Formula: see text]) value of about [Formula: see text]. Much larger frequency-dependent effects are seen when populations of pyramidal neurons receive correlated and spatially asymmetric inputs: the low-frequency ([Formula: see text]) LFP power can here be an order of magnitude or more larger than at 60 Hz. Moreover, the low-frequency LFP components have larger spatial reach and extend further outside the active population than high-frequency components. Further, the spatial LFP profiles for such populations typically span the full vertical extent of the dendrites of neurons in the population. Our numerical findings are backed up by an intuitive simplified model for the generation of population LFP.
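
A minimal quantitative intuition for these reach questions comes from the point-source approximation of volume conduction, φ = I/(4πσr), summed over a laterally extended population. The geometry, conductivity, and correlation values below are placeholders, not the morphologically detailed forward model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_ex = 0.3                                  # extracellular conductivity (S/m)
n_cells = 10_000
r_lat = np.sqrt(rng.random(n_cells)) * 1e-3     # lateral distances, uniform over a 1 mm disc (m)
dist = np.sqrt(r_lat**2 + (1e-4) ** 2)          # electrode sits 0.1 mm above the source plane
w = 1.0 / (4.0 * np.pi * sigma_ex * dist)       # point-source weights: phi = I / (4 pi sigma r)

for c in (0.0, 0.1, 1.0):                       # pairwise correlation between source currents
    var = np.sum(w**2) + c * (np.sum(w) ** 2 - np.sum(w**2))
    print(f"source correlation {c:.1f}: compound LFP std ~ {np.sqrt(var):.2e} "
          "(arb. units per unit current)")
```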


Subject(s)
Action Potentials; Brain/physiology; Models, Neurological
9.
J Comput Neurosci; 35(3): 359-75, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23783890

ABSTRACT

Firing-rate models provide a practical tool for studying signal processing in the early visual system, permitting more thorough mathematical analysis than spike-based models. We show here that essential response properties of relay cells in the lateral geniculate nucleus (LGN) can be captured by surprisingly simple firing-rate models consisting of a low-pass filter and a nonlinear activation function. The starting point for our analysis is a pair of spiking neuron models based on experimental data: a spike-response model fitted to data from macaque (Carandini et al., J. Vis., 7(14): 20.1-11, 2007), and a model with conductance-based synapses and afterhyperpolarizing currents fitted to data from cat (Casti et al., J. Comput. Neurosci., 24(2): 235-252, 2008). We obtained the nonlinear activation function by stimulating the model neurons with stationary stochastic spike trains, while we characterized the linear filter by fitting a low-pass filter to responses to sinusoidally modulated stochastic spike trains. To account for the non-Poisson nature of retinal spike trains, we performed all analyses with spike trains with higher-order gamma statistics in addition to Poissonian spike trains. Interestingly, the properties of the low-pass filter depend only on the average input rate, but not on the modulation depth of sinusoidally modulated input. Thus, the response properties of our model are fully specified by just three parameters (low-frequency gain, cutoff frequency, and delay) for a given mean input rate and input regularity. This simple firing-rate model reproduces the response of spiking neurons to a step in input rate very well for Poissonian as well as for non-Poissonian input. We also found that the cutoff frequencies, and thus the filter time constants, of the rate-based model are unrelated to the membrane time constants of the underlying spiking models, in agreement with similar observations for simpler models.
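
The three-parameter model described above (low-frequency gain, cutoff frequency, delay, followed by a static nonlinearity) is easy to write down explicitly. The parameter values in this sketch are invented; only the structure mirrors the model in the abstract.

```python
import numpy as np

def lowpass_ln(rate_in, dt, gain, f_cut, delay):
    """First-order low-pass filter (gain, cutoff, delay) followed by rectification."""
    tau = 1.0 / (2.0 * np.pi * f_cut)                 # filter time constant (s)
    out = np.zeros_like(rate_in)
    y = 0.0
    for i, x in enumerate(rate_in):
        y += dt / tau * (gain * x - y)                # exponential (RC) low-pass step
        out[i] = y
    d = int(round(delay / dt))
    return np.maximum(np.concatenate([np.zeros(d), out[: len(out) - d]]), 0.0)

dt = 1e-3                                             # s
stim = np.full(2000, 20.0)
stim[:500] = 5.0                                      # step in the retinal input rate (spikes/s)
resp = lowpass_ln(stim, dt, gain=0.8, f_cut=10.0, delay=5e-3)
print("relay rate before/after the step: %.1f -> %.1f spikes/s" % (resp[400], resp[-1]))
```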


Subject(s)
Geniculate Bodies/physiology; Neurons/physiology; Algorithms; Animals; Computer Simulation; Electric Stimulation; Electrophysiological Phenomena/physiology; Excitatory Postsynaptic Potentials/physiology; Membrane Potentials/physiology; Models, Neurological; Nonlinear Dynamics; Synaptic Transmission/physiology
10.
PLoS Comput Biol; 8(8): e1002596, 2012 Aug.
Article in English | MEDLINE | ID: mdl-23133368

ABSTRACT

Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
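
The one-dimensional negative-feedback picture of the compound activity can be reproduced with a toy linear rate equation: opening the feedback loop inflates population-rate fluctuations, closing it suppresses them. Time constant, feedback gain, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, tau, n_steps = 0.1, 10.0, 200_000              # ms
noise = rng.normal(size=n_steps) * np.sqrt(dt)

def population_rate(w_fb):
    """Compound activity r with dr/dt = -(1 - w_fb) r / tau + noise."""
    r = np.zeros(n_steps)
    for i in range(1, n_steps):
        r[i] = r[i - 1] - dt / tau * (1.0 - w_fb) * r[i - 1] + noise[i]
    return r

for w_fb, label in ((0.0, "loop opened (feedforward)"), (-4.0, "inhibitory feedback")):
    print(f"{label:26s}: population-rate variance {population_rate(w_fb).var():.2f}")
```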


Subject(s)
Models, Neurological; Nerve Net/physiology; Action Potentials/physiology; Animals; Computer Simulation; Feedback, Physiological/physiology; Neurons/physiology; Synaptic Transmission
11.
Front Neuroinform; 16: 837549, 2022.
Article in English | MEDLINE | ID: mdl-35645755

ABSTRACT

Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
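
At its core, a benchmark record of the kind described here couples a timed simulation run with the metadata needed to reproduce it. The sketch below is a generic illustration of that idea only; it is not the beNNch interface, and `run_simulation` is a user-supplied placeholder.

```python
import json
import platform
import time

def benchmark(run_simulation, label, **params):
    """Time one simulation run and store time-to-solution together with metadata."""
    t0 = time.perf_counter()
    run_simulation(**params)
    record = {
        "label": label,
        "time_to_solution_s": time.perf_counter() - t0,
        "params": params,
        "metadata": {"python": platform.python_version(), "host": platform.node()},
    }
    with open(f"benchmark_{label}.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

# dummy workload standing in for a network simulation
rec = benchmark(lambda n: sum(i * i for i in range(n)), "toy", n=2_000_000)
print(f"time to solution: {rec['time_to_solution_s']:.3f} s")
```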

12.
Front Neurosci; 15: 757790, 2021.
Article in English | MEDLINE | ID: mdl-35002599

ABSTRACT

The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for Computational Neuroscience and Neuromorphic Computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the number resolution of synaptic weights appears to be a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
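
One concrete way to discretize weights while preserving the mean and variance of the summed input, as discussed above, is to replace the Gaussian weight distribution by a symmetric two-point distribution at μ ± σ. The sketch below contrasts this with a naive rounding to fixed levels; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sd, n = 0.5, 0.2, 100_000
w = rng.normal(mu, sd, n)                                   # high-resolution weights

levels = np.array([0.0, 0.5, 1.0])                          # naive: snap to the nearest level
w_naive = levels[np.abs(w[:, None] - levels).argmin(axis=1)]

# moment-preserving: two-point distribution {mu - sd, mu + sd} with equal probability
w_two_point = np.where(rng.random(n) < 0.5, mu - sd, mu + sd)

for name, ww in (("original", w), ("naive", w_naive), ("two-point", w_two_point)):
    print(f"{name:10s} mean = {ww.mean():.3f}   std = {ww.std():.3f}")
```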

13.
J Neurosci; 29(4): 1006-10, 2009 Jan 28.
Article in English | MEDLINE | ID: mdl-19176809

ABSTRACT

To understand the mechanisms of fast information processing in the brain, it is necessary to determine how rapidly populations of neurons can respond to incoming stimuli in a noisy environment. Recently, it has been shown experimentally that an ensemble of neocortical neurons can track a time-varying input current in the presence of additive correlated noise very rapidly, up to frequencies of several hundred hertz. Modulations in the firing rate of presynaptic neuron populations affect, however, not only the mean but also the variance of the synaptic input to postsynaptic cells. It has been argued that such modulations of the noise intensity (multiplicative modulation) can be tracked much faster than modulations of the mean input current (additive modulation). Here, we compare the response characteristics of an ensemble of neocortical neurons for both modulation schemes. We injected sinusoidally modulated noisy currents (additive and multiplicative modulation) into layer V pyramidal neurons of the rat somatosensory cortex and measured the trial- and ensemble-averaged spike responses for a wide range of stimulus frequencies. For both modulation paradigms, we observed low-pass behavior. The cutoff frequencies were markedly high, considerably higher than the average firing rates. We demonstrate that modulations in the variance can be tracked significantly faster than modulations in the mean input. Extremely fast stimuli (up to 1 kHz) can be reliably tracked, provided the stimulus amplitudes are sufficiently high.
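
The two stimulation paradigms compared in this study can be written down directly: additive modulation varies the mean of the injected noisy current, multiplicative modulation varies its standard deviation. The sketch uses white noise and made-up amplitudes purely to show the distinction (the experiments used correlated noise).

```python
import numpy as np

rng = np.random.default_rng(9)
dt = 1e-4                                       # s
t = np.arange(0.0, 2.0, dt)
f_mod, i0, sigma = 200.0, 100.0, 50.0           # modulation frequency (Hz), mean and noise SD (pA)
xi = rng.normal(size=t.size)                    # noise carrier (white here, for simplicity)

i_add = i0 + 30.0 * np.sin(2 * np.pi * f_mod * t) + sigma * xi          # mean modulated
i_mul = i0 + sigma * (1.0 + 0.5 * np.sin(2 * np.pi * f_mod * t)) * xi   # variance modulated

print("additive:       mean %.1f pA, std %.1f pA" % (i_add.mean(), i_add.std()))
print("multiplicative: mean %.1f pA, std %.1f pA" % (i_mul.mean(), i_mul.std()))
```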


Subject(s)
Membrane Potentials/physiology; Neocortex/cytology; Nonlinear Dynamics; Pyramidal Cells/physiology; Animals; Animals, Newborn; Biophysical Phenomena; Electric Stimulation/methods; In Vitro Techniques; Noise; Patch-Clamp Techniques/methods; Rats; Rats, Long-Evans
14.
Sci Rep; 9(1): 18303, 2019 Dec 4.
Article in English | MEDLINE | ID: mdl-31797943

ABSTRACT

Neuronal network models of high-level brain functions such as memory recall and reasoning often rely on the presence of some form of noise. The majority of these models assumes that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. In vivo, synaptic background input has been suggested to serve as the main source of noise in biological neuronal networks. However, the finiteness of the number of such noise sources constitutes a challenge to this idea. Here, we show that shared-noise correlations resulting from a finite number of independent noise sources can substantially impair the performance of stochastic network models. We demonstrate that this problem is naturally overcome by replacing the ensemble of independent noise sources by a deterministic recurrent neuronal network. By virtue of inhibitory feedback, such networks can generate small residual spatial correlations in their activity which, counter to intuition, suppress the detrimental effect of shared input. We exploit this mechanism to show that a single recurrent network of a few hundred neurons can serve as a natural noise source for a large ensemble of functional networks performing probabilistic computations, each comprising thousands of units.
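
Why a finite pool of independent noise sources induces shared-input correlations can be seen in a toy calculation: two functional units that each draw K inputs from a pool of M sources share, on average, a fraction K/M of their input variance. The numbers below are arbitrary, and the recurrent-network solution proposed in the paper is not part of this sketch.

```python
import numpy as np

rng = np.random.default_rng(10)
n_steps, k = 50_000, 100                        # samples, noise inputs per functional unit
for m in (200, 1_000, 10_000):                  # size of the shared pool of noise sources
    idx_a = rng.choice(m, k, replace=False)
    idx_b = rng.choice(m, k, replace=False)
    n_shared = np.intersect1d(idx_a, idx_b).size
    src = rng.normal(size=(2 * k - n_shared, n_steps))          # draw only the sources actually used
    a = src[:k].sum(axis=0)                                     # unit A: its k sources
    b = np.concatenate([src[:n_shared], src[k:]]).sum(axis=0)   # unit B re-uses n_shared of them
    print(f"pool size {m:6d}: measured correlation {np.corrcoef(a, b)[0, 1]:.3f}"
          f"  (expected k/m = {k / m:.3f})")
```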

15.
Front Comput Neurosci; 11: 79, 2017.
Article in English | MEDLINE | ID: mdl-28878643

ABSTRACT

The classical model of basal ganglia has been refined in recent years with discoveries of subpopulations within a nucleus and previously unknown projections. One such discovery is the presence of subpopulations of arkypallidal and prototypical neurons in external globus pallidus, which was previously considered to be a primarily homogeneous nucleus. Developing a computational model of these multiple interconnected nuclei is challenging, because the strengths of the connections are largely unknown. We therefore use a genetic algorithm to search for the unknown connectivity parameters in a firing rate model. We apply a binary cost function derived from empirical firing rate and phase relationship data for the physiological and Parkinsonian conditions. Our approach generates ensembles of over 1,000 configurations, or homologies, for each condition, with broad distributions for many of the parameter values and overlap between the two conditions. However, the resulting effective weights of connections from or to prototypical and arkypallidal neurons are consistent with the experimental data. We investigate the significance of the weight variability by manipulating the parameters individually and cumulatively, and conclude that the correlation observed between the parameters is necessary for generating the dynamics of the two conditions. We then investigate the response of the networks to a transient cortical stimulus, and demonstrate that networks classified as physiological effectively suppress activity in the internal globus pallidus, and are not susceptible to oscillations, whereas parkinsonian networks show the opposite tendency. Thus, we conclude that the rates and phase relationships observed in the globus pallidus are predictive of experimentally observed higher level dynamical features of the physiological and parkinsonian basal ganglia, and that the multiplicity of solutions generated by our method may well be indicative of a natural diversity in basal ganglia networks. We propose that our approach of generating and analyzing an ensemble of multiple solutions to an underdetermined network model provides greater confidence in its predictions than those derived from a unique solution, and that projecting such homologous networks on a lower dimensional space of sensibly chosen dynamical features gives a better chance than a purely structural analysis at understanding complex pathologies such as Parkinson's disease.
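
The parameter-search strategy can be illustrated with a bare-bones evolutionary loop: mutate a population of candidate parameter vectors and keep those with the lowest cost. The sketch below uses truncation selection, mutation, and elitism on a toy cost with a hidden target vector; the paper's firing-rate model, its specific genetic algorithm, and its binary cost built from rates and phase relationships are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
target = np.array([1.0, -0.5, 0.3, 2.0])        # "unknown" connection strengths to be recovered

def cost(params):
    """Toy cost: distance to the hidden target (the paper instead scores binary constraints
    derived from empirical firing rates and phase relationships)."""
    return float(np.sum((params - target) ** 2))

n_pop, n_gen, mut_sd = 200, 300, 0.1
population = rng.uniform(-3.0, 3.0, (n_pop, target.size))

for _ in range(n_gen):
    order = np.argsort([cost(p) for p in population])
    parents = population[order[: n_pop // 4]]                    # truncation selection
    children = parents[rng.integers(0, len(parents), n_pop)] \
        + rng.normal(0.0, mut_sd, population.shape)              # mutated offspring
    children[: len(parents)] = parents                           # elitism
    population = children

best = min(population, key=cost)
print("recovered parameters:", np.round(best, 2), " target:", target)
```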

16.
Front Comput Neurosci; 8: 136, 2014.
Article in English | MEDLINE | ID: mdl-25400575

ABSTRACT

Random networks of integrate-and-fire neurons with strong current-based synapses can, contrary to what was previously believed, assume stable states of sustained asynchronous and irregular firing, even without external random background or pacemaker neurons. We analyze the mechanisms underlying the emergence, lifetime and irregularity of such self-sustained activity states. We first demonstrate how the competition between the mean and the variance of the synaptic input leads to a non-monotonic firing-rate transfer in the network. Thus, by increasing the synaptic coupling strength, the system can become bistable: In addition to the quiescent state, a second stable fixed-point at moderate firing rates can emerge by a saddle-node bifurcation. Inherently generated fluctuations of the population firing rate around this non-trivial fixed-point can trigger transitions into the quiescent state. Hence, the trade-off between the magnitude of the population-rate fluctuations and the size of the basin of attraction of the non-trivial rate fixed-point determines the onset and the lifetime of self-sustained activity states. During self-sustained activity, individual neuronal activity is moreover highly irregular, switching between long periods of low firing rate and short burst-like states. We show that this is an effect of the strong synaptic weights and the finite time constant of synaptic and neuronal integration, and can actually serve to stabilize the self-sustained state.
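
The bistability mechanism sketched above can be made concrete with a toy self-consistency problem ν = f(ν), where f is a non-monotonic rate-transfer function. The Gaussian form and all numbers below are invented; the sketch only shows how a second, non-trivial fixed point appears once the coupling is strong enough.

```python
import numpy as np
from scipy.optimize import brentq

def transfer(nu, g):
    """Toy non-monotonic transfer: output rate first rises, then falls with the network rate nu."""
    return g * np.exp(-(10.0 - nu) ** 2 / 8.0)

def fixed_points(g, nu_max=60.0, n_grid=6000):
    nu = np.linspace(0.0, nu_max, n_grid)
    h = transfer(nu, g) - nu
    return [brentq(lambda x: transfer(x, g) - x, nu[i], nu[i + 1])
            for i in range(n_grid - 1) if h[i] * h[i + 1] < 0.0]

for g in (5.0, 30.0):                               # weak vs strong synaptic coupling
    print(f"coupling g = {g:4.1f}: fixed points at", [round(fp, 2) for fp in fixed_points(g)])
# weak coupling: only the quiescent state; strong coupling adds an unstable and a stable active state
```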

17.
Front Comput Neurosci; 7: 131, 2013.
Article in English | MEDLINE | ID: mdl-24151463

ABSTRACT

The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein-Uhlenbeck process (OUP) as a special case. The distinction between the two classes is the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
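
For the linear-rate-model end point of this mapping, the stationary covariance matrix has a standard closed form: it solves a Lyapunov equation. The two-population weights and noise intensities below are arbitrary; the sketch shows the input-noise case only, not the binary, LIF, or Hawkes models themselves.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

tau = 10.0                                     # ms
W = np.array([[0.5, -1.5],                     # toy excitatory/inhibitory coupling
              [0.5, -1.5]])
A = (W - np.eye(2)) / tau                      # drift matrix of tau dx/dt = (W - 1) x + xi
D = np.eye(2) * 2.0 / tau**2                   # intensity of uncorrelated input noise

C = solve_continuous_lyapunov(A, -D)           # stationary covariance: A C + C A^T + D = 0
rho = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
print("covariance matrix:\n", C.round(4))
print("pairwise correlation:", round(rho, 3))
```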

18.
Neuron; 72(5): 859-72, 2011 Dec 08.
Article in English | MEDLINE | ID: mdl-22153380

ABSTRACT

The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.
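
The last statement — correlations enlarge the region that effectively generates the LFP — can be illustrated with a toy "reach" calculation: the radius containing 95% of the compound signal variance, computed for uncorrelated versus fully correlated single-cell contributions. The distance profile and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells = 20_000
r = np.sqrt(rng.random(n_cells))                # lateral distance (mm), uniform over a 1 mm disc
w = 1.0 / (r + 0.05) ** 2                       # toy single-cell LFP amplitude vs distance

def reach(weights, radii, correlated, q=0.95):
    """Radius containing a fraction q of the compound LFP variance."""
    order = np.argsort(radii)
    w_s, r_s = weights[order], radii[order]
    cum = np.cumsum(w_s) ** 2 if correlated else np.cumsum(w_s ** 2)
    return r_s[np.searchsorted(cum, q * cum[-1])]

print(f"uncorrelated inputs: 95% of the variance from within {reach(w, r, False):.2f} mm")
print(f"correlated inputs  : 95% of the variance from within {reach(w, r, True):.2f} mm")
```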


Subject(s)
Cerebral Cortex/cytology; Cerebral Cortex/physiology; Evoked Potentials/physiology; Models, Neurological; Neurons/physiology; Animals; Computer Simulation; Electrodes; Electroencephalography; Humans; Nerve Net/physiology; Neurons/classification; Synapses/physiology; Synaptic Potentials/physiology
19.
Front Comput Neurosci; 4: 149, 2010.
Article in English | MEDLINE | ID: mdl-21212832

ABSTRACT

Firing-rate models provide a practical tool for studying the dynamics of trial- or population-averaged neuronal signals. A wealth of theoretical and experimental studies has been dedicated to the derivation or extraction of such models by investigating the firing-rate response characteristics of ensembles of neurons. The majority of these studies assumes that neurons receive input spikes at a high rate through weak synapses (diffusion approximation). For many biological neural systems, however, this assumption cannot be justified. So far, it is unclear how time-varying presynaptic firing rates are transmitted by a population of neurons if the diffusion assumption is dropped. Here, we numerically investigate the stationary and non-stationary firing-rate response properties of leaky integrate-and-fire neurons receiving input spikes through excitatory synapses with alpha-function shaped postsynaptic currents for strong synaptic weights. Input spike trains are modeled by inhomogeneous Poisson point processes with sinusoidal rate. Average rates, modulation amplitudes, and phases of the period-averaged spike responses are measured for a broad range of stimulus, synapse, and neuron parameters. Across wide parameter regions, the resulting transfer functions can be approximated by a linear first-order low-pass filter. Below a critical synaptic weight, the cutoff frequencies are approximately constant and determined by the synaptic time constants. Only for synapses with unrealistically strong weights are the cutoff frequencies significantly increased. To account for stimuli with larger modulation depths, we combine the measured linear transfer function with the nonlinear response characteristics obtained for stationary inputs. The resulting linear-nonlinear model accurately predicts the population response for a variety of non-sinusoidal stimuli.
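
The stimuli used here are inhomogeneous Poisson spike trains with sinusoidally modulated rate; a standard way to generate such trains is thinning of a homogeneous Poisson process. Rates, modulation depth, and duration below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(12)

def sinusoidal_poisson(rate_mean, rate_mod, f_mod, t_max):
    """Spike times with rate r(t) = rate_mean + rate_mod * sin(2 pi f_mod t), via thinning."""
    r_max = rate_mean + rate_mod
    candidates = np.sort(rng.uniform(0.0, t_max, rng.poisson(r_max * t_max)))
    accept = rng.random(candidates.size) < (
        rate_mean + rate_mod * np.sin(2 * np.pi * f_mod * candidates)) / r_max
    return candidates[accept]

spikes = sinusoidal_poisson(rate_mean=100.0, rate_mod=50.0, f_mod=5.0, t_max=10.0)  # Hz, Hz, Hz, s
print(f"{spikes.size} spikes in 10 s, mean rate {spikes.size / 10.0:.1f}/s (expected about 100/s)")
```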

20.
Neural Comput; 20(9): 2185-226, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18439141

ABSTRACT

The function of cortical networks depends on the collective interplay between neurons and neuronal populations, which is reflected in the correlation of signals that can be recorded at different levels. To correctly interpret these observations it is important to understand the origin of neuronal correlations. Here we study how cells in large recurrent networks of excitatory and inhibitory neurons interact and how the associated correlations affect stationary states of idle network activity. We demonstrate that the structure of the connectivity matrix of such networks induces considerable correlations between synaptic currents as well as between subthreshold membrane potentials, provided Dale's principle is respected. If, in contrast, synaptic weights are randomly distributed, input correlations can vanish, even for densely connected networks. Although correlations are strongly attenuated when proceeding from membrane potentials to action potentials (spikes), the resulting weak correlations in the spike output can cause substantial fluctuations in the population activity, even in highly diluted networks. We show that simple mean-field models that take the structure of the coupling matrix into account can adequately describe the power spectra of the population activity. The consequences of Dale's principle on correlations and rate fluctuations are discussed in the light of recent experimental findings.
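
The contrast drawn above between Dale-conforming and randomly signed weights can be illustrated with a static shared-input calculation: for uncorrelated, unit-variance presynaptic spike trains, the covariance of the summed inputs to two cells is the scalar product of their connectivity rows. Connection probability, population sizes, and weight magnitudes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(13)
n_pre, n_post, p = 2000, 300, 0.1
conn = rng.random((n_post, n_pre)) < p                      # fixed random connectivity
magnitude = np.abs(rng.normal(0.0, 1.0, (n_post, n_pre)))   # synaptic weight magnitudes

sign_per_cell = np.where(rng.random(n_pre) < 0.8, 1.0, -1.0)      # Dale: each presynaptic cell is E or I
W_dale = conn * magnitude * sign_per_cell
W_random = conn * magnitude * rng.choice([-1.0, 1.0], (n_post, n_pre))  # zero-mean, sign per synapse

def mean_input_correlation(W):
    """Average input correlation for uncorrelated, unit-variance presynaptic spike trains."""
    C = W @ W.T
    d = np.sqrt(np.diag(C))
    R = C / np.outer(d, d)
    return R[np.triu_indices_from(R, k=1)].mean()

print("Dale's principle (sign fixed per presynaptic cell):", round(mean_input_correlation(W_dale), 3))
print("randomly signed weights (zero mean)               :", round(mean_input_correlation(W_random), 3))
```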


Subject(s)
Cerebral Cortex/anatomy & histology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Population Dynamics; Statistics as Topic; Animals; Humans; Neural Networks, Computer