Results 1 - 20 of 95
1.
PLoS Comput Biol ; 19(5): e1010989, 2023 05.
Article in English | MEDLINE | ID: mdl-37130121

ABSTRACT

Animals rely on different decision strategies when faced with ambiguous or uncertain cues. Depending on the context, decisions may be biased towards events that were most frequently experienced in the past, or be more explorative. A particular type of decision making central to cognition is sequential memory recall in response to ambiguous cues. A previously developed spiking neuronal network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. In response to an ambiguous cue, the model deterministically recalls the sequence shown most frequently during training. Here, we present an extension of the model enabling a range of different decision strategies. In this model, explorative behavior is generated by supplying neurons with noise. As the model relies on population encoding, uncorrelated noise averages out, and the recall dynamics remain effectively deterministic. In the presence of locally correlated noise, the averaging effect is avoided without impairing the model performance, and without the need for large noise amplitudes. We investigate two forms of correlated noise occurring in nature: shared synaptic background inputs, and random locking of the stimulus to spatiotemporal oscillations in the network activity. Depending on the noise characteristics, the network adopts various recall strategies. This study thereby provides potential mechanisms explaining how the statistics of learned sequences affect decision making, and how decision strategies can be adjusted after learning.
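The averaging argument (uncorrelated noise cancels in a population-averaged signal, while shared noise survives) can be sketched numerically. The parameters below are arbitrary toy values, not taken from the model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons, n_trials, sigma = 400, 1000, 1.0

# Independent noise: every neuron draws its own sample on each trial.
indep = rng.normal(0.0, sigma, size=(n_trials, n_neurons))

# Shared noise: one sample per trial is broadcast to all neurons,
# mimicking perfectly correlated background input.
shared = np.repeat(rng.normal(0.0, sigma, size=(n_trials, 1)), n_neurons, axis=1)

# Variance of the population-averaged signal in each condition.
var_indep = indep.mean(axis=1).var()    # shrinks as sigma**2 / n_neurons
var_shared = shared.mean(axis=1).var()  # stays near sigma**2

print(var_indep, var_shared)
```

The independent-noise variance is smaller by roughly the population size, which is why a population code needs correlated noise to behave stochastically.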


Subject(s)
Neural Networks, Computer; Neurons; Animals; Neurons/physiology; Learning/physiology; Memory/physiology; Mental Recall; Models, Neurological; Neuronal Plasticity/physiology; Action Potentials/physiology
2.
PLoS Comput Biol ; 18(6): e1010233, 2022 06.
Article in English | MEDLINE | ID: mdl-35727857

ABSTRACT

Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context-specific prediction of future sequence elements, and generates mismatch signals if the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context-specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay.
We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
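Why high-order (context-specific) prediction matters can be illustrated with a toy n-gram predictor, a plain Python sketch that is not the spiking TM implementation itself:

```python
from collections import defaultdict

def train(sequences, order):
    """Map each length-`order` context to the set of observed successors."""
    table = defaultdict(set)
    for seq in sequences:
        for i in range(order, len(seq)):
            table[tuple(seq[i - order:i])].add(seq[i])
    return table

def predict(table, context):
    return table.get(tuple(context), set())

# 'B' is shared between the two sequences, so its successor is ambiguous
# without context; a second-order predictor disambiguates it.
seqs = ["ABC", "DBE"]
first = train(seqs, order=1)
second = train(seqs, order=2)
print(predict(first, "B"))    # two possible successors: ambiguous
print(predict(second, "AB"))  # a single, context-specific prediction
```

The spiking model achieves the same disambiguation through sequence-specific subnetworks rather than an explicit lookup table.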


Subject(s)
Models, Neurological; Neural Networks, Computer; Learning/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology
3.
PLoS Comput Biol ; 18(9): e1010086, 2022 09.
Article in English | MEDLINE | ID: mdl-36074778

ABSTRACT

Sustainable research on computational models of neuronal networks requires published models to be understandable, reproducible, and extendable. Missing details or ambiguities about mathematical concepts and assumptions, algorithmic implementations, or parameterizations hinder progress. Such flaws are unfortunately frequent and one reason is a lack of readily applicable standards and tools for model description. Our work aims not only to advance complete and concise descriptions of network connectivity but also to guide the implementation of connection routines in simulation software and neuromorphic hardware systems. We first review models made available by the computational neuroscience community in the repositories ModelDB and Open Source Brain, and investigate the corresponding connectivity structures and their descriptions in both manuscript and code. The review comprises the connectivity of networks with diverse levels of neuroanatomical detail and exposes how connectivity is abstracted in existing description languages and simulator interfaces. We find that a substantial proportion of the published descriptions of connectivity is ambiguous. Based on this review, we derive a set of connectivity concepts for deterministically and probabilistically connected networks and also address networks embedded in metric space. Besides these mathematical and textual guidelines, we propose a unified graphical notation for network diagrams to facilitate an intuitive understanding of network properties. Examples of representative network models demonstrate the practical use of the ideas. We hope that the proposed standardizations will contribute to unambiguous descriptions and reproducible implementations of neuronal network connectivity in computational neuroscience.
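As an example of the kind of ambiguity such a review targets, the phrase "connection probability p" can denote at least two distinct connection routines that agree in expected density but differ in degree statistics. A minimal sketch with hypothetical network sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_source, n_target, p = 100, 100, 0.1

# Routine 1: pairwise Bernoulli. Every source-target pair is connected
# independently with probability p; in-degrees are binomially distributed.
bernoulli = rng.random((n_target, n_source)) < p

# Routine 2: fixed in-degree. Every target draws exactly round(p * n_source)
# distinct sources; the expected density is identical to routine 1.
k_in = round(p * n_source)
fixed = np.zeros((n_target, n_source), dtype=bool)
for t in range(n_target):
    fixed[t, rng.choice(n_source, size=k_in, replace=False)] = True

print(bernoulli.sum(axis=1).std())  # nonzero: in-degrees vary
print(fixed.sum(axis=1).std())      # zero: in-degrees identical
```

A description that states only "connected with probability 0.1" leaves the reader unable to decide between these two (and further) routines, which is precisely the ambiguity a standardized notation removes.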


Subject(s)
Models, Neurological; Neurosciences; Computer Simulation; Neurons/physiology; Software
4.
Proc Natl Acad Sci U S A ; 116(26): 13051-13060, 2019 06 25.
Article in English | MEDLINE | ID: mdl-31189590

ABSTRACT

Cortical networks that have been found to operate close to a critical point exhibit joint activations of large numbers of neurons. However, in motor cortex of the awake macaque monkey, we observe very different dynamics: massively parallel recordings of 155 single-neuron spiking activities show weak fluctuations on the population level. This a priori suggests that motor cortex operates in a noncritical regime, which in models, has been found to be suboptimal for computational performance. However, here, we show the opposite: The large dispersion of correlations across neurons is the signature of a second critical regime. This regime exhibits a rich dynamical repertoire hidden from macroscopic brain signals but essential for high performance in such concepts as reservoir computing. An analytical link between the eigenvalue spectrum of the dynamics, the heterogeneity of connectivity, and the dispersion of correlations allows us to assess the closeness to the critical point.


Subject(s)
Models, Neurological; Motor Cortex/physiology; Nerve Net/physiology; Neurons/physiology; Action Potentials/physiology; Analysis of Variance; Animals; Computer Simulation; Feedback, Sensory/physiology; Macaca; Models, Animal; Software; Uncertainty; Wakefulness/physiology
5.
PLoS Comput Biol ; 14(10): e1006359, 2018 10.
Article in English | MEDLINE | ID: mdl-30335761

ABSTRACT

Cortical activity has distinct features across scales, from the spiking statistics of individual cells to global resting-state networks. We here describe the first full-density multi-area spiking network model of cortex, using macaque visual cortex as a test system. The model represents each area by a microcircuit with area-specific architecture and features layer- and population-resolved connectivity between areas. Simulations reveal a structured asynchronous irregular ground state. In a metastable regime, the network reproduces spiking statistics from electrophysiological recordings and cortico-cortical interaction patterns in fMRI functional connectivity under resting-state conditions. Stable inter-area propagation is supported by cortico-cortical synapses that are moderately strong onto excitatory neurons and stronger onto inhibitory neurons. Causal interactions depend on both cortical structure and the dynamical state of populations. Activity propagates mainly in the feedback direction, similar to experimental results associated with visual imagery and sleep. The model unifies local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. Based on our simulations, we hypothesize that in the spontaneous condition the brain operates in a metastable regime where cortico-cortical projections target excitatory and inhibitory populations in a balanced manner that produces substantial inter-area interactions while maintaining global stability.


Subject(s)
Action Potentials/physiology; Models, Neurological; Neurons/physiology; Visual Cortex/physiology; Algorithms; Animals; Computational Biology; Electroencephalography; Implantable Neurostimulators; Macaca; Male; Photic Stimulation; Sleep
6.
PLoS Comput Biol ; 13(2): e1005179, 2017 02.
Article in English | MEDLINE | ID: mdl-28146554

ABSTRACT

The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.


Subject(s)
Connectome; Evoked Potentials, Visual/physiology; Models, Neurological; Visual Cortex/anatomy & histology; Visual Cortex/physiology; Visual Perception/physiology; Animals; Computer Simulation; Humans; Macaca; Models, Anatomic; Models, Statistical; Nerve Net/physiology; Synaptic Transmission/physiology
7.
PLoS Comput Biol ; 12(10): e1005132, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27736873

ABSTRACT

Oscillations are omnipresent in neural population signals, like multi-unit recordings, EEG/MEG, and the local field potential. They have been linked to the population firing rate of neurons, with individual neurons firing in a close-to-irregular fashion at low rates. Using a combination of mean-field and linear response theory, we predict the spectra generated in a layered microcircuit model of V1, composed of leaky integrate-and-fire neurons and based on connectivity compiled from anatomical and electrophysiological studies. The model exhibits low- and high-γ oscillations visible in all populations. Since locally generated frequencies are imposed onto other populations, the origin of the oscillations cannot be deduced from the spectra. We develop a universally applicable systematic approach that identifies the anatomical circuits underlying the generation of oscillations in a given network. Based on a theoretical reduction of the dynamics, we derive a sensitivity measure resulting in a frequency-dependent connectivity map that reveals connections crucial for the peak amplitude and frequency of the observed oscillations and identifies the minimal circuit generating a given frequency. The low-γ peak turns out to be generated in a sub-circuit located in layers 2/3 and 4, while the high-γ peak emerges from the interneurons in layer 4. Connections within and onto layer 5 are found to regulate slow rate fluctuations. We further demonstrate how small perturbations of the crucial connections have significant impact on the population spectra, while the impairment of other connections leaves the dynamics on the population level unaltered. The study uncovers connections where mechanisms controlling the spectra of the cortical microcircuit are most effective.


Subject(s)
Biological Clocks/physiology; Connectome/methods; Models, Neurological; Synaptic Transmission/physiology; Visual Cortex/anatomy & histology; Visual Cortex/physiology; Animals; Computer Simulation; Feedback, Physiological/physiology; Humans; Models, Anatomic; Models, Statistical; Nerve Net/anatomy & histology; Nerve Net/physiology
8.
Cereb Cortex ; 26(12): 4461-4496, 2016 12.
Article in English | MEDLINE | ID: mdl-27797828

ABSTRACT

With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.


Subject(s)
Cerebral Cortex/physiology; Models, Neurological; Neurons/physiology; Animals; Computer Simulation; Humans; Membrane Potentials; Neural Inhibition/physiology; Thalamus/physiology
9.
J Comput Neurosci ; 40(1): 1-26, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26560334

ABSTRACT

As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.


Subject(s)
Action Potentials/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Nonlinear Dynamics; Synapses/physiology; Computer Simulation; Humans; Probability
10.
PLoS Comput Biol ; 11(9): e1004490, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26325661

ABSTRACT

Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
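A common form of such a scaling keeps the input variance constant by scaling weights with one over the square root of the in-degree and compensating the resulting change in mean input with a DC drive. The sketch below uses made-up numbers and covers only this first-order bookkeeping, not the paper's full treatment of correlations:

```python
import numpy as np

def downscale(j, k, k_scaled, rate):
    """Weight and DC-drive adjustment when reducing the in-degree from k to k_scaled.

    j    -- synaptic weight (e.g. in mV)
    rate -- presynaptic firing rate (spikes/s)
    Returns the rescaled weight and the compensating DC input.
    """
    j_scaled = j * np.sqrt(k / k_scaled)       # preserves variance k * j**2 * rate
    dc = (k * j - k_scaled * j_scaled) * rate  # preserves mean input k * j * rate
    return j_scaled, dc

j, k, rate = 0.1, 10_000, 8.0  # toy values
j_scaled, dc = downscale(j, k, k // 10, rate)

# The variance of the summed synaptic input is unchanged by construction.
print(np.isclose(k * j**2 * rate, (k // 10) * j_scaled**2 * rate))
```

The abstract's point is that this bookkeeping cannot be pushed arbitrarily far: the growing DC term must substitute for input variance the network no longer receives, which bounds the admissible reduction in synapse numbers.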


Subject(s)
Models, Neurological; Nerve Net/physiology; Neurons/physiology; Synapses/physiology; Brain/physiology; Computational Biology; Humans
11.
PLoS Comput Biol ; 10(1): e1003428, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24453955

ABSTRACT

Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, they affect the power and spatial reach of extracellular signals like the local field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increases intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.
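The population-level suppression can be caricatured by a one-dimensional linear rate model: negative effective feedback damps the slow population-rate fluctuations that positive feedback amplifies. This is a toy AR(1) sketch with arbitrary parameters, not the binary-network theory itself:

```python
import numpy as np

rng = np.random.default_rng(7)
steps, window = 10_000, 100
noise = rng.normal(0.0, 1.0, size=steps)  # identical drive in both conditions

def slow_variance(gain):
    """Variance of the window-averaged activity of x[t+1] = gain * x[t] + noise[t]."""
    x = np.zeros(steps)
    for t in range(steps - 1):
        x[t + 1] = gain * x[t] + noise[t]
    return x.reshape(-1, window).mean(axis=1).var()

# Negative effective feedback (inhibition-dominated) suppresses slow
# population fluctuations; positive feedback amplifies them, even though
# the single-step noise is the same in both cases.
var_inhibitory = slow_variance(-0.8)
var_excitatory = slow_variance(+0.8)
print(var_inhibitory, var_excitatory)
```

Note that the per-step variance of the two processes is identical; only the slow (population-rate) component is suppressed, which mirrors the decorrelation mechanism discussed above.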


Subject(s)
Nerve Net; Neurons/physiology; Algorithms; Animals; Computer Simulation; Feedback, Physiological; Haplorhini; Models, Neurological; Neuronal Plasticity; Signal Transduction; Stochastic Processes; Synapses/physiology; Synaptic Transmission
12.
Cereb Cortex ; 24(3): 785-806, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23203991

ABSTRACT

In the past decade, the cell-type specific connectivity and activity of local cortical networks have been characterized experimentally in some detail. In parallel, modeling has been established as a tool to relate network structure to activity dynamics. While available comprehensive connectivity maps (Thomson, West, et al. 2002; Binzegger et al. 2004) have been used in various computational studies, prominent features of the simulated activity such as the spontaneous firing rates do not match the experimental findings. Here, we analyze the properties of these maps to compile an integrated connectivity map, which additionally incorporates insights on the specific selection of target types. Based on this integrated map, we build a full-scale spiking network model of the local cortical microcircuit. The simulated spontaneous activity is asynchronous irregular and cell-type specific firing rates are in agreement with in vivo recordings in awake animals, including the low rate of layer 2/3 excitatory cells. The interplay of excitation and inhibition captures the flow of activity through cortical layers after transient thalamic stimulation. In conclusion, the integration of a large body of the available connectivity data enables us to expose the dynamical consequences of the cortical microcircuitry.


Subject(s)
Action Potentials/physiology; Cerebral Cortex/cytology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Computer Simulation; Humans; Nerve Net/cytology; Neural Inhibition; Neural Networks, Computer; Neural Pathways; Neurons/classification
13.
PLoS Comput Biol ; 9(4): e1002904, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23592953

ABSTRACT

The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow spike synchrony to be modeled, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks.


Subject(s)
Action Potentials/physiology; Computational Biology/methods; Neurons/physiology; Animals; Computer Simulation; Diffusion; Humans; Models, Neurological; Models, Statistical; Neural Networks, Computer; Normal Distribution; Synapses/physiology; Synaptic Transmission
14.
Neurobiol Dis ; 59: 267-76, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23932917

ABSTRACT

Neuronal networks are reorganized following brain injury. At the structural level this is in part reflected by changes in the spine turnover of the denervated neurons. Using the entorhinal cortex lesion in vitro model, we recently showed that mouse dentate granule cells respond to entorhinal denervation with coordinated functional and structural changes: During the early phase after denervation spine density decreases, while excitatory synaptic strength increases in a homeostatic manner. At later stages spine density increases again, and synaptic strength decreases back to baseline. In the present study, we have addressed the question of whether the denervation-induced homeostatic strengthening of excitatory synapses could not only be a result of the deafferentation, but could, in turn, affect the dynamics of the spine reorganization process following entorhinal denervation in vitro. Using a computational approach, time-lapse imaging of neurons in organotypic slice cultures prepared from Thy1-GFP mice, and patch-clamp recordings we provide experimental evidence which suggests that the strengthening of surviving synapses can lead to the destabilization of spines formed after denervation. This activity-dependent pruning of newly formed spines requires the activation of N-methyl-d-aspartate receptors (NMDA-Rs), since pharmacological inhibition of NMDA-Rs resulted in a stabilization of spines and in an accelerated spine density recovery after denervation. Thus, NMDA-R inhibitors may restore the ability of neurons to form new stable synaptic contacts under conditions of denervation-induced homeostatic synaptic up-scaling, which may contribute to their beneficial effect seen in the context of some neurological diseases.


Subject(s)
Dendritic Spines/physiology; Dentate Gyrus/cytology; Entorhinal Cortex/pathology; Neurons/cytology; Receptors, N-Methyl-D-Aspartate/metabolism; 2-Amino-5-phosphonovalerate/pharmacology; 6-Cyano-7-nitroquinoxaline-2,3-dione/pharmacology; Action Potentials/drug effects; Animals; Animals, Newborn; Computer Simulation; Dendritic Spines/drug effects; Denervation; Entorhinal Cortex/injuries; Excitatory Amino Acid Antagonists/pharmacology; Female; GABA Antagonists/pharmacology; Green Fluorescent Proteins/genetics; Green Fluorescent Proteins/metabolism; Male; Mice; Mice, Transgenic; Models, Biological; Neurons/drug effects; Organ Culture Techniques; Pyridazines/pharmacology; Sodium Channel Blockers/pharmacology; Tetrodotoxin/pharmacology
15.
J Comput Neurosci ; 34(2): 185-209, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22878688

ABSTRACT

Synfire chains, sequences of pools linked by feedforward connections, support the propagation of precisely timed spike sequences, or synfire waves. An important open question is how synfire chains can be efficiently embedded in cortical architecture. We present a model of synfire chain embedding in a cortical-scale recurrent network using conductance-based synapses, balanced chains, and variable transmission delays. The network attains substantially higher embedding capacities than previous spiking neuron models and allows all its connections to be used for embedding. The number of waves in the model is regulated by recurrent background noise. We computationally explore the embedding capacity limit, and use a mean field analysis to describe the equilibrium state. Simulations confirm the mean field analysis over broad ranges of pool sizes and connectivity levels; the number of pools embedded in the system trades off against the firing rate and the number of waves. An optimal inhibition level balances the conflicting requirements of stable synfire propagation and limited response to background noise. A simplified analysis shows that the present conductance-based synapses achieve higher contrast between the responses to synfire input and background noise compared to current-based synapses, while regulation of wave numbers is traced to the use of variable transmission delays.
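The wave dynamics can be caricatured by a binary-pool model in which a neuron in the next pool fires once enough neurons in the previous pool are active. This sketch uses made-up parameters and simple spike counting, not the conductance-based synapses of the model:

```python
import numpy as np

rng = np.random.default_rng(3)

def propagate(n_pools, pool_size, p_connect, threshold, a0):
    """Number of active neurons per pool as a synfire wave propagates."""
    activity = [a0]
    for _ in range(n_pools - 1):
        # Each neuron in the next pool receives a binomial number of
        # spikes from the currently active neurons of this pool.
        inputs = rng.binomial(activity[-1], p_connect, size=pool_size)
        activity.append(int((inputs >= threshold).sum()))
    return activity

# Strong initial activation sustains a stable wave; weak activation dies out.
high = propagate(10, 100, 0.5, 35, a0=90)
low = propagate(10, 100, 0.5, 35, a0=20)
print(high)
print(low)
```

The bistability between a saturated wave and extinction is the feature that background noise and inhibition regulate in the full recurrent model.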


Subject(s)
Action Potentials/physiology; Cerebral Cortex/cytology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Synapses/physiology; Computer Simulation; Electric Capacitance; Humans; Stochastic Processes
16.
PLoS Comput Biol ; 8(8): e1002596, 2012 Aug.
Article in English | MEDLINE | ID: mdl-23133368

ABSTRACT

Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel are perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive.
Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
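The negative feedback loop in the one-dimensional compound dynamics can be illustrated with a minimal linear rate model (an illustrative sketch with assumed parameters, not the leaky integrate-and-fire simulations of the study): closing the inhibitory feedback loop with gain g > 0 suppresses the fluctuations of the population-averaged activity relative to the open-loop case g = 0.

```python
import numpy as np

def population_rate_variance(g, n_steps=20000, tau=10.0, dt=1.0, seed=42):
    """Variance of the compound activity r in a noise-driven linear model:
    dr/dt = -(1/tau + g) * r + noise, where g is the strength of the
    (effective) negative feedback loop. Parameters are assumptions."""
    rng = np.random.default_rng(seed)
    r = 0.0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        r += dt * (-(1.0 / tau + g) * r) + np.sqrt(dt) * rng.normal()
        samples[t] = r
    return samples.var()

var_open = population_rate_variance(g=0.0)      # feedforward (open-loop) system
var_feedback = population_rate_variance(g=0.5)  # intact feedback system
print(var_open, var_feedback)
```

With feedback, the effective decay rate of the compound activity increases, so the stationary variance of the population rate drops, mirroring the active-decorrelation effect described above.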


Subject(s)
Models, Neurological , Nerve Net/physiology , Action Potentials/physiology , Animals , Computer Simulation , Feedback, Physiological/physiology , Neurons/physiology , Synaptic Transmission
17.
PLoS Comput Biol ; 8(9): e1002689, 2012.
Article in English | MEDLINE | ID: mdl-23028287

ABSTRACT

Structural plasticity governs the long-term development of synaptic connections in the neocortex. While the underlying processes at the synapses are not fully understood, there is strong evidence that a process of random, independent formation and pruning of excitatory synapses can be ruled out. Instead, there must be some cooperation between the synaptic contacts connecting a single pre- and postsynaptic neuron pair. So far, the mechanism of cooperation is not known. Here we demonstrate that local correlation detection at the postsynaptic dendritic spine suffices to explain the synaptic cooperation effect, without assuming any hypothetical direct interaction pathway between the synaptic contacts. Candidate biomolecular mechanisms for dendritic correlation detection have been identified previously, as well as for structural plasticity based thereon. By analyzing and fitting a simple model, we show that spike-timing-correlation-dependent structural plasticity, without additional mechanisms of cross-synapse interaction, can reproduce the experimentally observed distributions of numbers of synaptic contacts between pairs of neurons in the neocortex. Furthermore, the model yields a first explanation for the existence of both transient and persistent dendritic spines and allows predictions to be made for future experiments.
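The cooperation effect can be caricatured by a birth-death process in which the pruning probability of each contact decreases with the number of contacts already connecting the pair (standing in for stronger shared drive and hence higher spike-timing correlation). This is a toy sketch with assumed rates, not the fitted model of the study; setting the cooperation strength to zero recovers independent formation and pruning.

```python
import numpy as np

def contact_counts(coop, n_pairs=2000, n_steps=400, form_p=0.05,
                   prune_p=0.2, seed=1):
    """Monte-Carlo birth-death model of synaptic contacts per neuron pair.

    Contacts form independently with probability form_p per step; each
    existing contact is pruned with probability prune_p / (1 + coop * n),
    i.e. existing contacts stabilize one another when coop > 0.
    All parameter values are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    n = np.zeros(n_pairs, dtype=int)
    for _ in range(n_steps):
        eff_prune = prune_p / (1.0 + coop * n)   # cooperative stabilization
        pruned = rng.binomial(n, eff_prune)      # independent pruning draws
        n = n - pruned + (rng.random(n_pairs) < form_p).astype(int)
    return n

indep = contact_counts(coop=0.0)
cooperative = contact_counts(coop=3.0)
fano = lambda x: x.var() / x.mean()
print(fano(indep), fano(cooperative))
```

The independent process yields a near-Poisson contact-number distribution (Fano factor near 1), whereas cooperation produces the overdispersed, heavy-tailed distributions reminiscent of the multi-contact connections observed experimentally.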


Subject(s)
Action Potentials/physiology , Connectome/methods , Neocortex/cytology , Neocortex/physiology , Neuronal Plasticity/physiology , Synapses/physiology , Synapses/ultrastructure , Animals , Computer Simulation , Humans , Models, Neurological , Nerve Net/cytology , Nerve Net/physiology
18.
PLoS Comput Biol ; 7(5): e1001133, 2011 May.
Article in English | MEDLINE | ID: mdl-21589888

ABSTRACT

An open problem in the field of computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence that supports the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of the midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavior adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal to modulate the plasticity of synapses as a third factor. The predictions of our proposed plasticity dynamics are in good agreement with experimental results on the dependence of plasticity on dopamine and on pre- and postsynaptic activity. An analytical mapping from the parameters of our proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network is able to learn a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, the performance of the neuronal network is impaired relative to the traditional algorithm on a task with both positive and negative rewards and breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards but not when driven by negative rewards.
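For reference, the classical discrete-time TD algorithm that the spiking model is mapped onto can be sketched in a few lines: tabular TD(0) on a deterministic chain with a single sparse positive reward at the end. This is the textbook baseline, not the spiking actor-critic network itself; the TD error delta plays the role attributed above to the phasic dopaminergic signal.

```python
import numpy as np

def td0_chain(n_states=5, sweeps=500, alpha=0.1, gamma=0.9):
    """Tabular TD(0) on a chain s0 -> s1 -> ... -> terminal, with
    reward +1 only on the final transition (sparse positive reward).
    Returns the learned state values; parameters are assumptions."""
    V = np.zeros(n_states)
    for _ in range(sweeps):
        for s in range(n_states):
            terminal = (s == n_states - 1)
            r = 1.0 if terminal else 0.0
            v_next = 0.0 if terminal else V[s + 1]
            delta = r + gamma * v_next - V[s]   # TD error ("dopamine" analog)
            V[s] += alpha * delta               # critic update
    return V

V = td0_chain()
print(V)  # values increase toward the rewarded end of the chain
```

The values converge to the discounted returns gamma**(distance to reward), e.g. V[0] approaches 0.9**4 for the five-state chain.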


Subject(s)
Dopamine/physiology , Learning/physiology , Models, Neurological , Neuronal Plasticity/physiology , Reward , Action Potentials/physiology , Algorithms , Animals , Humans , Nerve Net , Rats
19.
Cereb Cortex ; 21(12): 2681-95, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21508303

ABSTRACT

While oscillations of the local field potential (LFP) are commonly attributed to the synchronization of neuronal firing rates on the same time scale, their relationship to coincident spiking in the millisecond range is unknown. Here, we present experimental evidence to reconcile the notions of synchrony at the level of spiking and at the mesoscopic scale. We demonstrate that coincident spikes are better phase locked to the LFP than predicted by the locking of the individual spikes, but only in time intervals of excess spike synchrony that cannot be explained on the basis of firing rates. This effect is enhanced in periods of large LFP amplitudes. A quantitative model explains the LFP dynamics by the orchestrated spiking activity in neuronal groups that contribute the observed surplus synchrony. From the correlation analysis, we infer that neurons participate in different constellations but contribute only a fraction of their spikes to temporally precise spike configurations. This finding provides direct evidence for the hypothesized relation that precise spike synchrony constitutes a major temporally and spatially organized component of the LFP.
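The core comparison, phase locking of coincident versus individual spikes, can be illustrated on synthetic data (a toy sketch with assumed parameters, not the recorded motor-cortex data): two spike trains share injected spikes locked to a 20 Hz oscillation, and the vector strength of the coincident spikes exceeds that of all spikes.

```python
import numpy as np

rng = np.random.default_rng(7)
f, T = 20.0, 100.0  # assumed LFP frequency (Hz) and duration (s)

def train(n_bg, locked):
    """Background spikes at random times plus shared phase-locked spikes."""
    return np.sort(np.concatenate([rng.uniform(0, T, n_bg), locked]))

# synchronous spikes injected near the preferred LFP phase, shared by both neurons
cycles = rng.integers(0, int(T * f), 300)
shared = cycles / f + rng.vonmises(0.0, 8.0, 300) / (2 * np.pi * f)
s1, s2 = train(1000, shared), train(1000, shared)

phase = lambda t: (2 * np.pi * f * t) % (2 * np.pi)
vector_strength = lambda ph: np.abs(np.exp(1j * ph).mean())

# coincident spikes of neuron 1: within +/- 2 ms of any neuron 2 spike
d = np.min(np.abs(s1[:, None] - s2[None, :]), axis=1)
coincident = s1[d < 0.002]

vs_all = vector_strength(phase(s1))
vs_coinc = vector_strength(phase(coincident))
print(vs_all, vs_coinc)
```

Because the surplus-synchrony spikes dominate the coincident set, their phases cluster near the preferred LFP phase, reproducing the qualitative effect reported above.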


Subject(s)
Cortical Synchronization/physiology , Motor Cortex/physiology , Neurons/physiology , Action Potentials/physiology , Animals , Electrophysiology , Macaca mulatta , Signal Processing, Computer-Assisted
20.
Front Neuroinform ; 16: 837549, 2022.
Article in English | MEDLINE | ID: mdl-35645755

ABSTRACT

Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
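The benchmarking workflow described above, repeated timed runs across network sizes with results and metadata recorded together for reproducibility, can be sketched minimally as follows. This is a hypothetical harness for illustration only, not beNNch's actual API; `run_benchmark` and `toy_model` are invented names.

```python
import json
import platform
import time

def run_benchmark(model_fn, sizes, repeats=3):
    """Record time-to-solution per network size together with metadata
    (illustrative sketch; a real framework would also capture hardware,
    software revisions, and simulator configuration)."""
    results = []
    for n in sizes:
        runs = []
        for _ in range(repeats):
            start = time.perf_counter()
            model_fn(n)
            runs.append(time.perf_counter() - start)
        # best-of-repeats reduces timing noise on shared systems
        results.append({"size": n, "time_to_solution": min(runs)})
    return {
        "metadata": {"python": platform.python_version(),
                     "machine": platform.machine()},
        "results": results,
    }

# toy stand-in for a network simulation with O(n^2) pairwise interactions
toy_model = lambda n: sum(i * j for i in range(n) for j in range(n))
report = run_benchmark(toy_model, sizes=[50, 100, 200])
print(json.dumps(report, indent=2))
```

Storing the metadata alongside the timings is what makes benchmark results comparable across hardware and software revisions, the reproducibility concern that motivates the unified recording in beNNch.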
