Results 1 - 11 of 11
1.
Phys Rev Lett; 128(16): 168301, 2022 Apr 22.
Article in English | MEDLINE | ID: mdl-35522522

ABSTRACT

Criticality is deeply related to optimal computational capacity. The lack of a renormalized theory of critical brain dynamics, however, so far limits insights into this form of biological information processing to mean-field results. These methods neglect a key feature of critical systems: the interaction between degrees of freedom across all length scales, required for complex nonlinear computation. We present a renormalized theory of a prototypical neural field theory, the stochastic Wilson-Cowan equation. We compute the flow of couplings, which parametrize interactions on increasing length scales. Despite similarities with the Kardar-Parisi-Zhang model, the theory is of a Gell-Mann-Low type, the archetypal form of a renormalizable quantum field theory. Here, nonlinear couplings vanish, flowing towards the Gaussian fixed point, but logarithmically slowly, thus remaining effective on most scales. We show this critical structure of interactions to implement a desirable trade-off between linearity, optimal for information storage, and nonlinearity, required for computation.
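For orientation, a prototypical stochastic Wilson-Cowan field equation of the kind analyzed here can be written as (a generic sketch; the precise nonlinearity, normalization, and noise statistics are those chosen in the paper)

\[
\tau\,\partial_t u(x,t) = -u(x,t) + f\!\Big(\int d^d x'\, w(x-x')\, u(x',t) + I(x,t)\Big) + \xi(x,t),
\qquad
\langle \xi(x,t)\,\xi(x',t')\rangle = D\,\delta(x-x')\,\delta(t-t'),
\]

where u is the activity field, f a sigmoidal gain function, w the spatial coupling kernel, and ξ Gaussian noise; expanding f around the critical point produces the nonlinear couplings whose flow under coarse-graining is computed in the paper.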


Subject(s)
Brain; Neural Networks, Computer; Normal Distribution; Quantum Theory
2.
Proc Natl Acad Sci U S A; 116(26): 13051-13060, 2019 Jun 25.
Article in English | MEDLINE | ID: mdl-31189590

ABSTRACT

Cortical networks that have been found to operate close to a critical point exhibit joint activations of large numbers of neurons. However, in motor cortex of the awake macaque monkey, we observe very different dynamics: massively parallel recordings of 155 single-neuron spiking activities show weak fluctuations on the population level. This a priori suggests that motor cortex operates in a noncritical regime, which, in models, has been found to be suboptimal for computational performance. Here, however, we show the opposite: the large dispersion of correlations across neurons is the signature of a second critical regime. This regime exhibits a rich dynamical repertoire hidden from macroscopic brain signals but essential for high performance in concepts such as reservoir computing. An analytical link between the eigenvalue spectrum of the dynamics, the heterogeneity of connectivity, and the dispersion of correlations allows us to assess the closeness to the critical point.
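A minimal sketch of the linear-response reasoning behind this link, assuming a linear rate network with random Gaussian connectivity (variable names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, D = 1000, 0.8, 1.0     # network size, coupling strength, input noise variance

# heterogeneous random connectivity; eigenvalues fill a disk of radius ~ g
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
radius = np.abs(np.linalg.eigvals(W)).max()
print(f"spectral radius ~ {radius:.2f}, distance to instability ~ {1 - radius:.2f}")

# covariance of activity integrated over long time windows (zero-frequency limit)
# for the linearized dynamics dx/dt = -x + W x + noise:  C = (1 - W)^{-1} D (1 - W)^{-T}
A = np.linalg.inv(np.eye(N) - W)
C = D * A @ A.T

# near criticality the mean cross-covariance stays small while its dispersion grows
offdiag = C[~np.eye(N, dtype=bool)]
print(f"mean cross-covariance {offdiag.mean():.4f}, dispersion {offdiag.std():.4f}")
```

Increasing g toward 1 drives the spectral radius toward the instability while the population-averaged covariance remains weak, which is the dissociation between mean and dispersion of correlations described in the abstract.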


Subject(s)
Models, Neurological; Motor Cortex/physiology; Nerve Net/physiology; Neurons/physiology; Action Potentials/physiology; Analysis of Variance; Animals; Computer Simulation; Feedback, Sensory/physiology; Macaca; Models, Animal; Software; Uncertainty; Wakefulness/physiology
3.
PLoS Comput Biol; 16(10): e1008127, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33044953

ABSTRACT

Learning in neuronal networks has developed in many directions, in particular to reproduce cognitive tasks like image recognition and speech processing. Implementations have been inspired by stereotypical neuronal responses like tuning curves in the visual system, where, for example, ON/OFF cells fire or not depending on the contrast in their receptive fields. Classical models of neuronal networks therefore map a set of input signals to a set of activity levels in the output of the network. Each category of inputs is thereby predominantly characterized by its mean. In the case of time series, fluctuations around this mean constitute noise in this view. For this paradigm, the high variability exhibited by cortical activity may thus imply limitations or constraints, which have been discussed for many years; one example is the need to average neuronal activity over long periods or across large groups of cells to obtain a robust mean and to diminish the effect of noise correlations. To reconcile robust computations with variable neuronal activity, we here propose a conceptual change of perspective: the variability of activity serves as the basis for stimulus-related information to be learned by neurons, rather than merely being the noise that corrupts the mean signal. In this new paradigm, both afferent and recurrent weights in a network are tuned to shape the input-output mapping for covariances, the second-order statistics of the fluctuating activity. When time lags are included, covariance patterns define a natural metric for time series that captures their propagating nature. We develop the theory for the classification of time series based on their spatio-temporal covariances, which reflect dynamical properties. We demonstrate that recurrent connectivity is able to transform information contained in the temporal structure of the signal into spatial covariances. Finally, we use the MNIST database to show how the covariance perceptron can capture specific second-order statistical patterns generated by moving digits.
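The central object of this paradigm, the mapping of input covariances to output covariances through the network weights, can be illustrated with a toy feed-forward sketch (this is not the covariance-perceptron learning rule of the paper; the plain gradient step and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10, 2                                  # input / output dimensions

def output_cov(B, Q_in):
    """For the linear mapping y = B x, the output covariance is B Q_in B^T."""
    return B @ Q_in @ B.T

def random_cov(rng, m):
    A = rng.normal(size=(m, m))
    return A @ A.T / m                        # random symmetric positive-definite matrix

# two input classes defined by their covariance patterns, not by their means
Q_classes = [random_cov(rng, m) for _ in range(2)]
targets = [np.diag([1.0, 0.1]), np.diag([0.1, 1.0])]   # desired output covariances

B, eta = rng.normal(0.0, 0.1, size=(n, m)), 0.01
for _ in range(2000):
    k = rng.integers(2)
    Q, T = Q_classes[k], targets[k]
    E = output_cov(B, Q) - T                  # error measured on the covariance level
    B -= eta * (E + E.T) @ B @ Q              # gradient of ||E||^2 / 2 with respect to B

print([np.round(output_cov(B, Q), 2) for Q in Q_classes])
```

After training, the output covariance pattern, rather than the mean activity, indicates the class of the input; the paper extends this idea to time-lagged covariances and recurrent weights.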


Subject(s)
Models, Neurological; Neural Networks, Computer; Algorithms; Animals; Computational Biology; Computer Simulation; Databases, Factual; Humans; Image Processing, Computer-Assisted; Learning/physiology; Neurons/cytology
4.
Cereb Cortex; 26(12): 4461-4496, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27797828

ABSTRACT

With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity; the scheme can be used with an arbitrary number of point-neuron network populations and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
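The logic of the hybrid scheme can be caricatured in a few lines (a deliberately simplified sketch, not the hybridLFPy API: the per-population depth profiles and temporal kernels below stand in for the contributions that the real scheme obtains from multicompartment neuron models with layer-specific synapses):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, n_channels = 0.1, 1000.0, 16                  # ms, ms, electrode contacts
t = np.arange(0.0, T, dt)
populations = ["L23E", "L23I", "L4E", "L4I"]         # illustrative population labels

# step 1: population spike counts per time bin, taken from a point-neuron simulation
pop_spikes = {p: rng.poisson(0.5, size=t.size) for p in populations}

# step 2: per-population contribution = depth profile x (spike train * temporal kernel)
tau_k = np.arange(0.0, 20.0, dt)
lfp = np.zeros((n_channels, t.size))
for i, p in enumerate(populations):
    depth_profile = np.exp(-0.5 * ((np.arange(n_channels) - 4 * i) / 2.0) ** 2)
    kernel = np.exp(-tau_k / 5.0) * np.sin(2 * np.pi * tau_k / 10.0)
    drive = np.convolve(pop_spikes[p], kernel)[: t.size]
    lfp += np.outer(depth_profile, drive)

print(lfp.shape)   # (channels, time): laminar LFP driven by the network's spiking activity
```

The full separation between network simulation (step 1) and LFP generation (step 2) is what allows the scheme to be reused with arbitrary point-neuron network models.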


Subject(s)
Cerebral Cortex/physiology; Models, Neurological; Neurons/physiology; Animals; Computer Simulation; Humans; Membrane Potentials; Neural Inhibition/physiology; Thalamus/physiology
5.
Elife; 12, 2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36700545

ABSTRACT

Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system. We demonstrate that in biologically constrained networks, such a denoising behavior is contingent on recurrent inhibition. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes, and provide an in-depth theoretical analysis unraveling the dynamical principles underlying the mechanism.
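How a topographic-sharpness parameter can act as a bifurcation parameter is easy to see on a reduced, module-level description (a schematic sketch; parameter names, values, and the reduction itself are illustrative rather than the model of the paper):

```python
import numpy as np

def module_coupling(n_modules, g_exc, g_inh, sharpness):
    """Effective module-to-module weights: a fraction `sharpness` of the excitatory
    projection targets the matching module, the rest is spread uniformly; inhibition
    is unspecific."""
    W = np.full((n_modules, n_modules), g_exc * (1.0 - sharpness) / n_modules)
    W += np.eye(n_modules) * g_exc * sharpness
    W -= g_inh / n_modules
    return W

for sharpness in [0.0, 0.3, 0.6, 0.9]:
    W = module_coupling(n_modules=10, g_exc=1.5, g_inh=1.0, sharpness=sharpness)
    lam = np.linalg.eigvals(W).real.max()
    regime = "module-selective amplification" if lam > 1.0 else "stable, noise-suppressing"
    print(f"sharpness={sharpness:.1f}  leading eigenvalue={lam:.2f}  ->  {regime}")
```

For the linearized dynamics dx/dt = -x + W x, the leading eigenvalue crossing 1 marks the bifurcation: sharpening the topographic projections moves the network from a regime where module identity is washed out to one where the stimulated module is selectively amplified, i.e., denoised.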


Subject(s)
Neocortex; Neocortex/physiology; Signal-To-Noise Ratio; Neural Networks, Computer
6.
Phys Rev E; 105(5-2): 059901, 2022 May.
Article in English | MEDLINE | ID: mdl-35706324

ABSTRACT

This corrects the article DOI: 10.1103/PhysRevE.101.042124.

7.
Elife; 11, 2022 Jan 20.
Article in English | MEDLINE | ID: mdl-35049496

ABSTRACT

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet this parallel activity is often confined to relatively low-dimensional manifolds, implying strong coordination even among neurons that are most likely not directly connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: we find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting-state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
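The mechanism can be made explicit in the standard linear-response description of fluctuations around a stationary state (generic notation, not copied from the paper): for linearized dynamics \( \mathrm{d}\delta x/\mathrm{d}t = -\delta x + W\,\delta x + \text{noise} \) with white-noise input of covariance matrix \(D\), the covariance of activity integrated over long time windows (the zero-frequency covariance) is

\[
C = (\mathbb{1} - W)^{-1} D \,(\mathbb{1} - W)^{-\mathsf{T}}
  = \sum_{k,\,l \ge 0} W^{k} D \,\big(W^{\mathsf{T}}\big)^{l},
\]

so pairwise covariances collect contributions from all multi-synaptic paths \(W^k\); with heterogeneous local connectivity these path contributions do not average out, which is how short-range connections generate the long-range, low-dimensional coordination described above.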


Subject(s)
Action Potentials/physiology; Models, Neurological; Motor Cortex; Nerve Net; Neurons; Animals; Electrophysiology; Female; Macaca mulatta; Male; Motor Cortex/cytology; Motor Cortex/physiology; Nerve Net/cytology; Nerve Net/physiology; Neurons/cytology; Neurons/physiology
8.
Front Neuroinform; 15: 609147, 2021.
Article in English | MEDLINE | ID: mdl-34177505

ABSTRACT

Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of the membrane potential to update their strength, which a priori suggests a continuous, time-driven update. Such a scheme, however, hinders scaling of simulations to realistic cortical network sizes and to time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs strongly between the rules, a large performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving this information provide guidelines for the design of learning rules that remain practically usable in large-scale networks.
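The archiving problem can be sketched conceptually as follows (a toy sketch, not the NEST implementation; plain subsampling stands in for the rule-dependent compression and sampling strategies discussed in the paper):

```python
from collections import deque

class VoltageArchive:
    """Postsynaptic neuron stores a (sub)sampled membrane-potential history so that
    synapses, updated only at spike events, can retrieve the trace since their last update."""

    def __init__(self, stride=5, max_len=10_000):
        self.stride = stride
        self.buffer = deque(maxlen=max_len)   # entries: (time, V_m)
        self._step = 0

    def record(self, t, v_m):
        # time-driven part on the neuron side: store only every `stride`-th sample
        if self._step % self.stride == 0:
            self.buffer.append((t, v_m))
        self._step += 1

    def get_history(self, t_last, t_now):
        # event-driven part on the synapse side: fetch the compressed trace
        return [(t, v) for (t, v) in self.buffer if t_last < t <= t_now]


# usage: the synapse consumes the archived trace only when a presynaptic spike arrives
archive = VoltageArchive(stride=10)
for step in range(1000):
    t = step * 0.1                            # ms
    archive.record(t, -65.0 + 0.01 * step)    # toy membrane-potential trajectory
trace = archive.get_history(t_last=20.0, t_now=80.0)
print(len(trace), trace[0], trace[-1])
```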

9.
Cereb Cortex Commun; 2(3): tgab033, 2021.
Article in English | MEDLINE | ID: mdl-34296183

ABSTRACT

Resting state has been established as a classical paradigm of brain activity studies, mostly based on large-scale measurements such as functional magnetic resonance imaging or magneto- and electroencephalography. The term typically refers to a behavioral state characterized by the absence of any task or stimuli, and the corresponding neuronal activity is often called idle or ongoing. Numerous modeling studies on spiking neural networks claim to mimic such idle states, but compare their results with task- or stimulus-driven experiments, or with results from experiments on anesthetized subjects. Both comparisons can lead to misleading conclusions. To provide a proper basis for comparing physiological and simulated network dynamics, we characterize the spiking activity of simultaneously recorded single neurons in monkey motor cortex at rest and show how it differs from spontaneous and task- or stimulus-induced movement conditions. We also distinguish between rest with open eyes and sleepy rest with eyes closed. The resting state with open eyes shows a significantly higher dimensionality, lower firing rates, and a weaker balance between population-level excitation and inhibition than the behavior-related states.
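One common way to quantify the dimensionality of such population activity is the participation ratio of the covariance spectrum (a sketch; the paper's exact estimator and preprocessing may differ, and the two toy conditions below are illustrative only):

```python
import numpy as np

def participation_ratio(binned_counts):
    """binned_counts: array of shape (neurons, time_bins) of spike counts.
    Returns (sum of eigenvalues)^2 / (sum of squared eigenvalues) of the covariance."""
    C = np.cov(binned_counts)
    ev = np.linalg.eigvalsh(C)
    return ev.sum() ** 2 / (ev ** 2).sum()

rng = np.random.default_rng(3)
# toy comparison: shared-fluctuation ("movement-like") vs. weakly correlated ("rest-like") counts
shared = rng.poisson(5, size=(1, 2000)).repeat(100, axis=0) + rng.poisson(1, size=(100, 2000))
independent = rng.poisson(5, size=(100, 2000))
print(participation_ratio(shared), participation_ratio(independent))
```

Strong shared fluctuations collapse the activity onto few dimensions, whereas weakly correlated activity, as observed in the open-eyes resting state, yields a much higher participation ratio.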

10.
Phys Rev E; 101(4-1): 042124, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32422832

ABSTRACT

Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular, classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose-De Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better-known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new, efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
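For the simplest case referred to at the end of the abstract, a one-dimensional stochastic differential equation \( \dot{x} = f(x) + \xi(t) \) with Gaussian white noise \( \langle \xi(t)\xi(t')\rangle = D\,\delta(t-t') \), the two actions take their textbook forms (a sketch in the convention where path weights go as \( e^{-S} \); the paper works with the corresponding effective actions):

\[
S_{\mathrm{MSRDJ}}[x,\tilde{x}] = \int \mathrm{d}t \,\Big[ \tilde{x}\big(\dot{x} - f(x)\big) - \tfrac{D}{2}\,\tilde{x}^{2} \Big],
\qquad
S_{\mathrm{OM}}[x] = \int \mathrm{d}t \;\frac{\big(\dot{x} - f(x)\big)^{2}}{2D}.
\]

The Onsager-Machlup form follows from the MSRDJ form by integrating out the Gaussian response field \( \tilde{x} \); for non-Gaussian noise this step no longer reduces to a simple Gaussian integral, which is where the link between the two effective actions derived in the paper becomes useful.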

11.
Front Neuroinform; 11: 34, 2017.
Article in English | MEDLINE | ID: mdl-28596730

ABSTRACT

Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and increases reliability through the use of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation, we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally, we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
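The waveform-relaxation idea used for instantaneous rate interactions can be sketched in a few lines (illustrative only; the actual implementation operates within the simulator's communication infrastructure rather than on dense NumPy trajectories):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, dt, tau = 50, 200, 0.1, 10.0                    # units, time steps, ms, ms
W = rng.normal(0.0, 0.3 / np.sqrt(N), size=(N, N))    # instantaneous rate-to-rate coupling
drive = rng.normal(0.0, 1.0, size=(N, T))             # frozen external input

def integrate(coupling_traj):
    """One Euler sweep of tau * dr/dt = -r + W r_coupled(t) + drive(t), where the
    coupling uses the trajectory from the previous waveform-relaxation iterate."""
    r = np.zeros((N, T))
    for step in range(1, T):
        inp = W @ coupling_traj[:, step - 1] + drive[:, step - 1]
        r[:, step] = r[:, step - 1] + dt / tau * (-r[:, step - 1] + inp)
    return r

# iterate over whole trajectories ("waveforms") instead of communicating every time step
r = np.zeros((N, T))
for sweep in range(8):
    r_new = integrate(r)
    print(f"sweep {sweep + 1}: max change {np.max(np.abs(r_new - r)):.2e}")
    r = r_new
```

Each sweep re-solves all trajectories over the whole interval using the coupling from the previous sweep, so data exchange happens once per sweep rather than once per time step; the printed residuals shrink geometrically toward the self-consistent solution with instantaneous interactions.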
