Results 1 - 6 of 6
1.
PLoS Comput Biol; 18(12): e1010590, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36469504

ABSTRACT

Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
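The driven-chaos setup above can be probed numerically. The following is a minimal sketch, not the paper's dynamic mean-field theory: a random tanh rate network in the chaotic regime is driven by a sinusoid that is either common to all neurons or given an independent random phase per neuron, and the largest Lyapunov exponent is estimated from the divergence of two nearby trajectories. All parameters (N, g, amplitude, frequency) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 200, 2.0                        # network size and coupling strength (illustrative)
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # random recurrent weights
dt, T = 0.05, 4000                     # Euler step and number of steps
phases = rng.uniform(0, 2 * np.pi, N)  # per-neuron phases for the "independent" drive

def step(x, I):
    # rate dynamics: dx/dt = -x + J @ tanh(x) + I (unit time constant)
    return x + dt * (-x + J @ np.tanh(x) + I)

def largest_le(mode, amp, freq=0.1):
    """Crude estimate of the largest Lyapunov exponent of the driven network."""
    x = rng.normal(0.0, 1.0, N)
    v = rng.normal(0.0, 1.0, N)
    v *= 1e-8 / np.linalg.norm(v)      # tiny initial perturbation
    xp = x + v
    le_sum = 0.0
    for t in range(T):
        ph = 0.0 if mode == "common" else phases
        I = amp * np.sin(2 * np.pi * freq * t * dt + ph)
        x, xp = step(x, I), step(xp, I)
        d = np.linalg.norm(xp - x)
        le_sum += np.log(d / 1e-8)
        xp = x + (xp - x) * (1e-8 / d)  # renormalize the perturbation
    return le_sum / (T * dt)

for mode in ("common", "independent"):
    print(mode, largest_le(mode, amp=2.0))
```

Comparing the two printed exponents for increasing amplitude shows how strongly chaos suppression depends on input correlations in this toy setting.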


Subjects
Models, Neurological; Nerve Net; Action Potentials/physiology; Nerve Net/physiology; Neurons/physiology; Learning
2.
Proc Natl Acad Sci U S A; 118(46), 2021 Nov 16.
Article in English | MEDLINE | ID: mdl-34772802

ABSTRACT

Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness, as individual parts may constantly change.
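As a toy illustration of the drift idea (my sketch, not the paper's plasticity model): an assembly of fixed size exchanges one member at a time with a reservoir of unused neurons. Its overlap with the initial membership decays toward chance level, yet the assembly itself, as a structure of fixed size, persists throughout.

```python
import random

random.seed(1)
n_neurons, k = 1000, 100            # pool size and assembly size (illustrative)
assembly0 = set(range(k))           # initial assembly membership
assembly = set(assembly0)

overlaps = []
for _ in range(2000):               # each step: one neuron leaves, one joins
    leaving = random.choice(sorted(assembly))
    joining = random.choice(sorted(set(range(n_neurons)) - assembly))
    assembly.discard(leaving)
    assembly.add(joining)
    overlaps.append(len(assembly & assembly0) / k)

# the assembly keeps its size while its membership turns over almost completely
print(len(assembly), round(overlaps[0], 2), round(overlaps[-1], 2))
```

In the paper the exchange is driven by noisy activity and synaptic turnover rather than by explicit random swaps; the sketch only shows that full remodeling is compatible with a persistent structure.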


Subjects
Memory/physiology; Neurons/physiology; Animals; Homeostasis/physiology; Models, Neurological; Neural Networks, Computer; Neuronal Plasticity/physiology; Synapses/physiology
3.
Phys Rev Lett; 125(8): 088103, 2020 Aug 21.
Article in English | MEDLINE | ID: mdl-32909804

ABSTRACT

The ability of humans and animals to quickly adapt to novel tasks is difficult to reconcile with the standard paradigm of learning by slow synaptic weight modification. Here, we show that fixed-weight neural networks can learn to generate required dynamics by imitation. After appropriate weight pretraining, the networks quickly and dynamically adapt to learn new tasks and thereafter continue to achieve them without further teacher feedback. We explain this ability and illustrate it with a variety of target dynamics, ranging from oscillatory trajectories to driven and chaotic dynamical systems.


Subjects
Learning/physiology; Models, Neurological; Neurons/physiology; Animals; Cell Communication/physiology; Humans; Nerve Net/cytology; Nerve Net/physiology; Neurons/cytology
4.
Elife; 8, 2019 Dec 23.
Article in English | MEDLINE | ID: mdl-31868586

ABSTRACT

Jellyfish nerve nets provide insight into the origins of nervous systems, as both their taxonomic position and their evolutionary age imply that jellyfish resemble some of the earliest neuron-bearing, actively swimming animals. Here, we develop the first neuronal network model for the nerve nets of jellyfish. Specifically, we focus on the moon jelly Aurelia aurita and the control of its energy-efficient swimming motion. The proposed single neuron model disentangles the contributions of different currents to a spike. The network model identifies factors ensuring non-pathological activity and suggests an optimization for the transmission of signals. After modeling the jellyfish's muscle system and its bell in a hydrodynamic environment, we explore the swimming elicited by neural activity. We find that different delays between nerve net activations lead to well-controlled, differently directed movements. Our model bridges the scales from single neurons to behavior, allowing for a comprehensive understanding of jellyfish neural control of locomotion.


Subjects
Locomotion/physiology; Neurons/physiology; Scyphozoa/physiology; Animals; Hydrodynamics; Models, Theoretical; Nerve Net; Neurons/cytology; Scyphozoa/anatomy & histology; Swimming/physiology; Synapses
5.
Phys Rev E; 100(4-1): 042404, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31770941

ABSTRACT

Networks in the brain consist of different types of neurons. Here we investigate the influence of neuron diversity on the dynamics, phase space structure, and computational capabilities of spiking neural networks. We find that even a single neuron of a different type can qualitatively change the network dynamics and that mixed networks may combine the computational capabilities of networks with a single neuron type. We study inhibitory networks of concave leaky (LIF) and convex "antileaky" (XIF) integrate-and-fire neurons that generalize irregularly spiking nonchaotic LIF neuron networks. Endowed with simple conductance-based synapses for XIF neurons, our networks can generate a balanced state of irregular asynchronous spiking as well. We determine the voltage probability distributions and self-consistent firing rates assuming Poisson input with finite-size spike impacts. Further, we compute the full spectrum of Lyapunov exponents (LEs) and the covariant Lyapunov vectors (CLVs) specifying the corresponding perturbation directions. We find that there is approximately one positive LE for each XIF neuron. This indicates in particular that a single XIF neuron renders the network dynamics chaotic. A simple mean-field approach, which can be justified by properties of the CLVs, explains the finding. As an application, we propose a spike-based computing scheme where our networks serve as computational reservoirs and their different stability properties yield different computational capabilities.
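To make the concave-versus-convex distinction concrete, here is a minimal single-neuron sketch using one simple convention for the two cell types: dV/dt = gamma*V + I with threshold-and-reset, where gamma < 0 gives the leaky (LIF, concave subthreshold trajectory) case and gamma > 0 the "antileaky" (XIF, convex trajectory) case. Parameters are illustrative and the model is current-based, simpler than the paper's conductance-based network.

```python
import numpy as np

dt, T = 1e-4, 1.0                 # time step and duration (arbitrary units)
v_th, v_reset = 1.0, 0.0          # threshold and reset voltage

def simulate(gamma, I):
    """Integrate dV/dt = gamma*V + I with threshold/reset.

    gamma < 0: leaky (LIF) neuron, concave subthreshold trajectory;
    gamma > 0: 'antileaky' (XIF) neuron, convex trajectory.
    """
    v, spikes = v_reset, []
    for t in np.arange(0.0, T, dt):
        v += dt * (gamma * v + I)
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

lif = simulate(gamma=-10.0, I=15.0)   # suprathreshold drive, regular firing
xif = simulate(gamma=+10.0, I=5.0)    # self-amplifying depolarization
print(len(lif), len(xif))
```

The LIF voltage decelerates as it approaches threshold, while the XIF voltage accelerates; it is this difference in local expansion of the flow that, in the paper, shows up as roughly one positive Lyapunov exponent per XIF neuron.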


Subjects
Models, Neurological; Nerve Net/cytology; Neurons/cytology; Kinetics
6.
Phys Rev Lett; 121(5): 058301, 2018 Aug 03.
Article in English | MEDLINE | ID: mdl-30118252

ABSTRACT

Experiments in various neural systems have found avalanches: bursts of activity with characteristics typical of critical dynamics. A possible explanation for their occurrence is an underlying network that self-organizes into a critical state. We propose a simple spiking model for developing neural networks, showing how they may "grow into" criticality. Avalanches generated by our model correspond to clusters of widely applied Hawkes processes. We analytically derive the cluster size and duration distributions and find that they agree with those of experimentally observed neuronal avalanches.
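The avalanche-as-cluster picture can be sketched with the Galton-Watson skeleton of a Hawkes process: each event triggers a Poisson number of offspring with mean m, and an avalanche is the total progeny of one spontaneous event. For subcritical m the mean cluster size is 1/(1-m). This toy (parameters mine) only illustrates the cluster construction, not the paper's derived size and duration distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(m, cap=100_000):
    """Total events in one cluster of a branching process with mean
    offspring m -- the Galton-Watson skeleton of a Hawkes-process cluster."""
    size = active = 1
    while active and size < cap:
        active = int(rng.poisson(m, active).sum())  # next generation of events
        size += active
    return size

sizes = [avalanche_size(m=0.95) for _ in range(5000)]  # slightly subcritical
print(np.mean(sizes))  # roughly 1/(1-m) = 20 for m = 0.95
```

As m approaches 1 the size distribution approaches the critical power law with exponent -3/2 that is typically reported for neuronal avalanches.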


Subjects
Models, Neurological; Nerve Net/physiology; Neurons/physiology; Action Potentials/physiology; Stochastic Processes