Results 1 - 13 of 13
1.
Elife; 10. 2021 Oct 18.
Article in English | MEDLINE | ID: mdl-34661525

ABSTRACT

When an action potential arrives at a synapse, there is a large probability that no neurotransmitter is released. Surprisingly, simple computational models suggest that these synaptic failures enable information processing at lower metabolic costs. However, these models only consider information transmission at single synapses, ignoring the remainder of the neural network as well as its overall computational goal. Here, we investigate how synaptic failures affect the energy efficiency of models of entire neural networks that solve a goal-driven task. We find that presynaptic stochasticity and plasticity improve energy efficiency and show that the network allocates most energy to a sparse subset of important synapses. We demonstrate that stabilising these synapses helps to alleviate the stability-plasticity dilemma, thus connecting a presynaptic notion of importance to a computational role in lifelong learning. Overall, our findings present a set of hypotheses for how presynaptic plasticity and stochasticity contribute to sparsity, energy efficiency and improved trade-offs in the stability-plasticity dilemma.
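
A minimal sketch of the core ingredient (our illustration, not the paper's task-driven networks): a feedforward layer with stochastic synaptic release, where metabolic cost is counted per successful release. All names and parameters are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def stochastic_layer(x, w, p_release):
    """Propagate x through w; each synapse independently fails to release."""
    release = rng.random(w.shape) < p_release   # Bernoulli release mask
    cost = release.sum()                        # energy ~ number of releases
    return (w * release) @ x, cost

x = rng.random(100)                             # presynaptic activity
w = rng.normal(0.0, 1.0, (10, 100))
y, cost = stochastic_layer(x, w, p_release=0.3)
print(y.shape, cost)                            # lower p_release, lower cost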


Subjects
Action Potentials/physiology; Neuronal Plasticity/physiology; Synapses/physiology; Models, Neurological; Neural Networks, Computer
2.
Front Comput Neurosci; 14: 12, 2020.
Article in English | MEDLINE | ID: mdl-32132915

ABSTRACT

Natural brains perform miraculously well in learning new tasks from a small number of samples, whereas sample-efficient learning is still a major open problem in the field of machine learning. Here, we raise the question of how the neural coding scheme affects sample efficiency, and make first progress on this question by proposing and analyzing a learning algorithm that uses a simple REINFORCE-type plasticity mechanism and does not require any gradients to learn low-dimensional mappings. It harnesses three biologically plausible mechanisms, namely population codes with bell-shaped tuning curves, continuous attractor mechanisms and probabilistic synapses, to achieve sample-efficient learning. We show both theoretically and by simulations that population codes with broadly tuned neurons lead to high sample efficiency, whereas codes with sharply tuned neurons account for high final precision. Moreover, a dynamic adaptation of the tuning width during learning gives rise to both high sample efficiency and high final precision. We prove a sample-efficiency guarantee for our algorithm that lies within a logarithmic factor of the information-theoretic optimum. Our simulations show that for low-dimensional mappings, our learning algorithm achieves sample efficiency comparable to that of multi-layer perceptrons trained by gradient descent, although it does not use any gradients. Furthermore, it achieves competitive sample efficiency in low-dimensional reinforcement learning tasks. From a machine learning perspective, these findings may inspire novel approaches to improving sample efficiency. From a neuroscience perspective, they suggest sample efficiency as a hitherto unstudied functional role of adaptive tuning-curve width.
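
As a hedged illustration of one of the three mechanisms (not the paper's full algorithm), the sketch below encodes a scalar stimulus with a population of bell-shaped tuning curves; the width sigma is the quantity whose adaptation the abstract discusses. Sizes and values are illustrative.

import numpy as np

def population_code(x, centers, sigma):
    """Firing rates of neurons with Gaussian tuning curves at `centers`."""
    return np.exp(-0.5 * ((x - centers) / sigma) ** 2)

centers = np.linspace(0.0, 1.0, 50)                 # preferred stimuli
broad = population_code(0.4, centers, sigma=0.2)    # many neurons respond
sharp = population_code(0.4, centers, sigma=0.02)   # few neurons respond
print(broad.sum(), sharp.sum())   # broad tuning spreads activity widely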

3.
Neural Comput; 31(11): 2252-2265, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31525311

ABSTRACT

In computational neural network models, neurons are usually allowed to excite some neurons and inhibit others, depending on the weight of their synaptic connections. The traditional way to transform such networks into networks that obey Dale's law (i.e., a neuron can either excite or inhibit) is to accompany each excitatory neuron with an inhibitory one through which inhibitory signals are mediated. However, this requires an equal number of excitatory and inhibitory neurons, whereas a realistic number of inhibitory neurons is much smaller. In this letter, we propose a model of nonlinear interaction of inhibitory synapses on dendritic compartments of excitatory neurons that allows the excitatory neurons to mediate inhibitory signals through a subset of the inhibitory population. With this construction, the number of required inhibitory neurons can be reduced tremendously.
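
For context, a minimal sketch of the traditional transform the abstract criticises (not the proposed dendritic mechanism): a mixed-sign weight matrix is split so that negative entries are routed through a mirrored inhibitory population, which is what doubles the neuron count.

import numpy as np

def dale_transform(W):
    """Split a mixed-sign weight matrix into excitatory and inhibitory parts."""
    W_exc = np.maximum(W, 0.0)    # direct excitatory connections
    W_inh = np.maximum(-W, 0.0)   # weights routed via paired inhibitory neurons
    return W_exc, W_inh

rng = np.random.default_rng(1)
W = rng.normal(0.0, 1.0, (4, 4))
W_exc, W_inh = dale_transform(W)
x = rng.random(4)
# The original mixed-sign drive W @ x is recovered as excitation minus inhibition:
assert np.allclose(W @ x, W_exc @ x - W_inh @ x)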


Subjects
Models, Neurological; Neural Networks, Computer; Neurons/physiology; Synaptic Transmission/physiology; Animals; Humans; Synapses/physiology
4.
Hippocampus; 28(11): 824-837, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30024075

ABSTRACT

The sharp wave ripple complex in rodent hippocampus is associated with a network burst in CA3 (NB) that triggers a synchronous event in the CA1 population (SE). The number of CA1 pyramidal cells participating in an SE has been observed to follow a lognormal distribution. However, the origin of this skewed and heavy-tailed distribution of population synchrony in CA1 remains unknown. Because the size of SEs is likely to originate from the size of the NBs and the underlying neural circuitry, we model the CA3-CA1 circuit to study the underlying mechanisms and their functional implications. We show analytically that if the size of an NB in CA3 is distributed according to a normal distribution, then the size of the resulting SE in CA1 follows a lognormal distribution. Our model predicts the distribution of the NB size in CA3, which remains to be tested experimentally. Moreover, we show that a putative lognormal NB size distribution leads to an extremely heavy-tailed SE size distribution in CA1, contradicting experimental evidence. In conclusion, our model provides general insight into the origin of lognormally distributed network synchrony as a consequence of synchronous synaptic transmission of normally distributed input events.
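
A toy numerical check of the central relationship (illustrative; the assumed exponential recruitment stands in for the paper's analytical derivation): if CA1 recruitment grows exponentially with CA3 burst size, normally distributed burst sizes yield lognormally distributed synchronous-event sizes.

import numpy as np

rng = np.random.default_rng(2)
nb_size = rng.normal(loc=100.0, scale=10.0, size=100_000)  # CA3 burst sizes (normal)
se_size = np.exp(0.05 * nb_size)       # assumed exponential CA1 recruitment

log_se = np.log(se_size)               # equals 0.05 * nb_size, hence normal
print(log_se.mean(), log_se.std())     # ~5.0 and ~0.5: SE sizes are lognormal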


Subjects
CA1 Region, Hippocampal/physiology; CA3 Region, Hippocampal/physiology; Models, Neurological; Animals; Computer Simulation; Membrane Potentials; Models, Statistical; Neurons/physiology; Rodents; Synapses/physiology
5.
Sci Rep; 8(1): 4609, 2018 Mar 15.
Article in English | MEDLINE | ID: mdl-29545553

ABSTRACT

In computational neuroscience, synaptic plasticity rules are often formulated in terms of firing rates. The predominant description of in vivo neuronal activity, however, is the instantaneous rate (or spiking probability). In this article we resolve this discrepancy by showing that fluctuations of the membrane potential carry enough information to permit a precise estimate of the instantaneous rate in balanced networks. As a consequence, we find that rate-based plasticity rules are not restricted to neuronal activity that is stable for hundreds of milliseconds to seconds, but can be carried over to situations in which it changes every few milliseconds. We illustrate this by showing that a voltage-dependent realization of the classical BCM rule achieves input selectivity even when each stimulus is presented for only a few milliseconds.
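
For reference, a minimal sketch of the textbook rate-based BCM rule with a sliding threshold (the paper's variant replaces the rate y by an estimate from membrane-potential fluctuations, which is not shown here); all parameters are illustrative.

import numpy as np

def bcm_step(w, x, theta, eta=1e-3, tau_theta=100.0):
    """One step of the rate-based BCM rule with sliding threshold theta."""
    y = w @ x                                    # postsynaptic rate
    w = np.maximum(w + eta * y * (y - theta) * x, 0.0)
    theta += (y ** 2 - theta) / tau_theta        # slow running average of y^2
    return w, theta

rng = np.random.default_rng(3)
p1 = np.r_[np.ones(5), np.zeros(5)]              # two competing stimuli
p2 = np.r_[np.zeros(5), np.ones(5)]
w, theta = rng.random(10) * 0.1, 1.0
for _ in range(20_000):
    w, theta = bcm_step(w, p1 if rng.random() < 0.5 else p2, theta)
print(w.round(2))   # the neuron typically becomes selective for one stimulus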


Subjects
Action Potentials; Learning/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Synaptic Transmission/physiology; Algorithms; Animals; Neural Networks, Computer
6.
Front Neurosci; 12: 961, 2018.
Article in English | MEDLINE | ID: mdl-30618583

ABSTRACT

The hippocampus is known to play a crucial role in the formation of long-term memory, and fast replays of previously experienced activity during sleep or after reward experiences are believed to be crucial for this. How such replays are generated, however, is still unclear. In this paper we propose a possible mechanism: we present a model that can store experienced trajectories on a behavioral timescale after a single run and can subsequently replay these trajectories bidirectionally, omitting specifics of the previous behavior, such as speed, while allowing repetitions of events, even with different subsequent events. Our solution builds on the well-known concepts of one-shot learning and synfire chains, enhancing them with additional mechanisms based on global inhibition and disinhibition. For replays, our approach relies on dendritic spikes and cholinergic modulation, as supported by experimental data. We also hypothesize a functional role of disinhibition as a pacemaker during behavioral time.
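
A minimal sketch of the synfire-chain backbone of such replays (our illustration, not the paper's circuit): once the first group is ignited, groups fire in sequence, and running the transposed connectivity gives a reverse replay.

import numpy as np

n_groups = 10
chain = np.eye(n_groups, k=1)     # chain[i, i+1] = 1: group i drives group i+1

def replay(start, connectivity):
    active = np.zeros(n_groups)
    active[start] = 1.0
    order = [start]
    for _ in range(n_groups - 1):
        active = connectivity.T @ active   # propagate activity one step
        if active.max() == 0:              # the chain has run out
            break
        order.append(int(active.argmax()))
    return order

print(replay(0, chain))      # forward replay: [0, 1, ..., 9]
print(replay(9, chain.T))    # reverse replay through the transposed weights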

7.
Int J Neural Syst; 27(8): 1750044, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28982282

ABSTRACT

Sequences of precisely timed neuronal activity are observed in many brain areas in various species. Synfire chains are a well-established model that can explain such sequences. However, it is unknown under which conditions synfire chains can develop in initially unstructured networks by self-organization. This work shows that with spike-timing-dependent plasticity (STDP), modulated by global population activity, long synfire chains emerge in sparse random networks. The learning rule encourages neurons to participate multiple times in the chain or in multiple chains. Such reuse of neurons has been experimentally observed and is necessary for high capacity. Sparse networks prevent the chains from being short and cyclic, showing that the formation of specific synapses is not essential for chain formation. Analysis of the learning rule in a simple network of binary threshold neurons reveals the asymptotically optimal length of the emerging chains. The theoretical results generalize to simulated networks of conductance-based leaky integrate-and-fire (LIF) neurons. As an application of the emergent chains, we propose a one-shot memory for sequences of precisely timed neuronal activity.
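
For reference, a sketch of the pairwise STDP window underlying such learning rules (the population-activity modulation studied in the paper is not shown); amplitudes and time constants are illustrative.

import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (in ms)."""
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),     # pre before post: potentiate
                    -a_minus * np.exp(dt / tau))    # post before pre: depress

print(stdp(np.array([-30.0, -5.0, 5.0, 30.0])))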


Subjects
Action Potentials; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Animals; Computer Simulation
8.
Front Comput Neurosci; 11: 33, 2017.
Article in English | MEDLINE | ID: mdl-28555102

ABSTRACT

Hebbian changes of excitatory synapses are driven by and enhance correlations between pre- and postsynaptic neuronal activations, forming a positive feedback loop that can lead to instability in simulated neural networks. Because Hebbian learning may occur on time scales of seconds to minutes, it is conjectured that some form of fast stabilization of neural firing is necessary to avoid runaway excitation, but both the theoretical underpinning and the biological implementation of such a homeostatic mechanism remain to be fully investigated. Supported by analytical and computational arguments, we show that a Hebbian spike-timing-dependent metaplasticity rule accounts for inherently stable, quick tuning of the total input weight of a single neuron in the general scenario of asynchronous neural firing characterized by UP and DOWN states of activity.
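
A hedged illustration of the general idea (not the paper's metaplasticity rule): an unstable Hebbian term combined with a fast multiplicative renormalisation that pins the total input weight of the neuron.

import numpy as np

rng = np.random.default_rng(4)
w, target = rng.random(100), 10.0
for _ in range(1_000):
    x = rng.random(100)          # presynaptic rates
    w += 0.01 * (w @ x) * x      # Hebbian term: positive feedback, unstable alone
    w *= target / w.sum()        # fast homeostatic scaling of the total input weight
print(w.sum())                   # pinned at the target despite Hebbian growth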

9.
Biol Cybern; 111(3-4): 229-235, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28432423

ABSTRACT

It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of the interspike intervals of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon. However, the underlying models were too complex to be examined analytically. In this paper, we offer a simple model that shows the same effect but is accessible to analytic treatment. The model is a random walk with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that the results resemble previous findings obtained by simulation.
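
A minimal simulation of the model as described (parameters are our assumptions): a biased random walk with a reflecting barrier at zero, a spike whenever the walk reaches threshold, and the CV computed from the resulting interspike intervals.

import numpy as np

rng = np.random.default_rng(5)
p_exc, threshold = 0.45, 20          # excess inhibition: p_exc < 0.5
v, t, isis = 0, 0, []
for _ in range(1_000_000):
    t += 1
    v += 1 if rng.random() < p_exc else -1
    v = max(v, 0)                    # reflecting barrier at zero
    if v >= threshold:               # threshold crossing = spike, then reset
        isis.append(t)
        v, t = 0, 0
isis = np.array(isis)
print(isis.std() / isis.mean())      # CV close to 1, as for a Poisson process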


Subjects
Action Potentials; Neurons/physiology; Models, Neurological; Synapses/metabolism
10.
Neural Comput; 29(5): 1375-1405, 2017 May.
Article in English | MEDLINE | ID: mdl-28333588

ABSTRACT

The connection density of nearby neurons in the cortex has been observed to be around 0.1, whereas longer-range connections are present at a much sparser density (Kalisman, Silberberg, & Markram, 2005). We propose a memory association model that qualitatively explains these empirical observations. The model we consider is a multiassociative, sparse, Willshaw-like model consisting of binary threshold neurons and binary synapses. It uses recurrent synapses for iterative retrieval of stored memories. We quantify the usefulness of recurrent synapses by simulating the model for small network sizes and by doing a precise mathematical analysis for large network sizes. Given the network parameters, we can determine the precise values of recurrent and afferent synapse densities that optimize the storage capacity of the network. If the network size is like that of a cortical column, then the predicted optimal recurrent density lies in a range that is compatible with biological measurements. Furthermore, we show that our model is able to surpass the standard Willshaw model in the multiassociative case if the information capacity is normalized per strong synapse or per bit required to store the model, as considered in Knoblauch, Palm, and Sommer (2010).
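
For context, a minimal sketch of the standard Willshaw baseline the model extends (sizes and threshold are illustrative, and iterative retrieval is not shown): binary clipped Hebbian storage and one-step thresholded retrieval.

import numpy as np

rng = np.random.default_rng(6)
n, k, n_pairs = 1000, 20, 200        # illustrative sizes, well below capacity

def sparse_pattern():
    p = np.zeros(n, dtype=bool)
    p[rng.choice(n, k, replace=False)] = True
    return p

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(n_pairs)]
W = np.zeros((n, n), dtype=bool)
for x, y in pairs:
    W |= np.outer(y, x)              # clipped (binary) Hebbian storage

x0, y0 = pairs[0]
y_hat = (W.astype(int) @ x0.astype(int)) >= k   # retrieval: threshold at k
print(bool((y_hat == y0).all()))     # True: perfect recall below capacity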

11.
F1000Res; 4: 1288, 2015.
Article in English | MEDLINE | ID: mdl-26949518

ABSTRACT

In recent decades, a profound conceptual transformation has occurred spanning different areas of biological research, leading to a novel understanding of life processes as much more dynamic and changeable. Discoveries in plants and animals, as well as novel experimental approaches, have prompted the research community to reconsider established concepts and paradigms. This development was taken as an incentive to organise a workshop in May 2014 at the Accademia Nazionale dei Lincei in Rome. There, experts on epigenetics, regeneration, neuroplasticity, and computational biology, using different animal and plant models, presented their insights on important aspects of a dynamic architecture of life, which comprises all organisational levels of the organism. Their work demonstrates that the dynamic nature of life persists during the entire existence of the organism and not only permits animals and plants to fine-tune their responses to particular environmental demands during development, but also underlies their continuous capacity to do so. Here, a synthesis of the different findings and their relevance for biological thinking is presented.

12.
Front Comput Neurosci; 8: 140, 2014.
Article in English | MEDLINE | ID: mdl-25426060

ABSTRACT

We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in the networks and that associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi-like network sparsely connected to a Willshaw-type network. The learning procedure is of palimpsest type and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs.

13.
PLoS One; 8(12): e80694, 2013.
Article in English | MEDLINE | ID: mdl-24324621

ABSTRACT

For every engineer it goes without saying: in order to build a reliable system we need components that consistently behave precisely as they should. It is also well known that neurons, the building blocks of brains, do not satisfy this constraint. Even neurons of the same type come with huge variances in their properties, and these properties also vary over time. Synapses, the connections between neurons, are highly unreliable in forwarding signals. In this paper we argue that both of these facts add variance to neuronal processes, and that this variance is not a handicap of neural systems but that, instead, predictable and reliable functional behavior of neural systems depends crucially on it. In particular, we show that higher variance allows a recurrently connected neural population to react more sensitively to incoming signals and to process them faster and more energy-efficiently. This, for example, challenges the general assumption that the intrinsic variability of neurons in the brain is a defect that has to be overcome by synaptic plasticity in the process of learning.
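
A toy illustration of the sensitivity argument (ours, not the paper's network model): a population with identical thresholds responds to a slowly varying input in an all-or-none fashion, while heterogeneous thresholds make the population rate track the input gradedly.

import numpy as np

rng = np.random.default_rng(7)
n = 1000
signal = np.linspace(0.0, 1.0, 11)                         # slowly increasing input
homog = (signal[:, None] > np.full(n, 0.5)).mean(axis=1)   # identical thresholds
heterog = (signal[:, None] > rng.random(n)).mean(axis=1)   # heterogeneous thresholds
print(np.round(homog, 2))     # all-or-none step at 0.5
print(np.round(heterog, 2))   # graded response tracking the signal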


Subjects
Genetic Heterogeneity; Models, Neurological; Nerve Net/physiology; Neuronal Plasticity/genetics; Neurons/metabolism; Animals; Brain/physiology; Computer Simulation; Excitatory Postsynaptic Potentials/physiology; Humans; Learning/physiology; Neurons/cytology; Poisson Distribution; Receptors, AMPA/genetics; Receptors, AMPA/metabolism; Receptors, N-Methyl-D-Aspartate/genetics; Receptors, N-Methyl-D-Aspartate/metabolism; Synapses/physiology; Synaptic Transmission/physiology; alpha-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic Acid/metabolism; gamma-Aminobutyric Acid/metabolism