1.
Elife ; 9. 2020 Feb 13.
Article in English | MEDLINE | ID: mdl-32053106

ABSTRACT

Many aspects of the brain's design can be understood as the result of evolutionary drive toward metabolic efficiency. In addition to the energetic costs of neural computation and transmission, experimental evidence indicates that synaptic plasticity is metabolically demanding as well. As synaptic plasticity is crucial for learning, we examine how these metabolic costs enter into learning. We find that when synaptic plasticity rules are naively implemented, training neural networks requires extremely large amounts of energy when storing many patterns. We propose that this is avoided by precisely balancing labile forms of synaptic plasticity with more stable forms. This algorithm, termed synaptic caching, boosts energy efficiency many-fold and can be used with any plasticity rule, including back-propagation. Our results yield a novel interpretation of the multiple forms of neural synaptic plasticity observed experimentally, including synaptic tagging and capture phenomena. Furthermore, our results are relevant for energy-efficient neuromorphic designs.


The brain expends a lot of energy. While the organ accounts for only about 2% of a person's body weight, it is responsible for about 20% of our energy use at rest. Neurons use some of this energy to communicate with each other and to process information, but much of the energy is likely used to support learning. A study in fruit flies showed that insects that had learned to associate two stimuli died 20% earlier than untrained flies once their food supply was cut off. This is thought to be because learning used up the insects' energy reserves. If learning a single association requires so much energy, how does the brain manage to store vast amounts of data?

Li and van Rossum offer an explanation based on a computer model of neural networks. The advantage of using such a model is that conditions can be controlled and measured more precisely than in the living brain. Analysing the model confirmed that learning many new associations requires large amounts of energy. This is particularly true if the memories must be stored with a high degree of accuracy, and if the neural network already contains many stored memories. The reason that learning consumes so much energy is that forming long-term memories requires neurons to produce new proteins. Using the computer model, Li and van Rossum show that neural networks can overcome this limitation by storing memories initially in a transient form that does not require protein synthesis. Doing so reduces energy requirements by as much as 10-fold. Studies in living brains have shown that transient memories of this type do in fact exist.

The current results hence offer a hypothesis for how the brain can learn in a more energy-efficient way. Energy consumption is thought to have placed constraints on brain evolution, and it is also often a bottleneck in computers. By revealing how the brain encodes memories energy-efficiently, the findings could thus also inspire new engineering solutions.
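To make the caching scheme concrete, here is a minimal sketch, assuming a simple cost model in which labile weight changes accumulate cheaply but decay over time, and only consolidation into stable weights is charged an energy cost proportional to the weight change. The perceptron task, threshold, decay rate, and learning rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random +/-1 patterns and labels for a perceptron-style learner (assumed task).
n_inputs, n_patterns = 100, 40
X = rng.choice([-1.0, 1.0], size=(n_patterns, n_inputs))
y = rng.choice([-1.0, 1.0], size=n_patterns)

w_stable = np.zeros(n_inputs)        # persistent weights: expensive to change
w_labile = np.zeros(n_inputs)        # transient cache: cheap, but decays
theta, decay, eta = 0.5, 0.9, 0.05   # threshold, labile decay, learning rate (assumed)
energy = 0.0                         # energy charged only at consolidation

for epoch in range(500):
    errors = 0
    for x, t in zip(X, y):
        if np.sign((w_stable + w_labile) @ x) != t:  # effective weight is the sum
            w_labile += eta * t * x                  # store the change transiently first
            errors += 1
    big = np.abs(w_labile) > theta                   # consolidate only large cached changes
    energy += np.abs(w_labile[big]).sum()            # pay only for consolidated change
    w_stable[big] += w_labile[big]
    w_labile[big] = 0.0
    w_labile *= decay                                # unconsolidated changes fade away
    if errors == 0:
        break

print(f"epochs: {epoch + 1}, consolidation energy: {energy:.1f}")
```

Because opposite-sign updates cancel inside the labile cache before consolidation, the total consolidated weight change, and hence the energy paid, is smaller than if every update were consolidated immediately.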


Subjects
Neuronal Plasticity, Algorithms, Humans, Machine Learning, Neural Networks, Computer
2.
Elife ; 8. 2019 May 10.
Article in English | MEDLINE | ID: mdl-31074745

ABSTRACT

Long-term memories are believed to be stored in the synapses of cortical neuronal networks. However, recent experiments report continuous creation and removal of cortical synapses, which raises the question of how memories can survive on such a variable substrate. Here, we study the formation and retention of associative memory in a computational model based on Hebbian cell assemblies in the presence of both synaptic and structural plasticity. During rest periods, such as may occur during sleep, the assemblies reactivate spontaneously, reinforcing memories against ongoing synapse removal and replacement. Brief daily reactivations during rest periods suffice not only to maintain the assemblies but even to strengthen them and improve pattern completion, consistent with offline memory gains observed experimentally. While the connectivity inside memory representations is strengthened during rest phases, connections in the rest of the network decay and vanish, thus reconciling apparently conflicting hypotheses about the influence of sleep on cortical connectivity.
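As a toy illustration of this maintenance mechanism (not the paper's model), the sketch below subjects a binary connectivity matrix to random synapse turnover while brief daily "reactivations" let co-active assembly neurons regrow synapses at a higher rate than background pairs. All sizes and rates are assumed; with these values, within-assembly density rises above its initial value while background connectivity decays.

```python
import numpy as np

rng = np.random.default_rng(1)
n, assembly = 200, np.arange(30)       # network size and assembly indices (assumed)
C = rng.random((n, n)) < 0.1           # sparse random initial connectivity
np.fill_diagonal(C, False)

def density(conn, idx):
    sub = conn[np.ix_(idx, idx)]
    return sub.sum() / (len(idx) * (len(idx) - 1))

for day in range(100):
    # Random removal of existing synapses (structural turnover).
    C &= rng.random((n, n)) > 0.05
    # Brief reactivation: co-active assembly pairs regrow synapses faster.
    co_active = np.zeros((n, n), bool)
    co_active[np.ix_(assembly, assembly)] = True
    C |= co_active & (rng.random((n, n)) < 0.05)
    # Background regrowth is much weaker, so those connections decay.
    C |= ~co_active & (rng.random((n, n)) < 0.002)
    np.fill_diagonal(C, False)

print(f"within-assembly density: {density(C, assembly):.2f}")
print(f"background density:      {density(C, np.arange(30, n)):.2f}")
```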


Subjects
Memory, Long-Term/physiology, Neuronal Plasticity/physiology, Sleep/physiology, Synapses/physiology, Computer Simulation, Humans, Models, Neurological, Neurons/physiology, Reinforcement, Psychology
3.
Elife ; 5. 2016 Dec 08.
Article in English | MEDLINE | ID: mdl-27929374

ABSTRACT

Encoding of behavioral episodes as spike sequences during hippocampal theta oscillations provides a neural substrate for computations on events extended across time and space. However, the mechanisms underlying the numerous and diverse experimentally observed properties of theta sequences remain poorly understood. Here we account for theta sequences using a novel model constrained by the septo-hippocampal circuitry. We show that when spontaneously active interneurons integrate spatial signals and theta frequency pacemaker inputs, they generate phase precessing action potentials that can coordinate theta sequences in place cell populations. We reveal novel constraints on sequence generation, predict cellular properties and neural dynamics that characterize sequence compression, identify circuit organization principles for high-capacity sequential representation, and show that theta sequences can be used as substrates for association of conditioned stimuli with recent and upcoming events. Our results suggest mechanisms for flexible sequence compression that are suited to associative learning across an animal's lifespan.
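The phase-precession signature itself can be reproduced with the generic dual-frequency intuition such models build on: a cell driven slightly faster than the theta pacemaker fires at progressively earlier theta phases on successive cycles. The sketch below uses this simplification with assumed frequencies; it is not the paper's septo-hippocampal interneuron circuit.

```python
import numpy as np

theta_f = 8.0   # theta pacemaker frequency in Hz (assumed)
cell_f = 8.8    # cell's firing frequency under spatial drive in Hz (assumed)

# One spike per cell cycle during a ~1 s place-field crossing.
spike_times = np.arange(0.0, 1.0, 1.0 / cell_f)
theta_phase = np.degrees((2 * np.pi * theta_f * spike_times) % (2 * np.pi))

for t, ph in zip(spike_times, theta_phase):
    print(f"t = {t:.3f} s   theta phase = {ph:6.1f} deg")
# Each spike lands ~33 deg earlier in the theta cycle than the last:
# 360 * (1 - theta_f / cell_f) degrees of precession per cycle.
```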


Subjects
Action Potentials, Hippocampus/physiology, Interneurons/physiology, Models, Neurological, Place Cells/physiology, Temporal Lobe/physiology, Theta Rhythm
5.
Elife ; 4. 2015 Aug 26.
Article in English | MEDLINE | ID: mdl-26308579

ABSTRACT

Although it is well known that long-term synaptic plasticity can be expressed both pre- and postsynaptically, the functional consequences of this arrangement have remained elusive. We show that spike-timing-dependent plasticity with both pre- and postsynaptic expression develops receptive fields with reduced variability and improved discriminability compared to postsynaptic plasticity alone. These long-term modifications in receptive field statistics match recent sensory perception experiments. Moreover, learning with this form of plasticity leaves a hidden postsynaptic memory trace that enables fast relearning of previously stored information, providing a cellular substrate for memory savings. Our results reveal essential roles for presynaptic plasticity that are missed when only postsynaptic expression of long-term plasticity is considered, and suggest an experience-dependent distribution of pre- and postsynaptic strength changes.
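A toy decomposition can make the hidden-trace idea concrete. Below, synaptic efficacy is factored as w = p * q (presynaptic release probability times postsynaptic quantal amplitude), and depression is assumed to be expressed mainly presynaptically, so q survives forgetting and speeds relearning. The update rules and parameters are hypothetical, not the model in the paper.

```python
def learn(p, q, steps, eta=0.2):
    # Hypothetical potentiation: raises both release probability p and quantal size q.
    for _ in range(steps):
        p = min(1.0, p + eta * (1.0 - p))
        q += eta * 0.5
    return p, q

def forget(p, q, steps, eta=0.2):
    # Hypothetical depression: expressed presynaptically, so q is untouched
    # and remains as a hidden postsynaptic memory trace.
    for _ in range(steps):
        p = max(0.05, p * (1.0 - eta))
    return p, q

p, q = 0.5, 1.0                      # naive synapse (assumed initial state)
p, q = learn(p, q, 5);  print(f"learned:    w = {p * q:.2f}")
p, q = forget(p, q, 5); print(f"forgotten:  w = {p * q:.2f}")
p, q = learn(p, q, 1);  print(f"1 relearn:  w = {p * q:.2f}")
# Because q stayed elevated, one relearning step recovers more efficacy than
# the first learning step gained from the naive state: memory savings.
```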


Assuntos
Aprendizagem , Plasticidade Neuronal , Potenciais de Ação , Animais , Modelos Teóricos , Fatores de Tempo