Results 1 - 2 of 2
1.
Elife; 10: 2021 Jul 26.
Article in English | MEDLINE | ID: mdl-34310281

ABSTRACT

For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. The inclusion of this well-known property of a substantial fraction of neurons in the neocortex - especially in higher areas of the human neocortex - moves the performance of spiking neural network models for computations on network inputs that are temporally dispersed from a fairly low level up to the performance level of the human brain.
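For readers unfamiliar with spike frequency adaptation, the sketch below illustrates the mechanism with a leaky integrate-and-fire neuron whose effective firing threshold rises after each spike and slowly relaxes back. The formulation is a standard adaptive-threshold model and all parameter values are illustrative assumptions; it is not taken from the article.

```python
# Minimal sketch of spike frequency adaptation (SFA) in a leaky
# integrate-and-fire neuron with an adaptive threshold.
# Parameter values are illustrative assumptions, not the paper's.
import numpy as np

dt = 1.0          # time step (ms)
tau_m = 20.0      # membrane time constant (ms)
tau_a = 200.0     # adaptation time constant (ms)
v_th0 = 1.0       # baseline firing threshold
beta = 0.5        # strength of threshold adaptation
I = 0.08          # constant input drive (arbitrary units)

v, a = 0.0, 0.0   # membrane potential and adaptation variable
spike_times = []

for t in range(1000):
    v += dt / tau_m * (-v) + I        # leaky integration of the input
    a += dt / tau_a * (-a)            # adaptation variable decays to zero
    if v >= v_th0 + beta * a:         # threshold rises with recent spiking
        spike_times.append(t)
        v = 0.0                       # reset membrane potential
        a += 1.0                      # raise the threshold after each spike

# Under constant drive, inter-spike intervals lengthen over time,
# i.e. the firing rate adapts.
print(np.diff(spike_times))
```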


Subject(s)
Action Potentials/physiology; Models, Neurological; Neocortex/physiology; Nerve Net/physiology; Neurons/physiology; Adaptation, Physiological; Computers, Molecular; Humans; Neural Networks, Computer
2.
Nat Commun; 11(1): 3625, 2020 Jul 17.
Article in English | MEDLINE | ID: mdl-32681001

ABSTRACT

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
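As a rough illustration of the e-prop idea (a local eligibility trace per synapse combined with an online learning signal, instead of a backward pass through time), the sketch below uses a simplified leaky recurrent unit and fixed random feedback weights. It is a schematic under these assumptions, not the authors' spiking formulation; all names and parameters are hypothetical.

```python
# Schematic e-prop-style update for the input weights of a leaky
# recurrent layer: each synapse keeps a local eligibility trace, and
# a broadcast learning signal (here: output error fed back through
# fixed random weights) turns that trace into an online weight update.
# Shapes, dynamics and data are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec, n_out = 3, 10, 2
alpha, lr, T = 0.9, 1e-3, 50             # leak factor, learning rate, steps

W_in = rng.normal(0, 0.5, (n_rec, n_in))
W_out = rng.normal(0, 0.5, (n_out, n_rec))
B = rng.normal(0, 0.5, (n_rec, n_out))   # fixed random feedback weights

x = rng.normal(size=(T, n_in))           # dummy input sequence
y_target = rng.normal(size=(T, n_out))   # dummy target sequence

h = np.zeros(n_rec)                      # hidden state
x_filt = np.zeros(n_in)                  # low-pass filtered presynaptic input
dW_in = np.zeros_like(W_in)

for t in range(T):
    h = alpha * h + W_in @ x[t]          # leaky integration of the input
    z = np.tanh(h)                       # unit output
    y = W_out @ z                        # network readout

    # Eligibility trace: filtered presynaptic activity, gated by the
    # local derivative of the unit's nonlinearity (purely local terms).
    x_filt = alpha * x_filt + x[t]
    e_trace = np.outer(1.0 - z**2, x_filt)

    # Learning signal: output error broadcast back through fixed random
    # weights, so no backpropagation through time is needed.
    L = B @ (y - y_target[t])

    # Online update: learning signal times eligibility trace.
    dW_in += -lr * L[:, None] * e_trace

W_in += dW_in
print("update norm:", np.linalg.norm(dW_in))
```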


Subject(s)
Brain/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Reward; Action Potentials/physiology; Animals; Brain/cytology; Deep Learning; Humans; Mice; Neuronal Plasticity/physiology