Results 1 - 2 of 2
1.
PLoS Comput Biol; 16(3): e1007692, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32176682

ABSTRACT

Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, yet this variability paradoxically coexists with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.
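
For a concrete picture of the scheme the abstract describes, the sketch below is a minimal NumPy toy of this kind of efficient-coding spiking network: leaky voltages driven by the input through fixed decoding weights, firing thresholds derived from the decoder norm plus a firing cost, and a local, spike-triggered update that nudges the recurrent weights toward a balanced state. The constants, the decoder initialization, and the simplified plasticity rule are illustrative assumptions, not the exact rules derived in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy efficient-coding network: N spiking neurons encode an M-dimensional signal.
    M, N = 2, 20
    dt, lam, mu = 1e-3, 10.0, 0.02           # time step, leak rate, firing cost
    D = rng.normal(scale=0.1, size=(M, N))   # decoding weights (fixed here for simplicity)
    W = rng.normal(scale=1e-3, size=(N, N))  # recurrent weights, learned locally
    T = 0.5 * (np.sum(D**2, axis=0) + mu)    # thresholds: half decoder norm plus cost
    eta = 0.01                               # learning rate

    V = np.zeros(N)                          # membrane voltages
    r = np.zeros(N)                          # filtered spike trains (postsynaptic traces)

    for step in range(50_000):
        t = step * dt
        x = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # example input
        V += dt * (-lam * V + lam * (D.T @ x))      # leak plus feedforward drive
        above = V - T
        if above.max() > 0:
            k = int(np.argmax(above))               # most supra-threshold neuron fires
            # Local, spike-triggered plasticity: the update to column k uses only the
            # postsynaptic voltages V and traces r, pushing W toward tight balance.
            W[:, k] -= eta * (V + mu * r + W[:, k])
            V += W[:, k]                            # recurrent jump through learned weights
            r[k] += 1.0
        r -= dt * lam * r                           # decay of the traces

    x_hat = D @ r                                   # decoded estimate of the signal

After learning, the decoded estimate x_hat should track the input more closely than with the random initial recurrence, which is the sense in which coding efficiency serves as the learning objective here.
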


Subject(s)
Action Potentials/physiology; Computer Simulation; Learning/physiology; Models, Neurological; Neurons/physiology; Nerve Net/physiology
2.
Neuron; 94(5): 969-977, 2017 Jun 07.
Article in English | MEDLINE | ID: mdl-28595053

ABSTRACT

Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined from quantities locally accessible at the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, as long as they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet this variability in single neurons may hide an extremely efficient and robust computation at the population level.
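
As a rough illustration of the general scheme (top-down feedback that drives both activity and plasticity, with learning based only on locally available signals), here is a small rate-based toy in NumPy. It replaces the spiking, E/I-balanced circuit of the paper with a tanh rate network and a simple delta-rule readout, so the names, constants, and feedback arrangement below are assumptions for illustration, not the paper's derivation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Rate-based stand-in for the spiking circuit: the network must learn to produce
    # a target trajectory (a sine wave), using a readout rule that is local in the
    # sense that each synapse sees only its presynaptic rate and the fed-back error.
    N, dt, tau = 200, 1e-3, 0.02
    J = rng.normal(scale=1.2 / np.sqrt(N), size=(N, N))  # fixed random recurrence
    w_out = np.zeros(N)                                  # learned readout weights
    w_fb = rng.uniform(-1.0, 1.0, size=N)                # top-down feedback weights
    eta = 1e-3                                           # learning rate

    x = rng.normal(scale=0.1, size=N)                    # network state
    for step in range(200_000):
        t = step * dt
        target = np.sin(2 * np.pi * t)                   # target dynamical-system output
        r = np.tanh(x)                                   # firing rates
        z = w_out @ r                                    # network output
        err = z - target                                 # global error, broadcast top-down
        # Top-down feedback drives the activity (the error is injected into the network) ...
        x += dt / tau * (-x + J @ r - w_fb * err)
        # ... and the plasticity: a delta-rule update using only the presynaptic rate
        # and the same fed-back error signal (no global credit-assignment machinery).
        w_out -= eta * err * r

The design choice worth noting is that the same broadcast error signal appears in both the state update and the weight update; that is the ingredient the abstract highlights, while the tight excitation-inhibition balance of the actual spiking implementation is not captured by this simplified rate version.
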


Subject(s)
Adaptation, Physiological/physiology; Brain/physiology; Formative Feedback; Learning/physiology; Neuronal Plasticity/physiology; Humans; Interneurons/physiology; Neural Inhibition/physiology; Neurons/physiology