ABSTRACT
Spiking neural networks are currently of high interest, both for modelling neural networks of the brain and for porting the brain's fast learning capability and energy efficiency into neuromorphic hardware. So far, however, the fast learning capabilities of the brain could not be reproduced in spiking neural networks. Biological data suggest that a synergy of synaptic plasticity on a slow time scale with network dynamics on a faster time scale underlies the fast learning capabilities of the brain. We show here that a suitable orchestration of this synergy between synaptic plasticity and network dynamics does in fact reproduce fast learning in generic recurrent networks of spiking neurons. This points to the important role of recurrent connections in spiking networks, since these are necessary for enabling salient network dynamics. More specifically, we show that the proposed synergy enables synaptic weights to encode more general information, such as priors and task structure, since moment-to-moment processing of new information can be delegated to the network dynamics.
Subject(s)
Brain, Learning, Neuronal Plasticity, Generic Drugs, Neural Networks (Computer)
ABSTRACT
In an ever-changing visual world, animals' survival depends on their ability to perceive and respond to rapidly changing motion cues. The primary visual cortex (V1) is at the forefront of this sensory processing, orchestrating neural responses to perturbations in visual flow. However, the underlying neural mechanisms that lead to distinct cortical responses to such perturbations remain enigmatic. In this study, our objective was to uncover the neural dynamics that govern V1 neurons' responses to visual flow perturbations using a biologically realistic computational model. By subjecting the model to sudden changes in visual input, we observed opposing cortical responses in excitatory layer 2/3 (L2/3) neurons, namely, depolarizing and hyperpolarizing responses. We found that this segregation was primarily driven by the competition between external visual input and recurrent inhibition, particularly within L2/3 and L4. This division was not observed in excitatory L5/6 neurons, suggesting a more prominent role for inhibitory mechanisms in the visual processing of the upper cortical layers. Our findings share similarities with recent experimental studies focusing on the opposing influence of top-down and bottom-up inputs in the mouse primary visual cortex during visual flow perturbations.
Subject(s)
Visual Cortex, Mice, Animals, Visual Cortex/physiology, Photic Stimulation, Neurons/physiology, Sensation, Visual Perception/physiology
ABSTRACT
We analyze visual processing capabilities of a large-scale model for area V1 that arguably provides the most comprehensive accumulation of anatomical and neurophysiological data to date. We find that this brain-like neural network model can reproduce a number of characteristic visual processing capabilities of the brain, in particular the capability to solve diverse visual processing tasks, including tasks that require integrating temporally dispersed visual information, with remarkable robustness to noise. This V1 model, whose architecture and neurons markedly differ from those of deep neural networks used in current artificial intelligence (AI), such as convolutional neural networks (CNNs), also reproduces a number of characteristic neural coding properties of the brain, which provides explanations for its superior noise robustness. Because visual processing is substantially more energy efficient in the brain than in CNNs in AI, such brain-like neural networks are likely to impact future technology as blueprints for visual processing in more energy-efficient neuromorphic hardware.
ABSTRACT
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.
Subject(s)
Biomedical Engineering/trends, Neurological Models, Neural Networks (Computer), Neurosciences/trends, Biomedical Engineering/methods, Forecasting, Humans, Neurons/physiology, Neurosciences/methods
ABSTRACT
Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet, despite extensive research, it remains unclear how they can learn through synaptic plasticity to carry out complex network computations. We argue that two pieces of this puzzle are provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
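The core idea behind e-prop, as described above, can be sketched in a few lines: each synapse maintains a local eligibility trace (a fading record of recent pre/post activity), and a weight change occurs only when a top-down learning signal arrives and is multiplied with that trace. The following is a minimal illustrative sketch, not the paper's actual derivation; all function names, constants, and the form of the trace are simplifying assumptions (the real e-prop trace is derived from the spiking neuron model).

```python
def eprop_update(weights, eligibility, pre_activity, post_factor,
                 learning_signals, lr=0.01, decay=0.9):
    """One online e-prop-style update step (illustrative sketch).

    weights, eligibility : 2D lists indexed [post][pre]
    pre_activity         : recent activity of presynaptic neurons
    post_factor          : local postsynaptic term (assumed given here)
    learning_signals     : per-neuron top-down learning signals
    """
    for i in range(len(weights)):            # postsynaptic neurons
        for j in range(len(weights[i])):     # presynaptic neurons
            # Local eligibility trace: decaying memory of pre/post coincidence.
            eligibility[i][j] = (decay * eligibility[i][j]
                                 + post_factor[i] * pre_activity[j])
            # Weight change = learning signal x eligibility trace.
            weights[i][j] += lr * learning_signals[i] * eligibility[i][j]
    return weights, eligibility

# Toy usage: one postsynaptic neuron, two presynaptic neurons.
w, e = eprop_update(weights=[[0.0, 0.0]], eligibility=[[0.0, 0.0]],
                    pre_activity=[1.0, 0.0], post_factor=[1.0],
                    learning_signals=[2.0])
```

Note that both factors are local in time: the trace is updated online at every step, so no unrolling through time (as in BPTT) is required.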
Subject(s)
Brain/physiology, Neurological Models, Nerve Net/physiology, Neurons/physiology, Reward, Action Potentials/physiology, Animals, Brain/cytology, Deep Learning, Humans, Mice, Neuronal Plasticity/physiology
ABSTRACT
Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. This process has occasionally been emulated through genetic algorithms, but these themselves require hand-designed details and tend to provide only a limited range of improvements. Instead, we employ other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule that they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time Learning-to-Learn benefits from such hardware, in particular the capability to extract abstract knowledge from prior learning experiences, which speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since the hardware makes it feasible to carry out the required very large number of network computations.
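To illustrate the kind of gradient-free optimizer named above, here is a minimal sketch of the cross-entropy method (CEM): sample candidate parameter vectors from a Gaussian, evaluate their fitness, and refit the Gaussian to the best ("elite") samples. The function names and the toy quadratic fitness are illustrative assumptions; in the setting of the abstract, the fitness would instead score how well a neuromorphic agent (or its learning rule, for meta-plasticity) performs on a task.

```python
import random
import statistics

def cross_entropy_method(fitness, dim, n_samples=64, n_elite=8,
                         n_iters=60, seed=0):
    """Maximize `fitness` over a dim-dimensional parameter space by
    iteratively refitting a diagonal Gaussian to the elite samples."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    std = [1.0] * dim
    for _ in range(n_iters):
        # Draw candidate parameter vectors from the current Gaussian.
        samples = [[rng.gauss(mean[d], std[d]) for d in range(dim)]
                   for _ in range(n_samples)]
        # Keep the best-scoring candidates.
        samples.sort(key=fitness, reverse=True)
        elite = samples[:n_elite]
        # Refit the search distribution to the elites.
        for d in range(dim):
            col = [s[d] for s in elite]
            mean[d] = statistics.fmean(col)
            std[d] = statistics.stdev(col) + 1e-6  # keep exploration alive
    return mean

# Toy usage: recover the optimum of a simple quadratic fitness.
target = [0.5, -1.2, 2.0]
best = cross_entropy_method(
    lambda x: -sum((xi - ti) ** 2 for xi, ti in zip(x, target)), dim=3)
```

Because CEM needs only fitness evaluations, never gradients, it can optimize parameters of hardware-in-the-loop systems where backpropagation is unavailable, which is what makes it a natural fit for neuromorphic substrates.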