Results 1 - 6 of 6

1.
Sci Rep; 14(1): 8557, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609429

ABSTRACT

Spiking neural networks are currently of high interest, both for modelling the neural networks of the brain and for porting the brain's fast learning capability and energy efficiency into neuromorphic hardware. So far, however, the fast learning capabilities of the brain have not been reproduced in spiking neural networks. Biological data suggest that a synergy between synaptic plasticity on a slow time scale and network dynamics on a faster time scale underlies the fast learning capabilities of the brain. We show here that a suitable orchestration of this synergy between synaptic plasticity and network dynamics does in fact reproduce fast learning capabilities in generic recurrent networks of spiking neurons. This points to the important role of recurrent connections in spiking networks, since these are necessary for salient network dynamics. More specifically, we show that the proposed synergy enables synaptic weights to encode more general information such as priors and task structures, since moment-to-moment processing of new information can be delegated to the network dynamics.
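The division of labor described in this abstract can be illustrated with a minimal sketch: slowly learned recurrent weights store structure, while the fast membrane-potential dynamics of the network carry the moment-to-moment processing of each new input. The toy NumPy code below is an illustrative assumption, not the authors' model; all constants, shapes, and input statistics are made up for the example.

    import numpy as np

    # Toy recurrent network of leaky integrate-and-fire (LIF) neurons.
    # W plays the role of the slowly learned synaptic weights (priors,
    # task structure); v and spikes are the fast network dynamics.
    rng = np.random.default_rng(0)
    N, T = 200, 500                       # neurons, time steps
    alpha, v_th, v_reset = 0.95, 1.0, 0.0 # membrane decay, threshold, reset

    W = 0.08 * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed "slow" weights
    np.fill_diagonal(W, 0.0)

    v = np.zeros(N)                       # membrane potentials ("fast" state)
    spikes = np.zeros(N)
    rates = []
    for t in range(T):
        I_ext = 0.4 * (rng.random(N) < 0.3)   # fresh stochastic input each step
        v = alpha * v + W @ spikes + I_ext    # leaky integration
        spikes = (v >= v_th).astype(float)    # threshold crossing
        v = np.where(spikes > 0, v_reset, v)  # reset after a spike
        rates.append(spikes.mean())

    print(f"mean firing probability per step: {np.mean(rates):.3f}")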


Subject(s)
Brain; Learning; Neuronal Plasticity; Neural Networks, Computer
2.
PLoS Comput Biol; 20(3): e1011921, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38452057

ABSTRACT

In an ever-changing visual world, animals' survival depends on their ability to perceive and respond to rapidly changing motion cues. The primary visual cortex (V1) is at the forefront of this sensory processing, orchestrating neural responses to perturbations in visual flow. However, the underlying neural mechanisms that lead to distinct cortical responses to such perturbations remain enigmatic. In this study, our objective was to uncover the neural dynamics that govern V1 neurons' responses to visual flow perturbations using a biologically realistic computational model. By subjecting the model to sudden changes in visual input, we observed opposing cortical responses in excitatory layer 2/3 (L2/3) neurons, namely, depolarizing and hyperpolarizing responses. We found that this segregation was primarily driven by the competition between external visual input and recurrent inhibition, particularly within L2/3 and L4. This division was not observed in excitatory L5/6 neurons, suggesting a more prominent role for inhibitory mechanisms in the visual processing of the upper cortical layers. Our findings share similarities with recent experimental studies focusing on the opposing influence of top-down and bottom-up inputs in the mouse primary visual cortex during visual flow perturbations.
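The competition described here, between external visual drive and recurrent inhibition, can be caricatured in a few lines of code. The rate-model toy below is an assumption for illustration, not the published biologically realistic model: each cell sums feedforward visual drive and recurrent inhibition with cell-specific weights, and when visual flow suddenly halts, cells dominated by feedforward drive hyperpolarize while cells dominated by inhibition depolarize as the inhibition collapses.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, T = 100, 400
    flow = np.ones(T)
    flow[200:] = 0.0                         # visual flow suddenly halts at t = 200

    w_ff = rng.uniform(0.0, 1.0, n_cells)    # feedforward (visual) weight per cell
    w_inh = rng.uniform(0.0, 1.0, n_cells)   # recurrent-inhibition weight per cell

    v = np.zeros((T, n_cells))               # membrane potential proxy
    pop_rate = 0.0
    for t in range(T):
        drive = w_ff * flow[t] - w_inh * pop_rate
        prev = v[t - 1] if t > 0 else np.zeros(n_cells)
        v[t] = prev + 0.1 * (drive - prev)        # leaky integration toward drive
        pop_rate = np.maximum(v[t], 0.0).mean()   # inhibition tracks population activity

    resp = v[210] - v[199]                   # membrane change caused by the perturbation
    print(f"depolarized: {(resp > 0).sum()} cells, hyperpolarized: {(resp < 0).sum()} cells")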


Subject(s)
Visual Cortex; Mice; Animals; Visual Cortex/physiology; Photic Stimulation; Neurons/physiology; Sensation; Visual Perception/physiology
3.
Sci Adv; 8(44): eabq7592, 2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36322646

ABSTRACT

We analyze the visual processing capabilities of a large-scale model of area V1 that arguably provides the most comprehensive integration of anatomical and neurophysiological data to date. We find that this brain-like neural network model reproduces a number of characteristic visual processing capabilities of the brain, in particular the capability to solve diverse visual processing tasks, even on temporally dispersed visual information, with remarkable robustness to noise. This V1 model, whose architecture and neurons differ markedly from those of the deep neural networks used in current artificial intelligence (AI), such as convolutional neural networks (CNNs), also reproduces a number of characteristic neural coding properties of the brain, which explains its superior noise robustness. Because visual processing is substantially more energy efficient in the brain than in the CNNs of AI, such brain-like neural networks are likely to have an impact on future technology as blueprints for visual processing in more energy-efficient neuromorphic hardware.
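As a point of reference, noise robustness of the kind claimed here is typically quantified by sweeping the input noise level and measuring task accuracy. The sketch below is a generic evaluation recipe, not the paper's protocol; `model`, `images`, and `labels` are hypothetical placeholders.

    import numpy as np

    def noise_robustness_curve(model, images, labels,
                               sigmas=(0.0, 0.1, 0.2, 0.4)):
        """Accuracy of `model` under increasing additive Gaussian pixel noise."""
        rng = np.random.default_rng(0)
        accuracy = {}
        for sigma in sigmas:
            noisy = images + sigma * rng.standard_normal(images.shape)
            predictions = model(noisy)      # hypothetical callable classifier
            accuracy[sigma] = float(np.mean(predictions == labels))
        return accuracy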

4.
Neuron; 109(4): 571-575, 2021 Feb 17.
Article in English | MEDLINE | ID: mdl-33600754

ABSTRACT

Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.


Subject(s)
Biomedical Engineering/trends; Models, Neurological; Neural Networks, Computer; Neurosciences/trends; Biomedical Engineering/methods; Forecasting; Humans; Neurons/physiology; Neurosciences/methods
5.
Nat Commun; 11(1): 3625, 2020 Jul 17.
Article in English | MEDLINE | ID: mdl-32681001

ABSTRACT

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet, in spite of extensive research, it has remained unclear how they can learn through synaptic plasticity to carry out complex network computations. We argue that two pieces of this puzzle are provided by experimental data from neuroscience, and a mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. The resulting learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
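The core of e-prop is that each synapse maintains a local eligibility trace, the filtered presynaptic activity times a pseudo-derivative of the postsynaptic potential, and the weight update is the product of that trace with an online learning signal that replaces the backpropagated error of BPTT. The sketch below shows this structure for input synapses of LIF neurons; the constants, the random stand-in learning signal, and the toy input are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    rng = np.random.default_rng(2)
    N_in, N_rec, T = 50, 100, 200
    alpha, lr = 0.9, 1e-3                      # membrane decay, learning rate

    w_in = 0.1 * rng.standard_normal((N_rec, N_in))
    v = np.zeros(N_rec)                        # membrane potentials
    x_bar = np.zeros(N_in)                     # low-pass filtered presynaptic spikes

    for t in range(T):
        x_t = (rng.random(N_in) < 0.1).astype(float)   # presynaptic spike train
        v = alpha * v + w_in @ x_t                     # leaky integration
        spikes = (v >= 1.0).astype(float)
        v -= spikes                                    # soft reset after a spike

        # Synapse-local eligibility trace e_ji(t): pseudo-derivative of the
        # postsynaptic potential times the filtered presynaptic activity.
        x_bar = alpha * x_bar + x_t
        pseudo = np.maximum(0.0, 1.0 - np.abs(v - 1.0))
        elig = pseudo[:, None] * x_bar[None, :]

        # Online learning signal L_j(t): a random broadcast error here, as a
        # stand-in for the task-specific error used in the paper's experiments.
        L = 0.1 * rng.standard_normal(N_rec)
        w_in -= lr * L[:, None] * elig                 # dw_ji = -lr * L_j * e_ji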


Subject(s)
Brain/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Reward; Action Potentials/physiology; Animals; Brain/cytology; Deep Learning; Humans; Mice; Neuronal Plasticity/physiology
6.
Front Neurosci; 13: 483, 2019.
Article in English | MEDLINE | ID: mdl-31178681

ABSTRACT

Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a wide range of computing and learning tasks. This process has occasionally been emulated through genetic algorithms, but these themselves require hand-designed details and tend to yield a limited range of improvements. Instead, we employ other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule that they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time Learning-to-Learn benefits from such hardware, in particular the capability to extract abstract knowledge from prior learning experiences that speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since it makes it feasible to carry out the required very large number of network computations.
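For concreteness, the cross-entropy method named in this abstract fits a sampling distribution to the best-scoring candidates of each generation. The compact sketch below applies it to a generic fitness function over learning-rule hyperparameters; the stand-in fitness function is an assumption, and on real neuromorphic hardware it would run a full reward-learning episode with the candidate hyperparameters.

    import numpy as np

    def cross_entropy_method(fitness, dim, iters=50, pop=64, elite_frac=0.125):
        """Gradient-free maximization of `fitness` over a dim-dimensional space."""
        rng = np.random.default_rng(3)
        mu, sigma = np.zeros(dim), np.ones(dim)
        n_elite = max(1, int(pop * elite_frac))
        for _ in range(iters):
            samples = mu + sigma * rng.standard_normal((pop, dim))
            scores = np.array([fitness(s) for s in samples])
            elite = samples[np.argsort(scores)[-n_elite:]]    # best candidates
            mu = elite.mean(axis=0)                           # refit the sampler
            sigma = elite.std(axis=0) + 1e-3                  # keep some spread
        return mu

    # Stand-in fitness: replace with a hardware-in-the-loop evaluation of a
    # learning rule parameterized by h (a hypothetical example, not the paper's).
    best = cross_entropy_method(lambda h: -np.sum((h - 0.5) ** 2), dim=4)
    print(best)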
