Results 1 - 7 of 7
1.
Nature; 563(7730): 230-234, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30374193

ABSTRACT

In recent years, artificial neural networks have become the flagship algorithm of artificial intelligence [1]. In these systems, neuron activation functions are static, and computing is achieved through standard arithmetic operations. By contrast, a prominent branch of neuroinspired computing embraces the dynamical nature of the brain and proposes to endow each component of a neural network with dynamical functionality, such as oscillations, and to rely on emergent physical phenomena, such as synchronization [2-6], for solving complex problems with small networks [7-11]. This approach is especially interesting for hardware implementations, because emerging nanoelectronic devices can provide compact and energy-efficient nonlinear auto-oscillators that mimic the periodic spiking activity of biological neurons [12-16]. The dynamical couplings between oscillators can then be used to mediate the synaptic communication between the artificial neurons. One challenge for using nanodevices in this way is to achieve learning, which requires fine control and tuning of their coupled oscillations [17]; the dynamical features of nanodevices can be difficult to control and prone to noise and variability [18]. Here we show that the outstanding tunability of spintronic nano-oscillators (that is, the possibility of accurately controlling their frequency across a wide range through electrical current and magnetic field) can be used to address this challenge. We successfully train a hardware network of four spin-torque nano-oscillators to recognize spoken vowels by tuning their frequencies according to an automatic real-time learning rule. We show that the high experimental recognition rates stem from the ability of these oscillators to synchronize. Our results demonstrate that non-trivial pattern classification tasks can be achieved with small hardware neural networks by endowing them with nonlinear dynamical features such as oscillations and synchronization.
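The synchronization mechanism the abstract relies on can be illustrated with the Kuramoto model, the standard abstraction for coupled phase oscillators. The sketch below is not the authors' spintronic device model; the coupling strength, frequencies, and network size are illustrative. It shows that four oscillators phase-lock (order parameter r near 1) when their tuned frequencies are close, and fail to when strongly detuned:

```python
import numpy as np

def order_parameter(omega, K=1.0, steps=4000, dt=0.01, seed=0):
    """Integrate the Kuramoto model and return the time-averaged order
    parameter r over the second half of the run. r -> 1 at full synchrony;
    r stays low when the phases keep drifting apart."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(omega))
    rs = []
    for step in range(steps):
        # Each oscillator is pulled toward the phases of the others.
        coupling = (K / len(omega)) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
        if step >= steps // 2:
            rs.append(abs(np.exp(1j * theta).mean()))
    return float(np.mean(rs))

# Near-identical tuned frequencies: the four oscillators lock.
r_locked = order_parameter(np.array([1.00, 1.02, 0.99, 1.01]))
# Strongly detuned frequencies: no synchronization.
r_free = order_parameter(np.array([0.5, 1.5, 2.5, 3.5]))
```

In this picture, "learning" amounts to adjusting the entries of `omega` (in the experiment, via current and field) until the network synchronizes for inputs of the target class.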

2.
Phys Rev Lett; 117(7): 074301, 2016 Aug 12.
Article in English | MEDLINE | ID: mdl-27563967

ABSTRACT

A recent theoretical breakthrough has provided a new tool, called the localization landscape, for predicting the localization regions of vibration modes in complex or disordered systems. Here, we report the first experiment that measures the localization landscape and demonstrates its predictive power. Holographic measurement of the static deformation of a thin plate with complex geometry under uniform load provides direct access to the landscape function. When set in vibration, this system shows modes precisely confined within the subregions delineated by the landscape function. Moreover, the maxima of this function match the measured eigenfrequencies, while the minima of its valley network give the frequencies at which modes become extended. This approach fully characterizes the low-frequency spectrum of a complex structure from a single static measurement. It paves the way for controlling and engineering eigenmodes in any vibratory system, especially where a structural or microscopic description is not accessible.
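The core idea, that a single static solve predicts where modes localize, can be reproduced numerically. The sketch below uses a 1D Schrödinger-type operator with a random potential rather than the paper's 2D plate; grid size and potential scale are illustrative. The landscape u solves Au = 1 (the discrete analogue of a static deformation under uniform load), and the Filoche-Mayboroda inequality bounds every normalized eigenmode by the eigenvalue times the landscape:

```python
import numpy as np

n, h = 300, 1.0 / 300              # grid points and spacing on [0, 1]
rng = np.random.default_rng(1)
V = 5000.0 * rng.random(n)         # disordered potential (illustrative scale)

# Discrete operator A = -d^2/dx^2 + V(x) with Dirichlet boundaries.
off = -np.ones(n - 1) / h**2
A = np.diag(2.0 / h**2 + V) + np.diag(off, 1) + np.diag(off, -1)

u = np.linalg.solve(A, np.ones(n))     # landscape function: A u = 1
lam, vecs = np.linalg.eigh(A)          # eigenvalues in ascending order
psi = np.abs(vecs[:, 0])
psi /= psi.max()                        # ground mode, normalized to max 1

# Landscape bound: |psi(x)| <= lambda_1 * u(x) at every point, so the mode
# can only live where the landscape u is large.
bound_holds = bool(np.all(psi <= lam[0] * u + 1e-9))
```

The bound follows from psi = lambda * A^(-1) psi and the entrywise nonnegativity of A^(-1) (A is an M-matrix here), which is the discrete counterpart of the maximum principle the landscape theory relies on.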

3.
Nat Commun; 13(1): 883, 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35169115

ABSTRACT

The brain naturally binds events from different sources into unique concepts. It is hypothesized that this process occurs through the transient mutual synchronization of neurons located in different regions of the brain when the stimulus is presented. This mechanism of 'binding through synchronization' can be directly implemented in neural networks composed of coupled oscillators. To do so, the oscillators must be able to mutually synchronize for the range of inputs corresponding to a single class, and otherwise remain desynchronized. Here we show that the outstanding ability of spintronic nano-oscillators to mutually synchronize, together with the possibility of precisely controlling the occurrence of mutual synchronization by tuning the oscillator frequencies over wide ranges, allows pattern recognition. We demonstrate experimentally, on a simple task, that three spintronic nano-oscillators can bind consecutive events and thus recognize and distinguish temporal sequences. This work is a step forward in the construction of neural networks that exploit the non-linear dynamical properties of their components to perform brain-inspired computations.
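The requirement "synchronize for in-class inputs, stay desynchronized otherwise" rests on the fact that two coupled oscillators lock only when their frequency detuning falls inside a finite locking range. A minimal phase model of a coupled pair, the Adler equation, makes this sharp (the coupling strength K, time step, and threshold below are illustrative, not the spintronic devices' parameters):

```python
import math

def phase_locks(delta_omega, K=1.0, steps=4000, dt=0.005):
    """Integrate the Adler equation d(phi)/dt = delta_omega - K*sin(phi)
    for the phase difference phi between two coupled oscillators."""
    phi = 0.0
    for _ in range(steps):
        phi += dt * (delta_omega - K * math.sin(phi))
    # A locked pair settles at a fixed phase difference (zero drift rate);
    # an unlocked pair keeps slipping.
    return abs(delta_omega - K * math.sin(phi)) < 1e-3

inside = phase_locks(0.5)    # |detuning| < K: locks
outside = phase_locks(1.5)   # |detuning| > K: never locks
```

Tuning the oscillators' frequencies (as in the experiment) moves the detuning in or out of the |delta_omega| < K window, which is what turns mutual synchronization into a programmable recognition primitive.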


Subjects
Brain/physiology; Cortical Synchronization/physiology; Nerve Net/physiology; Neural Networks, Computer; Animals; Computer Simulation; Humans; Models, Neurological; Neurons/physiology
4.
Nat Commun; 12(1): 2549, 2021 May 5.
Article in English | MEDLINE | ID: mdl-33953183

ABSTRACT

While deep neural networks have surpassed human performance in multiple situations, they are prone to catastrophic forgetting: upon training on a new task, they rapidly forget previously learned ones. Neuroscience studies, based on idealized tasks, suggest that in the brain, synapses overcome this issue by adjusting their plasticity depending on their past history. However, such "metaplastic" behaviors do not transfer directly to mitigate catastrophic forgetting in deep neural networks. In this work, we interpret the hidden weights used by binarized neural networks, a low-precision version of deep neural networks, as metaplastic variables, and modify their training technique to alleviate forgetting. Building on this idea, we propose and demonstrate experimentally, in situations of multitask and stream learning, a training technique that reduces catastrophic forgetting without needing previously presented data or formal boundaries between datasets, with performance approaching that of more mainstream techniques that rely on task boundaries. We support our approach with a theoretical analysis on a tractable task. This work bridges computational neuroscience and deep learning, and presents significant assets for future embedded and neuromorphic systems, especially when using novel nanodevices whose physics is analogous to metaplasticity.
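The mechanism can be sketched concretely. In a binarized network, the binary weight is sign(w_hidden); treating |w_hidden| as a consolidation variable means damping any update that pushes a hidden weight back toward zero (and hence toward a sign flip). The damping function, learning rate, and parameter m below are illustrative choices, not necessarily the paper's exact rule:

```python
import numpy as np

def metaplastic_update(w_hidden, grad, lr=0.1, m=1.3):
    """Metaplasticity-inspired update for the hidden weights of a binarized
    network. Steps that would shrink |w_hidden| (risking a flip of the binary
    weight sign(w_hidden)) are damped by 1 - tanh(m*|w|)^2, so weights that
    have drifted far from zero become hard to overwrite by later tasks."""
    step = -lr * grad
    toward_zero = np.sign(step) != np.sign(w_hidden)
    factor = np.where(toward_zero, 1.0 - np.tanh(m * np.abs(w_hidden))**2, 1.0)
    return w_hidden + factor * step

w = np.array([2.0, -2.0, 0.1])    # two consolidated weights, one fragile
g = np.array([1.0, -1.0, 1.0])    # gradients pushing all three toward zero
w_new = metaplastic_update(w, g)
```

After the update, the consolidated weights (|w| = 2) have barely moved and keep their sign, while the near-zero weight absorbs almost the full step, which is the qualitative behavior that protects old tasks from being overwritten.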

5.
iScience; 24(3): 102222, 2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33748709

ABSTRACT

Finding spike-based learning algorithms that can be implemented within the local constraints of neuromorphic systems, while achieving high accuracy, remains a formidable challenge. Equilibrium propagation is a promising alternative to backpropagation, as it involves only local computations, but hardware-oriented studies have so far focused on rate-based networks. In this work, we develop a spiking neural network algorithm called EqSpike, compatible with neuromorphic systems, which learns by equilibrium propagation. Through simulations, we obtain a test recognition accuracy of 97.6% on the MNIST (Modified National Institute of Standards and Technology) handwritten digits dataset, similar to rate-based equilibrium propagation and comparing favorably to alternative learning techniques for spiking neural networks. We show that EqSpike implemented in silicon neuromorphic technology could reduce the energy consumption of inference and training by three and two orders of magnitude, respectively, compared to graphics processing units. Finally, we show that during learning, EqSpike weight updates exhibit a form of spike-timing-dependent plasticity, highlighting a possible connection with biology.
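The locality that makes equilibrium propagation hardware-friendly is easiest to see on a toy system small enough to solve in closed form. The sketch below is a rate-based, single-weight caricature (quadratic energy), not EqSpike's spiking implementation; the values of w, x, y, and beta are illustrative. The weight update only compares a locally measurable quantity (here -x*s) between the free and nudged equilibria, yet recovers the true cost gradient as the nudging strength beta shrinks:

```python
# Toy equilibrium propagation: one weight w, input x, target y.
# Energy E(s) = 0.5*s**2 - w*x*s, so the free equilibrium is s* = w*x.
# Cost C(s) = 0.5*(s - y)**2; the nudged phase minimizes E + beta*C.

def free_state(w, x):
    return w * x

def nudged_state(w, x, y, beta):
    return (w * x + beta * y) / (1.0 + beta)

def ep_gradient(w, x, y, beta):
    """EP gradient estimate: (1/beta) * [dE/dw(nudged) - dE/dw(free)],
    where dE/dw = -x*s depends only on quantities local to the connection."""
    s_free, s_nudged = free_state(w, x), nudged_state(w, x, y, beta)
    return (1.0 / beta) * ((-x * s_nudged) - (-x * s_free))

w, x, y, beta = 0.5, 2.0, 3.0, 0.01
true_grad = (free_state(w, x) - y) * x   # exact dC/dw via the chain rule
ep_est = ep_gradient(w, x, y, beta)
```

For this system the estimate works out to x*(w*x - y)/(1 + beta), so it converges to the exact gradient x*(w*x - y) as beta approaches 0; EqSpike's contribution is realizing this comparison with spiking neurons under neuromorphic constraints.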

6.
Front Neurosci; 15: 633674, 2021.
Article in English | MEDLINE | ID: mdl-33679315

ABSTRACT

Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach is a major step toward learning-capable neuromorphic systems and comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases, during which the network first evolves freely and is then "nudged" toward a target; the weights of the network are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach to training recurrent neural networks, when nudging is performed with infinitesimally small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that cancelling it allows training deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, approaching the one achieved by BPTT and providing a major improvement over standard Equilibrium Propagation, which gives an 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically plausible approach to computing error gradients in deep neuromorphic systems.
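The bias argument has the familiar structure of finite differences: the Equilibrium Propagation estimate is a difference quotient in the nudging strength beta, so one-sided nudging carries an O(beta) error, while symmetric nudging (positive and negative) cancels the leading term and leaves O(beta^2). A generic numerical illustration, where f stands in for the beta-dependent quantity being differentiated and the function and step size are illustrative:

```python
def one_sided(f, x, beta):
    """Forward-difference estimate of f'(x): bias O(beta)."""
    return (f(x + beta) - f(x)) / beta

def symmetric(f, x, beta):
    """Centered-difference estimate (symmetric nudging): bias O(beta**2)."""
    return (f(x + beta) - f(x - beta)) / (2.0 * beta)

f = lambda t: t ** 3          # stand-in function; true derivative at t=1 is 3
err_one = abs(one_sided(f, 1.0, 0.1) - 3.0)
err_sym = abs(symmetric(f, 1.0, 0.1) - 3.0)
```

With beta = 0.1 the one-sided error is roughly 0.31 while the symmetric error is roughly 0.01, a thirty-fold reduction at no extra cost per phase, which mirrors why symmetric nudging unlocks deeper networks.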

7.
Sci Rep; 9(1): 1851, 2019 Feb 12.
Article in English | MEDLINE | ID: mdl-30755662

ABSTRACT

One of the biggest stakes in nanoelectronics today is to meet the needs of Artificial Intelligence by designing hardware neural networks which, by fusing computation and memory, process and learn from data with limited energy. For this purpose, memristive devices are excellent candidates to emulate synapses. A challenge, however, is to map existing learning algorithms onto a chip: for a physical implementation, a learning rule should ideally be local and tolerant to the typical intrinsic imperfections of such memristive devices. Restricted Boltzmann Machines (RBM), with their local learning rule and inherent tolerance to stochasticity, comply with both of these constraints and constitute a highly attractive algorithm for achieving memristor-based Deep Learning. Through simulations, this work gives insights into designing simple memristive device programming protocols to train on-chip Boltzmann Machines. Among RBM-based neural networks, we advocate using a Discriminative RBM, with two hardware-oriented adaptations. We propose a pulse-width selection scheme based on the sign of two successive weight updates, and show that it removes the need to precisely tune the initial programming pulse width as a hyperparameter. We also propose to evaluate the weight update requested by the algorithm across several samples and stochastic realizations. We show that this strategy brings partial immunity against the most severe memristive device imperfections, such as the non-linearity and stochasticity of the conductance updates, as well as device-to-device variability.
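The pulse-width idea can be sketched as a simple adaptive rule; the abstract only states that the scheme is driven by the sign of two successive weight updates, so the halving factor, floor, and the exact policy below are illustrative assumptions, not the paper's protocol:

```python
def next_pulse_width(width, prev_sign, new_sign, shrink=0.5, floor=1e-9):
    """Shrink the programming pulse width whenever two successive weight
    updates disagree in sign (suggesting the previous pulse overshot the
    target conductance); otherwise keep it. The halving factor and the
    floor are illustrative, hypothetical choices."""
    if prev_sign * new_sign < 0:
        width = max(width * shrink, floor)
    return width

width = 100e-9                        # deliberately over-sized start (s)
update_signs = [+1, +1, -1, +1, -1, +1]   # requested update directions
for prev_s, new_s in zip(update_signs, update_signs[1:]):
    width = next_pulse_width(width, prev_s, new_s)
# four sign flips in the sequence -> width shrunk by a factor of 16
```

Because the width only shrinks on observed overshoot, a deliberately over-sized initial value self-corrects during training, which is how such a scheme can remove the initial pulse width as a hyperparameter to tune.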
