Results 1 - 20 of 31
1.
Nature ; 632(8024): 264-265, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39112617
2.
Nature ; 563(7730): 230-234, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30374193

ABSTRACT

In recent years, artificial neural networks have become the flagship algorithm of artificial intelligence [1]. In these systems, neuron activation functions are static, and computing is achieved through standard arithmetic operations. By contrast, a prominent branch of neuroinspired computing embraces the dynamical nature of the brain and proposes to endow each component of a neural network with dynamical functionality, such as oscillations, and to rely on emergent physical phenomena, such as synchronization [2-6], for solving complex problems with small networks [7-11]. This approach is especially interesting for hardware implementations, because emerging nanoelectronic devices can provide compact and energy-efficient nonlinear auto-oscillators that mimic the periodic spiking activity of biological neurons [12-16]. The dynamical couplings between oscillators can then be used to mediate the synaptic communication between the artificial neurons. One challenge for using nanodevices in this way is to achieve learning, which requires fine control and tuning of their coupled oscillations [17]; the dynamical features of nanodevices can be difficult to control and prone to noise and variability [18]. Here we show that the outstanding tunability of spintronic nano-oscillators (that is, the possibility of accurately controlling their frequency across a wide range through electrical current and magnetic field) can be used to address this challenge. We successfully train a hardware network of four spin-torque nano-oscillators to recognize spoken vowels by tuning their frequencies according to an automatic real-time learning rule. We show that the high experimental recognition rates stem from the ability of these oscillators to synchronize. Our results demonstrate that non-trivial pattern classification tasks can be achieved with small hardware neural networks by endowing them with nonlinear dynamical features such as oscillations and synchronization.
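
The synchronization-based classification idea lends itself to a compact numerical illustration. The sketch below uses a generic Kuramoto-type phase model, not the authors' spin-torque hardware or their learning rule; the frequencies, coupling constant, and the synchronization_order helper are illustrative assumptions.

```python
# Minimal sketch (not the authors' hardware model): a Kuramoto-type phase
# network in which inputs shift oscillator frequencies and the classification
# readout is the degree of mutual synchronization.
import numpy as np

def synchronization_order(nat_freq, input_shift, coupling=2.0,
                          dt=1e-3, steps=5000, rng=None):
    """Integrate coupled phase oscillators and return the Kuramoto order
    parameter r in [0, 1]; r close to 1 means the oscillators locked."""
    rng = rng or np.random.default_rng(0)
    freqs = np.asarray(nat_freq) + np.asarray(input_shift)  # tuned + input
    phases = rng.uniform(0, 2 * np.pi, size=len(freqs))
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * phases))
        # each oscillator is pulled toward the mean phase of the others
        phases += dt * (2 * np.pi * freqs +
                        coupling * np.abs(mean_field) *
                        np.sin(np.angle(mean_field) - phases))
    return np.abs(np.mean(np.exp(1j * phases)))

# Four oscillators "trained" to similar frequencies synchronize (r close to 1)
# for a matching input and stay incoherent for a mismatched one.
trained = [10.0, 10.1, 9.9, 10.05]                          # Hz, learned working points
print(synchronization_order(trained, [0, 0, 0, 0]))         # high r -> recognized class
print(synchronization_order(trained, [0, 3.0, -2.0, 5.0]))  # low r -> rejected
```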

3.
Nature ; 547(7664): 428-431, 2017 Jul 26.
Article in English | MEDLINE | ID: mdl-28748930

ABSTRACT

Neurons in the brain behave as nonlinear oscillators, which develop rhythmic activity and interact to process information. Taking inspiration from this behaviour to realize high-density, low-power neuromorphic computing will require very large numbers of nanoscale nonlinear oscillators. A simple estimation indicates that to fit 10^8 oscillators organized in a two-dimensional array inside a chip the size of a thumb, the lateral dimension of each oscillator must be smaller than one micrometre. However, nanoscale devices tend to be noisy and to lack the stability that is required to process data in a reliable way. For this reason, despite multiple theoretical proposals and several candidates, including memristive and superconducting oscillators, a proof of concept of neuromorphic computing using nanoscale oscillators has yet to be demonstrated. Here we show experimentally that a nanoscale spintronic oscillator (a magnetic tunnel junction) can be used to achieve spoken-digit recognition with an accuracy similar to that of state-of-the-art neural networks. We also determine the regime of magnetization dynamics that leads to the greatest performance. These results, combined with the ability of the spintronic oscillators to interact with each other, and their long lifetime and low energy consumption, open up a path to fast, parallel, on-chip computation based on networks of oscillators.
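
For readers unfamiliar with how a single nonlinear oscillator can act as a reservoir, the toy sketch below uses time-multiplexed "virtual neurons" fed through a generic tanh node with feedback. It illustrates the computational scheme only; the node is not a model of the magnetic tunnel junction, and reservoir_states and the ridge-regression readout are illustrative choices.

```python
# Toy software analogue of single-node reservoir computing (time-multiplexed
# "virtual neurons"); the nonlinear node stands in for the oscillator, it is
# not a model of the magnetic tunnel junction physics.
import numpy as np

def reservoir_states(inputs, n_virtual=50, feedback=0.6, scale=1.0, seed=0):
    """Map each scalar input to n_virtual nonlinear node responses."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1, 1, n_virtual)          # fixed random input mask
    states = np.zeros((len(inputs), n_virtual))
    node = np.zeros(n_virtual)
    for t, u in enumerate(inputs):
        node = np.tanh(scale * u * mask + feedback * node)  # nonlinearity + memory
        states[t] = node
    return states

# Linear readout (ridge regression) trained on the reservoir states.
X = reservoir_states(np.sin(np.linspace(0, 20, 200)))
y = np.roll(np.sin(np.linspace(0, 20, 200)), -1)   # predict the next sample
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
print(np.mean((X @ w - y) ** 2))                   # small readout error
```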

4.
Nanotechnology ; 32(1): 012002, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-32679577

ABSTRACT

Recent progress in artificial intelligence is largely attributed to the rapid development of machine learning, especially of algorithms and neural network models. However, it is the performance of the hardware, in particular the energy efficiency of the computing system, that sets the fundamental limit on the capability of machine learning. Data-centric computing requires a revolution in hardware systems, since traditional digital computers based on transistors and the von Neumann architecture were not purposely designed for neuromorphic computing. A hardware platform based on emerging devices and new architectures is the hope for future computing with dramatically improved throughput and energy efficiency. Building such a system nevertheless faces a number of challenges, ranging from materials selection and device optimization to circuit fabrication and system integration, to name a few. The aim of this Roadmap is to present a snapshot of emerging hardware technologies that are potentially beneficial for machine learning, providing Nanotechnology readers with a perspective on the challenges and opportunities in this burgeoning field.

5.
Nanotechnology ; 31(14): 145201, 2020 Apr 03.
Article in English | MEDLINE | ID: mdl-31842010

ABSTRACT

An energy-efficient voltage-controlled domain wall (DW) device for implementing an artificial neuron and synapse is analyzed using micromagnetic modeling in the presence of room-temperature thermal noise. By controlling the DW motion using spin-transfer or spin-orbit torques, in association with voltage-generated strain control of the perpendicular magnetic anisotropy in the presence of the Dzyaloshinskii-Moriya interaction, different positions of the DW are realized in the free layer of a magnetic tunnel junction to program different synaptic weights. The feasibility of scaling such devices is assessed in the presence of thermal perturbations that compromise controllability. Additionally, an artificial neuron can be realized by combining this DW device with a CMOS buffer. This provides a possible pathway to energy-efficient voltage-controlled nanomagnetic deep neural networks that can learn in real time.

6.
Proc IEEE Inst Electr Electron Eng ; 104(10): 2024-2039, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27881881

ABSTRACT

Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications range from automatic classification for big-data management, through unmanned-vehicle control, to the control of biomedical prostheses. However, one of the major challenges in fabricating bioinspired hardware is building ultra-high-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von Neumann bottleneck that arises when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include fast, tunable nonlinear dynamics, controlled stochasticity, and the ability of single devices to change function under different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronic-CMOS (complementary metal-oxide-semiconductor) bioinspired hardware.

7.
Stereotact Funct Neurosurg ; 93(2): 94-101, 2015.
Article in English | MEDLINE | ID: mdl-25720954

ABSTRACT

Background/Aims: Evaluation of tremor constitutes a crucial step from the diagnosis to the initial treatment and follow-up of patients with essential tremor. The severity of tremor can be evaluated using clinical rating scales, accelerometry, or electrophysiology. Clinical scores are given subjectively and may be affected by intra- and inter-evaluator variations due to differences in experience, by delays between consultations, and by subtle changes in tremor severity. Existing medical devices are not routinely used: they are expensive, time-consuming, and not easily accessible. We aimed to show that a smartphone application using the accelerometers embedded in smartphones is effective for quantifying the tremor of patients presenting with essential tremor. Methods: We developed a free iPhone/iPod application, Itremor, and evaluated several parameters (average and maximum acceleration, time above 1 g of acceleration, peak frequency, and typical tremor magnitude) in 8 patients receiving deep brain stimulation of the ventral intermediate nucleus of the thalamus, for postural and action tremors, on and off stimulation. Results: We demonstrated good correlations between the parameters measured with Itremor and clinical scores in all conditions. Itremor provided higher discriminatory power and better reproducibility than clinical scores. Conclusion: Itremor can be used for routine objective evaluation of essential tremor and may facilitate adjustment of the treatment. © 2015 S. Karger AG, Basel.
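
The acceleration-derived metrics named in the abstract can be computed from a sampled signal in a few lines. The sketch below is purely illustrative and is not the Itremor application code; the tremor_metrics helper and its constants are assumptions for demonstration.

```python
# Illustrative computation of the tremor metrics listed in the abstract
# (average and maximum acceleration, time above 1 g, peak frequency) from a
# sampled acceleration magnitude; this is not the Itremor application code.
import numpy as np

def tremor_metrics(accel_g, fs_hz):
    """accel_g: acceleration magnitude in units of g, sampled at fs_hz."""
    accel_g = np.asarray(accel_g, dtype=float)
    spectrum = np.abs(np.fft.rfft(accel_g - accel_g.mean()))
    freqs = np.fft.rfftfreq(len(accel_g), d=1.0 / fs_hz)
    return {
        "mean_acc_g": accel_g.mean(),
        "max_acc_g": accel_g.max(),
        "time_above_1g_s": np.sum(accel_g > 1.0) / fs_hz,
        "peak_freq_hz": freqs[np.argmax(spectrum[1:]) + 1],  # skip the DC bin
    }

# Synthetic 5 Hz postural tremor riding on gravity, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
signal = 1.0 + 0.3 * np.sin(2 * np.pi * 5.0 * t)
print(tremor_metrics(signal, fs))
```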

8.
Nat Commun ; 15(1): 741, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38272896

ABSTRACT

Memristor-based neural networks provide an exceptionally energy-efficient platform for artificial intelligence (AI), presenting the possibility of self-powered operation when paired with energy harvesters. However, most memristor-based networks rely on analog in-memory computing, necessitating a stable and precise power supply, which is incompatible with inherently unstable and unreliable energy harvesters. In this work, we fabricated a robust binarized neural network comprising 32,768 memristors, powered by a miniature wide-bandgap solar cell optimized for edge applications. Our circuit employs a resilient digital near-memory computing approach, featuring complementarily programmed memristors and logic-in-sense-amplifier operation. This design eliminates the need for compensation or calibration, operating effectively under diverse conditions. Under high illumination, the circuit achieves inference performance comparable to that obtained with a lab-bench power supply. In low-illumination scenarios, it remains functional with slightly reduced accuracy, seamlessly transitioning to an approximate computing mode. Through image-classification neural network simulations, we demonstrate that the images misclassified under low illumination are primarily difficult-to-classify cases. Our approach lays the groundwork for self-powered AI and the creation of intelligent sensors for applications in health, safety, and environmental monitoring.
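
Functionally, binarized inference reduces to XNOR-style products of ±1 activations and weights followed by a sign. The sketch below shows only that software-level computation; the complementary memristor programming and the logic-in-sense-amplifier circuit described in the abstract are hardware details that it does not attempt to model.

```python
# Functional sketch of binarized inference as it would be evaluated in
# software: dot products between +-1 activations and weights, then a sign.
# The memristor crossbars and sense-amplifier logic are not represented here.
import numpy as np

def binarize(x):
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_layer(x_bin, w_bin):
    """One fully connected binarized layer: sign of the +-1 dot product."""
    return binarize(x_bin @ w_bin.T)

rng = np.random.default_rng(0)
x = binarize(rng.normal(size=64))            # binarized input activations
w1 = binarize(rng.normal(size=(32, 64)))     # binarized weight matrices
w2 = binarize(rng.normal(size=(10, 32)))
logits = bnn_layer(bnn_layer(x, w1), w2)
print(logits)
```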

9.
J Nanosci Nanotechnol ; 13(2): 771-5, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23646513

ABSTRACT

The III-V Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET) with a gate stack based on a high-κ dielectric appears to be an appealing solution for increasing the performance of microwave or logic circuits at low supply voltage (VDD). The main objective of this work is to provide a theoretical model of the gate charge control in III-V MOS capacitors (MOSCAPs) using an accurate self-consistent solution of the 1D and 2D Poisson-Schrödinger equations. This study allows us to identify the major mechanisms that must be included to bring theoretical calculations into good agreement with experiments. Our results for an Al2O3/In0.53Ga0.47As MOSCAP structure compare successfully with experimental measurements. We evaluate how III-V MOS technology is affected by the density of interface states, which favors Fermi-level pinning at the Al2O3/In0.53Ga0.47As interface in both the depletion and inversion regimes, the consequence being poor gate control of the mobile inversion carrier density. The contribution of the high-energy (satellite) valleys observed in many theoretical calculations appears to be fully negligible in the presence of interface states. Enhancing the doping density in the channel is shown to improve short-channel-effect (SCE) immunity, but at the price of a higher sensitivity to interface traps, which manifests as a low Fermi-level movement efficiency at the interface in the OFF state and a low inversion carrier density in the ON state, even in the long-channel case.

10.
Nat Commun ; 14(1): 7530, 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37985669

ABSTRACT

Safety-critical sensory applications, like medical diagnosis, demand accurate decisions from limited, noisy data. Bayesian neural networks excel at such tasks, offering predictive uncertainty assessment. However, because of their probabilistic nature, they are computationally intensive. An innovative solution utilizes the inherent probabilistic nature of memristors to implement Bayesian neural networks. However, when using memristors, statistical effects follow the laws of device physics, whereas in Bayesian neural networks those effects can take arbitrary shapes. This work overcomes this difficulty by adopting variational inference training augmented by a "technological loss" that incorporates memristor physics. This technique enabled the programming of a Bayesian neural network on 75 crossbar arrays of 1,024 memristors, incorporating CMOS periphery for in-memory computing. The experimental neural network classified heartbeats with high accuracy and estimated the certainty of its predictions. The results reveal an orders-of-magnitude improvement in inference energy efficiency compared to a microcontroller or an embedded graphics processing unit performing the same task.
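
The training idea can be illustrated as a variational (ELBO-style) objective augmented with a device-aware penalty. The exact form of the paper's "technological loss" is not given in the abstract; the clamp-to-programmable-range term below (technological_loss) and all constants are illustrative assumptions.

```python
# Hedged sketch: a reparameterized variational objective plus a penalty that
# pushes the learned weight distributions toward what a device could realize.
# The penalty form and the placeholder likelihood are assumptions, not the
# published method.
import torch

def technological_loss(mu, sigma, mu_max=1.0, sigma_min=0.05):
    """Penalize variational parameters a device could not be programmed to."""
    out_of_range = torch.relu(mu.abs() - mu_max).pow(2).sum()
    too_precise = torch.relu(sigma_min - sigma).pow(2).sum()
    return out_of_range + too_precise

mu = torch.randn(128, requires_grad=True)
rho = torch.randn(128, requires_grad=True)           # sigma = softplus(rho)
opt = torch.optim.Adam([mu, rho], lr=1e-2)
for _ in range(100):
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn_like(mu)            # reparameterized sample
    data_term = (w.sum() - 1.0).pow(2)                # placeholder likelihood
    kl_term = 0.5 * (mu.pow(2) + sigma.pow(2) - 2 * torch.log(sigma) - 1).sum()
    loss = data_term + 1e-3 * kl_term + technological_loss(mu, sigma)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```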

11.
Nat Nanotechnol ; 18(11): 1273-1280, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37500772

ABSTRACT

Spintronic nano-synapses and nano-neurons perform neural network operations with high accuracy thanks to their rich, reproducible and controllable magnetization dynamics. These dynamical nanodevices could transform artificial intelligence hardware, provided they can implement state-of-the-art deep neural networks. However, there is currently no scalable way to connect them in multiple layers. Here we show that the flagship nano-components of spintronics, magnetic tunnel junctions, can be connected into multilayer neural networks in which they implement both synapses and neurons thanks to their magnetization dynamics, and communicate by processing, transmitting and receiving radiofrequency signals. We build a hardware spintronic neural network composed of nine magnetic tunnel junctions connected in two layers, and show that it natively classifies nonlinearly separable radiofrequency inputs with an accuracy of 97.7%. Using physical simulations, we demonstrate that a large network of nanoscale junctions can achieve state-of-the-art identification of drones from their radiofrequency transmissions, without digitization and consuming only a few milliwatts, which constitutes a gain of several orders of magnitude in power consumption compared to currently used techniques. This study lays the foundation for deep, dynamical, spintronic neural networks.

12.
Front Neurosci ; 16: 983950, 2022.
Article in English | MEDLINE | ID: mdl-36340782

ABSTRACT

This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance only on a spike of the postsynaptic neuron, which reduces the number of updates by a factor of two with respect to standard spike-timing-dependent plasticity (STDP). The update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence does not require additional memory for storage. Moreover, the update is regularized by the synaptic weight, which prevents weights from exploding or vanishing under repeated stimulation. A rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits. We report 85.01 ± 0.76% (mean ± SD) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling the network size (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), which validates the applicability of the proposed learning rule for spatial pattern recognition tasks. Future work will consider more complex tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and does not require hand-tuning of hyperparameters.
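
A minimal sketch of the rule as described, updating weights only on a postsynaptic spike, with a sign set by the presynaptic membrane potential and soft bounds on the weight, might look as follows. The functional form, thresholds, and learning rate are illustrative assumptions, not the published VDSP equations.

```python
# Sketch of the VDSP idea stated in the abstract: weights change only when the
# postsynaptic neuron spikes, the sign/size of the change depends on the
# presynaptic membrane potential, and the change is damped by the current
# weight so it can neither explode nor vanish.
import numpy as np

def vdsp_update(w, v_pre, post_spiked, v_thresh=1.0, lr=0.01,
                w_min=0.0, w_max=1.0):
    """Return updated weights for one synapse vector on a postsynaptic event."""
    if not post_spiked:
        return w                                    # no update without a post spike
    drive = v_pre - 0.5 * v_thresh                  # high v_pre -> potentiate
    pot = lr * np.maximum(drive, 0) * (w_max - w)   # soft-bounded potentiation
    dep = lr * np.maximum(-drive, 0) * (w - w_min)  # soft-bounded depression
    return np.clip(w + pot - dep, w_min, w_max)

w = np.full(4, 0.5)
v_pre = np.array([0.9, 0.1, 0.6, 0.3])              # presynaptic potentials
print(vdsp_update(w, v_pre, post_spiked=True))
```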

13.
Nat Commun ; 13(1): 1016, 2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35197449

ABSTRACT

Deep learning has an increasing impact in assisting research, allowing, for example, the discovery of novel materials. Until now, however, these artificial intelligence techniques have fallen short of discovering the full differential equation of an experimental physical system. Here we show that a dynamical neural network, trained on a minimal amount of data, can predict the behavior of spintronic devices with high accuracy and an extremely efficient simulation time compared to the micromagnetic simulations usually employed to model them. For this purpose, we reframe the formalism of Neural Ordinary Differential Equations to the constraints of spintronics: few measured outputs, multiple inputs and internal parameters. We demonstrate with Neural Ordinary Differential Equations an acceleration factor of over 200 compared to micromagnetic simulations for a complex problem, the simulation of a reservoir computer made of magnetic skyrmions (20 minutes compared to three days). In a second realization, we show that we can predict the noisy response of experimental spintronic nano-oscillators to varying inputs after training Neural Ordinary Differential Equations on five milliseconds of their measured response to a different set of inputs. Neural Ordinary Differential Equations can therefore constitute a disruptive tool for developing spintronic applications, complementing micromagnetic simulations, which are time-consuming and cannot fit experiments when noise or imperfections are present. Our approach can also be generalized to other electronic devices involving dynamics.
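
The core Neural Ordinary Differential Equation idea, training a small network that defines dy/dt so that its integrated trajectory matches measured data, can be sketched in a few lines. The synthetic damped oscillation below stands in for a measured spintronic response; the network size, Euler integrator, and training loop are illustrative assumptions rather than the authors' re-framed formalism.

```python
# Minimal Neural ODE sketch: a small network defines dy/dt and is trained so
# that integrating it reproduces a target trajectory (here a synthetic damped
# oscillation standing in for a measured spintronic response).
import torch

f = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 2))

def integrate(y0, dt, steps):
    ys, y = [y0], y0
    for _ in range(steps):
        y = y + dt * f(y)            # explicit Euler step through the learned field
        ys.append(y)
    return torch.stack(ys)

# Synthetic target trajectory in the (y, dy/dt) plane.
t = torch.linspace(0, 4, 81)
target = torch.stack([torch.exp(-0.3 * t) * torch.cos(3 * t),
                      torch.exp(-0.3 * t) * torch.sin(3 * t)], dim=1)

opt = torch.optim.Adam(f.parameters(), lr=1e-2)
for _ in range(300):
    pred = integrate(target[0], dt=0.05, steps=80)
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))                   # trajectory-fitting error after training
```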

14.
Nat Commun ; 13(1): 883, 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35169115

ABSTRACT

The brain naturally binds events from different sources into unique concepts. It is hypothesized that this process occurs through the transient mutual synchronization of neurons located in different regions of the brain when the stimulus is presented. This mechanism of 'binding through synchronization' can be directly implemented in neural networks composed of coupled oscillators. To do so, the oscillators must be able to mutually synchronize for the range of inputs corresponding to a single class, and otherwise remain desynchronized. Here we show that the outstanding ability of spintronic nano-oscillators to mutually synchronize, together with the possibility of precisely controlling the occurrence of mutual synchronization by tuning the oscillator frequencies over wide ranges, allows pattern recognition. We demonstrate experimentally on a simple task that three spintronic nano-oscillators can bind consecutive events and thus recognize and distinguish temporal sequences. This work is a step forward in the construction of neural networks that exploit the nonlinear dynamical properties of their components to perform brain-inspired computations.


Subjects
Brain/physiology; Cortical Synchronization/physiology; Nerve Net/physiology; Neural Networks, Computer; Animals; Computer Simulation; Humans; Models, Neurological; Neurons/physiology
15.
Nat Commun ; 12(1): 2549, 2021 May 05.
Article in English | MEDLINE | ID: mdl-33953183

ABSTRACT

While deep neural networks have surpassed human performance in multiple situations, they are prone to catastrophic forgetting: upon training on a new task, they rapidly forget previously learned ones. Neuroscience studies, based on idealized tasks, suggest that in the brain, synapses overcome this issue by adjusting their plasticity depending on their history. However, such "metaplastic" behaviors do not transfer directly to mitigate catastrophic forgetting in deep neural networks. In this work, we interpret the hidden weights used by binarized neural networks, a low-precision version of deep neural networks, as metaplastic variables, and modify their training technique to alleviate forgetting. Building on this idea, we propose and demonstrate experimentally, in multitask and stream-learning situations, a training technique that reduces catastrophic forgetting without needing previously presented data or formal boundaries between datasets, and with performance approaching that of more mainstream techniques that rely on task boundaries. We support our approach with a theoretical analysis of a tractable task. This work bridges computational neuroscience and deep learning, and presents significant assets for future embedded and neuromorphic systems, especially when using novel nanodevices featuring physics analogous to metaplasticity.
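
The metaplastic interpretation can be sketched as follows: the binary weight is the sign of a hidden real value, and updates that would flip a strongly consolidated hidden value are attenuated. The attenuation function and constants below are illustrative assumptions, not necessarily the rule used in the paper.

```python
# Sketch of the metaplastic idea: binary weights are the sign of hidden real
# values, and updates that push a large-|hidden| weight toward a sign flip are
# attenuated, so well-consolidated weights resist being overwritten.
import numpy as np

def metaplastic_step(hidden, grad, lr=0.1, meta=1.0):
    """hidden: real-valued hidden weights; grad: gradient w.r.t. binary weights."""
    update = -lr * grad
    toward_flip = np.sign(update) != np.sign(hidden)       # pushes |hidden| down
    attenuation = np.where(toward_flip,
                           1.0 - np.tanh(meta * np.abs(hidden)) ** 2,  # illustrative choice
                           1.0)
    return hidden + attenuation * update

hidden = np.array([2.0, -0.1, 0.5, -1.5])    # large |hidden| = consolidated weight
grad = np.array([1.0, 1.0, -1.0, -1.0])
new_hidden = metaplastic_step(hidden, grad)
print(np.sign(hidden), np.sign(new_hidden))  # consolidated signs resist flipping
```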

16.
Front Neurosci ; 15: 781786, 2021.
Article in English | MEDLINE | ID: mdl-35069101

ABSTRACT

With recent advances in the field of artificial intelligence (AI) such as binarized neural networks (BNNs), a wide variety of vision applications with energy-optimized implementations have become possible at the edge. In such networks, the first layer is implemented with high precision, which poses a challenge for deploying a uniform hardware mapping of the network. Stochastic computing can convert such high-precision computations into a sequence of binarized operations while maintaining equivalent accuracy. In this work, we propose a fully binarized hardware-friendly computation engine based on stochastic computing as a proof of concept for vision applications involving multi-channel inputs. Stochastic sampling is performed by sampling from a non-uniform (normal) distribution based on analog hardware sources. We first validate the benefits of the proposed pipeline on the CIFAR-10 dataset. To further demonstrate its application in real-world scenarios, we present a case study of microscopy image diagnostics for pathogen detection. We then evaluate the benefits of implementing such a pipeline using OxRAM-based circuits for stochastic sampling as well as in-memory-computing-based binarized multiplication. The proposed implementation is about 1,000 times more energy efficient than conventional floating-point digital implementations, with memory savings of a factor of 45.
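
The underlying stochastic-computing trick, representing a value as a random bitstream so that multiplication becomes a bitwise AND, can be shown in a few lines. The software RNG below stands in for the normal-distribution analog sampling source described in the abstract.

```python
# Classic stochastic-computing sketch: values in [0, 1] become random
# bitstreams and multiplication reduces to a bitwise AND; accuracy grows with
# stream length. A software RNG replaces the analog sampling source.
import numpy as np

def to_stream(value, length, rng):
    return rng.random(length) < value           # Bernoulli bitstream, P(1) = value

rng = np.random.default_rng(0)
length = 4096
a, b = 0.8, 0.3
stream_product = (to_stream(a, length, rng) & to_stream(b, length, rng)).mean()
print(stream_product, "vs exact", a * b)        # stream estimate of the product
```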

17.
Front Neurosci ; 15: 633674, 2021.
Article in English | MEDLINE | ID: mdl-33679315

ABSTRACT

Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach constitutes a major lead toward learning-capable neuromorphic systems and comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases, during which the network is first allowed to evolve freely and is then "nudged" toward a target; the weights of the network are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach for training recurrent neural networks, when nudging is performed with infinitely small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that canceling it allows the training of deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, which approaches that achieved by BPTT and constitutes a major improvement over standard Equilibrium Propagation, which gives 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically plausible approach for computing error gradients in deep neuromorphic systems.
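
A toy version of Equilibrium Propagation with the symmetric (+β/−β) nudging described above is sketched below on a tiny two-layer network. The relaxation dynamics, activation, and constants are illustrative assumptions; this is a conceptual sketch, not the convolutional CIFAR-10 setup of the paper.

```python
# Toy Equilibrium Propagation sketch with symmetric nudging: relax the network
# with +beta and -beta nudging of the output, then update each weight from the
# difference of local activity products between the two nudged phases.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 1
W_xh = rng.normal(scale=0.1, size=(n_in, n_hid))
W_ho = rng.normal(scale=0.1, size=(n_hid, n_out))

def rho(s):                           # hard-sigmoid activation
    return np.clip(s, 0.0, 1.0)

def relax(x, target, beta, steps=60, dt=0.2):
    """Let hidden/output states settle; beta != 0 nudges the output toward target."""
    h = np.zeros(n_hid); o = np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x @ W_xh + rho(o) @ W_ho.T)
        do = -o + rho(rho(h) @ W_ho) + beta * (target - o)
        h, o = h + dt * dh, o + dt * do
    return h, o

def ep_update(x, target, beta=0.2, lr=0.05):
    global W_xh, W_ho
    h_p, o_p = relax(x, target, +beta)          # positively nudged phase
    h_m, o_m = relax(x, target, -beta)          # negatively nudged phase
    # symmetric-nudge estimate of the local weight update
    W_xh += lr / (2 * beta) * (np.outer(x, rho(h_p)) - np.outer(x, rho(h_m)))
    W_ho += lr / (2 * beta) * (np.outer(rho(h_p), rho(o_p)) - np.outer(rho(h_m), rho(o_m)))

x = np.array([1.0, 0.0, 1.0, 0.0]); target = np.array([1.0])
for _ in range(50):
    ep_update(x, target)
print(relax(x, target, beta=0.0)[1])            # free-phase output after training
```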

18.
iScience ; 24(3): 102222, 2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33748709

ABSTRACT

Finding spike-based learning algorithms that can be implemented within the local constraints of neuromorphic systems, while achieving high accuracy, remains a formidable challenge. Equilibrium propagation is a promising alternative to backpropagation because it involves only local computations, but hardware-oriented studies have so far focused on rate-based networks. In this work, we develop a spiking neural network algorithm called EqSpike, compatible with neuromorphic systems, which learns by equilibrium propagation. Through simulations, we obtain a test recognition accuracy of 97.6% on the MNIST handwritten digits dataset (Mixed National Institute of Standards and Technology), similar to rate-based equilibrium propagation and comparing favorably to alternative learning techniques for spiking neural networks. We show that EqSpike implemented in silicon neuromorphic technology could reduce the energy consumption of inference and training by three and two orders of magnitude, respectively, compared to graphics processing units. Finally, we show that during learning, EqSpike weight updates exhibit a form of spike-timing-dependent plasticity, highlighting a possible connection with biology.

19.
Adv Mater ; 33(17): e2008135, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33738866

ABSTRACT

Metamaterials present the possibility of artificially generating advanced functionalities through engineering of their internal structure. Artificial spin networks, in which a large number of nanoscale magnetic elements are coupled together, are promising metamaterial candidates that enable control of collective magnetic behavior through tuning of the local interactions between elements. In this work, the motion of magnetic domain walls in an artificial spin network leads to a tunable stochastic response of the metamaterial, which can be tailored through an external magnetic field and local lattice modifications. This type of tunable stochastic network produces a controllable random response by exploiting the intrinsic stochasticity of magnetic domain-wall motion at the nanoscale. An iconic demonstration used to illustrate the control of randomness is the Galton board, in which multiple balls fall through an array of pegs to generate a bell-shaped distribution that can be modified via the array spacing or the tilt of the board. A nanoscale recreation of this experiment using an artificial spin network is employed to demonstrate tunable stochasticity. Such tunable stochastic networks open new paths toward post-von Neumann computing architectures such as Bayesian sensing or random neural networks, in which stochasticity is harnessed to efficiently perform complex computational tasks.
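
The Galton-board analogy is easy to reproduce numerically: each peg sends a ball left or right with a probability that can be biased, much as a field or lattice modification biases the stochastic response. The sketch below is a pure statistics illustration and does not model the magnetic device.

```python
# Numerical analogue of the Galton-board demonstration: each "peg" deflects a
# ball right with probability p_right; biasing p_right shifts the bell-shaped
# distribution, the way tilting the board (or applying a field) would.
import numpy as np

def galton(n_balls=10000, n_rows=12, p_right=0.5, seed=0):
    rng = np.random.default_rng(seed)
    steps = rng.random((n_balls, n_rows)) < p_right   # True = bounce right
    return steps.sum(axis=1)                          # final bin of each ball

for p in (0.5, 0.65):                                 # unbiased vs "tilted" board
    bins = np.bincount(galton(p_right=p), minlength=13)
    print(f"p_right={p}:", bins)
```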

20.
Sci Rep ; 10(1): 328, 2020 Jan 15.
Article in English | MEDLINE | ID: mdl-31941917

ABSTRACT

The reservoir computing neural network architecture is widely used to test hardware systems for neuromorphic computing. One of the preferred tasks for benchmarking such devices is automatic speech recognition. This task requires acoustic transformations from sound waveforms with varying amplitudes to frequency-domain maps, which can be seen as a feature-extraction step. Depending on the conversion method, these transformations sometimes obscure the contribution of the neuromorphic hardware to the overall speech recognition performance. Here, we quantify and separate the contributions of the acoustic transformations and of the neuromorphic hardware to the speech recognition success rate. We show that the nonlinearity in the acoustic transformation plays a critical role in feature extraction. We compute the gain in word success rate provided by a reservoir computing device compared to the acoustic transformation alone, and show that it is an appropriate benchmark for comparing different hardware. Finally, we experimentally and numerically quantify the impact of the different acoustic transformations for neuromorphic hardware based on magnetic nano-oscillators.
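
The benchmarking logic, training the same linear readout on the acoustic features alone and on those features passed through a nonlinear stage, and reporting the gain, can be sketched as follows. The data, the random-projection "reservoir", and the ridge readout are synthetic stand-ins, not the acoustic transformations or hardware studied in the paper.

```python
# Sketch of the benchmarking idea: compare the accuracy of one linear readout
# trained (i) on the features alone and (ii) on a nonlinear expansion of the
# same features, and attribute the difference to the nonlinear (hardware) stage.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))                                # stand-in "acoustic features"
y = (np.sin(X[:, 0] * 3) + X[:, 1] ** 2 > 1).astype(int)      # nonlinear labels

def reservoir(X, n_nodes=200):
    W_in = rng.normal(scale=0.5, size=(X.shape[1], n_nodes))
    return np.tanh(X @ W_in)                                  # static nonlinear expansion

def readout_accuracy(F, y, reg=1e-2):
    n_train = len(F) // 2
    w = np.linalg.solve(F[:n_train].T @ F[:n_train] + reg * np.eye(F.shape[1]),
                        F[:n_train].T @ (2 * y[:n_train] - 1))
    return np.mean((F[n_train:] @ w > 0).astype(int) == y[n_train:])

acc_features = readout_accuracy(X, y)
acc_reservoir = readout_accuracy(reservoir(X), y)
print(f"features only: {acc_features:.2f}, with nonlinear stage: {acc_reservoir:.2f}")
```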
