Results 1 - 20 of 42

1.
Nat Commun ; 15(1): 1974, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438350

ABSTRACT

Artificial Intelligence (AI) is currently experiencing a boom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units operating in parallel. The low communication bandwidth between memory and processing units in conventional von Neumann machines does not support the requirements of emerging applications that rely extensively on large sets of data. More recent computing paradigms, such as high parallelization and near-memory computing, help alleviate the data communication bottleneck to some extent, but paradigm-shifting concepts are required. Memristors, a novel beyond-complementary-metal-oxide-semiconductor (CMOS) technology, are a promising choice for memory devices due to their unique intrinsic device-level properties, enabling both storing and computing with a small, massively parallel footprint at low power. Theoretically, this directly translates to a major boost in energy efficiency and computational throughput, but various practical challenges remain. In this work, we review the latest efforts toward hardware-based memristive artificial neural networks (ANNs), describing in detail the working principles of each block and the different design alternatives with their advantages and disadvantages, as well as the tools required for accurate estimation of performance metrics. Ultimately, we aim to provide a comprehensive protocol of the materials and methods involved in memristive neural networks, both for those starting out in this field and for experts seeking a holistic overview.
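
As context for the crossbar building block reviewed above, the following minimal sketch (illustrative parameters only, not taken from the paper) shows how a memristive array performs an analog matrix-vector multiplication through Ohm's law and Kirchhoff's current law:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical crossbar: conductances G (siemens) encode the weight matrix.
    G = rng.uniform(1e-6, 1e-4, size=(64, 32))   # 64 word lines x 32 bit lines
    v = rng.uniform(0.0, 0.2, size=64)           # input voltages on the word lines

    # Ohm's law gives per-device currents; Kirchhoff's current law sums them
    # along each bit line, so one read delivers the whole matrix-vector product.
    i_out = v @ G
    print(i_out.shape)                           # (32,) column currents

Every device contributes in parallel, which is the source of the constant-time matrix-vector multiplication claimed for in-memory computing throughout this list.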

2.
Nat Commun ; 14(1): 5282, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37648721

ABSTRACT

Analog in-memory computing, a promising approach for energy-efficient acceleration of deep learning workloads, computes matrix-vector multiplications only approximately, due to nonidealities that are often nondeterministic or nonlinear. This can adversely impact the achievable inference accuracy. Here, we develop a hardware-aware retraining approach to systematically examine the accuracy of analog in-memory computing across multiple network topologies, and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, we improve significantly on earlier retraining approaches. We show that many larger-scale deep neural networks, including convolutional networks, recurrent networks, and transformers, can in fact be successfully retrained to show iso-accuracy with the floating-point implementation. Our results further suggest that nonidealities that add noise to the inputs or outputs, rather than the weights, have the largest impact on accuracy, and that recurrent networks are particularly robust to all nonidealities.
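
A hedged sketch of the general hardware-aware retraining idea (not the paper's crossbar model): inject nonideality-like noise into the forward pass so the trained weights become tolerant of it. The noise magnitude and the single-layer least-squares setup are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_mvm(W, x, rel_noise=0.02):
        """Ideal MVM plus additive output noise emulating analog nonidealities."""
        y = W @ x
        return y + rel_noise * np.std(y) * rng.standard_normal(y.shape)

    W = 0.1 * rng.standard_normal((10, 20))
    x = rng.standard_normal(20)
    target = rng.standard_normal(10)

    for step in range(200):
        err = noisy_mvm(W, x) - target       # gradients see the noisy hardware model
        W -= 0.01 * np.outer(err, x)         # SGD step on the squared error

    print(np.linalg.norm(W @ x - target))    # small residual despite noisy training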

3.
ACS Nano ; 17(13): 11994-12039, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37382380

ABSTRACT

Memristive technology has been rapidly emerging as a potential alternative to traditional CMOS technology, which is facing fundamental limitations in its development. Since oxide-based resistive switches were demonstrated as memristors in 2008, memristive devices have garnered considerable attention due to their biomimetic memory properties, which promise to significantly reduce power consumption in computing applications. Here, we provide a comprehensive overview of recent advances in memristive technology, including memristive devices, theory, algorithms, architectures, and systems. In addition, we discuss research directions for various applications of memristive technology, including hardware accelerators for artificial intelligence, in-sensor computing, and probabilistic computing. Finally, we provide a forward-looking perspective on the future of memristive technology, outlining the challenges and opportunities for further research and innovation in this field. By providing an up-to-date overview of the state of the art in memristive technology, this review aims to inform and inspire further research.

4.
Nat Nanotechnol ; 18(5): 479-485, 2023 May.
Article in English | MEDLINE | ID: mdl-36997756

ABSTRACT

Disentangling the attributes of a sensory signal is central to sensory perception and cognition and hence is a critical task for future artificial intelligence systems. Here we present a compute engine capable of efficiently factorizing high-dimensional holographic representations of combinations of such attributes, by exploiting the computation-in-superposition capability of brain-inspired hyperdimensional computing and the intrinsic stochasticity of analogue in-memory computing based on nanoscale memristive devices. This iterative in-memory factorizer is shown to solve problems at least five orders of magnitude larger than those tractable otherwise, while substantially lowering the computational time and space complexity. We present a large-scale experimental demonstration of the factorizer employing two in-memory compute chips based on phase-change memristive devices. The dominant matrix-vector multiplication operations take constant time, irrespective of the size of the matrix, thus reducing the computational time complexity to merely the number of iterations. Moreover, we experimentally demonstrate the ability to reliably and efficiently factorize visual perceptual representations.
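
For intuition, here is a hedged software sketch of an iterative resonator-style factorizer over bipolar hyperdimensional vectors (dimensions, codebook sizes, and the two-factor setup are illustrative assumptions; the in-memory stochasticity is not modelled):

    import numpy as np

    rng = np.random.default_rng(2)
    D, M = 1024, 20                            # HD vector dimension, codebook size
    sign = lambda v: np.where(v >= 0, 1, -1)

    X = sign(rng.standard_normal((M, D)))      # codebook for attribute 1
    Y = sign(rng.standard_normal((M, D)))      # codebook for attribute 2
    s = X[3] * Y[7]                            # bound (element-wise) product to factorize

    x_hat, y_hat = sign(X.sum(0)), sign(Y.sum(0))    # start from superpositions
    for _ in range(50):
        # unbind one factor estimate, then "clean up" against the codebook;
        # each step is dominated by matrix-vector multiplications
        x_hat = sign(X.T @ (X @ (s * y_hat)))
        y_hat = sign(Y.T @ (Y @ (s * x_hat)))

    print(np.argmax(X @ x_hat), np.argmax(Y @ y_hat))   # recovers 3 and 7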

5.
IEEE Trans Neural Netw Learn Syst ; 34(12): 10993-10998, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35333724

ABSTRACT

Memory-augmented neural networks enhance a neural network with an external key-value (KV) memory whose complexity is typically dominated by the number of support vectors in the key memory. We propose a generalized KV memory that decouples its dimension from the number of support vectors by introducing a free parameter that can arbitrarily add or remove redundancy in the key memory representation. In effect, it provides an additional degree of freedom to flexibly control the tradeoff between robustness and the resources required to store and compute the generalized KV memory. This is particularly useful for realizing the key memory on in-memory computing hardware, where it exploits nonideal but extremely efficient nonvolatile memory devices for dense storage and computation. Experimental results show that adapting this parameter on demand effectively mitigates up to 44% of the nonidealities, at equal accuracy and number of devices, without any need for neural network retraining.
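
One way to picture the decoupling (a sketch under assumptions; the paper's exact construction may differ) is a random map that re-expresses an n-entry key memory in a freely chosen dimension m, where m > n adds redundancy and m < n removes it:

    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 32, 64                                  # support vectors, key dimension
    m = 128                                        # free parameter: generalized dimension

    K = rng.standard_normal((n, d))                # original key memory
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed redundancy-adding map
    G = A @ K                                      # generalized key memory (m x d)

    q = K[5] + 0.3 * rng.standard_normal(d)        # noisy query near support key 5
    scores = A.T @ (G @ q)                         # A.T @ A ~ I, so scores ~ K @ q
    print(np.argmax(scores))                       # retrieves entry 5

Larger m averages out more per-device error at the cost of more devices, which is the robustness/resources tradeoff described above.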

6.
Adv Mater ; 35(37): e2201238, 2023 Sep.
Article in English | MEDLINE | ID: mdl-35570382

ABSTRACT

Nanoscale resistive memory devices are being explored for neuromorphic and in-memory computing. However, non-ideal device characteristics of read noise and resistance drift pose significant challenges to the achievable computational precision. Here, it is shown that there is an additional non-ideality that can impact computational precision, namely bias-polarity-dependent current flow. Using phase-change memory (PCM) as a model system, it is shown that this "current-voltage" non-ideality arises from both the material and geometrical properties of the devices. Further, we discuss the detrimental effects of such bipolar asymmetry on in-memory matrix-vector multiply (MVM) operations and provide a scheme to compensate for it.
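
To make the effect concrete, here is a toy model (the asymmetry factor and the two-phase workaround are assumptions for illustration, not necessarily the paper's compensation scheme):

    import numpy as np

    rng = np.random.default_rng(4)
    G = rng.uniform(1e-6, 1e-4, (16, 16))
    alpha = 1.1   # assumed asymmetry: negative-polarity reads conduct 10% more current

    def mvm_asymmetric(v):
        vp, vn = np.clip(v, 0, None), np.clip(v, None, 0)
        return vp @ G + alpha * (vn @ G)          # polarity-dependent current flow

    def mvm_two_phase(v):
        vp, vn = np.clip(v, 0, None), -np.clip(v, None, 0)
        return vp @ G - vn @ G                    # two unipolar reads, subtracted digitally

    v = rng.standard_normal(16)
    print(np.max(np.abs(mvm_asymmetric(v) - v @ G)))   # error caused by the asymmetry
    print(np.max(np.abs(mvm_two_phase(v) - v @ G)))    # ~0 with the unipolar workaround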

7.
Sci Adv ; 8(22): eabn3243, 2022 Jun 03.
Article in English | MEDLINE | ID: mdl-35648858

ABSTRACT

With more and more aspects of modern life and scientific tools becoming digitized, the amount of data being generated is growing exponentially. Fast and efficient statistical processing, such as identifying correlations in big datasets, is therefore becoming increasingly important, and the various compute bottlenecks in modern digital machines have necessitated new computational paradigms for such processing. Here, we demonstrate one such novel paradigm through the development of an integrated phase-change photonics engine. The computational memory engine exploits the accumulative property of Ge2Sb2Te5 phase-change cells and the wavelength-division multiplexing property of optics to deliver fully parallelized and colocated temporal correlation detection. We investigate this property and present an experimental demonstration of identifying real-time correlations in data streams from the social media platform Twitter and from high-traffic computing nodes in data centers. Our results demonstrate the use case of high-speed integrated photonics in accelerating statistical analysis methods.
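
A hedged software analogue of the accumulative correlation detector (thresholds, rates, and stream construction are all assumptions): each cell accumulates a little whenever its stream is active together with others, so correlated streams accumulate fastest:

    import numpy as np

    rng = np.random.default_rng(5)
    T, N = 2000, 8                                 # time steps, number of streams
    base = rng.random(T) < 0.2
    streams = np.array([base if i < 3 else rng.random(T) < 0.2 for i in range(N)])
    # streams 0-2 fire together (correlated); the rest are independent

    acc = np.zeros(N)                              # stands in for accumulative PCM cells
    for t in range(T):
        active = streams[:, t]
        if active.sum() >= 2:                      # a co-occurrence event
            acc[active] += 1.0                     # nudge every active cell

    print(sorted(np.argsort(acc)[-3:]))            # [0, 1, 2]: correlated streams lead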

8.
Science ; 376(6597): eabj9979, 2022 06 03.
Article in English | MEDLINE | ID: mdl-35653464

ABSTRACT

Memristive devices, which combine a resistor with memory functions such that voltage pulses can change their resistance (and hence their memory state) in a nonvolatile manner, are beginning to be implemented in integrated circuits for memory applications. However, memristive devices could have applications in many other technologies, such as non-von Neumann in-memory computing in crossbar arrays, random number generation for data security, and radio-frequency switches for mobile communications. Progress toward the integration of memristive devices in commercial solid-state electronic circuits and other potential applications will depend on performance and reliability challenges that still need to be addressed, as described here.

9.
Nat Commun ; 13(1): 3765, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35773285

ABSTRACT

Analogue memory-based deep neural networks provide energy-efficiency and per-area throughput gains relative to state-of-the-art digital counterparts such as graphics processing units. Recent advances focus largely on hardware-aware algorithmic training and improvements to circuits, architectures, and memory devices. Optimal translation of software-trained weights into analogue hardware weights, given the plethora of complex memory non-idealities, represents an equally important task. We report a generalised computational framework that automates the crafting of complex weight programming strategies to minimise accuracy degradation during inference, particularly over time. The framework is agnostic to network structure and generalises well across recurrent, convolutional, and transformer neural networks. As a highly flexible numerical heuristic, the approach accommodates arbitrary device-level complexity, making it potentially relevant for a variety of analogue memories. By quantifying the limit of achievable inference accuracy, it also enables analogue memory-based deep neural network accelerators to reach their full inference potential.


Subject(s)
Neural Networks, Computer; Software; Computers
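
The baseline such a framework improves upon can be sketched as follows (a naive differential-pair programming strategy with assumed write noise and conductance range, not the paper's optimized strategy):

    import numpy as np

    rng = np.random.default_rng(6)
    g_max, sigma = 25e-6, 0.5e-6            # assumed conductance range and write noise (S)

    def program(target_g):
        """One-shot programming with assumed Gaussian write error."""
        noisy = target_g + sigma * rng.standard_normal(np.shape(target_g))
        return np.clip(noisy, 0.0, g_max)

    # Differential mapping: w ~ (g_plus - g_minus) * scale, a common analog scheme.
    w = 0.3 * rng.standard_normal(1000)
    scale = np.max(np.abs(w)) / g_max
    g_plus = program(np.maximum(w, 0) / scale)
    g_minus = program(np.maximum(-w, 0) / scale)
    w_hw = (g_plus - g_minus) * scale

    print(np.sqrt(np.mean((w_hw - w) ** 2)))   # error a programming framework would minimise
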
10.
Nanomaterials (Basel) ; 12(10)2022 May 17.
Article in English | MEDLINE | ID: mdl-35630924

ABSTRACT

Non-volatile memories based on phase-change materials have gained ground for applications in analog in-memory computing. Nonetheless, non-idealities inherent to the material result in device resistance variations that impair the achievable numerical precision. Projected-type phase-change memory devices reduce these non-idealities. In a projected phase-change memory, the phase-change storage mechanism is decoupled from the information-retrieval process by projecting the phase configuration of the phase-change material onto a projection liner. It has been suggested that the interface resistance between the phase-change material and the projection liner is an important parameter that dictates the efficacy of the projection. In this work, we establish a metrology framework to assess and understand the relevant structural properties of the interfaces in thin films contained in projected memory devices. Using X-ray reflectivity, X-ray diffraction, and transmission electron microscopy, we investigate the quality of the interfaces and the properties of the layers. Using Sb and Sb2Te3 phase-change materials as demonstrator examples, new deposition routes and stack designs are proposed to enhance the phase-change-material/projection-liner interface and the robustness of the material stacks in the devices.

11.
Sci Rep ; 12(1): 6488, 2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35443770

ABSTRACT

Phase Change Memory (PCM) is an emerging technology exploiting the rapid and reversible phase transition of certain chalcogenides to realize nanoscale memory elements. PCM devices are being explored as non-volatile storage-class memory and as computing elements for in-memory and neuromorphic computing. It is well known that PCM exhibits several characteristics of a memristive device. In this work, based on the essential physical attributes of PCM devices, we exploit the concept of a Dynamic Route Map (DRM) to capture the complex physics underlying these devices and to describe them as memristive devices defined by a state-dependent Ohm's law. The efficacy of the DRM is demonstrated by comparing numerical results with experimental data obtained on PCM devices.
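
The generic form of such a state-dependent Ohm's law description is i = G(x, v) * v together with a state equation dx/dt = f(x, v); the sketch below uses entirely assumed functions and parameters, not the DRM fitted in the paper:

    import numpy as np

    # Generic memristive description: a state-dependent Ohm's law i = G(x, v) * v
    # plus a state equation dx/dt = f(x, v). All functions/parameters are assumed.
    def G(x, v):
        return 1e-5 + 9e-5 * x            # conductance interpolates with state x in [0, 1]

    def f(x, v):
        return 5e3 * v * x * (1 - x)      # toy voltage-driven state dynamics

    x, dt = 0.1, 1e-6
    for _ in range(2000):                 # apply a 2 ms, 0.5 V write pulse
        v = 0.5
        x = min(max(x + f(x, v) * dt, 0.0), 1.0)
        i = G(x, v) * v                   # instantaneous current through the device

    print(x, G(x, 0.0))                   # state and low-field conductance after the pulse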

12.
Nat Nanotechnol ; 17(5): 507-513, 2022 05.
Article in English | MEDLINE | ID: mdl-35347271

ABSTRACT

In the mammalian nervous system, various synaptic plasticity rules act, either individually or synergistically, over wide-ranging timescales to enable learning and memory formation. Hence, in neuromorphic computing platforms, there is a significant need for artificial synapses that can faithfully express such multi-timescale plasticity mechanisms. Although some plasticity rules have been emulated with elaborate complementary metal-oxide-semiconductor (CMOS) and memristive circuitry, device-level hardware realizations of long-term and short-term plasticity with tunable dynamics are lacking. Here we introduce a phase-change memtransistive synapse that leverages both the non-volatility of the phase configurations and the volatility of field-effect modulation for implementing tunable plasticities. We show that these mixed-plasticity synapses can enable plasticity rules such as short-term spike-timing-dependent plasticity that help with the modelling of dynamic environments. Further, we demonstrate the efficacy of the memtransistive synapses in realizing accelerators for Hopfield neural networks for solving combinatorial optimization problems.


Subject(s)
Neuronal Plasticity; Synapses; Animals; Mammals; Neural Networks, Computer; Semiconductors
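
As a rough illustration of mixing timescales (all constants assumed; this is not the device model from the paper), a pair-based STDP update can be combined with a volatile component that decays back toward a baseline weight:

    import numpy as np

    tau_stdp, tau_short = 20e-3, 200e-3    # plasticity window and decay constants (s)
    a_plus, a_minus = 0.02, 0.024          # potentiation / depression amplitudes

    def stdp_dw(dt_spike):
        """Pair-based STDP: dt_spike = t_post - t_pre in seconds."""
        if dt_spike >= 0:
            return a_plus * np.exp(-dt_spike / tau_stdp)
        return -a_minus * np.exp(dt_spike / tau_stdp)

    w_base, w, t_prev = 0.5, 0.5, 0.0
    for t_pre, t_post in [(0.00, 0.005), (0.10, 0.095), (0.30, 0.31)]:
        w = w_base + (w - w_base) * np.exp(-(t_pre - t_prev) / tau_short)  # volatile decay
        w += stdp_dw(t_post - t_pre)                                       # pair update
        t_prev = t_pre
        print(f"t = {t_pre:.2f} s, w = {w:.3f}")
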
13.
Front Comput Neurosci ; 15: 674154, 2021.
Article in English | MEDLINE | ID: mdl-34413731

ABSTRACT

In-memory computing (IMC) is a non-von Neumann paradigm that has recently established itself as a promising approach for energy-efficient, high-throughput hardware for deep learning applications. One prominent application of IMC is performing matrix-vector multiplication in O(1) time complexity by mapping the synaptic weights of a neural-network layer onto the devices of an IMC core. However, because the pattern of execution differs significantly from previous computational paradigms, IMC requires a rethinking of the architectural design choices made when designing deep-learning hardware. In this work, we focus on application-specific IMC hardware for inference of Convolutional Neural Networks (CNNs), and provide methodologies for implementing the various architectural components of the IMC core. Specifically, we present methods for mapping synaptic weights and activations onto the memory structures and give evidence of the various trade-offs therein, such as the one between on-chip memory requirements and execution latency. Lastly, we show how to employ these methods to implement a pipelined dataflow that offers throughput and latency beyond the state of the art for image classification tasks.
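
One standard way to map a convolutional layer onto a crossbar-style IMC core (a sketch with assumed shapes; the paper's mapping methodologies are more elaborate) is to flatten each filter into a column and unroll input patches, so every output pixel becomes one in-memory MVM:

    import numpy as np

    rng = np.random.default_rng(7)
    # A 3x3 convolution with 16 input and 32 output channels maps onto one array:
    # each filter is flattened into a column of the conductance matrix.
    W = 0.1 * rng.standard_normal((32, 16, 3, 3))
    G = W.reshape(32, -1).T                 # 144 x 32 crossbar "conductances"

    def im2col(x, k=3):
        C, H, Wd = x.shape
        cols = [x[:, i:i + k, j:j + k].ravel()
                for i in range(H - k + 1) for j in range(Wd - k + 1)]
        return np.array(cols)               # each row is one receptive field

    x = rng.standard_normal((16, 8, 8))
    out = im2col(x) @ G                     # every output pixel = one in-memory MVM
    print(out.shape)                        # (36, 32): 6x6 spatial x 32 channels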

14.
Nat Commun ; 12(1): 2468, 2021 04 29.
Article in English | MEDLINE | ID: mdl-33927202

ABSTRACT

Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their ability to relearn and adapt to new data. Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented using the conventional von Neumann computer architecture. To overcome this bottleneck, we propose a robust architecture that employs a computational memory unit as the explicit memory, performing analog in-memory computation on high-dimensional (HD) vectors, while closely matching 32-bit software-equivalent accuracy. This is achieved by a content-based attention mechanism that represents unrelated items in the computational memory with uncorrelated HD vectors, whose real-valued components can be readily approximated by binary or bipolar components. Experimental results demonstrate the efficacy of our approach on few-shot image classification tasks on the Omniglot dataset using more than 256,000 phase-change memory devices. Our approach effectively merges the richness of deep neural network representations with HD computing, paving the way for robust vector-symbolic manipulations applicable in reasoning, fusion, and compression.
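
A minimal sketch of content-based attention over uncorrelated bipolar HD keys (dimensions, the sharpening factor, and the corruption rate are assumptions), showing why such representations tolerate component-level errors:

    import numpy as np

    rng = np.random.default_rng(8)
    D, n_items = 512, 10
    sign = lambda v: np.where(v >= 0, 1, -1)

    keys = sign(rng.standard_normal((n_items, D)))   # near-orthogonal bipolar HD keys
    values = rng.standard_normal((n_items, 4))

    def read(query, sharpen=8.0):
        sims = keys @ query / D                       # cosine similarity for bipolar keys
        attn = np.exp(sharpen * sims)
        return (attn / attn.sum()) @ values           # soft, content-based memory read

    flips = rng.random(D) < 0.1                       # corrupt 10% of the query bits
    q = np.where(flips, -keys[2], keys[2])
    print(np.argmax(keys @ q))                        # 2: retrieval survives the noise
    print(np.max(np.abs(read(q) - values[2])))        # small residual read error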

15.
Nat Nanotechnol ; 15(9): 812, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32678302

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

16.
Front Neurosci ; 14: 406, 2020.
Article in English | MEDLINE | ID: mdl-32477047

ABSTRACT

Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive, and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight-update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 172× improvement in the energy efficiency of the architecture when used for training a multilayer perceptron, compared with a dedicated fully digital 32-bit implementation.
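
The core of the mixed-precision idea can be sketched in a few lines (learning rate, granularity, and the single-layer least-squares task are assumptions): gradients accumulate in a high-precision digital variable, while the analog weights only ever receive coarse, granularity-sized updates:

    import numpy as np

    rng = np.random.default_rng(9)

    W_dev = np.zeros((4, 8))              # "analog" weights: only coarse updates allowed
    chi = np.zeros_like(W_dev)            # high-precision digital accumulator
    eps = 0.05                            # assumed device programming granularity

    x = rng.standard_normal(8)
    target = rng.standard_normal(4)

    for _ in range(500):
        y = W_dev @ x                     # weighted summation done in memory
        grad = np.outer(y - target, x)    # gradient computed digitally
        chi -= 0.01 * grad                # accumulate updates in high precision
        pulses = np.trunc(chi / eps)      # whole granularity steps accumulated so far
        W_dev += eps * pulses             # apply them as coarse conductance updates
        chi -= eps * pulses               # keep only the sub-granularity remainder

    print(np.linalg.norm(W_dev @ x - target))   # converges despite coarse updates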

17.
Sci Rep ; 10(1): 8248, 2020 May 19.
Article in English | MEDLINE | ID: mdl-32427898

ABSTRACT

Phase-change memory (PCM) is being actively explored for in-memory computing and neuromorphic systems. The ability of a PCM device to store a continuum of resistance values can be exploited to realize arithmetic operations such as matrix-vector multiplications or to realize synaptic efficacy in neural networks. However, the resistance variations arising from structural relaxation, 1/f noise, and changes in ambient temperature pose a key challenge. The recently proposed projected PCM concept helps mitigate these resistance variations by decoupling the physical mechanism of resistance storage from the information-retrieval process. Although the device concept has been demonstrated successfully, a comprehensive understanding of the device behavior is still lacking. Here, we develop a device model that captures two key attributes, namely resistance drift and the state dependence of resistance. The former refers to the temporal evolution of resistance, while the latter refers to the dependence of the device resistance on the phase configuration of the phase-change material. The study provides significant insights into the role of interfacial resistance in these devices. The model is experimentally validated on projected PCM devices based on antimony and a metal nitride, fabricated in a lateral device geometry, and is also used to provide guidelines for material selection and device engineering.
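
Resistance drift in PCM is commonly described by the empirical power law R(t) = R0 (t/t0)^nu; the snippet below evaluates it with an assumed drift exponent (projected devices are reported to show substantially smaller nu than the value used here):

    # Empirical PCM drift law: R(t) = R0 * (t / t0) ** nu. The exponent nu used
    # here is an assumed value for illustration.
    def resistance(t, R0=1e5, t0=1.0, nu=0.05):
        return R0 * (t / t0) ** nu

    for t in [1.0, 10.0, 100.0, 1e3, 1e4]:        # seconds after programming
        print(f"t = {t:8.0f} s  R = {resistance(t):.3e} ohm")
    # nu = 0.05 gives roughly +12% resistance per decade of time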

18.
Sci Rep ; 10(1): 8080, 2020 05 15.
Article in English | MEDLINE | ID: mdl-32415108

ABSTRACT

Spiking neural networks (SNNs) are computational models inspired by the brain's ability to naturally encode and process information in the time domain. The added temporal dimension is believed to render them more computationally efficient than conventional artificial neural networks, though their full computational capabilities are yet to be explored. Recently, in-memory computing architectures based on non-volatile memory crossbar arrays have shown great promise for implementing parallel computations in artificial and spiking neural networks. In this work, we evaluate the feasibility of realizing high-performance, event-driven, in-situ supervised learning systems using nanoscale and stochastic analog memory synapses. For the first time, the potential of analog memory synapses to generate precisely timed spikes in SNNs is experimentally demonstrated. The experiment targets applications that directly integrate spike-encoded signals from biomimetic sensors with in-memory-computing-based learning systems to generate precisely timed control spikes for neuromorphic actuators. More than 170,000 phase-change memory (PCM) based synapses from our prototype chip were trained with an event-driven learning rule to generate spike patterns with more than 85% of the spikes within a 25 ms tolerance interval in a 1250 ms long spike pattern. We observe that the accuracy is mainly limited by the imprecision of device programming and the temporal drift of conductance values. We show that an array-level scaling scheme can significantly improve the retention of the trained SNN states in the presence of conductance drift in the PCM. Combining the computational potential of supervised SNNs with the parallel compute power of in-memory computing, this work paves the way for the next generation of efficient brain-inspired systems.


Subject(s)
Action Potentials; Brain/physiology; Memory/physiology; Neural Networks, Computer; Neurons/physiology; Supervised Machine Learning; Synapses/physiology; Algorithms; Humans; Pattern Recognition, Automated
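
In the same spirit (a toy, with every constant assumed and no device nonidealities), an event-driven rule can adjust synaptic weights until a leaky integrate-and-fire neuron emits a spike at a target time:

    import numpy as np

    dt, tau, thresh = 1e-3, 20e-3, 1.0            # step, membrane constant, threshold
    t_in = np.array([5e-3, 12e-3, 18e-3, 30e-3, 41e-3])   # presynaptic spike times (s)
    w = np.full(5, 0.30)                          # initial synaptic weights
    t_target = 30e-3                              # desired output spike time

    def first_spike(w):
        v = 0.0
        for step in range(60):
            t = step * dt
            v *= np.exp(-dt / tau)                # leaky integration
            v += w[np.isclose(t_in, t)].sum()     # add weights of arriving spikes
            if v >= thresh:
                return t
        return None

    for _ in range(100):
        t_spike = first_spike(w)
        if t_spike is not None and abs(t_spike - t_target) < 1e-6:
            break                                 # spike lands on target: done
        if t_spike is None or t_spike > t_target:
            w[t_in <= t_target] += 0.02           # silent or late: strengthen causal inputs
        else:
            w[t_in <= t_spike] -= 0.02            # too early: weaken the responsible inputs

    print(first_spike(w))                         # ~0.030 s
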
19.
Nat Commun ; 11(1): 2473, 2020 05 18.
Article in English | MEDLINE | ID: mdl-32424184

ABSTRACT

In-memory computing using resistive memory devices is a promising non-von Neumann approach for making energy-efficient deep learning inference hardware. However, due to device variability and noise, the network needs to be trained in a specific way so that transferring the digitally trained weights to the analog resistive memory devices will not result in significant loss of accuracy. Here, we introduce a methodology to train ResNet-type convolutional neural networks that results in no appreciable accuracy loss when transferring weights to phase-change memory (PCM) devices. We also propose a compensation technique that exploits the batch normalization parameters to improve the accuracy retention over time. We achieve a classification accuracy of 93.7% on CIFAR-10 and a top-1 accuracy of 71.6% on ImageNet benchmarks after mapping the trained weights to PCM. Our hardware results on CIFAR-10 with ResNet-32 demonstrate an accuracy above 93.5% retained over a one-day period, where each of the 361,722 synaptic weights is programmed on just two PCM devices organized in a differential configuration.
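
One plausible reading of the batch-normalization compensation idea (an idealized sketch with assumed drift law and calibration; the paper's technique may differ in detail): conductance drift scales MVM outputs roughly uniformly over time, so a measured global factor can be folded into the batch-norm parameters:

    import numpy as np

    rng = np.random.default_rng(10)
    G0 = rng.uniform(1e-6, 2e-5, (32, 16))        # conductances right after programming
    gamma, beta = np.ones(16), np.zeros(16)       # batch-norm scale and shift

    def drifted(G0, t, nu=0.06, t0=1.0):
        return G0 * (t / t0) ** (-nu)             # conductance decays as resistance drifts

    x = rng.standard_normal(32)
    y_ref = (x @ G0) * gamma + beta               # output at programming time

    t = 86400.0                                   # one day later
    alpha = np.median(drifted(G0, t) / G0)        # calibration: measured global decay
    y_comp = (x @ drifted(G0, t)) * (gamma / alpha) + beta
    print(np.max(np.abs(y_comp - y_ref)))         # ~0: compensation folds into gamma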

20.
Nat Nanotechnol ; 15(7): 529-544, 2020 07.
Article in English | MEDLINE | ID: mdl-32231270

ABSTRACT

Traditional von Neumann computing systems involve separate processing and memory units. However, data movement is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from traditional systems, and one such non-von Neumann computational approach is in-memory computing, whereby certain computational tasks are performed in place, in the memory itself, by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices, as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning, and stochastic computing.
