Results 1 - 19 of 19
1.
Nat Commun ; 15(1): 1974, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438350

ABSTRACT

Artificial Intelligence (AI) is currently experiencing a boom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units operating in parallel. The low communication bandwidth between memory and processing units in conventional von Neumann machines does not support the requirements of emerging applications that rely extensively on large sets of data. More recent computing paradigms, such as high parallelization and near-memory computing, help alleviate the data-communication bottleneck to some extent, but paradigm-shifting concepts are required. Memristors, a novel beyond-complementary metal-oxide-semiconductor (CMOS) technology, are a promising choice for memory devices because their unique intrinsic device-level properties enable both storing and computing with a small, massively parallel footprint at low power. In theory, this translates directly into a major boost in energy efficiency and computational throughput, but various practical challenges remain. In this work we review the latest efforts toward hardware-based memristive artificial neural networks (ANNs), describing in detail the working principles of each building block and the different design alternatives with their respective advantages and disadvantages, as well as the tools required for accurate estimation of performance metrics. Ultimately, we aim to provide a comprehensive protocol of the materials and methods involved in memristive neural networks, both for those starting out in this field and for experts looking for a holistic approach.
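
The central primitive behind such memristive ANNs is the analog matrix-vector multiplication performed on a crossbar of conductances. A minimal sketch (ours, not the review's; device values and the differential weight mapping are generic conventions, not this paper's specifics):

```python
# Weights live as device conductances G[i][j]; applying voltages V[j] to the
# columns yields, by Ohm's and Kirchhoff's laws, row currents
# I[i] = sum_j G[i][j] * V[j], i.e. a matrix-vector product computed in place.
import numpy as np

def crossbar_mvm(G: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Ideal crossbar: row currents = conductance matrix times column voltages."""
    return G @ V

def to_conductance_pair(W: np.ndarray, g_max: float = 1e-4):
    """Map signed weights onto device pairs (G+ minus G-), since physical
    conductances are non-negative; g_max is an assumed device limit."""
    scale = g_max / np.max(np.abs(W))
    return np.clip(W, 0, None) * scale, np.clip(-W, 0, None) * scale, scale

W, x = np.random.randn(4, 8), np.random.randn(8)
G_pos, G_neg, s = to_conductance_pair(W)
I = crossbar_mvm(G_pos, x) - crossbar_mvm(G_neg, x)   # differential read-out
assert np.allclose(I / s, W @ x)
```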

2.
Nat Commun ; 14(1): 5282, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37648721

ABSTRACT

Analog in-memory computing, a promising approach for energy-efficient acceleration of deep learning workloads, computes matrix-vector multiplications only approximately, owing to nonidealities that are often non-deterministic or nonlinear. This can adversely impact the achievable inference accuracy. Here, we develop a hardware-aware retraining approach to systematically examine the accuracy of analog in-memory computing across multiple network topologies, and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, we improve significantly on earlier retraining approaches. We show that many larger-scale deep neural networks, including convnets, recurrent networks, and transformers, can in fact be successfully retrained to show iso-accuracy with the floating-point implementation. Our results further suggest that nonidealities that add noise to the inputs or outputs, rather than to the weights, have the largest impact on accuracy, and that recurrent networks are particularly robust to all nonidealities.
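
To illustrate the general idea of hardware-aware retraining (a minimal sketch under our own assumptions; the paper's crossbar model is considerably more detailed), the forward pass can be perturbed with weight and output noise during training so the learned weights stay accurate under those non-idealities at inference time:

```python
# Assumed noise magnitudes; not the paper's calibrated crossbar model.
import torch
import torch.nn as nn

class NoisyAnalogLinear(nn.Linear):
    def __init__(self, in_f, out_f, w_noise=0.02, out_noise=0.05):
        super().__init__(in_f, out_f)
        self.w_noise, self.out_noise = w_noise, out_noise

    def forward(self, x):
        if self.training:
            # Multiplicative programming noise on weights, additive noise on outputs.
            w = self.weight * (1 + self.w_noise * torch.randn_like(self.weight))
            y = nn.functional.linear(x, w, self.bias)
            return y + self.out_noise * y.std().detach() * torch.randn_like(y)
        return super().forward(x)
```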

3.
Adv Mater ; 35(37): e2201238, 2023 Sep.
Article in English | MEDLINE | ID: mdl-35570382

ABSTRACT

Nanoscale resistive memory devices are being explored for neuromorphic and in-memory computing. However, non-ideal device characteristics of read noise and resistance drift pose significant challenges to the achievable computational precision. Here, it is shown that there is an additional non-ideality that can impact computational precision, namely the bias-polarity-dependent current flow. Using phase-change memory (PCM) as a model system, it is shown that this "current-voltage" non-ideality arises both from the material and geometrical properties of the devices. Further, we discuss the detrimental effects of such bipolar asymmetry on in-memory matrix-vector multiply (MVM) operations and provide a scheme to compensate for it.
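
A toy illustration of how such bipolar asymmetry distorts an MVM, and how a per-polarity digital correction could compensate for it (the asymmetry factor alpha and the two-phase read are our hypothetical simplifications, not the paper's scheme):

```python
# Toy model: current for V < 0 flows through an effectively scaled
# conductance alpha * G (alpha is an assumed, calibrated asymmetry factor).
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 8))      # nominal conductances
alpha = 0.9                                    # assumed bipolar asymmetry

def asymmetric_mvm(G, V, alpha):
    Vp, Vn = np.clip(V, 0, None), np.clip(V, None, 0)
    return G @ Vp + alpha * (G @ Vn)

V = rng.standard_normal(8)
I_ideal = G @ V
# Two-phase read: apply each polarity separately, then rescale the
# negative-polarity result digitally before summing.
Vp, Vn = np.clip(V, 0, None), np.clip(V, None, 0)
I_comp = asymmetric_mvm(G, Vp, alpha) + asymmetric_mvm(G, Vn, alpha) / alpha
assert np.allclose(I_comp, I_ideal)
```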

4.
Science ; 376(6597): eabj9979, 2022 06 03.
Article in English | MEDLINE | ID: mdl-35653464

ABSTRACT

Memristive devices, which combine a resistor with memory functions such that voltage pulses can change their resistance (and hence their memory state) in a nonvolatile manner, are beginning to be implemented in integrated circuits for memory applications. However, memristive devices could have applications in many other technologies, such as non-von Neumann in-memory computing in crossbar arrays, random number generation for data security, and radio-frequency switches for mobile communications. Progress toward the integration of memristive devices in commercial solid-state electronic circuits and other potential applications will depend on performance and reliability challenges that still need to be addressed, as described here.

5.
Nat Commun ; 13(1): 3765, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35773285

ABSTRACT

Analogue memory-based deep neural networks provide gains in energy efficiency and per-area throughput relative to state-of-the-art digital counterparts such as graphics processing units. Recent advances focus largely on hardware-aware algorithmic training and improvements to circuits, architectures, and memory devices. Optimal translation of software-trained weights into analogue hardware weights, given the plethora of complex memory non-idealities, represents an equally important task. We report a generalised computational framework that automates the crafting of complex weight-programming strategies to minimise accuracy degradation during inference, particularly over time. The framework is agnostic to network structure and generalises well across recurrent, convolutional, and transformer neural networks. As a highly flexible numerical heuristic, the approach accommodates arbitrary device-level complexity, making it potentially relevant for a variety of analogue memories. By quantifying the limit of achievable inference accuracy, it also enables analogue memory-based deep neural network accelerators to reach their full inference potential.
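
For context, a generic program-and-verify loop, a common baseline that such weight-programming frameworks refine, might look as follows (a sketch under assumed noise parameters, not the authors' framework):

```python
# Program-and-verify in normalized conductance units; tolerance, pulse
# response, and noise levels are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)

def program_and_verify(g_target, tol=0.02, max_pulses=20):
    g = 0.0
    for _ in range(max_pulses):
        err = g_target - g                    # read back and compare
        if abs(err) <= tol:
            break
        # Each pulse moves the conductance toward the target, with
        # stochastic cycle-to-cycle variability in its effect.
        g += err * rng.uniform(0.3, 0.9) + 0.01 * rng.standard_normal()
    return g

print(program_and_verify(0.8))                # typically lands within +/-0.02
```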


Subject(s)
Neural Networks (Computer), Software, Computers
6.
Sci Rep ; 12(1): 6488, 2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35443770

ABSTRACT

Phase-change memory (PCM) is an emerging technology that exploits the rapid and reversible phase transition of certain chalcogenides to realize nanoscale memory elements. PCM devices are being explored as non-volatile storage-class memory and as computing elements for in-memory and neuromorphic computing. It is well known that PCM exhibits several characteristics of a memristive device. In this work, based on the essential physical attributes of PCM devices, we exploit the concept of a Dynamic Route Map (DRM) to capture the complex physics underlying these devices and to describe them as memristive devices governed by a state-dependent Ohm's law. The efficacy of the DRM is demonstrated by comparing numerical results with experimental data obtained on PCM devices.
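
In generic memristive form, such a description amounts to a state-dependent Ohm's law i = G(x)·v coupled to a state equation dx/dt = f(x, v). The sketch below uses illustrative placeholder functions for G and f, not the paper's fitted DRM:

```python
# Generic memristive device, integrated with forward Euler.
import numpy as np

def G(x):
    """Conductance as a function of the internal state x in [0, 1]."""
    return 1e-6 + (1e-4 - 1e-6) * x

def f(x, v):
    """Toy voltage-driven state dynamics (placeholder, not the fitted model)."""
    return 10.0 * v * x * (1.0 - x)

def simulate(v_trace, x0=0.1, dt=1e-3):
    x, i_trace = x0, []
    for v in v_trace:
        i_trace.append(G(x) * v)              # state-dependent Ohm's law: i = G(x) v
        x = float(np.clip(x + dt * f(x, v), 0.0, 1.0))
    return i_trace

i = simulate(np.ones(1000))                   # constant drive gradually SETs the device
```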

7.
Nat Commun ; 12(1): 2468, 2021 04 29.
Article in English | MEDLINE | ID: mdl-33927202

ABSTRACT

Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their ability to relearn and adapt to new data. Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented using the conventional von Neumann computer architecture. To overcome this bottleneck, we propose a robust architecture that employs a computational memory unit as the explicit memory, performing analog in-memory computation on high-dimensional (HD) vectors while closely matching 32-bit software-equivalent accuracy. This is achieved by a content-based attention mechanism that represents unrelated items in the computational memory with uncorrelated HD vectors, whose real-valued components can readily be approximated by binary or bipolar components. Experimental results demonstrate the efficacy of our approach on few-shot image classification tasks on the Omniglot dataset using more than 256,000 phase-change memory devices. Our approach effectively merges the richness of deep neural network representations with HD computing, paving the way for robust vector-symbolic manipulations applicable in reasoning, fusion, and compression.
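
A compact sketch of the underlying idea (our illustration with assumed dimensions, not the paper's implementation): bipolar HD vectors drawn at random are quasi-orthogonal, so content-based attention reduces to dot products that an in-memory array can evaluate as an MVM:

```python
# Assumed sizes: D-dimensional bipolar vectors, N memory entries.
import numpy as np

rng = np.random.default_rng(2)
D, N = 10_000, 32

keys = rng.choice([-1, 1], size=(N, D))       # bipolar key memory (crossbar-friendly)
values = rng.standard_normal((N, 16))

def read(query):
    sims = keys @ query / D                   # content-based attention scores
    attn = np.exp(8.0 * sims)
    attn /= attn.sum()
    return attn @ values                      # soft read over all entries

q = keys[3] + 0.3 * rng.standard_normal(D)    # noisy probe of entry 3
print(np.argmax(keys @ q))                    # recovers 3: random HD keys are quasi-orthogonal
v = read(q)                                   # attention concentrates on entry 3
```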

8.
Nat Nanotechnol ; 15(9): 812, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32678302

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

9.
Front Neurosci ; 14: 406, 2020.
Article in English | MEDLINE | ID: mdl-32477047

ABSTRACT

Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive, and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight-update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 172× improvement in the energy efficiency of the architecture when used for training a multilayer perceptron, compared with a dedicated fully digital 32-bit implementation.
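
The mixed-precision update can be sketched as follows (a hedged reconstruction from the abstract; the accumulator name chi and the granularity eps are our notation): weight updates accumulate digitally in high precision, and coarse pulses are applied to the analog weight only when the accumulator exceeds the device-update granularity:

```python
# chi: high-precision digital accumulator; eps: assumed device-update granularity.
import numpy as np

rng = np.random.default_rng(3)
eps = 0.05

def mixed_precision_step(w_analog, chi, grad, lr=0.01):
    chi = chi - lr * grad                     # exact digital accumulation
    n_pulses = np.trunc(chi / eps)            # whole coarse pulses to apply
    # Imprecise analog update: each pulse lands with stochastic spread.
    w_analog = w_analog + eps * n_pulses * (1 + 0.3 * rng.standard_normal(w_analog.shape))
    chi = chi - eps * n_pulses                # keep the residual digitally
    return w_analog, chi

w, chi = np.zeros(4), np.zeros(4)
for _ in range(100):                          # a constant pull of -1 per step
    w, chi = mixed_precision_step(w, chi, grad=np.full(4, -1.0))
print(w)                                      # drifts toward +1 despite noisy pulses
```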

10.
Nat Commun ; 11(1): 2473, 2020 05 18.
Article in English | MEDLINE | ID: mdl-32424184

ABSTRACT

In-memory computing using resistive memory devices is a promising non-von Neumann approach for making energy-efficient deep learning inference hardware. However, due to device variability and noise, the network needs to be trained in a specific way so that transferring the digitally trained weights to the analog resistive memory devices will not result in significant loss of accuracy. Here, we introduce a methodology to train ResNet-type convolutional neural networks that results in no appreciable accuracy loss when transferring weights to phase-change memory (PCM) devices. We also propose a compensation technique that exploits the batch normalization parameters to improve the accuracy retention over time. We achieve a classification accuracy of 93.7% on CIFAR-10 and a top-1 accuracy of 71.6% on ImageNet benchmarks after mapping the trained weights to PCM. Our hardware results on CIFAR-10 with ResNet-32 demonstrate an accuracy above 93.5% retained over a one-day period, where each of the 361,722 synaptic weights is programmed on just two PCM devices organized in a differential configuration.
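One way to picture such a time-dependent compensation (our sketch; the paper folds the correction into batch-normalization parameters) relies on the well-documented PCM drift law G(t) = G0·(t/t0)^(−ν), which acts largely globally and can therefore be undone by a single calibrated scale factor:

```python
# nu and t0 are assumed drift parameters; drift here is deterministic for clarity.
import numpy as np

nu, t0 = 0.06, 1.0                            # typical drift exponent, reference time (s)

def drifted(G0, t):
    return G0 * (t / t0) ** (-nu)             # conductance decay over time

def calibrated_scale(G0_ref, t):
    """Estimate the global decay from a few reference devices read at time t."""
    return np.mean(G0_ref) / np.mean(drifted(G0_ref, t))

G0 = np.random.uniform(1e-6, 1e-4, 1000)
t = 86_400.0                                  # one day later
corr = calibrated_scale(G0[:32], t)
print(np.mean(drifted(G0, t)) * corr / np.mean(G0))   # ~1.0 after correction
```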

11.
Sci Rep ; 10(1): 8080, 2020 05 15.
Article in English | MEDLINE | ID: mdl-32415108

ABSTRACT

Spiking neural networks (SNNs) are computational models inspired by the brain's ability to naturally encode and process information in the time domain. The added temporal dimension is believed to render them more computationally efficient than conventional artificial neural networks, though their full computational capabilities are yet to be explored. Recently, in-memory computing architectures based on non-volatile memory crossbar arrays have shown great promise for implementing parallel computations in artificial and spiking neural networks. In this work, we evaluate the feasibility of realizing high-performance event-driven in-situ supervised learning systems using nanoscale and stochastic analog memory synapses. For the first time, the potential of analog memory synapses to generate precisely timed spikes in SNNs is demonstrated experimentally. The experiment targets applications that directly integrate spike-encoded signals generated by bio-mimetic sensors with in-memory-computing-based learning systems to generate precisely timed control spikes for neuromorphic actuators. More than 170,000 phase-change memory (PCM) synapses from our prototype chip were trained with an event-driven learning rule to generate spike patterns with more than 85% of the spikes within a 25 ms tolerance interval in a 1250 ms long spike pattern. We observe that the accuracy is mainly limited by the imprecision of device programming and the temporal drift of conductance values. We show that an array-level scaling scheme can significantly improve the retention of the trained SNN states in the presence of conductance drift in the PCM. By combining the computational potential of supervised SNNs with the parallel compute power of in-memory computing, this work paves the way for the next generation of efficient brain-inspired systems.


Subject(s)
Action Potentials, Brain/physiology, Memory/physiology, Neural Networks (Computer), Neurons/physiology, Supervised Machine Learning, Synapses/physiology, Algorithms, Humans, Pattern Recognition (Automated)
12.
Sci Rep ; 10(1): 8248, 2020 May 19.
Article in English | MEDLINE | ID: mdl-32427898

ABSTRACT

Phase change memory (PCM) is being actively explored for in-memory computing and neuromorphic systems. The ability of a PCM device to store a continuum of resistance values can be exploited to realize arithmetic operations such as matrix-vector multiplications or to realize the synaptic efficacy in neural networks. However, the resistance variations arising from structural relaxation, 1/f noise, and changes in ambient temperature pose a key challenge. The recently proposed projected PCM concept helps to mitigate these resistance variations by decoupling the physical mechanism of resistance storage from the information-retrieval process. Even though the device concept has been successfully demonstrated, a comprehensive understanding of the device behavior is still lacking. Here, we develop a device model that captures two key attributes, namely, resistance drift and the state dependence of resistance. The former refers to the temporal evolution of resistance, while the latter refers to the dependence of the device resistance on the phase configuration of the phase change material. The study provides significant insights into the role of interfacial resistance in these devices. The model is experimentally validated on projected PCM devices based on antimony and a metal nitride fabricated in a lateral device geometry, and it is also used to provide guidelines for material selection and device engineering.
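
An illustrative (not fitted) reconstruction of why projection stabilizes the read-out: a projection layer of resistance R_proj lies electrically in parallel with the phase-change segment, so the read current largely bypasses the drifting amorphous region while the state dependence enters through the amorphous fraction x:

```python
# All resistance values are assumptions for illustration; x is the amorphous fraction.
def read_resistance(x, R_cryst=1e4, R_amor=1e7, R_proj=1e5):
    """Series chain: crystalline part plus (amorphous part || projection part).
    Valid for x > 0; the projection pins the amorphous section near x * R_proj."""
    amor_section = 1.0 / (1.0 / (x * R_amor) + 1.0 / (x * R_proj))
    return (1 - x) * R_cryst + amor_section

for x in (0.1, 0.5, 0.9):
    print(x, read_resistance(x))              # resistance tracks x, dominated by R_proj
```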

13.
Nat Nanotechnol ; 15(7): 529-544, 2020 07.
Article in English | MEDLINE | ID: mdl-32231270

ABSTRACT

Traditional von Neumann computing systems involve separate processing and memory units. However, data movement is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from traditional systems, and one such non-von Neumann computational approach is in-memory computing, in which certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.

14.
Sci Adv ; 5(2): eaau5759, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30793028

ABSTRACT

Collocated data processing and storage are the norm in biological computing systems such as the mammalian brain. As our ability to create better hardware improves, new computational paradigms are being explored beyond von Neumann architectures. Integrated photonic circuits are an attractive solution for on-chip computing that can leverage the increased speed and bandwidth potential of the optical domain and, importantly, remove the need for electro-optical conversions. Here we show that we can combine integrated optics with collocated data storage and processing to enable all-photonic in-memory computations. By employing nonvolatile photonic elements based on the phase-change material Ge2Sb2Te5, we achieve direct scalar and matrix-vector multiplication, featuring a novel single-shot write/erase and a drift-free process. The output pulse, carrying the information of the light-matter interaction, is the result of the computation. Our all-optical approach is novel, easy to fabricate and operate, and sets the stage for the development of entirely photonic computers.

15.
Nat Mater ; 17(8): 681-685, 2018 08.
Article in English | MEDLINE | ID: mdl-29915424

ABSTRACT

Phase change memory has been developed into a mature technology capable of storing information in a fast and non-volatile way [1-3], with potential for neuromorphic computing applications [4-6]. However, its future impact in electronics depends crucially on how the materials at the core of this technology adapt to the requirements arising from continued scaling towards higher device densities. A common strategy to fine-tune the properties of phase change memory materials, reaching reasonable thermal stability in optical data storage, relies on mixing precise amounts of different dopants, resulting often in quaternary or even more complicated compounds [6-8]. Here we show how the simplest material imaginable, a single element (in this case, antimony), can become a valid alternative when confined in extremely small volumes. This compositional simplification eliminates problems related to unwanted deviations from the optimized stoichiometry in the switching volume, which become increasingly pressing when devices are aggressively miniaturized [9,10]. Removing compositional optimization issues may allow one to capitalize on nanosize effects in information storage.


Subject(s)
Electrical Equipment and Supplies, Antimony, Electric Conductivity
16.
Nat Commun ; 9(1): 2514, 2018 06 28.
Article in English | MEDLINE | ID: mdl-29955057

ABSTRACT

Neuromorphic computing has emerged as a promising avenue towards building the next generation of intelligent computing systems. It has been proposed that memristive devices, which exhibit history-dependent conductivity modulation, could efficiently represent the synaptic weights in artificial neural networks. However, precise modulation of the device conductance over a wide dynamic range, necessary to maintain high network accuracy, is proving to be challenging. To address this, we present a multi-memristive synaptic architecture with an efficient global counter-based arbitration scheme. We focus on phase change memory devices, develop a comprehensive model and demonstrate via simulations the effectiveness of the concept for both spiking and non-spiking neural networks. Moreover, we present experimental results involving over a million phase change memory devices for unsupervised learning of temporal correlations using a spiking neural network. The work presents a significant step towards the realization of large-scale and energy-efficient neuromorphic computing systems.
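
The arbitration idea can be sketched as follows (reconstructed from the abstract under assumptions; class and parameter names are ours): each synapse is the sum of N device conductances, and each update programs only the device selected by a single global counter:

```python
# N devices per synapse and the update granularity are assumptions.
import numpy as np

class MultiMemristiveSynapses:
    def __init__(self, n_syn, n_dev=4):
        self.G = np.zeros((n_syn, n_dev))     # conductances, n_dev per synapse
        self.counter = 0                      # one global counter arbitrates all synapses
        self.n_dev = n_dev

    def weight(self):
        return self.G.sum(axis=1)             # synaptic weight = sum of its devices

    def update(self, idx, dG):
        self.G[idx, self.counter] += dG        # pulse only the selected device
        self.counter = (self.counter + 1) % self.n_dev   # then advance the counter

syn = MultiMemristiveSynapses(n_syn=3)
syn.update(0, 0.1)                             # hits device 0 of synapse 0
syn.update(0, 0.1)                             # hits device 1: wear and noise spread out
print(syn.weight())
```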


Subject(s)
Biomimetic Materials, Electronics/instrumentation, Models (Neurological), Neural Networks (Computer), Unsupervised Machine Learning, Action Potentials/physiology, Animals, Electric Conductivity, Humans, Synapses/physiology
17.
Nat Commun ; 8(1): 1115, 2017 10 24.
Article in English | MEDLINE | ID: mdl-29062022

ABSTRACT

Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are actively being sought. One fascinating such approach is computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase change memory devices organized to perform a high-level computational primitive by exploiting their crystallization dynamics; the result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively parallel computing systems.
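
A hedged sketch of the correlation-detection primitive (our simplification of the crystallization-dynamics idea; stream statistics and pulse strengths are invented for illustration):

```python
# Each binary data stream drives one PCM device; at every time step an active
# device receives a partial-SET pulse whose strength grows with instantaneous
# collective activity, so devices attached to correlated streams accumulate
# crystallization faster and end up distinguishably more conductive.
import numpy as np

rng = np.random.default_rng(4)
T, n = 2000, 20
common = rng.random(T) < 0.05                 # hidden source shared by half the streams
X = rng.random((T, n)) < 0.02                 # sparse independent background activity
X[:, :10] |= common[:, None]                  # streams 0-9 are mutually correlated

g = np.zeros(n)                                # device conductances (arbitrary units)
for t in range(T):
    active = X[t]
    g[active] += active.sum() / n              # pulse strength ~ collective activity
print(g[:10].mean(), g[10:].mean())            # correlated group ends markedly higher
```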

18.
Nat Nanotechnol ; 11(8): 693-9, 2016 08.
Article in English | MEDLINE | ID: mdl-27183057

ABSTRACT

Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.
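
A minimal sketch of such a neuron's dynamics (illustrative parameters, ours): the membrane potential maps to the crystalline fraction x, input pulses incrementally crystallize the cell, and the melt-quench reset is modeled as a randomized restart, the source of stochastic firing:

```python
# dx, theta, and the reset distribution are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

def run(inputs, dx=0.05, theta=1.0):
    x, spikes = 0.0, []                        # x: crystalline fraction = membrane potential
    for t, amp in enumerate(inputs):
        x += dx * amp                          # each pulse crystallizes a little more
        if x >= theta:                         # fire once the threshold is crossed
            spikes.append(t)
            x = rng.uniform(0.0, 0.15)         # stochastic melt-quench reset
    return spikes

print(run(np.ones(100)))                       # regular drive, slightly jittered firing
```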


Subject(s)
Action Potentials/physiology, Models (Neurological), Nanotechnology/methods, Neurons/physiology, Brain/physiology, Chalcogens/metabolism, Humans, Membrane Potentials/physiology, Stochastic Processes
19.
Nat Commun ; 5: 4314, 2014 Jul 07.
Article in English | MEDLINE | ID: mdl-25000349

ABSTRACT

In spite of the prominent role played by phase change materials in information technology, a detailed understanding of their central property, namely the phase change mechanism, is still lacking, mostly because of difficulties associated with experimental measurements. Here, we measure the crystal growth velocity of a phase change material at both the nanometre length scale and the nanosecond timescale using phase-change memory cells. The material is studied in the technologically relevant melt-quenched phase and directly in the environment in which the phase change material will be used in applications. We present a consistent description of the temperature dependence of the crystal growth velocity in the glass and the supercooled liquid up to the melting temperature.
