Results 1 - 3 of 3
1.
Nano Converg ; 11(1): 9, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38416323

ABSTRACT

Artificial neural networks (ANNs), inspired by the human brain's network of neurons and synapses, enable computing machines and systems to execute cognitive tasks, thus embodying artificial intelligence (AI). Since the performance of ANNs generally improves as network size grows, and since most of the computation time is spent on matrix operations, AI computations have been performed not only on general-purpose central processing units (CPUs) but also on architectures that facilitate parallel computation, such as graphics processing units (GPUs) and custom-designed application-specific integrated circuits (ASICs). Nevertheless, the substantial energy consumption stemming from frequent data transfers between processing units and memory has remained a persistent challenge. In response, a novel approach has emerged: an in-memory computing architecture harnessing analog memory elements, which promises a notable advancement in energy efficiency. The core of this analog AI hardware accelerator lies in expansive arrays of non-volatile memory devices known as resistive processing units (RPUs). These RPUs facilitate massively parallel matrix operations, leading to significant enhancements in both performance and energy efficiency. Electrochemical random-access memory (ECRAM), leveraging ion dynamics in secondary-ion battery materials, has emerged as a promising candidate for RPUs. ECRAM achieves over 1000 memory states through precise control of ion movement, prompting early-stage research into material stacks such as mobile ion species and electrolyte materials. Crucially, the analog states in ECRAMs update symmetrically with pulse number (or voltage polarity), contributing to high network performance. Recent strides in device engineering of planar and three-dimensional structures, and in understanding the physics of ECRAM operation, have marked significant progress in a short research period.
This review surveys ECRAM material advancements through the literature, offering a systematic discussion of engineering assessments for ion control and a physical understanding of array-level demonstrations. Finally, it outlines future directions for improvement, co-optimization, and multidisciplinary collaboration across circuits, algorithms, and applications to develop energy-efficient, next-generation AI hardware systems.
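The analog matrix operation that an RPU array performs can be sketched numerically. The snippet below is illustrative, not from the paper: weights are quantized onto 1000 discrete conductance levels (the state count cited for ECRAM), input voltages drive the array, and summing currents along each output line computes the full matrix-vector product in a single analog step. All variable names and values are assumptions.

```python
import numpy as np

# Illustrative sketch of the crossbar principle behind RPUs: each device
# stores a weight as a conductance; applying input voltages and summing the
# currents on each output line evaluates conductance @ voltages at once.
rng = np.random.default_rng(0)

n_states = 1000                          # ECRAM reportedly exceeds 1000 states
weights = rng.uniform(-1.0, 1.0, (4, 3))

# Quantize the ideal weights onto the discrete analog conductance levels.
levels = np.round((weights + 1.0) / 2.0 * (n_states - 1))
conductance = levels / (n_states - 1) * 2.0 - 1.0

voltages = np.array([0.2, -0.5, 0.7])    # input vector applied to the array
currents = conductance @ voltages        # analog matrix-vector product

# Quantization error stays below half a conductance step per weight.
print(float(np.max(np.abs(currents - weights @ voltages))))
```

With 1000 states the quantization error per weight is at most one part in 999 of the weight range, which is why such dense analog state counts matter for accuracy.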

2.
ACS Nano ; 16(8): 12214-12225, 2022 08 23.
Article in English | MEDLINE | ID: mdl-35853220

ABSTRACT

An artificial synapse based on oxygen-ion-driven electrochemical random-access memory (O-ECRAM) devices is a promising candidate for building neural networks embodied in neuromorphic hardware. However, achieving commercial-level learning accuracy, fast analog conductance tuning, and multibit storage capacity in O-ECRAM synapses is challenging because of the lack of Joule heating, which restricts O²⁻ ionic transport. Here, we propose the use of an atomically thin heater of monolayer graphene as a low-power heating source for O-ECRAM to increase thermally activated O²⁻ migration within the channel-electrolyte layers. Heating from the graphene manipulates the electrolyte activation energy to establish and maintain discrete analog states in the O-ECRAM channel. Benefiting from the integrated graphene heater, the O-ECRAM features long retention (>10⁴ s), good stability (switching accuracy >98% for >10³ training pulses), multilevel analog states for 6-bit analog weight storage with near-ideal linear switching, and 95% pattern-identification accuracy. These findings demonstrate the usefulness of 2D materials as integrated heating elements in artificial synapse chips to accelerate neuromorphic computation.


Subject(s)
Graphite; Neural Networks, Computer; Synapses
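Why near-ideal linear switching matters can be seen in a toy comparison. The sketch below uses assumed parameters, not data from the paper: it contrasts an ideal linear conductance update over 64 levels (the 6-bit weight storage mentioned above) with a typical saturating, nonlinear update, and reports the worst-case deviation from linearity.

```python
import numpy as np

# Toy model (assumed parameters): linear vs. saturating conductance update
# across 64 analog levels, i.e. 6-bit weight storage.
n_levels = 64
x = np.arange(n_levels) / (n_levels - 1)   # normalized pulse count

g_linear = x                               # ideal: conductance tracks pulses
alpha = 3.0                                # assumed nonlinearity of a poor device
g_nonlinear = (1 - np.exp(-alpha * x)) / (1 - np.exp(-alpha))

# Worst-case deviation from linearity; symmetric, linear updates keep weight
# changes proportional to pulse count, which preserves training accuracy.
dev = float(np.max(np.abs(g_nonlinear - g_linear)))
print(round(dev, 3))
```

A device this nonlinear mislocates mid-range weights by roughly a third of the conductance window, while a linear device places every level where the learning rule expects it.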
3.
Front Neurosci ; 15: 636127, 2021.
Article in English | MEDLINE | ID: mdl-33897351

ABSTRACT

In-memory computing based on non-volatile resistive memory can significantly improve the energy efficiency of artificial neural networks. However, accurate in situ training has been challenging due to the nonlinear and stochastic switching of the resistive memory elements. One promising analog memory is the electrochemical random-access memory (ECRAM), also known as the redox transistor. Its low write currents and linear switching properties across hundreds of analog states enable accurate and massively parallel updates of a full crossbar array, which yield rapid and energy-efficient training. While simulations predict that ECRAM-based neural networks achieve high training accuracy at significantly higher energy efficiency than digital implementations, these predictions have not been achieved experimentally. In this work, we train a 3 × 3 array of ECRAM devices that learns to discriminate several elementary logic gates (AND, OR, NAND). We record the evolution of the network's synaptic weights during parallel in situ (online) training with outer-product updates. Owing to the linear and reproducible switching characteristics of the devices, our crossbar simulations not only accurately predict the number of epochs to convergence but also quantitatively capture the evolution of the weights in individual devices. This first implementation of parallel in situ training, together with the strong agreement with simulation, marks a significant advance toward scaling ECRAM into larger crossbar arrays for artificial neural network accelerators, which could enable orders-of-magnitude improvements in the energy efficiency of deep neural networks.
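The parallel outer-product training described above can be emulated in software with ideal, linear devices. The sketch below is an assumed analogue, not the authors' code: a 3 × 3 weight array whose rows are driven by [x1, x2, bias] learns the AND, OR, and NAND columns, and every update is a rank-1 outer product applied to all nine weights at once, mirroring the parallel crossbar write.

```python
import numpy as np

# Software emulation (assumed, not the authors' code) of parallel in situ
# training on a 3x3 array: rows carry [x1, x2, bias]; columns learn AND, OR,
# NAND. The whole array updates at once via dW = lr * outer(x, error).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
Y = np.array([[0, 0, 1],            # per-pattern targets: AND, OR, NAND
              [0, 1, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)

W = np.zeros((3, 3))
lr = 0.15        # step chosen so sums never land exactly on the 0.5 threshold
for _ in range(200):                         # training epochs
    for x, y in zip(X, Y):
        out = (x @ W > 0.5).astype(float)    # thresholded readout
        W += lr * np.outer(x, y - out)       # parallel rank-1 update

pred = (X @ W > 0.5).astype(float)
print(int((pred == Y).all()))                # 1 once all three gates are learned
```

Because all three gates are linearly separable, this perceptron-style rule converges; on real ECRAM the same rank-1 update is applied physically by pulsing rows and columns simultaneously, which is the source of the parallelism the paper exploits.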
