Results 1 - 4 of 4
1.
Nano Lett; 24(12): 3581-3589, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38471119

ABSTRACT

In this study, we demonstrate the implementation of programmable threshold logic using a 32 × 32 memristor crossbar array. Thanks to the forming-free characteristics obtained by an annealing process, accurate programming is demonstrated with a 256-level grayscale image. By simultaneously subtracting the threshold value from the weighted sum with an oppositely connected differential pair, 3-input and 4-input Boolean logic gates are implemented in the crossbar without an additional reference bias. We also verify a full-adder circuit and analyze its fidelity as a function of device programming accuracy. Lastly, we successfully implement a 4-bit ripple carry adder in the crossbar and achieve reliable operation through read-based logic operations. Compared with stateful logic driven by device switching, a 4-bit ripple carry adder on a memristor crossbar array performs more reliably and in fewer steps thanks to its read-based parallel logic operation.
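The differential-pair threshold scheme can be sketched in software as a current comparison between a weight column and a threshold column. This is an illustrative model, not the paper's exact circuit; the read voltage and unit conductance values are assumptions.

```python
import numpy as np

V_READ = 0.2      # read voltage in volts (assumed value)
G_UNIT = 100e-6   # unit conductance in siemens (assumed value)

def threshold_gate(inputs, weights, threshold):
    """Return 1 if sum(w_i * x_i) >= threshold, modeled as comparing the
    read current of a weight column against that of a threshold column."""
    i_pos = V_READ * G_UNIT * np.dot(weights, inputs)  # weighted-sum current
    i_neg = V_READ * G_UNIT * threshold                # threshold current
    return int(i_pos >= i_neg)

# 3-input Boolean gates as threshold logic (unit weights, varying threshold)
AND3 = lambda a, b, c: threshold_gate([a, b, c], [1, 1, 1], 3)
OR3  = lambda a, b, c: threshold_gate([a, b, c], [1, 1, 1], 1)
MAJ3 = lambda a, b, c: threshold_gate([a, b, c], [1, 1, 1], 2)  # a full adder's carry bit
```

Because the comparison is a read (no device switching), many such gates can be evaluated in parallel across columns, which is the basis of the reliability advantage claimed over stateful logic.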

2.
Nanotechnology; 33(37), 2022 Jun 24.
Article in English | MEDLINE | ID: mdl-35671736

ABSTRACT

To analyze the effect of the intrinsic variations of the memristor device on a neuromorphic system, we fabricated a 32 × 32 Al2O3/TiOx-based memristor crossbar array and implemented 3-bit multilevel conductance as weight quantization, utilizing the switching characteristics to minimize performance degradation of the neural network. The tuning operation for 8 weight levels was confirmed with a tolerance of ±4 µA (±40 µS). The endurance and retention characteristics were also verified, and the random telegraph noise (RTN) characteristics were measured as a function of weight range to evaluate the effect of internal stochastic variation. Subsequently, a memristive neural network was constructed by off-chip training with differential memristor pairs for the Modified National Institute of Standards and Technology (MNIST) handwritten-digit dataset. The pre-trained weights were quantized, and the classification accuracy was evaluated by applying the intrinsic variations to each quantized weight. The intrinsic variations were applied using the measured weight inaccuracy given by the tuning tolerance, the RTN characteristics, and the faulty-device yield. We believe these results should be considered when pre-trained weights are transferred to a memristive neural network by off-chip training.
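The off-chip flow above can be sketched as two steps: snap pre-trained weights to 8 levels (3 bits), then perturb each programmed level within the tuning tolerance to model residual device variation. The level spacing, tolerance value, and layer shape here are illustrative, not the paper's measured numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

def quantize_3bit(weights, w_max=1.0):
    """Snap each weight to the nearest of 8 evenly spaced levels in [-w_max, w_max]."""
    levels = np.linspace(-w_max, w_max, 8)
    idx = np.argmin(np.abs(weights[..., None] - levels), axis=-1)
    return levels[idx]

def apply_tolerance(weights, tol=0.05):
    """Add a uniform perturbation within ±tol to each programmed weight,
    mimicking the ± tolerance of the closed-loop tuning operation."""
    return weights + rng.uniform(-tol, tol, size=weights.shape)

w = rng.normal(0.0, 0.4, size=(784, 10))  # hypothetical pre-trained layer
w_q = quantize_3bit(w)                    # 3-bit quantized weights
w_dev = apply_tolerance(w_q)              # weights as seen on-device
```

Evaluating the network with `w_dev` instead of `w_q` is one way to estimate the accuracy drop caused by the tuning tolerance; RTN and faulty devices could be modeled as additional perturbation terms.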

3.
Neural Netw; 176: 106355, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38759411

ABSTRACT

On-chip learning is an effective method for adjusting artificial neural networks in neuromorphic computing systems because it accounts for intrinsic hardware properties. However, it faces challenges due to hardware nonidealities, such as the nonlinearity of potentiation and depression and the limits of fine weight adjustment. In this study, we propose a threshold learning algorithm for a variation-tolerant ternary neural network in a memristor crossbar array. The algorithm uses two well-separated resistance states of the memristive devices to represent weight values. The high-resistance state (HRS) and low-resistance state (LRS), defined as read currents of < 0.1 µA and > 1 µA, respectively, were successfully programmed in a 32 × 32 crossbar array and exhibited half-normal distributions due to the programming method. To validate our approach experimentally, a 64 × 10 single-layer fully connected network was trained in the fabricated crossbar on an 8 × 8 MNIST dataset using the threshold learning algorithm, in which a weight is updated only when its gradient, determined by backpropagation, exceeds a threshold value. Thanks to the large margin between the two memristor states, we observed only a 0.42% drop in classification accuracy compared with the baseline network. The threshold learning algorithm is expected to alleviate the programming burden and find use in variation-tolerant neuromorphic architectures.


Subjects
Algorithms, Neural Networks, Computer, Machine Learning
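The threshold update rule described above can be sketched as a single step: a (here ternary) weight is reprogrammed only when the backpropagated gradient magnitude exceeds a threshold, so devices are spared fine analogue adjustment. The threshold value and array shapes are illustrative assumptions.

```python
import numpy as np

def threshold_update(weights, grads, theta=0.1):
    """One threshold-learning step on ternary weights in {-1, 0, +1}:
    move one level against the gradient, but only where |grad| > theta;
    all other devices are left untouched."""
    updated = weights.copy()
    mask = np.abs(grads) > theta           # large-gradient weights only
    updated[mask] -= np.sign(grads[mask])  # step opposite the gradient
    return np.clip(updated, -1, 1)         # stay within the ternary range
```

In a crossbar realization, each ternary weight would map to a differential HRS/LRS pair, and an update corresponds to switching one device of the pair rather than tuning an analogue conductance.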
4.
ACS Appl Mater Interfaces; 16(1): 1054-1065, 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38163259

ABSTRACT

We propose a hardware-friendly architecture for a convolutional neural network using a 32 × 32 memristor crossbar array with an overshoot-suppression layer. Gradual switching characteristics in both the set and reset operations enable a 3-bit multilevel operation across the whole array, which can be utilized as 16 kernels. Moreover, a binary activation function mapped to the read voltage and ground is introduced to evaluate the training result, with a boundary of 0.5 and an estimated gradient. Additionally, we adopt a fixed-kernel method in which inputs are applied sequentially to the crossbar array with a differential memristor-pair scheme, reducing the waste of unused cells. The binary activation is robust against device-state variations, and a neuron circuit is experimentally demonstrated on a customized breadboard. Thanks to the analogue switching characteristics of the memristor device, accurate vector-matrix multiplication (VMM) operations are experimentally demonstrated by combining sequential inputs with weights obtained through tuning operations in the crossbar array. In addition, the feature images extracted by VMM during hardware inference on 100 test samples are classified, and the classification performance of off-chip training is compared with software results. Finally, inference results as a function of tuning tolerance are statistically verified over several tuning cycles.
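The fixed-kernel scheme above can be sketched as follows: an input patch is applied to a crossbar whose column pairs store the kernels as differential conductances, and the resulting currents pass through a hard binary activation at the 0.5 boundary. Patch size, kernel count, conductance scales, and the read voltage are assumed for illustration.

```python
import numpy as np

V_READ = 0.2  # read voltage in volts (assumed value)

def binary_activation(x, boundary=0.5):
    """Forward pass of the binary activation: 1 above the boundary, else 0.
    (Training would rely on an estimated gradient for this hard threshold.)"""
    return (x >= boundary).astype(np.float64)

def crossbar_vmm(patch, g_pos, g_neg):
    """Differential-pair VMM: the output current for each kernel is
    v_read * (G_pos - G_neg) @ patch."""
    return V_READ * (g_pos - g_neg) @ patch

rng = np.random.default_rng(0)
g_pos = rng.uniform(0.0, 10.0, size=(16, 9))   # 16 kernels, 3 x 3 flattened
g_neg = rng.uniform(0.0, 10.0, size=(16, 9))   # differential counterparts
patch = rng.integers(0, 2, size=9).astype(float)  # one binary input patch
features = binary_activation(crossbar_vmm(patch, g_pos, g_neg))
```

Sliding this over all patches of an image yields the 16 binary feature maps; because the kernels stay fixed in the array while patches arrive sequentially, every programmed cell is reused for each patch.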
