Results 1 - 4 of 4
1.
Neural Netw ; 176: 106355, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38759411

ABSTRACT

On-chip learning is an effective method for adjusting artificial neural networks in neuromorphic computing systems by considering hardware intrinsic properties. However, it faces challenges due to hardware nonidealities, such as the nonlinearity of potentiation and depression and limitations on fine weight adjustment. In this study, we propose a threshold learning algorithm for a variation-tolerant ternary neural network in a memristor crossbar array. This algorithm utilizes two tightly separated resistance states in memristive devices to represent weight values. The high-resistance state (HRS) and low-resistance state (LRS), defined as read currents of < 0.1 µA and > 1 µA, respectively, were successfully programmed in a 32 × 32 crossbar array and exhibited half-normal distributions due to the programming method. To validate our approach experimentally, a 64 × 10 single-layer fully connected network was trained in the fabricated crossbar on an 8 × 8 MNIST dataset using the threshold learning algorithm, in which a weight value is updated only when its gradient, determined by backpropagation, exceeds a threshold value. Thanks to the large margin between the two states of the memristor, we observed only a 0.42% drop in classification accuracy compared to the baseline network results. The threshold learning algorithm is expected to alleviate the programming burden and to be utilized in variation-tolerant neuromorphic architectures.
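The update rule described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: ternary weights in {-1, 0, +1} stand in for the HRS/LRS device states, and a cell is reprogrammed only when the magnitude of its backpropagated gradient exceeds the threshold, which is how the programming burden is reduced.

```python
import numpy as np

def threshold_update(weights, grads, threshold=0.5):
    """Threshold learning sketch: update ternary weights {-1, 0, +1}
    only where |gradient| exceeds the threshold; other cells are
    left untouched, so no device programming event is needed."""
    new_w = weights.copy()
    mask = np.abs(grads) > threshold                 # cells that must be written
    # step one ternary level against the gradient sign, clipped to [-1, 1]
    new_w[mask] = np.clip(weights[mask] - np.sign(grads[mask]).astype(int), -1, 1)
    return new_w, int(mask.sum())                    # also count write events

w = np.array([0, 1, -1, 0])
g = np.array([0.9, -0.2, -0.7, 0.1])
w_new, n_writes = threshold_update(w, g)             # only 2 of 4 cells written
```

Here only the first and third weights move, so a hardware realization would issue two programming pulses instead of four.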


Subject(s)
Algorithms , Neural Networks, Computer , Machine Learning
2.
Front Neurosci ; 18: 1279708, 2024.
Article in English | MEDLINE | ID: mdl-38660225

ABSTRACT

A neuromorphic system is composed of hardware-based artificial neurons and synaptic devices, designed to improve the efficiency of neural computations inspired by the energy-efficient and parallel operations of the biological nervous system. A synaptic device-based array can compute vector-matrix multiplication (VMM) with given input voltage signals, as a non-volatile memory device stores the weight information of the neural network in the form of conductance or capacitance. However, unlike software-based neural networks, the neuromorphic system unavoidably exhibits non-ideal characteristics that can have an adverse impact on overall system performance. In this study, the characteristics required of synaptic devices, and their importance depending on the targeted application, are discussed. We categorize synaptic devices into two types, conductance-based and capacitance-based, and thoroughly explore the operations and characteristics of each device. The array structure according to the device structure and the VMM operation mechanism of each structure are analyzed, including recent advances in array-level implementation of synaptic devices. Furthermore, we review studies that minimize the effect of hardware non-idealities, which degrade the performance of hardware neural networks. These studies introduce techniques in hardware and signal engineering, as well as software-hardware co-optimization, to address these non-idealities through compensation approaches.
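The VMM operation this abstract refers to follows directly from Ohm's and Kirchhoff's laws: each cell passes a current proportional to (conductance × input voltage), and column currents sum these contributions. A minimal sketch, with made-up conductance values purely for illustration:

```python
import numpy as np

def crossbar_vmm(G, v):
    """Ideal crossbar VMM: G is a (rows, cols) conductance matrix in
    siemens, v a (rows,) vector of input voltages in volts. Each column's
    output current is the Kirchhoff sum of its cell currents, i.e. G^T v."""
    return G.T @ v

G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6]])       # conductances (S); weights of the network
v = np.array([0.1, 0.2])          # input voltages (V); activations
I = crossbar_vmm(G, v)            # column output currents (A)
# I = [7e-7, 1e-6]
```

Non-idealities such as wire resistance, device variation, and sneak paths perturb this ideal `G.T @ v`, which is what the compensation approaches surveyed in the article aim to correct.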

3.
ACS Appl Mater Interfaces ; 15(40): 47229-47237, 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37782228

ABSTRACT

Neuromorphic computing, an innovative technology inspired by the human brain, has attracted increasing attention as a promising technology for the development of artificial intelligence systems. This study proposes synaptic transistors with a Li1-xAlxTi2-x(PO4)3 (LATP) layer to analyze the conductance modulation linearity, which is essential for weight mapping and updating during on-chip learning processes. The high ionic conductivity of the LATP electrolyte provides a large hysteresis window and enables linear weight updates in synaptic devices. The results demonstrate that optimizing the LATP layer thickness improves the conductance modulation and linearity of synaptic transistors during potentiation and depression. A 20 nm-thick LATP layer results in the most nonlinear depression (αd = -6.59), whereas a 100 nm-thick LATP layer results in the smallest nonlinearity (αd = -2.22). Additionally, a device with the optimal 100 nm-thick LATP layer exhibits the highest average recognition accuracy of 94.8% and the smallest fluctuation, indicating that the linearity characteristics of a device play a crucial role in weight updates during learning and can significantly affect the recognition accuracy.
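The nonlinearity parameter α quoted above is commonly fitted with an exponential conductance-update model; the sketch below uses one widespread form of that model as an assumption (the paper may use a different fit). Conductance versus normalized pulse number p in [0, 1] is G(p) = Gmin + (Gmax - Gmin)·(1 - exp(-a·p)) / (1 - exp(-a)), where |a| measures nonlinearity and a → 0 recovers a perfectly linear update.

```python
import numpy as np

def conductance(p, a, g_min=0.0, g_max=1.0):
    """Normalized conductance after fraction p of the pulse train,
    under a standard exponential nonlinearity model with parameter a."""
    if abs(a) < 1e-9:                                # linear limit as a -> 0
        return g_min + (g_max - g_min) * p
    return g_min + (g_max - g_min) * (1 - np.exp(-a * p)) / (1 - np.exp(-a))

# Using the reported depression nonlinearities: the 100 nm device
# (a ~ -2.22) stays much closer to the linear trajectory at mid-range
# than the 20 nm device (a ~ -6.59).
mid_thin = conductance(0.5, -6.59)    # strongly nonlinear
mid_thick = conductance(0.5, -2.22)   # closer to the linear value 0.5
```

A more linear curve means each programming pulse moves the weight by a nearly uniform step, which is why the 100 nm device maps and updates weights more faithfully and reaches the higher recognition accuracy.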

4.
Adv Sci (Weinh) ; 10(32): e2303817, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37752771

ABSTRACT

The progress of artificial intelligence and the development of large-scale neural networks have significantly increased computational costs and energy consumption. To address these challenges, researchers are exploring low-power neural network implementation approaches, and neuromorphic computing systems are being highlighted as potential candidates. Specifically, the development of high-density and reliable synaptic devices, which are the key elements of neuromorphic systems, is of particular interest. In this study, an 8 × 16 memcapacitor crossbar array that combines the technological maturity of flash cells with the advantages of the NAND flash array structure is presented. The analog properties of the array are experimentally demonstrated with high reliability, and vector-matrix multiplication with extremely low error is successfully performed. Additionally, leveraging the array's weight fine-tuning capability, a spiking neural network for CIFAR-10 classification is implemented via off-chip learning at the wafer level. These experimental results demonstrate a high level of accuracy of 92.11%, with less than a 1.13% difference compared to software-based neural networks (93.24%).
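In off-chip learning, as described here, the network is trained in software and its weights are then transferred onto a finite set of analog device states; the fine-tuning precision of the array bounds the resulting mapping error. The sketch below is a hypothetical illustration of that transfer step (uniform quantization onto discrete levels), not the authors' actual programming scheme.

```python
import numpy as np

def map_to_levels(weights, n_levels=32):
    """Quantize software-trained weights onto n_levels uniformly spaced
    device states spanning the weight range, as in off-chip transfer."""
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((weights - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                 # weights trained off-chip
w_mapped = map_to_levels(w, n_levels=32)  # programmed device states
err = np.max(np.abs(w - w_mapped))        # bounded by half a level step
```

Finer weight tuning (more distinguishable device states) shrinks the quantization step and hence the accuracy gap between the hardware network and its software baseline, which is the behavior the reported 1.13% difference reflects.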
