1.
Article in English | MEDLINE | ID: mdl-38470601

ABSTRACT

Quantizing synaptic weights with emerging nonvolatile memory (NVM) devices is a promising route to computationally efficient neural networks on resource-constrained hardware. In practice, however, such synaptic weights are hampered by imperfect memory characteristics: only a limited number of quantized states is available, and writing the synaptic states involves large intrinsic device variation and stochasticity. This article presents on-chip training and inference of a neural network using a quantized magnetic domain wall (DW)-based synaptic array and CMOS peripheral circuits. A rigorous model of the magnetic DW device, accounting for stochasticity and process variations, is used for the synapse. To obtain stable quantized weights, the DW is pinned by means of physical constrictions. Finally, a VGG8 architecture for CIFAR-10 image classification is simulated using the extracted synaptic device characteristics. Performance in terms of accuracy, energy, latency, and area is evaluated while accounting for process variations and nonidealities in both the DW devices and the peripheral circuits. The proposed quantized neural network (QNN) architecture achieves efficient on-chip learning with 92.4% training and 90.4% inference accuracy. Compared with a pure CMOS-based design, it improves area, energy, and latency by 13.8×, 9.6×, and 3.5×, respectively.
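The core device behavior described above — a limited number of quantized states, write stochasticity, and per-device process variation — can be sketched in a few lines. This is a minimal illustrative model, not the paper's rigorous DW device model; the state count and noise magnitudes below are hypothetical placeholders, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): number of pinned DW
# positions (quantized conductance states) and noise magnitudes.
N_STATES = 8          # limited number of quantized states
SIGMA_WRITE = 0.05    # stochasticity of each write operation
SIGMA_DEVICE = 0.03   # static per-device process variation

def program_weights(ideal_w, device_offsets):
    """Map ideal weights in [-1, 1] onto quantized, noisy synaptic states."""
    levels = np.linspace(-1.0, 1.0, N_STATES)
    # Snap each ideal weight to the nearest quantized state.
    idx = np.abs(ideal_w[..., None] - levels).argmin(axis=-1)
    q = levels[idx]
    # Add write stochasticity (fresh per write) and static device variation.
    return q + rng.normal(0.0, SIGMA_WRITE, q.shape) + device_offsets

ideal = rng.uniform(-1, 1, size=(4, 4))
offsets = rng.normal(0.0, SIGMA_DEVICE, size=(4, 4))
programmed = program_weights(ideal, offsets)
print(np.abs(programmed - ideal).max())
```

In a full simulation, the training loop would read back these programmed weights (rather than the ideal ones) in the forward pass, which is how the accuracy figures under nonidealities are obtained.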

2.
Nanotechnology ; 31(50): 504001, 2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33021239

ABSTRACT

Stochastic neuromorphic computation (SNC) has the potential to enable a low-power, error-tolerant, and scalable computing platform compared with its deterministic counterparts. However, hardware implementations of complementary metal-oxide-semiconductor (CMOS)-based stochastic circuits require conversion blocks that cost more than the actual processing circuits. Realizing the activation function for SNCs also requires a complicated circuit, incurring significant power dissipation and area overhead. The inherent probabilistic switching behavior of nanomagnets offers a way to overcome these complexity issues and realize low-power, area-efficient SNC systems. This paper presents a magnetic tunnel junction (MTJ)-based stochastic computing methodology for implementing a neural network. The stochastic switching behavior of the MTJ is exploited to design a binary-to-stochastic converter that mitigates the complexity of the CMOS-based design. The paper also presents a technique for realizing a stochastic sigmoid activation function using an MTJ. Such circuits are simpler than existing ones and use considerably less power. An image classification system employing the proposed circuits has been implemented to verify the effectiveness of the technique. The MTJ-based SNC system achieves area and energy reductions by factors of 13.5 and 2.5, respectively, with a prediction accuracy of 86.66%. Furthermore, the paper investigates how crucial parameters, such as the stochastic bitstream length, the number of hidden layers, and the number of nodes per hidden layer, must be set to realize an efficient MTJ-based stochastic neural network (SNN). The proposed methodology can prove to be a promising alternative for highly efficient digital stochastic computing applications.
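The stochastic computing primitives the abstract mentions — binary-to-stochastic conversion and arithmetic on bitstreams — can be sketched in software. In the paper the MTJ's probabilistic switching supplies the randomness; here a software RNG stands in for the device, and the bitstream length below is an arbitrary illustrative choice, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
BITSTREAM_LEN = 4096  # stochastic bitstream length (a key design parameter)

def binary_to_stochastic(p):
    """Unipolar encoding: a value p in [0, 1] becomes a Bernoulli(p)
    bitstream. An MTJ biased to switch with probability p plays this
    role in hardware; a software RNG stands in for it here."""
    return (rng.random(BITSTREAM_LEN) < p).astype(np.uint8)

def stochastic_multiply(a, b):
    """Bitwise AND of two independent unipolar streams multiplies
    their encoded values — the classic stochastic-computing trick."""
    return a & b

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return stream.mean()

x = binary_to_stochastic(0.8)
y = binary_to_stochastic(0.5)
product = decode(stochastic_multiply(x, y))  # ≈ 0.8 * 0.5 = 0.4
print(product)
```

The approximation error shrinks as the bitstream grows (standard error scales as 1/sqrt(N)), which is why the bitstream length is one of the parameters the paper identifies as needing careful tuning.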
