Results 1 - 6 of 6
1.
Front Neurosci ; 18: 1440000, 2024.
Article in English | MEDLINE | ID: mdl-39296710

ABSTRACT

Spiking neural networks (SNNs) have received increasing attention due to their high biological plausibility and energy efficiency. Their binary, spike-based information propagation enables efficient sparse computation in event-based and static computer vision applications. However, the weight precision, and especially the membrane potential precision, remains high (e.g., 32 bits) in state-of-the-art SNN algorithms. Each neuron in an SNN stores its membrane potential over time and typically updates it in every time step. Such frequent read/write operations on a high-precision membrane potential incur storage and memory-access overhead, which undermines the compatibility of SNNs with resource-constrained hardware. To resolve this inefficiency, prior works have explored time-step reduction and low-precision representation of the membrane potential, but only at a limited scale and with significant accuracy drops. Furthermore, while recent advances in on-device AI present pruning and quantization optimizations across different architectures and datasets, simultaneous pruning and quantization remains highly under-explored in SNNs. In this work, we present SpQuant-SNN, a fully quantized spiking neural network with ultra-low-precision weights and membrane potential and high spatial-channel sparsity, enabling end-to-end low precision with significantly fewer operations. First, we propose an integer-only quantization scheme for the membrane potential with a stacked surrogate gradient function, a simple yet effective method that enables smooth learning during quantized SNN training. Second, we implement spatial-channel pruning guided by a membrane-potential prior to reduce the layer-wise computational complexity and floating-point operations (FLOPs) of SNNs. Finally, to further improve the accuracy of low-precision, sparse SNNs, we propose a self-adaptive learnable potential threshold for SNN training. Equipped with high biological adaptiveness, minimal computation, and low memory utilization, SpQuant-SNN achieves state-of-the-art performance across multiple SNN models on both event-based and static image datasets, covering both image classification and object detection tasks. SpQuant-SNN achieves up to 13× memory reduction and >4.7× FLOPs reduction with <1.8% accuracy degradation on both classification and object detection, compared to the state-of-the-art baseline.
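The core mechanism this abstract describes, a low-precision membrane potential combined with a surrogate gradient for the non-differentiable spike, can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under assumed choices (4-bit fake quantization, a rectangular surrogate window, a scalar learnable threshold); it is not the SpQuant-SNN implementation.

```python
# Minimal sketch of a quantized LIF neuron with a surrogate spike gradient.
# Bit width, surrogate window, and the scalar learnable threshold are
# illustrative assumptions, not the SpQuant-SNN specification.
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v, threshold)
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        v, threshold = ctx.saved_tensors
        window = (torch.abs(v - threshold) < 0.5).float()    # gradient only near the threshold
        grad_v = grad_out * window
        grad_th = (-grad_v).sum().reshape(threshold.shape)   # assumes a scalar learnable threshold
        return grad_v, grad_th

def quantize_membrane(v, bits=4, v_max=4.0):
    """Fake-quantize the membrane potential to a signed integer grid.
    A straight-through estimator keeps the gradient non-zero through rounding."""
    scale = v_max / (2 ** (bits - 1) - 1)
    v_q = torch.clamp(torch.round(v / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
    return v + (v_q - v).detach()

def lif_step(v, x, threshold, decay=0.5, bits=4):
    """One time step of a leaky integrate-and-fire neuron with a low-precision state."""
    v = quantize_membrane(decay * v + x, bits=bits)   # low-precision membrane update
    spike = SpikeSurrogate.apply(v, threshold)        # binary spike output
    v = v - spike * threshold                         # soft reset after firing
    return spike, v
```

In a full model, lif_step would be applied per layer and per time step, with the threshold registered as a trainable parameter so it can adapt during training.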

2.
ACS Nano ; 17(13): 11994-12039, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37382380

ABSTRACT

Memristive technology has been rapidly emerging as a potential alternative to traditional CMOS technology, which is facing fundamental limitations in its development. Since oxide-based resistive switches were demonstrated as memristors in 2008, memristive devices have garnered significant attention due to their biomimetic memory properties, which promise to significantly reduce power consumption in computing applications. Here, we provide a comprehensive overview of recent advances in memristive technology, including memristive devices, theory, algorithms, architectures, and systems. In addition, we discuss research directions for various applications of memristive technology, including hardware accelerators for artificial intelligence, in-sensor computing, and probabilistic computing. Finally, we provide a forward-looking perspective on the future of memristive technology, outlining the challenges and opportunities for further research and innovation in this field. By providing an up-to-date overview of the state of the art in memristive technology, this review aims to inform and inspire further research in this field.

3.
IEEE Trans Biomed Circuits Syst ; 14(2): 198-208, 2020 04.
Article in English | MEDLINE | ID: mdl-32078561

ABSTRACT

Biometrics such as facial features, fingerprints, and the iris are increasingly used in modern authentication systems. These methods are now popular and have found their way into many portable electronics such as smartphones, tablets, and laptops. Furthermore, the use of biometrics enables secure access to private medical data, now collected by wearable devices such as smartwatches. In this work, we present an accurate low-power device authentication system that employs electrocardiogram (ECG) signals as the biometric modality. The proposed ECG processor consists of front-end ECG signal processing and back-end neural networks (NNs) for accurate authentication. The NNs are trained using a cost function that minimizes intra-individual distance over time and maximizes inter-individual distance. Efficient low-power hardware was implemented by using fixed coefficients for ECG signal pre-processing and by jointly optimizing low precision and structured sparsity in the NNs. We implemented two instances of the ECG authentication hardware, with 4× and 8× structurally compressed NNs, in 65 nm LP CMOS; they consume 62.37 µW and 75.41 µW for real-time ECG authentication and achieve low equal error rates of 1.36% and 1.21%, respectively, on a large 741-subject in-house ECG database. The hardware was evaluated at a 10 kHz clock frequency and a 1.2 V supply voltage.
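The training objective described here, minimizing intra-individual distance while maximizing inter-individual distance, is in the spirit of a contrastive loss. The sketch below is an illustration in PyTorch; the margin, embedding shapes, decision threshold, and function names are assumptions, not the paper's implementation.

```python
# Contrastive-style objective: pull embeddings of the same subject together,
# push different subjects apart by at least a margin. Illustrative only.
import torch
import torch.nn.functional as F

def ecg_distance_loss(emb_a, emb_b, same_subject, margin=1.0):
    """emb_a, emb_b: (batch, dim) embeddings of two ECG segments.
    same_subject: (batch,) float tensor, 1.0 for matching subjects, 0.0 otherwise."""
    d = F.pairwise_distance(emb_a, emb_b)                      # Euclidean distance per pair
    intra = same_subject * d.pow(2)                            # minimize intra-individual distance
    inter = (1.0 - same_subject) * F.relu(margin - d).pow(2)   # enforce inter-individual separation
    return (intra + inter).mean()

def authenticate(probe_emb, template_emb, threshold=0.5):
    """Accept when the (batch, dim) probe embedding is close enough to the enrolled
    template; the decision threshold would be tuned to the target equal error rate."""
    return F.pairwise_distance(probe_emb, template_emb) < threshold
```

On hardware, the same distance computation would operate on fixed-point embeddings produced by the compressed NN.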


Subjects
Electrocardiography/instrumentation; Neural Networks, Computer; Signal Processing, Computer-Assisted/instrumentation; Algorithms; Biometry; Humans; Wearable Electronic Devices
4.
Front Neurosci ; 12: 891, 2018.
Article in English | MEDLINE | ID: mdl-30559644

ABSTRACT

Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has a two-fold goal: first, a scientific goal, to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal, to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches used by them, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

6.
Nanotechnology ; 26(45): 455204, 2015 Nov 13.
Article in English | MEDLINE | ID: mdl-26491032

ABSTRACT

A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging; it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states can be continuously tuned by identical programming pulses. To demonstrate the advantage of the parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. When realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy on MNIST handwritten digits, which is close to the accuracy achieved in software by the ideal sparse coding algorithm.
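The two array operations described here, the read-phase weighted sum and the fully parallel write, can be sketched in a few lines. The numpy code below is an idealized illustration; the conductance window, learning rate, and array size are assumptions, not the measured device parameters reported in the abstract.

```python
# Idealized cross-point array: weighted sum by Ohm's and Kirchhoff's laws,
# and a fully parallel (outer-product) weight update. Illustrative values only.
import numpy as np

G_MIN, G_MAX = 1e-6, 2e-4                        # assumed conductance window (siemens)
rng = np.random.default_rng(0)
G = rng.uniform(G_MIN, G_MAX, size=(128, 64))    # one synaptic device per cross-point

def weighted_sum(G, v_in):
    """Read phase: row voltages drive the array; each column current is the
    dot product of the input voltages with that column's conductances."""
    return v_in @ G                               # all column currents in one step

def parallel_weight_update(G, x, delta, lr=1e-7):
    """Write phase: every cross-point is programmed at once, so the update
    time does not grow with the array size, unlike a row-by-row scheme."""
    G = G + lr * np.outer(x, delta)               # rank-1 update over the whole array
    return np.clip(G, G_MIN, G_MAX)               # conductance saturates at the device limits
```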


Subjects
Computing Methodologies; Electric Impedance; Neural Networks, Computer; Pattern Recognition, Automated; Semiconductors; Synapses/physiology; Learning; Models, Theoretical; Unsupervised Machine Learning