Results 1 - 5 of 5
1.
Sensors (Basel); 24(13), 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001137

ABSTRACT

Low-light imaging capabilities are in urgent demand in many fields, such as security surveillance, night-time autonomous driving, wilderness rescue, and environmental monitoring. The excellent performance of SPAD devices gives them significant potential for low-light imaging applications. This article presents a 64 (rows) × 128 (columns) SPAD image sensor designed for low-light imaging. The chip uses a three-dimensional stacking architecture and microlens technology, combined with compact gated pixel circuits built from thick-gate MOS transistors, which further enhance the SPAD's photosensitivity. A configurable digital control circuit allows the exposure time to be adjusted, enabling the sensor to adapt to different lighting conditions. The chip exhibits very low dark noise, with an average DCR of 41.5 cps at 2.4 V excess bias voltage. In addition, a denoising algorithm developed specifically for this SPAD image sensor achieves two-dimensional grayscale imaging under 6 × 10⁻⁴ lux illumination, demonstrating excellent low-light performance. The chip fully leverages the performance advantages of SPAD image sensors and holds promise for the many applications that require low-light imaging.
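
The abstract quotes a configurable exposure time, an average DCR of 41.5 cps, and a denoising step before grayscale reconstruction. As a rough illustration of how photon counts from such a sensor could be turned into a grayscale image, here is a minimal NumPy sketch that subtracts the expected dark counts and applies a simple box filter; it is not the paper's denoising algorithm, and the exposure, gating, and filtering choices are illustrative assumptions.

```python
import numpy as np

def spad_to_grayscale(counts, dcr_map, exposure_s, gate_duty=1.0):
    """Convert raw SPAD photon counts to an 8-bit grayscale image.

    counts     : (H, W) photon counts accumulated over the exposure
    dcr_map    : (H, W) per-pixel dark count rate in counts/s (e.g. ~41.5 cps)
    exposure_s : exposure time in seconds (configurable on the sensor)
    gate_duty  : fraction of the exposure during which the gated pixels are active
    """
    # Expected dark counts during the (gated) exposure.
    dark = dcr_map * exposure_s * gate_duty
    # Subtract dark counts; clip negatives caused by shot noise.
    signal = np.clip(counts - dark, 0.0, None)
    # Simple 3x3 box filter as a stand-in for the paper's denoising step.
    padded = np.pad(signal, 1, mode="edge")
    denoised = sum(
        padded[dy:dy + signal.shape[0], dx:dx + signal.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    # Normalize to 8-bit grayscale.
    peak = denoised.max() or 1.0
    return np.round(255.0 * denoised / peak).astype(np.uint8)

# Example with the chip's 64 x 128 resolution and synthetic counts.
rng = np.random.default_rng(0)
dcr = np.full((64, 128), 41.5)                    # counts per second
scene = rng.poisson(lam=30.0, size=(64, 128))     # synthetic photon counts
img = spad_to_grayscale(scene, dcr, exposure_s=0.1)
print(img.shape, img.dtype)
```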

2.
Sensors (Basel); 22(4), 2022 Feb 18.
Article in English | MEDLINE | ID: mdl-35214487

ABSTRACT

Siamese networks have been extensively studied in recent years. Most previous research focuses on improving accuracy, while only a few works recognize the need to reduce parameter redundancy and computation load. Even less work optimizes runtime memory cost at network design time, making Siamese-network-based trackers difficult to deploy on edge devices. In this paper, we present SiamMixer, a lightweight and hardware-friendly visual object-tracking network. It uses patch-by-patch inference in the shallow layers, processing each small image region individually to reduce memory use, and it merges and globally encodes the feature maps in the deep layers to enhance accuracy. Benefiting from these techniques, SiamMixer achieves accuracy comparable to larger trackers with only 286 kB of parameters and 196 kB of extra memory for feature maps. Additionally, we evaluate the impact of various activation functions and replace all activation functions in SiamMixer with ReLU, which reduces the cost of deployment on mobile devices.


Subjects
Computers, Neural Networks (Computer), Handheld Computers
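
SiamMixer's key memory trick, as described above, is to run the shallow layers patch by patch so only one patch's activations are resident at a time, and then merge the results for a global encoding stage. The NumPy sketch below illustrates only that scheduling idea with toy stand-in operations; the actual SiamMixer layers, patch size, and merging strategy are not reproduced here.

```python
import numpy as np

def shallow_features(patch):
    """Stand-in for the shallow layers: a cheap per-patch transform."""
    return np.tanh(patch - patch.mean())

def patchwise_inference(image, patch=32):
    """Process the shallow stage patch by patch (bounding peak memory to one
    patch's activations), then merge and globally encode the feature map.
    Assumes image dimensions are divisible by the patch size."""
    h, w = image.shape
    merged = np.empty_like(image, dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch].astype(np.float32)
            merged[y:y + patch, x:x + patch] = shallow_features(tile)
    # "Deep" stage: globally encode the merged map (here, simple 2D pooling).
    return merged.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))

frame = np.random.default_rng(1).random((128, 128))
encoding = patchwise_inference(frame, patch=32)
print(encoding.shape)   # (4, 4) global encoding of the merged features
```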
3.
Sensors (Basel); 21(9), 2021 May 08.
Article in English | MEDLINE | ID: mdl-34066794

ABSTRACT

Image demosaicking is an essential and challenging step in the image processing pipeline behind image sensors. With the rapid development of intelligent processors based on deep learning, several demosaicking methods based on convolutional neural networks (CNNs) have been proposed. However, their large numbers of model parameters make these networks difficult to run in real time on edge computing devices. This paper presents a compact demosaicking neural network based on the UNet++ structure. The network inserts densely connected layer blocks and adopts Gaussian smoothing layers instead of down-sampling operations before the backbone network. The densely connected blocks extract mosaic image features efficiently by exploiting the correlation between feature maps, and they adopt depthwise separable convolutions to reduce the number of model parameters; the Gaussian smoothing layers expand the receptive field without down-sampling the image or discarding image information. The size constraints on the input and output images can also be relaxed, and the quality of the demosaicked images is improved. Experimental results show that the proposed network improves running speed by 42% over the fastest CNN-based method while achieving comparable reconstruction quality on four mainstream datasets. Moreover, when the demosaicked images are fed to typical deep CNNs, MobileNet v1 and SSD, the accuracy reaches 85.83% (top-5) and 75.44% (mAP), respectively, which is comparable to existing methods. The proposed network has the highest computing efficiency and the lowest parameter count of all the methods, demonstrating that it is well suited to modern edge computing devices.
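
Two of the building blocks named in the abstract, depthwise separable convolutions and a Gaussian smoothing layer used in place of down-sampling, can be sketched in a few lines of PyTorch. The kernel size, sigma, and channel counts below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a pointwise 1x1 conv: far fewer
    parameters than a full 3x3 conv with the same channel counts."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

class GaussianSmoothing(nn.Module):
    """Fixed 5x5 Gaussian blur applied per channel, used instead of
    down-sampling so the receptive field grows while resolution is kept."""
    def __init__(self, channels, sigma=1.0):
        super().__init__()
        coords = torch.arange(5, dtype=torch.float32) - 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        kernel = torch.outer(g, g)
        kernel = (kernel / kernel.sum()).reshape(1, 1, 5, 5).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)
        self.channels = channels

    def forward(self, x):
        return nn.functional.conv2d(x, self.kernel, padding=2, groups=self.channels)

# Toy forward pass on a 4-channel mosaic-like tensor.
x = torch.rand(1, 4, 64, 64)
x = GaussianSmoothing(4)(x)
y = DepthwiseSeparableConv(4, 16)(x)
print(y.shape)   # torch.Size([1, 16, 64, 64]): no spatial down-sampling
```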

4.
Front Comput Neurosci; 18: 1418115, 2024.
Article in English | MEDLINE | ID: mdl-38873286

ABSTRACT

The spiking convolutional neural network (SCNN) is a kind of spiking neural network (SNN) that offers high accuracy on visual tasks and high power efficiency on neuromorphic hardware, making it attractive for edge applications. However, implementing SCNNs on resource-constrained edge devices is challenging because of the large number of convolutional operations and the membrane potential (Vm) storage required. Previous works have focused on timestep reduction, network pruning, and network quantization to enable SCNN implementation on edge devices, but they overlook the similarity between spiking feature maps (SFmaps), which contain significant redundancy and cause unnecessary computation and storage. This work proposes a dual-threshold spiking convolutional neural network (DT-SCNN) that decreases the number of operations and memory accesses by exploiting this similarity. The DT-SCNN employs dual firing thresholds to derive two similar SFmaps from one Vm map, reducing the number of convolutional operations and halving the volume of Vms and convolutional weights. We also propose a variant spatio-temporal back-propagation (STBP) training method with a two-stage strategy that trains DT-SCNNs to run inference in a single timestep. Experimental results show that the dual-threshold mechanism achieves a 50% reduction in operations and data storage for the convolutional layers compared with conventional SCNNs, with no more than a 0.4% accuracy loss on the CIFAR10, MNIST, and Fashion-MNIST datasets. Thanks to the lightweight network and single-timestep inference, the DT-SCNN requires the fewest operations among previous works, paving the way for low-latency, power-efficient edge applications.
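
The core mechanism, deriving two similar spiking feature maps from a single membrane-potential map by applying two firing thresholds, can be sketched as follows. This is a toy NumPy illustration of the idea only; the threshold values are arbitrary, and the two-stage STBP training procedure is not shown.

```python
import numpy as np

def dual_threshold_spikes(v_m, th_low=0.5, th_high=1.0):
    """Derive two similar spiking feature maps from one membrane-potential
    map using two firing thresholds, instead of computing and storing two
    separate convolutions and Vm maps."""
    spikes_low = (v_m >= th_low).astype(np.uint8)    # more permissive map
    spikes_high = (v_m >= th_high).astype(np.uint8)  # stricter, similar map
    return spikes_low, spikes_high

# One convolution result and one Vm map feed both output spike maps.
rng = np.random.default_rng(2)
feature = rng.random((8, 8))     # stand-in for a convolutional layer output
v_m = feature                    # single-timestep, non-leaky accumulation
s_low, s_high = dual_threshold_spikes(v_m)
print(s_low.sum(), s_high.sum())  # the high-threshold map fires a subset
```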

5.
Sci Rep; 14(1): 22249, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333218

ABSTRACT

The rotary motor plays a pivotal role in many motion execution mechanisms. However, an inherent issue arises during the initial installation of the encoder grating: eccentricity between the centers of the encoder grating and the motor shaft. This eccentricity substantially degrades the accuracy of motor angle measurements. To address this challenge, we propose a precision encoder grating mounting system that automates the mounting process. The system mainly comprises a near-sensor detector and a push rod. Using a near-sensor approach, the detector captures images of the rotating encoder grating and computes the eccentricity in real time, which substantially reduces the delays caused by image data transmission and thereby improves the speed and accuracy of the eccentricity calculation. The major contribution of this article is a method for real-time eccentricity calculation that leverages an edge processor within the detector and an edge-vision baseline detection algorithm, enabling real-time determination of the eccentricity and eccentricity angle of the encoder grating. Using the obtained eccentricity and eccentricity angle, the push rod automatically adjusts the position of the encoder grating. In experiments, the detector obtains the eccentricity and eccentricity angle within 2.8 s, the system completes an encoder grating mounting task in 25.1 s on average, and the average residual eccentricity after mounting is 3.8 µm.
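
For an eccentrically mounted grating, a fixed feature on the grating edge traces an approximately sinusoidal radial displacement over one revolution, r(θ) ≈ r₀ + e·cos(θ − φ), where e is the eccentricity and φ the eccentricity angle (valid when e is much smaller than the grating radius). The sketch below fits this common model by linear least squares; it is not the paper's edge-vision baseline detection algorithm, and the measurement values are synthetic.

```python
import numpy as np

def fit_eccentricity(theta, radial_pos):
    """Fit r(theta) = r0 + e*cos(theta - phi) by linear least squares.
    Returns (eccentricity e, eccentricity angle phi in radians)."""
    # r0 + e*cos(theta - phi) = r0 + a*cos(theta) + b*sin(theta)
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    (_, a, b), *_ = np.linalg.lstsq(A, radial_pos, rcond=None)
    return float(np.hypot(a, b)), float(np.arctan2(b, a))

# Synthetic measurements: 3.8 um eccentricity at 30 degrees, with noise.
rng = np.random.default_rng(3)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
true_e, true_phi = 3.8, np.deg2rad(30)
r = 5000.0 + true_e * np.cos(theta - true_phi) + rng.normal(0, 0.2, theta.size)
e, phi = fit_eccentricity(theta, r)
print(f"e = {e:.2f} um, phi = {np.rad2deg(phi):.1f} deg")
```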
