Results 1 - 3 of 3
1.
Sensors (Basel) ; 24(1)2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38202919

ABSTRACT

The deposition of dust and the condensation of fog block the scattering and transmission of light, degrading the performance of optical devices. In this work, a flexible polyethylene terephthalate (PET) foil with active dust-removal and anti-fogging functions is realized by combining an electrodynamic screen (EDS) with an electro-heating device. In lieu of traditional measures of dust-removal efficiency, the peak signal-to-noise ratio (PSNR) is employed for the first time to characterize the dust-removal efficiency of the film. The results show that both dust removal and anti-fogging improve image quality: dust removal increases the PSNR from 28.1 dB to 34.2 dB, and the anti-fogging function raises the film temperature by 16.7 °C in 5 min, reaching a maximum of 41.3 °C. Exploiting the high sensitivity of the PSNR, we propose a fully automatic CMOS image sensor (CIS) film-driving algorithm and demonstrate its feasibility.
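As an illustrative sketch (not the paper's code), the PSNR metric used above to quantify image quality before and after dust removal can be computed from the mean squared error between a clean reference image and a degraded one; `max_val` is assumed to be the peak pixel value (255 for 8-bit images):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    # Work in float64 so the squared differences do not overflow uint8.
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A higher PSNR after cleaning (e.g. 28.1 dB rising to 34.2 dB, as reported) indicates the restored image is closer to the dust-free reference.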

2.
Sensors (Basel) ; 21(3)2021 Jan 22.
Article in English | MEDLINE | ID: mdl-33499338

ABSTRACT

A three-dimensional (3D) image sensor based on the Single-Photon Avalanche Diode (SPAD) requires a time-to-digital converter (TDC) with a wide dynamic range and fine resolution for precise depth calculation. In this paper, we propose a novel high-performance TDC for a SPAD image sensor. In our design, we first present a pulse-width self-restricted (PWSR) delay element capable of providing a steady delay to improve the time precision. We then employ the proposed PWSR delay element to construct a pair of 16-stage Vernier delay rings that effectively enlarge the dynamic range. Moreover, we propose a compact and fast arbiter with a fully symmetric topology to enhance the robustness of the TDC. To validate the performance of the proposed TDC, a prototype 13-bit TDC has been fabricated in a standard 0.18-µm complementary metal-oxide-semiconductor (CMOS) process. The core area is about 200 µm × 180 µm and the total power consumption is nearly 1.6 mW. The proposed TDC achieves a dynamic range of 92.1 ns and a time precision of 11.25 ps. The measured worst-case integral nonlinearity (INL) and differential nonlinearity (DNL) are 0.65 and 0.38 least-significant-bits (LSB), respectively, both below 1 LSB. The experimental results indicate that the proposed TDC is suitable for SPAD-based 3D imaging applications.
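The Vernier principle behind the delay rings can be sketched behaviorally: the start edge propagates through slower stages and the stop edge through faster ones, so each stage closes the gap by the delay difference, which sets the resolution. The sketch below models an ideal, unrolled chain (not the paper's 16-stage ring implementation), with hypothetical per-stage delays chosen so the resolution matches the reported 11.25 ps:

```python
def vernier_code(interval, t_slow=60e-12, t_fast=48.75e-12):
    """Ideal Vernier TDC model: count stages until the fast (stop) edge
    catches the slow (start) edge. Resolution = t_slow - t_fast (11.25 ps here)."""
    start_t = 0.0        # start edge enters the slow chain at t = 0
    stop_t = interval    # stop edge enters the fast chain `interval` later
    stages = 0
    while stop_t > start_t:
        start_t += t_slow
        stop_t += t_fast
        stages += 1
    return stages  # digital code proportional to interval / resolution
```

A 100 ps interval divided by the 11.25 ps resolution yields a code of 9 in this idealized model; a real TDC additionally exhibits the INL/DNL errors quantified in the abstract.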

3.
IEEE Trans Neural Netw Learn Syst ; 34(6): 3205-3219, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35622806

ABSTRACT

Real-time semantic segmentation is widely used in autonomous driving and robotics. Most previous networks achieved high accuracy with complicated, computation-heavy models, while existing lightweight networks generally reduce parameter counts at the cost of segmentation accuracy. Balancing parameters against accuracy is therefore critical for real-time semantic segmentation. In this article, we propose a lightweight multiscale-feature-fusion network (LMFFNet) composed mainly of three types of components: the split-extract-merge bottleneck (SEM-B) block, the feature fusion module (FFM), and the multiscale attention decoder (MAD). The SEM-B block extracts sufficient features with fewer parameters, the FFMs fuse multiscale semantic features to effectively improve segmentation accuracy, and the MAD recovers the details of the input images through an attention mechanism. Without pretraining, LMFFNet-3-8 achieves 75.1% mean intersection over union (mIoU) with 1.4 M parameters at 118.9 frames/s on an RTX 3090 GPU. Further experiments at various resolutions on three other datasets, CamVid, KITTI, and WildDash2, verify that the proposed LMFFNet makes a favorable tradeoff between segmentation accuracy and inference speed for real-time tasks. The source code is publicly available at https://github.com/Greak-1124/LMFFNet.
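The mIoU figure reported above averages, over all classes, the overlap between predicted and ground-truth pixel masks. A minimal sketch of the metric (an illustration, not the repository's evaluation code) on flat label arrays:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes that appear in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                    # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```

In practice the per-class intersections and unions are accumulated over the whole validation set before dividing, so that large and small images are weighted consistently.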
