Results 1 - 3 of 3

1.
Opt Lett ; 48(23): 6192-6195, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38039224

ABSTRACT

Collecting high-quality three-dimensional point-cloud data practically and robustly across varied scenarios has created a strong demand for dToF-based LiDAR systems that combine high ambient-noise rejection with limited optical power consumption, two requirements that are in sharp conflict. To alleviate this conflict, the idea of exploiting the strong ambient-noise rejection of intensity and RGB images is proposed, based on which a lightweight CNN is, to the best of our knowledge, newly designed, achieving state-of-the-art performance with 90× less inference time and 480× fewer FLOPs. With this network deployed on edge devices, a complete AI-LiDAR system is presented that, in simulation experiments, requires 100× fewer signal photons to produce depth images of the same quality.
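
To illustrate the kind of fusion such a network performs, the sketch below shows a minimal PyTorch model that concatenates a noisy dToF depth map with a co-registered RGB/intensity guidance image and predicts a residual correction. The class name, layer sizes, and the depthwise-separable structure are assumptions made for illustration, not the architecture published in the Letter.

```python
# Hypothetical sketch: a lightweight guidance-based depth-denoising CNN.
# Layer sizes, names, and concatenation-based fusion are illustrative
# assumptions, not the network described in the abstract above.
import torch
import torch.nn as nn

class TinyGuidedDenoiser(nn.Module):
    def __init__(self, guide_channels=3, hidden=16):
        super().__init__()
        # Fuse the noisy dToF depth map (1 channel) with the RGB/intensity
        # guidance image by channel concatenation.
        self.net = nn.Sequential(
            nn.Conv2d(1 + guide_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            # Depthwise-separable block keeps the FLOP count low for edge devices.
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),
            nn.Conv2d(hidden, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, noisy_depth, guide):
        # Predict a residual correction so the network only has to learn the noise.
        return noisy_depth + self.net(torch.cat([noisy_depth, guide], dim=1))

# Usage with dummy tensors (batch of one 64x64 frame).
model = TinyGuidedDenoiser()
depth = torch.rand(1, 1, 64, 64)   # noisy dToF depth map
rgb = torch.rand(1, 3, 64, 64)     # co-registered RGB guidance image
refined = model(depth, rgb)        # -> (1, 1, 64, 64)
```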

2.
Opt Lett ; 48(13): 3415-3418, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37390144

ABSTRACT

Cutting-edge imaging systems exhibit low output resolution and high power consumption, which presents challenges for RGB-D fusion algorithms. In practical scenarios, matching the depth-map resolution to that of the RGB image sensor is a crucial requirement. In this Letter, software and hardware are co-designed to implement a lidar system based on a monocular RGB 3D imaging algorithm. A 6.4 × 6.4-mm² deep-learning accelerator (DLA) system-on-chip (SoC) manufactured in 40-nm CMOS is combined with a 3.6-mm² TX-RX integrated chip fabricated in 180-nm CMOS to run the customized single-pixel imaging neural network. Compared with RGB-only monocular depth estimation, the root mean square error is reduced from 0.48 m to 0.3 m on the evaluated dataset, and the output depth-map resolution matches the RGB input.
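
One common way to combine a relative monocular depth prediction with a handful of absolute lidar ranges is to fit a global scale and shift by least squares before any learned refinement. The sketch below illustrates that baseline alignment step under those assumptions; the function name and synthetic data are hypothetical, and this is not presented as the co-designed algorithm of the Letter.

```python
# Hypothetical sketch: align a relative monocular depth prediction to a few
# absolute lidar range samples with a global scale/shift fit (least squares).
# This is a common baseline, not necessarily the method used in the Letter.
import numpy as np

def align_monocular_depth(pred_depth, lidar_depth, lidar_mask):
    """Solve min ||s * pred + t - lidar||^2 over the sparse valid pixels."""
    p = pred_depth[lidar_mask]
    z = lidar_depth[lidar_mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, z, rcond=None)
    return s * pred_depth + t

# Dummy example: a 32x32 monocular prediction with ~16 sparse lidar returns.
rng = np.random.default_rng(0)
pred = rng.uniform(0.1, 1.0, (32, 32))            # relative (unitless) depth
mask = np.zeros((32, 32), dtype=bool)
mask[rng.integers(0, 32, 16), rng.integers(0, 32, 16)] = True
lidar = 2.5 * pred + 0.4                          # pretend metric ground truth
aligned = align_monocular_depth(pred, lidar, mask)
rmse = np.sqrt(np.mean((aligned - lidar) ** 2))   # ~0 for this synthetic case
```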


Subjects
Algorithms; Neural Networks, Computer; Equipment Design; Imaging, Three-Dimensional
3.
Sensors (Basel) ; 23(15), 2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571709

ABSTRACT

Light detection and ranging (LiDAR) technology, a cutting-edge advancement in mobile applications, presents a myriad of compelling use cases, including enhancing low-light photography, capturing and sharing 3D images of fascinating objects, and elevating the overall augmented reality (AR) experience. However, its widespread adoption has been hindered by the prohibitive cost and substantial power consumption of its implementation in mobile devices. To surmount these obstacles, this paper proposes a low-power, low-cost, single-photon avalanche detector (SPAD)-based system-on-chip (SoC) that packages microlens arrays (MLAs) together with a lightweight RGB-guided sparse depth completion neural network for 3D LiDAR imaging. The proposed SoC integrates an 8 × 8 SPAD macropixel array with time-to-digital converters (TDCs) and a charge pump, fabricated in a 180-nm bipolar-CMOS-DMOS (BCD) process. By itself, the SoC serves only as a ranging sensor. A random MLA-based homogenizing diffuser efficiently transforms Gaussian beams into flat-topped beams with a 45° field of view (FOV), enabling flash projection at the transmitter. To further enhance resolution and broaden application possibilities, a lightweight neural network performing RGB-guided sparse depth completion is proposed, expanding the image resolution from 8 × 8 to quarter-video-graphics-array level (QVGA; 256 × 256). Experimental results demonstrate the effectiveness and stability of the hardware, encompassing the SoC and the optical system, as well as the lightweight design and accuracy of the neural network. The state-of-the-art SoC-neural network solution offers a promising foundation for developing consumer-level 3D imaging applications on mobile devices.
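
A typical structure for RGB-guided sparse depth completion is to upsample the coarse depth map to the RGB resolution and let a small CNN refine it with the image as guidance. The sketch below follows that upsample-then-refine pattern purely as an illustration; the module name, layer widths, and fusion scheme are assumptions, not the network shipped with the SoC.

```python
# Hypothetical sketch: RGB-guided completion of an 8x8 SPAD depth map to 256x256.
# The upsample-then-refine structure and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseDepthCompletion(nn.Module):
    def __init__(self, hidden=24):
        super().__init__()
        # Small refinement head: coarse depth (1 ch) + RGB guidance (3 ch) in, depth out.
        self.refine = nn.Sequential(
            nn.Conv2d(4, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, depth_8x8, rgb_256):
        # Naive bilinear upsampling gives a coarse dense prior ...
        coarse = F.interpolate(depth_8x8, size=rgb_256.shape[-2:],
                               mode="bilinear", align_corners=False)
        # ... which the RGB image then guides toward sharper object boundaries.
        return coarse + self.refine(torch.cat([coarse, rgb_256], dim=1))

model = SparseDepthCompletion()
depth = torch.rand(1, 1, 8, 8)        # sparse SPAD macropixel depth
rgb = torch.rand(1, 3, 256, 256)      # RGB guidance at the target resolution
dense = model(depth, rgb)             # -> (1, 1, 256, 256)
```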
