Results 1 - 2 of 2
1.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 4180-4197, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35994546

ABSTRACT

Photon-efficient imaging, which captures 3D images with single-photon sensors, has enabled a wide range of applications. However, two major challenges limit reconstruction performance: low photon counts accompanied by a low signal-to-background ratio (SBR), and multiple returns. In this paper, we propose a unified deep neural network that, for the first time, explicitly addresses these two challenges and simultaneously recovers depth maps and intensity images from photon-efficient measurements. Starting from a general image formation model, our network is composed of one encoder, in which a non-local block exploits long-range correlations in both the spatial and temporal dimensions of the raw measurement, and two decoders, which recover depth and intensity, respectively. Meanwhile, we investigate the statistics of background noise photons and propose a noise prior block to further improve reconstruction performance. The proposed network achieves decent reconstruction fidelity even under extremely low photon counts / SBR and heavy blur caused by the multiple-return effect, significantly surpassing existing methods. Moreover, our network trained on simulated data generalizes well to real-world imaging systems, which greatly extends the application scope of photon-efficient imaging in challenging scenarios with a strict limit on optical flux. Code is available at https://github.com/JiayongO-O/PENonLocal.
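The core idea of the encoder's non-local block is that every spatio-temporal position attends to every other position, so sparse photon evidence can be aggregated across the whole measurement. The sketch below is a minimal, generic self-attention formulation in NumPy (identity embeddings, flattened positions), not the paper's actual implementation; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def nonlocal_block(x):
    """Minimal non-local operation over a flattened spatio-temporal
    volume x of shape (positions, channels). Each output position is a
    similarity-weighted sum over ALL positions, capturing the long-range
    correlations described in the abstract."""
    sim = x @ x.T                                   # pairwise similarities (P, P)
    sim -= sim.max(axis=1, keepdims=True)           # numerical stability
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over positions
    y = weights @ x                                 # aggregate all positions
    return x + y                                    # residual connection

# toy measurement: a 4x4 spatial grid with 8 time bins, 2 feature channels
x = np.random.default_rng(0).normal(size=(4 * 4 * 8, 2))
out = nonlocal_block(x)
print(out.shape)  # (128, 2)
```

A learned version would project `x` into separate query/key/value embeddings; the identity projections here keep the sketch self-contained while preserving the all-pairs aggregation structure.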

2.
Article in English | MEDLINE | ID: mdl-36049012

ABSTRACT

A computational approach to imaging around corners, or non-line-of-sight (NLOS) imaging, is becoming a reality thanks to major advances in imaging hardware and reconstruction algorithms. In a recent development towards practical NLOS imaging, Nam et al. [1] demonstrated a high-speed non-confocal imaging system that operates at 5 Hz, 100x faster than the prior art. This enormous gain in acquisition rate, however, necessitates numerous approximations in light transport, breaking many existing NLOS reconstruction methods that assume an idealized image formation model. To bridge the gap, we present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction. This orchestrated design regularizes the solution space by relaxing the image formation model, resulting in a deep model that generalizes well on real captures despite being exclusively trained on synthetic data. Further, we devise a unified learning framework that enables our model to be flexibly trained using diverse supervision signals, including target intensity images or even raw NLOS transient measurements. Once trained, our model renders both intensity and depth images at inference time in a single forward pass, processing more than 5 captures per second on a high-end GPU. Through extensive qualitative and quantitative experiments, we show that our method outperforms prior physics- and learning-based approaches on both synthetic and real measurements. We anticipate that our method, along with the fast capturing system, will accelerate future development of NLOS imaging for real-world applications that require high-speed imaging.
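One of the physics priors named above, volume rendering, produces a 2D intensity image and a 2D depth map from a per-depth-bin volume by compositing along the depth axis. The sketch below shows only the standard front-to-back alpha-compositing form of that operation in NumPy; the function name, shapes, and toy inputs are illustrative assumptions, not the paper's model.

```python
import numpy as np

def volume_render(density, intensity, bin_depths):
    """Composite per-depth-bin values into 2D images via standard
    front-to-back alpha compositing (the volume-rendering equation).
    density:    (D, H, W) non-negative per-bin densities
    intensity:  (D, H, W) per-bin intensity values
    bin_depths: (D,)      depth of each bin
    """
    alpha = 1.0 - np.exp(-density)                 # per-bin opacity
    ones = np.ones((1,) + alpha.shape[1:])
    trans = np.cumprod(                            # transmittance: light
        np.concatenate([ones, 1.0 - alpha[:-1]]),  # surviving all nearer bins
        axis=0)
    w = trans * alpha                              # compositing weights
    img = (w * intensity).sum(axis=0)              # expected intensity
    depth = (w * bin_depths[:, None, None]).sum(axis=0)  # expected depth
    return img, depth

# toy volume: 16 depth bins over an 8x8 image, depths in [0, 1]
rng = np.random.default_rng(0)
density = rng.uniform(0.0, 2.0, size=(16, 8, 8))
intensity = rng.uniform(0.0, 1.0, size=(16, 8, 8))
img, depth = volume_render(density, intensity, np.linspace(0.0, 1.0, 16))
print(img.shape, depth.shape)  # (8, 8) (8, 8)
```

Because the compositing weights are non-negative and sum to at most one per pixel, the rendered depth always stays within the range of the bin depths, which is one reason such a prior can regularize a learned reconstruction.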
