ABSTRACT
3D single-photon LiDAR imaging plays an important role in many applications. However, full deployment of this modality will require the analysis of low signal-to-noise-ratio target returns and very high data volumes. This is particularly evident when imaging through obscurants or in high ambient background light conditions. This paper proposes a multiscale approach for 3D surface detection from the photon timing histogram, permitting a significant reduction in data volume. The resulting surfaces are background-free and can be used to infer depth and reflectivity information about the target. We demonstrate this by proposing a hierarchical Bayesian model for the 3D reconstruction and spectral classification of multispectral single-photon LiDAR data. The reconstruction method promotes spatial correlation between point-cloud estimates and uses a coordinate gradient descent algorithm for parameter estimation. Results on simulated and real data show the benefits of the proposed target detection and reconstruction approaches when compared with state-of-the-art processing algorithms.
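The surface-detection idea described above can be illustrated with a minimal, hypothetical sketch: bin photon arrival times into a timing histogram, estimate the flat ambient background, and keep only bins that rise significantly above it, yielding background-free surface candidates. All names and numerical values here are illustrative; the paper's multiscale Bayesian method is considerably more sophisticated.

```python
import numpy as np

def detect_surface_bins(timestamps, n_bins, alpha=6.0):
    # Bin photon arrival times into a timing histogram, estimate the flat
    # ambient background from the histogram median, and flag bins whose
    # counts exceed the background by `alpha` Poisson standard deviations.
    hist, _ = np.histogram(timestamps, bins=n_bins, range=(0, n_bins))
    bg = np.median(hist)
    threshold = bg + alpha * np.sqrt(max(bg, 1.0))
    return hist, np.flatnonzero(hist > threshold)

rng = np.random.default_rng(0)
ambient = rng.uniform(0, 1024, size=5000)      # uniform background photons
target = rng.normal(400.5, 1.5, size=300)      # narrow target return near bin 400
hist, surface_bins = detect_surface_bins(np.concatenate([ambient, target]), 1024)
```

The flagged bins cluster around the true return while the uniform ambient photons are rejected, which is the sense in which the retained surfaces are background-free.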
ABSTRACT
We investigate the depth imaging of objects through various densities of different obscurants (water fog, glycol-based vapor, and incendiary smoke) using a time-correlated single-photon detection system operating at a wavelength of 1550 nm with an average optical output power of approximately 1.5 mW. The system consisted of a monostatic scanning transceiver unit used in conjunction with a picosecond laser source and an individual Peltier-cooled InGaAs/InP single-photon avalanche diode (SPAD) detector. We acquired depth and intensity data of targets imaged through distances of up to 24 meters for the different obscurants. We compare several statistical algorithms that reconstruct both the depth and intensity images for short data acquisition times, including very low signal returns in the photon-starved regime.
ABSTRACT
Single-photon multispectral light detection and ranging (LiDAR) approaches have emerged as a route to color reconstruction and enhanced target identification in photon-starved imaging scenarios. In this paper, we present a three-dimensional imaging system based on a time-of-flight approach which is capable of simultaneous multispectral measurements using only one single-photon detector. Unlike other techniques, this approach does not require a wavelength router in the receiver channel. By observing multiple wavelengths at each spatial location, or pixel (four discrete visible wavelengths are used in this work), we obtain a single waveform with wavelength-to-time-mapped peaks. The time-mapped peaks are created by the known chromatic group delay dispersion in the laser source's optical fiber, resulting in temporal separations between these peaks of approximately 200 to 1000 ps in this case. We propose a multispectral single-waveform algorithm to fit these multi-peaked LiDAR waveforms and then reconstruct the color (spectral response) and depth profiles for the entire image. To the best of our knowledge, this is the first dedicated computational method operating in the photon-starved regime capable of discriminating multiple peaks associated with different wavelengths in a single-pixel waveform and reconstructing spectral responses and depth.
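The wavelength-to-time mapping above follows from first-order chromatic dispersion: after the fiber, each wavelength acquires a delay proportional to its offset from a reference wavelength. A minimal sketch, with purely illustrative dispersion and fiber-length values (not those of the actual system):

```python
def wavelength_delays_ps(wavelengths_nm, ref_nm, dispersion_ps_nm_km, fiber_km):
    # First-order chromatic dispersion: delay = D * L * (lambda - lambda_ref),
    # mapping each wavelength to a distinct arrival time after the fiber.
    return [dispersion_ps_nm_km * fiber_km * (w - ref_nm) for w in wavelengths_nm]

# Four hypothetical visible wavelengths at 50 nm spacing; D and L are
# illustrative values chosen so that adjacent peaks land about 250 ps apart,
# inside the 200-1000 ps range quoted in the text.
delays = wavelength_delays_ps([500, 550, 600, 650], 500, 100.0, 0.05)
```

With four such delays, the four spectral returns from a single pixel appear as four temporally separated peaks in one waveform, which is what the fitting algorithm then disentangles.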
ABSTRACT
Recently, time-of-flight LiDAR using the single-photon detection approach has emerged as a potential solution for three-dimensional imaging in challenging measurement scenarios, such as over distances of many kilometres. The high sensitivity and picosecond timing resolution afforded by single-photon detection offer high-resolution depth profiling of remote, complex scenes while maintaining low-power optical illumination. These properties are ideal for imaging in highly scattering environments, such as through atmospheric obscurants, for example fog and smoke. In this paper, we present the reconstruction of depth profiles of moving objects through high levels of obscurant, equivalent to five attenuation lengths between transceiver and target, at stand-off distances of up to 150 m. We use a robust, statistically based processing algorithm designed for the real-time reconstruction of single-photon data obtained in the presence of atmospheric obscurants, including uncertainty estimates for the depth reconstruction. This demonstration of real-time 3D reconstruction of moving scenes points the way forward for high-resolution imaging from mobile platforms in degraded visual environments.
ABSTRACT
In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using times of arrival of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination, which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise complicates not only the observation model classically used for 3D reconstruction but also the estimation procedure, which requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while remaining robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. This new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.
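The robustness argument above can be illustrated with a toy non-iterative estimator (a hypothetical stand-in for the paper's similarity measure, not the actual method): correlate the per-pixel histogram against circular shifts of the instrument response and take the best-matching shift as the depth. Because the response has a fixed sum, a constant ambient background shifts every score by the same amount and leaves the argmax, and hence the depth estimate, unchanged.

```python
import numpy as np

def depth_by_correlation(hist, irf):
    # Non-iterative depth estimate: score every circular shift of the
    # instrument response against the histogram and return the best match.
    scores = [np.dot(hist, np.roll(irf, d)) for d in range(len(hist))]
    return int(np.argmax(scores))

T = 200
t = np.arange(T)
dist = np.minimum(t, T - t)                    # circular distance to bin 0
irf = np.exp(-0.5 * (dist / 2.0) ** 2)         # symmetric response peaked at bin 0
signal = 40.0 * np.exp(-0.5 * ((t - 120) / 2.0) ** 2)  # return at depth bin 120

d_clean = depth_by_correlation(signal, irf)        # no ambient background
d_noisy = depth_by_correlation(signal + 7.0, irf)  # strong flat background added
```

Both calls recover the same depth bin, which is the sense in which a simple model plus a background-invariant score can replace an iterative background-estimation step.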
ABSTRACT
Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we show a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data was acquired in broad daylight from distances up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.
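A minimal illustration of handling an unknown number of surfaces per pixel (a hypothetical stand-in, not the paper's statistical framework): find every local maximum in the pixel's timing histogram that stands sufficiently above the background, so the surface count falls out of the data rather than being fixed in advance, as happens when imaging through clutter or semi-transparent occluders.

```python
import numpy as np

def find_surfaces(hist, min_prominence):
    # Return every local maximum that stands above the estimated (median)
    # background by at least `min_prominence` counts; the number of
    # detected surfaces per pixel is therefore not fixed in advance.
    bg = np.median(hist)
    return [t for t in range(1, len(hist) - 1)
            if hist[t] > hist[t - 1] and hist[t] >= hist[t + 1]
            and hist[t] - bg >= min_prominence]

hist = np.zeros(256)
hist[[60, 61, 62]] = [5, 20, 5]        # first surface (e.g. foreground clutter)
hist[[180, 181, 182]] = [4, 15, 4]     # second surface behind it
surfaces = find_surfaces(hist, min_prominence=10)   # -> [61, 181]
```

A pixel with one return yields one peak, a cluttered pixel yields several, and an empty pixel yields none, mirroring the variable-surface-count behaviour described above.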