ABSTRACT
Stand-off detection and characterization of scattering media such as fog and aerosols is an important task in environmental monitoring and related applications. We present, for the first time, a stand-off characterization of sprayed water fog in the time domain. Using time-correlated single-photon counting, we measure transient signatures of photons reflected off a target within the fog volume and can distinguish ballistic from scattered photons. By applying a forward propagation model, we reconstruct the scattered photon paths and determine the fog's mean scattering length µ_scat in a range of 1.55 m to 1.86 m. Moreover, in a second analysis, we back-project the recorded transients to reconstruct the scene using virtual Huygens-Fresnel wavefronts. While in medium-density fog some ballistic contributions remain in the signatures, we demonstrate that in high-density fog all recorded photons are scattered at least once. This work may pave the way to novel characterization tools for, and enhanced imaging in, scattering media.
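As an illustration of how a mean scattering length can be inferred from the ballistic photon fraction, the following minimal sketch applies the Beer-Lambert law. The path length and ballistic fraction below are made-up example values, not measurements from the paper, and this is not the paper's full forward propagation model:

```python
import math

def mean_scattering_length(path_length_m, ballistic_fraction):
    """Beer-Lambert estimate: the fraction of unscattered (ballistic)
    photons surviving a path of length L is f = exp(-L / mu_scat),
    so mu_scat = -L / ln(f)."""
    return -path_length_m / math.log(ballistic_fraction)

# Illustrative (assumed) numbers: over a 10 m optical path through the
# fog, 0.4% of the detected photons remain ballistic.
mu = mean_scattering_length(10.0, 0.004)  # ~1.8 m
```

Under these assumed numbers the estimate lands in the same order of magnitude as the reported 1.55-1.86 m range.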
ABSTRACT
Recent years have given rise to a large number of techniques for "looking around corners", i.e., for reconstructing or tracking occluded objects from indirect light reflections off a wall. While the direct view of cameras is routinely calibrated in computer vision applications, the calibration of non-line-of-sight setups has so far relied on manual measurement of the most important dimensions (device positions, wall position and orientation, etc.). In this paper, we propose a method for calibrating time-of-flight-based non-line-of-sight imaging systems that relies on mirrors as known targets. A roughly determined initialization is refined in order to optimize for spatio-temporal consistency. Our system is general enough to be applicable to a variety of sensing scenarios ranging from single sources/detectors via scanning arrangements to large-scale arrays. It is robust to poor initialization, and the achieved accuracy is proportional to the depth resolution of the camera system.
ABSTRACT
Optical sensing with single-photon-counting avalanche diode detectors has become a versatile approach for ranging and low-light-level imaging. In this paper, we compare time-correlated and uncorrelated imaging of single-photon events using an InGaAs single-photon avalanche diode (SPAD) sensor with a 32 × 32 focal plane array. We compare ranging, imaging, and photon flux measurement capabilities at shortwave infrared wavelengths and determine the minimum number of photon event measurements needed for reliable scene reconstruction. With time-correlated single-photon counting (TCSPC), we obtain range images with centimeter resolution and determine the relative intensity. Using uncorrelated single photon counting (USPC), we demonstrate photon flux estimation with a high dynamic range from 2 × 10⁴ to 1.3 × 10⁷ counts per second. Finally, we demonstrate imaging, ranging, and photon flux measurements of a moving target from a few samples at a frame rate of 50 kHz.
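A minimal sketch of TCSPC ranging from a timing histogram follows; the photon arrival times are simulated, and the bin width, timing jitter, and count numbers are assumptions for illustration, not the sensor's actual parameters:

```python
import random
from collections import Counter

C = 0.299792458  # speed of light in m/ns

def range_from_histogram(arrival_times_ns, bin_ns=0.1):
    """TCSPC ranging sketch: histogram the photon arrival times, take
    the peak bin as the round-trip time, convert to one-way range."""
    bins = Counter(round(t / bin_ns) for t in arrival_times_ns)
    peak_bin, _ = bins.most_common(1)[0]
    return C * (peak_bin * bin_ns) / 2.0

random.seed(0)
t_rt = 2 * 15.0 / C  # round-trip time for a target at 15 m
# 500 signal photons with 50 ps timing jitter, plus 2000 uniformly
# distributed background counts over a 200 ns window
times = [random.gauss(t_rt, 0.05) for _ in range(500)]
times += [random.uniform(0.0, 200.0) for _ in range(2000)]
d = range_from_histogram(times)  # recovered range, ~15 m
```

A 0.1 ns bin corresponds to about 1.5 cm of one-way range, illustrating how centimeter-level resolution follows directly from picosecond timing.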
ABSTRACT
We investigate the depth imaging of objects through various densities of different obscurants (water fog, glycol-based vapor, and incendiary smoke) using a time-correlated single-photon detection system that had an operating wavelength of 1550 nm and an average optical output power of approximately 1.5 mW. It consisted of a monostatic scanning transceiver unit used in conjunction with a picosecond laser source and an individual Peltier-cooled InGaAs/InP single-photon avalanche diode (SPAD) detector. We acquired depth and intensity data of targets imaged through distances of up to 24 m for the different obscurants. We compare several statistical algorithms that reconstruct both the depth and intensity images for short data acquisition times, including very low signal returns in the photon-starved regime.
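Depth reconstruction in the photon-starved regime is often approached by matched filtering a sparse timing histogram against the instrument response; the toy histogram and Gaussian response below are assumptions for illustration, not the statistical algorithms compared in the paper:

```python
import math

def matched_filter_depth(hist, irf):
    """Photon-starved depth estimation sketch: cross-correlate a
    sparse timing histogram with an (assumed) instrument response
    function and take the lag with the highest score; the score
    itself can serve as a relative intensity estimate."""
    n, m = len(hist), len(irf)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n - m + 1):
        score = sum(hist[lag + i] * irf[i] for i in range(m))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag, best_score

# Toy example: 5 signal photons clustered near bin 40 in a 100-bin
# histogram, plus two isolated background counts; Gaussian-shaped
# response peaking at offset 2.
hist = [0] * 100
hist[40] = 3; hist[41] = 2; hist[7] = 1; hist[73] = 1
irf = [math.exp(-(i - 2) ** 2 / 2.0) for i in range(5)]
lag, score = matched_filter_depth(hist, irf)
depth_bin = lag + 2  # add the response peak offset
```

Even with only five signal photons against background counts, the correlation peak localizes the surface to the correct bin, which is the essence of working in the photon-starved regime.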
ABSTRACT
Direct observation of light in flight is enabled by recent avalanche photodiode arrays capable of time-correlated single-photon counting. In contrast to classical imaging, imaging of light in flight depends on the relative sensor position, which is studied in detail by measurement and analysis of light pulses propagating at different angles. The time differences of arrival are analyzed to determine the propagation angle and distance of arbitrary light paths. Further analysis shows that light pulses can appear to travel at superluminal or subluminal velocities.
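The super- or subluminal appearance can be illustrated with the classic apparent-velocity relation known from superluminal motion in astronomy, evaluated for a pulse traveling at the speed of light. This is a standard geometric result used here for illustration, not a reproduction of the paper's analysis:

```python
import math

C = 299792458.0  # speed of light in m/s

def apparent_velocity(theta_rad):
    """Apparent transverse velocity of a light pulse propagating at
    angle theta to the line of sight (theta = 0: directly toward the
    sensor): v_app = c * sin(theta) / (1 - cos(theta))."""
    return C * math.sin(theta_rad) / (1.0 - math.cos(theta_rad))

v_toward = apparent_velocity(math.radians(60))   # superluminal
v_away = apparent_velocity(math.radians(120))    # subluminal
```

At 60° the pulse appears to move at √3 ≈ 1.73 times the speed of light, while at 120° it appears slowed to about 0.58 c, matching the qualitative behavior described above.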
ABSTRACT
Range-gated active imaging is a well-known technique for night vision and for vision enhancement in scattering environments. Many papers have demonstrated the performance enhancement brought by range gating. However, no studies have systematically investigated and quantified the real gain of range gating, in comparison with a classical imaging system, in controlled smoke densities. In this paper, a systematic investigation of the performance enhancement of range-gated viewing is presented in comparison with a color camera representing human vision. The influence of range gating and of the gate shape is studied. We demonstrate that a short-wave infrared (SWIR) range-gated active imaging system can enhance the penetration depth in dense smoke by a factor of 6.9. Furthermore, we show that the combination of a short pulse with a short integration time gives better-contrasted images in dense scattering media.
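Why gating helps can be illustrated with a toy signal-to-backscatter model: backscatter from the medium arrives spread out in time, while the target return arrives at one fixed time, so a short gate around the target rejects most of the backscatter. The exponential backscatter profile and all numbers below are assumptions for illustration and are unrelated to the measured factor of 6.9:

```python
import math

def signal_to_backscatter(gate_open_ns, gate_close_ns, target_t_ns,
                          tau_ns=10.0, target_counts=10.0):
    """Toy range-gating model: backscatter arrives with an assumed
    exponentially decaying rate exp(-t / tau); the target return
    arrives at a single fixed time.  Returns the ratio of target
    counts to backscatter counts collected within the gate."""
    # closed-form integral of exp(-t/tau) over the gate interval
    backscatter = tau_ns * (math.exp(-gate_open_ns / tau_ns)
                            - math.exp(-gate_close_ns / tau_ns))
    in_gate = gate_open_ns <= target_t_ns <= gate_close_ns
    signal = target_counts if in_gate else 0.0
    return signal / backscatter

# Long ungated exposure vs. a 2 ns gate centered on the target return
ungated = signal_to_backscatter(0.0, 50.0, 20.0)
gated = signal_to_backscatter(19.0, 21.0, 20.0)
```

In this toy configuration the short gate improves the signal-to-backscatter ratio by more than an order of magnitude, which is the mechanism behind the contrast gains reported above.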
ABSTRACT
Time-of-flight sensing with single-photon sensitivity enables new approaches for localizing objects outside a sensor's field of view by analyzing backscattered photons. In this Letter, we study the application of Geiger-mode avalanche photodiode arrays and eye-safe infrared lasers, and provide experimental data on the direct visualization of backscattered light in flight as well as direct and indirect vision of targets in line-of-sight and non-line-of-sight configurations at shortwave infrared wavelengths.
ABSTRACT
In this paper, we discuss a method of image coding by multiple exposure of range-gated images. This method enlarges the depth mapping range of range-gated imaging systems exponentially with the number of utilized images. We developed a theoretical model that precisely predicts the number of permutations that can be used for image coding. For what we believe is the first time, we realized an image coding sequence of three range-gated images that enlarges the depth mapping range by a factor of 12. We demonstrate three-dimensional imaging in a range of 460 to 1000 m using a laser pulse width of 300 ns. Because of the impact of noise, a critical linking error can occur during the encoding of the intensity images. This error can be reduced by applying effective noise-reduction strategies and a threshold on the tolerated drift of intensity levels.
ABSTRACT
In this paper, a new method to evaluate gated viewing systems and range-gated imaging sequences using a Lissajous-type eye pattern is presented. This approach enables the comparison of gated viewing systems and defines clear criteria for the depth resolution and depth mapping capabilities of an active imaging system. Distinct parameters can be depicted and assessed within a single chart. This approach is therefore a first proposal for an evaluation procedure for gated viewing systems and an enhanced analysis tool with respect to 3D capabilities.
ABSTRACT
The observation of objects located in inaccessible regions is a recurring challenge in a wide variety of important applications. Recent work has shown that using rare and expensive optical setups, indirect diffuse light reflections can be used to reconstruct objects and two-dimensional (2D) patterns around a corner. Here we show that occluded objects can be tracked in real time using much simpler means, namely a standard 2D camera and a laser pointer. Our method fundamentally differs from previous solutions by approaching the problem in an analysis-by-synthesis sense. By repeatedly simulating light transport through the scene, we determine the set of object parameters that most closely fits the measured intensity distribution. We experimentally demonstrate that this approach is capable of following the translation of unknown objects, and translation and orientation of a known object, in real time.
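The analysis-by-synthesis idea can be sketched with a deliberately simplified forward model and a grid search: repeatedly render candidate object positions and keep the one whose simulated intensities best match the measurement. The three-bounce intensity model, the 2D geometry, and all positions below are illustrative assumptions, not the paper's renderer:

```python
import math

def render(obj_xy, laser_spot, sensor_pixels):
    """Minimal forward model: light leaves the laser spot on the
    wall, reflects off a point-like object, and returns to a set of
    observed wall pixels; intensity falls off with the square of
    each path segment (Lambertian terms omitted for brevity)."""
    out = []
    for px in sensor_pixels:
        d1 = math.dist(laser_spot, obj_xy)
        d2 = math.dist(obj_xy, px)
        out.append(1.0 / (d1 ** 2 * d2 ** 2))
    return out

def fit_position(measured, laser_spot, sensor_pixels, grid):
    """Analysis by synthesis: simulate light transport for each
    candidate position and keep the one whose rendered intensities
    best match the measurement (least squares, up to a global
    scale factor)."""
    def err(cand):
        sim = render(cand, laser_spot, sensor_pixels)
        s = (sum(m * r for m, r in zip(measured, sim))
             / sum(r * r for r in sim))  # best global scale
        return sum((m - s * r) ** 2 for m, r in zip(measured, sim))
    return min(grid, key=err)

# Five observed wall pixels, a hidden object at (0.6, 1.4), and a
# noiseless synthetic measurement generated from the forward model.
pixels = [(x, 0.0) for x in (-1.0, -0.5, 0.0, 0.5, 1.0)]
truth = (0.6, 1.4)
meas = render(truth, (0.0, 0.0), pixels)
grid = [(x / 10, y / 10) for x in range(-15, 16) for y in range(5, 25)]
est = fit_position(meas, (0.0, 0.0), pixels, grid)
```

The grid search here stands in for the paper's real-time optimizer; with noiseless data the best-fitting candidate is exactly the true position.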
ABSTRACT
We present a technique to overcome the depth resolution limitation of 3D active imaging. Applying microsecond laser pulses and sensor gate widths, a scene several hundred meters deep is illuminated and recorded in a single image. The trapezoid-shaped range-intensity profile is analyzed to obtain both the reflectivity and the depth of the scene. We demonstrate a 3D scene reconstruction over a depth range of 650 to 1550 m from only three images with an accuracy of <30 m. This depth accuracy is 10 times better than estimated from the classical resolution limit for depth-scanning active imaging with a similar number of images. This technique therefore enables superresolution depth mapping with reduced image data processing.
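The decoding of depth and reflectivity from overlapping trapezoid range-intensity profiles can be sketched as follows, assuming an idealized noise-free linear edge shared by two gates (a textbook simplification, not the authors' exact profile model): within the shared edge, one gate's return falls linearly with depth while the other rises, so their ratio encodes depth and their sum encodes reflectivity.

```python
def decode(i_a, i_b, z0, z1):
    """Idealized decoding on a shared linear edge [z0, z1]: gate A's
    return falls linearly with depth, gate B's rises, so the ratio
    gives depth and the sum gives reflectivity."""
    reflectivity = i_a + i_b
    depth = z0 + (z1 - z0) * i_b / (i_a + i_b)
    return depth, reflectivity

# Forward model for a target of reflectivity 0.8 at 1100 m within an
# assumed 650-1550 m window (numbers chosen to echo the scene above).
z0, z1, rho, z = 650.0, 1550.0, 0.8, 1100.0
i_b = rho * (z - z0) / (z1 - z0)
i_a = rho - i_b
depth, refl = decode(i_a, i_b, z0, z1)
```

In the noise-free case the decoding is exact; in practice, noise in the two intensity images limits the achievable depth accuracy, consistent with the <30 m figure reported above.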