ABSTRACT
We demonstrate a fully submerged underwater LiDAR transceiver system based on single-photon detection technologies. The LiDAR imaging system used a silicon single-photon avalanche diode (SPAD) detector array fabricated in complementary metal-oxide semiconductor (CMOS) technology to measure photon time-of-flight using picosecond resolution time-correlated single-photon counting. The SPAD detector array was directly interfaced to a Graphics Processing Unit (GPU) for real-time image reconstruction capability. Experiments were performed with the transceiver system and target objects immersed in a water tank at a depth of 1.8 meters, with the targets placed at a stand-off distance of approximately 3 meters. The transceiver used a picosecond pulsed laser source with a central wavelength of 532 nm, operating at a repetition rate of 20 MHz and average optical power of up to 52 mW, dependent on scattering conditions. Three-dimensional imaging was demonstrated by implementing a joint surface detection and distance estimation algorithm for real-time processing and visualization, which achieved images of stationary targets with up to 7.5 attenuation lengths between the transceiver and the target. The average processing time per frame was approximately 33 ms, allowing real-time three-dimensional video demonstrations of moving targets at ten frames per second at up to 5.5 attenuation lengths between transceiver and target.
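The depth scale in these underwater measurements follows directly from the photon time-of-flight once the reduced speed of light in water is accounted for. A minimal sketch of that conversion (not the authors' code; the refractive index value is a standard approximation):

```python
# Illustrative sketch: converting a round-trip photon time-of-flight measured
# by TCSPC into a one-way target distance underwater. Light travels more
# slowly in water than in vacuum by the refractive index (~1.33).

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # approximate refractive index of water (assumed)

def underwater_distance(tof_s: float) -> float:
    """Round-trip time-of-flight (seconds) -> one-way distance (meters)."""
    v_water = C_VACUUM / N_WATER
    return v_water * tof_s / 2.0

# A ~3 m stand-off, as used in the experiments, corresponds to a round trip of:
tof = 2 * 3.0 * N_WATER / C_VACUUM
print(f"{tof * 1e9:.1f} ns")   # ~26.6 ns
```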
ABSTRACT
We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single-photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimations that sub-milliwatt average power levels would be required for kilometer range depth measurements.
ABSTRACT
Large-format single-photon avalanche diode (SPAD) arrays often suffer from low fill-factors, the ratio of the active area to the overall pixel area. The detection efficiency of these detector arrays can be vastly increased with the integration of microlens arrays, which are designed to concentrate incident light onto the active areas and may be refractive or diffractive in nature. The ability of diffractive optical elements (DOEs) to efficiently cover a square or rectangular pixel, combined with their capability of working as fast lenses (i.e., ~f/3), makes them versatile and practical lens designs for use in sparse photon applications using microscale, large-format detector arrays. Binary-mask-based photolithography was employed to fabricate fast diffractive microlenses for two designs of 32 × 32 SPAD detector arrays, each design having a different pixel pitch and fill-factor. A spectral characterization of the lenses is performed, as well as an analysis of performance under different illumination conditions from wide- to narrow-angle illumination (i.e., f/2 to f/22 optics). The performance of the microlenses presented exceeds previous designs in terms of both concentration factor (i.e., increase in light collection capability) and lens speed. Concentration factors greater than 33× are achieved for focal lengths in the substrate material as short as 190 µm, representing a microlens f-number of 3.8 and providing a focal spot diameter of <4 µm. These results were achieved while retaining an extremely high degree of performance uniformity across the 1024 devices in each case, which demonstrates the significant benefits to be gained by the implementation of DOEs as part of an integrated detector system using SPAD arrays with very small active areas.
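The two figures of merit quoted above reduce to simple ratios. A hedged sketch of that arithmetic (the 50 µm aperture is an illustrative assumption consistent with the quoted f/3.8 at a 190 µm focal length, not a value taken from the text):

```python
# Sketch (not the authors' code) of microlens figures of merit.
# f-number: focal length divided by lens aperture (here, the pixel pitch).
# Concentration factor: detected counts with the microlens relative to the
# bare detector, i.e. the gain in light collection.

def f_number(focal_length_um: float, aperture_um: float) -> float:
    return focal_length_um / aperture_um

def concentration_factor(counts_with_lens: float, counts_bare: float) -> float:
    return counts_with_lens / counts_bare

# With a 190 um focal length in the substrate and an assumed 50 um aperture:
print(f_number(190.0, 50.0))   # 3.8, i.e. an f/3.8 microlens
```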
ABSTRACT
We investigate the depth imaging of objects through various densities of different obscurants (water fog, glycol-based vapor, and incendiary smoke) using a time-correlated single-photon detection system which had an operating wavelength of 1550 nm and an average optical output power of approximately 1.5 mW. It consisted of a monostatic scanning transceiver unit used in conjunction with a picosecond laser source and an individual Peltier-cooled InGaAs/InP single-photon avalanche diode (SPAD) detector. We acquired depth and intensity data of targets imaged through distances of up to 24 meters for the different obscurants. We compare several statistical algorithms which reconstruct both the depth and intensity images for short data acquisition times, including very low signal returns in the photon-starved regime.
ABSTRACT
Three-dimensional imaging in underwater environments was investigated using a picosecond resolution silicon single-photon avalanche diode (SPAD) detector array fabricated in complementary metal-oxide semiconductor (CMOS) technology. Each detector in the 192 × 128 SPAD array had an individual time-to-digital converter allowing rapid, simultaneous acquisition of data for the entire array using the time-correlated single-photon counting approach. A picosecond pulsed laser diode source operating at a wavelength of 670 nm was used to illuminate the underwater scenes, emitting an average optical power up to 8 mW. Both stationary and moving targets were imaged under a variety of underwater scattering conditions. The acquisition of depth and intensity videos of moving targets was demonstrated in dark laboratory conditions through scattering water, equivalent to having up to 6.7 attenuation lengths between the transceiver and target. Data were analyzed using a pixel-wise approach, as well as an image processing algorithm based on a median filter and polynomial approximation.
ABSTRACT
By illumination of target scenes using a set of different wavelengths, we demonstrate color classification of scenes, as well as depth estimation, in photon-starved images. The spectral signatures are classified with a new advanced statistical image processing method from measurements of the same scene, in this case using combinations of 33, 16, 8 or 4 different wavelengths in the range 500-820 nm. This approach makes it possible to perform color classification and depth estimation on images containing as few as one photon per pixel, on average. Compared to single wavelength imaging, this approach improves target discrimination by extracting more spectral information, which, in turn, improves the depth estimation since this approach is robust to changes in target reflectivity. We demonstrate color classification and depth profiling of complex targets at average signal levels as low as 1.0 photons per pixel from as few as 4 different wavelength measurements.
ABSTRACT
Single-photon multispectral light detection and ranging (LiDAR) approaches have emerged as a route to color reconstruction and enhanced target identification in photon-starved imaging scenarios. In this paper, we present a three-dimensional imaging system based on a time-of-flight approach which is capable of simultaneous multispectral measurements using only one single-photon detector. Unlike other techniques, this approach does not require a wavelength router in the receiver channel. By observing multiple wavelengths at each spatial location, or per pixel (four discrete visible wavelengths are used in this work), we can obtain a single waveform with wavelength-to-time mapped peaks. The time-mapped peaks are created by the known chromatic group delay dispersion in the laser source's optical fiber, resulting in temporal separations between these peaks of approximately 200 to 1000 ps in this case. A multispectral single waveform algorithm was proposed to fit these multiple peaked LiDAR waveforms, and then reconstruct the color (spectral response) and depth profiles for the entire image. To the best of our knowledge, this is the first dedicated computational method operating in the photon-starved regime capable of discriminating multiple peaks associated with different wavelengths in a single pixel waveform and reconstructing spectral responses and depth.
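The wavelength-to-time mapping described above rests on chromatic group delay dispersion: two wavelengths separated by Δλ emerge from a fiber of length L with a relative delay of roughly D·L·Δλ, where D is the fiber's dispersion parameter. A sketch with assumed, illustrative fiber parameters (not the authors' values):

```python
# Hedged sketch of wavelength-to-time mapping via group delay dispersion.
# The relative delay between two wavelengths after propagating a fiber of
# length L is approximately dt = D * L * d_lambda, with D the dispersion
# parameter in ps/(nm*km). All numeric values below are assumptions.

def relative_delay_ps(d_lambda_nm: float, D_ps_per_nm_km: float, L_km: float) -> float:
    """Relative arrival-time separation, in picoseconds."""
    return D_ps_per_nm_km * L_km * d_lambda_nm

# e.g. two visible wavelengths 20 nm apart in a strongly dispersive fiber
# (D = 100 ps/nm/km assumed) after 0.25 km of propagation:
print(relative_delay_ps(20.0, 100.0, 0.25))  # 500.0 ps, within the quoted 200-1000 ps band
```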
ABSTRACT
Single-photon avalanche diode (SPAD) detector arrays generally suffer from having a low fill-factor, in which the photo-sensitive area of each pixel is small compared to the overall area of the pixel. This paper describes the integration of different configurations of high efficiency diffractive optical microlens arrays onto a 32 × 32 SPAD array, fabricated using a 0.35 µm CMOS technology process. The characterization of SPAD arrays with integrated microlens arrays is reported over the spectral range of 500-900 nm, and a range of f-numbers from f/2 to f/22. We report an average concentration factor of 15 measured for the entire SPAD array with integrated microlens array. The integrated SPAD and microlens array demonstrated a very high uniformity in overall efficiency.
ABSTRACT
A depth imaging system, based on the time-of-flight approach and the time-correlated single-photon counting (TCSPC) technique, was investigated for use in highly scattering underwater environments. The system comprised a pulsed supercontinuum laser source, a monostatic scanning transceiver, with a silicon single-photon avalanche diode (SPAD) used for detection of the returned optical signal. Depth images were acquired in the laboratory at stand-off distances of up to 8 attenuation lengths, using per-pixel acquisition times in the range 0.5 to 100 ms, at average optical powers in the range 0.8 nW to 950 µW. In parallel, a LiDAR model was developed and validated using experimental data. The model can be used to estimate the performance of the system under a variety of scattering conditions and system parameters.
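The "attenuation lengths" figure used throughout these underwater experiments can be unpacked with simple arithmetic. A hedged sketch (the attenuation coefficient below is illustrative, not a measured value): the number of attenuation lengths is the product of the water's beam attenuation coefficient and the one-way stand-off distance, and the round-trip signal scales as the exponential of minus twice that number.

```python
import math

# Sketch (not the authors' model): "attenuation lengths" AL = c_att * z,
# where c_att (1/m) is the beam attenuation coefficient of the water and
# z (m) is the one-way transceiver-to-target distance. The round-trip
# optical signal then scales as exp(-2 * AL), so each additional
# attenuation length costs a factor of e^2 (~7.4x) in return signal.

def attenuation_lengths(c_att_per_m: float, distance_m: float) -> float:
    return c_att_per_m * distance_m

def round_trip_transmission(al: float) -> float:
    return math.exp(-2.0 * al)

# At the 8 AL condition reached in the laboratory, the round-trip signal
# is attenuated by exp(-16), roughly 1.1e-7:
print(round_trip_transmission(8.0))
```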
ABSTRACT
We have used an InGaAs/InP single-photon avalanche diode detector module in conjunction with a time-of-flight depth imager operating at a wavelength of 1550 nm, to acquire centimeter resolution depth images of low signature objects at stand-off distances of up to one kilometer. The scenes of interest were scanned by the transceiver system using pulsed laser illumination with an average optical power of less than 600 µW and per-pixel acquisition times of between 0.5 ms and 20 ms. The fiber-pigtailed InGaAs/InP detector was Peltier-cooled and operated at a temperature of 230 K. This detector was used in electrically gated mode with a single-photon detection efficiency of about 26% at a dark count rate of 16 kilocounts per second. The system's overall instrumental temporal response was 144 ps full width at half maximum. Measurements made in daylight on a number of target types at ranges of 325 m, 910 m, and 4.5 km are presented, along with an analysis of the depth resolution achieved.
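The link between the quoted 144 ps temporal response and centimeter depth resolution is the round-trip time-to-depth conversion, sketched below (standard physics, not code from the paper):

```python
# Sketch: a timing spread dt in a round-trip time-of-flight measurement
# maps to a depth spread of c * dt / 2 (the factor of 2 accounts for the
# out-and-back path of the light).

C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_time(dt_s: float) -> float:
    """Timing spread (seconds) -> equivalent depth spread (meters)."""
    return C * dt_s / 2.0

# The 144 ps FWHM instrumental response corresponds to ~2.2 cm in depth,
# consistent with the centimeter resolution reported.
print(f"{depth_from_time(144e-12) * 100:.2f} cm")  # 2.16 cm
```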
ABSTRACT
This paper highlights a significant advance in time-of-flight depth imaging: by using a scanning transceiver which incorporated a free-running, low noise superconducting nanowire single-photon detector, we were able to obtain centimeter resolution depth images of low-signature objects in daylight at stand-off distances of the order of one kilometer at the relatively eye-safe wavelength of 1560 nm. The detector used had an efficiency of 18% at 1 kHz dark count rate, and the overall system jitter was ~100 ps. The depth images were acquired by illuminating the scene with an optical output power level of less than 250 µW average, and using per-pixel dwell times in the millisecond regime.
Subject(s)
Image Enhancement/instrumentation, Photometry/instrumentation, Telecommunications/instrumentation, Transducers, Equipment Design, Equipment Failure Analysis, Photons
ABSTRACT
Direct monitoring of singlet oxygen (¹O2) luminescence is a particularly challenging infrared photodetection problem. ¹O2, an excited state of the oxygen molecule, is a crucial intermediate in many biological processes. We employ a low noise superconducting nanowire single-photon detector to record ¹O2 luminescence at 1270 nm wavelength from a model photosensitizer (Rose Bengal) in solution. Narrow band spectral filtering and chemical quenching are used to verify the ¹O2 signal, and the lifetime evolution with the addition of protein is studied. Furthermore, we demonstrate the detection of ¹O2 luminescence through a single optical fiber, a marked advance for dose monitoring in clinical treatments such as photodynamic therapy.
Subject(s)
Biosensing Techniques/instrumentation, Conductometry/instrumentation, Fiber Optic Technology/instrumentation, Luminescent Measurements/instrumentation, Nanotubes/radiation effects, Photometry/instrumentation, Singlet Oxygen/analysis, Electric Conductivity, Equipment Design, Equipment Failure Analysis, Light, Nanotubes/chemistry, Photons
ABSTRACT
In this paper we report on the development and optical properties of nanostructured gradient index microlenses with good chromatic behavior. We introduce a new fabrication concept for the development of large diameter nanostructured gradient index microlenses based on quantized gradient index profiles and the use of nanostructured meta-rods. We show a dependence of the quality of performance on the number of refractive index levels and the lens diameter. Measurements carried out at 633 and 850 nm show good optical properties and similar focal lengths for both wavelengths.
Subject(s)
Computer-Aided Design, Lenses, Theoretical Models, Nanotechnology/instrumentation, Refractometry/instrumentation, Computer Simulation, Equipment Design, Equipment Failure Analysis, Light, Radiation Scattering
ABSTRACT
Recently, time-of-flight LiDAR using the single-photon detection approach has emerged as a potential solution for three-dimensional imaging in challenging measurement scenarios, such as over distances of many kilometres. The high sensitivity and picosecond timing resolution afforded by single-photon detection offers high-resolution depth profiling of remote, complex scenes while maintaining low power optical illumination. These properties are ideal for imaging in highly scattering environments such as through atmospheric obscurants, for example fog and smoke. In this paper we present the reconstruction of depth profiles of moving objects through high levels of obscurant equivalent to five attenuation lengths between transceiver and target at stand-off distances up to 150 m. We used a robust statistically based processing algorithm designed for the real time reconstruction of single-photon data obtained in the presence of atmospheric obscurant, including providing uncertainty estimates in the depth reconstruction. This demonstration of real-time 3D reconstruction of moving scenes points a way forward for high-resolution imaging from mobile platforms in degraded visual environments.
ABSTRACT
In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using times of arrival of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise complicates not only the observation model classically used for 3D reconstruction but also the estimation procedure, which requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while being robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. This new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.
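The robustness argument above can be illustrated with a simple stand-in for the similarity measure (this is not the authors' exact measure): cross-correlating the per-pixel timing histogram against the instrumental response and taking the argmax is non-iterative, and a flat ambient background raises every correlation lag by the same amount, leaving the argmax unchanged.

```python
import numpy as np

# Illustrative sketch of non-iterative depth estimation for one pixel:
# slide the instrumental response function (IRF) across the photon timing
# histogram and pick the lag with the highest correlation. A constant
# background adds the same offset to every lag, so the argmax survives.

def estimate_delay(histogram: np.ndarray, irf: np.ndarray) -> int:
    """Return the time-bin offset maximizing histogram-IRF correlation."""
    corr = np.correlate(histogram.astype(float), irf.astype(float), mode="valid")
    return int(np.argmax(corr))

# Toy example: a 3-bin IRF, a surface return at bin 40, flat Poisson background.
rng = np.random.default_rng(0)
irf = np.array([1.0, 2.0, 1.0])
hist = rng.poisson(2.0, 100).astype(float)   # uniform ambient background
hist[40:43] += 50 * irf                      # strong return peak at bin 40
print(estimate_delay(hist, irf))             # 40
```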
ABSTRACT
Time-correlated single-photon counting techniques have recently been used in ranging and depth imaging systems that are based on time-of-flight measurements. These systems transmit low average power pulsed laser signals and measure the scattered return photons. The use of periodic laser pulses means that absolute ranges can only be measured unambiguously at low repetition rates (typically <100 kHz for > 1 km) to ensure that only one pulse is in transit at any instant. We demonstrate the application of a pseudo-random pattern matching technique to a scanning rangefinder system using GHz base clock rates, permitting the acquisition of unambiguous, three-dimensional images at average pulse rates equivalent to >10 MHz. Depth images with centimeter distance uncertainty at ranges between 50 m and 4.4 km are presented.
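The pattern-matching step described above can be sketched as a circular cross-correlation between the transmitted pulse pattern and the histogram of single-photon arrival times, with the correlation peak identifying the unique time-of-flight. A minimal illustration under assumed parameters (not the authors' implementation):

```python
import numpy as np

# Hedged sketch of pseudo-random pattern matching for unambiguous ranging:
# the transmitter fills clock bins with pulses at random (non-periodic
# pattern); received photon counts are binned on the same clock, and the
# circular cross-correlation peak locates the unique round-trip delay.

def unambiguous_tof(tx_pattern: np.ndarray, rx_counts: np.ndarray) -> int:
    """Return the bin shift maximizing the circular cross-correlation."""
    n = len(tx_pattern)
    corr = np.fft.ifft(np.fft.fft(rx_counts) * np.conj(np.fft.fft(tx_pattern))).real
    return int(np.argmax(corr)) % n

rng = np.random.default_rng(1)
tx = rng.integers(0, 2, 4096).astype(float)          # random bin filling
shift = 1234                                         # true time-of-flight, in bins
rx = np.roll(tx, shift) * 5 + rng.poisson(0.2, 4096) # returns + background counts
print(unambiguous_tof(tx, rx))                       # 1234
```

Because the pattern is non-periodic, the correlation peak is unique over the full pattern length, which is what removes the range ambiguity of a fixed-rate pulse train.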
ABSTRACT
We describe a scanning time-of-flight system which uses the time-correlated single-photon counting technique to produce three-dimensional depth images of distant, noncooperative surfaces when these targets are illuminated by a kHz to MHz repetition rate pulsed laser source. The data for the scene are acquired using a scanning optical system and an individual single-photon detector. Depth images have been successfully acquired with centimeter xyz resolution, in daylight conditions, for low-signature targets in field trials at distances of up to 325 m using an output illumination with an average optical power of less than 50 µW.
ABSTRACT
Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we show a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data was acquired in broad daylight from distances up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.
ABSTRACT
This paper presents a new algorithm for the learning of spatial correlation and non-local restoration of single-photon three-dimensional Lidar images acquired in the photon-starved regime (fewer than one photon per pixel, on average) or with a reduced number of scanned spatial points (pixels). The algorithm alternates between three steps: (i) extracting multi-scale information, (ii) building a robust graph of non-local spatial correlations between pixels, and (iii) restoring the depth and reflectivity images. A non-uniform sampling approach, which assigns larger patches to homogeneous regions and smaller ones to heterogeneous regions, is adopted to reduce the computational cost associated with the graph. The restoration of the 3D images is achieved by minimizing a cost function accounting for the multi-scale information and the non-local spatial correlation between patches. This minimization problem is efficiently solved using the alternating direction method of multipliers (ADMM), which presents fast convergence properties. Various results based on simulated and real Lidar data show the benefits of the proposed algorithm, which improves the quality of the estimated depth and reflectivity images, especially in the photon-starved regime or when the data contain a reduced number of spatial points.
ABSTRACT
This paper describes a rapid data acquisition photon-counting time-of-flight ranging technique designed to avoid range ambiguity, an issue commonly found in high repetition frequency time-of-flight systems. The technique transmits a non-periodic pulse train based on the random bin filling of a high frequency time clock. A received pattern is formed from the arrival times of the returning single photons, and the correlation between the transmitted and received patterns is used to identify the unique target time-of-flight. The paper describes experiments performed in the laboratory and in free space over ranges of several hundred meters, at clock frequencies of 1 GHz. Unambiguous photon-counting range-finding is demonstrated with centimeter accuracy.
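The "random bin filling" step can be illustrated directly: each bin of the high frequency clock carries a pulse with some probability, producing a non-periodic transmit pattern whose average pulse rate is the fill probability times the clock rate. A sketch with an assumed fill probability (not a value from the text):

```python
import numpy as np

# Sketch of random bin filling on a 1 GHz clock: each 1 ns bin carries a
# pulse with probability p, giving a non-periodic pattern with an average
# pulse rate of p * 1 GHz. The fill probability below is an assumption.

CLOCK_HZ = 1e9

def random_pattern(n_bins: int, p: float, seed: int = 0) -> np.ndarray:
    """Binary transmit pattern: 1 = pulse in this clock bin."""
    rng = np.random.default_rng(seed)
    return (rng.random(n_bins) < p).astype(np.uint8)

pattern = random_pattern(1_000_000, p=0.02)
avg_rate_mhz = pattern.mean() * CLOCK_HZ / 1e6
print(f"~{avg_rate_mhz:.0f} MHz average pulse rate")  # roughly 20 MHz
```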