ABSTRACT
This joint feature issue of Optics Express and Applied Optics showcases technical innovations by participants of the 2023 topical meeting on Computational Optical Sensing and Imaging and by the wider computational imaging community. The articles in the feature issue highlight advances in imaging science that emphasize synergistic activities in optics, signal processing, and machine learning. The issue features 26 contributed articles covering multiple themes, including non-line-of-sight imaging, imaging through scattering media, compressed sensing, lensless imaging, ptychography, computational microscopy, spectroscopy, and optical metrology.
ABSTRACT
Indirect imaging correlography (IIC) is a coherent imaging technique that provides access to the autocorrelation of the albedo of objects obscured from the line of sight. The technique is used to recover sub-mm resolution images of obscured objects at large standoffs in non-line-of-sight (NLOS) imaging. However, predicting the exact resolving power of IIC in any given NLOS scene is complicated by the interplay between several factors, including object position and pose. This work puts forth a mathematical model for the imaging operator in IIC to accurately predict the images of objects in NLOS imaging scenes. Using the imaging operator, expressions for the spatial resolution as a function of scene parameters such as object position and pose are derived and validated experimentally. In addition, a self-supervised deep neural network framework to reconstruct images of objects from their autocorrelation is proposed. Using this framework, objects with ≈250 µm features, located at 1 m standoffs in an NLOS scene, are successfully reconstructed.
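The core quantity in IIC, the autocorrelation of the hidden object's albedo, follows from the power spectrum of speckle images via the Wiener-Khinchin theorem. The sketch below illustrates only that Fourier relation on a toy two-point "albedo" of our own invention; it is not the paper's pipeline.

```python
import numpy as np

def autocorrelation(img):
    """Autocorrelation via the Wiener-Khinchin theorem:
    the inverse FFT of the power spectrum."""
    img = img - img.mean()                # remove the DC pedestal
    power = np.abs(np.fft.fft2(img)) ** 2
    ac = np.fft.ifft2(power).real         # circular autocorrelation
    return np.fft.fftshift(ac)            # put the zero-lag peak at the center

# toy "albedo": two point reflectors 8 pixels apart
scene = np.zeros((64, 64))
scene[32, 28] = scene[32, 36] = 1.0
ac = autocorrelation(scene)
peak = np.unravel_index(np.argmax(ac), ac.shape)  # zero-lag peak, centered
```

Because the Fourier phase is discarded in the power spectrum, a phase-retrieval step (or, as proposed in the abstract above, a self-supervised network) is still needed to invert the autocorrelation back to an image.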
ABSTRACT
One of the challenges of using Time-of-Flight (ToF) sensors for dimensioning objects is that the depth information suffers from issues such as low resolution, self-occlusions, noise, and multipath interference, which distort the shape and size of objects. In this work, we successfully apply a superquadric fitting framework for dimensioning cuboid and cylindrical objects from point cloud data generated using a ToF sensor. Our work demonstrates that an average error of less than 1 cm is possible for a box with the largest dimension of about 30 cm and a cylinder with the largest dimension of about 20 cm that are each placed 1.5 m from a ToF sensor. We also quantify the performance of dimensioning objects using various object orientations, ground plane surfaces, and model fitting methods. For cuboid objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 4% and 9% using the bounding technique and between 8% and 15% using the mirroring technique across all tested surfaces. For cylindrical objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 2.97% and 6.61% when the object is in a horizontal orientation and between 8.01% and 13.13% when the object is in a vertical orientation using the bounding technique across all tested surfaces.
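Superquadric fitting of the kind described above can be posed as nonlinear least squares on the standard inside-outside function. The following is a minimal sketch under our own simplifying assumptions (a centered, axis-aligned cloud and fixed box-like exponents); the paper's framework additionally handles pose, ground-plane surfaces, and the bounding and mirroring variants.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_superquadric(points, eps1=0.1, eps2=0.1):
    """Fit axis-aligned superquadric half-extents (a, b, c) to a
    centered point cloud; small exponents approximate a cuboid."""
    def residuals(size):
        a, b, c = size
        x, y, z = np.abs(points).T
        f = ((x / a) ** (2 / eps2) + (y / b) ** (2 / eps2)) ** (eps2 / eps1) \
            + (z / c) ** (2 / eps1)
        return f ** (eps1 / 2) - 1.0      # ~radial distance to the surface

    size0 = np.abs(points).max(axis=0)    # bounding-box initial guess
    fit = least_squares(residuals, size0, bounds=(1e-3, np.inf))
    return fit.x

# toy cloud: points on the faces of a 30 cm x 20 cm x 10 cm box (meters)
rng = np.random.default_rng(0)
half = np.array([0.15, 0.10, 0.05])
pts = rng.uniform(-1, 1, size=(600, 3)) * half
face = rng.integers(0, 3, size=600)       # snap each point to one face
pts[np.arange(600), face] = rng.choice([-1.0, 1.0], size=600) * half[face]
est = fit_superquadric(pts)               # recovered half-extents
```

On this noise-free toy cloud the recovered half-extents land well inside the sub-centimeter regime the abstract reports for real ToF data.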
ABSTRACT
This Feature Issue includes 19 articles that highlight advances in the field of Computational Optical Sensing and Imaging. Many of the articles were presented at the 2019 OSA Topical Meeting on Computational Optical Sensing and Imaging held in Munich, Germany, on June 24-27. Articles featured in the issue cover a broad array of topics, ranging from imaging through scattering media, imaging around corners, and compressive imaging to machine learning for image recovery.
ABSTRACT
The OSA Topical Meeting on Computational Optical Sensing and Imaging (COSI) was held June 25-28, 2018, in Orlando, Florida, USA, as part of the Imaging and Applied Optics Congress. In this feature issue, we present several papers that cover the techniques, topics, and advancements presented at the COSI meeting, highlighting the integration of opto-electronic measurement and computational processing.
ABSTRACT
Optical imaging systems in which the lens and sensor are free to rotate about independent pivots offer greater degrees of freedom for controlling and optimizing the process of image gathering. However, to benefit from the expanded possibilities, we need an imaging model that directly incorporates the essential parameters. In this work, we propose an imaging model that accurately predicts the geometric properties of the image in such systems. Furthermore, we introduce a new method for synthesizing an omnifocus (all-in-focus) image from a sequence of images captured while rotating a lens. The crux of our approach lies in insights gained from the new model.
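A common way to realize the omnifocus synthesis step is per-pixel sharpest-frame selection over the captured sequence. The focus measure below (locally smoothed squared Laplacian) is a generic choice for illustration, not necessarily the method derived from the paper's model.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def omnifocus(stack):
    """Merge a focal stack of shape (N, H, W) by picking, per pixel,
    the frame with the highest local focus measure."""
    focus = np.stack([uniform_filter(laplace(f.astype(float)) ** 2, size=9)
                      for f in stack])
    best = np.argmax(focus, axis=0)       # (H, W) index of sharpest frame
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# toy stack: frame 0 is sharp (textured) on the left half,
# frame 1 is sharp on the right half
H = W = 32
checker = (np.indices((H, W)).sum(0) % 2).astype(float)
f0 = np.zeros((H, W)); f0[:, :16] = checker[:, :16]
f1 = np.zeros((H, W)); f1[:, 16:] = checker[:, 16:]
merged = omnifocus(np.stack([f0, f1]))
```

In a rotating-lens capture, the in-focus region sweeps across the frame, so the per-pixel argmax naturally stitches the sharp bands from successive frames.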
ABSTRACT
Macroscopic imagers are subject to constraints imposed by the wave nature of light and the geometry of image formation. The former limits the resolving power, while the latter results in a loss of absolute size and shape information. The suite of methods outlined in this work gives macroscopic imagers the unique ability to capture unresolved spatial detail while recovering topographic information. The common thread connecting these methods is the notion of imaging under patterned illumination. The notion is advanced further to develop computational imagers whose resolving power is decoupled from the constraints imposed by the collection optics and the image sensor. These imagers additionally feature support for multiscale reconstruction.
ABSTRACT
The poor lateral and depth resolution of state-of-the-art 3D sensors based on the time-of-flight (ToF) principle has limited their widespread adoption to a few niche applications. In this work, we introduce a novel sensor concept that provides ToF-based 3D measurements of real-world objects and surfaces with depth precision up to 35 µm and point cloud densities commensurate with the native sensor resolution of standard CMOS/CCD detectors (up to several megapixels). Such capabilities are realized by combining the best attributes of continuous-wave ToF sensing, multi-wavelength interferometry, and heterodyne interferometry into a single approach. We describe multiple embodiments of the approach, each featuring a different sensing modality and associated tradeoffs.
Subjects
Algorithms, Three-Dimensional Imaging
ABSTRACT
The presence of a scattering medium in the imaging path between an object and an observer is known to severely limit the visual acuity of the imaging system. We present an approach to circumvent the deleterious effects of scattering by exploiting spectral correlations in scattered wavefronts. Our Synthetic Wavelength Holography (SWH) method is able to recover a holographic representation of hidden targets with sub-mm resolution over a nearly hemispheric angular field of view. The complete object field is recorded within 46 ms, by monitoring the scattered light return in a probe area smaller than 6 cm × 6 cm. This unique combination of attributes opens up a plethora of new non-line-of-sight imaging applications, ranging from medical imaging and forensics to early-warning navigation systems and reconnaissance. Adapting the findings of this work to other wave phenomena will help unlock a wider gamut of applications beyond those envisioned in this paper.
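The "synthetic wavelength" underlying both of the interferometric approaches above is the beat wavelength of two closely spaced optical wavelengths, Λ = λ₁λ₂/|λ₁−λ₂|; depth is read out from the phase of one field against the conjugate of the other. The sketch below uses illustrative numbers of our own choosing (not values from either paper) and ideal plane-wave fields:

```python
import numpy as np

def synthetic_wavelength(lam1, lam2):
    """Beat ("synthetic") wavelength of two nearby optical wavelengths."""
    return lam1 * lam2 / abs(lam1 - lam2)

def depth_from_fields(E1, E2, lam1, lam2):
    """Round-trip depth from two complex fields measured at nearby
    wavelengths (sign convention assumes lam1 < lam2). The phase of
    E1 * conj(E2) cycles once per half synthetic wavelength of depth."""
    lam_s = synthetic_wavelength(lam1, lam2)
    phase_s = np.angle(E1 * np.conj(E2))   # synthetic phase in [-pi, pi)
    return lam_s * phase_s / (4 * np.pi)   # 2*pi of phase <-> lam_s / 2

# illustrative numbers: two lines 30 pm apart near 855 nm beat at ~24 mm
lam1, lam2 = 854.985e-9, 855.015e-9
lam_s = synthetic_wavelength(lam1, lam2)

# a target at 2 mm round-trip depth, modelled as ideal plane waves
d = 2e-3
E1 = np.exp(1j * 4 * np.pi * d / lam1)
E2 = np.exp(1j * 4 * np.pi * d / lam2)
depth = depth_from_fields(E1, E2, lam1, lam2)
```

Because the synthetic phase is a difference of two optical phases, scattering-induced phase errors common to both wavelengths largely cancel, which is what makes millimetre-scale Λ usable through scattering media while each optical wavelength alone is scrambled.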