ABSTRACT
Coded aperture-based compression has proven to be an effective approach for high-density cold data storage. Nevertheless, its limited decoding speed poses a significant obstacle to broader adoption. We introduce a novel, to the best of our knowledge, decoding method leveraging the fast and flexible denoising network (FFDNet), capable of decoding a coded aperture-based compressive data page within 30.64 s. The practicality of the method has been confirmed in the decoding of monochromatic photo arrays, full-color photos, and dynamic videos. In experimental trials, the decoded results obtained with the FFDNet-based method differ from those of the FFDNet-free method by less than 1 dB in average PSNR, while the decoding speed is improved more than 100-fold.
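As an illustration of how a learned denoiser can accelerate decoding, the sketch below shows a generic plug-and-play loop in which a pretrained FFDNet-style network replaces the denoising/proximal step. The operator handles `A`, `AT`, and `denoiser`, as well as the step size and iteration count, are assumptions for illustration and not the paper's exact pipeline.

```python
import numpy as np

def pnp_decode(y, A, AT, denoiser, sigma=0.05, step=1.0, iters=30):
    # Generic plug-and-play decoding loop (not the paper's exact algorithm):
    # alternate a gradient step on the coded-aperture data term ||A(x) - y||^2
    # with a denoising step performed by a pretrained FFDNet-style network.
    x = AT(y)                        # crude back-projection as the initial estimate
    for _ in range(iters):
        x = x - step * AT(A(x) - y)  # data-fidelity gradient step
        x = denoiser(x, sigma)       # learned prior replaces the proximal operator
    return np.clip(x, 0.0, 1.0)
```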
ABSTRACT
The resolution of a lensless on-chip microscopy system is constrained by the pixel size of the image sensor. This Letter introduces a super-resolution on-chip microscopy system based on compact array-light-source illumination and sub-pixel shift search. The system utilizes a closely spaced array light source composed of four RGB LED modules that sequentially illuminate the sample. A sub-pixel shift search algorithm is proposed, which determines the sub-pixel shift by comparing the frequency content of the captured low-resolution holograms. Leveraging this sub-pixel shift, a super-resolution reconstruction algorithm built upon a multi-wavelength phase retrieval method is introduced, enabling rapid super-resolution reconstruction of holograms within the region of interest. The system and algorithms presented herein obviate the need for a displacement control platform and for calibration of the illumination angles of the light source, facilitating super-resolution phase reconstruction under partially coherent illumination.
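For readers who want to experiment with sub-pixel registration, the snippet below estimates shifts between low-resolution holograms using frequency-domain phase correlation from scikit-image; this is a generic stand-in under assumed inputs, not the Letter's specific frequency-comparison search.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def estimate_subpixel_shifts(holograms, upsample_factor=20):
    # Estimate the (dy, dx) sub-pixel shift of each low-resolution hologram
    # relative to the first one via frequency-domain phase correlation.
    ref = holograms[0]
    shifts = []
    for h in holograms:
        shift, _, _ = phase_cross_correlation(ref, h, upsample_factor=upsample_factor)
        shifts.append(shift)
    return np.array(shifts)  # sub-pixel shifts in pixels
```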
ABSTRACT
The van Cittert-Zernike theorem states that the Fourier transform of the intensity distribution function of a distant, incoherent source is equal to the complex degree of coherence. In this Letter, we present a method for measuring the complex degree of coherence in one shot by recording the interference patterns produced by multiple aperture pairs. The intensity of the sample is obtained by Fourier transforming the complex degree of coherence. The experimental verification by using a simple object is presented together with a discussion on how the method could be improved for imaging more complex samples.
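For reference, the theorem can be written compactly as in standard texts (e.g., Goodman's Statistical Optics): up to a deterministic phase factor, the complex degree of coherence μ for an aperture-pair separation (Δx, Δy) at distance z from a source of intensity I(ξ, η) is the normalized Fourier transform of that intensity.

```latex
\mu(\Delta x,\Delta y) \;=\;
\frac{\displaystyle\iint I(\xi,\eta)\,
      \exp\!\Big[\,i\,\tfrac{2\pi}{\lambda z}\,(\xi\,\Delta x+\eta\,\Delta y)\Big]\,
      \mathrm{d}\xi\,\mathrm{d}\eta}
     {\displaystyle\iint I(\xi,\eta)\,\mathrm{d}\xi\,\mathrm{d}\eta}
```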
ABSTRACT
Computational imaging using a Pancake lens can help reduce the size of optical systems by folded optics. However, Pancake cameras frequently exhibit inferior image quality due to stray light, low light transmission, and spatially varying aberrations. In this Letter, we propose a thin and lightweight camera comprising a polarization-based catadioptric Pancake lens and a Fourier Position encoding Network (FPNet). The camera achieves high-quality imaging at an f-number of 0.4 and an expansive 88° field of view. The FPNet encodes the positional order of the point spread functions, mitigating global optical image degradation and improving image quality by 10.13 dB in PSNR. The Pancake camera and FPNet have potential applications in mobile photography and virtual/augmented reality.
ABSTRACT
In lensless imaging using a Fresnel zone aperture (FZA), it is generally believed that the resolution is limited by the outermost ring breadth of the FZA. This limit can potentially be surpassed by exploiting the multi-order diffraction property of binary FZAs. In this Letter, we propose to use a high-order component of the FZA as the point spread function (PSF) and develop a high-order transfer function back-propagation (HBP) algorithm to enhance the resolution. Because the proportion of energy in the high diffraction orders is low, severe defocus noise appears in the reconstructed image. To address this issue, we propose a Compound FZA (CFZA), which merges two partial FZAs operating at different orders into a single mask to strike a balance between noise and resolution. Experimental results verify that the CFZA-based camera has double the resolution of a traditional FZA-based camera with an identical outer ring breadth and can be reconstructed with high quality by a single HBP step without calibration. Our method offers a cost-effective solution for achieving high-resolution imaging, expanding the potential applications of FZA-based lensless imaging in a variety of areas.
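The sketch below illustrates the idea of back-propagating with a high-order FZA component used as the PSF; the zone-pattern model, the treatment of the k-th order as a k-times finer zone pattern, and all parameters are simplifying assumptions rather than the exact HBP formulation.

```python
import numpy as np

def high_order_bp(measurement, r1, order, pitch):
    # Simplified high-order back-propagation sketch: model the k-th order
    # component of a binary FZA as a zone pattern with k-times finer rings,
    # then correlate the measurement with it in the Fourier domain.
    ny, nx = measurement.shape
    y = (np.arange(ny) - ny // 2) * pitch
    x = (np.arange(nx) - nx // 2) * pitch
    X, Y = np.meshgrid(x, y)
    psf = 0.5 * (1.0 + np.cos(order * np.pi * (X**2 + Y**2) / r1**2))
    H = np.fft.fft2(np.fft.ifftshift(psf))            # high-order transfer function
    rec = np.fft.ifft2(np.fft.fft2(measurement) * np.conj(H))
    return np.abs(rec)
```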
ABSTRACT
We propose a novel, to the best of our knowledge, and fast adaptive layer-based (ALB) method for generating a computer-generated hologram (CGH) with accurate depth information. A complex three-dimensional (3D) object is adaptively divided into layers along the depth direction according to its own non-uniformly distributed depth coordinates, which reduces the depth error caused by the conventional layer-based method. Each adaptive layer generates a single-layer hologram using the angular spectrum method for diffraction, and the final hologram of a complex three-dimensional object is obtained by superimposing all the adaptive layer holograms. A hologram derived with the proposed method is referred to as an adaptive layer-based hologram (ALBH). Our demonstration shows that the desired reconstruction can be achieved with 52 adaptive layers in 8.7 s, whereas the conventional method requires 397 layers in 74.9 s.
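The propagation-and-superposition step of any layer-based CGH can be sketched with the angular spectrum method as below; the adaptive partitioning of the depth coordinates, which is the core contribution here, is not reproduced, and the wavelength and pixel pitch are illustrative values.

```python
import numpy as np

def angular_spectrum(field, z, wavelength, pitch):
    # Propagate a complex field by distance z with the angular spectrum method.
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def layer_based_hologram(layers, depths, wavelength=532e-9, pitch=8e-6):
    # Superimpose the propagated fields of all (adaptively chosen) layers.
    holo = np.zeros_like(layers[0], dtype=complex)
    for layer, z in zip(layers, depths):
        holo += angular_spectrum(layer, z, wavelength, pitch)
    return holo
```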
ABSTRACT
Computational methods have become cornerstones of optical imaging and holography in recent years. Every year, the dependence of optical imaging and holography on computational methods increases significantly, to the extent that optical methods and components are being completely and efficiently replaced with computational methods at low cost. This roadmap reviews the current scenario in four major areas, namely incoherent digital holography, quantitative phase imaging, imaging through scattering layers, and super-resolution imaging. In addition to registering the perspectives of the modern-day architects of the above research areas, the roadmap also reports some of the latest studies on the topic. Computational codes and pseudocodes are presented for the computational methods in a plug-and-play fashion, allowing readers not only to read and understand but also to practice the latest algorithms with their own data. We believe that this roadmap will be a valuable tool for analyzing the current trends in computational methods and for predicting and preparing the future of computational methods in optical imaging and holography. Supplementary Information: The online version contains supplementary material available at 10.1007/s00340-024-08280-3.
ABSTRACT
Quantitative phase imaging reveals the morphology and dynamics of label-free tissues through the sample-induced changes in the optical field. Its sensitivity to subtle changes in the optical field makes the reconstructed phase susceptible to phase aberrations. We introduce a variable sparse splitting framework for quantitative phase aberration extraction based on an alternating direction aberration-free method. The optimization and regularization of the reconstructed phase are decomposed into object terms and aberration terms. By formulating the aberration extraction as a convex quadratic problem, the background phase aberration can be decomposed quickly and directly onto specific complete basis functions such as Zernike or standard polynomials. Faithful phase reconstruction is then obtained by eliminating the global background phase aberration. Aberration-free two-dimensional and three-dimensional imaging experiments are demonstrated, showing that the strict alignment requirements of holographic microscopes can be relaxed.
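As a minimal illustration of direct aberration extraction on a complete basis, the sketch below fits the background phase with low-order standard polynomials by linear least squares and subtracts the fit; the basis choice, normalization, and optional background mask are assumptions, and the variable sparse splitting and alternating-direction machinery are not reproduced.

```python
import numpy as np

def remove_background_aberration(phase, mask=None, max_order=3):
    # Fit the slowly varying background aberration with low-order standard
    # polynomials by linear least squares and subtract it from the measured phase.
    # `mask` (optional, boolean) selects the background pixels used for the fit.
    ny, nx = phase.shape
    Y, X = np.mgrid[0:ny, 0:nx]
    X = (X - nx / 2) / (nx / 2)
    Y = (Y - ny / 2) / (ny / 2)
    basis = [X**i * Y**j for i in range(max_order + 1)
             for j in range(max_order + 1) if i + j <= max_order]
    A_full = np.stack([b.ravel() for b in basis], axis=1)
    if mask is not None:
        A_fit, b_fit = A_full[mask.ravel()], phase.ravel()[mask.ravel()]
    else:
        A_fit, b_fit = A_full, phase.ravel()
    coeffs, *_ = np.linalg.lstsq(A_fit, b_fit, rcond=None)
    aberration = (A_full @ coeffs).reshape(ny, nx)
    return phase - aberration, aberration
```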
ABSTRACT
Continuous complex-amplitude computer-generated holograms (CGHs) are converted into discrete amplitude-only or phase-only holograms in practical applications to accommodate the characteristics of spatial light modulators (SLMs). To describe the influence of this discretization correctly, a refined model that eliminates the circular-convolution error is proposed to emulate the propagation of the wavefront during the formation and reconstruction of a CGH. The effects of several significant factors, including quantized amplitude and phase, zero-padding rate, random phase, resolution, reconstruction distance, wavelength, pixel pitch, phase modulation deviation, and pixel-to-pixel interaction, are discussed. Based on these evaluations, optimal quantization schemes for both currently available and future SLM devices are suggested.
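Two of the factors studied, phase quantization and zero-padding rate, can be emulated with the short helpers below; the quantization-to-levels scheme and the padding convention are assumptions for illustration, not the refined model itself.

```python
import numpy as np

def quantize_phase(phase, levels=256):
    # Map a continuous phase hologram onto the discrete phase levels of an SLM.
    wrapped = np.mod(phase, 2.0 * np.pi)
    return np.round(wrapped / (2.0 * np.pi) * levels) % levels * (2.0 * np.pi / levels)

def zero_pad(field, rate=1.0):
    # Zero-pad a field so FFT-based propagation avoids circular-convolution error.
    ny, nx = field.shape
    py, px = int(ny * rate / 2), int(nx * rate / 2)
    return np.pad(field, ((py, py), (px, px)))
```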
ABSTRACT
Fresnel zone aperture (FZA) lensless imaging encodes the incident light into a hologram-like pattern, so that the scene image can be numerically focused over a long imaging range by the back-propagation method. However, the target distance is often uncertain, and an inaccurate distance causes blur and artifacts in the reconstructed images. This creates difficulties for target recognition applications such as quick response (QR) code scanning. We propose an autofocusing method for FZA lensless imaging. By incorporating image sharpness metrics into the back-propagation reconstruction process, the method acquires the desired focusing distance and reconstructs noise-free, high-contrast images. By combining the Tamura-of-gradient metric with the nuclear norm of the gradient, the relative error of the estimated object distance is only 0.95% in the experiment. The proposed reconstruction method significantly improves the mean recognition rate of QR codes from 4.06% to 90.00%. It paves the way for designing intelligent integrated sensors.
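A bare-bones version of metric-guided autofocusing is sketched below: reconstructions at candidate distances are scored with a Tamura-of-gradient sharpness metric and the best distance is returned. The back-propagation routine is an assumed user-supplied function, and the nuclear-norm term used in the combined metric is omitted.

```python
import numpy as np

def tamura_of_gradient(img):
    # Tamura coefficient of the gradient magnitude: sqrt(std / mean).
    gy, gx = np.gradient(img)
    g = np.sqrt(gx**2 + gy**2)
    return np.sqrt(g.std() / (g.mean() + 1e-12))

def autofocus(measurement, backpropagate, distances):
    # Score back-propagated reconstructions at each candidate distance and
    # return the distance with the sharpest result.
    scores = [tamura_of_gradient(np.abs(backpropagate(measurement, z)))
              for z in distances]
    return distances[int(np.argmax(scores))], scores
```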
ABSTRACT
Holography is a crucial technique for the ultimate three-dimensional (3D) display because it renders all of the optical cues used by the human visual system. However, the shortage of 3D content severely restricts the widespread application of holographic 3D displays. In this paper, a 2D-to-3D display system based on deep-learning monocular depth estimation is proposed. By feeding a single RGB image of a 3D scene into our designed DGE-CNN network, a corresponding display-oriented 3D depth map can be accurately generated for layer-based computer-generated holography. With simple parameter adjustment, our system can adapt the distance range of the holographic display to specific requirements. High-quality and flexible holographic 3D display can thus be achieved from a single RGB image without 3D rendering devices, permitting potential human-display interactive applications such as remote education, navigation, and medical treatment.
ABSTRACT
We demonstrate a lensless imaging system with edge-enhanced imaging constructed with a Fresnel zone aperture (FZA) mask placed 3 mm away from a CMOS sensor. We propose vortex back-propagation (vortex-BP) and amplitude vortex-BP algorithms for the FZA-based lensless imaging system to remove noise and achieve fast reconstruction with high-contrast edge enhancement. Directionally controlled anisotropic edge enhancement can be achieved with our proposed superimposed vortex-BP algorithm. With different reconstruction algorithms, the proposed amp-vortex edge-camera can achieve 2D bright-field imaging as well as isotropic and directionally controllable anisotropic edge-enhanced imaging under incoherent illumination, from a single-shot captured hologram. The edge-detection effect is equivalent to that of optical edge detection, namely a redistribution of light energy. Noise-free in-focus edge detection can be achieved by back-propagation alone, without a denoising algorithm, which is an advantage over other lensless imaging technologies. This is expected to be widely used in autonomous driving, artificial-intelligence recognition in consumer electronics, and related areas.
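The core of vortex-filtered back-propagation can be sketched as a spiral phase factor applied together with the conjugate transfer function in the Fourier domain; the transfer function is an assumed precomputed FZA response, and the amplitude-vortex and superimposed variants are not reproduced.

```python
import numpy as np

def vortex_filter(shape, charge=1):
    # Spiral phase plate exp(i * charge * theta) in the Fourier domain.
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    return np.exp(1j * charge * np.arctan2(fy, fx))

def vortex_backpropagation(hologram, transfer_function, charge=1):
    # Back-propagate the hologram and apply a vortex filter to enhance edges.
    spec = (np.fft.fft2(hologram) * np.conj(transfer_function)
            * vortex_filter(hologram.shape, charge))
    return np.abs(np.fft.ifft2(spec))
```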
ABSTRACT
In an era of data explosion, optical data storage provides an alternative solution for cold data storage due to its energy-saving and cost-effective features. However, its data density is still insufficient for zettabyte-scale cold data storage. Here, a coded aperture-based compressive data page with a compression ratio of ≤0.125 is proposed. Based on two frameworks, weighted nuclear norm minimization (WNNM) and the alternating direction method of multipliers (ADMM), the decoding quality of the compressive data page is ensured by exploiting sparsity priors. In experiments, compressive data pages of a monochromatic photo array, a full-color photo, and a dynamic video are accurately decoded.
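To make the ADMM side of the decoding concrete, the sketch below runs a generic ADMM loop with a simple soft-thresholding (l1) prior standing in for the sparsity prior; the forward and adjoint operators `A`/`AT`, the inner gradient steps, and all parameters are assumptions, and the WNNM prior is not implemented.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def admm_decode(y, A, AT, lam=0.01, rho=1.0, step=0.1, iters=50):
    # Generic ADMM decoder with an l1 prior (a stand-in for more elaborate priors).
    x = AT(y)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(iters):
        for _ in range(5):                            # x-update: gradient steps on
            grad = AT(A(x) - y) + rho * (x - z + u)   # the augmented Lagrangian
            x = x - step * grad
        z = soft_threshold(x + u, lam / rho)          # z-update: proximal of the prior
        u = u + x - z                                 # dual variable update
    return x
```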
ABSTRACT
As the foundation of virtual content generation, cameras are crucial for augmented reality (AR) applications, yet their integration with transparent displays has remained a challenge. Prior efforts to develop see-through cameras have struggled to achieve high resolution and seamless integration with AR displays. In this work, we present LightguideCam, a compact and flexible see-through camera based on an AR lightguide. To address the overlapping artifacts in measurement, we present a compressive sensing algorithm based on an equivalent imaging model that minimizes computational consumption and calibration complexity. We validate our design using a commercial AR lightguide and demonstrate a field of view of 23.1° and an angular resolution of 0.1° in the prototype. Our LightguideCam has great potential as a plug-and-play extensional imaging component in AR head-mounted displays, with promising applications for eye-gaze tracking, eye-position perspective photography, and improved human-computer interaction devices, such as full-screen mobile phones.
ABSTRACT
Liquid crystal on silicon (LCoS) is a widely used spatial light modulator (SLM) in computer-generated holography (CGH). However, the phase-modulating profile of an LCoS device is often not ideally uniform in practice, giving rise to undesired intensity fringes. In this study, we overcome this problem by proposing a highly robust dual-SLM complex-amplitude CGH technique, which incorporates a polarimetric mode and a diffractive mode. The polarimetric mode linearizes the general phase modulation of each SLM separately, while the diffractive mode uses camera-in-the-loop optimization to achieve improved holographic display. Experimental results show the effectiveness of our proposal, improving reconstruction accuracy by 21.12% in peak signal-to-noise ratio (PSNR) and 50.74% in structural similarity index measure (SSIM) using LCoS SLMs with originally non-uniform phase-modulating profiles.
Subject(s)
Holography, Holography/instrumentation, Holography/methods, Holography/standards, Signal-to-Noise Ratio, Algorithms
ABSTRACT
Pixel super-resolution (PSR) has emerged as a promising technique to break the sampling limit of phase imaging systems. However, due to the inherent nonconvexity of the phase retrieval problem and the super-resolution process, PSR algorithms are sensitive to noise, and the reconstruction quality inevitably deteriorates. Following the plug-and-play framework, we introduce nonlocal low-rank (NLR) regularization for accurate and robust PSR, achieving state-of-the-art performance. Inspired by the NLR prior, we further develop complex-domain nonlocal low-rank network (CNLNet) regularization, which performs nonlocal similarity matching and low-rank approximation in the deep feature domain rather than in the spatial domain of conventional NLR. Through visual and quantitative comparisons, CNLNet-based reconstruction shows an average 1.4 dB PSNR improvement over conventional NLR, outperforming existing algorithms under various scenarios.
ABSTRACT
Reconstruction of multiple objects from one hologram can be affected by the focus-metric judgment used in autofocusing. Conventionally, segmentation algorithms are applied to isolate each object in the hologram, and every object must be reconstructed individually to acquire its focal position, which leads to complicated calculations. Herein, Hough transform (HT)-based multi-object autofocusing compressive holography is presented. The sharpness of each reconstructed image is computed with a focus metric such as entropy or variance. According to the characteristics of the objects, the standard HT is further used for calibration to remove redundant extreme points. The compressive holographic imaging framework with a filter layer can eliminate the noise inherent to in-line reconstruction, including cross-talk noise from different depth layers, two-order noise, and twin-image noise. The proposed method can effectively obtain 3D information on multiple objects and achieve noise elimination by reconstructing from only one hologram.
ABSTRACT
To obtain higher phase accuracy with less computation time in phase-shifting interferometry, a random phase-shifting algorithm based on principal component analysis and least squares iteration (PCA&LSI) is proposed. The algorithm does not require pre-filtering and needs only two phase-shifted interferograms and little computation time to obtain a relatively accurate phase distribution. The method can still extract the phase with high precision when there are few fringes in the interferogram. Moreover, it removes the limitation that the PCA algorithm needs more than three interferograms with a uniform phase-shift distribution to accurately extract the phase. Numerical simulations and experiments confirm that the method is suitable for complex situations with different fluctuations in background intensity and modulation amplitude, and that it still achieves accurate phase extraction compared with other methods under different noise conditions.
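For context, the classic PCA demodulation that this work builds on can be sketched as below: the mean frame is removed, the first two principal components provide quadrature signals, and their arctangent gives the wrapped phase. The two-frame capability and the least-squares iteration of PCA&LSI are not reproduced here.

```python
import numpy as np

def pca_phase(interferograms):
    # Classic PCA phase demodulation from a stack of phase-shifted interferograms.
    shape = interferograms[0].shape
    stack = np.stack([f.ravel() for f in interferograms], axis=1)  # pixels x frames
    stack = stack - stack.mean(axis=1, keepdims=True)              # remove background
    U, S, _ = np.linalg.svd(stack, full_matrices=False)
    pc1 = (U[:, 0] * S[0]).reshape(shape)   # quadrature component ~ cos(phase)
    pc2 = (U[:, 1] * S[1]).reshape(shape)   # quadrature component ~ sin(phase)
    return np.arctan2(pc2, pc1)             # wrapped phase (up to sign and offset)
```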
ABSTRACT
Computer-generated holography provides an approach to modulating the optical wavefront with computationally synthesized holograms. Since hardware that directly implements complex wavefronts is not yet available, double-phase decomposition is utilized as a complex-encoding method that converts a complex wavefront into a double-phase hologram. The double-phase hologram adapts a complex wavefront to phase-only devices, but the reconstruction is plagued by noise caused by spatial-shifting errors. Here, a spectral-envelope-modulated double-phase method is proposed to suppress the spatial-shifting noise with an off-axis envelope modulation applied to the Fourier spectrum of the double-phase hologram. The proposed method outperforms the conventional on-axis double-phase method in optical reconstruction accuracy, with a 9.54% improvement in PSNR and a 196.86% improvement in SSIM.
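The underlying double-phase decomposition itself is standard and can be sketched as follows: a normalized complex field A·exp(iφ) is written as the sum of two phase-only terms exp(i(φ ± arccos A))/2, which are interleaved on a checkerboard grid; the off-axis spectral-envelope modulation proposed in the paper is not included in this sketch.

```python
import numpy as np

def double_phase_hologram(complex_field):
    # Double-phase decomposition: encode a normalized complex field as a single
    # phase-only hologram by checkerboard interleaving of two phase terms.
    A = np.abs(complex_field)
    A = A / (A.max() + 1e-12)              # normalize amplitude to [0, 1]
    phi = np.angle(complex_field)
    theta = np.arccos(A)
    p1, p2 = phi + theta, phi - theta
    ny, nx = complex_field.shape
    checker = (np.indices((ny, nx)).sum(axis=0) % 2).astype(bool)
    return np.where(checker, p1, p2)       # phase-only hologram in radians
```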
ABSTRACT
Mask-based lensless cameras break the constraints of traditional lens-based cameras, enabling highly flexible imaging systems. However, the inherent restrictions of imaging devices lead to low reconstruction quality. To overcome this challenge, we propose an explicit-restriction convolutional framework for lensless imaging, whose forward model effectively incorporates multiple restrictions by introducing linear and noise-like nonlinear terms. As examples, numerical and experimental reconstructions under the limitations of sensor size, pixel pitch, and bit depth are analyzed. By tailoring our framework to specific factors, better perceptual image quality or reconstructions with 4× pixel density can be achieved. The proposed framework can be extended to lensless imaging systems with different masks or structures.
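As a toy example of a forward model with explicit restrictions, the sketch below convolves the scene with a mask PSF and then applies a sensor-size crop and bit-depth quantization as a noise-like nonlinear term; the specific operations and parameters are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def restricted_forward_model(scene, psf, bit_depth=10, sensor_crop=None):
    # Linear part: circular convolution of the scene with the mask PSF.
    meas = np.real(np.fft.ifft2(np.fft.fft2(scene)
                                * np.fft.fft2(np.fft.ifftshift(psf))))
    # Restriction 1: finite sensor size modeled as a central crop.
    if sensor_crop is not None:
        cy, cx = sensor_crop
        ny, nx = meas.shape
        meas = meas[(ny - cy) // 2:(ny + cy) // 2, (nx - cx) // 2:(nx + cx) // 2]
    # Restriction 2: finite bit depth modeled as quantization (noise-like term).
    meas = meas / (meas.max() + 1e-12)
    levels = 2 ** bit_depth - 1
    return np.round(meas * levels) / levels
```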