ABSTRACT
The projection of fringes plays an essential role in many applications, such as fringe projection profilometry and structured illumination microscopy. However, these capabilities are significantly constrained in environments affected by optical scattering. Although recent developments in wavefront shaping have effectively generated high-fidelity focal points and relatively simple structured images through scattering, the ability to project fringes that cover half of the projection area has not yet been achieved. To address this limitation, this study presents a neural-network-enabled fringe projector capable of projecting fringes with variable periodicities and orientation angles through scattering media. We tested this projector on two types of scattering media: ground glass diffusers and multimode fibers. For these scattering media, the average Pearson's correlation coefficients between the projected fringes and their designed configurations are 86.9% and 79.7%, respectively. These results demonstrate the effectiveness of the proposed neural-network-enabled fringe projector. This advancement is expected to broaden the scope of fringe-based imaging techniques, making them feasible under conditions previously hindered by scattering effects.
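As a hedged illustration of the evaluation quoted above, the sketch below (not the authors' code) generates a target fringe pattern with a chosen period and orientation and computes Pearson's correlation coefficient between a projected pattern and its design; the image size, period, orientation angle, and noise level are illustrative assumptions.

```python
# Minimal sketch: design a fringe pattern and score a projection against it
# with Pearson's correlation coefficient. All parameter values are illustrative.
import numpy as np

def design_fringe(shape=(256, 256), period_px=16, angle_deg=30.0):
    """Sinusoidal intensity fringe with a given period (pixels) and orientation."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    phase = 2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / period_px
    return 0.5 * (1 + np.cos(phase))  # intensity in [0, 1]

def pearson_cc(projected, designed):
    """Pearson's correlation coefficient between two intensity images."""
    p = projected.ravel() - projected.mean()
    d = designed.ravel() - designed.mean()
    return float(p @ d / (np.linalg.norm(p) * np.linalg.norm(d)))

# Example: compare a simulated noisy projection against its design.
target = design_fringe(period_px=24, angle_deg=45.0)
projected = np.clip(target + 0.1 * np.random.randn(*target.shape), 0, 1)
print(f"Pearson CC: {pearson_cc(projected, target):.3f}")
```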
ABSTRACT
Ultrafast imaging can capture dynamic scenes with nanosecond or even femtosecond temporal resolution. Complementarily, phase imaging can provide morphology, refractive index, or thickness information that intensity imaging cannot convey. Simultaneous ultrafast intensity and phase imaging is therefore important for extracting as much information as possible when detecting ultrafast dynamic scenes. Here, we report a single-shot intensity- and phase-sensitive compressive sensing-based coherent modulation ultrafast imaging technique, abbreviated as CS-CMUI, which integrates coherent modulation imaging, compressive imaging, and streak imaging. We demonstrate theoretically, through numerical simulations, that CS-CMUI can obtain both the intensity and phase information of dynamic scenes with ultrahigh fidelity. Furthermore, we experimentally build a CS-CMUI system and successfully measure the intensity and phase evolution of a multimode Q-switched laser pulse and the dynamical behavior of laser ablation on an indium tin oxide thin film. It is anticipated that CS-CMUI will enable a profound comprehension of ultrafast phenomena and promote the advancement of various practical applications, with substantial impact on fundamental and applied sciences.
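The sketch below is a simplified, hedged illustration of the streak-camera compressive forward model that CS-CMUI and related CUP-type systems build on: each temporal frame is spatially encoded, sheared along one detector axis in proportion to its frame index, and the sheared frames are integrated into a single 2D measurement. The coherent-modulation (random phase plate) step used for phase retrieval is deliberately omitted, and all sizes are arbitrary.

```python
# Simplified streak-camera compressive forward model (encode, shear, integrate).
import numpy as np

def forward_streak(frames, code):
    """frames: (T, H, W) dynamic scene; code: (H, W) binary encoding mask."""
    T, H, W = frames.shape
    meas = np.zeros((H + T - 1, W))           # detector extended along the shear axis
    for t in range(T):
        meas[t:t + H, :] += code * frames[t]  # encode, shear by t rows, integrate
    return meas

# Illustrative use with random data.
rng = np.random.default_rng(0)
scene = rng.random((8, 64, 64))
mask = (rng.random((64, 64)) > 0.5).astype(float)
y = forward_streak(scene, mask)
print(y.shape)  # (71, 64)
```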
ABSTRACT
Compressed ultrafast photography (CUP) is a novel two-dimensional (2D) imaging technique for capturing ultrafast dynamic scenes. Effective image reconstruction is essential in CUP systems. However, existing reconstruction algorithms mostly rely on image priors and complex parameter spaces; they are therefore generally time-consuming and yield poor imaging quality, which limits their practical applications. In this paper, we propose a novel reconstruction algorithm, to the best of our knowledge, named plug-and-play total variation-fast deep video denoising network (PnP-TV-FastDVDnet), which exploits an image's spatial features and its correlation features in the temporal dimension. It therefore offers higher-quality images than previously reported methods. First, we built a forward mathematical model of CUP and derived the closed-form solutions of the three sub-optimization problems within the plug-and-play framework. Second, we used an advanced neural-network-based video denoising algorithm, FastDVDnet, to solve the denoising sub-problem. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are improved on actual CUP data compared with traditional algorithms. On benchmark and real CUP datasets, the proposed method shows comparable visual results while reducing the running time by 96% relative to state-of-the-art algorithms.
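Below is a minimal, hedged sketch of a plug-and-play ADMM reconstruction loop of the kind described above, with a generic video denoiser standing in for FastDVDnet. The forward operator, its adjoint, the step size, and the penalty parameter are assumptions of the sketch, and the data-fidelity update uses a gradient step rather than the paper's closed-form solution to keep it operator-agnostic.

```python
# PnP-ADMM sketch: the denoiser replaces the proximal operator of the prior.
import numpy as np

def pnp_admm(y, A, At, denoiser, x0, rho=1.0, step=0.2, n_iter=50):
    """y: measurement; A/At: forward operator and adjoint; denoiser: callable."""
    x = x0.copy()
    v = x0.copy()
    u = np.zeros_like(x0)
    for _ in range(n_iter):
        # x-update: gradient step on ||y - A(x)||^2 + rho * ||x - (v - u)||^2
        grad = At(A(x) - y) + rho * (x - (v - u))
        x = x - step * grad
        # v-update: plug-in denoiser acts as the prior
        v = denoiser(x + u)
        # dual update
        u = u + x - v
    return x

# Toy usage with an identity forward operator and a box-filter "denoiser".
rng = np.random.default_rng(0)
y = rng.random((4, 32, 32))
A = lambda x: x
At = lambda r: r
box = lambda z: (z + np.roll(z, 1, axis=-1) + np.roll(z, -1, axis=-1)) / 3.0
x_rec = pnp_admm(y, A, At, box, x0=np.zeros_like(y))
```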
ABSTRACT
Hyperspectrally compressed ultrafast photography (HCUP), based on compressed sensing and time- and spectrum-to-space mappings, can simultaneously realize temporal and spectral imaging of non-repeatable or difficult-to-repeat transient events in a passive manner within a single exposure. HCUP possesses an extremely high frame rate of tens of trillions of frames per second and a sequence depth of several hundred, and therefore plays a revolutionary role in single-shot ultrafast optical imaging. However, owing to the ultra-high data compression ratios induced by the extremely large sequence depth, as well as the limited fidelity of traditional algorithms in the image reconstruction process, HCUP suffers from poor image reconstruction quality and fails to capture fine structures in complex transient scenes. To overcome these restrictions, we report a flexible image reconstruction algorithm for HCUP based on total variation (TV) and cascaded denoisers (CD), named the TV-CD algorithm. The TV-CD algorithm applies the TV denoising model cascaded with several advanced deep learning-based denoising models within the iterative plug-and-play alternating direction method of multipliers framework, which not only preserves image smoothness through TV but also incorporates additional priors through the CD. It thereby alleviates the common sparse-representation problems of local similarity and motion compensation. Both simulation and experimental results show that the proposed TV-CD algorithm effectively improves the image reconstruction accuracy and quality of HCUP, and may further promote the practical applications of HCUP in capturing high-dimensional, complex physical, chemical, and biological ultrafast dynamic scenes.
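A hedged sketch of the cascaded-denoiser idea follows: within each plug-and-play iteration, the prior step first applies TV denoising and then one or more learned denoisers. The TV routine here is a small gradient-descent stand-in written for illustration only, and the learned denoisers are passed in as callables whose architectures and weights are outside the scope of the sketch.

```python
# Cascaded denoising step: TV smoothing followed by learned denoisers.
import numpy as np

def tv_denoise(img, weight=0.5, n_iter=30, tau=0.1):
    """Very small gradient-descent TV denoiser (isotropic, illustrative only)."""
    x = img.copy()
    for _ in range(n_iter):
        gx = np.roll(x, -1, axis=1) - x
        gy = np.roll(x, -1, axis=0) - x
        mag = np.sqrt(gx**2 + gy**2 + 1e-8)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x = x - tau * ((x - img) / weight - div)   # gradient of fidelity + TV terms
    return x

def cascaded_denoise(img, learned_denoisers, tv_weight=0.5):
    """TV denoising cascaded with a list of learned denoisers."""
    out = tv_denoise(img, weight=tv_weight)
    for dn in learned_denoisers:
        out = dn(out)
    return out
```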
ABSTRACT
Structured illumination microscopy (SIM) has been widely applied to investigating fine structures of biological samples by breaking the optical diffraction limit. Video-rate imaging has been achieved in SIM, but the imaging speed remains limited by the need to reconstruct a super-resolution image from multiple raw acquisitions, which hinders applications in high-speed biomedical imaging. To overcome this limitation, here we develop compressive imaging-based structured illumination microscopy (CISIM) by synergizing SIM and compressive sensing (CS). Compared with conventional SIM, CISIM can greatly improve the super-resolution imaging speed by extracting multiple super-resolution images from one compressed image. Based on CISIM, we successfully reconstruct super-resolution images of biological dynamics and analyze the factors affecting image reconstruction quality, which verifies the feasibility of CISIM. CISIM paves the way for high-speed super-resolution imaging, which may bring technological breakthroughs and significant applications in biomedical imaging.
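As one plausible, heavily hedged reading of the coded-snapshot idea behind such CS-based multiplexing, the sketch below multiplies several raw frames by distinct binary codes and sums them into a single compressed exposure, from which the individual frames would later be recovered by a CS solver. The codes, frame count, and sizes are illustrative assumptions, not the CISIM implementation.

```python
# Coded-snapshot forward model: code each frame, sum into one exposure.
import numpy as np

def compress_frames(raw_frames, codes):
    """raw_frames, codes: (N, H, W); returns one (H, W) compressed image."""
    return np.sum(codes * raw_frames, axis=0)

rng = np.random.default_rng(1)
frames = rng.random((9, 128, 128))                      # illustrative raw frames
codes = (rng.random((9, 128, 128)) > 0.5).astype(float)  # binary coding patterns
compressed = compress_frames(frames, codes)
print(compressed.shape)  # (128, 128)
```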
Subject(s)
Illumination, Fluorescence Microscopy/methods, Physical Phenomena
ABSTRACT
Capable of passively capturing transient scenes occurring on picosecond and even shorter timescales with an extremely large sequence depth in a single snapshot, compressed ultrafast photography (CUP) has attracted tremendous attention in ultrafast optical imaging. However, the high compression ratio induced by the large sequence depth leads to low image quality in image reconstruction, preventing CUP from observing transient scenes with fine spatial information. To overcome these restrictions, we propose an efficient image reconstruction algorithm with multi-scale (MS) weighted denoising based on the plug-and-play (PnP) alternating direction method of multipliers (ADMM) framework for multi-channel coupled CUP (MC-CUP), named the MCMS-PnP algorithm. By removing non-Gaussian-distributed noise using weighted MS denoising during each ADMM iteration, and adaptively adjusting the weights by fully exploiting the coupling information among the different acquisition channels of MC-CUP, a synergistic combination of hardware and algorithm is realized that significantly improves the quality of image reconstruction. Both simulation and experimental results demonstrate that the proposed adaptive MCMS-PnP algorithm effectively improves the accuracy and quality of reconstructed images in MC-CUP and extends the detectable range of CUP to transient scenes with fine structures.
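The following is a heavily hedged sketch of what a weighted multi-scale denoising step could look like: the current estimate is denoised at several spatial scales (downsample, denoise, upsample) and the results are blended with weights. In MC-CUP the weights are adapted from the coupling information among acquisition channels; here they are simply passed in as numbers, which is an assumption of this sketch rather than the paper's adaptive scheme.

```python
# Weighted multi-scale denoising sketch with fixed, user-supplied weights.
import numpy as np

def multiscale_denoise(img, denoiser, scales=(1, 2, 4), weights=(0.5, 0.3, 0.2)):
    """Denoise at several scales and blend the upsampled results."""
    out = np.zeros_like(img)
    for s, w in zip(scales, weights):
        small = img[::s, ::s]                                    # crude downsampling
        den = denoiser(small)                                    # denoise at this scale
        up = np.kron(den, np.ones((s, s)))[:img.shape[0], :img.shape[1]]
        out += w * up
    return out / sum(weights)
```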
ABSTRACT
Compressive ultrafast photography (CUP) has achieved real-time femtosecond imaging based on compressive-sensing methods. However, the reconstruction performance usually suffers from artifacts caused by strong noise, aberration, and distortion, which limits its applications. We propose a deep compressive ultrafast photography (DeepCUP) method. We carried out various numerical simulations on both the MNIST and UCF-101 datasets and compared DeepCUP with other state-of-the-art algorithms. The results show that DeepCUP achieves superior performance in both PSNR and SSIM compared to previous compressed-sensing methods. We also illustrate the outstanding performance of the proposed method under system errors and noise in comparison to other methods.
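For reference, the sketch below shows how the PSNR and SSIM figures of merit quoted in these comparisons are typically computed for a reconstructed frame against its ground truth using scikit-image; the arrays and the data_range value are illustrative, not data from the paper.

```python
# Typical PSNR/SSIM evaluation of a reconstruction against ground truth.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64))
reconstruction = np.clip(ground_truth + 0.05 * rng.standard_normal((64, 64)), 0, 1)

psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=1.0)
ssim = structural_similarity(ground_truth, reconstruction, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```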
ABSTRACT
The spatial, temporal, and spectral information in optical imaging plays a crucial role in exploring the unknown world and unraveling natural mysteries. However, existing single-shot optical imaging techniques can acquire only the spatiotemporal or the spatiospectral information of the object. Here, we develop hyperspectrally compressed ultrafast photography (HCUP) that can simultaneously record the spatial, temporal, and spectral information of the object. In our HCUP, the spatial resolution is 1.26 lp/mm in the horizontal direction and 1.41 lp/mm in the vertical direction, the temporal frame interval is 2 ps, and the spectral frame interval is 1.72 nm. Moreover, HCUP operates in a receive-only, single-shot mode; it therefore overcomes the technical limitation of active illumination and can measure non-repetitive or irreversible transient events. Using our HCUP, we successfully measure the spatiotemporal-spatiospectral intensity evolution of a chirped picosecond laser pulse and of photoluminescence dynamics. This Letter extends optical imaging from three-dimensional to four-dimensional information, which has important scientific significance in both fundamental research and applied science.
ABSTRACT
This work treats the impact of vibrational coherence on the quantum efficiency of a dissipative electronic wave packet in the vicinity of a conical intersection by monitoring the time-dependent projection of the wave packet onto the tuning and coupling modes. The vibrational coherence of the wave packet is tuned by varying the strength of the dissipative vibrational coupling of the tuning and coupling modes to their thermal baths. We observe that the most coherent wave packet yields a quantum efficiency of 93%, but with a large transfer time constant. The quantum yield decreases dramatically to 50% for a strongly damped, incoherent wave packet, but the associated transfer time of the strongly localized wave packet is short. In addition, we find that for the strongly damped wave packet the transfer occurs via tunneling between the potential energy surfaces before the seam of the conical intersection is reached and a direct passage takes over. Our results provide direct evidence that the vibrational coherence of the electronic wave packet is a decisive factor determining the dynamical behavior of a wave packet in the vicinity of a conical intersection.
ABSTRACT
Compressed ultrafast photography (CUP) can capture irreversible or difficult-to-repeat dynamic scenes at imaging speeds of more than one billion frames per second, obtained by compressive sensing-based image reconstruction from a 2D image compressed through the discretization of detector pixels. However, an excessively high data compression ratio in CUP severely degrades the image reconstruction quality, restricting its ability to observe ultrafast dynamic scenes with complex spatial structures. To address this issue, a discrete illumination-based CUP (DI-CUP) with high fidelity is reported. In DI-CUP, the dynamic scenes are loaded onto an ultrashort laser pulse train with a controllable sub-pulse number and time interval. The data compression ratio, as well as the overlap between adjacent frames, is thus greatly decreased and flexibly controlled through the discretization of the dynamic scenes by the pulse-train illumination, and high-fidelity image reconstruction can be realized within the same observation time window. Furthermore, the superior performance of DI-CUP is verified by observing femtosecond laser-induced ablation dynamics and plasma channel evolution, whose spatial structures can hardly be resolved with conventional CUP. It is anticipated that DI-CUP will be widely and dependably used in real-time observations of various ultrafast dynamics.
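A hedged back-of-the-envelope sketch of why pulse-train illumination lowers the data compression ratio: in a CUP-type measurement the ratio scales roughly with the number of temporal frames folded into one 2D snapshot, and illuminating with N discrete sub-pulses means only N frames carry signal within the same observation window. The function and all numbers below are illustrative assumptions, not values from the paper.

```python
# Rough compression-ratio comparison: continuous vs. discrete illumination.
def compression_ratio(n_frames, n_rows, n_cols, n_detector_pixels):
    """Ratio of reconstructed voxels to measured detector pixels."""
    return n_frames * n_rows * n_cols / n_detector_pixels

window_frames = 300                      # frames in the window (continuous CUP)
sub_pulses = 10                          # frames actually illuminated (DI-CUP)
rows, cols = 256, 256
detector = cols * (rows + window_frames)  # rough size of the sheared measurement

print(compression_ratio(window_frames, rows, cols, detector))  # continuous
print(compression_ratio(sub_pulses, rows, cols, detector))     # discrete
```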
ABSTRACT
Structured illumination microscopy (SIM), as a flexible tool, has been widely applied to observing subcellular dynamics in live cells. However, the theoretical resolution of SIM is limited to only twice that of wide-field microscopy, which significantly constrains imaging of finer biological structures and dynamics. To surpass this resolution limit, we developed an image post-processing method that further improves the lateral resolution of SIM with an untrained neural network, i.e., deep resolution-enhanced SIM (DRE-SIM). DRE-SIM can further extend the spatial frequency components of SIM by employing the implicit priors of the neural network, without training datasets. The additional super-resolution capability of DRE-SIM is verified by theoretical simulations as well as experimental measurements. Our experimental results show that DRE-SIM achieves a resolution enhancement by a factor of about 1.4 compared with conventional SIM. Given its ability to improve the lateral resolution while maintaining the imaging speed, DRE-SIM will have a wide range of applications in biomedical imaging, especially when high-speed imaging mechanisms are integrated into the conventional SIM system.
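In the spirit of an untrained-network post-processing step, the sketch below fits a small, randomly initialized convolutional network (deep-image-prior style) so that its output reproduces a SIM image, letting the network's implicit prior regularize the result. The architecture, loss, learning rate, and iteration count are assumptions of this sketch, not the DRE-SIM specifics.

```python
# Untrained-network (deep-image-prior style) fitting of a single SIM image.
import torch
import torch.nn as nn

def fit_untrained_net(sim_image, n_iter=500, lr=1e-3):
    """sim_image: 2D torch tensor; returns the network's regularized output."""
    target = sim_image[None, None]                  # shape (1, 1, H, W)
    net = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    z = torch.randn_like(target)                    # fixed random network input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        loss = torch.mean((net(z) - target) ** 2)   # data-fidelity term only
        loss.backward()
        opt.step()
    return net(z).detach()[0, 0]
```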
ABSTRACT
The uncorrelated position and velocity distributions of the electron bunch at the photocathode, arising from the residual energy, greatly limit the transverse coherence length and the recompression capability. Here, we propose for the first time a femtosecond pulse-shaping method to realize electron pulse self-compression in an ultrafast electron diffraction system, based on a point-to-point space-charge model. A positively chirped femtosecond laser pulse correspondingly creates a positively chirped electron bunch at the photocathode (such as a metal-insulator heterojunction), and such a shaped electron pulse can self-compress during its subsequent propagation. The greatest advantage of the proposed scheme is that no additional components are introduced into the ultrafast electron diffraction system, so the electron bunch shape is not affected. More importantly, this scheme can break the limitation of post-photocathode static compression schemes, in which the electron pulse cannot be made shorter than the excitation laser pulse because of the uncorrelated position and velocity distribution of the initial electron bunch.
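The following is a heavily hedged, space-charge-free toy sketch of ballistic self-compression of a chirped electron bunch: electrons at the rear are launched faster than those at the front, so the bunch shortens during free propagation, reaches a minimum duration, and then stretches again. The chirp, bunch length, velocity spread, and mean velocity are illustrative numbers, and the paper's point-to-point space-charge model is not included.

```python
# Toy 1D ballistic compression of a velocity-chirped electron bunch (no space charge).
import numpy as np

n = 2000
rng = np.random.default_rng(0)
z0 = rng.normal(0.0, 30e-6, n)                 # initial longitudinal positions (m)
v0 = 5.9e7                                     # mean velocity (~10 keV electrons), m/s
chirp = -2.0e9                                 # dv/dz < 0: rear (z < 0) electrons faster
dv = rng.normal(0.0, 1.0e4, n)                 # uncorrelated velocity spread (m/s)
v = v0 + chirp * z0 + dv

times = np.linspace(0, 1e-9, 500)              # free-propagation times (s)
durations = [np.std(z0 + v * t) / v0 for t in times]   # rms bunch duration (s)
i_min = int(np.argmin(durations))
print(f"min rms duration ~{durations[i_min]*1e15:.0f} fs at t = {times[i_min]*1e12:.0f} ps")
```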