Abstract
Multi-focus image fusion extends the depth of field to generate fully focused images; its key challenges are reliably detecting in-focus pixels and optimizing the fused image regions. A method based on multidimensional structure and edge-guided correction (MSEGC) is proposed. The pixel-level focus evaluation function is redesigned to preserve both image details and texture-free regions, and edge-guided decision correction is applied to suppress edge artifacts. Verification on public datasets and semiconductor inspection images shows that, compared with other methods, objective evaluation metrics improve by 22-50% and visual quality is better.
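The abstract does not give the form of the MSEGC evaluation function, so the following is only a minimal sketch of the general idea of pixel-level focus-based fusion, using a sum-modified-Laplacian response as an illustrative stand-in focus measure (not the paper's actual function):

```python
import numpy as np

def modified_laplacian(img):
    """Per-pixel focus response (sum-modified-Laplacian).
    An illustrative stand-in for the paper's multidimensional
    pixel-level focus evaluation function."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]) +
            np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]))

def fuse(img_a, img_b):
    """For each pixel, keep the source image with the larger
    focus response (a basic decision map, without the paper's
    edge-guided correction step)."""
    mask = modified_laplacian(img_a) >= modified_laplacian(img_b)
    return np.where(mask, img_a, img_b)
```

In the full method, the binary decision map would additionally be corrected near object edges before blending, which is where artifacts typically arise.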
Abstract
Focusing objects accurately over short time scales is an essential and nontrivial task in a variety of microscopy applications. In this Letter, an autofocusing algorithm based on the pixel difference with the Tanimoto coefficient (PDTC) is described for predicting the focal position. The method robustly distinguishes differences in clarity across datasets, and the generated autofocusing curves are highly sensitive. A defocused image stack acquired with an Olympus microscope demonstrates the feasibility of the technique. The approach is applicable to full-color microscopic imaging systems and is also valid for single-color imaging.
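The exact PDTC formulation is not given in the abstract; the sketch below only illustrates the ingredients it names, scoring sharpness as the Tanimoto dissimilarity between an image and a one-pixel shift of itself (the `pdtc_score` helper is a hypothetical name):

```python
import numpy as np

def tanimoto(x, y):
    """Tanimoto coefficient T(x, y) = x.y / (|x|^2 + |y|^2 - x.y)
    between two flattened arrays (assumes they are not both zero)."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    dot = x @ y
    return dot / (x @ x + y @ y - dot)

def pdtc_score(img):
    """Illustrative focus score: dissimilarity between the image and
    its horizontal one-pixel shift. Sharper images have larger
    neighbor (pixel) differences, hence lower Tanimoto similarity
    and a higher score."""
    return 1.0 - tanimoto(img[:, :-1], img[:, 1:])
```

Sweeping such a score over a defocused z-stack would trace an autofocusing curve whose maximum marks the predicted focal plane.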
Subjects
Algorithms, Microscopy, Microscopy/methods

Abstract
Lens distortion introduces deviations in visual measurement and positioning. It can be reduced by optimizing the lens design and selecting high-quality optical glass, but it cannot be eliminated completely. Most existing correction methods rely on an accurate distortion model and stable image features. In practice, however, the distortion is usually a mixture of the radial and tangential distortions of the lens group, which makes it difficult for a mathematical model to fit such non-uniform distortion accurately. This paper proposes a new model-independent method for correcting complex lens distortion. Using horizontal and vertical stripe patterns as the calibration target, the sub-pixel intensity distribution visualizes the image distortion, and the correction parameters are obtained directly from the pixel distribution. A quantitative evaluation method suited to model-independent correction is also proposed; it computes the error solely from feature points of the corrected image itself. Experiments show that the method accurately corrects distortion with only eight images, achieving an error of 0.39 pixels, and thus provides a simple approach to complex lens distortion correction.
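The abstract's self-referential evaluation idea can be sketched as follows, under the assumption that the feature points of a well-corrected stripe image should fall on a regular lattice (`lattice_residual` is a hypothetical helper, not the paper's exact metric):

```python
import numpy as np

def lattice_residual(points, spacing):
    """Model-independent evaluation sketch: RMS distance of detected
    feature points (e.g., stripe crossings) from the nearest node of
    a best-fit regular lattice with the given spacing. No distortion
    model is involved; only the corrected image's own points are used."""
    pts = np.asarray(points, dtype=float)
    # Assign each point to its nearest lattice index.
    idx = np.round((pts - pts.min(axis=0)) / spacing)
    # Least-squares lattice origin given those index assignments.
    origin = (pts - idx * spacing).mean(axis=0)
    resid = pts - (origin + idx * spacing)
    return float(np.sqrt((resid ** 2).mean()))
```

A residual near zero indicates that the correction has restored the stripe crossings to a regular grid; the paper reports an error of 0.39 pixels by a metric of this self-contained kind.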