ABSTRACT
The under-display imaging technique was recently proposed to enlarge the screen-to-body ratio of full-screen devices. However, existing image restoration algorithms generalize poorly to real-world under-display (UD) images, especially images containing strong light sources. To address this issue, we propose a novel method for building a synthetic dataset (the CalibPSF dataset) and introduce a two-stage neural network to solve the under-display imaging degradation problem. The CalibPSF dataset is generated using the calibrated high-dynamic-range point spread function (PSF) of the under-display optical system and contains various simulated light sources. The two-stage network removes the color distortion and the diffraction degradation sequentially. We evaluate our algorithm on a real-world test set that we captured. Comprehensive experiments demonstrate the superiority of our method across scenes of different dynamic ranges.
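The core of the data-synthesis idea above is to degrade a clean HDR scene by convolving it with the calibrated PSF and then clipping to the sensor's dynamic range. The following is a minimal sketch of that idea; the function name, inputs, and normalization are illustrative assumptions, not the paper's actual pipeline or calibration data.

```python
import numpy as np

def simulate_ud_capture(scene_hdr, psf):
    """Sketch: degrade an HDR scene by FFT-based circular convolution with a
    calibrated under-display PSF, then clip to mimic sensor saturation.
    `scene_hdr` (H x W) and `psf` are hypothetical stand-ins for real data."""
    H, W = scene_hdr.shape[:2]
    psf = psf / psf.sum()                       # energy-preserving normalization
    # zero-pad the PSF to the scene size and shift its center to the origin
    pad = np.zeros((H, W))
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    # circular convolution in the Fourier domain
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene_hdr) * np.fft.fft2(pad)))
    # clip to low dynamic range to mimic sensor saturation around light sources
    return np.clip(blurred, 0.0, 1.0)
```

In a real pipeline this would be applied per color channel with a per-channel PSF, since the under-display diffraction is wavelength-dependent.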
ABSTRACT
Mask-based lensless imaging cameras have many applications due to their smaller volume and lower cost. However, because the inverse problem is ill-posed, the reconstructed images suffer from low resolution and poor quality. In this article, we use a mask based on an almost perfect sequence, which has excellent autocorrelation properties, for lensless imaging, and propose a Learned Analytic solution Net for image reconstruction under the framework of unrolled optimization. Our network combines a physical imaging model with deep learning to achieve high-quality image reconstruction. Experimental results indicate that our reconstructed images, at a resolution of 512 × 512, perform well in both visual quality and objective evaluations.
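Unrolled-optimization networks of the kind described above typically embed a closed-form least-squares update in each layer: because the forward model is a circular convolution with the mask pattern, the regularized data-fidelity subproblem has an analytic solution in the Fourier domain. The sketch below shows this standard update; the function name, variables, and the use of a simple quadratic penalty are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

def analytic_x_update(y, psf, z, rho):
    """Sketch of the closed-form layer update
        argmin_x ||A x - y||^2 + rho * ||x - z||^2,
    where A is circular convolution with the mask PSF, y is the measurement,
    and z is the current denoised estimate from the learned prior.
    The normal equations (A^T A + rho I) x = A^T y + rho z diagonalize under
    the 2-D FFT, giving a per-frequency division."""
    H = np.fft.fft2(psf)                                   # transfer function
    num = np.conj(H) * np.fft.fft2(y) + rho * np.fft.fft2(z)
    den = np.abs(H) ** 2 + rho
    return np.real(np.fft.ifft2(num / den))
```

A mask with near-delta autocorrelation keeps |H| well conditioned across frequencies, which is why the almost perfect sequence helps this division stay stable.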