Results 1 - 3 of 3
1.
IEEE Trans Image Process; 32: 4677-4688, 2023.
Article in English | MEDLINE | ID: mdl-37527318

ABSTRACT

In this paper, we propose an efficient deep learning pipeline for light field acquisition using a back-to-back dual-fisheye camera. The proposed pipeline generates a light field from a sequence of 360° raw images captured by the dual-fisheye camera. It has three main components: a convolutional neural network (CNN) that enforces a spatiotemporal consistency constraint on the subviews of the 360° light field, an equirectangular matching cost that improves the accuracy of disparity estimation, and a light field resampling subnet that produces the 360° light field from the disparity information. Ablation tests are conducted to analyze the performance of the proposed pipeline on the HCI light field datasets using five objective assessment metrics (MSE, MAE, PSNR, SSIM, and GMSD). We also use real data obtained from a commercially available dual-fisheye camera to quantitatively and qualitatively test the effectiveness, robustness, and quality of the proposed pipeline. Our contributions are: 1) a novel spatiotemporal consistency loss that enforces consistency among the subviews of the 360° light field, 2) an equirectangular matching cost that combats the severe projection distortion of fisheye images, and 3) a light field resampling subnet that retains the geometric structure of spherical subviews while enhancing the angular resolution of the light field.
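
As an illustration of the spatiotemporal consistency idea described above, the minimal sketch below penalizes disagreement between angularly adjacent subviews and temporally adjacent frames of a light field tensor. It is not the paper's loss (the warping, weighting, and equirectangular handling are not specified here); the tensor layout [batch, angular, time, channels, height, width] and the plain L1 penalty are assumptions made only for illustration.

import torch


def spatiotemporal_consistency_loss(subviews: torch.Tensor) -> torch.Tensor:
    # subviews: [batch, angular, time, channels, height, width] (assumed layout)
    # Angular consistency: neighbouring subviews of the same frame should agree.
    angular_diff = (subviews[:, 1:] - subviews[:, :-1]).abs().mean()
    # Temporal consistency: the same subview in consecutive frames should agree.
    temporal_diff = (subviews[:, :, 1:] - subviews[:, :, :-1]).abs().mean()
    return angular_diff + temporal_diff


if __name__ == "__main__":
    lf = torch.rand(2, 5, 4, 3, 64, 128)  # toy light field: 5 subviews, 4 frames
    print(spatiotemporal_consistency_loss(lf).item())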

2.
IEEE Trans Image Process; 31: 251-262, 2022.
Article in English | MEDLINE | ID: mdl-34855594

ABSTRACT

Back-to-back dual-fisheye cameras are the most cost-effective devices for capturing 360° visual content. However, image and video stitching for such cameras often suffers from fisheye distortion, photometric inconsistency between the two views, and non-collocated optical centers. In this paper, we present algorithms for geometric calibration, photometric compensation, and seamless stitching that address these issues for back-to-back dual-fisheye cameras. Specifically, we develop a co-centric trajectory model for geometric calibration that characterizes both the intrinsic and extrinsic parameters of the fisheye camera to fifth-order precision, a photometric correction model for intensity and color compensation that provides efficient and accurate local color transfer, and a mesh deformation model along with an adaptive seam carving method for image stitching that reduces geometric distortion and ensures optimal spatiotemporal alignment. Both the stitching and compensation algorithms run efficiently on 1920×960 images. Quantitative evaluation of geometric distortion, color discontinuity, jitter, and ghost artifacts in the resulting images and videos shows that our solution outperforms state-of-the-art techniques.
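
To make the photometric compensation step concrete, here is a minimal sketch of per-channel color matching in the overlap region between the two fisheye views. It is not the paper's local color-transfer model; the compensate_colors helper and the assumption that corresponding overlap patches have already been extracted are hypothetical simplifications.

import numpy as np


def compensate_colors(front: np.ndarray, back: np.ndarray,
                      overlap_front: np.ndarray, overlap_back: np.ndarray) -> np.ndarray:
    # Estimate per-channel gains from the shared overlap region (assumed to be given).
    eps = 1e-6
    gain = overlap_front.reshape(-1, 3).mean(axis=0) / (overlap_back.reshape(-1, 3).mean(axis=0) + eps)
    # Scale the back view so its overlap statistics match the front view's.
    corrected = back.astype(np.float64) * gain
    return np.clip(corrected, 0, 255).astype(back.dtype)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    front = rng.integers(0, 256, (960, 1920, 3)).astype(np.uint8)
    back = (front * 0.8).astype(np.uint8)                 # toy: back view is uniformly darker
    fixed = compensate_colors(front, back, front[:, :100], back[:, :100])
    print(fixed.reshape(-1, 3).mean(axis=0))              # channel means move toward the front view's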

3.
IEEE Trans Image Process; 30: 264-276, 2021.
Article in English | MEDLINE | ID: mdl-32870793

ABSTRACT

Rectilinear face recognition models suffer severe performance degradation when applied to fisheye images captured by 360° back-to-back dual-fisheye cameras. We propose a novel face rectification method that combats the effect of fisheye distortion on face recognition. The method consists of a classification network and a restoration network specifically designed to handle the nonlinear nature of fisheye projection. The classification network classifies an input fisheye image according to its distortion level. The restoration network takes a distorted image as input and restores the rectilinear geometric structure of the face. The performance of the proposed method is tested on an end-to-end face recognition system constructed by integrating the proposed rectification method with a conventional rectilinear face recognition system. The face verification accuracy of the integrated system is 99.18% on images from the synthetic Labeled Faces in the Wild (LFW) dataset and 95.70% on images from a real image dataset, an average accuracy improvement of 6.57% over the conventional face recognition system. For face identification, the average improvement over the conventional face recognition system is 4.51%.
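
The two-stage rectification flow described above can be sketched as a classifier that predicts a distortion level followed by a restoration network conditioned on that level. The DistortionClassifier and RestorationNet modules below are toy stand-ins, not the paper's architectures; they only illustrate how the two networks would be chained in front of a conventional recognizer.

import torch
import torch.nn as nn


class DistortionClassifier(nn.Module):
    # Toy stand-in: predicts one of num_levels fisheye distortion levels.
    def __init__(self, num_levels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_levels))

    def forward(self, x):
        return self.features(x)


class RestorationNet(nn.Module):
    # Toy stand-in: maps a distorted face (plus its predicted level) to a rectified face.
    def __init__(self, num_levels: int = 4):
        super().__init__()
        self.num_levels = num_levels
        self.body = nn.Conv2d(3 + num_levels, 3, 3, padding=1)

    def forward(self, x, level):
        # Condition on the distortion level via a one-hot map concatenated to the input.
        onehot = torch.zeros(x.size(0), self.num_levels, *x.shape[2:], device=x.device)
        onehot[torch.arange(x.size(0)), level] = 1.0
        return self.body(torch.cat([x, onehot], dim=1))


def rectify(face: torch.Tensor, classifier: nn.Module, restorer: nn.Module) -> torch.Tensor:
    # Stage 1: classify the distortion level; stage 2: restore the rectilinear face.
    level = classifier(face).argmax(dim=1)
    return restorer(face, level)


if __name__ == "__main__":
    face = torch.rand(1, 3, 112, 112)  # toy fisheye face crop
    print(rectify(face, DistortionClassifier(), RestorationNet()).shape)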
