ABSTRACT
We propose a method designed to push the frontiers of unconstrained face recognition in the wild, with an emphasis on extreme out-of-plane pose variations. Existing methods either expect a single model to learn pose invariance by training on massive amounts of data, or else normalize images by aligning faces to a single frontal pose. In contrast, our method is designed to explicitly tackle pose variations. Our proposed Pose-Aware Models (PAM) process a face image using several pose-specific, deep convolutional neural networks (CNN). 3D rendering is used to synthesize multiple face poses from input images, both to train these models and to provide additional robustness to pose variations at test time. Our paper presents an extensive analysis of the IARPA Janus Benchmark A (IJB-A), evaluating the effects that landmark detection accuracy, CNN layer selection, and pose model selection all have on the performance of the recognition pipeline. It further provides comparative evaluations on IJB-A and the PIPA dataset. These tests show that our approach outperforms existing methods, surprisingly even matching the accuracy of methods that were specifically fine-tuned to the target dataset. Parts of this work previously appeared in [1] and [2].
ABSTRACT
We aim to improve object recognition with few training data in the target domain by leveraging abundant auxiliary data in the source domain. The major issue obstructing knowledge transfer from source to target is the limited correlation between the two domains: transferring irrelevant information from the source domain usually degrades performance in the target domain. To address this issue, we propose a transfer learning framework with two key components: discriminative source data reconstruction and dual-domain boosting. The former correlates the two domains by reconstructing source data from target data in a discriminative manner. The latter discovers and delivers only the knowledge shared by the target data and the reconstructed source data, thereby facilitating recognition in the target domain. Promising experimental results on three object recognition benchmarks demonstrate the effectiveness of our approach.
ABSTRACT
In this paper, a robust illuminant estimation algorithm for color constancy is proposed. Considering the drawback of the well-known max-RGB algorithm, which considers only the pixels with maximum image intensities, we instead select representative pixels from an image for illuminant estimation: the representative pixels are determined via intensity bounds corresponding to a certain percentage value in the normalized cumulative histograms. To determine a suitable percentage, an iterative algorithm is presented that simultaneously neutralizes the chromaticity distribution and prevents overcorrection. Experimental results on the benchmark databases provided by Simon Fraser University and Microsoft Research Cambridge, as well as several web images, demonstrate the effectiveness of our approach.
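The representative-pixel idea above can be illustrated with a minimal sketch. This is not the paper's algorithm: the fixed percentage `p`, the intensity measure (channel sum), and the von Kries-style diagonal correction are all assumptions standing in for the paper's iteratively chosen percentage and its histogram-based bounds.

```python
import numpy as np

def estimate_illuminant(img, p=0.05):
    """Average the brightest fraction `p` of pixels as the illuminant estimate.

    A hedged sketch of percentile-based representative-pixel selection:
    the intensity bound is taken from the (1 - p) quantile of the pixel
    intensity distribution, rather than the single maximum used by max-RGB.
    `p` is a hypothetical fixed value; the paper chooses it iteratively.
    """
    pixels = img.reshape(-1, 3).astype(np.float64)
    intensity = pixels.sum(axis=1)                 # per-pixel brightness proxy
    bound = np.quantile(intensity, 1.0 - p)        # intensity bound for top p
    representative = pixels[intensity >= bound]    # representative pixels
    return representative.mean(axis=0)             # estimated illuminant color

def correct(img, illum):
    """Von Kries-style diagonal correction toward a neutral illuminant."""
    gain = illum.mean() / illum                    # per-channel gain
    return img * gain
```

On a scene rendered under a colored illuminant, `estimate_illuminant` recovers the illuminant's chromaticity from the bright pixels, and `correct` rescales the channels so the cast is removed; the paper's iteration would instead adjust `p` until the corrected chromaticity distribution is neutral without overcorrecting.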