ABSTRACT
The pandemic has made wearing masks commonplace, prompting researchers to investigate their effects on interpersonal perception. Findings indicate that masks obstruct face identification and expression recognition, with lower-face cues being most affected. When judging attractiveness, masks can enhance the appeal of less attractive faces but reduce the appeal of more attractive ones. Findings on trust and speech perception remain inconclusive. Future studies could focus on individual differences in how masks influence our perception of others.
ABSTRACT
Existing eye gaze tracking systems typically require an explicit personal calibration process to estimate certain person-specific eye parameters. For natural human-computer interaction, such personal calibration is often inconvenient and unnatural. In this paper, we propose a new probabilistic eye gaze tracking system that requires no explicit personal calibration. Unlike conventional eye gaze tracking methods, which estimate the eye parameters deterministically from known gaze points, our approach estimates the probability distributions of the eye parameters and the eye gaze. Using an incremental learning framework, the subject does not need personal calibration before using the system; the eye parameter and gaze estimates improve gradually as the subject interacts naturally with the system. Experimental results show that the proposed system achieves better than 3° accuracy for different people without explicit personal calibration.
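The incremental idea can be illustrated with a minimal sketch: maintain a Gaussian belief over a single person-specific eye parameter, start from a population prior instead of an explicit calibration session, and tighten the belief with each noisy observation gathered during natural interaction. The class name, the choice of a scalar parameter, and all numeric values below are illustrative assumptions, not the paper's actual model.

```python
class IncrementalEyeParameter:
    """Maintain a Gaussian belief over a scalar person-specific eye
    parameter (e.g., the angle between the optical and visual axes)
    and refine it from noisy observations collected during natural
    use, instead of an explicit calibration session."""

    def __init__(self, prior_mean=5.0, prior_var=4.0, obs_var=1.0):
        self.mean = prior_mean   # population-level prior, degrees (illustrative)
        self.var = prior_var     # prior uncertainty, degrees^2
        self.obs_var = obs_var   # per-observation noise, degrees^2

    def update(self, observed_value):
        # Conjugate Gaussian update: blend the current belief with the
        # new observation, weighted by their relative uncertainties.
        gain = self.var / (self.var + self.obs_var)
        self.mean += gain * (observed_value - self.mean)
        self.var *= (1.0 - gain)
        return self.mean, self.var
```

With each observation the posterior variance shrinks, so gaze accuracy improves gradually, mirroring the incremental-learning behavior the abstract describes.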
Subject(s)
Bayes Theorem; Fixation, Ocular/physiology; Image Processing, Computer-Assisted/methods; Animals; Calibration; Humans; Video Recording
ABSTRACT
Video-based human body pose estimation seeks to estimate the human body pose from an image or a video sequence that captures a person performing some activity. To handle noise and occlusion, a pose prior model is often constructed and then combined with the pose estimated from the image data to achieve more robust body pose tracking. Various body prior models have been proposed. Most are data-driven, typically learned from 3D motion capture data. Besides being expensive and time-consuming to collect, such data-based prior models cannot generalize well to activities and subjects not present in the motion capture data. To alleviate this problem, we propose to learn the prior model from anatomical, biomechanical, and physical constraints rather than from motion capture data. To this end, we propose methods that can effectively capture different types of constraints and systematically encode them into the prior model. Experiments on benchmark data sets show that the proposed prior model, compared with data-based prior models, achieves comparable performance for body motions present in the training data. It significantly outperforms the data-based prior models, however, in generalizing to different body motions and to different subjects.
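As a rough illustration of a constraint-based prior (not the paper's actual formulation), a candidate pose can be scored by how well it satisfies hand-specified constraints instead of by similarity to motion capture data: an anatomical term penalizes joint angles outside their range of motion, and a physical term penalizes implausible frame-to-frame jumps. The joint names, limits, and weights are illustrative assumptions.

```python
# Anatomical range-of-motion limits in degrees (illustrative values).
JOINT_LIMITS = {
    "elbow": (0.0, 150.0),
    "knee":  (0.0, 160.0),
}

def anatomical_penalty(pose):
    """Quadratic penalty for joint angles outside anatomical limits."""
    cost = 0.0
    for joint, angle in pose.items():
        lo, hi = JOINT_LIMITS[joint]
        if angle < lo:
            cost += (lo - angle) ** 2
        elif angle > hi:
            cost += (angle - hi) ** 2
    return cost

def smoothness_penalty(pose, prev_pose, max_delta=30.0):
    """Physical constraint: limbs cannot jump arbitrarily between frames."""
    cost = 0.0
    for joint in pose:
        delta = abs(pose[joint] - prev_pose[joint])
        if delta > max_delta:
            cost += (delta - max_delta) ** 2
    return cost

def prior_log_density(pose, prev_pose, w_anat=1.0, w_smooth=0.1):
    """Unnormalized log-prior: higher means a more plausible pose."""
    return -(w_anat * anatomical_penalty(pose)
             + w_smooth * smoothness_penalty(pose, prev_pose))
```

Because the constraints are specified rather than learned, the score applies equally to motions and subjects absent from any training set, which is the generalization advantage the abstract claims for this class of priors.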
Subject(s)
Image Processing, Computer-Assisted/methods; Models, Biological; Posture/physiology; Biomechanical Phenomena/physiology; Databases, Factual; Humans; Video Recording
ABSTRACT
Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions, often in the frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model from both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that, compared with state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
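The core inference idea, fusing uncertain per-frame visual measurements with a temporal model via probabilistic inference, can be sketched as forward filtering over a hidden discrete facial-action state. The states, transition matrix, and measurement likelihoods below are illustrative assumptions, not the paper's actual DBN, which jointly models rigid and nonrigid motions.

```python
import numpy as np

STATES = ["neutral", "smile", "frown"]

# P(state_t | state_{t-1}): facial actions persist across frames (assumed values).
TRANSITION = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.85, 0.05],
    [0.10, 0.05, 0.85],
])

# P(measurement | state): a noisy per-frame detector, right ~70% of the time.
LIKELIHOOD = np.array([
    [0.70, 0.15, 0.15],
    [0.15, 0.70, 0.15],
    [0.15, 0.15, 0.70],
])

def filter_sequence(measurements, prior=None):
    """Forward filtering: posterior over facial-action states per frame."""
    belief = np.full(len(STATES), 1.0 / len(STATES)) if prior is None else prior
    posteriors = []
    for m in measurements:
        predicted = TRANSITION.T @ belief      # temporal prediction step
        belief = LIKELIHOOD[:, m] * predicted  # measurement update step
        belief /= belief.sum()                 # renormalize to a distribution
        posteriors.append(belief.copy())
    return posteriors
```

Because the temporal model carries evidence across frames, an isolated misdetection is down-weighted rather than trusted outright, which is how probabilistic inference copes with the ambiguous measurements the abstract emphasizes.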