Results 1 - 4 of 4
1.
Front Psychol ; 14: 1203442, 2023.
Article in English | MEDLINE | ID: mdl-37416539

ABSTRACT

The pandemic has made wearing masks commonplace, prompting researchers to investigate their effects on interpersonal perception. Findings indicate masks obstruct face identification and expression recognition, with lower face cues being most affected. When judging attractiveness, masks can enhance the appeal of less attractive faces, but reduce the appeal of more attractive faces. Trust and speech perception outcomes are inconclusive. Future studies could focus on individual differences in how masks influence our perception of others.

2.
IEEE Trans Image Process ; 24(3): 1076-86, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25532184

ABSTRACT

Existing eye gaze tracking systems typically require an explicit personal calibration process to estimate certain person-specific eye parameters. For natural human-computer interaction, such personal calibration is often inconvenient and unnatural. In this paper, we propose a new probabilistic eye gaze tracking system that requires no explicit personal calibration. Unlike conventional eye gaze tracking methods, which estimate the eye parameters deterministically from known gaze points, our approach estimates probability distributions over the eye parameters and the eye gaze. Within an incremental learning framework, the subject does not need to perform personal calibration before using the system: the eye parameter and gaze estimates improve gradually as the subject interacts naturally with the system. Experimental results show that the proposed system achieves an accuracy of less than 3° for different people without explicit personal calibration.
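The abstract describes the idea rather than an implementation, so the following is only a minimal sketch of calibration-free, incremental Bayesian estimation of a single person-specific eye parameter (a hypothetical 1-D "kappa" offset between the optical and visual axes). The class name, prior, noise levels, and the use of clicked screen targets as implicit gaze observations are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch (not the authors' system) of incremental, calibration-free
# estimation of a person-specific eye parameter, modeled here as a 1-D angular
# offset "kappa" between the optical and visual axes.
class IncrementalKappaEstimator:
    def __init__(self, prior_mean_deg=5.0, prior_var_deg2=4.0, obs_var_deg2=1.0):
        # Population-level Gaussian prior over kappa (degrees) replaces an
        # explicit per-person calibration session.
        self.mean = prior_mean_deg
        self.var = prior_var_deg2
        self.obs_var = obs_var_deg2  # assumed noise of each implicit observation

    def update(self, observed_kappa_deg):
        # Conjugate Gaussian update: posterior precision = sum of precisions.
        post_var = 1.0 / (1.0 / self.var + 1.0 / self.obs_var)
        post_mean = post_var * (self.mean / self.var +
                                observed_kappa_deg / self.obs_var)
        self.mean, self.var = post_mean, post_var

    def corrected_gaze(self, optical_axis_angle_deg):
        # Apply the current kappa estimate to map the optical axis to gaze.
        return optical_axis_angle_deg + self.mean


# During natural interaction, each click on a known on-screen target yields a
# noisy implicit measurement of kappa, gradually sharpening the posterior.
est = IncrementalKappaEstimator()
for noisy_kappa in [6.1, 4.4, 5.3, 5.0]:
    est.update(noisy_kappa)
print(round(est.mean, 2), round(est.var, 3))
```

The point of the sketch is the incremental aspect: the posterior variance shrinks with every implicit observation, so gaze accuracy improves during use without a dedicated calibration step.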


Subject(s)
Bayes Theorem; Fixation, Ocular/physiology; Image Processing, Computer-Assisted/methods; Animals; Calibration; Humans; Video Recording
3.
IEEE Trans Image Process ; 22(12): 4627-39, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23893728

ABSTRACT

Video-based human body pose estimation seeks to estimate the human body pose from an image or a video sequence that captures a person performing some activity. To handle noise and occlusion, a pose prior model is often constructed and then combined with the pose estimated from the image data to achieve more robust body pose tracking. Various body prior models have been proposed. Most of them are data-driven, typically learned from 3D motion capture data. Besides requiring motion capture data that is expensive and time-consuming to collect, such data-based prior models cannot generalize well to activities and subjects not present in that data. To alleviate this problem, we propose to learn the prior model from anatomical, biomechanical, and physical constraints rather than from motion capture data. To this end, we propose methods that effectively capture different types of constraints and systematically encode them into the prior model. Experiments on benchmark data sets show that, for body motions present in the training data, the proposed prior model achieves performance comparable to data-based prior models; however, it significantly outperforms them in generalizing to different body motions and to different subjects.
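As a rough illustration of a constraint-based (rather than motion-capture-based) pose prior, the sketch below scores candidate joint angles against assumed anatomical range limits and a physical smoothness penalty, then fuses that prior with a stand-in image likelihood. The joint set, angle ranges, weights, and function names are hypothetical and much simpler than the paper's formulation.

```python
# Minimal sketch (not the paper's model) of a pose prior built from generic
# constraints instead of motion-capture data: anatomical joint-angle limits
# plus a physical penalty on implausibly fast joint motion between frames.
JOINT_LIMITS_DEG = {            # illustrative anatomical ranges per joint
    "knee":  (0.0, 150.0),
    "elbow": (0.0, 145.0),
    "neck":  (-60.0, 60.0),
}

def log_prior(pose, prev_pose, max_vel_deg=30.0, w_limit=10.0, w_smooth=0.1):
    """Log of an unnormalized constraint-based pose prior.

    pose, prev_pose: dicts mapping joint name -> angle (degrees) in two frames.
    """
    lp = 0.0
    for joint, angle in pose.items():
        lo, hi = JOINT_LIMITS_DEG[joint]
        # Anatomical constraint: quadratic penalty for leaving the feasible range.
        violation = max(lo - angle, 0.0) + max(angle - hi, 0.0)
        lp -= w_limit * violation ** 2
        # Physical constraint: penalize excessive joint velocity between frames.
        vel = abs(angle - prev_pose[joint])
        lp -= w_smooth * max(vel - max_vel_deg, 0.0) ** 2
    return lp

def log_posterior(pose, prev_pose, detected, obs_std=5.0):
    # Fuse the prior with a stand-in Gaussian image likelihood around a
    # hypothetical per-frame detector output, MAP-style.
    ll = -sum((pose[j] - detected[j]) ** 2 for j in pose) / (2 * obs_std ** 2)
    return ll + log_prior(pose, prev_pose)


prev = {"knee": 30.0, "elbow": 20.0, "neck": 0.0}
bad  = {"knee": 160.0, "elbow": 25.0, "neck": 5.0}   # violates the knee limit
ok   = {"knee": 40.0,  "elbow": 25.0, "neck": 5.0}
det  = {"knee": 45.0,  "elbow": 25.0, "neck": 5.0}
print(log_posterior(ok, prev, det) > log_posterior(bad, prev, det))  # True
```

Because the constraints are generic properties of human anatomy and physics, such a prior needs no motion-capture data and, in principle, carries over to motions and subjects that were never recorded.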


Subject(s)
Image Processing, Computer-Assisted/methods; Models, Biological; Posture/physiology; Biomechanical Phenomena/physiology; Databases, Factual; Humans; Video Recording
4.
IEEE Trans Pattern Anal Mach Intell ; 32(2): 258-73, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20075457

ABSTRACT

Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is largely limited to posed expressions, often captured in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interaction between rigid and nonrigid facial motions that produces a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on a dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model from both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference that systematically integrates the visual measurements with the facial action model. Experiments show that, compared with state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
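To make the modeling idea concrete, here is a minimal filtering sketch in the spirit of a dynamic Bayesian network: rigid head motion and one nonrigid facial action are coupled discrete states, and noisy image measurements are fused with a temporal model at each frame. The state space, transition and observation probabilities, and measurement stream are illustrative assumptions, far simpler than the unified DBN described in the paper.

```python
# Minimal sketch (not the authors' DBN): recognize a facial action by filtering a
# joint discrete state (head motion, action unit) against noisy measurements.
# Joint state: head in {0: still, 1: moving}, AU in {0: off, 1: on}.
states = [(h, a) for h in (0, 1) for a in (0, 1)]

def trans_prob(prev, cur):
    # Transition model encoding spatiotemporal coupling (illustrative numbers):
    # an active AU tends to persist, and head motion makes AU switches likelier.
    h0, a0 = prev
    h1, a1 = cur
    p_h = 0.8 if h1 == h0 else 0.2
    stay = 0.85 if h0 == 0 else 0.7
    p_a = stay if a1 == a0 else 1.0 - stay
    return p_h * p_a

def obs_lik(state, z_head, z_au):
    # Observation model: independent noisy detectors for head motion and AU.
    h, a = state
    p_head = 0.9 if z_head == h else 0.1
    p_au = 0.8 if z_au == a else 0.2
    return p_head * p_au

def filter_step(belief, z_head, z_au):
    # Predict with the transition model, then correct with the measurement.
    pred = {s: sum(belief[p] * trans_prob(p, s) for p in states) for s in states}
    post = {s: pred[s] * obs_lik(s, z_head, z_au) for s in states}
    norm = sum(post.values())
    return {s: v / norm for s, v in post.items()}


belief = {s: 0.25 for s in states}             # uniform initial belief
for z_head, z_au in [(1, 0), (1, 1), (0, 1)]:  # a short stream of noisy frames
    belief = filter_step(belief, z_head, z_au)
print(sum(p for (h, a), p in belief.items() if a == 1))  # P(facial action active)
```

The filtering recursion is what lets temporal consistency compensate for ambiguous per-frame measurements, which is the motivation for modeling rigid and nonrigid motions jointly rather than classifying each frame in isolation.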


Subject(s)
Biometric Identification/methods; Face/anatomy & histology; Algorithms; Artificial Intelligence; Bayes Theorem; Databases, Factual; Humans; Models, Statistical