Results 1 - 5 of 5
1.
Psychol Sci ; 29(8): 1257-1269, 2018 08.
Article in English | MEDLINE | ID: mdl-29874156

ABSTRACT

Motor-based theories of facial expression recognition propose that the visual perception of a facial expression is aided by sensorimotor processes that are also used to produce the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, correlated with those of a nonsensory condition in which participants imagined an emotional situation. These results are well accounted for by the idea that facial expression recognition is not always mediated by motor processes; facial expressions can also be recognized from visual information alone.


Subject(s)
Facial Expression , Facial Recognition/physiology , Social Perception , Visual Perception , Emotions , Humans , Psychomotor Performance
2.
J Vis ; 18(4): 13, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29710303

ABSTRACT

According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, massively reducing the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by learning low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect the contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found a remarkably low dimensionality: only two movement primitives were sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment demonstrating that expressions simulated with only two primitives are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting a very low-dimensional parametrization of the associated facial expressions.
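The kind of dimensionality estimate this abstract describes can be sketched with a standard principal component analysis; the snippet below is purely illustrative (simulated marker trajectories generated from two known primitives, not the authors' data or model) and simply counts how many components are needed to explain most of the variance.

```python
import numpy as np

# Hypothetical data: 11 "expressions", each a 50-step trajectory over 60
# facial markers, generated from only 2 underlying temporal primitives.
rng = np.random.default_rng(0)
n_expr, n_time, n_markers = 11, 50, 60
t = np.linspace(0, 1, n_time)
primitives = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # (2, 50)
mixing = rng.normal(size=(n_expr, n_markers, 2))                       # random weights
data = np.einsum('emk,kt->emt', mixing, primitives)                    # (11, 60, 50)
data += 0.01 * rng.normal(size=data.shape)                             # sensor noise

# Flatten to (observations, features), mean-center, and run PCA via SVD.
X = data.reshape(n_expr * n_markers, n_time)
X -= X.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)

# Effective dimensionality: components needed to explain 95% of the variance.
n_components = int(np.searchsorted(explained, 0.95) + 1)
print(n_components)
```

Because the simulated trajectories are built from two primitives plus small noise, PCA recovers an effective dimensionality of two; on real motion-capture data the cumulative-variance curve would determine the count.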


Subject(s)
Biomechanical Phenomena/physiology , Emotions/physiology , Facial Expression , Facial Recognition/physiology , Psychomotor Performance/physiology , Adult , Bayes Theorem , Female , Humans , Male , Psychophysics
3.
J Vis ; 13(1): 23, 2013 Jan 18.
Article in English | MEDLINE | ID: mdl-23335323

ABSTRACT

Probing emotional facial expression recognition with the adaptation paradigm is one way to investigate the processes underlying emotional face recognition. Previous research suggests that these processes are tuned to dynamic facial information (facial movement). Here we examined how the processes involved in the recognition of emotional facial expressions are tuned to different sources of facial movement information. Specifically, we investigated the effect of the availability of rigid head movement and of intrinsic facial movements (e.g., movement of facial features) on the size of the emotional facial expression adaptation effect. Using a three-dimensional (3D) morphable model that allowed us to manipulate the availability of each of the two factors (intrinsic facial movement, head movement) individually, we examined emotional facial expression adaptation with happy and disgusted faces. Our results show that intrinsic facial movement is necessary for the emergence of an emotional facial expression adaptation effect with dynamic adaptors. The presence of rigid head motion modulates the emotional facial expression adaptation effect only in the presence of intrinsic facial motion. In a second experiment, we show that these adaptation effects are difficult to explain by the perceived intensity and clarity (uniqueness) of the adaptor expressions alone. Together, these results suggest that the processes encoding facial expressions are differentially tuned to different sources of facial movement.


Subject(s)
Cues , Emotions/physiology , Face/physiology , Facial Expression , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Adult , Female , Humans , Male , Mental Processes/physiology
4.
PLoS One ; 9(1): e86502, 2014.
Article in English | MEDLINE | ID: mdl-24466123

ABSTRACT

The social context in which an action is embedded provides important information for its interpretation. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants' perceptual bias for a test action after they were adapted to one of two adaptors (adaptation after-effect). The action adaptation after-effect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation), although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is due to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by the emotional content of the action alone (experiment 3), and visual information about the action appears to be critical for the emergence of action adaptation effects (experiment 4). Taken together, these results suggest that the processes underlying visual action recognition are sensitive to the social context of an action.
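An adaptation after-effect of the kind measured here is typically quantified as a shift in the point of subjective equality (PSE) of a psychometric function between adaptor conditions. The sketch below uses simulated response curves (hypothetical morph levels and logistic parameters, not the study's data) to show the basic computation.

```python
import numpy as np

# Test stimuli: a morph continuum from action A (0.0) to action B (1.0).
morph_levels = np.linspace(0.0, 1.0, 9)

def logistic(x, pse, slope=12.0):
    """Probability of a "B" response at a given morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Simulated psychometric functions: adapting to A biases perception toward B
# (PSE shifts left), and vice versa -- the classic repulsive after-effect.
p_after_A = logistic(morph_levels, pse=0.42)
p_after_B = logistic(morph_levels, pse=0.58)

def estimate_pse(levels, p):
    """Estimate the PSE by linearly interpolating the 50% crossing point."""
    i = np.searchsorted(p, 0.5)
    x0, x1, y0, y1 = levels[i - 1], levels[i], p[i - 1], p[i]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

# After-effect size: the PSE shift between the two adaptor conditions.
after_effect = estimate_pse(morph_levels, p_after_B) - estimate_pse(morph_levels, p_after_A)
print(round(after_effect, 3))
```

In a real experiment the two curves would come from response proportions in the two social-context conditions, and the modulation described in the abstract would appear as a change in this PSE shift across contexts.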


Subject(s)
Adaptation, Physiological , Pattern Recognition, Visual , Social Environment , Adult , Emotions , Female , Humans , Male , Photic Stimulation , Psychometrics , Young Adult
5.
Front Psychol ; 4: 752, 2013.
Article in English | MEDLINE | ID: mdl-24155731

ABSTRACT

Recognizing social interactions, e.g., two people shaking hands, is important for obtaining information about other people and the surrounding social environment. Despite the visual complexity of social interactions, humans often have little difficulty visually recognizing them. What is the visual representation of social interactions, and which bodily visual cues promote this remarkable human ability? Viewpoint-dependent representations are considered to be at the heart of the visual recognition of many visual stimuli, including objects (Bülthoff and Edelman, 1992) and biological motion patterns (Verfaillie, 1993). Here we addressed the question of whether complex social actions acted out between pairs of people, e.g., hugging, are represented in a similar manner. To this end, we created 3-D models from motion-captured actions acted out by two people, e.g., hugging. These 3-D models allowed us to present the same action from different viewpoints. Participants' task was to discriminate a target action from distractor actions in a one-interval forced-choice (1IFC) task. We measured recognition performance in terms of reaction times (RT) and sensitivity (d'). For each tested action, we found one view that led to superior recognition performance compared with other views. This finding demonstrates view-dependent effects in visual recognition, in line with the idea of a view-dependent representation of social interactions. Subsequently, we examined the degree to which the velocities of joints predict the recognition performance of social interactions, in order to identify candidate visual cues underlying their recognition. We found that the velocities of the arms, both feet, and the hips correlated with recognition performance.
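The sensitivity measure d' used in this abstract is computed from hit and false-alarm rates. A minimal sketch, with illustrative trial counts rather than the study's data, and with a standard log-linear correction (an assumption on my part; the paper may use a different correction):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 do not map to infinity."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative counts: 45 hits / 5 misses on target trials,
# 10 false alarms / 40 correct rejections on distractor trials.
print(round(d_prime(45, 5, 10, 40), 2))
```

Comparing d' across viewpoints, as the study does, then amounts to computing this value separately for each view of each action.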
