Results 1 - 20 of 27
1.
Sci Robot ; 9(88): eado5755, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536904

ABSTRACT

Humanoid robots can now learn the art of social synchrony using neural networks.


Subject(s)
Robotics , Humans , Neural Networks, Computer , Learning
2.
J Exp Psychol Gen ; 153(3): 742-753, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271012

ABSTRACT

Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown. That is, what makes someone look like they are of higher or lower social class standing (e.g., rich or poor), and how does this relate to harmful or advantageous stereotypes? We addressed these questions using a perception-based data-driven method to model the specific three-dimensional facial features that drive social class judgments and compared them to those of stereotype-related judgments (competence, warmth, dominance, and trustworthiness), based on White Western culture participants and face stimuli. Using a complementary data-reduction analysis and machine learning approach, we show that social class judgments are driven by a unique constellation of facial features that reflect multiple embedded stereotypes: poor-looking (vs. rich-looking) faces are wider, shorter, and flatter with downturned mouths and darker, cooler complexions, mirroring features of incompetent, cold, and untrustworthy-looking (vs. competent, warm, and trustworthy-looking) faces. Our results reveal the specific facial features that underlie the connection between impressions of social class and stereotype-related social traits, with implications for central social perception theories, including understanding the causal links between stereotype knowledge and social class judgments. We anticipate that our results will inform future interventions designed to interrupt biased perception and social inequalities. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Facial Recognition , Stereotyping , Humans , Social Perception , Attitude , Judgment , Social Class , Facial Expression , Trust
3.
Curr Biol ; 34(1): 213-223.e5, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38141619

ABSTRACT

Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions-"happy," "surprise," "fear," "disgust," "anger," and "sad"-and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers are also more similar across emotions than classifiers are, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.


Subject(s)
Emotions , Facial Expression , Humans , Anger , Fear , Happiness
4.
Sci Adv ; 9(6): eabq8421, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36763663

ABSTRACT

Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question-what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.


Subject(s)
Emotions , Facial Expression , Humans , Face , Movement
5.
Sci Rep ; 12(1): 12592, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35869154

ABSTRACT

Real-time visual feedback from the consequences of actions is useful for future safety-critical human-robot interaction applications such as remote physical examination of patients. Given multiple formats to present visual feedback, using the face as feedback for mediating human-robot interaction in remote examination remains understudied. Here we describe a face-mediated human-robot interaction approach for remote palpation. It builds upon a robodoctor-robopatient platform where a user can palpate on the robopatient to remotely control the robodoctor to diagnose a patient. A tactile sensor array mounted on the end effector of the robodoctor measures the haptic response of the patient under diagnosis and transfers it to the robopatient to render pain facial expressions in response to palpation forces. We compare this approach against a direct presentation of tactile sensor data in a visual tactile map. As feedback, the former has the advantage of recruiting advanced human capabilities to decode expressions on a human face, whereas the latter has the advantage of being able to present details such as intensity and spatial information of palpation. In a user study, we compare these two approaches in a teleoperated palpation task to find a hard nodule embedded in a remote abdominal phantom. We show that the face-mediated human-robot interaction approach leads to statistically significant improvements in localizing the hard nodule without compromising the nodule position estimation time. We highlight the inherent power of facial expressions as communicative signals to enhance the utility and effectiveness of human-robot interaction in remote medical examinations.


Subject(s)
Robotics , Feedback , Feedback, Sensory , Humans , Palpation , Touch/physiology
6.
Sci Rep ; 12(1): 4200, 2022 03 10.
Article in English | MEDLINE | ID: mdl-35273296

ABSTRACT

Medical training simulators can provide a safe and controlled environment for medical students to practice their physical examination skills. An important source of information for physicians is the visual feedback of involuntary pain facial expressions in response to physical palpation on an affected area of a patient. However, most existing robotic medical training simulators that can capture physical examination behaviours in real-time cannot display facial expressions and comprise a limited range of patient identities in terms of ethnicity and gender. Together, these limitations restrict the utility of medical training simulators because they do not provide medical students with a representative sample of pain facial expressions and face identities, which could result in biased practices. Further, these limitations restrict the utility of such medical simulators to detect and correct early signs of bias in medical training. Here, for the first time, we present a robotic system that can simulate facial expressions of pain in response to palpations, displayed on a range of patient face identities. We use the unique approach of modelling dynamic pain facial expressions using a data-driven perception-based psychophysical method combined with the visuo-haptic inputs of users performing palpations on a robot medical simulator. Specifically, participants performed palpation actions on the abdomen phantom of a simulated patient, which triggered the real-time display of six pain-related facial Action Units (AUs) on a robotic face (MorphFace), each controlled by two pseudo randomly generated transient parameters: rate of change [Formula: see text] and activation delay [Formula: see text]. Participants then rated the appropriateness of the facial expression displayed in response to their palpations on a 4-point scale from "strongly disagree" to "strongly agree". 
Each participant ([Formula: see text], 4 Asian females, 4 Asian males, 4 White females and 4 White males) performed 200 palpation trials on 4 patient identities (Black female, Black male, White female and White male) simulated using MorphFace. Results showed that the facial expressions rated as most appropriate by all participants comprised a higher rate of change and shorter delay from upper face AUs (around the eyes) to those in the lower face (around the mouth). In contrast, we found that the transient parameter values of the pain facial expressions rated most appropriate, palpation forces, and delays between palpation actions varied across participant-simulated patient pairs according to gender and ethnicity. These findings suggest that gender and ethnicity biases affect palpation strategies and the perception of pain facial expressions displayed on MorphFace. We anticipate that our approach will be used to generate physical examination models with diverse patient demographics to reduce erroneous judgments in medical students, and provide focused training to address these errors.


Subject(s)
Robotic Surgical Procedures , Robotics , Facial Expression , Female , Humans , Male , Pain , Palpation
7.
Curr Biol ; 32(1): 200-209.e6, 2022 01 10.
Article in English | MEDLINE | ID: mdl-34767768

ABSTRACT

Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information-i.e., specific categories and broader dimensions-via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent-i.e., multiplex-categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results-based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms-show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.


Subject(s)
Emotions , Facial Expression , Anger , Arousal , Face , Humans
8.
Curr Biol ; 31(10): 2243-2252.e6, 2021 05 24.
Article in English | MEDLINE | ID: mdl-33798430

ABSTRACT

Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5-7 including representing the diversity of beauty preferences within and across cultures.8-12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.


Subject(s)
Beauty , Culture , Face , Adult , Asian People , Female , Humans , Male , Sex Characteristics , White People
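The reverse-correlation step described in the abstract above (recovering the face features that drive an individual's attractiveness judgments from their responses to randomly varying faces) can be sketched as follows. This is an illustrative simulation only: the feature vector, trial counts, simulated observer, and noise level are all assumptions, not the study's actual stimuli or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each stimulus is a vector of face-feature variations;
# in the study these were 3D shape and complexion parameters.
n_features, n_trials = 50, 5000
template = np.zeros(n_features)
template[10:20] = 1.0  # features the simulated observer finds attractive

# Random face stimuli; the simulated observer rates a face "attractive"
# when it correlates with their internal template (plus decision noise).
stimuli = rng.standard_normal((n_trials, n_features))
responses = stimuli @ template + 0.5 * rng.standard_normal(n_trials) > 0

# Reverse correlation: mean "attractive" stimulus minus mean "unattractive"
# stimulus estimates the observer's internal face model.
face_model = stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)

# The recovered model should correlate strongly with the hidden template.
recovered = np.corrcoef(face_model, template)[0, 1]
```

Averaging such individual face models across participants, as the paper does at the group level, would then separate shared from participant-specific preference features.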
9.
Philos Trans R Soc Lond B Biol Sci ; 375(1799): 20190705, 2020 05 25.
Article in English | MEDLINE | ID: mdl-32248774

ABSTRACT

The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information processing framework, it remains a challenge to precisely specify what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for the studies of perception and categorization-stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two specific lessons from this approach that generally apply to memory studies: the importance of task, to constrain what the brain does, and of stimulus variations, to identify the specific information contents that are memorized, predicted, recalled and replayed. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.


Subject(s)
Brain/physiology , Cognition/physiology , Memory Consolidation/physiology , Humans
10.
Sci Justice ; 59(4): 380-389, 2019 07.
Article in English | MEDLINE | ID: mdl-31256809

ABSTRACT

Cognitive bias is a well-documented automatic process that can have serious negative consequences in a variety of settings. For example, cognitive bias within a forensic science setting can lead to examiners' judgements being swayed by details that they have learned while working on the case, and which go beyond the physical evidence being examined. Although cognitive bias has been studied in many forensic disciplines, such as fingerprints, bullet comparison, and document examination, knowledge of cognitive bias within forensic toxicology is lacking. Here, we address this knowledge gap by assessing the reported use of contextual information by an international group of forensic toxicologists attending the 54th conference of The International Association of Forensic Toxicologists (TIAFT) in Brisbane in 2016. In a first study, participants read a set of simple post-mortem toxicology results (two drug concentrations in blood) and then indicated what information they would normally use when interpreting these results in their day-to-day casework. Using a questionnaire, we then surveyed the familiarity of toxicologists with contextual bias and captured any suggested bias-minimizing procedures for use in forensic toxicology laboratories. Thirty-six participants from 23 different countries and with a range of 1-35 years' forensic toxicology reporting experience volunteered. Analysis of their responses showed that the majority of participants reported using some contextual information in their interpretation of these post-mortem toxicology results (range = 3-15 pieces of information, median ± SD = 11 ± 3), the most common being the deceased's history of prescription or illicit drug use. More than three-quarters of participants reported being familiar with the concept of contextual bias, although few (n = 9) worked in laboratories that had a formal policy covering it. 
Over half of participants knew of at least one bias-minimizing procedure specifically for forensic toxicology casework, but only a quarter (overall) reported using bias-minimizing procedures in their laboratories. Our results provide substantial evidence that although practising forensic toxicologists are familiar with contextual bias, many report that they still engage in behaviours that could lead to cognitive bias (e.g., through the use of contextual information, through lack of explicit policies or bias-minimizing procedures). We anticipate that our work will form the basis of further research involving a larger sample of participants and examining other potentially relevant factors such as sex/gender, country and accreditation of laboratories.


Subject(s)
Bias , Cognition , Decision Making , Forensic Toxicology , Judgment , Congresses as Topic , Humans , Internationality , Laboratories , Surveys and Questionnaires
11.
Proc Natl Acad Sci U S A ; 115(43): E10013-E10021, 2018 10 23.
Article in English | MEDLINE | ID: mdl-30297420

ABSTRACT

Real-world studies show that the facial expressions produced during pain and orgasm-two different and intense affective experiences-are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.


Subject(s)
Emotions/physiology , Face/physiology , Pain/physiopathology , Pain/psychology , Pleasure/physiology , Adult , Cross-Cultural Comparison , Culture , Facial Expression , Female , Humans , Interpersonal Relations , Male , Recognition, Psychology/physiology , Young Adult
12.
Trends Cogn Sci ; 22(1): 1-5, 2018 01.
Article in English | MEDLINE | ID: mdl-29126772

ABSTRACT

Psychology aims to understand real human behavior. However, cultural biases in the scientific process can constrain knowledge. We describe here how data-driven methods can relax these constraints to reveal new insights that theories can overlook. To advance knowledge we advocate a symbiotic approach that better combines data-driven methods with theory.


Subject(s)
Psychology/methods , Research Design , Culture , Humans , Knowledge , Models, Psychological
13.
Curr Opin Psychol ; 17: 61-66, 2017 10.
Article in English | MEDLINE | ID: mdl-28950974

ABSTRACT

Understanding the cultural commonalities and specificities of facial expressions of emotion remains a central goal of Psychology. However, recent progress has been stayed by dichotomous debates (e.g. nature versus nurture) that have created silos of empirical and theoretical knowledge. Now, an emerging interdisciplinary scientific culture is broadening the focus of research to provide a more unified and refined account of facial expressions within and across cultures. Specifically, data-driven approaches allow a wider, more objective exploration of face movement patterns that provide detailed information ontologies of their cultural commonalities and specificities. Similarly, a wider exploration of the social messages perceived from face movements diversifies knowledge of their functional roles (e.g. the 'fear' face used as a threat display). Together, these new approaches promise to diversify, deepen, and refine knowledge of facial expressions, and deliver the next major milestones for a functional theory of human social communication that is transferable to social robotics.


Subject(s)
Cross-Cultural Comparison , Emotions , Facial Expression , Humans
14.
Psychol Sci ; 28(9): 1259-1270, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28741981

ABSTRACT

A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.


Subject(s)
Interpersonal Relations , Object Attachment , Reward , Smiling/psychology , Social Dominance , Social Perception , Adolescent , Adult , Female , Humans , Male , Young Adult
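The Bayesian-classifier analysis mentioned in the abstract above (testing whether the three smile types are distinct in facial-feature space) can be sketched with a minimal Gaussian naive-Bayes classifier. Everything here is assumed for illustration: the four Action-Unit features, the class means, and the noise level are toy values loosely inspired by the abstract's descriptions, not the paper's measured data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AU features: [lip-corner pull, eyebrow raise, lip press,
# nose wrinkle]. Class means are assumed values echoing the abstract:
# reward = eyebrow raising, affiliation = lip pressing, dominance = nose
# wrinkling / upper-lip raising.
means = {
    "reward":      np.array([1.0, 1.0, 0.0, 0.0]),
    "affiliation": np.array([1.0, 0.0, 1.0, 0.0]),
    "dominance":   np.array([1.0, 0.0, 0.0, 1.0]),
}
labels = list(means)

def sample(label, n):
    # Draw noisy exemplars of one smile type.
    return means[label] + 0.3 * rng.standard_normal((n, 4))

train = {k: sample(k, 100) for k in labels}

# Gaussian naive Bayes: per-class feature means and variances.
mu = {k: v.mean(axis=0) for k, v in train.items()}
var = {k: v.var(axis=0) + 1e-6 for k, v in train.items()}

def log_likelihood(x, k):
    return -0.5 * np.sum(np.log(2 * np.pi * var[k]) + (x - mu[k]) ** 2 / var[k])

def classify(x):
    return max(labels, key=lambda k: log_likelihood(x, k))

# Held-out test: if the three smile types are highly distinct, as the
# paper reports, classification accuracy should be near ceiling.
test_x = np.vstack([sample(k, 50) for k in labels])
test_y = [k for k in labels for _ in range(50)]
accuracy = np.mean([classify(x) == y for x, y in zip(test_x, test_y)])
```

High held-out accuracy in such a classifier is one way to operationalize the claim that the smile types occupy separable regions of AU space.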
15.
Annu Rev Psychol ; 68: 269-297, 2017 Jan 03.
Article in English | MEDLINE | ID: mdl-28051933

ABSTRACT

As a highly social species, humans are equipped with a powerful tool for social communication-the face. Although seemingly simple, the human face can elicit multiple social perceptions due to the rich variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional methods. In the past decade, the emerging field of social psychophysics has developed new methods to address this challenge, with the potential to transfer psychophysical laws of social perception to the digital economy via avatars and social robots. At this exciting juncture, it is timely to review these new methodological developments. In this article, we introduce and review the foundational methodological developments of social psychophysics, present work done in the past decade that has advanced understanding of the face as a tool for social communication, and discuss the major challenges that lie ahead.


Subject(s)
Facial Expression , Nonverbal Communication/psychology , Social Perception , Humans , Psychophysics
16.
J Vis ; 16(8): 14, 2016 06 01.
Article in English | MEDLINE | ID: mdl-27305521

ABSTRACT

Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism-termed space-by-time manifold decomposition-that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions.


Subject(s)
Emotions/physiology , Facial Expression , Movement/physiology , Space Perception/physiology , Time Perception/physiology , Environment , Fear/physiology , Female , Happiness , Humans , Male , Young Adult
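The core idea of the space-by-time decomposition described in the abstract above (facial-movement data as a low-dimensional manifold separable into spatial and temporal components) can be sketched with a toy low-rank factorization. The data here are synthetic: two assumed spatial modules of Action Units and two assumed temporal profiles with early and late peaks; the paper's actual decomposition and dimensionality differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy data: an (AU x time) activity matrix built from 2 spatial
# modules (which AUs move together) and 2 temporal profiles (when they
# move), i.e. separable in space and time by construction.
n_aus, n_time = 12, 40
spatial = rng.random((n_aus, 2))
t = np.linspace(0.0, 1.0, n_time)
temporal = np.vstack([
    np.exp(-((t - 0.3) / 0.1) ** 2),  # early-peaking component
    np.exp(-((t - 0.7) / 0.1) ** 2),  # late-peaking component
])
data = spatial @ temporal + 0.01 * rng.standard_normal((n_aus, n_time))

# One way to recover a separable space-by-time representation is a
# truncated SVD: U columns are spatial components, Vt rows temporal ones.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2]

# Two components should capture nearly all the variance of the data.
explained = 1 - np.linalg.norm(data - rank2) ** 2 / np.linalg.norm(data) ** 2
```

In the study, the components analogous to `U` and `Vt` are what get linearly combined and validated against observers' emotion categorizations.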
17.
J Exp Psychol Gen ; 145(6): 708-30, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27077757

ABSTRACT

As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record)


Subject(s)
Culture , Emotions/physiology , Facial Expression , Language , Adult , Cross-Cultural Comparison , Female , Humans , Male , Young Adult
18.
Curr Biol ; 25(14): R621-34, 2015 Jul 20.
Article in English | MEDLINE | ID: mdl-26196493

ABSTRACT

As a highly social species, humans frequently exchange social information to support almost all facets of life. One of the richest and most powerful tools in social communication is the face, from which observers can quickly and easily make a number of inferences - about identity, gender, sex, age, race, ethnicity, sexual orientation, physical health, attractiveness, emotional state, personality traits, pain or physical pleasure, deception, and even social status. With the advent of the digital economy, increasing globalization and cultural integration, understanding precisely which face information supports social communication and which produces misunderstanding is central to the evolving needs of modern society (for example, in the design of socially interactive digital avatars and companion robots). Doing so is challenging, however, because the face can be thought of as comprising a high-dimensional, dynamic information space, and this impacts cognitive science and neuroimaging, and their broader applications in the digital economy. New opportunities to address this challenge are arising from the development of new methods and technologies, coupled with the emergence of a modern scientific culture that embraces cross-disciplinary approaches. Here, we briefly review one such approach that combines state-of-the-art computer graphics, psychophysics and vision science, cultural psychology and social cognition, and highlight the main knowledge advances it has generated. In the light of current developments, we provide a vision of the future directions in the field of human facial communication within and across cultures.


Subject(s)
Communication , Face/anatomy & histology , Face/physiology , Female , Humans , Male
19.
Cortex ; 65: 50-64, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25638352

RESUMEN

The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus, pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or on dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment in categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation.
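The reverse-correlation logic behind this kind of analysis can be illustrated with a minimal simulation. Everything below is an assumption made for illustration (the region labels, trial count, and simulated observer are invented); the study's actual technique operates on dynamic face movements, not static region masks.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_regions = 2000, 5   # e.g. left eye, right eye, nose, mouth, brows (assumed)

# On each trial, a random subset of facial regions is revealed (1 = visible).
masks = rng.integers(0, 2, size=(n_trials, n_regions))

# Simulated observer whose accuracy depends mostly on seeing the mouth
# (index 3), loosely echoing PS's reported reliance on the mouth region.
p_correct = 0.2 + 0.6 * masks[:, 3]
correct = rng.random(n_trials) < p_correct

# Classification "image": mean mask on correct minus incorrect trials.
# Regions the observer actually uses get large positive weights.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
diagnostic_region = int(np.argmax(ci))
print(diagnostic_region)
```

Correlating which information was available with the observer's responses, trial by trial, is what lets the method map information use without assuming in advance which features matter.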


Subject(s)
Emotions/physiology , Face , Facial Expression , Prosopagnosia/psychology , Visual Perception/physiology , Brain Mapping , Cerebral Cortex/physiopathology , Discrimination, Psychological , Female , Humans , Male , Middle Aged , Models, Psychological , Prosopagnosia/diagnosis
20.
Psychol Sci ; 25(5): 1079-86, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24659191

RESUMEN

Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.
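The camouflage idea admits a toy additive sketch: a perceived trait reflects the face's default morphology plus a modulation from transient movements, so a movement whose effect opposes the morphology's default can mask it. All quantities below are invented for illustration; the study's generative face models are far richer than this.

```python
def perceived_trait(morphology_score: float, movement_effect: float) -> float:
    """Toy additive model of a perceived social-trait rating (arbitrary units).

    morphology_score: default trait level conveyed by the static face.
    movement_effect: modulation from a transient facial movement.
    """
    return morphology_score + movement_effect

dominant_face = 0.8          # morphology judged dominant by default (invented)
camouflage_movement = -0.6   # movement with an opposing dominance effect (invented)

# The movement pulls the perceived trait back toward neutral (0.0).
print(round(perceived_trait(dominant_face, camouflage_movement), 2))
```

The point of the sketch is only the sign structure: camouflage works when the dynamic signal and the static morphology push the same trait judgment in opposite directions.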


Subject(s)
Emotions/physiology , Face/anatomy & histology , Facial Expression , Sociological Factors , Adolescent , Adult , Female , Humans , Male , Perception , Social Perception , Young Adult