Results 1 - 20 of 34
1.
Sci Robot ; 9(88): eado5755, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536904

ABSTRACT

Humanoid robots can now learn the art of social synchrony using neural networks.


Subject(s)
Robotics , Humans , Neural Networks, Computer , Learning
2.
J Exp Psychol Gen ; 153(3): 742-753, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271012

ABSTRACT

Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown. That is, what makes someone look like they are of higher or lower social class standing (e.g., rich or poor), and how does this relate to harmful or advantageous stereotypes? We addressed these questions using a perception-based, data-driven method to model the specific three-dimensional facial features that drive social class judgments and compared them to those of stereotype-related judgments (competence, warmth, dominance, and trustworthiness), based on White Western participants and face stimuli. Using a complementary data-reduction analysis and machine learning approach, we show that social class judgments are driven by a unique constellation of facial features that reflect multiple embedded stereotypes: poor-looking (vs. rich-looking) faces are wider, shorter, and flatter with downturned mouths and darker, cooler complexions, mirroring features of incompetent, cold, and untrustworthy-looking (vs. competent, warm, and trustworthy-looking) faces. Our results reveal the specific facial features that underlie the connection between impressions of social class and stereotype-related social traits, with implications for central social perception theories, including understanding the causal links between stereotype knowledge and social class judgments. We anticipate that our results will inform future interventions designed to interrupt biased perception and social inequalities. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
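
To make the data-driven logic concrete, here is a minimal reverse-correlation sketch in Python: random variations of face features are shown, binary judgments are collected, and the feature pattern driving the judgments is estimated by contrasting trial averages. The dimensions, the simulated observer, and all variable names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 5000, 50   # e.g. 50 3D shape/complexion components

# Random face variations shown to a (simulated) observer.
features = rng.standard_normal((n_trials, n_features))

# Simulated observer: "rich" judgments driven by a hidden template plus noise.
template = rng.standard_normal(n_features)
rich = features @ template + rng.standard_normal(n_trials) > 0

# Classification image: mean features on "rich" trials minus "poor" trials.
estimate = features[rich].mean(axis=0) - features[~rich].mean(axis=0)

r = np.corrcoef(estimate, template)[0, 1]
print(f"correlation with the generating template: {r:.2f}")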


Subject(s)
Facial Recognition , Stereotyping , Humans , Social Perception , Attitude , Judgment , Social Class , Facial Expression , Trust
3.
Curr Biol ; 34(1): 213-223.e5, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38141619

ABSTRACT

Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low-threat emotions, such as "happy" and "sad." Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.
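
The reported distinction between classifiers and intensifiers can be illustrated with a toy parameterization of an AU time course by amplitude (intensity) and peak latency (timing). The Gaussian profile and all parameter values below are assumptions for illustration, not the study's model.

```python
import numpy as np

def au_time_course(t, amplitude, peak_time, width=0.15):
    """Smooth activation of one AU over normalised time t in [0, 1]."""
    return amplitude * np.exp(-0.5 * ((t - peak_time) / width) ** 2)

t = np.linspace(0.0, 1.0, 100)
classifier = au_time_course(t, amplitude=1.0, peak_time=0.6)   # category cue
intensifier = au_time_course(t, amplitude=0.8, peak_time=0.3)  # peaks earlier

# Signalling higher intensity: raise the intensifier's amplitude, not timing.
intense = au_time_course(t, amplitude=1.5, peak_time=0.3)
```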


Subject(s)
Emotions , Facial Expression , Humans , Anger , Fear , Happiness
4.
Curr Biol ; 33(11): R425-R426, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37279658

ABSTRACT

An interview with Rachael Jack, who studies how the human face transmits social information.

5.
J Travel Med ; 30(5)2023 09 05.
Article in English | MEDLINE | ID: mdl-37133444

ABSTRACT

BACKGROUND: Exposure to pathogens in public transport systems is a common means of spreading infection, mainly by inhaling aerosol or droplets from infected individuals. Such particles also contaminate surfaces, creating a potential surface-transmission pathway. METHODS: A fast acoustic biosensor with an antifouling nano-coating was introduced to detect severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on exposed surfaces in the Prague Public Transport System. Samples were measured directly without pre-treatment. Results with the sensor agreed closely with parallel quantitative reverse-transcription polymerase chain reaction (qRT-PCR) measurements on 482 surface samples taken from actively used trams, buses, metro trains and platforms between 7 and 9 April 2021, in the middle of the lineage Alpha SARS-CoV-2 epidemic wave, when 1 in 240 people in Prague were COVID-19 positive. RESULTS: Only ten of the 482 surface swabs produced positive results, and none of them contained virus particles capable of replication, indicating that positive samples contained inactive virus particles and/or fragments. Measurements of the rate of decay of SARS-CoV-2 on frequently touched surface materials showed that the virus did not remain viable longer than 1-4 h. Inactivation was fastest on the rubber handrails of metro escalators and slowest on hard-plastic seats, window glass and stainless-steel grab rails. As a result of this study, the Prague Public Transport System revised its cleaning protocols and vehicle parking times during the pandemic. CONCLUSIONS: Our findings suggest that surface transmission played no or a negligible role in spreading SARS-CoV-2 in Prague. The results also demonstrate the potential of the new biosensor to serve as a complementary screening tool in epidemic monitoring and prognosis.
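
The surface-decay measurements amount to fitting an exponential viability curve. As a hedged illustration (not the study's actual analysis), the sketch below fits N(t) = N0 * exp(-k t) by log-linear least squares to invented titre data and reports the implied half-life.

```python
import numpy as np

# Hypothetical viable-virus titres sampled from a touched surface over time.
hours = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
titre = np.array([1000.0, 520.0, 260.0, 70.0, 18.0, 5.0])

# Log-linear least squares: log N = log N0 - k t.
slope, intercept = np.polyfit(hours, np.log(titre), 1)
k = -slope                      # decay constant per hour
half_life = np.log(2) / k
print(f"decay constant {k:.2f}/h, half-life {half_life:.2f} h")
```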


Subject(s)
COVID-19 , SARS-CoV-2 , Humans , Respiratory Aerosols and Droplets , Transportation , Pandemics/prevention & control
6.
Sci Adv ; 9(6): eabq8421, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36763663

ABSTRACT

Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question: what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
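
The noise-ceiling idea can be sketched as follows: a model predicting observers' emotion labels cannot reasonably exceed the agreement of each observer with the remaining observers. The simulated labels, agreement rate, and model accuracy below are invented placeholders, not the study's data or procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_stimuli, n_observers = 200, 20

# Hypothetical labels: 6 emotion categories coded 0-5, one row per observer.
consensus = rng.integers(0, 6, n_stimuli)
noise = rng.integers(0, 6, (n_observers, n_stimuli))
labels = np.where(rng.random((n_observers, n_stimuli)) < 0.8, consensus, noise)

# Noise ceiling: mean leave-one-out agreement of each observer with the
# majority vote of the remaining observers.
ceilings = []
for i in range(n_observers):
    others = np.delete(labels, i, axis=0)
    majority = stats.mode(others, axis=0, keepdims=False).mode
    ceilings.append((labels[i] == majority).mean())
noise_ceiling = float(np.mean(ceilings))

model_accuracy = 0.62   # accuracy of some hypothetical AU-based model
print(f"model {model_accuracy:.2f} vs noise ceiling {noise_ceiling:.2f}")
```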


Subject(s)
Emotions , Facial Expression , Humans , Face , Movement
7.
Sci Rep ; 12(1): 12592, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35869154

ABSTRACT

Real-time visual feedback from the consequences of actions is useful for future safety-critical human-robot interaction applications such as remote physical examination of patients. Among the multiple formats available for presenting visual feedback, using the face as feedback for mediating human-robot interaction in remote examination remains understudied. Here we describe a face-mediated human-robot interaction approach for remote palpation. It builds upon a robodoctor-robopatient platform where a user palpates the robopatient to remotely control the robodoctor to diagnose a patient. A tactile sensor array mounted on the end effector of the robodoctor measures the haptic response of the patient under diagnosis and transfers it to the robopatient to render pain facial expressions in response to palpation forces. We compare this approach against a direct presentation of tactile sensor data in a visual tactile map. As feedback, the former has the advantage of recruiting advanced human capabilities to decode expressions on a human face, whereas the latter can present details such as the intensity and spatial distribution of palpation. In a user study, we compare these two approaches in a teleoperated palpation task to find a hard nodule embedded in a remote abdominal phantom. We show that the face-mediated human-robot interaction approach leads to statistically significant improvements in localizing the hard nodule without compromising nodule position estimation time. We highlight the inherent power of facial expressions as communicative signals to enhance the utility and effectiveness of human-robot interaction in remote medical examinations.
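
As an illustration of the architecture, the sketch below maps a tactile-array reading to a single pain-expression intensity that a robopatient face could render. The summary statistic, threshold, and gain are assumptions for illustration, not the platform's actual mapping.

```python
import numpy as np

def pain_intensity(tactile_map, threshold=0.5, gain=2.0):
    """Map an MxN tactile pressure image (arbitrary units) to [0, 1] pain."""
    peak = float(tactile_map.max())
    return float(np.clip(gain * (peak - threshold), 0.0, 1.0))

palpation = np.zeros((8, 8))
palpation[3:5, 3:5] = 0.9          # simulated press over a stiff region
print(f"rendered pain expression intensity: {pain_intensity(palpation):.2f}")
```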


Subject(s)
Robotics , Feedback , Feedback, Sensory , Humans , Palpation , Touch/physiology
8.
Sci Rep ; 12(1): 4200, 2022 03 10.
Article in English | MEDLINE | ID: mdl-35273296

ABSTRACT

Medical training simulators can provide a safe and controlled environment for medical students to practice their physical examination skills. An important source of information for physicians is the visual feedback of involuntary pain facial expressions in response to physical palpation on an affected area of a patient. However, most existing robotic medical training simulators that can capture physical examination behaviours in real time cannot display facial expressions, and they comprise a limited range of patient identities in terms of ethnicity and gender. Together, these limitations restrict the utility of medical training simulators because they do not provide medical students with a representative sample of pain facial expressions and face identities, which could result in biased practices. Further, these limitations restrict the utility of such medical simulators to detect and correct early signs of bias in medical training. Here, for the first time, we present a robotic system that can simulate facial expressions of pain in response to palpations, displayed on a range of patient face identities. We take the unique approach of modelling dynamic pain facial expressions using a data-driven, perception-based psychophysical method combined with the visuo-haptic inputs of users performing palpations on a robotic medical simulator. Specifically, participants performed palpation actions on the abdomen phantom of a simulated patient, which triggered the real-time display of six pain-related facial Action Units (AUs) on a robotic face (MorphFace), each controlled by two pseudo-randomly generated transient parameters: rate of change and activation delay. Participants then rated the appropriateness of the facial expression displayed in response to their palpations on a 4-point scale from "strongly disagree" to "strongly agree". Each participant (n = 16: 4 Asian females, 4 Asian males, 4 White females and 4 White males) performed 200 palpation trials on 4 patient identities (Black female, Black male, White female and White male) simulated using MorphFace. Results showed that the facial expressions rated most appropriate by all participants comprised a higher rate of change and a shorter delay from upper-face AUs (around the eyes) to lower-face AUs (around the mouth). In contrast, the transient parameter values of the pain facial expressions rated most appropriate, the palpation forces, and the delays between palpation actions varied across participant-simulated patient pairs according to gender and ethnicity. These findings suggest that gender and ethnicity biases affect palpation strategies and the perception of pain facial expressions displayed on MorphFace. We anticipate that our approach will be used to generate physical examination models with diverse patient demographics to reduce erroneous judgments in medical students, and to provide focused training to address these errors.
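
A minimal sketch of a transient AU activation controlled by the two parameters named in the abstract, activation delay and rate of change. The sigmoidal form and all values are illustrative assumptions, not the study's generative model.

```python
import numpy as np

def au_activation(t, delay, rate):
    """Sigmoidal onset: the AU starts rising after `delay` s at speed `rate`."""
    return 1.0 / (1.0 + np.exp(-rate * (t - delay)))

t = np.linspace(0.0, 2.0, 200)                        # seconds after palpation
upper_face = au_activation(t, delay=0.2, rate=12.0)   # e.g. brow lowering
lower_face = au_activation(t, delay=0.5, rate=12.0)   # e.g. mouth opening
# "Most appropriate" pattern per the abstract: a fast rate of change and a
# shorter delay for upper-face AUs than for lower-face AUs.
```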


Subject(s)
Robotic Surgical Procedures , Robotics , Facial Expression , Female , Humans , Male , Pain , Palpation
9.
Curr Biol ; 32(1): 200-209.e6, 2022 01 10.
Article in English | MEDLINE | ID: mdl-34767768

ABSTRACT

Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information (i.e., specific categories and broader dimensions) via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent (i.e., multiplex) categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results, based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms, show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
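
The comparison step, checking whether category models and dimension models share facial signal components, can be illustrated with simple set overlap. The AU numbers below are invented placeholders, not the derived models.

```python
# Hypothetical AU sets for one category model and one dimension model;
# shared AUs suggest latent signals that multiplex both kinds of information.
anger_aus = {4, 5, 7, 23}          # invented "anger" category model
negative_valence_aus = {4, 7, 15}  # invented negative-valence dimension model

shared = anger_aus & negative_valence_aus
jaccard = len(shared) / len(anger_aus | negative_valence_aus)
print(f"shared AUs {sorted(shared)}, overlap {jaccard:.2f}")
```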


Subject(s)
Emotions , Facial Expression , Anger , Arousal , Face , Humans
10.
Curr Biol ; 31(10): 2243-2252.e6, 2021 05 24.
Article in English | MEDLINE | ID: mdl-33798430

ABSTRACT

Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5-7 including representing the diversity of beauty preferences within and across cultures.8-12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.
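
Reconstructing a facial-attractiveness representation space from many individual face models can be sketched with a plain PCA (via SVD) over stacked preference templates. The group structure, dimensions, and noise levels below are invented for illustration, not the study's data or exact method.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_group, n_features = 40, 60   # 40 WE + 40 EA models, 60 face dimensions

shared = rng.standard_normal(n_features)           # culture-shared component
we_accent = rng.standard_normal(n_features) * 0.5  # Western-specific accent
ea_accent = rng.standard_normal(n_features) * 0.5  # East Asian-specific accent

we = shared + we_accent + rng.standard_normal((n_per_group, n_features)) * 0.3
ea = shared + ea_accent + rng.standard_normal((n_per_group, n_features)) * 0.3
models = np.vstack([we, ea])       # one attractiveness model per participant

# PCA via SVD on centred models: rows of vt are axes of the preference space.
centred = models - models.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"first two axes explain {explained[:2].sum():.0%} of preference variance")
```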


Subject(s)
Beauty , Culture , Face , Adult , Asian People , Female , Humans , Male , Sex Characteristics , White People
11.
Emotion ; 21(6): 1324-1339, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32628034

ABSTRACT

Action video game players (AVGPs) display superior performance in various aspects of cognition, especially in perception and top-down attention. The existing literature has examined this performance almost exclusively with stimuli and tasks devoid of any emotional content. Thus, whether the superior performance documented in the cognitive domain extends to the emotional domain remains unknown. We present 2 cross-sectional studies contrasting AVGPs and nonvideo game players (NVGPs) in their ability to perceive facial emotions. Under an enhanced-perception account, AVGPs should outperform NVGPs when processing facial emotion. Yet, alternative accounts exist. For instance, under some social accounts, exposure to action video games, which often contain violence, may lower sensitivity to empathy-related expressions such as sadness, happiness, and pain while increasing sensitivity to aggression signals. Finally, under the view that AVGPs excel at learning new tasks (in contrast to the view that they are immediately better at all new tasks), the use of stimuli at which participants are already expert predicts little to no group difference. Study 1 uses drift-diffusion modeling and establishes that AVGPs are comparable to NVGPs at every decision-making stage mediating the discrimination of facial emotions, despite a group difference in aggressive behavior. Study 2 uses the reverse-inference technique to assess the mental representation of facial emotion expressions, and again documents no group differences. These results indicate that the perceptual benefits associated with action video game play do not extend to overlearned stimuli such as facial emotion, and instead indicate equivalent facial emotion skills in AVGPs and NVGPs. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
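
Drift-diffusion modeling decomposes choice and reaction time into an evidence-accumulation stage and a non-decision stage. Below is a minimal simulation sketch of such a model; the parameter values are invented and this is not the study's fitting procedure.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n=2000, dt=0.001, noise=1.0, seed=0):
    """Simulate n trials of noisy evidence accumulation to +/- boundary."""
    rng = np.random.default_rng(seed)
    rts = np.empty(n)
    choices = np.empty(n, dtype=int)
    for i in range(n):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + ndt          # decision time plus non-decision time
        choices[i] = int(x > 0)   # 1 = correct boundary for positive drift
    return rts, choices

rts, choices = simulate_ddm(drift=1.5, boundary=1.0, ndt=0.3)
print(f"mean RT {rts.mean():.3f} s, accuracy {choices.mean():.2f}")
```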


Subject(s)
Video Games , Cross-Sectional Studies , Emotions , Facial Expression , Humans , Perception
12.
Philos Trans R Soc Lond B Biol Sci ; 375(1799): 20190705, 2020 05 25.
Article in English | MEDLINE | ID: mdl-32248774

ABSTRACT

The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information-processing framework, it remains a challenge to specify precisely what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for studies of perception and categorization: stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two lessons from this approach that apply generally to memory studies: the importance of the task, which constrains what the brain does, and of stimulus variations, which identify the specific information contents that are memorized, predicted, recalled and replayed. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.


Subject(s)
Brain/physiology , Cognition/physiology , Memory Consolidation/physiology , Humans
13.
Sci Justice ; 59(4): 380-389, 2019 07.
Article in English | MEDLINE | ID: mdl-31256809

ABSTRACT

Cognitive bias is a well-documented automatic process that can have serious negative consequences in a variety of settings. For example, cognitive bias within a forensic science setting can lead to examiners' judgements being swayed by details that they have learned while working on the case, and which go beyond the physical evidence being examined. Although cognitive bias has been studied in many forensic disciplines, such as fingerprints, bullet comparison, and document examination, knowledge of cognitive bias within forensic toxicology is lacking. Here, we address this knowledge gap by assessing the reported use of contextual information by an international group of forensic toxicologists attending the 54th conference of The International Association of Forensic Toxicologists (TIAFT) in Brisbane in 2016. In a first study, participants read a set of simple post-mortem toxicology results (two drug concentrations in blood) and then indicated what information they would normally use when interpreting these results in their day-to-day casework. Using a questionnaire, we then surveyed the familiarity of toxicologists with contextual bias and captured any suggested bias-minimizing procedures for use in forensic toxicology laboratories. Thirty-six participants from 23 different countries and with a range of 1-35 years' forensic toxicology reporting experience volunteered. Analysis of their responses showed that the majority of participants reported using some contextual information in their interpretation of these post-mortem toxicology results (range = 3-15 pieces of information, median ± SD = 11 ± 3), the most common being the deceased's history of prescription or illicit drug use. More than three-quarters of participants reported being familiar with the concept of contextual bias, although few (n = 9) worked in laboratories that had a formal policy covering it. Over half of participants knew of at least one bias-minimizing procedure specifically for forensic toxicology casework, but only a quarter (overall) reported using bias-minimizing procedures in their laboratories. Our results provide substantial evidence that although practising forensic toxicologists are familiar with contextual bias, many report that they still engage in behaviours that could lead to cognitive bias (e.g., through the use of contextual information, through lack of explicit policies or bias-minimizing procedures). We anticipate that our work will form the basis of further research involving a larger sample of participants and examining other potentially relevant factors such as sex/gender, country and accreditation of laboratories.


Subject(s)
Bias , Cognition , Decision Making , Forensic Toxicology , Judgment , Congresses as Topic , Humans , Internationality , Laboratories , Surveys and Questionnaires
14.
Proc Natl Acad Sci U S A ; 115(43): E10013-E10021, 2018 10 23.
Article in English | MEDLINE | ID: mdl-30297420

ABSTRACT

Real-world studies show that the facial expressions produced during pain and orgasm, two different and intense affective experiences, are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which challenges the claim that these expressions are non-diagnostic and instead suggests that they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.
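
One way to cash out the information-theoretic angle is to ask how much information (in bits) a face movement carries about the category "pain" versus "orgasm". The joint probabilities below are invented for illustration, not the study's measurements.

```python
import numpy as np

def mutual_information(p_joint):
    """MI in bits from a joint probability table (categories x AU states)."""
    p_cat = p_joint.sum(axis=1, keepdims=True)
    p_au = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0
    return float(np.sum(p_joint[nz] *
                        np.log2(p_joint[nz] / (p_cat @ p_au)[nz])))

# Rows: pain, orgasm (equiprobable). Columns: AU absent, AU present.
# Hypothetical: brow lowering appears in 80% of pain and 20% of orgasm models.
p = 0.5 * np.array([[0.2, 0.8],
                    [0.8, 0.2]])
print(f"MI(category; AU) = {mutual_information(p):.2f} bits")
```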


Subject(s)
Emotions/physiology , Face/physiology , Pain/physiopathology , Pain/psychology , Pleasure/physiology , Adult , Cross-Cultural Comparison , Culture , Facial Expression , Female , Humans , Interpersonal Relations , Male , Recognition, Psychology/physiology , Young Adult
15.
Trends Cogn Sci ; 22(1): 1-5, 2018 01.
Article in English | MEDLINE | ID: mdl-29126772

ABSTRACT

Psychology aims to understand real human behavior. However, cultural biases in the scientific process can constrain knowledge. We describe here how data-driven methods can relax these constraints to reveal new insights that theories can overlook. To advance knowledge we advocate a symbiotic approach that better combines data-driven methods with theory.


Subject(s)
Psychology/methods , Research Design , Culture , Humans , Knowledge , Models, Psychological
16.
Curr Opin Psychol ; 17: 61-66, 2017 10.
Article in English | MEDLINE | ID: mdl-28950974

ABSTRACT

Understanding the cultural commonalities and specificities of facial expressions of emotion remains a central goal of Psychology. However, recent progress has been stalled by dichotomous debates (e.g. nature versus nurture) that have created silos of empirical and theoretical knowledge. Now, an emerging interdisciplinary scientific culture is broadening the focus of research to provide a more unified and refined account of facial expressions within and across cultures. Specifically, data-driven approaches allow a wider, more objective exploration of face movement patterns that provide detailed information ontologies of their cultural commonalities and specificities. Similarly, a wider exploration of the social messages perceived from face movements diversifies knowledge of their functional roles (e.g. the 'fear' face used as a threat display). Together, these new approaches promise to diversify, deepen, and refine knowledge of facial expressions, and deliver the next major milestones for a functional theory of human social communication that is transferable to social robotics.


Subject(s)
Cross-Cultural Comparison , Emotions , Facial Expression , Humans
17.
Psychol Sci ; 28(9): 1259-1270, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28741981

ABSTRACT

A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.
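
A Bayesian-classifier analysis of this kind can be sketched with a Gaussian naive Bayes over AU-based features. The feature structure loosely follows the abstract (eyebrow raising for reward, lip pressing for affiliation, nose wrinkling and asymmetry for dominance); the simulated data and the Gaussian model are assumptions, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
# Features: [eyebrow raise, lip press, nose wrinkle + upper-lip raise, asymmetry]
means = {"reward":      np.array([1.0, 0.1, 0.1, 0.1]),
         "affiliation": np.array([0.2, 1.0, 0.1, 0.2]),
         "dominance":   np.array([0.2, 0.1, 1.0, 1.0])}
labels = list(means)

def sample(kind, n=200, sd=0.3):
    """Draw n noisy AU-feature vectors for one smile type."""
    return means[kind] + sd * rng.standard_normal((n, 4))

train = {k: sample(k) for k in labels}
mu = {k: v.mean(axis=0) for k, v in train.items()}
var = {k: v.var(axis=0) for k, v in train.items()}

def classify(x):
    # Equal priors: pick the class with the highest Gaussian log-likelihood.
    ll = {k: -0.5 * np.sum(np.log(var[k]) + (x - mu[k]) ** 2 / var[k])
          for k in labels}
    return max(ll, key=ll.get)

test = sample("dominance", n=100)
acc = np.mean([classify(x) == "dominance" for x in test])
print(f"held-out dominance smiles classified correctly: {acc:.0%}")
```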


Subject(s)
Interpersonal Relations , Object Attachment , Reward , Smiling/psychology , Social Dominance , Social Perception , Adolescent , Adult , Female , Humans , Male , Young Adult
18.
Annu Rev Psychol ; 68: 269-297, 2017 Jan 03.
Article in English | MEDLINE | ID: mdl-28051933

ABSTRACT

As a highly social species, humans are equipped with a powerful tool for social communication-the face. Although seemingly simple, the human face can elicit multiple social perceptions due to the rich variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional methods. In the past decade, the emerging field of social psychophysics has developed new methods to address this challenge, with the potential to transfer psychophysical laws of social perception to the digital economy via avatars and social robots. At this exciting juncture, it is timely to review these new methodological developments. In this article, we introduce and review the foundational methodological developments of social psychophysics, present work done in the past decade that has advanced understanding of the face as a tool for social communication, and discuss the major challenges that lie ahead.


Subject(s)
Facial Expression , Nonverbal Communication/psychology , Social Perception , Humans , Psychophysics
19.
J Vis ; 16(8): 14, 2016 06 01.
Article in English | MEDLINE | ID: mdl-27305521

ABSTRACT

Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism, termed space-by-time manifold decomposition, that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions.
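
The core of a space-by-time decomposition is approximating AU activity A(t, u) as a small sum of temporal profiles multiplied by spatial (AU) weight patterns. The sketch below recovers such a separable structure with a plain truncated SVD; the data, rank, and dimensions are invented, and the paper's formalism may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
n_time, n_aus, rank = 50, 40, 3

# Ground truth: 3 temporal components times 3 spatial (AU) components.
temporal = np.abs(rng.standard_normal((n_time, rank)))
spatial = np.abs(rng.standard_normal((rank, n_aus)))
activity = temporal @ spatial + 0.05 * rng.standard_normal((n_time, n_aus))

u, s, vt = np.linalg.svd(activity, full_matrices=False)
low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]  # space-by-time approximation

err = np.linalg.norm(activity - low_rank) / np.linalg.norm(activity)
print(f"rank-{rank} space-by-time model leaves {err:.1%} residual")
```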


Subject(s)
Emotions/physiology , Facial Expression , Movement/physiology , Space Perception/physiology , Time Perception/physiology , Environment , Fear/physiology , Female , Happiness , Humans , Male , Young Adult
20.
J Exp Psychol Gen ; 145(6): 708-30, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27077757

ABSTRACT

As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying which of these complex patterns are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
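
The pooling-and-reduction step can be sketched by factorizing a non-negative models-by-AUs matrix into a few latent expressive patterns, here with scikit-learn's NMF. The data matrix is invented, and the study's actual reduction technique may differ.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
n_models, n_aus, n_latent = 120, 42, 4  # 60+ emotions x 2 cultures, 42 AUs

# Hypothetical non-negative AU activation patterns built from 4 latent bases.
bases = rng.random((n_latent, n_aus))
weights = rng.random((n_models, n_latent))
patterns = weights @ bases + 0.02 * rng.random((n_models, n_aus))

nmf = NMF(n_components=n_latent, init="nndsvda", random_state=0, max_iter=500)
scores = nmf.fit_transform(patterns)    # each model as a mix of 4 patterns
latent_patterns = nmf.components_       # the 4 latent AU patterns
print(latent_patterns.shape)            # (4, 42)
```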


Subject(s)
Culture , Emotions/physiology , Facial Expression , Language , Adult , Cross-Cultural Comparison , Female , Humans , Male , Young Adult