Results 1 - 17 of 17
1.
J Exp Psychol Gen ; 153(3): 742-753, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271012

ABSTRACT

Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown. That is, what makes someone look like they are of higher or lower social class standing (e.g., rich or poor), and how does this relate to harmful or advantageous stereotypes? We addressed these questions using a perception-based data-driven method to model the specific three-dimensional facial features that drive social class judgments and compared them to those of stereotype-related judgments (competence, warmth, dominance, and trustworthiness), based on White Western culture participants and face stimuli. Using a complementary data-reduction analysis and machine learning approach, we show that social class judgments are driven by a unique constellation of facial features that reflect multiple embedded stereotypes: poor-looking (vs. rich-looking) faces are wider, shorter, and flatter with downturned mouths and darker, cooler complexions, mirroring features of incompetent, cold, and untrustworthy-looking (vs. competent, warm, and trustworthy-looking) faces. Our results reveal the specific facial features that underlie the connection between impressions of social class and stereotype-related social traits, with implications for central social perception theories, including understanding the causal links between stereotype knowledge and social class judgments. We anticipate that our results will inform future interventions designed to interrupt biased perception and social inequalities. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subjects
Facial Recognition, Stereotyping, Humans, Social Perception, Attitude, Judgment, Social Class, Facial Expression, Trust
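The data-driven, perception-based method this abstract describes rests on reverse correlation: present randomly generated faces, record observers' judgments, and average the stimulus features conditioned on each response. The following is a minimal toy sketch of that logic, with invented feature dimensions and a simulated linear observer rather than the study's 3D face model:

```python
import random

random.seed(1)

N_FEATURES = 5   # hypothetical face-shape dimensions, not the study's
N_TRIALS = 20000

# Hypothetical internal template: the simulated observer weights
# feature 2 positively and feature 4 negatively when judging "rich".
TEMPLATE = [0.0, 0.0, 1.0, 0.0, -1.0]

def observer_response(face):
    """Noisy linear observer: responds 1 ('rich') when the face
    matches the internal template, 0 ('poor') otherwise."""
    evidence = sum(w * f for w, f in zip(TEMPLATE, face))
    return 1 if evidence + random.gauss(0, 0.5) > 0 else 0

# Data-driven step: random faces in, subjective judgments out.
faces = [[random.gauss(0, 1) for _ in range(N_FEATURES)]
         for _ in range(N_TRIALS)]
responses = [observer_response(face) for face in faces]

def mean_features(group):
    n = len(group)
    return [sum(face[i] for face in group) / n for i in range(N_FEATURES)]

# Classification image: mean of 'rich' trials minus mean of 'poor'
# trials recovers the features driving the judgment (up to scale).
rich = [f for f, r in zip(faces, responses) if r == 1]
poor = [f for f, r in zip(faces, responses) if r == 0]
classification_image = [a - b for a, b in
                        zip(mean_features(rich), mean_features(poor))]
```

With enough trials the difference of conditional means is proportional to the observer's internal template, which is the core idea behind classification-image methods.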
2.
Curr Biol ; 34(1): 213-223.e5, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38141619

ABSTRACT

Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions-"happy," "surprise," "fear," "disgust," "anger," and "sad"-and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers are also more similar across emotions than classifiers are, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.


Subjects
Emotions, Facial Expression, Humans, Anger, Fear, Happiness
3.
Sci Adv ; 9(6): eabq8421, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36763663

ABSTRACT

Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question-what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.


Subjects
Emotions, Facial Expression, Humans, Face, Movement
4.
J Neurosci ; 42(48): 9030-9044, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36280264

ABSTRACT

To date, social and nonsocial decisions have been studied largely in isolation. Consequently, the extent to which social and nonsocial forms of decision uncertainty are integrated using shared neurocomputational resources remains elusive. Here, we address this question using simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) in healthy human participants (young adults of both sexes) and a task in which decision evidence in social and nonsocial contexts varies along comparable scales. First, we identify time-resolved build-up of activity in the EEG, akin to a process of evidence accumulation (EA), across both contexts. We then use the endogenous trial-by-trial variability in the slopes of these accumulating signals to construct parametric fMRI predictors. We show that a region of the posterior-medial frontal cortex (pMFC) uniquely explains trial-wise variability in the process of evidence accumulation in both social and nonsocial contexts. We further demonstrate a task-dependent coupling between the pMFC and regions of the human valuation system in dorso-medial and ventro-medial prefrontal cortex across both contexts. Finally, we report domain-specific representations in regions known to encode the early decision evidence for each context. These results are suggestive of a domain-general decision-making architecture, whereupon domain-specific information is likely converted into a "common currency" in medial prefrontal cortex and accumulated for the decision in the pMFC.

SIGNIFICANCE STATEMENT: Little work has directly compared social-versus-nonsocial decisions to investigate whether they share common neurocomputational origins. Here, using combined electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) and computational modeling, we offer a detailed spatiotemporal account of the neural underpinnings of social and nonsocial decisions. Specifically, we identify a comparable mechanism of temporal evidence integration driving both decisions and localize this integration process in posterior-medial frontal cortex (pMFC). We further demonstrate task-dependent coupling between the pMFC and regions of the human valuation system across both contexts. Finally, we report domain-specific representations in regions encoding the early, domain-specific, decision evidence. These results suggest a domain-general decision-making architecture, whereupon domain-specific information is converted into a common representation in the valuation system and integrated for the decision in the pMFC.


Subjects
Decision Making, Magnetic Resonance Imaging, Young Adult, Male, Female, Humans, Frontal Lobe, Electroencephalography
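The EEG build-up signals this abstract describes are commonly interpreted through accumulation-to-bound models: noisy evidence is integrated over time until a decision boundary is crossed. A minimal sketch of such a model, with arbitrary parameter values rather than ones fitted to the study's data:

```python
import random

random.seed(0)

def accumulate(drift, threshold=1.0, noise=0.1, dt=0.01, max_steps=10000):
    """Integrate noisy evidence until a bound is crossed.
    Returns (choice, decision_time_in_seconds)."""
    x = 0.0
    for step in range(1, max_steps + 1):
        x += drift * dt + random.gauss(0, noise) * dt ** 0.5
        if x >= threshold:
            return 1, step * dt
        if x <= -threshold:
            return 0, step * dt
    return None, max_steps * dt   # no decision within the trial

def mean_rt(drift, n=500):
    """Average decision time over n simulated trials."""
    return sum(accumulate(drift)[1] for _ in range(n)) / n

# Stronger evidence (larger drift) yields faster decisions, mirroring
# steeper build-up slopes on high-evidence trials.
rt_weak, rt_strong = mean_rt(0.2), mean_rt(0.8)
```

The trial-by-trial slope of the accumulating variable here plays the role the EEG slopes play in the study's parametric fMRI predictors: a per-trial index of how quickly evidence is being integrated.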
5.
Curr Biol ; 32(1): 200-209.e6, 2022 01 10.
Article in English | MEDLINE | ID: mdl-34767768

ABSTRACT

Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information-i.e., specific categories and broader dimensions-via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent-i.e., multiplex-categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results-based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms-show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.


Subjects
Emotions, Facial Expression, Anger, Arousal, Face, Humans
6.
Patterns (N Y) ; 2(10): 100348, 2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34693374

ABSTRACT

Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.

7.
Curr Biol ; 31(10): 2243-2252.e6, 2021 05 24.
Article in English | MEDLINE | ID: mdl-33798430

ABSTRACT

Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5-7 including representing the diversity of beauty preferences within and across cultures.8-12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.


Subjects
Beauty, Culture, Face, Adult, Asian People, Female, Humans, Male, Sex Characteristics, White People
8.
Nat Hum Behav ; 3(8): 817-826, 2019 08.
Article in English | MEDLINE | ID: mdl-31209368

ABSTRACT

Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations1-4. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them. Here, we modelled the three-dimensional representational contents of 4 faces that were familiar to 14 participants as work colleagues. The representational contents were created by reverse-correlating identity information generated on each trial with judgements of the face's similarity to the individual participant's memory of this face. In a second study, testing new participants, we demonstrated the validity of the modelled contents using everyday face tasks that generalize identity judgements to new viewpoints, age and sex. Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms.


Subjects
Face, Facial Recognition, Memory, Adult, Female, Form Perception, Generalization (Psychology), Humans, Male, Psychological Models, Young Adult
9.
Proc Natl Acad Sci U S A ; 115(43): E10013-E10021, 2018 10 23.
Article in English | MEDLINE | ID: mdl-30297420

ABSTRACT

Real-world studies show that the facial expressions produced during pain and orgasm-two different and intense affective experiences-are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.


Subjects
Emotions/physiology, Face/physiology, Pain/physiopathology, Pain/psychology, Pleasure/physiology, Adult, Cross-Cultural Comparison, Culture, Facial Expression, Female, Humans, Interpersonal Relations, Male, Recognition (Psychology)/physiology, Young Adult
10.
Psychol Sci ; 28(9): 1259-1270, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28741981

ABSTRACT

A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.


Subjects
Interpersonal Relations, Object Attachment, Reward, Smiling/psychology, Social Dominance, Social Perception, Adolescent, Adult, Female, Humans, Male, Young Adult
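A Bayesian-classifier analysis of the kind this abstract mentions can be illustrated with a toy Gaussian classifier over two hypothetical smile features. The class means below are invented for illustration, loosely following the paper's verbal descriptions (reward smiles symmetric with eyebrow raising, dominance smiles asymmetric), and are not the study's fitted parameters:

```python
import random

random.seed(2)

# Hypothetical 2-feature coding of a smile: (symmetry, eyebrow_raise).
MEANS = {"reward": (0.8, 0.8), "affiliative": (0.6, 0.1),
         "dominance": (0.1, 0.0)}
SIGMA = 0.15   # shared, isotropic within-class noise (assumed)

def sample(label):
    """Draw a simulated smile of the given type."""
    mx, my = MEANS[label]
    return (random.gauss(mx, SIGMA), random.gauss(my, SIGMA))

def log_likelihood(x, label):
    """Log-likelihood under an isotropic Gaussian (constants dropped)."""
    return -sum((xi - mi) ** 2
                for xi, mi in zip(x, MEANS[label])) / (2 * SIGMA ** 2)

def classify(x):
    """Maximum-likelihood (flat-prior Bayesian) class assignment."""
    return max(MEANS, key=lambda label: log_likelihood(x, label))

# If the three smile types occupy distinct regions of feature space,
# the classifier separates them far above chance.
trials = [(label, sample(label)) for label in MEANS for _ in range(200)]
accuracy = sum(classify(x) == label for label, x in trials) / len(trials)
```

High held-out accuracy is the quantitative sense in which the three smile types are "highly distinct": their signal patterns are separable, not just nameable.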
11.
J Vis ; 16(8): 14, 2016 06 01.
Article in English | MEDLINE | ID: mdl-27305521

ABSTRACT

Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism-termed space-by-time manifold decomposition-that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions.


Subjects
Emotions/physiology, Facial Expression, Movement/physiology, Space Perception/physiology, Time Perception/physiology, Environment, Fear/physiology, Female, Happiness, Humans, Male, Young Adult
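The space-by-time decomposition this abstract introduces factorizes facial-movement data into separable spatial (Action Unit) and temporal components. A rank-1 sketch on synthetic data, using alternating updates, conveys the idea; the actual method uses higher-rank, more constrained decompositions, and the matrix below is invented:

```python
import math
import random

random.seed(3)

N_AUS, N_TIME = 6, 20

# Synthetic ground truth: one spatial pattern (AU weights) times one
# temporal profile generates the data matrix, plus small noise.
true_space = [random.random() for _ in range(N_AUS)]
true_time = [math.sin(math.pi * t / (N_TIME - 1)) for t in range(N_TIME)]
M = [[true_space[i] * true_time[j] + random.gauss(0, 0.01)
      for j in range(N_TIME)] for i in range(N_AUS)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Alternating updates (rank-1 power iteration): fix one factor,
# solve for the other, repeat until the separable fit converges.
space = [1.0] * N_AUS
for _ in range(50):
    time = normalize([sum(M[i][j] * space[i] for i in range(N_AUS))
                      for j in range(N_TIME)])
    space = normalize([sum(M[i][j] * time[j] for j in range(N_TIME))
                       for i in range(N_AUS)])

def cosine(u, v):
    return sum(a * b for a, b in zip(normalize(u), normalize(v)))

# How well the recovered spatial factor matches the ground truth.
recovery = abs(cosine(space, true_space))
```

When the data really are (approximately) separable in space and time, this factorization recovers the generating components, which is what licenses describing the categorization information as a low-dimensional space-by-time manifold.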
12.
J Exp Psychol Gen ; 145(6): 708-30, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27077757

ABSTRACT

As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.


Subjects
Culture, Emotions/physiology, Facial Expression, Language, Adult, Cross-Cultural Comparison, Female, Humans, Male, Young Adult
13.
Cortex ; 65: 50-64, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25638352

ABSTRACT

The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. 
Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation.


Subjects
Emotions/physiology, Face, Facial Expression, Prosopagnosia/psychology, Visual Perception/physiology, Brain Mapping, Cerebral Cortex/physiopathology, Discrimination (Psychology), Female, Humans, Male, Middle Aged, Psychological Models, Prosopagnosia/diagnosis
14.
Psychol Sci ; 25(5): 1079-86, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24659191

ABSTRACT

Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.


Subjects
Emotions/physiology, Face/anatomy & histology, Facial Expression, Sociological Factors, Adolescent, Adult, Female, Humans, Male, Perception, Social Perception, Young Adult
15.
Curr Biol ; 24(2): 187-192, 2014 Jan 20.
Article in English | MEDLINE | ID: mdl-24388852

ABSTRACT

Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four.


Subjects
Emotions, Face/physiology, Facial Expression, Brain/physiology, Humans, Reaction Time
17.
Proc Natl Acad Sci U S A ; 109(19): 7241-4, 2012 May 08.
Article in English | MEDLINE | ID: mdl-22509011

ABSTRACT

Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.


Subjects
Cross-Cultural Comparison, Emotions, Facial Expression, User-Computer Interface, Asian People/psychology, Cultural Characteristics, Female, Humans, Male, Psychological Models, Photic Stimulation, Surveys and Questionnaires, Visual Perception, White People/psychology, Young Adult