Results 1 - 20 of 22
1.
J Neurosci; 42(48): 9030-9044, 2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36280264

ABSTRACT

To date, social and nonsocial decisions have been studied largely in isolation. Consequently, the extent to which social and nonsocial forms of decision uncertainty are integrated using shared neurocomputational resources remains elusive. Here, we address this question using simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) in healthy human participants (young adults of both sexes) and a task in which decision evidence in social and nonsocial contexts varies along comparable scales. First, we identify time-resolved build-up of activity in the EEG, akin to a process of evidence accumulation (EA), across both contexts. We then use the endogenous trial-by-trial variability in the slopes of these accumulating signals to construct parametric fMRI predictors. We show that a region of the posterior-medial frontal cortex (pMFC) uniquely explains trial-wise variability in the process of evidence accumulation in both social and nonsocial contexts. We further demonstrate a task-dependent coupling between the pMFC and regions of the human valuation system in dorso-medial and ventro-medial prefrontal cortex across both contexts. Finally, we report domain-specific representations in regions known to encode the early decision evidence for each context. These results are suggestive of a domain-general decision-making architecture, in which domain-specific information is likely converted into a "common currency" in medial prefrontal cortex and accumulated for the decision in the pMFC.

SIGNIFICANCE STATEMENT: Little work has directly compared social versus nonsocial decisions to investigate whether they share common neurocomputational origins. Here, using combined electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) and computational modeling, we offer a detailed spatiotemporal account of the neural underpinnings of social and nonsocial decisions. Specifically, we identify a comparable mechanism of temporal evidence integration driving both decisions and localize this integration process in posterior-medial frontal cortex (pMFC). We further demonstrate task-dependent coupling between the pMFC and regions of the human valuation system across both contexts. Finally, we report domain-specific representations in regions encoding the early, domain-specific decision evidence. These results suggest a domain-general decision-making architecture, in which domain-specific information is converted into a common representation in the valuation system and integrated for the decision in the pMFC.
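
The central analysis step described here (turning endogenous trial-by-trial variability in EEG build-up rates into a parametric fMRI predictor) can be sketched in a few lines. This is a minimal illustration on simulated data; the sampling rates, the least-squares slope estimate, and the double-gamma HRF are assumptions made for the sketch, not the authors' exact pipeline.

import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
n_trials, n_samples, fs = 100, 200, 250          # 0.8 s of EEG per trial at 250 Hz
t_eeg = np.arange(n_samples) / fs

# Simulated centro-parietal build-up: a ramp whose slope varies by trial
true_slopes = rng.uniform(0.5, 2.0, n_trials)
eeg = true_slopes[:, None] * t_eeg + rng.normal(0, 0.2, (n_trials, n_samples))

# 1) Estimate the per-trial build-up rate with a least-squares line fit
slopes = np.polyfit(t_eeg, eeg.T, deg=1)[0]      # first row = fitted slopes

# 2) Parametric fMRI predictor: impulses at trial onsets scaled by the
#    (mean-centered) slope, convolved with a canonical HRF
tr, n_scans = 2.0, 400
onsets = np.sort(rng.choice(np.arange(20, n_scans - 20), n_trials, replace=False))
stick = np.zeros(n_scans)
stick[onsets] = slopes - slopes.mean()           # demeaned parametric modulator

def hrf(tr, duration=32.0):
    # Double-gamma HRF sampled at the scan repetition time
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6

regressor = np.convolve(stick, hrf(tr))[:n_scans]
# 'regressor' would enter the GLM alongside an unmodulated trial regressor.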


Subject(s)
Decision Making, Magnetic Resonance Imaging, Young Adult, Male, Female, Humans, Frontal Lobe, Electroencephalography
2.
Proc Natl Acad Sci U S A; 115(43): E10013-E10021, 2018 Oct 23.
Article in English | MEDLINE | ID: mdl-30297420

ABSTRACT

Real-world studies show that the facial expressions produced during pain and orgasm, two different and intense affective experiences, are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.
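
The combination of machine learning and information-theoretic analysis used to show that two sets of representations are "physically distinct" can be illustrated as follows: classify action-unit (AU) patterns by category and quantify how much information the predictions carry about the true label. Simulated data; the AU count, classifier, and activation probabilities are illustrative assumptions, not the paper's exact method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
n_per_class, n_aus = 60, 42                      # 42 AUs is illustrative

# Simulated binary AU patterns with category-specific activation probabilities
p_pain   = rng.uniform(0.1, 0.9, n_aus)
p_orgasm = rng.uniform(0.1, 0.9, n_aus)
X = np.vstack([rng.random((n_per_class, n_aus)) < p_pain,
               rng.random((n_per_class, n_aus)) < p_orgasm]).astype(float)
y = np.repeat([0, 1], n_per_class)               # 0 = pain, 1 = orgasm

# Cross-validated classification: above-chance accuracy implies distinct patterns
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=10)
accuracy = (pred == y).mean()

# Mutual information (in bits) between true and predicted category
mi_bits = mutual_info_score(y, pred) / np.log(2)
print(f"accuracy = {accuracy:.2f}, MI = {mi_bits:.2f} bits (max 1 bit)")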


Subject(s)
Emotions/physiology, Face/physiology, Pain/physiopathology, Pain/psychology, Pleasure/physiology, Adult, Cross-Cultural Comparison, Culture, Facial Expression, Female, Humans, Interpersonal Relations, Male, Recognition, Psychology/physiology, Young Adult
3.
Psychol Sci; 28(9): 1259-1270, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28741981

ABSTRACT

A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.
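
A minimal sketch of a Bayesian-classifier analysis of the three smile types, assuming each smile stimulus is summarized as an AU-amplitude vector; the Gaussian naive Bayes model and all numbers are stand-ins rather than the paper's exact classifier.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_type, n_aus = 50, 10

# Simulated AU amplitudes; each smile type has its own mean pattern, e.g.,
# reward: symmetry + brow raise; affiliation: lip press; dominance: nose wrinkle
means = rng.normal(0, 1, (3, n_aus))
X = np.vstack([rng.normal(m, 0.8, (n_per_type, n_aus)) for m in means])
y = np.repeat(["reward", "affiliation", "dominance"], n_per_type)

# Posterior class probabilities come from Bayes' rule over per-AU Gaussians
acc = cross_val_score(GaussianNB(), X, y, cv=10).mean()
print(f"cross-validated accuracy = {acc:.2f} (chance = 0.33)")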


Subject(s)
Interpersonal Relations, Object Attachment, Reward, Smiling/psychology, Social Dominance, Social Perception, Adolescent, Adult, Female, Humans, Male, Young Adult
4.
J Vis; 16(8): 14, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27305521

ABSTRACT

Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism, termed space-by-time manifold decomposition, that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions.
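
The separability idea behind a space-by-time decomposition can be illustrated with a rank-k nonnegative matrix factorization, where each component is the outer product of one temporal profile and one Action Unit (spatial) weighting. This one-level NMF is a simplified stand-in for the full method, run on simulated data.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_time, n_aus, k = 30, 42, 4

# Simulated nonnegative face-movement data: AU activation time courses
# built from k ground-truth separable components plus noise
W_true = np.abs(rng.normal(0, 1, (n_time, k)))   # temporal profiles
H_true = np.abs(rng.normal(0, 1, (k, n_aus)))    # AU (spatial) weightings
M = W_true @ H_true + np.abs(rng.normal(0, 0.05, (n_time, n_aus)))

# Rank-k NMF: M ~ W @ H, i.e., a sum of k components, each the outer
# product of a temporal profile (column of W) and an AU pattern (row of H)
model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(M)                       # time x k temporal modules
H = model.components_                            # k x AUs spatial modules

recon_error = np.linalg.norm(M - W @ H) / np.linalg.norm(M)
print(f"relative reconstruction error = {recon_error:.3f}")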


Subject(s)
Emotions/physiology, Facial Expression, Movement/physiology, Space Perception/physiology, Time Perception/physiology, Environment, Fear/physiology, Female, Happiness, Humans, Male, Young Adult
5.
J Neurosci; 34(20): 6813-21, 2014 May 14.
Article in English | MEDLINE | ID: mdl-24828635

ABSTRACT

The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, it is not fully clear whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice, or rather to multimodal neurons receiving input from the two modalities. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion (although there was a greater weighting of face information) and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.
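
The carryover logic of this adaptation analysis reduces to a regression of ROI amplitude on the crossmodal distance between the current facial emotion and the preceding vocal emotion. A hedged sketch on simulated trials; the linear model and noise levels are assumptions.

import numpy as np

rng = np.random.default_rng(4)
n_trials = 300

# Morph level per modality: 0 = fully angry, 1 = fully happy
face = rng.random(n_trials)
voice = rng.random(n_trials)

# Crossmodal "adaptation distance": how far the current facial emotion is
# from the vocal emotion of the PRECEDING trial (carryover design)
dist = np.abs(face[1:] - voice[:-1])

# Simulated pSTS amplitude: reduced (adapted) when the distance is small
beta_true = 0.8
psts = 1.0 + beta_true * dist + rng.normal(0, 0.3, n_trials - 1)

# Linear regression of amplitude on distance; a positive slope is the
# signature of crossmodal adaptation consistent with bimodal neurons
X = np.column_stack([np.ones_like(dist), dist])
beta = np.linalg.lstsq(X, psts, rcond=None)[0]
print(f"estimated adaptation slope = {beta[1]:.2f} (simulated truth 0.8)")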


Subject(s)
Auditory Perception/physiology, Emotions/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Photic Stimulation, Social Perception, Voice
6.
Proc Natl Acad Sci U S A; 109(19): 7241-4, 2012 May 08.
Article in English | MEDLINE | ID: mdl-22509011

ABSTRACT

Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest-standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 individuals from Western and Eastern cultures and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.
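
The reverse-correlation logic behind reconstructing mental representations can be sketched as follows: present random facial-movement patterns, record categorizations, and average the patterns accepted for each emotion relative to the overall mean. The simulated observer and flat AU parameterization below are assumptions; the actual platform uses a Generative Face Grammar on photorealistic dynamic faces.

import numpy as np

rng = np.random.default_rng(5)
n_trials, n_aus = 2400, 42
emotions = ["happy", "surprise", "fear", "disgust", "anger", "sad"]

# Random AU activation patterns shown on each trial (0..1 amplitudes)
stimuli = rng.random((n_trials, n_aus))

# Simulated observer: responds with an emotion when the stimulus is close
# enough to a hidden template, otherwise "other" (as in the paradigm)
templates = rng.random((6, n_aus))
d = np.linalg.norm(stimuli[:, None, :] - templates[None], axis=2)
best = d.argmin(axis=1)
responses = np.where(d.min(axis=1) < np.quantile(d.min(axis=1), 0.5),
                     best, -1)                   # -1 codes "other"

# Reverse correlation: the mental representation of each emotion is the
# mean AU pattern of the stimuli categorized as that emotion, relative
# to the overall stimulus mean
grand_mean = stimuli.mean(axis=0)
mental_models = np.array([stimuli[responses == e].mean(axis=0) - grand_mean
                          for e in range(6)])
# Comparing 'mental_models' across observer groups is the cross-cultural test.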


Subject(s)
Cross-Cultural Comparison, Emotions, Facial Expression, User-Computer Interface, Asian People/psychology, Cultural Characteristics, Female, Humans, Male, Models, Psychological, Photic Stimulation, Surveys and Questionnaires, Visual Perception, White People/psychology, Young Adult
7.
Psychol Sci; 25(5): 1079-86, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24659191

ABSTRACT

Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.


Subject(s)
Emotions/physiology, Face/anatomy & histology, Facial Expression, Sociological Factors, Adolescent, Adult, Female, Humans, Male, Perception, Social Perception, Young Adult
8.
J Exp Psychol Gen; 153(3): 742-753, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271012

ABSTRACT

Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown. That is, what makes someone look like they are of higher or lower social class standing (e.g., rich or poor), and how does this relate to harmful or advantageous stereotypes? We addressed these questions using a perception-based data-driven method to model the specific three-dimensional facial features that drive social class judgments and compared them to those of stereotype-related judgments (competence, warmth, dominance, and trustworthiness), based on White Western culture participants and face stimuli. Using a complementary data-reduction analysis and machine learning approach, we show that social class judgments are driven by a unique constellation of facial features that reflect multiple embedded stereotypes: poor-looking (vs. rich-looking) faces are wider, shorter, and flatter with downturned mouths and darker, cooler complexions, mirroring features of incompetent, cold, and untrustworthy-looking (vs. competent, warm, and trustworthy-looking) faces. Our results reveal the specific facial features that underlie the connection between impressions of social class and stereotype-related social traits, with implications for central social perception theories, including understanding the causal links between stereotype knowledge and social class judgments. We anticipate that our results will inform future interventions designed to interrupt biased perception and social inequalities.
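
A hedged sketch of the data-reduction-plus-modeling comparison: reduce high-dimensional 3D shape features with PCA, fit one linear judgment model per social attribute, and measure the overlap of the resulting feature constellations. All dimensions, the ridge models, and the simulated ratings are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n_faces, n_vertices = 500, 3 * 1500              # x,y,z of 1500 vertices (illustrative)

faces = rng.normal(0, 1, (n_faces, n_vertices))  # stand-in 3D shape features

# Hidden ground truth: class and trustworthiness judgments share features
w_shared = rng.normal(0, 1, n_vertices)
cls = faces @ w_shared + rng.normal(0, 5, n_faces)            # social-class ratings
trust = (faces @ (0.7 * w_shared + rng.normal(0, 0.5, n_vertices))
         + rng.normal(0, 5, n_faces))                         # trustworthiness ratings

# Data reduction, then one linear "judgment model" per social attribute
pca = PCA(n_components=50).fit(faces)
Z = pca.transform(faces)
w_cls = Ridge(1.0).fit(Z, cls).coef_
w_trust = Ridge(1.0).fit(Z, trust).coef_

# Overlap between the feature constellations driving the two judgments
cos = w_cls @ w_trust / (np.linalg.norm(w_cls) * np.linalg.norm(w_trust))
print(f"cosine similarity of class vs. trustworthiness models = {cos:.2f}")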


Subject(s)
Facial Recognition, Stereotyping, Humans, Social Perception, Attitude, Judgment, Social Class, Facial Expression, Trust
9.
Curr Biol; 34(1): 213-223.e5, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38141619

ABSTRACT

Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.
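
The distinction between classifiers and intensifiers rests on two measurable properties of AU time courses: peak latency and peak amplitude. A minimal sketch of extracting both from simulated activation curves; the Gaussian time-course shape and the specific timings are assumptions.

import numpy as np

t = np.linspace(0, 1, 60)                        # normalized signaling time

def au_course(peak_time, amplitude, width=0.08):
    # Gaussian-shaped Action Unit activation time course
    return amplitude * np.exp(-(t - peak_time) ** 2 / (2 * width ** 2))

# A classifier AU keeps its timing and amplitude across intensity levels,
# while an intensifier AU peaks earlier and scales with intensity
for level in [0.3, 0.6, 1.0]:
    classifier = au_course(0.55, 1.0)
    intensifier = au_course(0.25, level)
    print(f"intensity {level:.1f}: "
          f"classifier peak t={t[classifier.argmax()]:.2f}, amp={classifier.max():.2f}; "
          f"intensifier peak t={t[intensifier.argmax()]:.2f}, amp={intensifier.max():.2f}")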


Subject(s)
Emotions, Facial Expression, Humans, Anger, Fear, Happiness
10.
Curr Biol; 33(24): 5505-5514.e6, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38065096

ABSTRACT

Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization.1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17 However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories18,19,20,21,22,23,24 (e.g., faces versus cars), which could lead to predictions of features specific to faces or cars, or features from both categories. Here, to pinpoint the information contents of predictions and thus their mechanistic processing in the brain, we identified the features that enable two different categorical perceptions of the same stimuli. We then trained multivariate classifiers to discern, from dynamic MEG brain responses, the features tied to each perception. With an auditory cueing design, we reveal where, when, and how the brain reactivates visual category features (versus the typical category contrast) before the stimulus is shown. We demonstrate that the predictions of category features have a more direct influence (bias) on subsequent decision behavior in participants than the typical category contrast. Specifically, these predictions are more precisely localized in the brain (lateralized), are more specifically driven by the auditory cues, and their reactivation strength before a stimulus presentation exerts a greater bias on how the individual participant later categorizes this stimulus. By characterizing the specific information contents that the brain predicts and then processes, our findings provide new insights into the brain's mechanisms of prediction for perception.
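
A sketch of the time-resolved multivariate decoding used to track feature reactivation: train one cross-validated classifier per time point on sensor data and look for above-chance decoding before stimulus onset. Simulated MEG; the sensor count, information window, and logistic-regression decoder are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n_trials, n_sensors, n_times = 200, 100, 50
y = rng.integers(0, 2, n_trials)                 # which feature set was cued

# Simulated MEG: feature information emerges in a pre-stimulus window
X = rng.normal(0, 1, (n_trials, n_sensors, n_times))
pattern = rng.normal(0, 1, n_sensors)
X[:, :, 25:40] += 0.4 * np.outer(2 * y - 1, pattern)[:, :, None]

# Time-resolved decoding: one cross-validated classifier per time point
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = np.array([cross_val_score(clf, X[:, :, it], y, cv=5,
                                scoring="roc_auc").mean()
                for it in range(n_times)])
print("peak decoding AUC:", auc.max().round(2),
      "at time index", int(auc.argmax()))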


Subject(s)
Brain, Cues, Humans, Brain/physiology, Brain Mapping, Photic Stimulation
11.
Sci Adv; 9(6): eabq8421, 2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36763663

ABSTRACT

Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question: what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
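
The noise-ceiling logic can be made concrete in a few lines: a model's agreement with human categorizations is benchmarked against the mean agreement between humans, which bounds what any model can be expected to explain. Simulated responses; the flip-noise response model is an assumption.

import numpy as np

rng = np.random.default_rng(9)
n_stimuli, n_humans = 300, 20

# Simulated categorizations (6 emotion labels) with shared structure
ground = rng.integers(0, 6, n_stimuli)
def noisy(labels, p_flip=0.25):
    flip = rng.random(n_stimuli) < p_flip
    return np.where(flip, rng.integers(0, 6, n_stimuli), labels)

humans = np.array([noisy(ground) for _ in range(n_humans)])
model_pred = noisy(ground, p_flip=0.35)          # a hypothesis model's output

# Model performance: mean agreement with each human
model_acc = (model_pred[None] == humans).mean()

# Noise ceiling: mean agreement of each human with every other human --
# no model can be expected to beat inter-observer consistency
pair_acc = [(humans[i] == humans[j]).mean()
            for i in range(n_humans) for j in range(n_humans) if i != j]
ceiling = np.mean(pair_acc)
print(f"model agreement = {model_acc:.2f}, noise ceiling = {ceiling:.2f}")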


Subject(s)
Emotions, Facial Expression, Humans, Face, Movement
12.
Curr Biol; 32(1): 200-209.e6, 2022 Jan 10.
Article in English | MEDLINE | ID: mdl-34767768

ABSTRACT

Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information (i.e., specific categories and broader dimensions) via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent (i.e., multiplex) categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results (based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms) show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
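
The "shared subsets of components" comparison can be made concrete with set overlap between the AUs of category models and dimension models. The AU sets below are hypothetical placeholders, not the paper's fitted models.

# Hypothetical AU sets (illustrative only): which action units each
# derived facial-expression model contains
category_models = {
    "anger": {"AU4", "AU7", "AU23", "AU17"},
    "fear":  {"AU1", "AU2", "AU5", "AU20"},
    "happy": {"AU6", "AU12", "AU25"},
}
dimension_models = {
    "negative_valence": {"AU4", "AU7", "AU9", "AU17"},
    "high_arousal":     {"AU1", "AU2", "AU5", "AU25"},
}

def jaccard(a, b):
    # Overlap of two AU sets: |intersection| / |union|
    return len(a & b) / len(a | b)

# Shared AUs suggest latent signals that multiplex category + dimension info
for cat, aus_c in category_models.items():
    for dim, aus_d in dimension_models.items():
        shared = sorted(aus_c & aus_d)
        print(f"{cat} vs {dim}: J={jaccard(aus_c, aus_d):.2f}, shared={shared}")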


Subject(s)
Emotions, Facial Expression, Anger, Arousal, Face, Humans
14.
Patterns (N Y); 2(10): 100348, 2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34693374

ABSTRACT

Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
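
The feature-equivalence test starts by asking how well a network's activations linearly predict human similarity ratings; the paper then probes which shape and texture features drive that fit. A hedged sketch with random stand-in activations; a real analysis would extract activations from a trained DNN layer.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(10)
n_faces, n_units = 400, 512

# Stand-in DNN activations for randomly generated faces, plus human
# similarity-to-identity ratings driven by a subset of those features
acts = rng.normal(0, 1, (n_faces, n_units))
w_human = rng.normal(0, 1, n_units)              # features humans actually use
ratings = acts @ w_human + rng.normal(0, 10, n_faces)

# Cross-validated linear readout: how well do the network's features
# predict the human similarity ratings?
pred = cross_val_predict(RidgeCV(alphas=np.logspace(-2, 4, 13)),
                         acts, ratings, cv=5)
r = np.corrcoef(pred, ratings)[0, 1]
print(f"prediction r = {r:.2f}")
# High r only shows correlation; the paper's point is to further test
# WHICH shape/texture features overlap, e.g., via reverse correlation.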

15.
Emotion; 21(6): 1324-1339, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32628034

ABSTRACT

Action video game players (AVGPs) display superior performance in various aspects of cognition, especially in perception and top-down attention. The existing literature has examined this performance almost exclusively with stimuli and tasks devoid of any emotional content. Thus, whether the superior performance documented in the cognitive domain extends to the emotional domain remains unknown. We present 2 cross-sectional studies contrasting AVGPs and nonvideo game players (NVGPs) in their ability to perceive facial emotions. Under an enhanced perception account, AVGPs should outperform NVGPs when processing facial emotion. Yet, alternative accounts exist. For instance, under some social accounts, exposure to action video games, which often contain violence, may lower sensitivity for empathy-related expressions such as sadness, happiness, and pain while increasing sensitivity to aggression signals. Finally, under the view that AVGPs excel at learning new tasks (in contrast to the view that they are immediately better at all new tasks), the use of stimuli that participants are already experts at predicts little to no group differences. Study 1 uses drift-diffusion modeling and establishes that AVGPs are comparable to NVGPs in every decision-making stage mediating the discrimination of facial emotions, despite showing a group difference in aggressive behavior. Study 2 uses the reverse inference technique to assess the mental representation of facial emotion expressions, and again documents no group differences. These results indicate that the perceptual benefits associated with action video game play do not extend to overlearned stimuli such as facial emotion, and rather indicate equivalent facial emotion skills in AVGPs and NVGPs.
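
Study 1's drift-diffusion analysis can be approximated with the closed-form EZ-diffusion model (Wagenmakers et al., 2007), which recovers drift rate, boundary separation, and non-decision time from accuracy and the mean and variance of correct RTs. This is a simplified stand-in for a full DDM fit, and the group summary numbers below are hypothetical.

import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    # EZ-diffusion (Wagenmakers et al., 2007): closed-form estimates of
    # drift rate v, boundary separation a, and non-decision time Ter from
    # accuracy pc, and variance vrt / mean mrt of correct RTs (seconds).
    # pc must not be exactly 0, 0.5, or 1 (apply an edge correction if so).
    L = np.log(pc / (1 - pc))                    # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25          # drift rate
    a = s**2 * L / v                             # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    return v, a, mrt - mdt                       # Ter = mean RT - decision time

# Hypothetical group summaries for an emotion-discrimination task
for group, (pc, vrt, mrt) in {"AVGP": (0.87, 0.12, 0.71),
                              "NVGP": (0.86, 0.13, 0.74)}.items():
    v, a, ter = ez_diffusion(pc, vrt, mrt)
    print(f"{group}: drift={v:.3f}, boundary={a:.3f}, Ter={ter:.3f}s")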


Subject(s)
Video Games, Cross-Sectional Studies, Emotions, Facial Expression, Humans, Perception
16.
Curr Biol; 31(10): 2243-2252.e6, 2021 May 24.
Article in English | MEDLINE | ID: mdl-33798430

ABSTRACT

Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5-7 including representing the diversity of beauty preferences within and across cultures.8-12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.
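
The individual-level modeling can be sketched as rating-weighted reverse correlation: z-score each participant's attractiveness ratings and average the random face-feature vectors under those weights. The feature dimensionality and template-based simulated raters are assumptions for the sketch.

import numpy as np

rng = np.random.default_rng(11)
n_participants, n_trials, n_features = 40, 500, 80   # shape/complexion dims (illustrative)

# Each participant has a hidden attractiveness template; a shared part
# plus an idiosyncratic part mimics cultural + individual preferences
shared = rng.normal(0, 1, n_features)
templates = shared + 0.8 * rng.normal(0, 1, (n_participants, n_features))

models = np.empty((n_participants, n_features))
for p in range(n_participants):
    faces = rng.normal(0, 1, (n_trials, n_features))  # random face features
    ratings = faces @ templates[p] + rng.normal(0, 3, n_trials)
    z = (ratings - ratings.mean()) / ratings.std()
    # Reverse correlation: rating-weighted average of the random features
    models[p] = z @ faces / n_trials

# Group model vs. individual diversity (cf. shared vs. idiosyncratic features)
group_model = models.mean(axis=0)
r_to_group = [np.corrcoef(m, group_model)[0, 1] for m in models]
print(f"mean individual-to-group correlation = {np.mean(r_to_group):.2f}")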


Subject(s)
Beauty, Culture, Face, Adult, Asian People, Female, Humans, Male, Sex Characteristics, White People
17.
Nat Hum Behav; 3(8): 817-826, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31209368

ABSTRACT

Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations1-4. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them. Here, we modelled the three-dimensional representational contents of 4 faces that were familiar to 14 participants as work colleagues. The representational contents were created by reverse-correlating identity information generated on each trial with judgements of the face's similarity to the individual participant's memory of this face. In a second study, testing new participants, we demonstrated the validity of the modelled contents using everyday face tasks that generalize identity judgements to new viewpoints, age and sex. Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms.


Subject(s)
Face, Facial Recognition, Memory, Adult, Female, Form Perception, Generalization, Psychological, Humans, Male, Models, Psychological, Young Adult
18.
J Exp Psychol Hum Percept Perform; 45(12): 1589-1595, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31556686

ABSTRACT

Facial attractiveness plays a critical role in social interaction, influencing many different social outcomes. However, the factors that influence facial attractiveness judgments remain relatively poorly understood. Here, we used a sample of 594 young adult female face images to compare the performance of existing theory-driven models of facial attractiveness and a data-driven (i.e., theory-neutral) model. Our data-driven model and a theory-driven model including various traits commonly studied in facial attractiveness research (asymmetry, averageness, sexual dimorphism, body mass index, and representational sparseness) performed similarly well. By contrast, univariate theory-driven models performed relatively poorly. These results (a) highlight the utility of data-driven models of facial attractiveness and (b) suggest that theory-driven research on facial attractiveness would benefit from greater adoption of multivariate approaches, rather than the univariate approaches it currently almost exclusively employs.
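
The model comparison reported here amounts to cross-validated variance explained: a multivariate model over all theory-driven traits, a richer data-driven feature space, and each univariate trait alone. A sketch on simulated ratings; the trait names follow the abstract, but their values and weights are invented for illustration.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
n_faces = 594                                    # matching the sample size above

# Simulated theory-driven predictors; ratings depend on several at once
traits = {name: rng.normal(0, 1, n_faces)
          for name in ["asymmetry", "averageness", "dimorphism",
                       "bmi", "sparseness"]}
X_theory = np.column_stack(list(traits.values()))
ratings = (X_theory @ np.array([-0.3, 0.4, 0.2, -0.25, 0.15])
           + rng.normal(0, 1, n_faces))

# Data-driven stand-in: a richer feature space (e.g., PCA of face images)
X_data = np.column_stack([X_theory, rng.normal(0, 1, (n_faces, 45))])

def cv_r2(X):
    # Mean 10-fold cross-validated R-squared of a ridge regression
    return cross_val_score(RidgeCV(), X, ratings, cv=10, scoring="r2").mean()

print("multivariate theory model R2:", round(cv_r2(X_theory), 2))
print("data-driven model R2:       ", round(cv_r2(X_data), 2))
for name, x in traits.items():                   # univariate models lag behind
    print(f"univariate {name:12s} R2:", round(cv_r2(x[:, None]), 2))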


Subject(s)
Beauty, Face, Judgment, Female, Humans, Male, Models, Psychological, Photic Stimulation, Psychological Theory, Young Adult
19.
J Exp Psychol Gen; 145(6): 708-30, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27077757

ABSTRACT

As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying which of these complex patterns are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.


Subject(s)
Culture, Emotions/physiology, Facial Expression, Language, Adult, Cross-Cultural Comparison, Female, Humans, Male, Young Adult
20.
Cortex; 65: 50-64, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25638352

ABSTRACT

The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula, and the posterior superior temporal sulcus [pSTS]). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights into the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation.


Subject(s)
Emotions/physiology, Face, Facial Expression, Prosopagnosia/psychology, Visual Perception/physiology, Brain Mapping, Cerebral Cortex/physiopathology, Discrimination, Psychological, Female, Humans, Male, Middle Aged, Models, Psychological, Prosopagnosia/diagnosis