Results 1 - 20 of 71
1.
Sci Rep ; 13(1): 19323, 2023 11 07.
Article in English | MEDLINE | ID: mdl-37935828

ABSTRACT

Face ensemble coding is the perceptual ability to create a quick and overall impression of a group of faces, triggering social and behavioral motivations towards other people (approaching friendly people or avoiding an angry mob). Cultural differences in this ability have been reported, such that Easterners are better at face ensemble coding than Westerners are. The underlying mechanism has been attributed to differences in processing styles, with Easterners allocating attention globally and Westerners focusing on local parts. However, a remaining question is how such a default attention mode is influenced by salient information during ensemble perception. We created visual displays that resembled a real-world social setting in which one individual in a crowd of different faces drew the viewer's attention while the viewer judged the overall emotion of the crowd. In each trial, one face in the crowd was highlighted by a salient cue, capturing spatial attention before the participants viewed the entire group. American participants' judgments of group emotion weighed the attended individual face more strongly than Korean participants' did, suggesting a greater influence of local information on global perception. Our results showed that different attentional modes between cultural groups modulate the social-emotional processing underlying people's perceptions and attributions.
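The key quantity described above, how heavily the cued face is weighted in the crowd judgment, can be estimated with a simple regression. A minimal sketch on synthetic data (all numbers and weights are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, crowd_size = 200, 8

# Emotion intensity of each face in the crowd per trial (arbitrary units).
faces = rng.uniform(0, 10, size=(n_trials, crowd_size))
cued = faces[:, 0]                  # the face highlighted by the salient cue
others = faces[:, 1:].mean(axis=1)  # mean of the remaining faces

# Simulate judgments that overweight the cued face (w = 0.4 instead of 1/8).
judgments = 0.4 * cued + 0.6 * others + rng.normal(0, 0.5, n_trials)

# Recover the weights by ordinary least squares.
X = np.column_stack([cued, others])
w, *_ = np.linalg.lstsq(X, judgments, rcond=None)
print(w)  # w[0] well above 1/8 indicates overweighting of the attended face
```

A larger recovered weight on `cued` for one cultural group than another would correspond to the group difference reported above.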


Subjects
East Asian People, Judgment, Humans, United States, Facial Expression, Emotions, Anger
2.
Proc IEEE Inst Electr Electron Eng ; 111(10): 1236-1286, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37859667

ABSTRACT

The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. While recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This foundering stems from the absence of a universally accepted definition of "emotion," coupled with the inherently subjective nature of emotions and their intricate nuances. In this article, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a "Holy Grail" research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.

3.
Pers Soc Psychol Rev ; 27(3): 332-356, 2023 08.
Article in English | MEDLINE | ID: mdl-36218340

ABSTRACT

Social vision research, which examines, in part, how humans visually perceive social stimuli, is well-positioned to improve understandings of social inequality. However, social vision research has rarely prioritized the perspectives of marginalized group members. We offer a theoretical argument for diversifying understandings of social perceptual processes by centering marginalized perspectives. We examine (a) how social vision researchers frame their research questions and who these framings prioritize and (b) how perceptual processes (person perception; people perception; perception of social objects) are linked to group membership and thus comprehensively understanding these processes necessitates attention to marginalized perceivers. We discuss how social vision research translates into theoretical advances and to action for reducing negative intergroup consequences (e.g., prejudice). The purpose of this article is to delineate how prioritizing marginalized perspectives in social vision research could develop novel questions, bridge theoretical gaps, and elevate social vision's translational impact to improve outcomes for marginalized groups.


Subjects
Lenses, Prejudice, Humans, Social Change, Feminism, Social Perception
4.
Affect Sci ; 3(3): 539-545, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36385905

ABSTRACT

Meeting the demands of a social world is an incredibly complex task. Since humans are able to navigate the social world so effortlessly, our ability to both interpret and signal complex social and emotional information is arguably shaped by evolutionary pressures. Dunbar (1992) tested this assumption in his Social Brain Hypothesis, observing that different primates' neocortical volume predicted their average social network size, suggesting that neocortical evolution was driven at least in part by social demands. Here we examined the Social Face Hypothesis, based on the assumption that the face co-evolved with the brain to signal more complex and nuanced emotional, mental, and behavioral states to others. Despite prior observations suggestive of this conclusion (e.g., Redican, 1982), it has not, to our knowledge, been empirically tested. To do this, we obtained updated metrics of primate facial musculature, facial hair bareness, average social network size, and average brain weight data for a large number of primate genera (N = 63). In this sample, we replicated Dunbar's original observation by finding that average brain weight predicted average social network size. Critically, we also found that perceived facial hair bareness predicted both group size and average brain weight. Finally, we found that all three variables acted as mediators, confirming a complex, interdependent relationship between primate social network size, primate brain weight, and primate facial hair bareness. These findings are consistent with the conclusion that the primate brain and face co-evolved in response to meeting the increased social demands of one's environment. Supplementary Information: The online version contains supplementary material available at 10.1007/s42761-022-00116-7.
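The mediation pattern reported above follows standard regression logic: a variable mediates an effect if the predictor's coefficient shrinks once the mediator enters the model. A hedged sketch on synthetic genus-level data (variable names and effect sizes are assumptions, not the authors' dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 63  # number of primate genera in the reported sample

brain_weight = rng.normal(100, 20, n)
# Simulate facial bareness as partly driven by brain weight (assumed effect).
bareness = 0.5 * brain_weight + rng.normal(0, 10, n)
# Group size depends on both, so bareness partially mediates brain -> group.
group_size = 0.3 * brain_weight + 0.4 * bareness + rng.normal(0, 5, n)

def ols(y, *xs):
    """Least-squares coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

total = ols(group_size, brain_weight)[1]             # total effect c
direct = ols(group_size, brain_weight, bareness)[1]  # direct effect c'
print(total, direct)  # direct < total: bareness carries part of the effect
```

The paper's finding that all three variables mediate one another amounts to this coefficient-shrinkage check holding in every direction.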

5.
Affect Sci ; 3(1): 46-61, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36046095

ABSTRACT

Machine learning findings suggest Eurocentric (aka White/European) faces structurally resemble anger more than Afrocentric (aka Black/African) faces (e.g., Albohn, 2020; Zebrowitz et al., 2010); however, Afrocentric faces are typically associated with anger more so than Eurocentric faces (e.g., Hugenberg & Bodenhausen, 2003, 2004). Here, we further examine counter-stereotypic associations between Eurocentric faces and anger, and Afrocentric faces and fear. In Study 1, using a computer vision algorithm, we demonstrate that neutral European American faces structurally resemble anger more and fear less than do African American faces. In Study 2, we then found that anger- and fear-resembling facial appearance influences perceived racial prototypicality in this same counter-stereotypic manner. In Study 3, we likewise found that imagined European American versus African American faces were rated counter-stereotypically (i.e., more like anger than fear) on key emotion-related facial characteristics (i.e., size of eyes, size of mouth, overall angularity of features). Finally in Study 4, we again found counter-stereotypic differences, this time in processing fluency, such that angry Eurocentric versus Afrocentric faces and fearful Afrocentric versus Eurocentric faces were categorized more accurately and quickly. Only in Study 5, using race-ambiguous interior facial cues coupled with Afrocentric versus Eurocentric hairstyles and skin tone, did we find the stereotypical effects commonly reported in the literature. These findings are consistent with the conclusion that the "angry Black" association in face perception is socially constructed in that structural cues considered prototypical of African American appearance conflict with common race-emotion stereotypes.

6.
Atten Percept Psychophys ; 84(7): 2271-2280, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36045309

ABSTRACT

Decades of research show that contextual information from the body, visual scene, and voices can facilitate judgments of facial expressions of emotion. To date, most research suggests that bodily expressions of emotion offer context for interpreting facial expressions, but not vice versa. The present research aimed to investigate the conditions under which mutual processing of facial and bodily displays of emotion facilitate and/or interfere with emotion recognition. In the current two studies, we examined whether body and face emotion recognition are enhanced through integration of shared emotion cues, and/or hindered through mixed signals (i.e., interference). We tested whether faces and bodies facilitate or interfere with emotion processing by pairing briefly presented (33 ms), backward-masked presentations of faces with supraliminally presented bodies (Experiment 1) and vice versa (Experiment 2). Both studies revealed strong support for integration effects, but not interference. Integration effects are most pronounced for low-emotional clarity facial and bodily expressions, suggesting that when more information is needed in one channel, the other channel is recruited to disentangle any ambiguity. That this occurs for briefly presented, backward-masked presentations reveals low-level visual integration of shared emotional signal value.


Subjects
Emotions, Facial Recognition, Cues, Facial Expression, Humans, Photic Stimulation
7.
Cogn Emot ; 36(4): 741-749, 2022 06.
Article in English | MEDLINE | ID: mdl-35175173

ABSTRACT

Social exclusion influences how expressions are perceived and the tendency of the perceiver to mimic them. However, less is known about social exclusion's effect on one's own facial expressions. The aim of the present study was to identify the effects of social exclusion on Duchenne smiling behaviour, defined as activity of both the zygomaticus major and the orbicularis oculi muscles. Utilising a within-subjects design, participants took part in the Cyberball Task, in which they were both included and excluded while facial electromyography was measured. We found that during the active experience of social exclusion, participants showed greater orbicularis oculi activation compared to the social inclusion condition. Further, we found that across both conditions, participants showed greater zygomaticus major activation the longer they engaged in the Cyberball Task. Order of condition also mattered, with those who experienced social exclusion before social inclusion showing the greatest overall muscle activation. These results are consistent with an affiliative function of smiling, particularly as social exclusion engaged activation of muscles associated with a Duchenne smile.


Subjects
Facial Muscles, Smiling, Electromyography, Facial Expression, Facial Muscles/physiology, Humans, Social Isolation
8.
Front Psychol ; 12: 612923, 2021.
Article in English | MEDLINE | ID: mdl-33716875

ABSTRACT

Previous research has demonstrated how emotion-resembling cues in the face help shape impression formation (i.e., emotion overgeneralization). Perhaps most notable in the literature to date has been work suggesting that gender-related appearance cues are visually confounded with certain stereotypic expressive cues (see Adams et al., 2015 for review). Only a couple of studies to date have used computer vision to directly map out and test facial structural resemblance to emotion expressions, using facial landmark coordinates to estimate face shape. In one study, a Bayesian network classifier trained to detect emotional expressions found that structural resemblance to a specific expression on a non-expressive (i.e., neutral) face influenced trait impressions of others (Said et al., 2009). In another study, a connectionist model trained to detect emotional expressions found different emotion-resembling cues in male vs. female faces (Zebrowitz et al., 2010). Despite this seminal work, direct evidence confirming the theoretical assertion that humans likewise utilize these emotion-resembling cues when forming impressions has been lacking. Across four studies, we replicate and extend these prior findings, using new advances in computer vision to examine gender-related, emotion-resembling structure, color, and texture (as well as their weighted combination) and their impact on gender-stereotypic impression formation. We show that all three (plus their combination) are meaningfully related to human impressions of emotionally neutral faces. Further, when applying the computer vision algorithms to experimentally manipulate faces, we show that humans derive similar impressions from them as did the computer.
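Structural resemblance from facial landmarks can be illustrated with a toy similarity metric. This is only a sketch of the idea: the studies cited above used trained classifiers (a Bayesian network, a connectionist model), not the cosine similarity assumed here, and the landmark sets are invented:

```python
import numpy as np

def shape_vector(landmarks):
    """Flatten and normalize (x, y) landmarks to remove scale/translation."""
    pts = np.asarray(landmarks, dtype=float)
    pts -= pts.mean(axis=0)     # center on the centroid
    pts /= np.linalg.norm(pts)  # unit scale
    return pts.ravel()

def resemblance(face, prototype):
    """Cosine similarity between two normalized shape vectors."""
    a, b = shape_vector(face), shape_vector(prototype)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-point 'faces': an anger prototype with lowered, inward brows.
neutral = [(0, 0), (1, 0), (0.2, 1), (0.8, 1)]
anger_proto = [(0, 0), (1, 0), (0.3, 0.8), (0.7, 0.8)]
print(resemblance(neutral, anger_proto))  # closer to 1 = stronger resemblance
```

A real analysis would use dense landmark sets (dozens of points) and a prototype averaged over many posed expressions.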

9.
Front Psychol ; 11: 264, 2020.
Article in English | MEDLINE | ID: mdl-32180750

ABSTRACT

The evolution of the human brain and visual system is widely believed to have been shaped by the need to process and make sense out of expressive information, particularly via the face. We are so attuned to expressive information in the face that it informs even stable trait inferences (e.g., Knutson, 1996) through a process we refer to here as the face-specific fundamental attribution error (Albohn et al., 2019). We even derive highly consistent beliefs about the emotional lives of others based on emotion-resembling facial appearance (e.g., low versus high brows, big versus small eyes, etc.) in faces we know are completely devoid of overt expression (i.e., emotion overgeneralization effect: see Zebrowitz et al., 2010). The present studies extend these insights to better understand lay beliefs about older and younger adults' emotion dispositions and their impact on behavioral outcomes. In Study 1, we found that older versus younger faces objectively have more negative emotion-resembling cues in the face (using computer vision), and that raters likewise attribute more negative emotional dispositions to older versus younger adults based just on neutral facial appearance (see too Adams et al., 2016). In Study 2, we found that people appear to encode these negative emotional appearance cues in memory more so for older than younger adult faces. Finally, in Study 3 we examine the downstream behavioral consequences of these negative attributions, showing that observers' avoidance of older versus younger faces is mediated by emotion-resembling facial appearance.

10.
J Vis ; 20(2): 9, 2020 02 10.
Article in English | MEDLINE | ID: mdl-32097485

ABSTRACT

The parallel pathways of the human visual system differ in their tuning to luminance, color, and spatial frequency. These attunements recently have been shown to propagate to differential processing of higher-order stimuli, facial threat cues, in the magnocellular (M) and parvocellular (P) pathways, with greater sensitivity to clear and ambiguous threat, respectively. The role of the third, koniocellular (K) pathway in facial threat processing, however, remains unknown. To address this gap in knowledge, we briefly presented peripheral face stimuli psychophysically biased towards M, P, or K pathways. Observers were instructed to report via a key-press whether the face was angry or neutral while their eye movements and manual responses were recorded. We found that short-latency saccades were made more frequently to faces presented in the K channel than to P or M channels. Saccade latencies were not significantly modulated by expressive and identity cues. In contrast, manual response latencies and accuracy were modulated by both pathway biasing and by interactions of facial expression with facial masculinity, such that angry male faces elicited the fastest, and angry female faces, the least accurate, responses. We conclude that face stimuli can evoke fast saccadic and manual responses when projected to the K pathway.


Subjects
Facial Expression, Saccades/physiology, Visual Pathways/physiology, Adult, Cues, Female, Humans, Male, Reaction Time/physiology, Young Adult
11.
Int J Comput Vis ; 128(1): 1-25, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33664553

ABSTRACT

Humans are arguably innately prepared to comprehend others' emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given the incomplete understanding of the relationship between emotional expressions and body movements. The current research, as a multidisciplinary effort among computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data for computers to learn to recognize body languages of humans. To accomplish this task, a large and growing annotated dataset with 9,876 video clips of body movements and 13,239 human characters, named Body Language Dataset (BoLD), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights. A system to model the emotional expressions based on bodily movements, named Automated Recognition of Bodily Expression of Emotion (ARBEE), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate computability of bodily expression. We report and compare results of several other baseline methods which were developed for action recognition based on two different modalities, body skeleton and raw image. The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding that will enable future robots to interact and collaborate more effectively with humans.

12.
Emotion ; 20(7): 1244-1254, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31259586

ABSTRACT

Individuals use naïve emotion theories, including stereotypical information on the emotional disposition of an interaction partner, to form social impressions. In view of an aging population in Western societies, beliefs on emotion and age become more and more relevant. Across 10 studies, we thus present findings on how individuals associate specific affective states with young and old adults using the emotion implicit association test. The results of the studies are summarized in 2 separate mini meta-analyses. Participants implicitly associated young adult individuals with positive emotions, that is, happiness and serenity, respectively, and old adult individuals with negative emotions, that is, sadness and anger, respectively (Mini Meta-Analysis 1). Within negative emotions, participants preferentially associated young adult individuals with sadness and old adult individuals with anger (Mini Meta-Analysis 2). Even though young and old adults are stereotypically associated with specific emotions, contextual factors influence which age-emotion stereotype is salient in a given context. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
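Implicit association strength in such studies is conventionally summarized with Greenwald's D-score, the latency difference between block pairings in pooled-standard-deviation units. A sketch on synthetic latencies (the authors' exact scoring procedure may differ):

```python
import numpy as np

def iat_d_score(rt_compatible, rt_incompatible):
    """Conventional IAT D-score: latency difference in pooled-SD units.

    Positive values mean slower responses in the 'incompatible' block,
    i.e., a stronger implicit association with the 'compatible' pairing.
    """
    rt_c = np.asarray(rt_compatible, dtype=float)
    rt_i = np.asarray(rt_incompatible, dtype=float)
    pooled_sd = np.concatenate([rt_c, rt_i]).std(ddof=1)
    return (rt_i.mean() - rt_c.mean()) / pooled_sd

# Synthetic latencies (ms): slower when young faces pair with negative words.
rng = np.random.default_rng(2)
compatible = rng.normal(700, 100, 60)    # e.g., young + positive block
incompatible = rng.normal(800, 100, 60)  # e.g., young + negative block
d = iat_d_score(compatible, incompatible)
print(round(d, 2))
```

A mini meta-analysis like those reported would then pool such D-scores across studies, weighting by sample size.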


Subjects
Emotions/physiology, Adolescent, Adult, Age Factors, Female, Humans, Male, Stereotyping, Young Adult
13.
IEEE Trans Affect Comput ; 10(1): 115-128, 2019.
Article in English | MEDLINE | ID: mdl-31576202

ABSTRACT

We proposed a probabilistic approach to joint modeling of participants' reliability and humans' regularity in crowdsourced affective studies. Reliability measures how likely a subject is to respond to a question seriously; regularity measures how often a human will agree with other seriously-entered responses coming from a targeted population. Crowdsourcing-based studies or experiments, which rely on human self-reported affect, pose additional challenges as compared with typical crowdsourcing studies that attempt to acquire concrete non-affective labels of objects. The reliability of participants has been extensively studied for typical non-affective crowdsourcing studies, whereas the regularity of humans in an affective experiment in its own right has not been thoroughly considered. It has often been observed that different individuals exhibit different feelings on the same test question, which does not have a sole correct response in the first place. High reliability of responses from one individual thus cannot conclusively result in high consensus across individuals. Instead, globally testing consensus of a population is of interest to investigators. Built upon the agreement multigraph among tasks and workers, our probabilistic model differentiates subject regularity from population reliability. We demonstrate the method's effectiveness for in-depth robust analysis of large-scale crowdsourced affective data, including emotion and aesthetic assessments collected by presenting visual stimuli to human subjects.
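The intuition behind worker-level agreement can be illustrated with a much cruder statistic than the paper's probabilistic model: raw pairwise agreement per worker on shared tasks. This sketch is only an illustration of that intuition, not the proposed method:

```python
import numpy as np

# Rows: workers; columns: tasks; entries are categorical affect labels.
labels = np.array([
    [0, 1, 1, 2, 0],
    [0, 1, 1, 2, 0],
    [0, 1, 2, 2, 0],
    [2, 0, 0, 1, 1],  # a worker who rarely agrees with the others
])

def mean_pairwise_agreement(labels):
    """Fraction of tasks on which each worker matches each other worker."""
    n = len(labels)
    agree = np.zeros(n)
    for i in range(n):
        others = [np.mean(labels[i] == labels[j]) for j in range(n) if j != i]
        agree[i] = np.mean(others)
    return agree

agree = mean_pairwise_agreement(labels)
print(agree)  # the last worker scores lowest
```

In affective data, a low score here could reflect either unreliability (careless responding) or genuine idiosyncrasy; disentangling the two is exactly what the paper's model is for.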

14.
Prog Brain Res ; 247: 71-87, 2019.
Article in English | MEDLINE | ID: mdl-31196444

ABSTRACT

Recently, speed of presentation of facially expressive stimuli was found to influence the processing of compound threat cues (e.g., anger/fear/gaze). For instance, greater amygdala responses were found to clear (e.g., direct gaze anger/averted gaze fear) versus ambiguous (averted gaze anger/direct gaze fear) combinations of threat cues when rapidly presented (33 and 300 ms), but greater to ambiguous versus clear threat cues when presented for more sustained durations (1, 1.5, and 2 s). A working hypothesis was put forth (Adams et al., 2012) that these effects were due to differential magnocellular versus parvocellular pathways contributions to the rapid versus sustained processing of threat, respectively. To test this possibility directly here, we restricted visual stream processing in the fMRI environment using facially expressive stimuli specifically designed to bias visual input exclusively to the magnocellular versus parvocellular pathways. We found that for magnocellular-biased stimuli, activations were predominantly greater to clear versus ambiguous threat-gaze pairs (on par with that previously found for rapid presentations of threat cues), whereas activations to ambiguous versus clear threat-gaze pairs were greater for parvocellular-biased stimuli (on par with that previously found for sustained presentations). We couch these findings in an adaptive dual process account of threat perception and highlight implications for other dual process models within psychology.


Subjects
Brain/physiology, Facial Expression, Fear/psychology, Adult, Amygdala/physiology, Cues, Female, Humans, Magnetic Resonance Imaging, Male, Nerve Net/physiology, Photic Stimulation/methods
15.
Soc Cogn Affect Neurosci ; 14(2): 151-162, 2019 02 13.
Article in English | MEDLINE | ID: mdl-30721981

ABSTRACT

Human faces evolved to signal emotions, with their meaning contextualized by eye gaze. For instance, a fearful expression paired with averted gaze clearly signals both presence of threat and its probable location. Conversely, direct gaze paired with facial fear leaves the source of the fear-evoking threat ambiguous. Given that visual perception occurs in parallel streams with different processing emphases, our goal was to test a recently developed hypothesis that clear and ambiguous threat cues would differentially engage the magnocellular (M) and parvocellular (P) pathways, respectively. We employed two-tone face images to characterize the neurodynamics evoked by stimuli that were biased toward M or P pathways. Human observers (N = 57) had to identify the expression of fearful or neutral faces with direct or averted gaze while their magnetoencephalogram was recorded. Phase locking between the amygdaloid complex, orbitofrontal cortex (OFC) and fusiform gyrus increased early (0-300 ms) for M-biased clear threat cues (averted-gaze fear) in the β-band (13-30 Hz) while P-biased ambiguous threat cues (direct-gaze fear) evoked increased θ (4-8 Hz) phase locking in connections with OFC of the right hemisphere. We show that M and P pathways are relatively more sensitive toward clear and ambiguous threat processing, respectively, and characterize the neurodynamics underlying emotional face processing in the M and P pathways.


Subjects
Amygdala/physiology, Emotions/physiology, Fear/physiology, Fixation, Ocular/physiology, Adult, Cues, Facial Expression, Fear/psychology, Female, Humans, Male, Visual Perception/physiology
16.
Exp Brain Res ; 237(4): 967-975, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30683957

ABSTRACT

Facial emotion is an important cue for deciding whether an individual is potentially helpful or harmful. However, facial expressions are inherently ambiguous and observers typically employ other cues to categorize emotion expressed on the face, such as race, sex, and context. Here, we explored the effect of increasing or reducing different types of uncertainty associated with a facial expression that is to be categorized. On each trial, observers responded according to the emotion and location of a peripherally presented face stimulus and were provided with either: (1) no information about the upcoming face; (2) its location; (3) its expressed emotion; or (4) both its location and emotion. While cueing emotion or location resulted in faster response times than cueing unpredictive information, cueing face emotion alone resulted in faster responses than cueing face location alone. Moreover, cueing both stimulus location and emotion resulted in a superadditive reduction of response times compared with cueing location or emotion alone, suggesting that feature-based attention to emotion and spatially selective attention interact to facilitate perception of face stimuli. While categorization of facial expressions was significantly affected by stable identity cues (sex and race) in the face, we found that these interactions were eliminated when uncertainty about facial expression, but not spatial uncertainty about stimulus location, was reduced by predictive cueing. This demonstrates that feature-based attention to facial expression greatly attenuates the need to rely on stable identity cues to interpret facial emotion.


Subjects
Attention/physiology, Emotions/physiology, Facial Expression, Facial Recognition/physiology, Social Perception, Space Perception/physiology, Adolescent, Adult, Female, Humans, Male, Young Adult
17.
Emotion ; 19(2): 209-218, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29756792

ABSTRACT

Through 3 studies, we investigated whether angularity and roundness present in faces contribute to the perception of anger and joyful expressions, respectively. First, in Study 1 we found that angry expressions naturally contain more inward-pointing lines, whereas joyful expressions contain more outward-pointing lines. Then, using image-processing techniques in Studies 2 and 3, we filtered images to contain only inward-pointing or outward-pointing lines as a way to approximate angularity and roundness. We found that filtering images to be more angular increased how threatening and angry a neutral face was rated, increased how intense angry expressions were rated, and enhanced the recognition of anger. Conversely, filtering images to be rounder increased how warm and joyful a neutral face was rated, increased the intensity of joyful expressions, and enhanced recognition of joy. Together these findings show that angularity and roundness play a direct role in the recognition of angry and joyful expressions. Given evidence that angularity and roundness may play a biological role in indicating threat and safety in the environment, this suggests that angularity and roundness represent primitive facial cues used to signal threat-anger and warmth-joy pairings. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subjects
Anger, Cues, Face/anatomy & histology, Facial Expression, Facial Recognition, Happiness, Adult, Female, Humans, Male, Social Perception
18.
Front Psychol ; 9: 1509, 2018.
Article in English | MEDLINE | ID: mdl-30197614

ABSTRACT

The present study examined how emotional fit with culture - the degree of similarity between an individual's emotional response and the emotional responses of others from the same culture - relates to well-being in a sample of Asian American and European American college students. Using a profile correlation method, we calculated three types of emotional fit based on self-reported emotions, facial expressions, and physiological responses. We then examined the relationships between emotional fit and individual well-being (depression, life satisfaction) as well as collective aspects of well-being, namely collective self-esteem (one's evaluation of one's cultural group) and identification with one's group. The results revealed that self-report emotional fit was associated with greater individual well-being across cultures. In contrast, culture moderated the relationship between self-report emotional fit and collective self-esteem, such that emotional fit predicted greater collective self-esteem in Asian Americans, but not in European Americans. Behavioral emotional fit was unrelated to well-being. There was a marginally significant cultural moderation in the relationship between physiological emotional fit in a strong emotional situation and group identification. Specifically, physiological emotional fit predicted greater group identification in Asian Americans, but not in European Americans. However, this finding disappeared after a Bonferroni correction. The current finding extends previous research by showing that, while emotional fit may be closely related to individual aspects of well-being across cultures, the influence of emotional fit on collective aspects of well-being may be unique to cultures that emphasize interdependence, social harmony, and thus alignment with other members of the group.
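A profile-correlation measure of emotional fit can be sketched as the correlation between one person's emotion-rating profile and the mean profile of the rest of their group. The leave-one-out averaging and all numbers below are assumptions for illustration, not the study's procedure or data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_emotions = 30, 10

# Ratings of n_emotions items per person, clustered around a shared profile.
group_profile = rng.uniform(1, 7, n_emotions)
ratings = group_profile + rng.normal(0, 1.0, size=(n_people, n_emotions))

def emotional_fit(ratings):
    """Correlate each person's profile with the group mean excluding them."""
    fits = []
    for i in range(len(ratings)):
        others_mean = np.delete(ratings, i, axis=0).mean(axis=0)
        fits.append(np.corrcoef(ratings[i], others_mean)[0, 1])
    return np.array(fits)

fits = emotional_fit(ratings)
print(fits.mean())  # high values: responses track the cultural average
```

The same computation applied to facial-expression or physiological measures would yield the behavioral and physiological fit indices the abstract distinguishes.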

19.
Iperception ; 9(1): 2041669518755806, 2018.
Article in English | MEDLINE | ID: mdl-29774139

ABSTRACT

Previous work using color photographic scenes has shown that human observers are keenly sensitive to different types of threatening and negative stimuli and reliably classify them by the presence, and spatial and temporal directions of threat. To test whether such distinctions can be extracted from impoverished visual information, we used 500 line drawings made by hand-tracing the original set of photographic scenes. Sixty participants rated the scenes on spatial and temporal dimensions of threat. Based on these ratings, trend analysis revealed five scene categories that were comparable to those identified for the matching color photographic scenes. Another 61 participants were randomly assigned to rate the valence or arousal evoked by the line drawings. The line drawings perceived to be the most negative were also perceived to be the most arousing, replicating the finding for color photographic scenes. We demonstrate here that humans are very sensitive to the spatial and temporal directions of threat even when they must extract this information from simple line drawings, and rate the line drawings very similarly to matched color photographs. The set of 500 hand-traced line-drawing scenes has been made freely available to the research community: http://www.kveragalab.org/threat.html.

20.
Hum Brain Mapp ; 39(7): 2725-2741, 2018 07.
Article in English | MEDLINE | ID: mdl-29520882

ABSTRACT

During face perception, we integrate facial expression and eye gaze to take advantage of their shared signals. For example, fear with averted gaze provides a congruent avoidance cue, signaling both threat presence and its location, whereas fear with direct gaze sends an incongruent cue, leaving threat location ambiguous. It has been proposed that the processing of different combinations of threat cues is mediated by dual processing routes: reflexive processing via magnocellular (M) pathway and reflective processing via parvocellular (P) pathway. Because growing evidence has identified a variety of sex differences in emotional perception, here we also investigated how M and P processing of fear and eye gaze might be modulated by observer's sex, focusing on the amygdala, a structure important to threat perception and affective appraisal. We adjusted luminance and color of face stimuli to selectively engage M or P processing and asked observers to identify emotion of the face. Female observers showed more accurate behavioral responses to faces with averted gaze and greater left amygdala reactivity both to fearful and neutral faces. Conversely, males showed greater right amygdala activation only for M-biased averted-gaze fear faces. In addition to functional reactivity differences, females had proportionately greater bilateral amygdala volumes, which positively correlated with behavioral accuracy for M-biased fear. Conversely, in males only the right amygdala volume was positively correlated with accuracy for M-biased fear faces. Our findings suggest that M and P processing of facial threat cues is modulated by functional and structural differences in the amygdalae associated with observer's sex.


Subjects
Amygdala/physiology, Brain Mapping/methods, Facial Expression, Facial Recognition/physiology, Fear/physiology, Sex Characteristics, Social Perception, Adult, Female, Fixation, Ocular/physiology, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Young Adult