Results 1 - 20 of 4,385
1.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is an aspect critical to human social interaction and research has identified many cues that help in the assimilation of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness; a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness extending into multimodality. Further, the mean pitch of the voice and fWHR of the face appeared to be useful indicators in a multimodal setting. These effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Subject(s)
Face; Trust; Voice; Humans; Female; Voice/physiology; Young Adult; Adult; Face/physiology; Speech Perception/physiology; Pitch Perception/physiology; Facial Recognition/physiology; Cues; Adolescent
2.
PLoS One ; 19(5): e0303400, 2024.
Article in English | MEDLINE | ID: mdl-38739635

ABSTRACT

Visual abilities tend to vary predictably across the visual field: for simple low-level stimuli, visibility is better along the horizontal vs. vertical meridian and in the lower vs. upper visual field. In contrast, face perception abilities have been reported to show either distinct or entirely idiosyncratic patterns of variation in peripheral vision, suggesting a dissociation between the spatial properties of low- and higher-level vision. To assess this link more clearly, we extended methods used in low-level vision to develop an acuity test for face perception, measuring the smallest size at which facial gender can be reliably judged in peripheral vision. In 3 experiments, we show the characteristic inversion effect, with better acuity for upright faces than inverted, demonstrating the engagement of high-level face-selective processes in peripheral vision. We also observe a clear advantage for gender acuity on the horizontal vs. vertical meridian and a smaller-but-consistent lower- vs. upper-field advantage. These visual field variations match those of low-level vision, indicating that higher-level face processing abilities either inherit or actively maintain the characteristic patterns of spatial selectivity found in early vision. The commonality of these spatial variations throughout the visual hierarchy means that the location of faces in our visual field systematically influences our perception of them.


Subject(s)
Facial Recognition; Visual Fields; Humans; Visual Fields/physiology; Female; Male; Adult; Facial Recognition/physiology; Young Adult; Photic Stimulation; Visual Perception/physiology; Visual Acuity/physiology; Face/physiology
3.
Sci Rep ; 14(1): 10304, 2024 05 05.
Article in English | MEDLINE | ID: mdl-38705917

ABSTRACT

Understanding neurogenetic mechanisms underlying neuropsychiatric disorders such as schizophrenia and autism is complicated by their inherent clinical and genetic heterogeneity. Williams syndrome (WS), a rare neurodevelopmental condition in which both the genetic alteration (hemideletion of ~26 genes at 7q11.23) and the cognitive/behavioral profile are well-defined, offers an invaluable opportunity to delineate gene-brain-behavior relationships. People with WS are characterized by increased social drive, including particular interest in faces, together with hallmark difficulty in visuospatial processing. Prior work, primarily in adults with WS, has searched for neural correlates of these characteristics, with reports of altered fusiform gyrus function while viewing socioemotional stimuli such as faces, along with hypoactivation of the intraparietal sulcus during visuospatial processing. Here, we investigated neural function in children and adolescents with WS by using four separate fMRI paradigms, two that probe each of these two cognitive/behavioral domains. During the two visuospatial tasks, but not during the two face processing tasks, we found bilateral intraparietal sulcus hypoactivation in WS. In contrast, during both face processing tasks, but not during the visuospatial tasks, we found fusiform hyperactivation. These data not only demonstrate that previous findings in adults with WS are also present in childhood and adolescence, but also provide a clear example that genetic mechanisms can bias neural circuit function, thereby affecting behavioral traits.


Subject(s)
Magnetic Resonance Imaging; Williams Syndrome; Humans; Williams Syndrome/physiopathology; Williams Syndrome/genetics; Williams Syndrome/diagnostic imaging; Magnetic Resonance Imaging/methods; Adolescent; Child; Female; Male; Brain Mapping/methods; Brain/diagnostic imaging; Brain/physiopathology; Face; Facial Recognition/physiology; Parietal Lobe/physiopathology; Parietal Lobe/diagnostic imaging; Space Perception/physiology
4.
Sci Rep ; 14(1): 10040, 2024 05 02.
Article in English | MEDLINE | ID: mdl-38693189

ABSTRACT

Investigation of visual illusions helps us understand how we process visual information. For example, face pareidolia, the misperception of illusory faces in objects, could be used to understand how we process real faces. However, it remains unclear whether this illusion emerges from errors in face detection or from slower, cognitive processes. Here, our logic is straightforward: if examples of face pareidolia activate the mechanisms that rapidly detect faces in visual environments, then participants will look at objects more quickly when the objects also contain illusory faces. To test this hypothesis, we sampled continuous eye movements during a fast saccadic choice task in which participants were required to select either faces or food items. During this task, pairs of stimuli were positioned close to the initial fixation point or further away, in the periphery. As expected, the participants were faster to look at face targets than food targets. Importantly, we also discovered an advantage for food items with illusory faces, but this advantage was limited to the peripheral condition. These findings are among the first to demonstrate that the face pareidolia illusion persists in the periphery and, thus, it is likely to be a consequence of erroneous face detection.


Subject(s)
Illusions; Humans; Female; Male; Adult; Illusions/physiology; Young Adult; Visual Perception/physiology; Photic Stimulation; Face/physiology; Facial Recognition/physiology; Eye Movements/physiology; Pattern Recognition, Visual/physiology
5.
Cereb Cortex ; 34(13): 172-186, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696606

ABSTRACT

Individuals with autism spectrum disorder (ASD) experience pervasive difficulties in processing social information from faces. However, the behavioral and neural mechanisms underlying social trait judgments of faces in ASD remain largely unclear. Here, we comprehensively addressed this question by employing functional neuroimaging and parametrically generated faces that vary in facial trustworthiness and dominance. Behaviorally, participants with ASD exhibited reduced specificity but increased inter-rater variability in social trait judgments. Neurally, participants with ASD showed hypo-activation across broad face-processing areas. Multivariate analysis based on trial-by-trial face responses could discriminate participant groups in the majority of the face-processing areas. Encoding social traits in ASD engaged vastly different face-processing areas compared to controls, and encoding different social traits engaged different brain areas. Interestingly, the idiosyncratic brain areas encoding social traits in ASD were still flexible and context-dependent, similar to neurotypicals. Additionally, participants with ASD also showed an altered encoding of facial saliency features in the eyes and mouth. Together, our results provide a comprehensive understanding of the neural mechanisms underlying social trait judgments in ASD.


Subject(s)
Autism Spectrum Disorder; Brain; Facial Recognition; Magnetic Resonance Imaging; Social Perception; Humans; Autism Spectrum Disorder/physiopathology; Autism Spectrum Disorder/diagnostic imaging; Autism Spectrum Disorder/psychology; Male; Female; Adult; Young Adult; Facial Recognition/physiology; Brain/physiopathology; Brain/diagnostic imaging; Judgment/physiology; Brain Mapping; Adolescent
6.
J Psychiatry Neurosci ; 49(3): E145-E156, 2024.
Article in English | MEDLINE | ID: mdl-38692692

ABSTRACT

BACKGROUND: Neuroimaging studies have revealed abnormal functional interaction during the processing of emotional faces in patients with major depressive disorder (MDD), thereby enhancing our comprehension of the pathophysiology of MDD. However, it is unclear whether there is abnormal directional interaction among face-processing systems in patients with MDD. METHODS: A group of patients with MDD and a healthy control group underwent a face-matching task during functional magnetic resonance imaging. Dynamic causal modelling (DCM) analysis was used to investigate effective connectivity between 7 regions in the face-processing systems. We used a Parametric Empirical Bayes model to compare effective connectivity between patients with MDD and controls. RESULTS: We included 48 patients and 44 healthy controls in our analyses. Both groups showed higher accuracy and faster reaction time in the shape-matching condition than in the face-matching condition. However, no significant behavioural or brain activation differences were found between the groups. Using DCM, we found that, compared with controls, patients with MDD showed decreased self-connection in the right dorsolateral prefrontal cortex (DLPFC), amygdala, and fusiform face area (FFA) across task conditions; increased intrinsic connectivity from the right amygdala to the bilateral DLPFC, right FFA, and left amygdala, suggesting an increased intrinsic connectivity centred in the amygdala in the right side of the face-processing systems; both increased and decreased positive intrinsic connectivity in the left side of the face-processing systems; and comparable task modulation effect on connectivity. LIMITATIONS: Our study did not include longitudinal neuroimaging data, and there was limited region of interest selection in the DCM analysis. CONCLUSION: Our findings provide evidence for a complex pattern of alterations in the face-processing systems in patients with MDD, potentially involving the right amygdala to a greater extent. The results confirm some previous findings and highlight the crucial role of the regions on both sides of face-processing systems in the pathophysiology of MDD.


Subject(s)
Amygdala; Depressive Disorder, Major; Facial Recognition; Magnetic Resonance Imaging; Humans; Depressive Disorder, Major/physiopathology; Depressive Disorder, Major/diagnostic imaging; Male; Female; Adult; Facial Recognition/physiology; Amygdala/diagnostic imaging; Amygdala/physiopathology; Brain/diagnostic imaging; Brain/physiopathology; Neural Pathways/physiopathology; Neural Pathways/diagnostic imaging; Bayes Theorem; Young Adult; Brain Mapping; Facial Expression; Middle Aged; Reaction Time/physiology
7.
J Exp Psychol Hum Percept Perform ; 50(6): 570-586, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38635225

ABSTRACT

Theoretical understanding of first impressions from faces has been closely associated with the proposal that rapid approach-avoidance decisions are needed during social interactions. Nevertheless, experimental work has rarely examined first impressions of people who are actually moving, instead extrapolating from photographic images. In six experiments, we describe the relationship between social attributions (dominance and trustworthiness) and the motion and apparent intent of a perceived person. We first show strong correspondence between judgments of photos and avatars of the same people (Experiment 1). Avatars were rated as more dominant and trustworthy when walking toward the viewer than when stationary (Experiment 2). Furthermore, avatars approaching the viewer were rated as more dominant than those avoiding (walking past) the viewer, or remaining stationary (Experiment 3). Trustworthiness was increased by movement, but not affected by approaching/avoiding paths. Surprisingly, dominance ratings increased both when avatars were approaching and being approached (Experiments 4-6), independently of agency. However, diverging movement (moving backward) reduced dominance ratings, again independently of agency (Experiment 6). These results demonstrate the close link between dominance judgments and approach and show the updatable nature of first impressions: their formation depended on the immediate dynamic context in a more subtle manner than previously suggested. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Facial Recognition; Social Perception; Humans; Adult; Male; Female; Young Adult; Facial Recognition/physiology; Trust; Social Interaction; Judgment/physiology; Motion Perception/physiology
8.
Cogn Emot ; 38(3): 296-314, 2024 05.
Article in English | MEDLINE | ID: mdl-38678446

ABSTRACT

Social exclusion is an emotionally painful experience that leads to various alterations in socio-emotional processing. The perceptual and emotional consequences that may arise from experiencing social exclusion can vary depending on the paradigm used to manipulate it. Exclusion paradigms vary in the severity and duration of the exclusion experience they induce, which can accordingly be classified as short-term or long-term. The present study aimed to examine the impact of exclusion on socio-emotional processing using different paradigms, one producing an experience of short-term exclusion and one prompting participants to imagine long-term exclusion. Ambiguous facial emotions were used as socio-emotional cues. In study 1, the Ostracism Online paradigm was used to manipulate short-term exclusion. In study 2, a new sample of participants imagined long-term exclusion through the future life alone paradigm. Participants of both studies then completed a facial emotion recognition task consisting of morphed ambiguous facial emotions. By means of Point of Subjective Equivalence analyses, our results indicate that the experience of short-term exclusion hinders recognising happy facial expressions. In contrast, imagining long-term exclusion causes difficulties in recognising sad facial expressions. These findings extend the current literature, suggesting that not all social exclusion paradigms affect socio-emotional processing similarly.
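
As an illustration of the Point of Subjective Equivalence (PSE) analyses mentioned above, the sketch below fits a logistic psychometric function to hypothetical morph-level response data; the function form, morph levels, and response proportions are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch: estimating a PSE from morphed-expression responses,
# assuming a logistic psychometric function. All values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Probability of a 'happy' response as a function of morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Morph continuum from 0 (sad) to 1 (happy), with hypothetical
# per-level proportions of 'happy' responses for one participant.
morph_levels = np.linspace(0.0, 1.0, 7)
p_happy = np.array([0.02, 0.05, 0.20, 0.45, 0.80, 0.95, 0.99])

(pse, slope), _ = curve_fit(logistic, morph_levels, p_happy, p0=[0.5, 10.0])
# A PSE above 0.5 would mean more 'happy' signal is needed before the
# face is judged happy, i.e. hindered recognition of happiness.
print(f"PSE = {pse:.3f}")
```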


Subject(s)
Emotions; Facial Expression; Humans; Female; Male; Young Adult; Adult; Facial Recognition; Psychological Distance; Social Isolation/psychology; Recognition, Psychology; Adolescent
9.
Zhejiang Da Xue Xue Bao Yi Xue Ban ; 53(2): 254-260, 2024 Apr 25.
Article in English, Chinese | MEDLINE | ID: mdl-38650447

ABSTRACT

Attention deficit hyperactivity disorder (ADHD) is a chronic neurodevelopmental disorder characterized by inattention, hyperactivity-impulsivity, and working memory deficits. Social dysfunction is one of the major challenges faced by children with ADHD. It has been found that children with ADHD cannot perform as well as typically developing children on facial expression recognition (FER) tasks. Generally, children with ADHD have some difficulties in FER, although some studies suggest that their accuracy for specific emotions does not differ significantly from that of typically developing children. The neuropsychological mechanisms underlying these difficulties are as follows. First, neuroanatomically: compared to typically developing children, children with ADHD show smaller gray matter volume and surface area in the amygdala and medial prefrontal cortex regions, as well as reduced density and volume of axons/cells in certain frontal white matter fiber tracts. Second, neurophysiologically: children with ADHD exhibit increased slow-wave activity in their electroencephalogram, and event-related potential studies reveal abnormalities in emotional regulation and responses to angry faces when facing facial stimuli. Third, psychologically: psychosocial stressors may influence FER abilities in children with ADHD, and sleep deprivation in ADHD children may significantly increase their recognition threshold for negative expressions such as sadness and anger. This article reviews research progress over the past three years on FER abilities of children with ADHD, analyzing the FER deficit in children with ADHD from three dimensions: neuroanatomy, neurophysiology, and psychology, aiming to provide new perspectives for further research and clinical treatment of ADHD.


Subject(s)
Attention Deficit Disorder with Hyperactivity; Facial Expression; Humans; Attention Deficit Disorder with Hyperactivity/physiopathology; Attention Deficit Disorder with Hyperactivity/psychology; Child; Facial Recognition/physiology; Emotions
10.
BMC Psychiatry ; 24(1): 307, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654234

ABSTRACT

BACKGROUND: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a chronic breathing disorder characterized by recurrent upper airway obstruction during sleep. Although previous studies have shown a link between OSAHS and depressive mood, the neurobiological mechanisms underlying mood disorders in OSAHS patients remain poorly understood. This study aims to investigate the emotion processing mechanism in OSAHS patients with depressive mood using event-related potentials (ERPs). METHODS: Seventy-four OSAHS patients were divided into the depressive mood and non-depressive mood groups according to their Self-rating Depression Scale (SDS) scores. Patients underwent overnight polysomnography and completed various cognitive and emotional questionnaires. The patients were shown facial images displaying positive, neutral, and negative emotions and tasked to identify the emotion category, while their visual evoked potential was simultaneously recorded. RESULTS: The two groups did not differ significantly in age, BMI, or years of education, but showed significant differences in slow wave sleep ratio (P = 0.039), ESS (P = 0.006), MMSE (P < 0.001), and MOCA scores (P = 0.043). No significant difference was found in accuracy or response time for emotional face recognition between the two groups. N170 latency in the depressive group was significantly longer than in the non-depressive group at the bilateral parieto-occipital lobes (P = 0.014 and 0.007), while no significant difference in N170 amplitude was found. No significant difference in P300 amplitude or latency was found between the two groups. Furthermore, N170 amplitude at PO7 was positively correlated with the arousal index and negatively correlated with MOCA scores (both P < 0.01). CONCLUSION: OSAHS patients with depressive mood exhibit increased N170 latency and impaired facial emotion recognition ability. Special attention to depressive mood among OSAHS patients is warranted given its implications for patient care.
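
For readers unfamiliar with the N170 latency measure reported here, the following minimal sketch shows one common way a peak latency is extracted from an ERP waveform; the synthetic waveform, sampling rate, and search window are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch: extracting an N170 peak latency from a single-channel
# ERP with plain NumPy, assuming 500 Hz sampling and a 100 ms baseline.
import numpy as np

sfreq = 500.0                             # sampling rate in Hz
times = np.arange(-0.1, 0.5, 1 / sfreq)   # seconds relative to face onset
# Synthetic parieto-occipital ERP: a negative deflection near 170 ms.
erp = -4e-6 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
erp += np.random.default_rng(0).normal(0, 2e-7, times.size)

# Search for the most negative sample inside the N170 window (130-200 ms).
window = (times >= 0.13) & (times <= 0.20)
peak_idx = np.flatnonzero(window)[np.argmin(erp[window])]
print(f"N170 latency: {times[peak_idx] * 1000:.1f} ms, "
      f"amplitude: {erp[peak_idx] * 1e6:.2f} uV")
```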


Subject(s)
Depression; Emotions; Sleep Apnea, Obstructive; Humans; Male; Middle Aged; Sleep Apnea, Obstructive/physiopathology; Sleep Apnea, Obstructive/psychology; Sleep Apnea, Obstructive/complications; Depression/physiopathology; Depression/psychology; Depression/complications; Female; Adult; Emotions/physiology; Polysomnography; Evoked Potentials/physiology; Electroencephalography; Facial Recognition/physiology; Evoked Potentials, Visual/physiology; Facial Expression
11.
Brain Res Bull ; 211: 110946, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38614407

ABSTRACT

Post-traumatic stress disorder (PTSD) is associated with abnormalities in the processing and regulation of emotion as well as cognitive deficits. This study evaluated the differential brain activation patterns associated with cognitive and emotional distractors during working memory (WM) maintenance for human faces between patients with PTSD and healthy controls (HCs), and assessed the relationship between distractor-type-related changes in activation patterns and gray matter volume (GMV). Twenty-two patients with PTSD and twenty-two HCs underwent T1-weighted magnetic resonance imaging (MRI) and event-related functional MRI (fMRI), respectively. Event-related fMRI data were recorded while subjects performed a delayed-response WM task with human face and trauma-related distractors. Compared to the HCs, the patients with PTSD showed significantly reduced GMV of the inferior frontal gyrus (IFG) (p < 0.05, FWE-corrected). For the human face distractor trial, the patients showed significantly decreased activities in the superior frontal gyrus and IFG compared with HCs (p < 0.05, FWE-corrected). Compared with HCs, the patients showed lower accuracy and slower reaction times on the face recognition task with trauma-related distractors, as well as significantly increased brain activity in the superior temporal gyrus (STG) during the trauma-related distractor trial (p < 0.05, FWE-corrected). Such differential brain activation patterns associated with the effects of distraction in PTSD patients may be linked to neural mechanisms underlying impairments in both cognitive control over confusable distractors and the ability to control emotional distraction.


Subject(s)
Brain; Emotions; Magnetic Resonance Imaging; Memory, Short-Term; Stress Disorders, Post-Traumatic; Humans; Stress Disorders, Post-Traumatic/physiopathology; Stress Disorders, Post-Traumatic/diagnostic imaging; Stress Disorders, Post-Traumatic/pathology; Male; Memory, Short-Term/physiology; Adult; Female; Emotions/physiology; Brain/physiopathology; Brain/diagnostic imaging; Brain/pathology; Cognition/physiology; Brain Mapping; Young Adult; Facial Recognition/physiology; Reaction Time/physiology; Middle Aged; Gray Matter/diagnostic imaging; Gray Matter/pathology; Gray Matter/physiopathology; Attention/physiology
12.
Article in English | MEDLINE | ID: mdl-38607744

ABSTRACT

The purpose of this work is to analyze how new technologies can enhance clinical practice while also examining the physical traits of emotional expressiveness in facial expressions across a number of psychiatric illnesses. Hence, in this work, an automatic facial expression recognition system has been proposed that analyzes static, sequential, or video facial images from medical healthcare data to detect emotions in people's facial regions. The proposed method has been implemented in five steps. The first step is image preprocessing, where a facial region of interest has been segmented from the input image. The second component includes a classical deep feature representation and the quantum part that involves successive sets of quantum convolutional layers followed by random quantum variational circuits for feature learning. Here, the proposed system has attained a faster training approach using the proposed quantum convolutional neural network approach that takes [Formula: see text] time. In contrast, the classical convolutional neural network models have [Formula: see text] time. Additionally, some performance improvement techniques, such as image augmentation, fine-tuning, matrix normalization, and transfer learning methods, have been applied to the recognition system. Finally, the scores due to classical and quantum deep learning models are fused to improve the performance of the proposed method. Extensive experiments on the Karolinska Directed Emotional Faces (KDEF), Static Facial Expressions in the Wild (SFEW 2.0), and Facial Expression Recognition 2013 (FER-2013) benchmark databases, including comparisons with other state-of-the-art methods, demonstrate the improvement achieved by the proposed system.
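
The final fusion step is the most readily illustrated part of this pipeline. Below is a minimal sketch of late score fusion under the assumption of a simple weighted average of class probabilities; the weight, class count, and score values are hypothetical, since the abstract does not specify the exact fusion rule.

```python
# Hedged sketch of late score fusion: softmax scores from a classical CNN
# and a quantum model are combined by a weighted average (assumed rule).
import numpy as np

def fuse_scores(classical_scores, quantum_scores, alpha=0.5):
    """Weighted average of two per-class probability distributions."""
    fused = alpha * classical_scores + (1 - alpha) * quantum_scores
    return fused / fused.sum(axis=-1, keepdims=True)  # renormalize

# Hypothetical 7-class emotion probabilities for one face image.
classical = np.array([0.05, 0.10, 0.60, 0.05, 0.05, 0.10, 0.05])
quantum = np.array([0.10, 0.05, 0.45, 0.15, 0.05, 0.10, 0.10])
print(fuse_scores(classical, quantum).argmax())  # predicted emotion index
```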


Subject(s)
Facial Recognition; Mental Health; Humans; Benchmarking; Databases, Factual; Neural Networks, Computer
13.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679483

ABSTRACT

Prior research has yet to fully elucidate the impact of varying relative saliency between target and distractor on attentional capture and suppression, along with their underlying neural mechanisms, especially when social (e.g. face) and perceptual (e.g. color) information interchangeably serve as singleton targets or distractors, competing for attention in a search array. Here, we employed an additional singleton paradigm to investigate the effects of relative saliency on attentional capture (as assessed by N2pc) and suppression (as assessed by PD) of color or face singleton distractors in a visual search task by recording event-related potentials. We found that face singleton distractors with higher relative saliency induced stronger attentional processing. Furthermore, enhancing the physical salience of colors using a bold color ring could enhance attentional processing toward color singleton distractors. Reducing the physical salience of facial stimuli by blurring weakened attentional processing toward face singleton distractors; however, blurring enhanced attentional processing toward color singleton distractors because of the change in relative saliency. In conclusion, the attentional processes of singleton distractors are affected by their relative saliency to singleton targets, with higher relative saliency of singleton distractors resulting in stronger attentional capture and suppression; faces, however, exhibit some specificity in attentional capture and suppression due to high social saliency.
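
As background for the N2pc and PD measures used here, the sketch below computes the standard contralateral-minus-ipsilateral difference wave at PO7/PO8 from which both components are conventionally read out; the array shapes and simulated data are illustrative assumptions, not the study's data.

```python
# Minimal sketch: contralateral-minus-ipsilateral difference wave at
# PO7/PO8, the standard way N2pc (capture) and PD (suppression) are
# isolated. All data and the trial layout below are simulated placeholders.
import numpy as np

def contra_minus_ipsi(po7, po8, singleton_side):
    """po7, po8: (n_trials, n_times) ERP arrays; singleton_side: per-trial
    'left'/'right' location of the lateral singleton."""
    left = np.asarray(singleton_side) == "left"
    contra = np.where(left[:, None], po8, po7)  # channel opposite the singleton
    ipsi = np.where(left[:, None], po7, po8)    # channel on the same side
    return (contra - ipsi).mean(axis=0)         # average difference wave

rng = np.random.default_rng(1)
po7 = rng.normal(size=(120, 300))               # 120 trials x 300 samples
po8 = rng.normal(size=(120, 300))
side = rng.choice(["left", "right"], size=120)
diff_wave = contra_minus_ipsi(po7, po8, side)
# A negative deflection ~200-300 ms indexes N2pc; a positive one indexes PD.
print(diff_wave.shape)
```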


Subject(s)
Attention; Color Perception; Electroencephalography; Evoked Potentials; Humans; Attention/physiology; Female; Male; Young Adult; Evoked Potentials/physiology; Adult; Color Perception/physiology; Photic Stimulation/methods; Facial Recognition/physiology; Pattern Recognition, Visual/physiology; Brain/physiology
14.
J Pers Soc Psychol ; 126(3): 390-412, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38647440

ABSTRACT

There is abundant evidence that emotion categorization is influenced by the social category membership of target faces, with target sex and target race modulating the ease with which perceivers can categorize happy and angry emotional expressions. However, theoretical interpretation of these findings is constrained by gender and race imbalances in both the participant samples and target faces typically used when demonstrating these effects (e.g., most participants have been White women and most Black targets have been men). Across seven experiments, the current research used gender-matched samples (Experiments 1a and 1b), gender- and racial identity-matched samples (Experiments 2a and 2b), and manipulations of social context (Experiments 3a, 3b, and 4) to establish whether emotion categorization is influenced by interactions between the social category membership of perceivers and target faces. Supporting this idea, we found the presence and size of the happy face advantage were influenced by interactions between perceivers and target social categories, with reliable happy face advantages in reaction times for ingroup targets but not necessarily for outgroup targets. White targets and female targets were the only categories associated with a reliable happy face advantage that was independent of perceiver category. The interactions between perceiver and target social category were eliminated when targets were blocked by social category (e.g., a block of all White female targets; Experiments 3a and 3b) and accentuated when targets were associated with additional category information (i.e., ingroup/outgroup nationality; Experiment 4). These findings support the possibility that contextually sensitive intergroup processes influence emotion categorization. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Emotions; Facial Expression; Facial Recognition; Group Processes; Happiness; Social Perception; Humans; Female; Male; Adult; Young Adult; Social Identification
15.
J Exp Psychol Gen ; 153(5): 1374-1387, 2024 May.
Article in English | MEDLINE | ID: mdl-38647481

ABSTRACT

A subcortical pathway is thought to have evolved to facilitate fear information transmission, but direct evidence for its existence in humans is lacking. In recent years, rapid, preattentive, and preconscious fear processing has been demonstrated, providing indirect support for the existence of the subcortical pathway by challenging the necessity of canonical cortical pathways in fear processing. However, direct support also requires evidence for the involvement of subcortical regions in fear processing. To address this issue, here we investigate whether fear processing reflects the characteristics of the subcortical structures in the hypothesized subcortical pathway. Using a monocular/dichoptic paradigm, Experiment 1 demonstrated a same-eye advantage for fearful but not neutral face processing, suggesting that fear processing relied on monocular neurons existing mainly in the subcortex. Experiments 2 and 3 further showed insensitivity to short-wavelength stimuli and a nasal-temporal hemifield asymmetry in fear processing, both of which were functional characteristics of the superior colliculus, a key hub of the subcortical pathway. Furthermore, all three experiments revealed a low spatial frequency selectivity of fear processing, consistent with magnocellular input via subcortical neurons. These results suggest a selective involvement of subcortical structures in fear processing, which, together with the indirect evidence for automatic fear processing, provides a more complete picture of the existence of a subcortical pathway for fear processing in humans. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Facial Expression; Facial Recognition; Fear; Humans; Fear/physiology; Male; Female; Adult; Young Adult; Facial Recognition/physiology; Superior Colliculi/physiology
16.
Nat Commun ; 15(1): 3407, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649694

ABSTRACT

The perception and neural processing of sensory information are strongly influenced by prior expectations. The integration of prior and sensory information can manifest through distinct underlying mechanisms: focusing on unexpected input, denoted as prediction error (PE) processing, or amplifying anticipated information via sharpened representation. In this study, we employed computational modeling using deep neural networks combined with representational similarity analyses of fMRI data to investigate these two processes during face perception. Participants were cued to see face images, some generated by morphing two faces, leading to ambiguity in face identity. We show that expected faces were identified faster and perception of ambiguous faces was shifted towards priors. Multivariate analyses uncovered evidence for PE processing across and beyond the face-processing hierarchy from the occipital face area (OFA), via the fusiform face area, to the anterior temporal lobe, and suggest sharpened representations in the OFA. Our findings support the proposition that the brain represents faces grounded in prior expectations.
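
A minimal sketch of the representational similarity analysis (RSA) logic used in this study follows: a model representational dissimilarity matrix (RDM) built from network activations is correlated with a neural RDM from a region of interest. The activations, voxel patterns, and distance metric below are placeholder assumptions, not the authors' pipeline.

```python
# Hedged RSA sketch: correlate the condensed (upper-triangle) model RDM
# with a neural RDM. Data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
model_acts = rng.normal(size=(40, 512))    # 40 face conditions x DNN features
roi_patterns = rng.normal(size=(40, 200))  # 40 conditions x voxels

model_rdm = pdist(model_acts, metric="correlation")    # condensed RDM
neural_rdm = pdist(roi_patterns, metric="correlation")

rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3g}")
```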


Subject(s)
Brain Mapping; Facial Recognition; Magnetic Resonance Imaging; Humans; Male; Female; Adult; Young Adult; Facial Recognition/physiology; Brain/physiology; Brain/diagnostic imaging; Temporal Lobe/physiology; Temporal Lobe/diagnostic imaging; Face; Photic Stimulation; Neural Networks, Computer; Occipital Lobe/physiology; Occipital Lobe/diagnostic imaging; Pattern Recognition, Visual/physiology; Visual Perception/physiology
17.
Sci Rep ; 14(1): 9402, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658575

ABSTRACT

Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience/expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulate probabilistic information associated with a high and low expertise category (faces and cars respectively), while assessing individual level of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded in the same participants in a similar paradigm featuring the previously learned contingencies, without the explicit task. Behaviourally, perception of the higher expertise category (faces) was modulated by expectation. Specifically, we observed facilitatory and interference effects when targets were correctly or incorrectly expected, which were also associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post-stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when viewing identical images. Latency of peak decoding when participants saw faces was directly associated with individual-level facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
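
The time-resolved decoding described above can be sketched as follows: a classifier is cross-validated at each time point to separate expected from unexpected trials, and the latency of peak accuracy is read off. Data shapes, labels, and the classifier choice are illustrative assumptions, not the authors' analysis code.

```python
# Hedged sketch of time-resolved multivariate EEG decoding with simulated
# data: one cross-validated classifier per time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64, 120))  # trials x channels x time points
y = rng.integers(0, 2, size=200)     # expected (1) vs. unexpected (0)

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y,
                    cv=5).mean()
    for t in range(X.shape[2])
])
peak_t = accuracy.argmax()  # latency of peak decoding, in samples
print(f"peak decoding accuracy {accuracy[peak_t]:.2f} at sample {peak_t}")
```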


Subject(s)
Electroencephalography; Facial Recognition; Humans; Male; Female; Adult; Facial Recognition/physiology; Young Adult; Photic Stimulation; Reaction Time/physiology; Visual Perception/physiology; Face/physiology
18.
Sci Rep ; 14(1): 9418, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658628

ABSTRACT

Pupil contagion refers to the observer's pupil-diameter changes in response to changes in the pupil diameter of others. Recent studies on the other-race effect on pupil contagion have mainly focused on using eye region images as stimuli, revealing the effect in adults but not in infants. To address this research gap, the current study used whole-face images as stimuli to assess the pupil-diameter response of 5-6-month-old and 7-8-month-old infants to changes in the pupil-diameter of both upright and inverted unfamiliar-race faces. The study initially hypothesized that there would be no pupil contagion in either upright or inverted unfamiliar-race faces, based on our previous finding of pupil contagion occurring only in familiar-race faces among 5-6-month-old infants. Notably, the current results indicated that 5-6-month-old infants exhibited pupil contagion in both upright and inverted unfamiliar-race faces, while 7-8-month-old infants showed this effect only in upright unfamiliar-race faces. These results demonstrate that the face inversion effect of pupil contagion does not occur in 5-6-month-old infants, thereby suggesting the presence of the other-race effect in pupil contagion among this age group. Overall, this study provides the first evidence of the other-race effect on infants' pupil contagion using face stimuli.


Subject(s)
Pupil; Humans; Pupil/physiology; Infant; Male; Female; Photic Stimulation; Facial Recognition/physiology
19.
Sensors (Basel) ; 24(7)2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610510

ABSTRACT

The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators validate the research hypothesis, such as the correlation between jurors' emotional responses and valence values, the accuracy of jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by FER (facial expression recognition). Specifically, analysis of attention levels across states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. On the other hand, regression analysis shows that the correlation between jurors' valence and their choices in the jury test increases when considering only the data where the jurors are attentive. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
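
A minimal sketch of the correlation analysis described here, computed once over all responses and once restricted to attentive responses; the FER valence and attention scores, the attention threshold, and all variable names are hypothetical assumptions for illustration.

```python
# Hedged sketch: correlating FER-derived valence with binary jury choices,
# overall vs. within attentive responses only. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
valence = rng.uniform(-1, 1, 300)     # FER valence per juror response
attention = rng.uniform(0, 1, 300)    # FER attention estimate per response
choice = (valence + rng.normal(0, 0.8, 300) > 0).astype(float)

r_all, _ = pearsonr(valence, choice)
mask = attention > 0.6                # keep only attentive responses
r_attentive, _ = pearsonr(valence[mask], choice[mask])
print(f"r(all) = {r_all:.2f}, r(attentive) = {r_attentive:.2f}")
```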


Subject(s)
Facial Recognition; Humans; Reproducibility of Results; Acoustics; Sound; Emotions
20.
Sci Rep ; 14(1): 8121, 2024 04 07.
Article in English | MEDLINE | ID: mdl-38582772

ABSTRACT

This paper proposes an improved strategy for the MobileNetV2 neural network (I-MobileNetV2) in response to the large parameter counts of existing deep convolutional neural networks and the shortcomings of the lightweight MobileNetV2 in facial emotion recognition tasks, such as easy loss of feature information, poor real-time performance, and low accuracy. The network inherits MobileNetV2's depthwise separable convolutions, reducing computational load while maintaining a lightweight profile. It utilizes a reverse fusion mechanism to retain negative features, making the information less likely to be lost. The SELU activation function replaces the ReLU6 activation function to avoid vanishing gradients. Meanwhile, to improve feature recognition capability, a channel attention mechanism (Squeeze-and-Excitation Networks, SE-Net) is integrated into the MobileNetV2 network. Experiments conducted on the facial expression datasets FER2013 and CK+ showed that the proposed network model achieved facial expression recognition accuracies of 68.62% and 95.96%, improving upon the MobileNetV2 model by 0.72% and 6.14% respectively, while the parameter count decreased by 83.8%. These results empirically verify the effectiveness of the improvements made to the network model.
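
For concreteness, here is a hedged PyTorch sketch of two of the modifications named in this abstract: a squeeze-and-excitation channel-attention block using SELU in place of ReLU6. This is not the authors' code; the reduction ratio and tensor sizes are assumptions.

```python
# Minimal sketch (assumed hyperparameters): an SE channel-attention block
# with SELU, of the kind that could be inserted into a MobileNetV2 block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)       # global average pool
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.SELU(),                               # SELU in place of ReLU6
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight channels

# Usage: apply after a block's pointwise convolution.
feats = torch.randn(8, 32, 56, 56)
print(SEBlock(32)(feats).shape)  # torch.Size([8, 32, 56, 56])
```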


Subject(s)
Accidental Injuries; Facial Recognition; Humans; Neural Networks, Computer; Recognition, Psychology