1 - 20 of 2,400
1.
PLoS One ; 19(5): e0303400, 2024.
Article En | MEDLINE | ID: mdl-38739635

Visual abilities tend to vary predictably across the visual field: for simple low-level stimuli, visibility is better along the horizontal vs. vertical meridian and in the lower vs. upper visual field. In contrast, face perception abilities have been reported to show either distinct or entirely idiosyncratic patterns of variation in peripheral vision, suggesting a dissociation between the spatial properties of low- and higher-level vision. To assess this link more clearly, we extended methods used in low-level vision to develop an acuity test for face perception, measuring the smallest size at which facial gender can be reliably judged in peripheral vision. In three experiments, we show the characteristic inversion effect, with better acuity for upright faces than for inverted faces, demonstrating the engagement of high-level face-selective processes in peripheral vision. We also observe a clear advantage for gender acuity on the horizontal vs. vertical meridian and a smaller but consistent lower- vs. upper-field advantage. These visual field variations match those of low-level vision, indicating that higher-level face processing abilities either inherit or actively maintain the characteristic patterns of spatial selectivity found in early vision. The commonality of these spatial variations throughout the visual hierarchy means that the location of faces in our visual field systematically influences our perception of them.
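A note on method: acuity thresholds of this kind are typically estimated by fitting a psychometric function to proportion-correct data across stimulus sizes. The sketch below illustrates one common approach (a cumulative-Gaussian fit in SciPy); the function form, stimulus sizes, and data are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: estimate a gender-acuity threshold by fitting a
# cumulative-Gaussian psychometric function to proportion-correct data.
# Stimulus sizes and performance values below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(size_deg, mu, sigma, lapse=0.02, guess=0.5):
    """2AFC psychometric function: 50% guess rate, small lapse rate."""
    return guess + (1.0 - guess - lapse) * norm.cdf(size_deg, loc=mu, scale=sigma)

sizes = np.array([0.5, 0.8, 1.2, 1.8, 2.7, 4.0])          # face size (deg)
p_correct = np.array([0.52, 0.58, 0.70, 0.86, 0.95, 0.98])

# Fit only mu and sigma; lapse and guess keep their default values.
(mu, sigma), _ = curve_fit(lambda x, mu, sigma: psychometric(x, mu, sigma),
                           sizes, p_correct, p0=[1.5, 0.5])
print(f"estimated acuity threshold (psychometric-function midpoint): {mu:.2f} deg")
```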


Facial Recognition , Visual Fields , Humans , Visual Fields/physiology , Female , Male , Adult , Facial Recognition/physiology , Young Adult , Photic Stimulation , Visual Perception/physiology , Visual Acuity/physiology , Face/physiology
2.
Sci Rep ; 14(1): 10304, 2024 05 05.
Article En | MEDLINE | ID: mdl-38705917

Understanding the neurogenetic mechanisms underlying neuropsychiatric disorders such as schizophrenia and autism is complicated by their inherent clinical and genetic heterogeneity. Williams syndrome (WS), a rare neurodevelopmental condition in which both the genetic alteration (hemideletion of ~26 genes at 7q11.23) and the cognitive/behavioral profile are well defined, offers an invaluable opportunity to delineate gene-brain-behavior relationships. People with WS are characterized by increased social drive, including particular interest in faces, together with hallmark difficulty in visuospatial processing. Prior work, primarily in adults with WS, has searched for neural correlates of these characteristics, reporting altered fusiform gyrus function while viewing socioemotional stimuli such as faces, along with hypoactivation of the intraparietal sulcus during visuospatial processing. Here, we investigated neural function in children and adolescents with WS using four separate fMRI paradigms, two probing each of these cognitive/behavioral domains. During the two visuospatial tasks, but not during the two face processing tasks, we found bilateral intraparietal sulcus hypoactivation in WS. In contrast, during both face processing tasks, but not during the visuospatial tasks, we found fusiform hyperactivation. These data not only demonstrate that previous findings in adults with WS are also present in childhood and adolescence, but also provide a clear example that genetic mechanisms can bias neural circuit function, thereby affecting behavioral traits.


Magnetic Resonance Imaging , Williams Syndrome , Humans , Williams Syndrome/physiopathology , Williams Syndrome/genetics , Williams Syndrome/diagnostic imaging , Magnetic Resonance Imaging/methods , Adolescent , Child , Female , Male , Brain Mapping/methods , Brain/diagnostic imaging , Brain/physiopathology , Face , Facial Recognition/physiology , Parietal Lobe/physiopathology , Parietal Lobe/diagnostic imaging , Space Perception/physiology
3.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article En | MEDLINE | ID: mdl-38714314

Trust is critical to human social interaction, and research has identified many cues that inform the attribution of this social trait. Two such cues are the pitch of the voice and the facial width-to-height ratio (fWHR). Research has also indicated that the content of a spoken sentence itself affects perceived trustworthiness, a finding that has not yet been brought into multisensory research. The current study investigates previously developed theories of trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, and this effect extended to the multimodal setting. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting, and these effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.
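For context, fWHR is conventionally defined as bizygomatic width divided by upper-face height (upper lip to upper eyelid/brow). The minimal sketch below illustrates that conventional computation; the landmark names and coordinates are hypothetical, and the study's exact landmarking procedure is not specified in the abstract.

```python
# Hypothetical sketch of the conventional fWHR computation: bizygomatic width
# divided by upper-face height. Landmark coordinates (x, y in pixels) are
# invented for illustration only.
def facial_width_to_height_ratio(left_zygion, right_zygion, upper_lip, upper_eyelid):
    width = abs(right_zygion[0] - left_zygion[0])    # horizontal bizygomatic width
    height = abs(upper_eyelid[1] - upper_lip[1])     # vertical upper-face height
    return width / height

# Example with made-up landmarks: width 160 px, height 85 px -> fWHR ~1.88
print(facial_width_to_height_ratio((40, 210), (200, 210), (120, 300), (120, 215)))
```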


Face , Trust , Voice , Humans , Female , Voice/physiology , Young Adult , Adult , Face/physiology , Speech Perception/physiology , Pitch Perception/physiology , Facial Recognition/physiology , Cues , Adolescent
4.
J Psychiatry Neurosci ; 49(3): E145-E156, 2024.
Article En | MEDLINE | ID: mdl-38692692

BACKGROUND: Neuroimaging studies have revealed abnormal functional interaction during the processing of emotional faces in patients with major depressive disorder (MDD), thereby enhancing our comprehension of the pathophysiology of MDD. However, it is unclear whether there is abnormal directional interaction among face-processing systems in patients with MDD. METHODS: A group of patients with MDD and a healthy control group underwent a face-matching task during functional magnetic resonance imaging. Dynamic causal modelling (DCM) analysis was used to investigate effective connectivity between 7 regions in the face-processing systems. We used a Parametric Empirical Bayes model to compare effective connectivity between patients with MDD and controls. RESULTS: We included 48 patients and 44 healthy controls in our analyses. Both groups showed higher accuracy and faster reaction time in the shape-matching condition than in the face-matching condition. However, no significant behavioural or brain activation differences were found between the groups. Using DCM, we found that, compared with controls, patients with MDD showed decreased self-connection in the right dorsolateral prefrontal cortex (DLPFC), amygdala, and fusiform face area (FFA) across task conditions; increased intrinsic connectivity from the right amygdala to the bilateral DLPFC, right FFA, and left amygdala, suggesting an increased intrinsic connectivity centred in the amygdala in the right side of the face-processing systems; both increased and decreased positive intrinsic connectivity in the left side of the face-processing systems; and comparable task modulation effect on connectivity. LIMITATIONS: Our study did not include longitudinal neuroimaging data, and there was limited region of interest selection in the DCM analysis. CONCLUSION: Our findings provide evidence for a complex pattern of alterations in the face-processing systems in patients with MDD, potentially involving the right amygdala to a greater extent. The results confirm some previous findings and highlight the crucial role of the regions on both sides of face-processing systems in the pathophysiology of MDD.
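For orientation, DCM as commonly implemented estimates effective connectivity from a bilinear neural state equation of the form below; this is the generic formulation, given here for reference rather than as the exact model specification used in this study.

```latex
\dot{x}(t) = \Bigl(A + \sum_{j} u_j(t)\, B^{(j)}\Bigr)\, x(t) + C\, u(t)
```

Here A holds the intrinsic (endogenous) connections among the regions, each B(j) captures how experimental input u_j (e.g., the face-matching condition) modulates those connections, and C captures direct driving inputs; the Parametric Empirical Bayes step then compares these connectivity parameters between groups.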


Amygdala , Depressive Disorder, Major , Facial Recognition , Magnetic Resonance Imaging , Humans , Depressive Disorder, Major/physiopathology , Depressive Disorder, Major/diagnostic imaging , Male , Female , Adult , Facial Recognition/physiology , Amygdala/diagnostic imaging , Amygdala/physiopathology , Brain/diagnostic imaging , Brain/physiopathology , Neural Pathways/physiopathology , Neural Pathways/diagnostic imaging , Bayes Theorem , Young Adult , Brain Mapping , Facial Expression , Middle Aged , Reaction Time/physiology
5.
Sci Rep ; 14(1): 10040, 2024 05 02.
Article En | MEDLINE | ID: mdl-38693189

Investigation of visual illusions helps us understand how we process visual information. For example, face pareidolia, the misperception of illusory faces in objects, could be used to understand how we process real faces. However, it remains unclear whether this illusion emerges from errors in face detection or from slower, cognitive processes. Here, our logic is straightforward: if examples of face pareidolia activate the mechanisms that rapidly detect faces in visual environments, then participants will look at objects more quickly when the objects also contain illusory faces. To test this hypothesis, we sampled continuous eye movements during a fast saccadic choice task in which participants were required to select either faces or food items. During this task, pairs of stimuli were positioned close to the initial fixation point or further away, in the periphery. As expected, participants were faster to look at face targets than food targets. Importantly, we also discovered an advantage for food items with illusory faces, but this advantage was limited to the peripheral condition. These findings are among the first to demonstrate that the face pareidolia illusion persists in the periphery and, thus, that it is likely to be a consequence of erroneous face detection.


Illusions , Humans , Female , Male , Adult , Illusions/physiology , Young Adult , Visual Perception/physiology , Photic Stimulation , Face/physiology , Facial Recognition/physiology , Eye Movements/physiology , Pattern Recognition, Visual/physiology
6.
Cereb Cortex ; 34(13): 172-186, 2024 May 02.
Article En | MEDLINE | ID: mdl-38696606

Individuals with autism spectrum disorder (ASD) experience pervasive difficulties in processing social information from faces. However, the behavioral and neural mechanisms underlying social trait judgments of faces in ASD remain largely unclear. Here, we comprehensively addressed this question by employing functional neuroimaging and parametrically generated faces that vary in facial trustworthiness and dominance. Behaviorally, participants with ASD exhibited reduced specificity but increased inter-rater variability in social trait judgments. Neurally, participants with ASD showed hypo-activation across broad face-processing areas. Multivariate analysis based on trial-by-trial face responses could discriminate participant groups in the majority of the face-processing areas. Encoding social traits in ASD engaged vastly different face-processing areas compared to controls, and encoding different social traits engaged different brain areas. Interestingly, the idiosyncratic brain areas encoding social traits in ASD were still flexible and context-dependent, similar to neurotypicals. Additionally, participants with ASD also showed an altered encoding of facial saliency features in the eyes and mouth. Together, our results provide a comprehensive understanding of the neural mechanisms underlying social trait judgments in ASD.


Autism Spectrum Disorder , Brain , Facial Recognition , Magnetic Resonance Imaging , Social Perception , Humans , Autism Spectrum Disorder/physiopathology , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/psychology , Male , Female , Adult , Young Adult , Facial Recognition/physiology , Brain/physiopathology , Brain/diagnostic imaging , Judgment/physiology , Brain Mapping , Adolescent
7.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38801420

The ability to accurately assess one's own memory performance during learning is essential for adaptive behavior, but the brain mechanisms underlying this metamemory function are not well understood. We investigated the neural correlates of memory accuracy and retrospective memory confidence in a face-name associative learning task using magnetoencephalography in healthy young adults (n = 32). We found that high retrospective confidence was associated with stronger occipital event-related fields during encoding and widespread event-related fields during retrieval compared to low confidence. On the other hand, memory accuracy was linked to medial temporal activities during both encoding and retrieval, but only in low-confidence trials. A decrease in oscillatory power at alpha/beta bands in the parietal regions during retrieval was associated with higher memory confidence. In addition, representational similarity analysis at the single-trial level revealed distributed but differentiable neural activities associated with memory accuracy and confidence during both encoding and retrieval. In summary, our study unveiled distinct neural activity patterns related to memory confidence and accuracy during associative learning and underscored the crucial role of parietal regions in metamemory.


Association Learning , Magnetoencephalography , Humans , Association Learning/physiology , Male , Female , Young Adult , Adult , Mental Recall/physiology , Brain/physiology , Names , Memory/physiology , Facial Recognition/physiology , Metacognition/physiology
8.
Cortex ; 175: 1-11, 2024 Jun.
Article En | MEDLINE | ID: mdl-38691922

Studies have reported substantial variability in emotion recognition ability (ERA), an important social skill, but the possible neural underpinnings of such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) during previous testing of ERA. Participants judged brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory, and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs. low) ERA achieved higher accuracy in all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In the neural response to emotional stimuli contrasted with neutral stimuli, individuals with high (vs. low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, the results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS), and evaluation (IFC) of emotional information.


Brain Mapping , Emotions , Individuality , Magnetic Resonance Imaging , Recognition, Psychology , Humans , Male , Female , Emotions/physiology , Young Adult , Adult , Recognition, Psychology/physiology , Brain/physiology , Brain/diagnostic imaging , Facial Expression , Photic Stimulation/methods , Facial Recognition/physiology
9.
Int J Psychophysiol ; 200: 112358, 2024 Jun.
Article En | MEDLINE | ID: mdl-38710371

Recent studies have shown that the processing of neutral facial expressions can be modulated by the valence and self-relevance of preceding verbal evaluations. However, these studies have not distinguished between the dimensions of those evaluations (i.e., morality and competence). In fact, there is ongoing controversy about whether morality or competence receives more weight. Therefore, using the ERP technique, the current study addressed this issue by comparing the influence of morality and competence evaluations on behavioral and neural responses to neutral facial expressions when these evaluations varied in contextual valence and self-relevance. Our ERP results revealed that early EPN amplitudes were larger for neutral faces after evaluations about the self relative to evaluations about the senders. Moreover, the EPN was more negative after a competence evaluation than after a morality evaluation when these evaluations were positive, whereas this effect was absent when the evaluations were negative. The late LPP was larger after a morality evaluation than after a competence evaluation when these evaluations were negative and directed at the self, whereas no significant LPP difference between morality and competence evaluations was observed when the evaluations were positive. The present study extends previous work by showing that early and late stages of face processing are affected by the evaluation dimension in a top-down manner and are further modulated by contextual valence and self-relevance.


Electroencephalography , Evoked Potentials , Morals , Humans , Female , Male , Young Adult , Adult , Evoked Potentials/physiology , Facial Expression , Facial Recognition/physiology , Photic Stimulation/methods , Adolescent , Self Concept
10.
Cognition ; 248: 105810, 2024 Jul.
Article En | MEDLINE | ID: mdl-38733867

Human observers often exhibit remarkable consistency in remembering specific visual details, such as certain face images. This phenomenon is commonly attributed to visual memorability, a collection of stimulus attributes that enhance the long-term retention of visual information. However, the exact contributions of visual memorability to visual memory formation remain elusive as these effects could emerge anywhere from early perceptual encoding to post-perceptual memory consolidation processes. To clarify this, we tested three key predictions from the hypothesis that visual memorability facilitates early perceptual encoding that supports the formation of visual short-term memory (VSTM) and the retention of visual long-term memory (VLTM). First, we examined whether memorability benefits in VSTM encoding manifest early, even within the constraints of a brief stimulus presentation (100-200 ms; Experiment 1). We achieved this by manipulating stimulus presentation duration in a VSTM change detection task using face images with high- or low-memorability while ensuring they were equally familiar to the participants. Second, we assessed whether this early memorability benefit increases the likelihood of VSTM retention, even with post-stimulus masking designed to interrupt post-perceptual VSTM consolidation processes (Experiment 2). Last, we investigated the durability of memorability benefits by manipulating memory retention intervals from seconds to 24 h (Experiment 3). Across experiments, our data suggest that visual memorability has an early impact on VSTM formation, persisting across variable retention intervals and predicting subsequent VLTM overnight. Combined, these findings highlight that visual memorability enhances visual memory within 100-200 ms following stimulus onset, resulting in robust memory traces resistant to post-perceptual interruption and long-term forgetting.
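Change-detection performance of the kind described here is often summarized as a capacity estimate using Cowan's K (K = set size x (hit rate - false-alarm rate)); whether the authors used this particular measure is not stated in the abstract, so the sketch below is purely illustrative, with invented rates.

```python
# Hypothetical sketch: Cowan's K estimate of VSTM capacity from a
# change-detection task. Hit and false-alarm rates below are invented.
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """K = N * (H - FA): estimated number of items held in VSTM."""
    return set_size * (hit_rate - false_alarm_rate)

print(cowans_k(set_size=4, hit_rate=0.85, false_alarm_rate=0.15))  # -> 2.8 items
```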


Memory, Long-Term , Memory, Short-Term , Humans , Young Adult , Adult , Male , Female , Memory, Long-Term/physiology , Memory, Short-Term/physiology , Visual Perception/physiology , Facial Recognition/physiology , Memory Consolidation/physiology , Adolescent
11.
J Vis ; 24(5): 14, 2024 May 01.
Article En | MEDLINE | ID: mdl-38814935

Facial color influences the perception of facial expressions, and emotional expressions bias how facial color is remembered. However, it remains unclear whether facial expressions affect facial color memory in daily life. The memory color effect demonstrates that knowledge about typical colors affects the perception of the actual color of given objects. To investigate facial color memory, we examined whether the memory color effect for faces varies depending on facial expression. We calculated the subjective achromatic point of facial expression stimuli and compared how far it was shifted from the actual achromatic point across facial expression conditions. We hypothesized that if the memory of facial color is influenced by the color associated with a facial expression (e.g., warm colors for anger, cold colors for fear), then the subjective achromatic point would vary with facial expression. In Experiment 1, 13 participants adjusted the color of facial expression stimuli (anger, neutral, and fear) and a banana stimulus until they appeared achromatic. No significant differences in the subjective achromatic point between facial expressions were observed. We then conducted Experiment 2 with 23 participants because Experiment 1 did not account for sensitivity to color changes on faces; humans perceive greater color differences in faces than in non-faces. Participants chose which of two presented colors the expression stimulus appeared to be. The results indicated that, in the brief presentation condition, the subjective achromatic points of anger and fear faces shifted significantly toward the opposite color direction compared with neutral faces. This research suggests that the memory color of faces differs depending on facial expression and supports the idea that the perception of emotional expressions can bias facial color memory.


Color Perception , Facial Expression , Memory , Humans , Male , Female , Young Adult , Color Perception/physiology , Adult , Memory/physiology , Photic Stimulation/methods , Emotions/physiology , Anger/physiology , Facial Recognition/physiology
12.
Nature ; 629(8013): 861-868, 2024 May.
Article En | MEDLINE | ID: mdl-38750353

A central assumption of neuroscience is that long-term memories are represented by the same brain areas that encode sensory stimuli [1]. Neurons in inferotemporal (IT) cortex represent the sensory percept of visual objects using a distributed axis code [2-4]. Whether and how the same IT neural population represents the long-term memory of visual objects remains unclear. Here we examined how familiar faces are encoded in the IT anterior medial face patch (AM), perirhinal face patch (PR) and temporal pole face patch (TP). In AM and PR we observed that the encoding axis for familiar faces is rotated relative to that for unfamiliar faces at long latency; in TP this memory-related rotation was much weaker. Contrary to previous claims, the relative response magnitude to familiar versus unfamiliar faces was not a stable indicator of familiarity in any patch [5-11]. The mechanism underlying the memory-related axis change is likely intrinsic to IT cortex, because inactivation of PR did not affect axis change dynamics in AM. Overall, our results suggest that memories of familiar faces are represented in AM and perirhinal cortex by a distinct long-latency code, explaining how the same cell population can encode both the percept and memory of faces.
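One way to make the "rotation" of an encoding axis concrete is to estimate an axis (a weight vector relating face features to neural responses) separately for familiar and unfamiliar faces and measure the angle between the two vectors. The numpy sketch below does this with simulated data; it is an illustrative assumption, not a reproduction of the paper's estimation procedure.

```python
# Hypothetical sketch: estimate face-encoding axes by regressing population
# responses on stimulus feature vectors, then measure the angle between the
# familiar-face and unfamiliar-face axes. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_features = 200, 50
features = rng.standard_normal((n_stim, n_features))            # face feature vectors
axis_unfam = rng.standard_normal(n_features)                    # "unfamiliar" axis
axis_fam = axis_unfam + 0.8 * rng.standard_normal(n_features)   # a rotated axis
resp_unfam = features @ axis_unfam + 0.1 * rng.standard_normal(n_stim)
resp_fam = features @ axis_fam + 0.1 * rng.standard_normal(n_stim)

# Least-squares estimate of each encoding axis (responses ~ features @ axis)
w_unfam, *_ = np.linalg.lstsq(features, resp_unfam, rcond=None)
w_fam, *_ = np.linalg.lstsq(features, resp_fam, rcond=None)

cos_angle = w_fam @ w_unfam / (np.linalg.norm(w_fam) * np.linalg.norm(w_unfam))
print(f"axis rotation: {np.degrees(np.arccos(np.clip(cos_angle, -1, 1))):.1f} deg")
```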


Recognition, Psychology , Temporal Lobe , Temporal Lobe/physiology , Temporal Lobe/cytology , Male , Animals , Recognition, Psychology/physiology , Time Factors , Memory, Long-Term/physiology , Facial Recognition/physiology , Macaca mulatta , Perirhinal Cortex/physiology , Perirhinal Cortex/cytology , Neurons/physiology , Memory/physiology , Face , Visual Perception/physiology , Female , Photic Stimulation
13.
J Neurosci ; 44(22)2024 May 29.
Article En | MEDLINE | ID: mdl-38627090

Humans have the remarkable ability to vividly retrieve sensory details of past events. According to the theory of sensory reinstatement, during remembering, brain regions specialized for processing specific sensory stimuli are reactivated to support content-specific retrieval. Recently, several studies have emphasized transformations in the spatial organization of these reinstated activity patterns. Specifically, studies of scene stimuli suggest a clear anterior shift in the location of retrieval activations compared with the activity observed during perception. However, it is not clear that such transformations occur universally, with inconsistent evidence for other important stimulus categories, particularly faces. One challenge in addressing this question is the careful delineation of face-selective cortices, which are interdigitated with other selective regions, in configurations that spatially differ across individuals. Therefore, we conducted a multisession neuroimaging study to first carefully map individual participants' (nine males and seven females) face-selective regions within ventral temporal cortex (VTC), followed by a second session to examine the activity patterns within these regions during face memory encoding and retrieval. While face-selective regions were expectedly engaged during face perception at encoding, memory retrieval engagement exhibited a more selective and constricted reinstatement pattern within these regions, but did not show any consistent direction of spatial transformation (e.g., anteriorization). We also report on unique human intracranial recordings from VTC under the same experimental conditions. These findings highlight the importance of considering the complex configuration of category-selective cortex in elucidating principles shaping the neural transformations that occur from perception to memory.
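Reinstatement within a face-selective region is commonly quantified as encoding-retrieval pattern similarity, for example the correlation between voxel activity patterns at encoding and at retrieval. The sketch below shows that generic computation on simulated patterns; it is not the authors' analysis pipeline.

```python
# Hypothetical sketch: encoding-retrieval reinstatement as the correlation
# between voxel activity patterns within a face-selective ROI. Simulated data.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 300
encoding_pattern = rng.standard_normal(n_voxels)
retrieval_pattern = 0.4 * encoding_pattern + rng.standard_normal(n_voxels)

reinstatement = np.corrcoef(encoding_pattern, retrieval_pattern)[0, 1]
print(f"encoding-retrieval similarity (r): {reinstatement:.2f}")
```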


Brain Mapping , Facial Recognition , Magnetic Resonance Imaging , Temporal Lobe , Humans , Male , Female , Temporal Lobe/physiology , Temporal Lobe/diagnostic imaging , Adult , Facial Recognition/physiology , Young Adult , Memory/physiology , Photic Stimulation/methods , Mental Recall/physiology
14.
Exp Brain Res ; 242(6): 1339-1348, 2024 Jun.
Article En | MEDLINE | ID: mdl-38563980

Using the "Don't look" (DL) paradigm, wherein participants are asked not to look at a specific facial feature (i.e., the eyes, nose, or mouth), we previously documented that Eastern observers struggled to completely avoid fixating on the eyes and nose. The mechanisms underlying this attraction may differ between features, because fixations on the eyes were triggered only reflexively, whereas fixations on the nose were consistently elicited. In this study, we focused primarily on the nose, where the center-of-gravity (CoG) effect, the tendency to look near an object's CoG, could be a confound. Full-frontal and mid-profile faces were used because the latter's CoG does not correspond to the location of the nose. Although we hypothesized that the two effects are independent, the results indicated that, in addition to replicating previous findings, the CoG effect accounts for the nose-attracting effect. This study not only provides this explanation but also raises questions about the CoG effect in Eastern participants.
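The CoG referred to here is the intensity-weighted centroid of the stimulus image. The minimal numpy sketch below shows that standard computation on a simulated grayscale image; the authors' exact implementation is an assumption.

```python
# Hypothetical sketch: center of gravity (intensity-weighted centroid) of a
# grayscale face image, the quantity the CoG effect refers to. Image simulated.
import numpy as np

image = np.random.default_rng(2).random((240, 180))   # grayscale image (H x W)
rows, cols = np.indices(image.shape)
total = image.sum()
cog_y = (rows * image).sum() / total                  # vertical CoG (pixels)
cog_x = (cols * image).sum() / total                  # horizontal CoG (pixels)
print(f"center of gravity at (x={cog_x:.1f}, y={cog_y:.1f})")
```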


Facial Recognition , Humans , Female , Male , Facial Recognition/physiology , Young Adult , Adult , Fixation, Ocular/physiology , Eye , Photic Stimulation/methods , Face
15.
J Psychiatr Res ; 174: 94-100, 2024 Jun.
Article En | MEDLINE | ID: mdl-38626566

Cognitive impairment remains understudied in generalized anxiety disorder (GAD), despite the high prevalence and substantial burden associated with this disorder. We aimed to assess cognitive impairment in patients with GAD and to evaluate the ability of cognitive tests to detect this disorder. Because of its high rate of comorbidity, we also examined how other anxiety disorders and current major depressive episodes affected our results. We tested 263 consecutive general practice outpatients. We used the GAD-7 and the Mini International Neuropsychiatric Interview (MINI) to detect anxiety and mood disorders, and assessed cognitive performance with the Stroop test, a facial emotion recognition test, and the trail-making test (TMT). Compared to patients without GAD, patients with GAD were significantly slower to complete the TMT(B-A) and faster to recognize emotions, especially negative ones such as disgust and anger. When controlling for other anxiety disorders and current major depressive episode, GAD retained a significant effect on the TMT(B-A), but not on the emotion recognition test. The TMT(B-A) detected GAD with good accuracy (area under the curve (AUC) = 0.83, maximal Youden's index = 0.56), although this fell well short of the GAD-7 (AUC = 0.97, Youden's index = 0.81). While the TMT(B-A) is not accurate enough to replace the GAD-7 as a diagnostic tool, its capacity to detect GAD underscores the importance of cognitive flexibility impairment in GAD.
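The detection statistics reported above can be made concrete: Youden's index is J = sensitivity + specificity - 1, and its maximum over decision thresholds, together with the AUC, can be computed from screening scores as sketched below with scikit-learn. The labels and scores are simulated, not the study's data.

```python
# Hypothetical sketch: AUC and maximal Youden's index (J = TPR - FPR) for a
# screening score such as TMT(B-A), using simulated labels and scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=200)                 # 1 = GAD, 0 = no GAD
scores = y_true * rng.normal(1.5, 1.0, 200) + rng.normal(0, 1.0, 200)

fpr, tpr, thresholds = roc_curve(y_true, scores)
youden_j = tpr - fpr
best = np.argmax(youden_j)
print(f"AUC = {roc_auc_score(y_true, scores):.2f}, "
      f"max Youden's J = {youden_j[best]:.2f} at threshold {thresholds[best]:.2f}")
```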


Anxiety Disorders , Humans , Pilot Projects , Male , Female , Anxiety Disorders/diagnosis , Adult , Middle Aged , Neuropsychological Tests/standards , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/etiology , Aged , Facial Recognition/physiology , Depressive Disorder, Major/diagnosis
16.
Zhejiang Da Xue Xue Bao Yi Xue Ban ; 53(2): 254-260, 2024 Apr 25.
Article En, Zh | MEDLINE | ID: mdl-38650447

Attention deficit hyperactivity disorder (ADHD) is a chronic neurodevelopmental disorder characterized by inattention, hyperactivity-impulsivity, and working memory deficits. Social dysfunction is one of the major challenges faced by children with ADHD. It has been found that children with ADHD cannot perform as well as typically developing children on facial expression recognition (FER) tasks. In general, children with ADHD show difficulties in FER, although some studies report no significant differences from typically developing children in the accuracy of recognizing specific emotions. The neuropsychological mechanisms underlying these difficulties are as follows. First, neuroanatomically, children with ADHD show smaller gray matter volume and surface area in the amygdala and medial prefrontal cortex, as well as reduced density and volume of axons/cells in certain frontal white matter fiber tracts, compared with typically developing children. Second, neurophysiologically, children with ADHD exhibit increased slow-wave activity in the electroencephalogram, and event-related potential studies reveal abnormalities in emotion regulation and in responses to angry faces. Third, psychologically, psychosocial stressors may influence FER abilities in children with ADHD, and sleep deprivation may significantly raise their recognition threshold for negative expressions such as sadness and anger. This article reviews research from the past three years on the FER abilities of children with ADHD, analyzing the FER deficit from the three perspectives of neuroanatomy, neurophysiology, and psychology, with the aim of providing new perspectives for further research and clinical treatment of ADHD.


Attention Deficit Disorder with Hyperactivity , Facial Expression , Humans , Attention Deficit Disorder with Hyperactivity/physiopathology , Attention Deficit Disorder with Hyperactivity/psychology , Child , Facial Recognition/physiology , Emotions
17.
BMC Psychiatry ; 24(1): 307, 2024 Apr 23.
Article En | MEDLINE | ID: mdl-38654234

BACKGROUND: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a chronic breathing disorder characterized by recurrent upper airway obstruction during sleep. Although previous studies have shown a link between OSAHS and depressive mood, the neurobiological mechanisms underlying mood disorders in OSAHS patients remain poorly understood. This study investigates the emotion processing mechanism in OSAHS patients with depressive mood using event-related potentials (ERPs). METHODS: Seventy-four OSAHS patients were divided into depressive mood and non-depressive mood groups according to their Self-rating Depression Scale (SDS) scores. Patients underwent overnight polysomnography and completed various cognitive and emotional questionnaires. Patients were shown facial images displaying positive, neutral, and negative emotions and were asked to identify the emotion category while their visual evoked potentials were simultaneously recorded. RESULTS: The two groups did not differ significantly in age, BMI, or years of education, but showed significant differences in slow wave sleep ratio (P = 0.039) and in ESS (P = 0.006), MMSE (P < 0.001), and MOCA scores (P = 0.043). No significant difference was found between the two groups in accuracy or response time for emotional face recognition. N170 latency at bilateral parieto-occipital sites was significantly longer in the depressive group than in the non-depressive group (P = 0.014 and P = 0.007), whereas no significant difference in N170 amplitude was found. There was no significant difference in P300 amplitude or latency between the two groups. Furthermore, N170 amplitude at PO7 was positively correlated with the arousal index and negatively correlated with MOCA scores (both P < 0.01). CONCLUSION: OSAHS patients with depressive mood exhibit prolonged N170 latency and impaired facial emotion recognition ability. Depressive mood among OSAHS patients therefore warrants special attention for its implications for patient care.
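N170 latency is conventionally taken as the time of the most negative deflection within a window of roughly 130-200 ms at parieto-occipital electrodes. The sketch below shows that generic extraction on a simulated trial-averaged ERP; the window and data are assumptions, not the study's parameters.

```python
# Hypothetical sketch: extract N170 peak latency and amplitude from a
# trial-averaged ERP at a parieto-occipital electrode (e.g., PO7). Simulated ERP.
import numpy as np

sfreq = 500                                          # sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / sfreq)              # seconds relative to face onset
erp = np.random.default_rng(4).normal(0, 0.5, times.size)
erp += -4.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.015 ** 2))  # injected N170-like dip

window = (times >= 0.13) & (times <= 0.20)           # typical N170 search window
idx = np.flatnonzero(window)[np.argmin(erp[window])]
print(f"N170 latency: {times[idx]*1000:.0f} ms, amplitude: {erp[idx]:.1f} uV")
```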


Depression , Emotions , Sleep Apnea, Obstructive , Humans , Male , Middle Aged , Sleep Apnea, Obstructive/physiopathology , Sleep Apnea, Obstructive/psychology , Sleep Apnea, Obstructive/complications , Depression/physiopathology , Depression/psychology , Depression/complications , Female , Adult , Emotions/physiology , Polysomnography , Evoked Potentials/physiology , Electroencephalography , Facial Recognition/physiology , Evoked Potentials, Visual/physiology , Facial Expression
18.
Cereb Cortex ; 34(4)2024 Apr 01.
Article En | MEDLINE | ID: mdl-38679483

Prior research has yet to fully elucidate the impact of varying relative saliency between target and distractor on attentional capture and suppression, along with their underlying neural mechanisms, especially when social (e.g. face) and perceptual (e.g. color) information interchangeably serve as singleton targets or distractors, competing for attention in a search array. Here, we employed an additional singleton paradigm to investigate the effects of relative saliency on attentional capture (as assessed by N2pc) and suppression (as assessed by PD) of color or face singleton distractors in a visual search task by recording event-related potentials. We found that face singleton distractors with higher relative saliency induced stronger attentional processing. Furthermore, enhancing the physical salience of colors using a bold color ring could enhance attentional processing toward color singleton distractors. Reducing the physical salience of facial stimuli by blurring weakened attentional processing toward face singleton distractors; however, blurring enhanced attentional processing toward color singleton distractors because of the change in relative saliency. In conclusion, the attentional processes of singleton distractors are affected by their relative saliency to singleton targets, with higher relative saliency of singleton distractors resulting in stronger attentional capture and suppression; faces, however, exhibit some specificity in attentional capture and suppression due to high social saliency.
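N2pc and PD are lateralized components computed as contralateral-minus-ipsilateral difference waves at posterior electrode pairs (PO7/PO8 is a common choice, assumed here for illustration). The sketch below shows that standard computation on simulated single-trial data, not the study's recordings.

```python
# Hypothetical sketch: contralateral-minus-ipsilateral difference wave used to
# quantify N2pc (capture) and PD (suppression). Single-trial data are simulated.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_times = 120, 300
po7 = rng.normal(0, 1, (n_trials, n_times))          # left posterior electrode
po8 = rng.normal(0, 1, (n_trials, n_times))          # right posterior electrode
target_side = rng.choice(["left", "right"], n_trials)

# Contralateral: electrode opposite the lateralized item; ipsilateral: same side.
contra = np.where(target_side[:, None] == "left", po8, po7)
ipsi = np.where(target_side[:, None] == "left", po7, po8)
diff_wave = (contra - ipsi).mean(axis=0)             # average over trials
print(diff_wave.shape)                               # (n_times,) difference wave
```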


Attention , Color Perception , Electroencephalography , Evoked Potentials , Humans , Attention/physiology , Female , Male , Young Adult , Evoked Potentials/physiology , Adult , Color Perception/physiology , Photic Stimulation/methods , Facial Recognition/physiology , Pattern Recognition, Visual/physiology , Brain/physiology
19.
Sci Rep ; 14(1): 9402, 2024 04 24.
Article En | MEDLINE | ID: mdl-38658575

Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience/expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulated probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively), while assessing individual levels of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded from the same participants in a similar paradigm that featured the previously learned contingencies without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation. Specifically, we observed facilitatory and interference effects when targets were correctly or incorrectly expected, and these effects were associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post-stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when viewing identical images. The latency of peak decoding when participants viewed faces was directly associated with individual-level facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
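Time-resolved decoding of the kind described (expected vs. unexpected trials from the EEG) is commonly implemented by training a cross-validated classifier at each time point. The scikit-learn sketch below illustrates that generic approach on simulated epochs; it is not the authors' pipeline.

```python
# Hypothetical sketch: time-resolved decoding of expected vs. unexpected trials
# from multichannel EEG epochs, one cross-validated classifier per time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_channels, n_times = 200, 64, 150
epochs = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)                # 1 = expected, 0 = unexpected

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    epochs[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print(f"peak decoding accuracy {accuracy.max():.2f} at time index {accuracy.argmax()}")
```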


Electroencephalography , Facial Recognition , Humans , Male , Female , Adult , Facial Recognition/physiology , Young Adult , Photic Stimulation , Reaction Time/physiology , Visual Perception/physiology , Face/physiology
20.
Sci Rep ; 14(1): 9418, 2024 04 24.
Article En | MEDLINE | ID: mdl-38658628

Pupil contagion refers to the observer's pupil-diameter changes in response to changes in the pupil diameter of others. Recent studies on the other-race effect on pupil contagion have mainly focused on using eye region images as stimuli, revealing the effect in adults but not in infants. To address this research gap, the current study used whole-face images as stimuli to assess the pupil-diameter response of 5-6-month-old and 7-8-month-old infants to changes in the pupil-diameter of both upright and inverted unfamiliar-race faces. The study initially hypothesized that there would be no pupil contagion in either upright or inverted unfamiliar-race faces, based on our previous finding of pupil contagion occurring only in familiar-race faces among 5-6-month-old infants. Notably, the current results indicated that 5-6-month-old infants exhibited pupil contagion in both upright and inverted unfamiliar-race faces, while 7-8-month-old infants showed this effect only in upright unfamiliar-race faces. These results demonstrate that the face inversion effect of pupil contagion does not occur in 5-6-month-old infants, thereby suggesting the presence of the other-race effect in pupil contagion among this age group. Overall, this study provides the first evidence of the other-race effect on infants' pupil contagion using face stimuli.


Pupil , Humans , Pupil/physiology , Infant , Male , Female , Photic Stimulation , Facial Recognition/physiology
...