ABSTRACT
The hippocampus is widely recognized for its integral contributions to memory processing. By contrast, its role in perceptual processing remains less clear. Hippocampal properties vary along the anterior-posterior (AP) axis. Past research suggests a gradient in the scale of the features processed along the AP extent of the hippocampus, leading to the proposal that its representations vary in granularity along this axis. One way to quantify such granularity is with population receptive field (pRF) size measured during visual processing, which has so far received little attention. In this study, we compare pRF sizes within the hippocampus to its activation for images of scenes versus faces. We also measure these functional properties in surrounding medial temporal lobe (MTL) structures. Consistent with past research, we find pRFs to be larger in the anterior than in the posterior hippocampus. Critically, our analysis of the surrounding MTL regions (the perirhinal, entorhinal, and parahippocampal cortices) shows a similar correlation between scene sensitivity and larger pRF size. These findings provide conclusive evidence for a tight relationship between pRF size and sensitivity to image content in the hippocampus and adjacent medial temporal cortex.
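The pRF measure referred to above is commonly estimated by modeling each voxel as an isotropic 2D Gaussian over visual space and finding the center and size whose predicted time course best matches the measured BOLD signal. The sketch below is a minimal, generic illustration of that idea (grid search only; stimulus apertures, HRF convolution, and the study's actual fitting pipeline are simplified assumptions, not a reproduction of the authors' methods).

```python
import numpy as np

def gaussian_prf(gx, gy, x0, y0, sigma):
    """Isotropic 2D Gaussian pRF centered at (x0, y0) with size sigma (deg)."""
    return np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2.0 * sigma ** 2))

def predicted_timecourse(stim, gx, gy, x0, y0, sigma):
    """Overlap of a binary stimulus movie (time x H x W) with the pRF.
    (A real pipeline would also convolve this with an HRF.)"""
    prf = gaussian_prf(gx, gy, x0, y0, sigma)
    return np.tensordot(stim, prf, axes=([1, 2], [0, 1]))

def fit_prf(bold, stim, gx, gy, candidates):
    """Grid search: return the (x0, y0, sigma) whose prediction correlates best."""
    best, best_r = None, -np.inf
    for x0, y0, sigma in candidates:
        r = np.corrcoef(predicted_timecourse(stim, gx, gy, x0, y0, sigma), bold)[0, 1]
        if r > best_r:
            best, best_r = (x0, y0, sigma), r
    return best, best_r

# Toy usage: a 21 x 21 deg field of view sampled at 1 deg resolution.
gx, gy = np.meshgrid(np.linspace(-10, 10, 21), np.linspace(-10, 10, 21))
stim = np.random.rand(120, 21, 21) > 0.5             # stand-in stimulus apertures
bold = predicted_timecourse(stim, gx, gy, 3, -2, 4)   # synthetic "voxel" time series
candidates = [(x, y, s) for x in range(-10, 11, 2)
              for y in range(-10, 11, 2) for s in (1, 2, 4, 8)]
print(fit_prf(bold, stim, gx, gy, candidates))
```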
Subject(s)
Magnetic Resonance Imaging, Temporal Lobe, Magnetic Resonance Imaging/methods, Temporal Lobe/physiology, Hippocampus/physiology, Entorhinal Cortex/physiology, Memory/physiology
ABSTRACT
In the human brain, a multiple-demand (MD) network plays a key role in cognitive control, with core components in lateral frontal, dorsomedial frontal, and lateral parietal cortex, and multivariate activity patterns that discriminate the contents of many cognitive activities. In the prefrontal cortex of the behaving monkey, different cognitive operations are associated with very different patterns of neural activity, while details of a particular stimulus are encoded as small variations on these basic patterns (Sigala et al., 2008). Here, using the advanced fMRI methods of the Human Connectome Project and its 360-region cortical parcellation, we searched for a similar result in MD activation patterns. In each parcel, we compared multivertex patterns for every combination of three tasks (working memory, task-switching, and stop-signal) and two stimulus classes (faces and buildings). Though both task and stimulus category were discriminated in every cortical parcel, the strength of discrimination varied markedly across parcels. The different cognitive operations of the three tasks were strongly discriminated in MD regions. Stimulus categories, in contrast, were most strongly discriminated in a large region of primary and higher visual cortex and, intriguingly, in both parietal and frontal lobe regions adjacent to core MD regions. In the monkey, frontal neurons show a strong pattern of nonlinear mixed selectivity, with activity reflecting specific conjunctions of task events. In our data, however, there was limited evidence for mixed selectivity; throughout the brain, discriminations of task and stimulus combined largely linearly, with a small nonlinear component. In MD regions, human fMRI data recapitulate some but not all aspects of electrophysiological data from nonhuman primates.
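The linear-versus-nonlinear question above can be made concrete with a two-way decomposition of condition-mean multivertex patterns into additive task and stimulus components plus their interaction, the interaction being the "nonlinear mixed selectivity" term. The sketch below is a toy illustration under simplified assumptions (condition means already extracted per parcel; no cross-validation or noise modeling, unlike a real analysis), not the authors' pipeline.

```python
import numpy as np

def decompose_condition_patterns(patterns):
    """patterns: (n_tasks, n_categories, n_vertices) condition-mean responses
    for one parcel. Returns the fraction of pattern variance explained by the
    task main effect, the category main effect, and their interaction."""
    grand = patterns.mean(axis=(0, 1), keepdims=True)
    task = patterns.mean(axis=1, keepdims=True) - grand      # additive task term
    cat = patterns.mean(axis=0, keepdims=True) - grand       # additive category term
    interaction = patterns - grand - task - cat              # nonlinear residual
    ss = lambda a: float((np.broadcast_to(a, patterns.shape) ** 2).sum())
    total = ss(patterns - grand)
    return {"task": ss(task) / total,
            "category": ss(cat) / total,
            "interaction": ss(interaction) / total}

# Toy parcel: 3 tasks x 2 stimulus categories x 500 vertices, task-dominated.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(3, 2, 500)) + rng.normal(size=(3, 1, 500))
print(decompose_condition_patterns(patterns))
```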
Subject(s)
Magnetic Resonance Imaging, Magnetic Resonance Imaging/methods, Humans, Male, Adult, Female, Short-Term Memory/physiology, Young Adult, Brain/physiology, Brain/diagnostic imaging, Connectome/methods, Photic Stimulation/methods, Brain Mapping/methods, Nerve Net/physiology, Nerve Net/diagnostic imaging, Cognition/physiology
ABSTRACT
Aberrations in non-verbal social cognition have been reported to coincide with major depressive disorder. Yet little is known about the role of the eyes. To fill this gap, the present study explores whether and, if so, how reading the language of the eyes is altered in depression. For this purpose, patients and person-by-person matched typically developing individuals were administered the Emotions in Masked Faces task and a modified Reading the Mind in the Eyes Test, both of which made a comparable amount of visual information available. To achieve group homogeneity, we focused on females, as major depressive disorder displays a gender-specific profile. The findings show that facial masks selectively affect the inference of emotions: recognition of sadness and anger is more heavily compromised in major depressive disorder than in typically developing controls, whereas recognition of fear, happiness, and neutral expressions remains unhindered. Disgust, the forgotten emotion of psychiatry, is the least recognizable emotion in both groups. On the Reading the Mind in the Eyes Test, patients exhibit lower accuracy on positive expressions than their typically developing peers, but do not differ on negative items. In both depressive and typically developing individuals, the ability to recognize emotions behind a mask and performance on the Reading the Mind in the Eyes Test are linked in processing speed, but not in recognition accuracy. The outcome provides a blueprint for understanding the complexities of reading the language of the eyes within and beyond the COVID-19 pandemic.
Subject(s)
Major Depressive Disorder, Emotions, Facial Expression, Humans, Female, Adult, Emotions/physiology, Major Depressive Disorder/psychology, Major Depressive Disorder/physiopathology, Young Adult, Facial Recognition/physiology, Middle Aged, COVID-19/psychology, Reading
ABSTRACT
A prominent aspect of primate lateral prefrontal cortex organization is its division into several cytoarchitecturally distinct subregions. Neurophysiological investigations in macaques have provided evidence for the functional specialization of these subregions, but an understanding of the relative representational topography of sensory, social, and cognitive processes within them remains elusive. One explanatory factor is that evidence for functional specialization has been compiled largely from a patchwork of findings across studies, in many animals, and with considerable variation in stimulus sets and tasks. Here, we addressed this by leveraging the common marmoset (Callithrix jacchus) to carry out large-scale neurophysiological mapping of the lateral prefrontal cortex using high-density microelectrode arrays and a diverse suite of test stimuli and tasks, including faces, marmoset calls, and a spatial working memory task. Task-modulated units and units responsive to visual and auditory stimuli were distributed throughout the lateral prefrontal cortex, while those with saccade-related activity or face-selective responses were restricted to areas 8aV, 8aD, 10, 46V, and 47. Neurons with contralateral visual receptive fields were limited to areas 8aV and 8aD. These data reveal a mixed pattern of functional specialization in the lateral prefrontal cortex, in which responses to some stimuli and tasks are distributed broadly across lateral prefrontal cortex subregions, while others are more limited in their representation.
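One common way to label a unit as face-selective, shown here purely as an illustration (the threshold, baseline handling, and response windows below are assumptions, not the study's criteria), is a selectivity index contrasting mean evoked responses to faces versus non-face stimuli.

```python
import numpy as np

def face_selectivity_index(face_rates, nonface_rates):
    """FSI = (F - NF) / (F + NF), from mean evoked firing rates (Hz).
    Values near +1 indicate a strong face preference, near -1 a non-face preference."""
    f, nf = np.mean(face_rates), np.mean(nonface_rates)
    return (f - nf) / (f + nf + 1e-12)

# Example: a unit firing at ~24 Hz to faces and ~9 Hz to non-face objects.
print(face_selectivity_index([22, 25, 25], [10, 8, 9]))  # ~0.45
```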
Subject(s)
Callithrix, Prefrontal Cortex, Animals, Prefrontal Cortex/physiology, Male, Female, Brain Mapping, Short-Term Memory/physiology, Photic Stimulation/methods, Neurons/physiology, Acoustic Stimulation, Saccades/physiology, Auditory Perception/physiology, Animal Vocalization/physiology
ABSTRACT
Emotional communication relies on a mutual understanding, between expresser and viewer, of the facial configurations that broadcast specific emotions. However, we do not know whether people share a common understanding of how emotional states map onto facial expressions. This is because expressions exist in a high-dimensional space too large to explore in conventional experimental paradigms. Here, we address this by adapting genetic algorithms and combining them with photorealistic three-dimensional avatars to efficiently explore the high-dimensional expression space. A total of 336 people used these tools to generate facial expressions that represent happiness, fear, sadness, and anger. We found substantial variability in the expressions generated via our procedure, suggesting that different people associate different facial expressions with the same emotional state. We then examined whether variability in the facial expressions created could account for differences in performance on standard emotion recognition tasks by asking people to categorize different test expressions. We found that emotion categorization performance was explained by the extent to which test expressions matched the expressions generated by each individual. Our findings reveal the breadth of variability in people's representations of facial emotions, even among typical adult populations. This has profound implications for the interpretation of responses to emotional stimuli, which may reflect individual differences in the emotional category people attribute to a particular facial expression, rather than differences in the brain mechanisms that produce emotional responses.
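A stripped-down version of the kind of genetic algorithm described above is sketched below, assuming an expression is encoded as a real-valued vector of facial action weights and that a participant's rating of each rendered candidate serves as the fitness function. The encoding, rendering, and rating interface are hypothetical placeholders, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_GENES, POP, GENERATIONS, MUT_SD = 40, 30, 20, 0.05

def evolve(rate_fn):
    """rate_fn(genome) -> fitness; in the real task this would be a
    participant's rating of the avatar rendered from 'genome'."""
    pop = rng.uniform(0, 1, size=(POP, N_GENES))
    for _ in range(GENERATIONS):
        fitness = np.array([rate_fn(g) for g in pop])
        # Rank-based selection: keep the top half as parents.
        parents = pop[np.argsort(fitness)[::-1][: POP // 2]]
        # Uniform crossover between random parent pairs.
        idx_a = rng.integers(0, len(parents), POP)
        idx_b = rng.integers(0, len(parents), POP)
        mask = rng.random((POP, N_GENES)) < 0.5
        children = np.where(mask, parents[idx_a], parents[idx_b])
        # Gaussian mutation keeps exploring the expression space.
        pop = np.clip(children + rng.normal(0, MUT_SD, children.shape), 0, 1)
    return pop[np.argmax([rate_fn(g) for g in pop])]

# Toy fitness: distance to a hidden "target expression" stands in for a rating.
target = rng.uniform(0, 1, N_GENES)
best = evolve(lambda g: -np.linalg.norm(g - target))
```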
Subject(s)
Facial Recognition, Individuality, Adult, Humans, Facial Expression, Emotions/physiology, Anger/physiology, Algorithms
ABSTRACT
The correct identification of facial expressions is critical for understanding the intentions of others during social communication in the daily life of all primates. Here we used ultra-high-field fMRI at 9.4 T to investigate the neural network activated by facial expressions in awake male and female common marmosets, a New World primate, and to determine the effect of facial motion on this network. We further explored how the face-patch network is involved in the processing of facial expressions. Our results show that dynamic and static facial expressions activate face patches in temporal and frontal areas (O, PV, PD, MD, AD, and PL) as well as in the amygdala, with stronger responses to negative faces, which were also associated with an increase in the monkeys' respiration rates. Processing of dynamic facial expressions involves an extended network recruiting additional regions not known to be part of the face-processing network, suggesting that facial motion may facilitate the recognition of facial expressions. We report for the first time in New World marmosets that the perception and identification of changeable facial expressions, vital for social communication, recruit face-selective brain patches also involved in face detection and are associated with increased arousal. SIGNIFICANCE STATEMENT: Recent research in humans and nonhuman primates has highlighted the importance of correctly recognizing and processing facial expressions to understand others' emotions in social interactions. The current study focuses on fMRI responses to emotional facial expressions in the common marmoset (Callithrix jacchus), a New World primate species sharing several similarities in social behavior with humans. Our results reveal that temporal and frontal face patches are involved in both basic face detection and facial expression processing. The specific recruitment of these patches for negative faces, associated with an increase in arousal, shows that marmosets process the facial expressions of their congeners, which is vital for social communication.
Subject(s)
Callithrix, Facial Expression, Humans, Animals, Male, Female, Brain Mapping, Brain/diagnostic imaging, Brain/physiology, Emotions/physiology, Magnetic Resonance Imaging
ABSTRACT
Repeated exposure to a stimulus results in reduced neural response, or repetition suppression, in brain regions responsible for processing that stimulus. This rapid accommodation to repetition is thought to underlie learning, stimulus selectivity, and the strengthening of perceptual expectations. Importantly, reduced sensitivity to repetition has been identified in several neurodevelopmental, learning, and psychiatric disorders, including autism spectrum disorder (ASD), a neurodevelopmental disorder characterized by challenges in social communication and by repetitive behaviors and restricted interests. A reduced ability to exploit or learn from repetition in ASD is hypothesized to contribute to sensory hypersensitivities, and parallels several theoretical frameworks claiming that ASD individuals show difficulty using regularities in the environment to facilitate behavior. Using fMRI in autistic and neurotypical human adults (females and males), we assessed the status of repetition suppression across two modalities (vision, audition) and four stimulus categories (faces, objects, printed words, and spoken words). ASD individuals showed a domain-specific reduction in repetition suppression for face stimuli but not for objects, printed words, or spoken words. Reduced repetition suppression for faces was associated with greater challenges in social communication in ASD. We also found altered functional connectivity between atypically adapting cortical regions and higher-order face recognition regions, and microstructural differences in related white matter tracts in ASD. These results suggest that fundamental neural mechanisms and system-wide circuits are selectively altered for face processing in ASD and enhance our understanding of how disruptions in the formation of stable face representations may relate to higher-order social communication processes. SIGNIFICANCE STATEMENT: A common finding in neuroscience is that repetition results in plasticity in stimulus-specific processing regions, reflecting selectivity and adaptation (repetition suppression [RS]). RS is reduced in several neurodevelopmental and psychiatric conditions, including autism spectrum disorder (ASD). Theoretical frameworks of ASD posit that reduced adaptation may contribute to associated challenges in social communication and sensory processing. However, the scope of RS differences in ASD is unknown. We examined RS for multiple categories across visual and auditory domains (faces, objects, printed words, spoken words) in autistic and neurotypical individuals. We found reduced RS in ASD for face stimuli only, together with altered functional connectivity between cortical face-recognition areas and differences in the microstructure of related white matter tracts. RS magnitude correlated with social communication challenges among autistic individuals.
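Repetition suppression is typically quantified as the drop in response from the first to a repeated presentation of a stimulus. The snippet below is a generic illustration of one such index (the ROI extraction, condition labels, and normalization are assumptions, not the study's contrast specification).

```python
import numpy as np

def repetition_suppression_index(first, repeated):
    """RS index per region: (novel - repeated) / |novel|, computed from mean
    beta estimates or percent-signal-change values for each presentation."""
    first, repeated = np.asarray(first, float), np.asarray(repeated, float)
    return (first - repeated) / (np.abs(first) + 1e-12)

# Example: a face-selective ROI responding 1.2% to novel and 0.8% to repeated faces.
print(repetition_suppression_index([1.2], [0.8]))  # ~0.33, i.e. 33% suppression
```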
Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Facial Recognition, Male, Adult, Female, Humans, Brain Mapping, Brain, Magnetic Resonance Imaging/methods
ABSTRACT
3,4-Methylenedioxymethamphetamine (MDMA) has long been used non-medically, and it is currently under investigation for its potential therapeutic benefits. Both uses may be related to its ability to enhance empathy, sociability, and emotional processing, and to its anxiolytic effects. However, the neural mechanisms underlying these effects, and their specificity to MDMA compared with other stimulants, are not yet fully understood. Here, using electroencephalography (EEG), we investigated the effects of MDMA and a prototypic stimulant, methamphetamine (MA), on early visual processing of socio-emotional stimuli in an oddball emotional faces paradigm. Specifically, we examined whether MDMA or MA enhances the processing of facial expressions, compared with placebo, during the early stages of visual perception. MDMA enhanced an event-related potential component sensitive to face detection (the N170), specifically for happy and angry expressions compared with neutral faces. MA did not affect this measure, and neither drug altered other components of the response to emotional faces. These findings provide novel insights into the neural mechanisms underlying the effects of MDMA on socio-emotional processing and may have implications for the therapeutic use of MDMA in the treatment of social anxiety and other psychiatric disorders.
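An N170 effect of the kind described above is usually quantified as a mean amplitude in a post-stimulus window over occipito-temporal channels, contrasted between conditions. The sketch below assumes already-epoched, baseline-corrected data; the sampling rate, window, and channel selection are illustrative conventions, not the study's parameters.

```python
import numpy as np

FS = 500                      # sampling rate (Hz), assumed
T0 = 0.2                      # epochs assumed to start 200 ms before face onset
N170_WIN = (0.13, 0.20)       # a typical N170 latency window, in seconds

def n170_amplitude(epochs, times):
    """Mean amplitude in the N170 window, averaged over trials and
    occipito-temporal channels. epochs: (n_trials, n_channels, n_times) in uV."""
    mask = (times >= N170_WIN[0]) & (times <= N170_WIN[1])
    return epochs[:, :, mask].mean()

times = np.arange(-T0, 0.6, 1 / FS)
happy = np.random.randn(80, 4, times.size)     # stand-ins for epoched EEG data
neutral = np.random.randn(80, 4, times.size)
effect = n170_amplitude(happy, times) - n170_amplitude(neutral, times)
```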
Subject(s)
Emotions, Facial Expression, N-Methyl-3,4-methylenedioxyamphetamine, Adult, Female, Humans, Male, Young Adult, Central Nervous System Stimulants/pharmacology, Electroencephalography/methods, Emotions/drug effects, Emotions/physiology, Facial Recognition/drug effects, Facial Recognition/physiology, Hallucinogens/pharmacology, Methamphetamine/pharmacology, N-Methyl-3,4-methylenedioxyamphetamine/pharmacology, N-Methyl-3,4-methylenedioxyamphetamine/administration & dosage, Visual Perception/drug effects, Visual Perception/physiology, Double-Blind Method
ABSTRACT
The center-periphery visual field axis guides early visual system organization: enhanced resources are devoted to central vision, leading to reduced peripheral performance relative to central vision (the behavioral eccentricity effect) for many visual functions. This center-periphery organization extends to high-order visual cortex where, for example, the well-studied face-sensitive fusiform face area (FFA) shows sensitivity to central vision and the place-sensitive parahippocampal place area (PPA) shows sensitivity to peripheral vision. As we have recently found that face perception is more sensitive to eccentricity than place perception, here we examined whether these behavioral findings reflect differences in the sensitivities of FFA and PPA to eccentricity. We hypothesized that FFA would show higher sensitivity to eccentricity than PPA, but that both regions' modulation by eccentricity would be invariant to the viewed category. We parametrically investigated (fMRI, n = 32) how FFA and PPA activations are modulated by eccentricity (≤8°) and category (upright/inverted faces/houses) while keeping stimulus size constant. As expected, FFA showed an overall higher sensitivity to eccentricity than PPA. However, both regions' activation modulations by eccentricity depended on the viewed category. In FFA, a reduction of activation with growing eccentricity (a "BOLD eccentricity effect") was found, with different amplitudes, for all categories. In PPA, however, qualitatively different modulations were found (e.g., at 8°, a mild BOLD eccentricity effect for houses but a reversed BOLD eccentricity effect for faces and no modulation for inverted faces). Our results emphasize that investigations of peripheral vision are critical to further our understanding of visual processing.
Subject(s)
Facial Recognition, Visual Cortex, Humans, Brain Mapping, Visual Perception/physiology, Visual Cortex/diagnostic imaging, Visual Cortex/physiology, Visual Fields, Facial Recognition/physiology, Magnetic Resonance Imaging, Visual Pattern Recognition/physiology, Photic Stimulation
ABSTRACT
Konrad Lorenz introduced the concept of a 'baby schema', suggesting that infants have specific physical features, such as a relatively large head, large eyes, and protruding cheeks, which function as an innate releaser to promote caretaking motivation in perceivers. Over the years, a large body of research has been conducted on the baby schema. However, two critical problems underlie the current literature. First, the term 'baby schema' is not used consistently among researchers. Some researchers use it to refer to infant stimuli (often faces) in comparison with adults (categorical usage), while others use it to refer to the extent to which features contribute to cuteness perception (spectrum usage). Second, cross-species continuity of the 'baby schema' has been assumed despite few empirical demonstrations. The evolutionary and comparative relevance of the concept is, therefore, debatable, and we cannot exclude the possibility that extreme sensitivity to the baby schema is a uniquely human trait. This article critically reviews the state of the existing literature and evaluates the significance of the baby schema from an evolutionary perspective.
Subject(s)
Biological Evolution, Humans, Infant, Face/anatomy & histology
ABSTRACT
The subcortical visual pathway to the amygdala has long been considered a rapid and crude stream for processing emotionally salient information, reliant on low spatial frequency (LSF) information. Recently, research has called this LSF dependency into question. To resolve this debate, we took advantage of an anatomical hemiretinal asymmetry, whereby the nasal hemiretina sends a higher proportion of information through the subcortical pathway than the temporal hemiretina. We recorded brain activity using electroencephalography (EEG) in human participants (N = 40) while they completed a monocular viewing paradigm. Pairs of faces (one fearful and one neutral, or both neutral) were projected simultaneously to the nasal and temporal hemiretina in three contrast-equated blocks, with faces filtered to contain (i) only LSF information, (ii) only high spatial frequency (HSF) information, or (iii) unfiltered, broadband spatial frequency (BSF) information. BSF fearful faces produced a greater naso-temporal asymmetry, with larger N170 amplitudes evoked by BSF faces in the nasal field, compared with HSF faces. Conversely, the naso-temporal asymmetry for LSF fearful faces did not differ from that for BSF or HSF faces. Collectively, these findings provide crucial evidence that the subcortical pathway carries combined spatial frequency visual signals, with a potential bias against HSF content.
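The LSF/HSF/BSF manipulation described above amounts to low- and high-pass spatial-frequency filtering of the face images. A generic sketch using a Gaussian filter in the Fourier domain is shown below; the cutoffs in cycles per image are arbitrary examples, not the study's values.

```python
import numpy as np

def spatial_frequency_filter(img, cutoff_cpi, keep="low"):
    """Low- or high-pass filter a grayscale image with a Gaussian gain function
    in the Fourier domain. cutoff_cpi: filter SD in cycles per image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h            # vertical frequencies, cycles/image
    fx = np.fft.fftfreq(w) * w            # horizontal frequencies, cycles/image
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    lowpass = np.exp(-(radius ** 2) / (2 * cutoff_cpi ** 2))
    gain = lowpass if keep == "low" else 1 - lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

face = np.random.rand(256, 256)                         # stand-in for a face image
lsf = spatial_frequency_filter(face, 8, keep="low")     # keep roughly <8 cycles/image
hsf = spatial_frequency_filter(face, 24, keep="high")   # keep roughly >24 cycles/image
```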
Subject(s)
Electroencephalography, Emotions, Humans, Male, Female, Adult, Young Adult, Visual Pathways/physiology, Facial Expression, Photic Stimulation, Fear
ABSTRACT
Initial impressions of others based on facial appearance are often inaccurate yet can lead to dire outcomes. Across four studies, adult participants underwent counterstereotype training designed to reduce their reliance on facial appearance in consequential social judgments of White male faces. In Studies 1 and 2, trustworthiness and sentencing judgments among control participants predicted whether real-world inmates had been sentenced to death versus life in prison, but these relationships were diminished among trained participants. In Study 3, a sequential priming paradigm demonstrated that the training abolished the relationship between even automatically and implicitly perceived trustworthiness and the inmates' life-or-death sentences. Study 4 extended these results to realistic decision-making, showing that training reduced the impact of facial trustworthiness on sentencing decisions even in the presence of decision-relevant information. Overall, our findings suggest that a counterstereotype intervention can mitigate the potentially harmful effects of relying on facial appearance in consequential social judgments.
Subject(s)
Judgment, Social Perception, Adult, Humans, Male, Trust, Stereotyping, Facial Expression, White People
ABSTRACT
BACKGROUND: Previous evidence suggests that early life complications (ELCs) interact with polygenic risk for schizophrenia (SCZ) in increasing risk for the disease. However, no studies have investigated this interaction on neurobiological phenotypes. Among these, anomalous emotion-related brain activity has been reported in SCZ, even if evidence of its link with SCZ-related genetic risk is not solid. Indeed, it is possible that this relationship is influenced by non-genetic risk factors. Thus, this study investigated the interaction between SCZ-related polygenic risk and ELCs on emotion-related brain activity. METHODS: 169 healthy participants (HP) in a discovery sample and 113 HP in a replication sample underwent functional magnetic resonance imaging (fMRI) during emotion processing, were categorized for history of ELCs, and were genome-wide genotyped. Polygenic risk scores (PRSs) were computed using SCZ-associated variants from the most recent genome-wide association study. Furthermore, 75 patients with SCZ also underwent fMRI during emotion processing to verify the consistency of their brain activity patterns with those associated with risk factors for SCZ in HP. RESULTS: Results in the discovery and replication samples indicated no main effect of PRS, but an interaction between PRS and ELCs in the left ventrolateral prefrontal cortex (VLPFC), where greater activity was associated with greater PRS only in the presence of ELCs. Moreover, patients with SCZ had a greater VLPFC response than HP. CONCLUSIONS: These results suggest that emotion-related VLPFC response lies in the path from genetic and non-genetic risk factors to the clinical presentation of SCZ, and may support an updated concept of intermediate phenotype that takes early non-genetic risk factors for SCZ into account.
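Conceptually, a polygenic risk score is a weighted risk-allele count, and the gene-by-environment test above amounts to a PRS x ELC interaction term in a regression on the brain readout. The sketch below illustrates both steps under simplified assumptions (no clumping, p-value thresholding, or covariates, unlike a real PRS pipeline; all arrays are synthetic stand-ins).

```python
import numpy as np
import statsmodels.api as sm

def polygenic_risk_score(dosages, effect_sizes):
    """dosages: (n_subjects, n_snps) risk-allele counts in {0, 1, 2};
    effect_sizes: GWAS log-odds ratios for the same SNPs."""
    return dosages @ effect_sizes

n, p = 169, 1000
rng = np.random.default_rng(1)
dosages = rng.integers(0, 3, size=(n, p))
betas = rng.normal(0, 0.01, p)
prs = polygenic_risk_score(dosages, betas)
elc = rng.integers(0, 2, n)                    # early life complications: 0/1
vlpfc = rng.normal(size=n)                     # stand-in for VLPFC activity

# Test the PRS x ELC interaction on brain activity.
X = sm.add_constant(np.column_stack([prs, elc, prs * elc]))
model = sm.OLS(vlpfc, X).fit()
print(model.params, model.pvalues[3])          # index 3 = interaction term
```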
Subject(s)
Emotions, Magnetic Resonance Imaging, Multifactorial Inheritance, Schizophrenia, Humans, Schizophrenia/physiopathology, Schizophrenia/genetics, Schizophrenia/diagnostic imaging, Male, Female, Adult, Emotions/physiology, Young Adult, Genome-Wide Association Study, Risk Factors, Genetic Predisposition to Disease, Prefrontal Cortex/physiopathology, Prefrontal Cortex/diagnostic imaging, Brain/physiopathology, Brain/diagnostic imaging, Healthy Volunteers, Middle Aged, Genetic Risk Score
ABSTRACT
BACKGROUND: Amygdala and dorsal anterior cingulate cortex responses to facial emotions have shown promise in predicting treatment response in medication-free major depressive disorder (MDD). Here, we examined their role in the pathophysiology of clinical outcomes in more chronic, difficult-to-treat forms of MDD. METHODS: Forty-five people with current MDD who had not responded to ⩾2 serotonergic antidepressants were enrolled and followed up over four months of standard primary care (n = 42 met pre-defined fMRI minimum quality thresholds). Prior to medication review, subliminal facial emotion fMRI was used to extract blood-oxygen-level-dependent effects for sad v. happy faces from two pre-registered, a priori defined regions: the bilateral amygdala and the dorsal/pregenual anterior cingulate cortex. Clinical outcome was the percentage change on the self-reported Quick Inventory of Depressive Symptomatology (16-item). RESULTS: We corroborated our pre-registered hypothesis (NCT04342299) that lower bilateral amygdala activation for sad v. happy faces predicted favorable clinical outcomes (rs[38] = 0.40, p = 0.01). In contrast, there was no effect for dorsal/pregenual anterior cingulate cortex activation (rs[38] = 0.18, p = 0.29), nor in voxel-based whole-brain analyses (voxel-based Family-Wise Error-corrected p < 0.05). Predictive effects were mainly driven by the right amygdala, whose response to happy faces was reduced in patients with higher anxiety levels. CONCLUSIONS: We confirmed the prediction that a lower amygdala response to negative v. positive facial expressions might be an adaptive neural signature that predicts subsequent symptom improvement also in difficult-to-treat MDD. Anxiety reduced adaptive amygdala responses.
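The brain-outcome association reported above is a rank correlation between a baseline fMRI contrast and later symptom change. A minimal sketch of that computation is given below; the arrays are made-up stand-ins for the per-patient amygdala contrast values and QIDS percentage changes.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
amygdala_sad_vs_happy = rng.normal(size=40)   # baseline BOLD contrast per patient
qids_percent_change = rng.normal(size=40)     # symptom change after four months

rho, p = spearmanr(amygdala_sad_vs_happy, qids_percent_change)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```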
ABSTRACT
Image content is prioritized in the visual system. Faces are a paradigmatic example, receiving preferential processing along the visual pathway compared with other visual stimuli. Moreover, face prioritization also manifests in behavior. People tend to look at faces more frequently and for longer periods, and saccadic reaction times can be faster when targeting a face as opposed to a phase-scrambled control. However, it is currently not clear at which stage image content affects oculomotor planning and execution. It can be hypothesized that image content directly influences oculomotor signal generation. Alternatively, image content could exert its influence on oculomotor planning and execution at a later stage, after the image has been processed. Here we aim to disentangle these two alternative hypotheses by measuring the frequency of saccades toward a visual target when the latter is followed by a visual transient in the central visual field. Behaviorally, this paradigm leads to a reduction in saccade frequency beginning about 90 ms after any visual transient event, a phenomenon known as saccadic "inhibition". In two experiments, we measured the occurrence of visually guided saccades as well as microsaccades during fixation, using face and noise-matched visual stimuli. We observed that, while the reduction in saccade occurrence was similar for both stimulus types, face stimuli led to a prolonged reduction in eye movements. Moreover, saccade kinematics were altered by both stimulus types, showing an amplitude reduction without a change in peak velocity for the earliest saccades. Taken together, our experiments imply that face stimuli primarily affect the later stages of the behavioral phenomenon of saccadic "inhibition". We propose that while some stimulus features are processed at an early stage and can quickly influence eye movements, a delayed signal conveying image content information is necessary to further inhibit/delay the oculomotor activity that triggers eye movements.
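Saccadic inhibition is usually visualized as a dip in the saccade-rate time course aligned to the visual transient. The sketch below shows how such a rate curve and the timing of its minimum could be computed from detected saccade onset times; the bin width, smoothing-free binning, and toy data are arbitrary choices, not the study's parameters.

```python
import numpy as np

def saccade_rate(onsets_ms, n_trials, t_range=(-200, 600), bin_ms=10):
    """Saccade rate (saccades/s) around a visual transient, from saccade onset
    latencies (ms, relative to the transient) pooled across trials."""
    edges = np.arange(t_range[0], t_range[1] + bin_ms, bin_ms)
    counts, _ = np.histogram(onsets_ms, bins=edges)
    rate = counts / (n_trials * bin_ms / 1000.0)
    return edges[:-1] + bin_ms / 2, rate

# Toy data: thin out saccades ~90-200 ms after the transient to mimic inhibition.
rng = np.random.default_rng(3)
onsets = rng.uniform(-200, 600, 4000)
drop = (onsets > 90) & (onsets < 200) & (rng.random(onsets.size) < 0.7)
t, rate = saccade_rate(onsets[~drop], n_trials=400)
post = t > 0
print("dip latency (ms):", t[post][np.argmin(rate[post])])
```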
Subject(s)
Saccades, Humans, Saccades/physiology, Adult, Female, Male, Young Adult, Facial Recognition/physiology, Reaction Time/physiology, Photic Stimulation/methods, Ocular Fixation/physiology
ABSTRACT
Microstates represent brief periods of quasi-stable electroencephalography (EEG) scalp topography, offering insights into dynamic fluctuations in event-related potential (ERP) topographies. However, a comprehensive systematic overview of microstate findings concerning cognitive face processing is lacking. This review aims to summarize ERP findings on face processing using microstate analyses and to assess their effectiveness in characterizing face-related neural representations. A literature search was conducted for microstate ERP studies involving healthy individuals and psychiatric populations, using the PubMed, Google Scholar, Web of Science, PsycINFO, and Scopus databases. Twenty-two studies were identified, primarily focusing on healthy individuals (n = 16), with a smaller subset examining psychiatric populations (n = 6). The evidence reviewed here suggests that various microstates are consistently associated with distinct ERP stages involved in face processing, from the processing of basic visual facial features to more complex functions such as analytical processing, facial recognition, and semantic representations. Furthermore, these studies shed light on atypical attentional neural mechanisms in Autism Spectrum Disorder (ASD), facial recognition deficits in emotional dysregulation disorders, and encoding and semantic dysfunctions in Post-Traumatic Stress Disorder (PTSD). In conclusion, this review underscores the practical utility of ERP microstate analyses for investigating face processing. Methodologies have evolved over time towards greater automation and data-driven approaches. Future research should aim to forecast clinical outcomes and conduct validation studies to directly demonstrate the efficacy of such analyses in inverse space.
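Microstate analysis typically clusters the scalp topographies at global field power (GFP) peaks into a small number of polarity-invariant template maps. Below is a compact, generic sketch of that idea, a simplified modified k-means; real toolboxes add polarity handling across runs, temporal smoothing, and back-fitting criteria that are omitted here.

```python
import numpy as np

def gfp(eeg):
    """Global field power: spatial SD of the scalp map at each time point.
    eeg: (n_channels, n_times), average-referenced."""
    return eeg.std(axis=0)

def microstate_kmeans(maps, n_states=4, n_iter=50, seed=0):
    """Polarity-invariant k-means on topographies (n_maps, n_channels)."""
    rng = np.random.default_rng(seed)
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)
    templates = maps[rng.choice(len(maps), n_states, replace=False)]
    for _ in range(n_iter):
        # Assign each map to the template with the highest |spatial correlation|.
        corr = maps @ templates.T
        labels = np.abs(corr).argmax(axis=1)
        for k in range(n_states):
            members = maps[labels == k]
            if len(members):
                # First principal component = polarity-invariant template map.
                _, _, vt = np.linalg.svd(members, full_matrices=False)
                templates[k] = vt[0]
    return templates, labels

# Toy usage: cluster topographies taken at GFP peaks of a random "ERP".
rng = np.random.default_rng(0)
erp = rng.normal(size=(64, 500))                      # channels x time
g = gfp(erp)
peaks = np.flatnonzero((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:])) + 1
templates, labels = microstate_kmeans(erp[:, peaks].T, n_states=4)
```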
Subject(s)
Brain, Electroencephalography, Evoked Potentials, Facial Recognition, Mental Disorders, Humans, Evoked Potentials/physiology, Facial Recognition/physiology, Electroencephalography/methods, Mental Disorders/physiopathology, Brain/physiopathology, Brain/physiology
ABSTRACT
Most event-related potential (ERP) studies investigating the time course of visual processing have focused mainly on the N170 component. Stimulus orientation affects the N170 amplitude for faces but not for objects, a finding interpreted as reflecting holistic/configural processing for faces and featural processing for objects. Furthermore, while recent studies suggest that where on the face people fixate impacts the N170, fixation location effects have not been investigated for objects. A data-driven mass univariate analysis (all time points and electrodes) was used to investigate the time course of inversion and fixation location effects on the neural processing of faces and houses. Strong and widespread orientation effects were found for both faces and houses, from 100 to 350 ms post-stimulus onset, including the P1 and N170 components and later, a finding arguing against a lack of holistic processing for houses. While no clear fixation effect was found for houses, fixation location strongly impacted face processing early on, reflecting retinotopic mapping around the C2 and P1 components, and during the N170-P2 interval. Face inversion effects were also largest for nasion fixation around 120 ms. The results support the view that facial feature integration (1) depends on which feature is being fixated and where the other features are situated in the visual field, (2) occurs maximally during the P1-N170 interval when fixation is on the nasion, and (3) continues past 200 ms, suggesting that the N170 peak, where weak effects were found, might be an inflection point between processes rather than the end point of the integration of features into a whole.
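A mass univariate analysis of the kind referred to above runs a test at every electrode and time point and then corrects for the resulting multiple comparisons. The sketch below uses paired t-tests with FDR correction purely as a compact illustration; the correction method, data layout, and dimensions are assumptions, not necessarily what the study used.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

def mass_univariate(cond_a, cond_b, alpha=0.05):
    """Paired t-test at every (electrode, time) point.
    cond_a, cond_b: (n_subjects, n_electrodes, n_times) ERP amplitudes."""
    t, p = ttest_rel(cond_a, cond_b, axis=0)        # maps of shape (n_electrodes, n_times)
    reject, _, _, _ = multipletests(p.ravel(), alpha=alpha, method="fdr_bh")
    return t, reject.reshape(p.shape)

# Stand-ins: 20 subjects, 64 electrodes, 300 time samples per condition.
upright = np.random.randn(20, 64, 300)
inverted = np.random.randn(20, 64, 300)
t_map, sig_map = mass_univariate(upright, inverted)
```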
Subject(s)
Electroencephalography, Ocular Fixation, Humans, Female, Male, Electroencephalography/methods, Young Adult, Adult, Ocular Fixation/physiology, Photic Stimulation/methods, Visual Pattern Recognition/physiology, Brain/physiology, Face, Facial Recognition/physiology, Adolescent
ABSTRACT
The defensive reaction to threats consists of two components: non-specific physiological arousal and specific attentional prioritization of the threatening stimulus, both of which are assumed by the so-called "low road" hypothesis to be induced automatically and unconsciously. Although ample evidence indicates that non-specific arousal can indeed be caused by unconscious threatening stimuli, data regarding the involvement of the attentional selection mechanism remain inconclusive. Therefore, in the present study we used ERPs to compare the potential engagement of attention in the perception of subliminal and supraliminal fearful facial expressions with that of neutral ones. In the conscious condition, fearful faces were preferentially encoded (as indicated by the N170 component) and prioritized by bottom-up (EPN) and spatial attention (N2pc) in an automatic, task-independent manner. Furthermore, consciously perceived fearful expressions engaged cognitive resources (SPCN, P3) when face stimuli were task-relevant. In the unconscious condition, fearful faces were still preferentially encoded (N170), but we found no evidence for any type of attentional prioritization. Therefore, by showing that threatening stimuli engage attention only when perceived consciously, our findings challenge the "low road" hypothesis and point to the limits of unconscious attentional selection.
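The N2pc referenced above is conventionally defined as the contralateral-minus-ipsilateral voltage difference at posterior electrodes relative to the side of the attended face. A bare-bones illustration is given below; the electrode pair, time window, and data layout are assumptions, not the study's exact settings.

```python
import numpy as np

N2PC_WIN = (0.18, 0.30)   # seconds post-stimulus, a typical N2pc window

def n2pc(epochs_left_target, epochs_right_target, times, ch_left, ch_right):
    """Contralateral minus ipsilateral mean amplitude over PO7/PO8-like sites.
    epochs_*: (n_trials, n_channels, n_times); ch_left/ch_right: channel indices."""
    w = (times >= N2PC_WIN[0]) & (times <= N2PC_WIN[1])
    contra = np.concatenate([epochs_left_target[:, ch_right, :][:, w],
                             epochs_right_target[:, ch_left, :][:, w]])
    ipsi = np.concatenate([epochs_left_target[:, ch_left, :][:, w],
                           epochs_right_target[:, ch_right, :][:, w]])
    return contra.mean() - ipsi.mean()

times = np.arange(-0.2, 0.6, 1 / 500)                     # assumed 500 Hz sampling
left = np.random.randn(100, 64, times.size)               # stand-in epochs
right = np.random.randn(100, 64, times.size)
print(n2pc(left, right, times, ch_left=25, ch_right=62))  # hypothetical PO7/PO8 indices
```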
Subject(s)
Fear, Mental Disorders, Humans, Fear/physiology, Attention/physiology, Evoked Potentials/physiology, Unconsciousness, Facial Expression, Electroencephalography
ABSTRACT
The self can be associated with arbitrary images, such as geometric figures or unknown faces. By adopting a cross-cultural perspective, we explored in two experiments whether the self can be associated with faces of unknown people from different ethnic groups. In Experiment 1, Asian Japanese participants completed a perceptual matching task, associating Asian or White faces with themselves. The same task was used in Experiment 2 with White Italians. Both experiments showed a reliable association between the self and facial stimuli. Importantly, this association was similar for both Asian and White faces. Additionally, no correlations were found between the strength of this association and an index of implicit bias towards Asian and White individuals. These results suggest that the self is malleable and can incorporate social stimuli from different groups.
Subject(s)
Asian People, Cross-Cultural Comparison, Facial Recognition, White People, Humans, Female, Male, Adult, Young Adult, Facial Recognition/physiology, White People/ethnology, Japan/ethnology, Social Perception, Italy/ethnology, Self Concept
ABSTRACT
Portrait viewpoint and illumination editing is an important problem with several applications in VR/AR, movies, and photography. Comprehensive knowledge of geometry and illumination is critical for obtaining photorealistic results. Current methods are unable to model a portrait explicitly in 3D while handling both viewpoint and illumination editing from a single image. In this paper, we propose VoRF, a novel approach that can take even a single portrait image as input and relight the human head under novel illuminations that can be viewed from arbitrary viewpoints. VoRF represents a human head as a continuous volumetric field and learns a prior model of human heads using a coordinate-based MLP with individual latent spaces for identity and illumination. The prior model is learned in an auto-decoder manner over a diverse class of head shapes and appearances, allowing VoRF to generalize to novel test identities from a single input image. Additionally, VoRF has a reflectance MLP that uses the intermediate features of the prior model to render One-Light-at-A-Time (OLAT) images under novel views. We synthesize novel illuminations by combining these OLAT images with target environment maps. Qualitative and quantitative evaluations demonstrate the effectiveness of VoRF for relighting and novel view synthesis, even when applied to unseen subjects under uncontrolled illumination. This work is an extension of Rao et al. (VoRF: Volumetric Relightable Faces, 2022). We provide extensive evaluations and ablation studies of our model and also present an application in which any face can be relit using textual input.
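The relighting step described above, combining OLAT images with a target environment map, is at heart a linear weighted sum of the per-light renderings. A simplified sketch is shown below, assuming the environment map has already been resampled to one RGB intensity per OLAT light direction; this is an illustration of the general OLAT-compositing idea, not VoRF's implementation.

```python
import numpy as np

def relight_from_olat(olat_images, env_weights):
    """olat_images: (n_lights, H, W, 3) renderings, one per point light;
    env_weights: (n_lights, 3) RGB intensities of the target environment map
    sampled at the corresponding light directions. Returns the relit image."""
    return np.einsum("lhwc,lc->hwc", olat_images, env_weights)

n_lights, H, W = 150, 256, 256
olat = np.random.rand(n_lights, H, W, 3)       # stand-ins for rendered OLAT images
env = np.random.rand(n_lights, 3) / n_lights   # resampled environment map intensities
relit = np.clip(relight_from_olat(olat, env), 0, 1)
```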