Results 1 - 20 of 53
1.
Cortex; 174: 93-109, 2024 May.
Article in English | MEDLINE | ID: mdl-38493568

ABSTRACT

In contrast to the extensive research on the processing of subliminal and/or unattended emotional facial expressions, only a minority of studies have investigated the neural correlates of consciousness (NCCs) of emotions conveyed by faces. In the present high-density electroencephalography (EEG) study, we first employed a staircase procedure to identify each participant's perceptual threshold for the emotion expressed by a face and then compared the EEG signals elicited in trials in which participants were aware of the emotion with the activity elicited in trials in which they were unaware of the emotion expressed by these otherwise identical faces. Drawing on existing knowledge of the neural mechanisms of face processing and NCCs, we hypothesized that activity in frontal electrodes would be modulated in relation to participants' awareness of facial emotional content. More specifically, we hypothesized that the NCC of fear seen on someone else's face could be detected as a modulation of a later and more anterior (i.e., at frontal sites) event-related potential (ERP) than the face-sensitive N170. By applying a data-driven approach and cluster-based statistics to the analysis of the EEG signals, we obtained clear-cut results showing that visual awareness of fear was associated with the modulation of a frontal ERP component in a 150-300 msec interval. These insights are dissected and contextualized in relation to prevailing theories of visual consciousness and their proposed NCC benchmarks.
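The abstract does not specify which staircase rule was used; as an illustration only, a generic 1-up/2-down adaptive staircase (a common choice that converges on roughly 70.7% detection) for estimating an emotion-intensity threshold might be sketched as follows, with the `present_trial` callback being a hypothetical stand-in for a single experimental trial:

```python
import numpy as np

def one_up_two_down_staircase(present_trial, n_trials=80,
                              start_intensity=0.8, step=0.05,
                              min_intensity=0.0, max_intensity=1.0):
    """Generic 1-up/2-down staircase converging on ~70.7% correct.

    `present_trial(intensity)` is a hypothetical callback that runs one
    trial at the given emotion intensity and returns True if the
    participant reported (was aware of) the emotion.
    """
    intensity = start_intensity
    consecutive_correct = 0
    reversals = []
    last_direction = 0          # -1 = stepping down, +1 = stepping up

    for _ in range(n_trials):
        if present_trial(intensity):
            consecutive_correct += 1
            if consecutive_correct == 2:        # two in a row -> harder
                consecutive_correct = 0
                if last_direction == +1:
                    reversals.append(intensity)
                intensity = max(min_intensity, intensity - step)
                last_direction = -1
        else:
            consecutive_correct = 0             # one error -> easier
            if last_direction == -1:
                reversals.append(intensity)
            intensity = min(max_intensity, intensity + step)
            last_direction = +1

    # Threshold estimate: mean intensity over the last few reversals
    return np.mean(reversals[-6:]) if reversals else intensity
```

In practice the threshold returned by such a routine would then define the stimulus intensity used in the aware/unaware comparison described above.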


Subject(s)
Consciousness, Facial Recognition, Humans, Electroencephalography, Fear, Emotions, Evoked Potentials, Facial Expression
2.
Front Psychol; 15: 1335857, 2024.
Article in English | MEDLINE | ID: mdl-38544511

ABSTRACT

Deciding where to direct our vehicle in a crowded parking area or where to line up at an airport gate relies on our ability to appraise the numerosity of multitudes at a glimpse and react accordingly. Approximating numerosities without actually counting is an ontogenetically and phylogenetically primordial ability, given its presence in human infants shortly after birth and in primate and non-primate animal species. Prior research in the field suggested that numerosity approximation is a ballistic automatism that has little to do with human cognition as commonly understood. Here, we measured visual working memory capacity using a state-of-the-art change detection task and numerosity approximation using a dot-comparison task, and found a null correlation between the two measures. By checking the evidential strength of the tested correlation using both classic and Bayesian analytical approaches, as well as the construct validity of the working memory capacity and numerosity approximation estimates, we concluded that the present psychophysical evidence was sufficiently strong to support the view that visual working memory and numerosity approximation rely on functionally independent stages of processing within the human cognitive architecture.
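Visual working memory capacity in change detection tasks is commonly summarized with Cowan's K = set size x (hit rate - false alarm rate); a minimal sketch of that estimate and of the frequentist side of the correlation analysis, on made-up per-participant data, could look like this:

```python
import numpy as np
from scipy import stats

def cowans_k(hits, misses, false_alarms, correct_rejections, set_size):
    """Cowan's K estimate of visual working memory capacity:
    K = set_size * (hit_rate - false_alarm_rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Hypothetical per-participant trial counts (change detection, set size 4)
rng = np.random.default_rng(0)
n = 40
hits = rng.integers(30, 48, n)
misses = 48 - hits
fas = rng.integers(2, 20, n)
crs = 48 - fas
k = cowans_k(hits, misses, fas, crs, set_size=4)

# Hypothetical numerosity-approximation accuracy (dot-comparison task)
numerosity_acc = rng.uniform(0.6, 0.95, n)

r, p = stats.pearsonr(k, numerosity_acc)
print(f"r = {r:.2f}, p = {p:.3f}")   # a null correlation: small r, p > .05
```

A Bayesian counterpart (e.g., a Bayes factor for the correlation) would quantify the strength of evidence for the null, which is the inference the abstract relies on.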

3.
Cogn Emot; 38(2): 267-275, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37997901

ABSTRACT

This study explored how congruency between facial mimicry and observed expressions affects the stability of conscious representations of facial expressions. Focusing on the congruency effect between proprioceptive/sensorimotor signals and visual stimuli for happy expressions, participants underwent a binocular rivalry task displaying neutral and happy faces. Mimicry was either facilitated with a chopstick or left unrestricted. Key metrics included the Initial Percept (a bias indicator), Onset Resolution Time (the time from stimulus onset to the Initial Percept), and Cumulative Time (a measure of content stabilization). Results indicated that the mimicry manipulation significantly affected Cumulative Time for happy faces, highlighting the importance of congruent mimicry in stabilizing conscious awareness of facial expressions. This supports embodied cognition models, showing that the integration of proprioceptive information significantly biases conscious visual perception of facial expressions.
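The three metrics can be derived from a continuous record of which percept is reported at each sample; a rough sketch, assuming dominance is logged as a per-sample code (0 = mixed/no report, 1 = neutral face, 2 = happy face):

```python
import numpy as np

def rivalry_metrics(dominance, sample_rate=60.0):
    """Summarize one binocular-rivalry trial from a per-sample dominance
    trace (0 = mixed/no report, 1 = neutral face, 2 = happy face).

    Returns
    -------
    initial_percept : int        first exclusive percept reported (1 or 2)
    onset_resolution_time : float  seconds from trial onset to that report
    cumulative_time : dict       total exclusive dominance per percept (s)
    """
    dominance = np.asarray(dominance)
    exclusive = np.flatnonzero(dominance > 0)
    if exclusive.size == 0:
        return None, None, {1: 0.0, 2: 0.0}
    first = exclusive[0]
    initial_percept = int(dominance[first])
    onset_resolution_time = first / sample_rate
    cumulative_time = {p: float(np.sum(dominance == p)) / sample_rate
                       for p in (1, 2)}
    return initial_percept, onset_resolution_time, cumulative_time

# Hypothetical 10-second trial sampled at 60 Hz
trial = np.concatenate([np.zeros(30), np.full(200, 2), np.zeros(40),
                        np.full(330, 1)])
print(rivalry_metrics(trial))
```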


Subject(s)
Facial Expression, Happiness, Humans, Visual Perception, Face, Emotions
4.
Emotion; 24(3): 602-616, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37676163

ABSTRACT

According to sensorimotor simulation models, recognition of another person's emotion is achieved by recreating in oneself the motor production of the perceived facial expression. Therefore, congenital difficulties in producing facial expressions may affect emotion processing. The present study assessed a sample (N = 11) of Moebius syndrome (MBS) patients and a matched control group (N = 33) using a highly sensitive emotion recognition task. Leveraging the uniqueness of MBS, which is characterized by congenital facial paralysis, we investigated the role of facial mimicry and sensorimotor simulation in creating precise embodied concepts of emotion categories. In particular, the research focused on how MBS patients (both as a group and individually, compared to controls) perceived the intensity of primary emotions and how well they discriminated between these and secondary (i.e., blended) emotions. The results showed that MBS patients registered significantly lower intensities for sadness, fear, anger, and disgust. Furthermore, these emotions appeared closely clustered (and were therefore confused with anger and surprise) in the multidimensional scaling map, which was used to qualitatively analyze the emotion perception space. Further analysis of each MBS participant showed a stronger tendency in most patients to perceive primary emotions as less intense relative to controls. Thus, the findings provide evidence for a residual deficit in emotion processing in adults with MBS.
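The multidimensional scaling map mentioned above can be reproduced in outline with scikit-learn from a pairwise dissimilarity matrix; the matrix below is a made-up stand-in for the real rating data:

```python
import numpy as np
from sklearn.manifold import MDS

emotions = ["happiness", "surprise", "fear", "sadness", "anger", "disgust"]

# Hypothetical symmetric dissimilarity matrix (0 = perceived as identical)
D = np.array([
    [0.0, 0.6, 0.9, 0.8, 0.9, 0.9],
    [0.6, 0.0, 0.5, 0.7, 0.6, 0.7],
    [0.9, 0.5, 0.0, 0.4, 0.5, 0.6],
    [0.8, 0.7, 0.4, 0.0, 0.4, 0.5],
    [0.9, 0.6, 0.5, 0.4, 0.0, 0.3],
    [0.9, 0.7, 0.6, 0.5, 0.3, 0.0],
])

# Metric MDS on the precomputed dissimilarities -> 2-D perception space
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)

for name, (x, y) in zip(emotions, coords):
    print(f"{name:>10s}: ({x:+.2f}, {y:+.2f})")
# Emotions that are frequently confused end up close together in the map.
```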


Subject(s)
Facial Paralysis, Facial Recognition, Adult, Humans, Emotions, Fear, Anger, Facial Expression, Perception
5.
Front Hum Neurosci; 17: 1145653, 2023.
Article in English | MEDLINE | ID: mdl-37284480

ABSTRACT

Contents of consciousness change over time. However, the study of dynamics in consciousness has been largely neglected. Aru and Bachmann have recently drawn the attention of consciousness scientists to the relevance of inquiring about its temporal evolution. Importantly, they also formulated several experimental questions as guidelines for researchers interested in studying the temporal evolution of consciousness, including the phases of formation and dissolution of content. They further suggested that these two phases could be characterized by asymmetric inertia. The main objective of the present investigation was to approximate the dynamics of these two phases in the context of conscious face perception. To this aim, we tested the time course of content transitions during a binocular rivalry task using face stimuli and asked participants to report their subjective experience of transitions from one content to the other with a joystick. We then computed metrics of joystick velocity linked to content transitions as proxies of the formation and dissolution phases. We found a general phase effect such that the formation phase was slower than the dissolution phase. Furthermore, we observed an effect specific to happy facial expressions, whose contents were slower to form and to dissolve than those of neutral expressions. We further propose to include a third phase, the stabilization of conscious content, between formation and dissolution.
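Joystick velocity as a proxy for formation and dissolution can be computed by splitting the trace into samples moving toward versus away from a percept's endpoint; a rough sketch under the assumption that the joystick position is sampled continuously between -1 and +1 (the two rivalrous contents):

```python
import numpy as np

def phase_velocities(position, sample_rate=60.0):
    """Split a joystick trace into formation (moving toward an endpoint)
    and dissolution (moving away from it) and return the mean absolute
    velocity of each phase. Assumes position in [-1, 1], where the
    extremes code full dominance of one of the two contents."""
    position = np.asarray(position, dtype=float)
    velocity = np.gradient(position) * sample_rate        # units / s
    toward = np.sign(velocity) == np.sign(position)       # toward an extreme
    moving = np.abs(velocity) > 1e-3
    formation = np.abs(velocity[moving & toward])
    dissolution = np.abs(velocity[moving & ~toward])
    return (formation.mean() if formation.size else np.nan,
            dissolution.mean() if dissolution.size else np.nan)

# Hypothetical trace: slow rise to +1 (formation), faster return to 0
trace = np.concatenate([np.linspace(0, 1, 120), np.linspace(1, 0, 40)])
print(phase_velocities(trace))   # formation slower than dissolution
```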

6.
Sci Rep; 13(1): 9951, 2023 Jun 19.
Article in English | MEDLINE | ID: mdl-37337009

ABSTRACT

Current knowledge of how the focus of attention during face processing influences neural responses largely comes from neuroimaging studies reporting regional brain activations. The present study was designed to add novel insights to this research by examining how attention can differentially affect the way cortical regions interact during emotional face processing. High-density electroencephalography was recorded in a sample of fifty-two healthy participants during an emotional face processing task. The task required participants either to attend to the expressions (i.e., overt processing) or to attend to a perceptual distractor, which rendered the expressions task-irrelevant (i.e., covert processing). Functional connectivity in the alpha band was estimated in source space and modeled using graph theory to quantify whole-brain integration and segregation. Results revealed that overt processing of facial expressions is linked to reduced cortical segregation and increased cortical integration, the latter specifically for negative expressions of fear and sadness. Furthermore, we observed increased communication efficiency between the core and the extended face processing systems during overt processing of negative expressions. Overall, these findings reveal that attention makes the interaction among the nodes involved in face processing more efficient, and they uncover a connectivity signature of the prioritized processing of negative expressions, namely increased cross-communication among the nodes of the face processing network.
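Whole-brain integration and segregation are commonly operationalized as the global efficiency and the mean clustering coefficient of the connectivity graph; a sketch with networkx, assuming an alpha-band connectivity matrix has already been estimated in source space (the matrix below is random stand-in data, and the proportional threshold is an assumption):

```python
import numpy as np
import networkx as nx

def integration_segregation(conn, density=0.2):
    """Binarize a connectivity matrix at the given edge density and return
    (global efficiency, mean clustering coefficient) as proxies of
    network integration and segregation."""
    conn = np.asarray(conn, dtype=float)
    np.fill_diagonal(conn, 0.0)
    # keep the strongest `density` fraction of edges
    upper = conn[np.triu_indices_from(conn, k=1)]
    threshold = np.quantile(upper, 1.0 - density)
    adjacency = (conn >= threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    G = nx.from_numpy_array(adjacency)
    return nx.global_efficiency(G), nx.average_clustering(G)

# Hypothetical alpha-band connectivity among 68 cortical sources
rng = np.random.default_rng(1)
conn = rng.uniform(0, 1, (68, 68))
conn = (conn + conn.T) / 2            # make the matrix symmetric
integration, segregation = integration_segregation(conn)
print(f"integration (global efficiency) = {integration:.2f}, "
      f"segregation (mean clustering) = {segregation:.2f}")
```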


Subject(s)
Facial Recognition, Humans, Facial Recognition/physiology, Emotions/physiology, Brain/physiology, Fear, Electroencephalography, Brain Mapping, Facial Expression, Magnetic Resonance Imaging
7.
Soc Cogn Affect Neurosci; 18(1), 2023 Jun 12.
Article in English | MEDLINE | ID: mdl-37243725

ABSTRACT

The space surrounding the body [i.e., peripersonal space (PPS)] has a crucial impact on individuals' interactions with the environment. Research has shown that interaction within the PPS increases individuals' behavioral and neural responses. Furthermore, individuals' empathy is affected by the distance between them and the observed stimuli. This study investigated empathic responses to painfully stimulated or gently touched faces presented within the PPS, depending on the presence vs. absence of a transparent barrier erected to prevent the interaction. To this aim, participants had to determine whether faces were painfully stimulated or gently touched while their electroencephalographic signals were recorded. Brain activity [i.e., event-related potentials (ERPs) and source activations] was compared separately for the two types of stimuli (i.e., gently touched vs. painfully stimulated faces) across two barrier conditions: (i) no barrier between participants and the screen (the no-barrier condition) and (ii) a plexiglass barrier erected between participants and the screen (the barrier condition). While the barrier did not affect performance behaviorally, it reduced cortical activation at both the ERP and source-activation levels in brain areas that regulate interpersonal interaction (i.e., primary somatosensory and premotor cortices and the inferior frontal gyrus). These findings suggest that the barrier, by precluding the possibility of interacting, reduced the observer's empathy.


Subject(s)
Empathy, Personal Space, Humans, Evoked Potentials/physiology, Electroencephalography, Brain, Space Perception/physiology
8.
Front Syst Neurosci; 17: 1123221, 2023.
Article in English | MEDLINE | ID: mdl-37215358

ABSTRACT

Moebius syndrome (MBS) is characterized by the congenital absence or underdevelopment of cranial nerves VII and VI, leading to facial palsy and impaired lateral eye movements. As a result, MBS individuals cannot produce facial expressions and have never developed motor programs for facial expressions. The latest sensorimotor simulation models propose an iterative communication between somatosensory, motor/premotor, and visual regions, which should allow more efficient discrimination among subtle facial expressions. Accordingly, individuals with congenital facial motor disability, specifically those with MBS, should exhibit atypical communication within this network. Here, we aimed to test this facet of sensorimotor simulation models. We estimated the functional connectivity between the visual cortices for face processing and the sensorimotor cortices in healthy and MBS individuals. To this aim, we studied the strength of beta-band functional connectivity between these two systems using high-density EEG, combined with a change detection task with facial expressions (and a control condition involving non-face stimuli). The results supported our hypothesis: when discriminating subtle facial expressions, participants affected by congenital facial palsy (compared to healthy controls) showed reduced connectivity strength between sensorimotor regions and visual regions for face processing. This effect was absent in the condition with non-face stimuli. These findings support sensorimotor simulation models and the communication between sensorimotor and visual areas during subtle facial expression processing.
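One standard estimator of band-limited coupling between two regions is the phase-locking value computed on band-pass-filtered, Hilbert-transformed signals; the abstract does not state which connectivity estimator was used, so the sketch below (with an assumed 13-30 Hz beta band and made-up channel-cluster signals) is purely illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_plv(x, y, sfreq, band=(13.0, 30.0)):
    """Phase-locking value between two signals in the beta band.

    x, y : 1-D arrays of equal length, e.g. the averages of an
    occipito-temporal and of a sensorimotor channel cluster.
    Returns a value in [0, 1]."""
    nyq = sfreq / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Hypothetical 2-second epoch at 500 Hz with a shared 20 Hz rhythm
sfreq = 500
t = np.arange(0, 2, 1 / sfreq)
rng = np.random.default_rng(2)
common = np.sin(2 * np.pi * 20 * t)
visual = common + 0.5 * rng.standard_normal(t.size)
sensorimotor = np.roll(common, 5) + 0.5 * rng.standard_normal(t.size)
print(f"beta-band PLV = {beta_plv(visual, sensorimotor, sfreq):.2f}")
```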

9.
Philos Trans R Soc Lond B Biol Sci; 377(1863): 20210190, 2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36126673

ABSTRACT

Influential theoretical models argue that an internal simulation mechanism (motor or sensorimotor simulation) supports the recognition of facial expressions. However, despite numerous converging sources of evidence, recent studies testing patients with congenital facial palsy (i.e., Moebius syndrome) seem to refute these theoretical models. Yet these results do not take into account the principles of neuroplasticity and degeneracy, which could support the involvement of an alternative neural processing pathway in these patients. In the present study, we tested healthy participants and participants with Moebius syndrome in a highly sensitive facial expression discrimination task with concomitant high-density electroencephalographic recording. The results, both at the scalp and at the source level, indicate the activation of two different pathways of facial expression processing in healthy participants and in participants with Moebius syndrome, compatible, respectively, with a dorsal pathway that includes premotor areas and with a ventral pathway. These results therefore support the reactivation of sensorimotor representations of facial expressions (i.e., simulation) in healthy subjects, and the recruitment of an alternative processing pathway in subjects with congenital facial palsy. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.


Subject(s)
Facial Paralysis, Mobius Syndrome, Emotions/physiology, Facial Expression, Facial Paralysis/complications, Humans, Mobius Syndrome/complications, Recognition, Psychology
10.
Front Psychol; 13: 956832, 2022.
Article in English | MEDLINE | ID: mdl-36176786

ABSTRACT

With the advent of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, emotion recognition from facial expressions has become highly relevant due to the widespread use of face masks as one of the main measures imposed to counter the spread of the virus. Unsurprisingly, several studies published in the last two years have shown that accuracy in recognizing basic emotions expressed by faces wearing masks is reduced. However, less is known about the impact that wearing face masks has on the ability to recognize emotions from subtle expressions, and even less is known about the role of interindividual differences (such as alexithymic and autistic traits) in emotion processing. This study investigated the perception of all six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), both as a function of the face mask and as a function of the intensity of the facial expressions (full vs. subtle), in terms of participants' uncertainty in their responses, misattribution errors, and perceived intensity. The experiment was conducted online on a large sample of participants (N = 129). Participants completed the 20-item Toronto Alexithymia Scale and the Autism Spectrum Quotient and then performed an emotion-recognition task involving face stimuli wearing a mask or not and displaying full or subtle expressions. Each face stimulus was presented alongside the Geneva Emotion Wheel (GEW), and participants had to indicate what emotion they believed the other person was feeling and its intensity using the GEW. For each combination of our variables, we computed indices of 'uncertainty' (i.e., the spread of responses around the correct emotion category), 'bias' (i.e., systematic errors in recognition), and 'perceived intensity' (i.e., the distance from the center of the GEW). We found that face masks increase uncertainty for all facial expressions of emotion, except for intense fear, and that disgust was systematically confused with anger (i.e., a response bias). Furthermore, when faces were covered by a mask, all emotions were perceived as less intense, and this was particularly evident for subtle expressions. Finally, we did not find any evidence of a relationship between these indices and alexithymic/autistic traits.
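The three indices map naturally onto circular statistics on the wheel: angular spread around the target emotion (uncertainty), the mean signed angular error (bias), and radial distance from the center (perceived intensity). A sketch under the assumption that each response is stored as an angle-radius pair and that each target emotion has a known angle on the GEW:

```python
import numpy as np

def gew_indices(response_angles, response_radii, target_angle):
    """Summarize Geneva Emotion Wheel responses for one stimulus category.

    response_angles : response positions on the wheel, in radians
    response_radii  : distance of each response from the wheel's center
    target_angle    : angle of the correct emotion category, in radians

    Returns (uncertainty, bias, perceived_intensity):
    uncertainty = circular standard deviation of the angular errors
    bias        = circular mean of the signed angular errors
    intensity   = mean radial distance from the center
    """
    errors = np.angle(np.exp(1j * (np.asarray(response_angles) - target_angle)))
    R = np.abs(np.mean(np.exp(1j * errors)))        # mean resultant length
    uncertainty = np.sqrt(-2.0 * np.log(R))         # circular SD
    bias = np.angle(np.mean(np.exp(1j * errors)))   # circular mean error
    intensity = float(np.mean(response_radii))
    return uncertainty, bias, intensity

# Hypothetical responses to masked 'disgust' faces drifting toward 'anger'
rng = np.random.default_rng(3)
target = np.deg2rad(210)                            # assumed disgust position
responses = target + np.deg2rad(rng.normal(25, 20, 50))
radii = rng.uniform(0.3, 0.7, 50)
print(gew_indices(responses, radii, target))
```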

11.
Front Behav Neurosci; 16: 920989, 2022.
Article in English | MEDLINE | ID: mdl-35874655

ABSTRACT

People at risk of developing clinical depression exhibit attentional biases for emotional faces. To clarify whether such effects occur at an early, automatic stage or at a late, deliberate stage of emotional processing, the present study used high-density electroencephalography during both covert and overt processing of sad, fearful, happy, and neutral expressions in healthy participants with high dysphoria (n = 16) and with low dysphoria (n = 19). A state-of-the-art non-parametric permutation-based statistical approach was then used to explore the effects of emotion, attentional task demands, and group. Behaviorally, participants responded faster and more accurately when overtly categorizing happy faces and were slower and less accurate when categorizing sad and fearful faces, independent of dysphoria group. Electrophysiologically, in an early time window (N170: 140-180 ms), there was a significant main effect of group, with greater negative voltage for the high versus the low dysphoria group over the left temporo-occipital scalp. Furthermore, there was a significant group-by-emotion interaction, with the high dysphoria group displaying greater negative N170 amplitude for happy than for fearful faces. Attentional task demands did not influence these early effects. In contrast, in an intermediate time window (EPN: 200-400 ms) and in a late time window (LPP: 500-750 ms) there were no significant main effects or interactions involving dysphoria group. The LPP results paralleled the behavioral results, with greater LPP voltages for sad and fearful relative to happy faces only in the overt task, but similarly so in the two dysphoria groups. This study provides novel evidence that alterations in face processing in dysphoric individuals can be seen at the early stages of face perception, as indexed by the N170, although not in the form of a typical pattern of mood-congruent attentional bias. In contrast, intermediate (EPN) and late (LPP) stages of emotional face processing appear unaffected by dysphoria. Importantly, the early dysphoria effect appears to be independent of the top-down allocation of attention, further supporting the idea that dysphoria may influence a stage of automatic emotional appraisal. It may be a consequence of a shift from holistic to feature-based processing of facial expressions, or it may be due to the influence of negative schemas acting as a negative context for emotional face processing.
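A minimal version of the permutation logic behind such group comparisons, reduced here to a single electrode-window average (the full analysis would operate on spatio-temporal clusters across channels and time, and the amplitude values below are simulated):

```python
import numpy as np

def permutation_test(group_a, group_b, n_permutations=10000, seed=0):
    """Two-sample permutation test on the difference of means.
    Returns the observed difference and a two-tailed p-value."""
    rng = np.random.default_rng(seed)
    group_a, group_b = np.asarray(group_a), np.asarray(group_b)
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)                       # random group relabeling
        diff = pooled[:group_a.size].mean() - pooled[group_a.size:].mean()
        count += abs(diff) >= abs(observed)
    return observed, (count + 1) / (n_permutations + 1)

# Hypothetical mean N170 amplitudes (uV, 140-180 ms, temporo-occipital sites)
high_dysphoria = np.random.default_rng(4).normal(-4.5, 1.0, 16)
low_dysphoria = np.random.default_rng(5).normal(-3.5, 1.0, 19)
print(permutation_test(high_dysphoria, low_dysphoria))
```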

12.
Front Neurol; 13: 757523, 2022.
Article in English | MEDLINE | ID: mdl-35665048

ABSTRACT

Rehabilitation after free gracilis muscle transfer (smile surgery, SS) is crucial for functional recovery of the ability to smile, mitigating the social and psychological problems resulting from facial paralysis. We compared two post-SS rehabilitation treatments: the traditional one, based on teeth-clenching exercises, and FIT-SAT (facial imitation and synergistic activity treatment). FIT-SAT, based on observation/imitation therapy and on hand-mouth motor synergies, is expected to facilitate neuronal activity in the facial motor cortex while avoiding unwanted jaw contractions, thereby improving muscle control. We measured smile symmetry in 30 patients: after SS, half underwent the traditional treatment (control group, CG, mean age = 20 ± 9) and the other half FIT-SAT (experimental group, EG, mean age = 21 ± 14). We compared pictures of participants holding two postures, a maximum and a gentle smile. The former corresponds to maximal muscle contraction, whereas the latter is strongly linked to the control of muscle strength during voluntary movements. No differences were observed between the two groups for the maximum smile, whereas for the gentle smile the EG achieved better symmetry than the CG. These results support the efficacy of FIT-SAT in modulating the smile, allowing patients to adapt their smile to different social contexts, an aspect that is crucial during reciprocal interactions.
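The abstract does not describe how symmetry was scored from the photographs; purely as an illustration, a symmetry index could be derived from mouth-corner landmarks, for example as the ratio between the smaller and the larger horizontal excursion of the two corners relative to a facial midline (all coordinates and names below are hypothetical):

```python
import numpy as np

def smile_symmetry(left_corner, right_corner, midline_x):
    """Hypothetical symmetry index for a smiling posture.

    left_corner, right_corner : (x, y) pixel coordinates of the mouth corners
    midline_x : x coordinate of the facial midline in the same image

    Returns a value in (0, 1]; 1 = perfectly symmetric excursion of the
    two corners with respect to the midline.
    """
    left_excursion = abs(midline_x - left_corner[0])
    right_excursion = abs(right_corner[0] - midline_x)
    smaller, larger = sorted([left_excursion, right_excursion])
    return smaller / larger if larger > 0 else 1.0

# Hypothetical landmark coordinates for a gentle-smile photograph
print(smile_symmetry(left_corner=(210, 340), right_corner=(305, 338),
                     midline_x=260))
```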

13.
Brain Sci; 12(5), 2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35624903

ABSTRACT

Temporal dynamics of behavior, particularly facial expressions, are fundamental for communication between individuals from very early in development. Facial expression processing has been widely demonstrated to involve embodied simulative processes mediated by the motor system. Such processes may be impaired in patients with congenital facial palsy, including those affected by Moebius syndrome (MBS). The aims of this study were to investigate (a) the role of motor mechanisms in the processing of dynamic facial expression timing by testing patients affected by congenital facial palsy and (b) age-dependent effects on such processing. Accordingly, we recruited 38 typically developing individuals and 15 individuals with MBS, ranging in age from childhood to adulthood. We used a time comparison task where participants were asked to identify which one of two dynamic facial expressions was faster. Results showed that MBS individuals performed worse than controls in correctly estimating the duration of facial expressions. Interestingly, we did not find any performance differences in relation to age. These findings provide further evidence for the involvement of the motor system in processing facial expression duration and suggest that a sensorimotor matching mechanism may contribute to such timing perception from childhood.

14.
Neurosci Biobehav Rev; 136: 104618, 2022 May.
Article in English | MEDLINE | ID: mdl-35289273

ABSTRACT

The relationship between consciousness and working memory (WM) has recently been debated at both the theoretical and methodological levels (Persuh et al., 2018; Velichkovsky, 2017). While there is behavioral and neural evidence arguing for the existence of unconscious WM, several methodological concerns have been raised, rendering this issue highly controversial. To address the robustness of the previous findings, here we adopt a meta-analytic approach to estimate the effect size and heterogeneity of the previously reported unconscious WM results, also including unpublished results. We used meta-regression to isolate relevant experimental variables (in particular, the consciousness manipulation and the WM paradigm) and to identify the sources of heterogeneity in the reported effect size of unconscious WM. Our meta-analysis supports the existence of the unconscious WM effect and, critically, reveals several experimental variables that contribute relevant heterogeneity. Our analysis clarifies several theoretical and methodological issues. We recommend that future studies explicitly operationalize the definition of consciousness, standardize the methodology, and systematically explore the role of critical variables for the unconscious WM effect.
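The pooled effect size and its heterogeneity are standardly estimated with a random-effects model; a compact DerSimonian-Laird sketch on hypothetical per-study effect sizes and variances (not the actual data of the meta-analysis):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis (DerSimonian-Laird).
    Returns (pooled effect, its standard error, tau^2, I^2 in %)."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = effects.size - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return pooled, se, tau2, i2

# Hypothetical unconscious-WM effect sizes (Hedges' g) and their variances
g = [0.35, 0.10, 0.55, 0.20, 0.45, 0.05]
v = [0.04, 0.02, 0.06, 0.03, 0.05, 0.02]
print(dersimonian_laird(g, v))
```

Meta-regression would then add study-level moderators (e.g., the type of consciousness manipulation) as predictors of the per-study effects.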


Subject(s)
Consciousness, Memory, Short-Term, Bayes Theorem, Humans, Unconsciousness
15.
J Cogn Neurosci; 34(6): 917-932, 2022 May 02.
Article in English | MEDLINE | ID: mdl-35258571

ABSTRACT

Sharing an experience, even without communicating, affects people's subjective perception of the experience, often by intensifying it. We investigated the neural mechanisms underlying shared attention by implementing an EEG study in which participants attended to and rated the intensity of emotional faces, simultaneously or independently. Participants performed the task in three experimental conditions: (a) alone; (b) simultaneously, next to each other in pairs, without receiving feedback on the other's responses (shared without feedback); and (c) simultaneously while receiving such feedback (shared with feedback). We focused on two face-sensitive ERP components: the amplitude of the N170 was greater in the shared-with-feedback condition than in the alone condition, reflecting a top-down effect of shared attention on the structural encoding of faces, whereas the EPN was greater in both shared-context conditions than in the alone condition, reflecting enhanced attention allocation to the processing of the emotional content of faces, modulated by the social context. Taken together, these results suggest that shared attention amplifies the neural processing of faces, regardless of the valence of the facial expressions.


Subject(s)
Electroencephalography, Evoked Potentials, Attention/physiology, Emotions/physiology, Evoked Potentials/physiology, Facial Expression, Humans
16.
Brain Sci; 11(7), 2021 Jul 17.
Article in English | MEDLINE | ID: mdl-34356176

ABSTRACT

Behavioral and electrophysiological correlates of the influence of task demands on the processing of happy, sad, and fearful expressions were investigated in a within-subjects study that compared a perceptual distraction condition with task-irrelevant faces (i.e., a covert emotion task) to an emotion-relevant categorization condition (i.e., an overt emotion task). A state-of-the-art non-parametric mass univariate analysis method was used to address the limitations of previous studies. Behaviorally, participants responded faster to overtly categorized happy faces and were slower and less accurate when categorizing sad and fearful faces; there were no behavioral differences in the covert task. Event-related potential (ERP) responses to the emotional expressions included the N170 (140-180 ms), which was enhanced by emotion irrespective of task, with happy and sad expressions eliciting greater amplitudes than neutral expressions. EPN (200-400 ms) amplitude was modulated by task, with greater voltages in the overt condition, and by emotion; however, there was no interaction of emotion and task. ERP activity was modulated by emotion as a function of task only at a late processing stage, which included the LPP (500-800 ms), with fearful and sad faces showing greater amplitude enhancements than happy faces. This study reveals that affective content does not necessarily require attention in the early stages of face processing, supporting recent evidence that the core and extended parts of the face processing system act in parallel rather than serially. The role of voluntary attention starts at an intermediate stage and fully modulates the response to emotional content at the final stage of processing.

17.
Sci Rep; 11(1): 9972, 2021 May 11.
Article in English | MEDLINE | ID: mdl-33976281

ABSTRACT

Several previous studies have interfered with the observer's facial mimicry during a variety of facial expression recognition tasks, providing evidence for the role of facial mimicry and sensorimotor activity in emotion processing. In this theoretical context, a particularly intriguing facet has been neglected, namely whether blocking facial mimicry modulates the conscious perception of facial expressions of emotion. To address this issue, we used a binocular rivalry paradigm, in which two dissimilar stimuli presented to the two eyes alternately dominate conscious perception. On each trial, female participants (N = 32) were exposed, through anaglyph glasses, to a rivalrous pair consisting of a neutral and a happy expression of the same individual, in two conditions: in one, they could freely use their facial mimicry; in the other, they had to keep a chopstick between their lips, constraining the mobility of the zygomatic muscle and producing 'noise' for sensorimotor simulation. We found that blocking facial mimicry affected perceptual dominance in terms of cumulative time, favoring neutral faces, but it did not change the time before the first dominance was established. Taken together, our results open the door to future investigation of the intersection between sensorimotor simulation models and the conscious perception of emotional facial expressions.


Subject(s)
Facial Expression, Facial Recognition, Adult, Female, Healthy Volunteers, Humans, Vision Disparity, Young Adult
18.
Psychophysiology; 58(5): e13786, 2021 May.
Article in English | MEDLINE | ID: mdl-33550632

ABSTRACT

Face perception arises from the collective activation of brain regions in the occipital, parietal, and temporal cortices. Despite the wide acknowledgment that these regions act as an intertwined network, the network behavior itself is poorly understood. Here we present a study in which time-varying connectivity, estimated from EEG activity elicited by the presentation of facial expressions, was characterized using graph-theoretical measures of node centrality and global network topology. Results revealed that face perception results from a dynamic reshaping of the network architecture, characterized by the emergence of hubs located in the occipital and temporal regions of the scalp. The importance of these nodes can be observed from the early stages of visual processing and peaks in the same time window in which the face-sensitive N170 is observed. Furthermore, using Granger causality, we found that the time-evolving centrality of these nodes is associated with ERP amplitude, providing a direct link between the network state and the local neural response. Additionally, investigating global network topology by means of small-worldness and modularity, we found that face processing requires a functional network with a strong small-world organization that maximizes integration, at the cost of segregated subdivisions. Interestingly, this architecture is not static but is implemented by the network from stimulus onset to ~200 ms. Altogether, this study reveals the event-related changes underlying face processing at the network level, suggesting that a distributed processing mechanism operates by dynamically weighting the contribution of the cortical regions involved.
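Small-worldness is typically indexed as sigma = (C / C_rand) / (L / L_rand), comparing clustering and characteristic path length against degree-matched random graphs; a sketch with networkx, in which a synthetic graph stands in for the thresholded EEG connectivity network:

```python
import numpy as np
import networkx as nx

def _avg_path_length(G):
    """Average shortest path length on the largest connected component."""
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(G)

def small_worldness(G, n_random=10, seed=0):
    """Sigma = (C / C_rand) / (L / L_rand), using degree-preserving rewired
    graphs as random references. Sigma > 1 suggests small-world topology."""
    C, L = nx.average_clustering(G), _avg_path_length(G)
    C_rand, L_rand = [], []
    n_edges = G.number_of_edges()
    for i in range(n_random):
        R = G.copy()
        nx.double_edge_swap(R, nswap=5 * n_edges, max_tries=100 * n_edges,
                            seed=seed + i)
        C_rand.append(nx.average_clustering(R))
        L_rand.append(_avg_path_length(R))
    return (C / np.mean(C_rand)) / (L / np.mean(L_rand))

# Hypothetical stand-in for a thresholded EEG connectivity graph (64 nodes)
G = nx.connected_watts_strogatz_graph(n=64, k=8, p=0.1, seed=0)
print(f"sigma = {small_worldness(G):.2f}")   # > 1 -> small-world organization
```

In a time-resolved analysis like the one described above, this index would be recomputed in each time window to track how the architecture evolves from stimulus onset to ~200 ms.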


Subject(s)
Evoked Potentials/physiology, Facial Recognition/physiology, Occipital Lobe/physiology, Parietal Lobe/physiology, Temporal Lobe/physiology, Adult, Cerebral Cortex, Electroencephalography, Female, Humans, Male, Neural Pathways/physiology, Young Adult
19.
Brain Cogn; 148: 105678, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33454594

ABSTRACT

Simulation models of facial expression processing suggest that posterior visual areas and brain areas underpinning sensorimotor simulation might interact to improve facial expression processing. According to these models, facial mimicry, a manifestation of sensorimotor simulation, may contribute to the visual processing of facial expressions by influencing its early stages. The aim of this study was to assess whether and how sensorimotor simulation influences the early stages of face processing, also investigating its relationship with alexithymic traits, given that previous studies have suggested that individuals with high levels of alexithymic traits (vs. individuals with low levels) tend to use sensorimotor simulation to a lesser extent. We monitored the P1 and N170 components of the event-related potential (ERP) in participants performing a fine discrimination task on facial expressions and on animals, as a control condition. In half of the experiment, participants could freely use their facial mimicry, whereas in the other half their facial mimicry was blocked by a gel. Our results revealed that only individuals with low, compared to high, alexithymic traits showed a larger modulation of P1 amplitude as a function of the mimicry manipulation, selectively for facial expressions (but not for animals), whereas we did not observe any modulation of the N170. Given the null results at the behavioral level, we interpret the P1 modulation as compensatory visual processing in individuals with low levels of alexithymia under conditions of interference with sensorimotor processing, providing preliminary evidence in favor of sensorimotor simulation models.


Subject(s)
Facial Expression, Facial Recognition, Electroencephalography, Emotions, Evoked Potentials, Humans, Individuality, Visual Perception
20.
Brain Sci; 11(1), 2020 Dec 24.
Article in English | MEDLINE | ID: mdl-33374355

ABSTRACT

BACKGROUND: Spino-bulbar muscular atrophy is a rare X-linked genetic disease caused by testosterone insensitivity. An inverse correlation has been described between testosterone levels and empathic responses. The present study explored the profile of neural empathic responding in spino-bulbar muscular atrophy patients. METHODS: Eighteen patients with spino-bulbar muscular atrophy and eighteen healthy male controls were enrolled in the study. Their event-related potentials were recorded during an "Empathy Task" designed to distinguish neural responses linked with the experience-sharing (early response) and mentalizing (late response) components of empathy. The task involved the presentation of contextual information (painful vs. neutral sentences) and facial expressions (painful vs. neutral). An explicit questionnaire of dispositional empathy was also administered to all participants, who were screened with a neuropsychological test battery that did not reveal any cognitive deficits. Due to electrophysiological artefacts, data from 12 patients and 17 controls were included in the final analyses. RESULTS: Although patients and controls did not differ in terms of dispositional, explicit empathic self-ratings, notably conservative event-related potential analyses (i.e., spatio-temporal permutation cluster analyses) showed a significantly greater experience-sharing neural response in patients than in healthy controls when both the contextual information and the facial expressions were painful. CONCLUSION: The present study contributes to the characterization of the psychological profile of patients with spino-bulbar muscular atrophy, highlighting peculiarities in the enhanced neural responses underlying empathic reactions.
