Results 1 - 20 of 46
1.
J Neurophysiol ; 130(3): 547-556, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37492898

ABSTRACT

Somatosensory evoked potential (SEP) studies typically characterize short-latency components following median nerve stimulation at the wrist. However, these studies rarely consider 1) the skin type (glabrous/hairy) at the stimulation site, 2) the nerve being stimulated, and 3) middle-latency (>30 ms) components. Our aim was to investigate middle-latency SEPs following simple mechanical stimulation of two skin types innervated by two different nerves. Eighteen adults received 400 mechanical stimulations over four territories of the right hand (two nerves: radial/median; two skin types: hairy/glabrous) while their EEG was recorded. Four middle-latency components were identified: P50, N80, N130, and P200. As expected, significantly shorter latencies and larger amplitudes were found over the contralateral hemisphere for all components. A skin-type effect was found for the N80: glabrous skin stimulations induced larger amplitudes than hairy skin stimulations. Regarding nerve effects, median stimulations induced a larger P50 and N80. The latency of the N80 was longer after median nerve stimulation than after radial nerve stimulation. This study showed that skin type and stimulated nerve influence middle-latency SEPs, highlighting the importance of considering these parameters in future studies. These modulations could reflect differences in cutaneous receptors and somatotopy. Middle-latency SEPs can be used to evaluate the different steps of cortical processing of tactile information. Modulation of SEP components before 100 ms possibly reflects somatotopy and differential processing in primary somatosensory cortex.

NEW & NOTEWORTHY: The current paper highlights the influences of stimulated skin type (glabrous/hairy) and nerve (median/radial) on cortical somatosensory evoked potentials. Mechanical stimulations were applied over four territories of the right hand in 18 adults. Four middle-latency components were identified: P50, N80, N130, and P200. A larger N80 was found after glabrous skin stimulations than after hairy skin ones, regardless of the nerve being stimulated. The P50 and N80 were larger after median than after radial nerve stimulations.


Subject(s)
Evoked Potentials, Somatosensory; Wrist; Evoked Potentials, Somatosensory/physiology; Median Nerve/physiology; Touch; Skin; Electric Stimulation; Somatosensory Cortex/physiology
2.
J Child Psychol Psychiatry ; 61(7): 768-778, 2020 07.
Article in English | MEDLINE | ID: mdl-31823380

ABSTRACT

BACKGROUND: Faces are crucial social stimuli, eliciting automatic processing associated with increased physiological arousal in observers. The level of arousal can be indexed by pupil diameter (the 'Event-Related Pupil Dilation', ERPD). However, many parameters could influence the arousal evoked by a face and its social saliency (e.g. virtual vs. real, neutral vs. emotional, static vs. dynamic). A few studies have shown an atypical ERPD in patients with autism spectrum disorder (ASD) using several kinds of faces, but no study has identified which stimulus parameter interferes most with face processing in ASD. METHODS: To disentangle the influence of these parameters, we propose an original paradigm including stimuli along an ecological social saliency gradient: from static objects to virtual faces to dynamic emotional faces. This strategy was applied to 186 children (78 ASD and 108 typically developing (TD) children) in two pupillometric studies (22 ASD and 47 TD children in study 1; 56 ASD and 61 TD children in study 2). RESULTS: Strikingly, the ERPD in ASD children was insensitive to all of the parameters tested: the ERPD was similar for objects, static faces and dynamic faces. In contrast, the ERPD in TD children was sensitive to all the parameters tested: the humanoid, biological, dynamic and emotional qualities of the stimuli. Moreover, ERPD had good discriminative power between ASD and TD children: ASD children had a larger ERPD than TD children in response to virtual faces, while TD children had a larger ERPD than ASD children for dynamic faces. CONCLUSIONS: This novel approach evidences an abnormal physiological adjustment to socially relevant stimuli in ASD.


Subject(s)
Arousal; Autism Spectrum Disorder/psychology; Emotions; Facial Expression; Facial Recognition; Pupil; Child; Child, Preschool; Female; Humans; Male
3.
Brain Cogn ; 136: 103599, 2019 11.
Article in English | MEDLINE | ID: mdl-31536931

ABSTRACT

Although ASD (Autism Spectrum Disorder) diagnosis requires the co-occurrence of socio-emotional deficits and inflexible behaviors, the interaction between these two domains remains unexplored. We used an emotional Wisconsin Card Sorting Test adapted to fMRI to explore this question. ASD and control participants matched a central card (a face) with one of four surrounding cards according to one of three rules: frame color, facial identity or expression. Feedback informed participants on whether to change or maintain the current sorting rule. For each rule, we modeled feedback onsets to change, switch (confirming the newly found rule) and maintenance events. "Bias error", which measures participants' willingness to switch, was larger in ASD participants for the emotional sorting rule. Brain activity to change events showed no group differences. In response to switch events, significantly larger activity was observed for ASD participants in the bilateral Inferior Parietal Sulci. Inflexibility in ASD appears to be characterized by an unwillingness to switch toward processing socio-emotional information, rather than by a major disruption of cognitive flexibility. However, the larger activity to switch events in ASD highlights the need for a higher level of certainty before settling into a stable processing stage, which may be particularly detrimental in the highly changeable socio-emotional environment.


Subject(s)
Autism Spectrum Disorder/psychology; Emotions/physiology; Uncertainty; Adult; Cognition/physiology; Facial Expression; Female; Humans; Magnetic Resonance Imaging; Male; Neuropsychological Tests; Young Adult
4.
Cogn Affect Behav Neurosci ; 18(4): 748-763, 2018 08.
Article in English | MEDLINE | ID: mdl-29736682

ABSTRACT

Voices transmit social signals through speech and/or prosody. Emotional prosody conveys key information about the emotional state of a speaker and is thus a crucial cue that one has to detect in order to develop efficient social communication. Previous studies in adults reported different brain responses to emotional than to neutral prosodic deviancy. The aim of this study was to characterize such specific emotional deviancy effects in school-age children. The mismatch negativity (MMN) and P3a evoked potentials, reflecting automatic change detection and automatic attention orienting, respectively, were obtained for neutral and emotional angry deviants in both school-age children (n = 26) and adults (n = 14). Shorter latencies were found for emotional than for neutral preattentional responses in both groups. However, whereas this effect was observed on the MMN in adults, it appeared in an early discriminative negativity preceding the MMN in children. A smaller P3a amplitude was observed for the emotional than for the neutral deviants at all ages. Overall, the brain responses involved in specific emotional change processing are already present during childhood, but responses have not yet reached an adult pattern. We suggest that these processing differences might contribute to the known improvement of emotional prosody perception between childhood and adulthood.


Subject(s)
Anger; Brain/growth & development; Brain/physiology; Signal Detection, Psychological/physiology; Social Perception; Speech Perception/physiology; Adult; Attention/physiology; Child; Electroencephalography; Evoked Potentials; Female; Humans; Male; Young Adult
5.
Behav Res Methods ; 49(1): 97-110, 2017 02.
Article in English | MEDLINE | ID: mdl-26822668

ABSTRACT

One thousand one hundred and twenty subjects as well as a developmental phonagnosic subject (KH) along with age-matched controls performed the Glasgow Voice Memory Test, which assesses the ability to encode and immediately recognize, through an old/new judgment, both unfamiliar voices (delivered as vowels, making language requirements minimal) and bell sounds. The inclusion of non-vocal stimuli allows the detection of significant dissociations between the two categories (vocal vs. non-vocal stimuli). The distributions of accuracy and sensitivity scores (d') reflected a wide range of individual differences in voice recognition performance in the population. As expected, KH showed a dissociation between the recognition of voices and bell sounds, her performance being significantly poorer than matched controls for voices but not for bells. By providing normative data of a large sample and by testing a developmental phonagnosic subject, we demonstrated that the Glasgow Voice Memory Test, available online and accessible from all over the world, can be a valid screening tool (~5 min) for a preliminary detection of potential cases of phonagnosia and of "super recognizers" for voices.
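The sensitivity scores (d') mentioned above are computed from hit and false-alarm rates in the old/new judgment. A minimal sketch of that calculation, with hypothetical trial counts and one common correction for extreme rates (not necessarily the exact procedure used in the Glasgow Voice Memory Test):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps rates strictly between 0 and 1,
    # where the z-transform would otherwise be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A listener who recognizes 45/50 old voices and false-alarms on 5/50 new ones:
score = d_prime(hits=45, misses=5, false_alarms=5, correct_rejections=45)
```

A d' near 0 indicates chance performance; in a screening context, unusually high scores would flag potential "super recognizers" and very low voice-specific scores potential phonagnosia.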


Subject(s)
Memory; Psychological Tests; Recognition, Psychology; Voice; Adolescent; Adult; Aged; Agnosia/diagnosis; Case-Control Studies; Female; Humans; Male; Middle Aged; Sound; Young Adult
6.
J Neurosci ; 34(24): 8098-105, 2014 Jun 11.
Article in English | MEDLINE | ID: mdl-24920615

ABSTRACT

The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information, followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as the right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in the bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as the bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that vocal affect recognition is a multistep process involving largely distinct neural networks. The amygdala and auditory areas predominantly code emotion-related acoustic information, while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.


Subject(s)
Adaptation, Physiological/physiology; Auditory Perception/physiology; Brain/physiology; Emotions/physiology; Voice; Acoustic Stimulation; Adolescent; Adult; Brain/blood supply; Brain Mapping; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Reaction Time/physiology; Young Adult
7.
J Neurosci ; 34(20): 6813-21, 2014 May 14.
Article in English | MEDLINE | ID: mdl-24828635

ABSTRACT

The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice, or rather to multimodal neurons receiving input from the two modalities, is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carry-over design. Subjects integrated both face and voice information when categorizing emotion, although there was a greater weighting of face information, and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, the fMRI signal in the right pSTS was reduced in response to a stimulus in which the facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from the visual and auditory cortices.


Subject(s)
Auditory Perception/physiology; Emotions/physiology; Temporal Lobe/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Brain Mapping; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Photic Stimulation; Social Perception; Voice
8.
Neuroimage ; 119: 164-74, 2015 Oct 01.
Article in English | MEDLINE | ID: mdl-26116964

ABSTRACT

fMRI studies increasingly examine the functions and properties of non-primary areas of the human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, akin to the standard 'localizers' available in the visual domain. Here we present an fMRI 'voice localizer' scan allowing rapid and reliable localization of the voice-sensitive 'temporal voice areas' (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice sensitivity, or "voice patches", along the posterior (TVAp), mid (TVAm) and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas, including bilateral inferior prefrontal cortex and the amygdalae, showed small but reliable voice sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download.


Subject(s)
Auditory Cortex/physiology; Individuality; Speech Perception/physiology; Acoustic Stimulation; Adult; Brain Mapping; Dominance, Cerebral; Female; Humans; Magnetic Resonance Imaging; Male; Voice; Young Adult
9.
Cereb Cortex ; 23(4): 958-66, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22490550

ABSTRACT

Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex, including the anterior part of the temporal voice areas in the right hemisphere, responded primarily to the acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, working in tandem with the prefrontal cortex in voice gender perception.


Subject(s)
Auditory Perception/physiology; Cerebral Cortex/blood supply; Cerebral Cortex/physiology; Magnetic Resonance Imaging; Sex Characteristics; Voice; Acoustic Stimulation; Adult; Brain Mapping; Female; Humans; Image Processing, Computer-Assisted; Linear Models; Male; Oxygen/blood; Psychometrics; Reaction Time/physiology; Young Adult
10.
Cereb Cortex ; 22(6): 1263-70, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21828348

ABSTRACT

Social interactions involve more than "just" language. Just as important is a more primitive nonlinguistic mode of communication acting in parallel with linguistic processes and driving our decisions to a much higher degree than is generally suspected. Amongst the "honest signals" that influence our behavior is perceived vocal attractiveness. Not only does vocal attractiveness reflect important biological characteristics of the speaker, it also influences our social perceptions according to the "what sounds beautiful is good" phenomenon. Despite the widespread influence of vocal attractiveness on social interactions revealed by behavioral studies, its neural underpinnings remain unknown. We measured brain activity while participants listened to a series of vocal sounds ("ah") and performed an unrelated task. We found that activity in voice-sensitive auditory and inferior frontal regions was strongly correlated with implicitly perceived vocal attractiveness. While the involvement of auditory areas reflected the processing of acoustic contributors to vocal attractiveness ("distance to mean" and spectrotemporal regularity), activity in inferior prefrontal regions (traditionally involved in speech processes) reflected the overall perceived attractiveness of the voices despite their lack of linguistic content. These results suggest a strong influence of hidden nonlinguistic aspects of communication signals on cerebral activity and provide an objective measure of this influence.
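The "distance to mean" measure can be illustrated as the Euclidean distance of each voice's acoustic feature vector from the group average. In the sketch below the features (e.g. mean F0 and a formant frequency, in Hz) and values are hypothetical, not the study's data:

```python
import math

# Hypothetical per-voice acoustic features, e.g. [mean F0, first formant] in Hz.
voices = {
    "v1": [120.0, 500.0],
    "v2": [180.0, 550.0],
    "v3": [150.0, 525.0],
}

# Group-average feature vector (the "prototypical" voice).
n_dims = len(next(iter(voices.values())))
mean = [sum(feats[d] for feats in voices.values()) / len(voices)
        for d in range(n_dims)]

# Distance to mean: how far each voice lies from the average voice.
distance_to_mean = {name: math.dist(feats, mean)
                    for name, feats in voices.items()}
```

Under the "what sounds beautiful is good" account, voices closer to the average (smaller distance) would tend to be perceived as more attractive.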


Subject(s)
Acoustic Stimulation/methods; Prefrontal Cortex/physiology; Social Behavior; Speech Perception/physiology; Voice/physiology; Adolescent; Adult; Auditory Perception/physiology; Female; Humans; Male; Young Adult
11.
Transl Psychiatry ; 13(1): 250, 2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37422467

ABSTRACT

Early identification of children on the autism spectrum is crucial for early intervention, which has long-term positive effects on symptoms and skills. The need for improved objective autism detection tools is emphasized by the poor diagnostic power of current tools. Here, we aim to evaluate the classification performance of acoustic features of the voice in children with autism spectrum disorder (ASD) with respect to a heterogeneous control group (composed of neurotypical children, children with Developmental Language Disorder [DLD] and children with sensorineural hearing loss with Cochlear Implant [CI]). This retrospective diagnostic study was conducted at the Child Psychiatry Unit of Tours University Hospital (France). A total of 108 children, including 38 diagnosed with ASD (8.5 ± 0.25 years), 24 typically developing (TD; 8.2 ± 0.32 years) and 46 children with atypical development (DLD and CI; 7.9 ± 0.36 years), were enrolled in the study. The acoustic properties of speech samples produced by children in a nonword repetition task were measured. We used Monte Carlo cross-validation with an ROC (Receiver Operating Characteristic) supervised k-Means clustering algorithm to develop a classification model that can differentially classify a child with an unknown disorder. We showed that voice acoustics classified an autism diagnosis with an overall accuracy of 91% [CI95%, 90.40%-91.65%] against TD children, and of 85% [CI95%, 84.5%-86.6%] against a heterogeneous group of non-autistic children. The accuracy reported here, with multivariate analysis combined with Monte Carlo cross-validation, is higher than in previous studies. Our findings demonstrate that easy-to-measure voice acoustic parameters could be used as a diagnostic aid tool specific to ASD.
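The study's ROC-supervised k-Means pipeline is specific to its dataset, but the Monte Carlo cross-validation idea (accuracy averaged over many random train/test splits) can be sketched as follows. The features here are synthetic, and a simple nearest-centroid rule stands in for the clustering step:

```python
import math
import random

random.seed(0)

def sample(mean, n=40, dim=3):
    """Hypothetical stand-in for per-child acoustic feature vectors."""
    return [[random.gauss(mean, 1.0) for _ in range(dim)] for _ in range(n)]

X = sample(1.0) + sample(-1.0)   # two separable groups (e.g. ASD vs. controls)
y = [1] * 40 + [0] * 40

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def monte_carlo_cv(X, y, n_splits=200, test_frac=0.25):
    """Monte Carlo CV: mean test accuracy over repeated random splits."""
    n = len(y)
    n_test = int(n * test_frac)
    accuracies = []
    for _ in range(n_splits):
        idx = list(range(n))
        random.shuffle(idx)
        test, train = idx[:n_test], idx[n_test:]
        # Fit on the training split only: one centroid per class.
        c0 = centroid([X[i] for i in train if y[i] == 0])
        c1 = centroid([X[i] for i in train if y[i] == 1])
        correct = sum(
            (math.dist(X[i], c1) < math.dist(X[i], c0)) == (y[i] == 1)
            for i in test
        )
        accuracies.append(correct / n_test)
    return sum(accuracies) / n_splits
```

Averaging over many random splits is what yields a distribution of accuracies, and hence the confidence intervals reported above, rather than the single estimate a fixed split would give.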


Subject(s)
Autism Spectrum Disorder; Autistic Disorder; Child; Humans; Autism Spectrum Disorder/complications; Autism Spectrum Disorder/diagnosis; Retrospective Studies; Acoustics; France
12.
J Autism Dev Disord ; 2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37118645

ABSTRACT

A lack of response to voices and a strong interest in music are among the behavioral features commonly (self-)reported in Autism Spectrum Disorder (ASD). These atypical interests in vocal and musical sounds could be attributable to different levels of acoustical noise, quantified by the harmonic-to-noise ratio (HNR). No previous study has investigated explicit auditory pleasantness in ASD by comparing vocal and non-vocal sounds in relation to acoustic noise level. The aim of this study was to objectively evaluate auditory pleasantness. Sixteen adults on the autism spectrum and 16 matched neurotypical (NT) adults rated the likeability of vocal and non-vocal sounds with varying harmonic-to-noise ratios. A group-by-category interaction in pleasantness judgements revealed that participants on the autism spectrum judged vocal sounds as less pleasant than non-vocal sounds, an effect not found in NT participants. A category-by-HNR interaction revealed that participants of both groups rated high-HNR non-vocal sounds as more pleasant. A significant group-by-HNR interaction revealed that people on the autism spectrum tended to judge high-HNR sounds as less pleasant, and low-HNR sounds as more pleasant, than NT participants did. The acoustical noise level of sounds alone does not appear to explain the atypical interest in voices and the greater interest in music in ASD.
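The harmonic-to-noise ratio can be estimated from the peak r_max of a signal's normalized autocorrelation as HNR(dB) = 10 * log10(r_max / (1 - r_max)) (Boersma's formulation). A toy sketch, assuming a simple biased autocorrelation and synthetic signals rather than the study's stimuli:

```python
import math
import random

def autocorr(x, lag):
    """Biased autocorrelation at `lag`, normalized by the lag-0 energy."""
    num = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
    return num / sum(v * v for v in x)

def hnr_db(x, min_lag, max_lag):
    """HNR = 10*log10(r_max / (1 - r_max)), where r_max is the
    autocorrelation peak within a lag range bracketing the pitch period."""
    r_max = max(autocorr(x, lag) for lag in range(min_lag, max_lag + 1))
    return 10 * math.log10(r_max / (1 - r_max))

# A 200 Hz "voiced" tone sampled at 8 kHz (period = 40 samples),
# clean vs. with additive Gaussian noise.
random.seed(1)
period = 40
clean = [math.sin(2 * math.pi * i / period) for i in range(2000)]
noisy = [v + random.gauss(0, 0.5) for v in clean]
```

A nearly periodic (harmonic) signal has r_max close to 1 and hence a high HNR; added noise lowers r_max and the HNR, which is the acoustic manipulation whose effect on pleasantness is tested above.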

13.
Brain Topogr ; 25(2): 194-204, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22080221

ABSTRACT

Gender is salient, socially critical information obtained from faces and voices, yet the brain processes underlying gender discrimination have not been well studied. We investigated the neural correlates of voice gender processing in two ERP studies. In the first, ERP differences were seen between female and male voices starting at 87 ms, in both spatial-temporal and peak analyses, particularly on the fronto-central N1 and P2. As pitch differences may drive gender differences, the second study used normal, high- and low-pitch voices. The results of these studies suggested that differences in pitch produced early effects (27-63 ms). Gender effects were seen on the N1 (120 ms) with implicit pitch processing (study 1), but not with manipulations of pitch (study 2), demonstrating that the N1 was modulated by attention. The P2 (between 170 and 230 ms) discriminated male from female voices, independent of pitch. Thus, these data show that there are two stages in voice gender processing: a very early pitch or frequency discrimination, and a later, more accurate determination of gender at the P2 latency.


Subject(s)
Discrimination, Psychological/physiology; Evoked Potentials, Auditory/physiology; Gender Identity; Pitch Discrimination/physiology; Pitch Perception/physiology; Voice; Acoustic Stimulation; Adult; Brain Mapping; Electroencephalography; Female; Humans; Male; Reaction Time
14.
Cereb Cortex ; 21(12): 2820-8, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21531779

ABSTRACT

Temporal voice areas, showing larger activity for vocal than non-vocal sounds, have been identified along the superior temporal sulcus (STS); additional voice-sensitive areas have been described in the frontal and parietal lobes. Yet the role of voice-sensitive regions in representing voice identity remains unclear. Using a functional magnetic resonance adaptation design, we aimed at disentangling acoustic-based from identity-based representations of voices. Sixteen participants were scanned while listening to pairs of voices drawn from morphed continua between 2 initially unfamiliar voices, before and after a voice learning phase. In a given pair, the first and second stimuli could be identical or acoustically different and, at the second session, perceptually similar or different. At both sessions, the right mid-STS/superior temporal gyrus (STG) and superior temporal pole (sTP) showed sensitivity to acoustical changes. Critically, voice learning induced changes in the acoustical processing of voices in the inferior frontal cortices (IFCs). At the second session only, the right IFC and left cingulate gyrus showed sensitivity to changes in perceived identity. The processing of voice identity appears to be subserved by a large network of brain areas ranging from the sTP, involved in an acoustic-based representation of unfamiliar voices, to areas along the convexity of the IFC for identity-related processing of familiar voices.


Subject(s)
Auditory Perception/physiology; Brain Mapping; Cerebral Cortex/physiology; Learning/physiology; Voice; Acoustic Stimulation; Female; Humans; Image Interpretation, Computer-Assisted; Magnetic Resonance Imaging; Male; Young Adult
15.
Front Neurosci ; 16: 982899, 2022.
Article in English | MEDLINE | ID: mdl-36213730

ABSTRACT

With the COVID-19 pandemic, we have become used to wearing masks and have experienced how masks seem to impair emotion and speech recognition. While several studies have examined facial emotion recognition by adding images of masks to photographs of emotional faces, we created a video database with actors actually wearing masks to test the effect under more ecological conditions. After validating the emotions displayed by the actors, we found that the surgical mask impaired the recognition of happiness and sadness but not of neutrality. Moreover, for happiness, this effect was specific to the mask and not to covering the lower part of the face, possibly due to a cognitive bias associated with the surgical mask. We also created videos with speech and tested the effect of the mask on emotion and speech recognition in the auditory, visual, and audiovisual modalities. In the visual and audiovisual modalities, the mask impaired happiness and sadness recognition but improved neutrality recognition. The mask impaired the recognition of bilabial syllables regardless of modality. In addition, it altered speech recognition only in the audiovisual modality, for participants above 70 years old. Overall, COVID-19 masks mainly impair emotion recognition, except for older participants, for whom they also impact speech recognition, probably because older listeners rely more on visual information to compensate for age-related hearing loss.

16.
Front Neurosci ; 16: 1033243, 2022.
Article in English | MEDLINE | ID: mdl-36478875

ABSTRACT

Introduction: The COVID-19 pandemic has imposed the wearing of a face mask that, despite its health benefits, may have negative consequences for social interactions. Many recent studies have focused on emotion recognition of masked faces, as the mouth is, together with the eyes, essential to convey emotional content. However, none have studied neurobehavioral and neurophysiological markers of masked-face perception, such as ocular exploration and pupil reactivity. The purpose of this eye-tracking study was to quantify how wearing a facial accessory, and in particular a face mask, affects the ocular and pupillary response to a face, emotional or not. Methods: We used videos of actors wearing a facial accessory to characterize visual exploration and the pupillary response across several occlusion conditions (no accessory, sunglasses, scarf, and mask) and emotional conditions (neutral, happy, and sad) in a population of 44 adults. Results: We showed that ocular exploration differed for a face covered with an accessory, and in particular a mask, compared with the classical visual scanning pattern of a non-covered face: the covered areas of the face were explored less. Pupil reactivity seemed only slightly affected by the mask, while its sensitivity to emotions was preserved even in the presence of a facial accessory. Discussion: These results suggest a mixed impact of the mask on attentional capture and physiological adjustment, which does not seem reconcilable with its strong effect on behavioral emotion recognition described previously.

17.
Front Pediatr ; 9: 785762, 2021.
Article in English | MEDLINE | ID: mdl-34976896

ABSTRACT

Early intervention programs positively affect key behaviors for children with autism spectrum disorder (ASD). However, most of these programs do not target children with severe autistic symptomatology associated with intellectual disability (ID). This study aimed to investigate the psychological and clinical outcomes of children with severe autism and ID enrolled in the Tailored and Inclusive Program for Autism-Tours (TIPA-T). The first step of the TIPA-T is the Exchange and Development Therapy (EDT): an individual neurofunctional intervention consisting of one-to-one exchanges between a child and a therapist taking place in a pared-down environment. It aims to rehabilitate the psychophysiological abilities at the roots of social communication through structured sequences of "social play." Cognitive and socio-emotional skills and general development were evaluated with the Social Cognitive Evaluation Battery scale and the Brunet-Lézine Scale-Revised, respectively, before and after 9 months of intervention in 32 children with ASD and ID. Autistic symptomatology was evaluated with the Behavior Summarized Evaluation-Revised scale at five time points in a subset of 14 children, in both individual and group settings. Statistically significant post-intervention improvements were found in cognitive and socio-emotional skills. All but one child showed improvements in at least one social domain, and 78% of children gained one level in at least four social domains. Twenty-nine children improved in cognitive domains, with 66% of children improving in at least three cognitive domains. Autistic symptomatology evaluated in one-to-one settings significantly decreased with therapy; this reduction was observed in more than 85% of children. In group settings, autistic symptomatology also decreased in more than 60% of children. Global developmental age significantly increased by 3.8 months. The TIPA-T, and the EDT in particular, improves the socio-emotional skills of most children with ASD and reduces autistic symptomatology, though with heterogeneous outcome profiles, in line with the strong heterogeneity of profiles observed in ASD. At the group level, this study highlights the benefits of the TIPA-T for children with severe autism and associated ID. Assessment of autistic core symptoms showed an improvement in social interaction, in both one-to-one and group evaluations, demonstrating the generalizability of the skills learned during the EDT.

18.
Trials ; 22(1): 248, 2021 Apr 06.
Article in English | MEDLINE | ID: mdl-33823927

ABSTRACT

BACKGROUND: Autism spectrum disorder (ASD) is characterized by impaired social communication and interaction, and stereotyped, repetitive behaviour and sensory interests. To date, there is no effective medication that can improve social communication and interaction in ASD, and effect sizes of behaviour-based psychotherapy remain in the low to medium range. Consequently, there is a clear need for new treatment options. ASD is associated with altered activation and connectivity patterns in brain areas which process social information. Transcranial direct current stimulation (tDCS) is a technique that applies a weak electrical current to the brain in order to modulate neural excitability and alter connectivity. Combined with specific cognitive tasks, it allows to facilitate and consolidate the respective training effects. Therefore, application of tDCS in brain areas relevant to social cognition in combination with a specific cognitive training is a promising treatment approach for ASD. METHODS: A phase-IIa pilot randomized, double-blind, sham-controlled, parallel-group clinical study is presented, which aims at investigating if 10 days of 20-min multi-channel tDCS stimulation of the bilateral tempo-parietal junction (TPJ) at 2.0 mA in combination with a computer-based cognitive training on perspective taking, intention and emotion understanding, can improve social cognitive abilities in children and adolescents with ASD. The main objectives are to describe the change in parent-rated social responsiveness from baseline (within 1 week before first stimulation) to post-intervention (within 7 days after last stimulation) and to monitor safety and tolerability of the intervention. 
Secondary objectives include evaluating the change in parent-rated social responsiveness at follow-up (4 weeks after end of intervention), as well as changes in other ASD core symptoms and psychopathology, social cognitive abilities, and neural functioning post-intervention and at follow-up, in order to explore the underlying neural and cognitive mechanisms. DISCUSSION: Positive results regarding change in parent-rated social cognition, together with favourable safety and tolerability of the intervention, would confirm tDCS as a promising treatment for ASD core symptoms. This may be a first step towards establishing a new and cost-efficient intervention for individuals with ASD. TRIAL REGISTRATION: The trial is registered with the German Clinical Trials Register (DRKS), DRKS00014732. Registered on 15 August 2018. PROTOCOL VERSION: This study protocol refers to protocol version 1.2 from 24 May 2019.


Subject(s)
Autism Spectrum Disorder, Transcranial Direct Current Stimulation, Adolescent, Autism Spectrum Disorder/diagnosis, Autism Spectrum Disorder/therapy, Brain, Child, Clinical Trials, Phase II as Topic, Double-Blind Method, Humans, Randomized Controlled Trials as Topic, Treatment Outcome
19.
BMC Neurosci ; 11: 36, 2010 Mar 11.
Article in English | MEDLINE | ID: mdl-20222946

ABSTRACT

BACKGROUND: Processing of multimodal information is a critical capacity of the human brain, and classic studies show that bimodal stimulation can either facilitate or interfere with perceptual processing. Comparing activity elicited by congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks. RESULTS: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) for congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on the N170 or VPP, but from 180-230 ms larger right frontal activity was seen for incongruent than for congruent stimuli. CONCLUSIONS: Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity was modulated differently by top-down and bottom-up information: top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Judgment/physiology, Sex Characteristics, Visual Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Evoked Potentials, Face, Female, Frontal Lobe/physiology, Humans, Male, Neuropsychological Tests, Photic Stimulation, Reaction Time, Time Factors, Voice, Young Adult
20.
Neuroimage Clin ; 28: 102512, 2020.
Article in English | MEDLINE | ID: mdl-33395999

ABSTRACT

Autism Spectrum Disorder (ASD) is currently diagnosed by the joint presence of social impairments and restrictive, repetitive patterns of behaviour. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies have investigated only one dimension to understand the underlying pathophysiology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with autism spectrum disorder during an oddball paradigm that allowed us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relies on activation of the supratemporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in the processing of deviancy with vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was found for vocal stimuli processing or for deviancy/novelty processing, regardless of emotional content. These findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with those cues. This might indicate impairments at later processing stages, or simply show that alterations present in childhood have repercussions into adulthood.


Subject(s)
Autism Spectrum Disorder, Voice, Adult, Brain/diagnostic imaging, Cues (Psychology), Emotions, Humans