Results 1 - 5 of 5
1.
BMC Med; 22(1): 121, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38486293

ABSTRACT

BACKGROUND: Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), yet current evidence supports both altered and intact recognition of emotional prosody. Here, a Bayesian framework of perception is considered, which suggests that oversampling of sensory evidence impairs perception in highly variable environments, whereas reliable hierarchical structuring of spectral and temporal cues would foster emotion discrimination by autistic listeners.

METHODS: Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while listening to speech uttered by (a) human or (b) synthesized voices characterized by reduced volatility and variability of the acoustic environment. The assessment of perceptual mechanisms was extended to the visual domain by analyzing behavioral accuracy in a non-social task that emphasized the dynamics of precision weighting between bottom-up evidence and top-down inferences. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistic children. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within-between analysis of variance was conducted to assess the effects of group (autism versus typical development), voice, emotion, and their interactions. A Bayesian analysis was implemented to quantify the evidence in favor of the null hypothesis in case of non-significance. Post hoc comparisons were corrected for multiple testing.

RESULTS: Autistic children showed impaired emotion differentiation when listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Neural patterns diverged between autistic and neurotypical children, pointing to different perceptual mechanisms. Consistently, behavioral measurements on the visual task matched an over-precision ascribed to environmental variability (sensory processing) that weakened performance. Unlike autistic children, neurotypical children could differentiate the emotions induced by all voices.

CONCLUSIONS: This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. Neurobiological insights into the processing of emotional prosody emphasize the potential of acoustically modified emotional prosodies to improve emotion differentiation by autistic children.

TRIAL REGISTRATION: BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.
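The ERSP measure referenced above is, in essence, baseline-normalised time-frequency power computed from epoched EEG. Below is a minimal illustrative sketch using MNE-Python; the file name, condition label, frequency range, and baseline window are assumptions for illustration, not the study's actual parameters.

    # Minimal ERSP sketch (illustrative parameters, hypothetical file and condition names).
    import numpy as np
    import mne
    from mne.time_frequency import tfr_morlet

    epochs = mne.read_epochs("sub-01_prosody-epo.fif")  # hypothetical epoched EEG file
    freqs = np.arange(4.0, 31.0, 1.0)                   # theta through beta, illustrative
    n_cycles = freqs / 2.0                              # wavelet length scales with frequency

    # Trial-averaged time-frequency power for one prosody condition.
    power = tfr_morlet(epochs["anger"], freqs=freqs, n_cycles=n_cycles,
                       return_itc=False, average=True, decim=2)

    # ERSP = spectral power expressed relative to a pre-stimulus baseline.
    power.apply_baseline(baseline=(-0.4, 0.0), mode="percent")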


Subjects
Autism Spectrum Disorder, Autistic Disorder, Child, Humans, Autistic Disorder/diagnosis, Speech, Autism Spectrum Disorder/diagnosis, Bayes Theorem, Emotions/physiology, Acoustics
2.
Sci Rep; 13(1): 8178, 2023 May 20.
Article in English | MEDLINE | ID: mdl-37210415

ABSTRACT

Emotional content is particularly salient, but situational factors such as cognitive load may disturb the attentional prioritization of affective stimuli and interfere with their processing. In this study, 31 autistic and 31 typically developed children volunteered so that their perception of affective prosodies could be assessed via event-related spectral perturbations of neuronal oscillations recorded by electroencephalography, under attentional load modulations induced by Multiple Object Tracking or neutral images. Although intermediate load optimized emotion processing in typically developed children, load and emotion did not interact in children with autism. Results also outlined impaired emotional integration, reflected in theta, alpha, and beta oscillations at early and late stages, and lower attentional ability indexed by tracking capacity. Furthermore, both tracking capacity and the neuronal patterns of emotion perception during the task were predicted by daily-life autistic behaviors. These findings highlight that intermediate load may encourage emotion processing in typically developed children. In autism, by contrast, affective processing and selective attention were impaired, and both were insensitive to load modulations. Results are discussed within a Bayesian perspective suggesting atypical updating of precision between sensations and hidden states, leading to poor contextual evaluations. For the first time, implicit emotion perception assessed by neuronal markers was integrated with environmental demands to characterize autism.
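As a rough illustration of the group-by-load comparisons described above, the sketch below fits a mixed (between-within) ANOVA on a long-format table of baseline-normalised theta power with pingouin. The file and column names are hypothetical, and the study's actual models also included emotion as a factor.

    # Hedged sketch: group (between) x attentional load (within) ANOVA on theta ERSP.
    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format table: one row per participant x load condition.
    df = pd.read_csv("theta_ersp_long.csv")  # columns: participant, group, load, theta_ersp

    aov = pg.mixed_anova(data=df, dv="theta_ersp", within="load",
                         between="group", subject="participant")
    print(aov.round(3))

    # Holm-corrected pairwise follow-ups for any significant effect.
    post = pg.pairwise_tests(data=df, dv="theta_ersp", within="load",
                             between="group", subject="participant", padjust="holm")
    print(post.round(3))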


Subjects
Autistic Disorder, Child, Humans, Autistic Disorder/psychology, Bayes Theorem, Emotions/physiology, Electroencephalography, Attention/physiology, Perception
3.
Data Brief; 48: 109057, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37006385

ABSTRACT

The relevance of affective information triggers cognitive prioritisation, dictated by both the attentional load of the relevant task and socio-emotional abilities. This dataset provides electroencephalographic (EEG) signals related to implicit emotional speech perception under low, intermediate, and high attentional demands. Demographic and behavioural data are also provided. Specificities in social-emotional reciprocity and verbal communication characterise Autism Spectrum Disorder (ASD) and may influence the processing of affective prosodies. Therefore, 62 children and their parents or legal guardians participated in data collection, including 31 children with high autistic traits (mean age 9.6 years; SD 1.5) who had previously received a diagnosis of ASD from a medical specialist, and 31 typically developed children (mean age 10.2 years; SD 1.2). Assessments of the scope of autistic behaviours using the Autism Spectrum Rating Scales (ASRS, parent report) are provided for every child. During the experiment, children listened to task-irrelevant affective prosodies (anger, disgust, fear, happiness, neutral, and sadness) while performing three visual tasks: neutral image viewing (low attentional load), one-target 4-disc Multiple Object Tracking (MOT; intermediate), and one-target 8-disc MOT (high). The EEG data recorded during all three tasks and the tracking capacity (behavioural data) from the MOT conditions are included in the dataset. In particular, tracking capacity was computed as a standardised index of attentional abilities during MOT, corrected for guessing. Beforehand, children answered the Edinburgh Handedness Inventory, and their resting-state EEG activity was recorded for 2 minutes with eyes open; those data are also provided. The present dataset can be used to investigate the electrophysiological correlates of implicit emotion and speech perception and their interaction with attentional load and autistic traits. In addition, the resting-state EEG data may be used to characterise inter-individual heterogeneity at rest and, in turn, relate it to attentional capacities during MOT and to autistic behavioural patterns. Finally, tracking capacity may be useful for exploring dynamic and selective attentional mechanisms under emotional constraints.
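The abstract describes tracking capacity as a standardised, guessing-corrected index of MOT accuracy without giving its exact formula. The sketch below shows one common guessing correction for a one-target design as an illustration only, not necessarily the dataset's precise definition.

    # Hedged sketch: a generic guessing correction for one-target MOT accuracy.
    def corrected_tracking_index(p_correct: float, n_discs: int) -> float:
        """Rescale accuracy so chance-level guessing maps to 0 and perfect tracking to 1."""
        chance = 1.0 / n_discs                    # one target among n_discs candidates
        return (p_correct - chance) / (1.0 - chance)

    # Example: 80% correct responses in the 4-disc and 8-disc conditions.
    print(corrected_tracking_index(0.80, 4))      # ~0.733
    print(corrected_tracking_index(0.80, 8))      # ~0.771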

4.
Front Comput Neurosci; 16: 1022787, 2022.
Article in English | MEDLINE | ID: mdl-36465969

ABSTRACT

Artificial voices are nowadays embedded in our daily lives, with the latest neural voices approaching the consistency (naturalness) of human voices. Nevertheless, the behavioral and neuronal correlates of the perception of less naturalistic emotional prosodies remain poorly understood. In this study, we explored the acoustic tendencies that define naturalness from human to synthesized voices. We then created naturalness-reduced emotional utterances by acoustic editing of human voices. Finally, we used Event-Related Potentials (ERP) to assess the time dynamics of emotional integration when listening to both human and synthesized voices in a sample of healthy adults. Additionally, listeners rated their perceptions of valence, arousal, discrete emotions, naturalness, and intelligibility. Synthesized voices were characterized by less lexical stress (i.e., a reduced difference between stressed and unstressed syllables within words) in terms of duration and median pitch modulations. In addition, spectral content was attenuated toward lower F2 and F3 frequencies and lower intensities for harmonics 1 and 4. Both psychometric and neuronal correlates were sensitive to the naturalness reduction: (1) naturalness and intelligibility ratings dropped when emotional utterances were synthesized; (2) discrete emotion recognition was impaired as naturalness declined, consistent with the P200 and Late Positive Potentials (LPP) being less sensitive to emotional differentiation at lower naturalness; and (3) relative P200 and LPP amplitudes between prosodies were modulated by synthetization. Nevertheless, (4) valence and arousal perceptions were preserved at lower naturalness; (5) valence (arousal) ratings correlated negatively (positively) with Higuchi's fractal dimension extracted from the neuronal data under all naturalness perturbations; and (6) Inter-Trial Phase Coherence (ITPC) and standard deviation measurements revealed high inter-individual heterogeneity in emotion perception that was preserved as naturalness decreased. Notably, partial between-participant synchrony (low ITPC), along with high amplitude dispersion of ERPs at both early and late stages, emphasized heterogeneous emotional responses among subjects. This study highlights, for the first time, both the behavioral and the neuronal basis of emotional perception under acoustic naturalness alterations. The partial dependencies between ecological relevance and emotion understanding outline the modulation, but not the annihilation, of emotional integration by synthetization.
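The P200 and LPP effects reported above are typically quantified as mean ERP amplitudes over fixed time windows and electrode sites. The sketch below illustrates that step with MNE-Python; the file name, condition tags, channels, and window bounds are illustrative assumptions rather than the study's exact analysis choices.

    # Hedged sketch: mean amplitude in P200 and LPP windows for two voice conditions.
    import mne

    epochs = mne.read_epochs("listener_prosody-epo.fif")   # hypothetical epoched EEG file
    evoked_human = epochs["human/anger"].average()          # hypothetical condition tags
    evoked_synth = epochs["synth/anger"].average()

    def mean_amplitude(evoked, tmin, tmax, channels):
        """Mean amplitude (microvolts) over a time window and a set of channels."""
        data = evoked.copy().pick(channels).crop(tmin, tmax).data
        return float(data.mean() * 1e6)

    # Illustrative windows: P200 ~150-250 ms (fronto-central), LPP ~400-800 ms (centro-parietal).
    for label, ev in [("human", evoked_human), ("synthesized", evoked_synth)]:
        print(label,
              "P200:", mean_amplitude(ev, 0.15, 0.25, ["Fz", "Cz"]),
              "LPP:", mean_amplitude(ev, 0.40, 0.80, ["Cz", "Pz"]))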

5.
Front Hum Neurosci; 15: 626146, 2021.
Article in English | MEDLINE | ID: mdl-33716696

ABSTRACT

Socio-emotional impairments are key symptoms of Autism Spectrum Disorders. This work proposes to analyze the neuronal activity related to the discrimination of emotional prosodies in autistic children (aged 9 to 11 years) as follows. First, a database of single words uttered in Mexican Spanish by males, females, and children will be created. Optimal acoustic features for emotion characterization will then be extracted, followed by classification with a cubic-kernel Support Vector Machine (SVM) to validate the speech corpus. As a result, human-specific acoustic properties of emotional voice signals will be identified. Second, those identified acoustic properties will be modified to synthesize the recorded human emotional voices. Third, both human and synthesized utterances will be used to study the electroencephalographic correlates of affective prosody processing in typically developed and autistic children. Finally, and on the basis of the outcomes, synthesized voice-enhanced environments will be created to develop an intervention based on a social robot and Social Story™ for autistic children, aimed at improving the discrimination of affective prosodies. This protocol has been registered at BioMed Central under the following number: ISRCTN18117434.
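The corpus-validation step mentioned in the protocol (a cubic-kernel SVM over acoustic features) corresponds to a polynomial kernel of degree 3 in common toolkits. Here is a minimal scikit-learn sketch under that assumption; the feature files and cross-validation setup are hypothetical.

    # Hedged sketch: cubic-kernel SVM validating emotion labels from acoustic features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X = np.load("acoustic_features.npy")   # hypothetical (n_utterances, n_features) matrix
    y = np.load("emotion_labels.npy")      # hypothetical emotion label per utterance

    # "Cubic kernel" = polynomial kernel of degree 3.
    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print("Mean cross-validated accuracy:", scores.mean())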
