Results 1 - 20 of 175
1.
PLoS One; 19(5): e0293781, 2024.
Article in English | MEDLINE | ID: mdl-38776350

ABSTRACT

The brain calibrates itself based on the past stimulus diet, which makes frequently observed stimuli appear as typical (as opposed to uncommon stimuli, which appear as distinctive). Based on predictive processing theory, the brain should be more "prepared" for typical exemplars, because these contain information that has been encountered frequently, allowing it to economically represent items of that category. Thus, one could ask whether predictability and typicality of visual stimuli interact, or rather act in an additive manner. We adapted the design by Egner and colleagues (2010), who used cues to induce expectations about stimulus category (face vs. chair) occurrence during an orthogonal inversion detection task. We measured BOLD responses with fMRI in 35 participants. First, distinctive stimuli always elicited stronger responses than typical ones in all ROIs, and our whole-brain directional contrasts for the effects of typicality and distinctiveness converge with previous findings. Second and importantly, we could not replicate the interaction between category and predictability reported by Egner et al. (2010), which casts doubt on whether cueing designs are ideal to elicit reliable predictability effects. Third, likely as a consequence of the lack of predictability effects, we found no interaction between predictability and typicality in any of the four tested regions (bilateral fusiform face areas, lateral occipital complexes) when considering both categories, nor in the whole brain. We discuss the issue of replicability in neuroscience and sketch an agenda for how future studies might address the same question.


Subjects
Brain, Magnetic Resonance Imaging, Photic Stimulation, Humans, Male, Female, Magnetic Resonance Imaging/methods, Adult, Young Adult, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods, Cues (Psychology), Visual Pattern Recognition/physiology, Face
2.
Article in English | MEDLINE | ID: mdl-38739182

ABSTRACT

Neurofeedback training (NFT) is a promising adjuvant intervention method. The desynchronization of mu rhythm (8-13 Hz) in the electroencephalogram (EEG) over centro-parietal areas is considered a valid indicator of mirror neuron system (MNS) activation, which has been associated with social skills. Still, the effect of neurofeedback training on the MNS remains to be thoroughly investigated. The present study examined the possible impact of NFT with a mu suppression training protocol encompassing 15 NFT sessions (45 min each) on 16 healthy neurotypical participants. In separate pre- and post-training sessions, 64-channel EEG was recorded while participants (1) observed videos with various types of movements (including complex goal-directed hand movements and social interaction scenes) and (2) performed the "Reading the Mind in the Eyes Test" (RMET). EEG source reconstruction analysis revealed statistically significant mu suppression during hand movement observation across MNS-attributed fronto-parietal areas after NFT. The frequency analysis showed no significant mu suppression after NFT, although numerical mu suppression was visible in a majority of participants during goal-directed hand movement observation. At the behavioral level, RMET accuracy scores did not suggest an effect of NFT on the ability to interpret subtle emotional expressions, although RMET response times were reduced after NFT. In conclusion, the present study provides preliminary and partial evidence that mu suppression NFT can induce mu suppression in MNS-attributed areas. More powerful experimental designs and longer training may be necessary to induce substantial and consistent mu suppression, particularly while observing social scenarios.

3.
Br J Psychol; 115(2): 206-225, 2024 May.
Article in English | MEDLINE | ID: mdl-37851369

ABSTRACT

Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.


Subjects
Music, Voice, Humans, Acoustic Stimulation, Emotions, Fear, Recognition (Psychology), Music/psychology, Auditory Perception
4.
PLoS One; 18(12): e0295033, 2023.
Article in English | MEDLINE | ID: mdl-38091269

ABSTRACT

Empirical investigations into eyewitness identification accuracy typically necessitate the creation of novel stimulus materials, which can be a challenging and time-consuming task. To facilitate this process and promote further research in this domain, we introduce the new Jena Eyewitness Research Stimuli (JERS). They comprise six video sequences depicting a mock theft committed by two different perpetrators, available in both two-dimensional (2D) and 360° format, combined with the corresponding lineup images presented in 2D or three-dimensional (3D) format. Images of one suspect and eight fillers are available for each lineup. We evaluated lineup fairness using the mock eyewitness paradigm and observed a Tredoux's E of 4.687 for Perpetrator 1 and 5.406 for Perpetrator 2. Moreover, no bias towards the perpetrators was observed in the lineups. We incorporated 360° videos and 3D lineup images to encourage the adoption of innovative data formats in experimental investigations of eyewitness accuracy. In particular, compatibility with Virtual Reality (VR) makes JERS a promising tool for advancing eyewitness research by enabling researchers to construct controlled environments that offer observers an immersive experience. JERS is freely accessible for academic purposes via the Open Science Framework (OSF).
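Tredoux's E, the lineup fairness measure reported above, is the reciprocal of the sum of squared choice proportions across lineup members (its maximum equals the nominal lineup size). A minimal sketch with hypothetical mock-witness counts (the actual JERS choice data are not reproduced here):

```python
def tredoux_e(choices):
    """Tredoux's E (effective/functional lineup size): the reciprocal of the
    sum of squared choice proportions across lineup members."""
    total = sum(choices)
    return 1.0 / sum((c / total) ** 2 for c in choices)

# Hypothetical mock-witness choice counts for a 9-member lineup
# (1 suspect + 8 fillers); a perfectly fair lineup would give E = 9.
counts = [20, 15, 12, 10, 10, 9, 9, 8, 7]
effective_size = tredoux_e(counts)
```

With equal choice counts, E equals the nominal lineup size; the values reported above (4.687 and 5.406 out of a nominal 9) indicate moderately fair lineups.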


Subjects
Recognition (Psychology), Virtual Reality, Humans, Theft, Factual Databases, Researchers, Crime, Mental Recall
5.
Brain Sci; 13(11), 2023 Nov 07.
Article in English | MEDLINE | ID: mdl-38002523

ABSTRACT

Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
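ERP component effects like the P200 and N400 above are commonly quantified as mean amplitudes within a fixed time window at selected electrodes. A generic sketch of that step (not the authors' actual analysis pipeline), using a hypothetical epochs array:

```python
import numpy as np

def mean_amplitude(epochs, times, tmin, tmax):
    """Mean amplitude (e.g., in microvolts) across trials and the samples
    falling inside the component window [tmin, tmax], given in seconds."""
    window = (times >= tmin) & (times <= tmax)
    return float(epochs[:, window].mean())

# Hypothetical epoched data: 40 trials x 701 samples (-0.2 to 0.5 s, 1000 Hz)
times = np.linspace(-0.2, 0.5, 701)
epochs = np.zeros((40, 701))
p200 = mean_amplitude(epochs, times, 0.150, 0.250)  # P200 window from the abstract
```

In practice these per-condition means would then enter a repeated-measures comparison between groups (here, musicians vs. non-musicians).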

6.
Behav Res Methods; 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37821750

ABSTRACT

We describe JAVMEPS, an audiovisual (AV) database for emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistic induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords), (C2) and with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant (CI) users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing individuals showed good classification performance (McorrAV = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above chance levels of .14 for auditory-only stimuli, with best rates for surprise (.31) and anger (.30). We anticipate that JAVMEPS will become a useful open resource for researchers studying auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also contribute to filling a gap in research regarding dynamic audiovisual integration of emotion perception via behavioral or neurophysiological recordings.
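The caricature levels in Stimulus Set B (140%, 100%, 60%) follow the standard morphing arithmetic of scaling an emotional parameter trajectory's deviation from a reference. A minimal sketch of that principle with hypothetical F0 trajectories; actual voice morphing software manipulates many acoustic parameters jointly, so this only illustrates the arithmetic:

```python
import numpy as np

def caricature(reference, emotional, level):
    """Scale an emotional parameter trajectory's deviation from a reference:
    level 1.0 reproduces the original emotional trajectory, 1.4 yields a
    caricature, and 0.6 an anti-caricature."""
    reference = np.asarray(reference, dtype=float)
    emotional = np.asarray(emotional, dtype=float)
    return reference + level * (emotional - reference)

# Hypothetical F0 trajectories (Hz): a neutral reference and a happy utterance
neutral_f0 = np.array([110.0, 112.0, 115.0])
happy_f0 = np.array([130.0, 140.0, 150.0])
exaggerated = caricature(neutral_f0, happy_f0, 1.4)  # 140% caricature
```

Intermediate levels between 0.6 and 1.4 would correspond to the five graded morph levels of Stimulus Set C2.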

7.
Animals (Basel); 13(17), 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37685015

ABSTRACT

Guide dogs hold the potential to increase confidence and independence in visually impaired individuals. However, the success of the partnership between a guide dog and its handler depends on various factors, including the compatibility between the dog and the handler. Here, we conducted interviews with 21 guide dog owners to explore determinants of compatibility between the dog and the owner. Experienced compatibility between the dog and the owner was associated with positive relationship aspects such as feeling secure with the dog. Certain characteristics emerged as subjective determinants of compatibility, including shared hobbies, high levels of openness in both or only the dog, similar activity levels (or higher activeness in dogs), similar expressions of calmness, happiness, greediness, and friendliness, and a complementary dominance-submissiveness relationship. Owners who perceived their personality as similar to their dog's often reported having a strong bond, feeling secure with their dog, and being less influenced by previous relationships. However, our results suggest that a strong bond between the dog and the owner does not exclusively yield positive effects. Moreover, prior dog ownership seems to have a potentially strong impact on the subsequent relationship. Our results contribute to the understanding of dog-owner compatibility and may improve the matching process of guide dogs and their prospective handlers.

8.
Biol Psychol; 182: 108654, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37549807

ABSTRACT

Valentine's influential norm-based multidimensional face-space model (nMDFS) predicts that perceived distinctiveness of a face increases with its distance to the norm. Occipito-temporal event-related potentials (ERPs) have recently been shown to respond selectively to variations in distance-to-norm (P200) or familiarity (N250, late negativity), respectively (Wuttke & Schweinberger, 2019). Despite growing evidence on interindividual differences in face perception skills at the behavioral level, little research has focused on their electrophysiological correlates. To reveal potential interindividual differences in face spaces, we contrasted high and low performers in face recognition with regard to distance-to-norm (P200) and familiarity (N250) effects. We replicated both the P200 distance-to-norm and the N250 familiarity effect. Importantly, we observed: i) reduced responses in low compared to high performers of face recognition, especially in terms of smaller distance-to-norm effects in the P200, possibly indicating less 'expanded' face spaces in low compared to high performers; ii) increased N250 responses to familiar original faces in high performers, suggesting more robust face identity representations. In summary, these findings suggest the contribution of both early norm-based face coding and robust face representations to individual face recognition skills, and indicate that ERPs can offer a promising route to understand individual differences in face perception and their neurocognitive correlates.


Subjects
Facial Recognition, Humans, Facial Recognition/physiology, Electroencephalography/methods, Face, Recognition (Psychology)/physiology, Evoked Potentials/physiology, Visual Pattern Recognition/physiology
9.
Brain Sci; 13(4), 2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37190602

ABSTRACT

Recognizing people from their voices may be facilitated by a voice's distinctiveness, in a manner similar to that which has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces. We repeatedly presented twelve unfamiliar voices uttering short sentences, together with either distinctive or non-distinctive faces (depicted before and during voice presentation), across six learning-test cycles. During learning, distinctive faces increased early visually-evoked (N170, P200, N250) potentials relative to non-distinctive faces, and face distinctiveness modulated voice-elicited slow EEG activity at the occipito-temporal and fronto-central electrodes. At test, unimodally-presented voices previously learned with distinctive faces were classified more quickly than voices learned with non-distinctive faces, and also more quickly than novel voices. Moreover, voices previously learned with faces elicited an N250-like component that was similar in topography to that typically observed for facial stimuli. The preliminary source localization of this voice-induced N250 was compatible with a source in the fusiform gyrus. Taken together, our findings provide support for a theory of early interaction between voice and face processing areas during both learning and voice recognition.

10.
Cogn Emot; 37(4): 731-747, 2023.
Article in English | MEDLINE | ID: mdl-37104118

ABSTRACT

Research into voice perception benefits from manipulation software to gain experimental control over acoustic expression of social signals such as vocal emotions. Today, parameter-specific voice morphing allows precise control of the emotional quality expressed by single vocal parameters, such as fundamental frequency (F0) and timbre. However, potential side effects, in particular reduced naturalness, could limit the ecological validity of speech stimuli. To address this for the domain of emotion perception, we collected ratings of perceived naturalness and emotionality on voice morphs expressing different emotions either through F0 or timbre only. In two experiments, we compared two different morphing approaches, using either neutral voices or emotional averages as emotionally non-informative reference stimuli. As expected, parameter-specific voice morphing reduced perceived naturalness. However, the perceived naturalness of F0 and timbre morphs was comparable when averaged emotions served as the reference, potentially making this approach more suitable for future research. Crucially, there was no relationship between ratings of emotionality and naturalness, suggesting that the perception of emotion was not substantially affected by a reduction of voice naturalness. We hold that while these findings advocate parameter-specific voice morphing as a suitable tool for research on vocal emotion perception, great care should be taken in producing ecologically valid stimuli.


Subjects
Speech Perception, Voice, Humans, Emotions
11.
Behav Res Methods; 55(3): 1352-1371, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35648317

ABSTRACT

The ability to recognize someone's voice spans a broad spectrum, with phonagnosia on the low end and super-recognition at the high end. Yet there is no standardized test to measure an individual's ability to learn and recognize newly learned voices using samples with speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 22-min test based on item response theory and applicable across languages. The JVLMT consists of three phases in which participants (1) become familiarized with eight speakers, (2) revise the learned voices, and (3) perform a 3AFC recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with various levels of difficulty. Test scores are based on 22 items which were selected and validated in two online studies with 232 and 454 participants, respectively. Mean accuracy in the JVLMT is 0.51 (SD = .18) with an empirical (marginal) reliability of 0.66. Correlational analyses showed high and moderate convergent validity with the Bangor Voice Matching Test (BVMT) and Glasgow Voice Memory Test (GVMT), respectively, and high discriminant validity with a digit span test. Four participants with potential super-recognition abilities and seven participants with potential phonagnosia were identified, who performed at least 2 SDs above or below the mean, respectively. The JVLMT is a promising research and diagnostic screening tool to detect both impairments in voice recognition and super-recognition abilities.
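In IRT models for a 3AFC task like the JVLMT's recognition phase, the guessing floor is naturally fixed at chance level (1/3). The abstract does not specify the exact model used, so the following is an illustrative three-parameter logistic item response function under that assumption:

```python
import math

def irf_3afc(theta, a, b, c=1 / 3):
    """Three-parameter logistic item response function with the guessing
    floor fixed at c = 1/3 (chance level of a 3AFC task).
    theta: latent ability; a: item discrimination; b: item difficulty."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# At theta == b, expected accuracy lies halfway between chance and ceiling
p_mid = irf_3afc(theta=0.0, a=1.5, b=0.0)
```

Items with difficulties b spread along the ability scale, as produced by the acoustic (dis)similarity manipulation, are what allow a short 22-item test to discriminate from phonagnosia up to super-recognition.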


Subjects
Speech Perception, Voice, Humans, Reproducibility of Results, Voice/physiology, Speech, Learning/physiology, Recognition (Psychology)/physiology, Speech Perception/physiology
12.
Br J Psychol; 114 Suppl 1: 1-9, 2023 May.
Article in English | MEDLINE | ID: mdl-36583346

ABSTRACT

Although different human races do not exist from the perspective of biology and genetics, ascribed 'race' influences psychological processing, such as memory and perception of faces. Research from this Special Issue, as well as a wealth of previous research, shows that other-'race' faces are more difficult to recognize compared to own-'race' faces, a phenomenon known as the other-'race' effect. Theories of expertise attribute the cause of the other-'race' effect to less efficient visual representations of other-'race' faces, which results from reduced visual expertise with other-'race' faces compared to own-'race' faces due to limited contact with individuals from other 'racial' groups. By contrast, social-cognitive accounts attribute the cause of the other-'race' effect to reduced motivation to individuate other-'race' faces compared to own-'race' faces. Evidence for both types of theories is still mixed, but progress in understanding the phenomenon has also been hampered by the fact that there has been little crosstalk between these accounts, which tend to be rooted in separate domains of experimental perception science and social psychology, respectively. To promote an integrative perspective on current knowledge on own- versus other-'race' face processing, the present Special Issue bridges different psychological subdisciplines, showcasing research using a large variety of methodological approaches and measures. In this guest editorial, we briefly highlight individual contributions to this Special Issue and offer what we see as important avenues for future research on the other-'race' effect.


Subjects
Facial Recognition, Racial Groups, Humans, Visual Pattern Recognition
13.
Br J Psychol; 114 Suppl 1: 45-69, 2023 May.
Article in English | MEDLINE | ID: mdl-36111613

ABSTRACT

Two competing theories explain the other-'race' effect (ORE) either by greater perceptual expertise to same-'race' (SR) faces or by social categorization of other-'race' (OR) faces at the expense of individuation. To assess expertise and categorization contributions to the ORE, a promising-yet overlooked-approach is comparing activations for different other-'races'. We present a label-based systematic review of neuroimaging studies reporting increased activity in response to OR faces (African, Caucasian, or Asian) when compared with the SR of participants. Hypothetically, while common activations would reflect general aspects of OR perception, 'race'-preferential ones would represent effects of 'race'-specific visual appearance. We find that several studies report activation of occipito-temporal and midcingulate areas in response to faces across different other-'races', presumably due to high demand on the visual system and category processing. Another area reported in response to all OR faces, the caudate nucleus, suggests the involvement of socio-affective processes and behavioural regulation. Overall, our results support hybrid models-both expertise and social categorization contribute to the ORE, but they provide little evidence for reduced motivation to process OR faces. Additionally, we identify areas preferentially responding to specific OR faces, reflecting effects of visual appearance.


Subjects
Facial Recognition, Racial Groups, Humans, Asian People, Cognition, Neuroimaging, Visual Pattern Recognition/physiology, White People, Black or African American, Social Behavior
14.
Sensors (Basel); 22(19), 2022 Oct 06.
Article in English | MEDLINE | ID: mdl-36236658

ABSTRACT

Vocal emotion recognition (VER) in natural speech, often referred to as speech emotion recognition (SER), remains challenging for both humans and computers. Applied fields including clinical diagnosis and intervention, social interaction research, and Human-Computer Interaction (HCI) increasingly benefit from efficient VER algorithms. Several feature sets have been used with machine-learning (ML) algorithms for discrete emotion classification. However, there is no consensus on which low-level descriptors and classifiers are optimal. Therefore, we aimed to compare the performance of machine-learning algorithms with several different feature sets. Concretely, seven ML algorithms were compared on the Berlin Database of Emotional Speech: Multilayer Perceptron Neural Network (MLP), J48 Decision Tree (DT), Support Vector Machine with Sequential Minimal Optimization (SMO), Random Forest (RF), k-Nearest Neighbor (KNN), Simple Logistic Regression (LOG) and Multinomial Logistic Regression (MLR), with 10-fold cross-validation, using four openSMILE feature sets (i.e., IS-09, emobase, GeMAPS and eGeMAPS). Results indicated that SMO, MLP and LOG showed better performance (reaching 87.85%, 84.00% and 83.74% accuracy, respectively) compared to RF, DT, MLR and KNN (with minimum accuracies of 73.46%, 53.08%, 70.65% and 58.69%, respectively). Overall, the emobase feature set performed best. We discuss the implications of these findings for applications in diagnosis, intervention or HCI.
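A comparison of this kind can be sketched with scikit-learn as a rough analogue of the workflow; the feature matrix below is synthetic random data standing in for openSMILE features extracted from the Berlin database, and only two of the seven classifiers are shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for an openSMILE feature matrix (e.g., emobase features);
# a real comparison would extract features from the Berlin EmoDB recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(140, 20))
y = np.repeat(np.arange(7), 20)  # seven balanced discrete emotion classes

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "SVM (linear kernel)": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "LOG": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
# Mean 10-fold cross-validated accuracy per classifier
mean_acc = {name: cross_val_score(m, X, y, cv=cv).mean() for name, m in models.items()}
```

With random features, accuracies hover around the 1/7 chance level; real acoustic features are what produce the separation between classifiers reported above.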


Subjects
Machine Learning, Speech, Algorithms, Emotions, Humans, Neural Networks (Computer), Support Vector Machine
15.
Cogn Neuropsychol; 39(3-4): 196-207, 2022.
Article in English | MEDLINE | ID: mdl-36202621

ABSTRACT

Most findings on prosopagnosia to date suggest preserved voice recognition in prosopagnosia (except in cases with bilateral lesions). Here we report a follow-up examination on M.T., suffering from acquired prosopagnosia following a large unilateral right-hemispheric lesion in frontal, parietal, and anterior temporal areas excluding core ventral occipitotemporal face areas. Twenty-three years after initial testing we reassessed face and object recognition skills [Henke, K., Schweinberger, S. R., Grigo, A., Klos, T., & Sommer, W. (1998). Specificity of face recognition: Recognition of exemplars of non-face objects in prosopagnosia. Cortex, 34(2), 289-296]; [Schweinberger, S. R., Klos, T., & Sommer, W. (1995). Covert face recognition in prosopagnosia - A dissociable function? Cortex, 31(3), 517-529] and additionally studied voice recognition. Confirming the persistence of deficits, M.T. exhibited substantial impairments in famous face recognition and memory for learned faces, but preserved face matching and object recognition skills. Critically, he showed substantially impaired voice recognition skills. These findings are congruent with the ideas that (i) prosopagnosia after right anterior temporal lesions can persist over long periods > 20 years, and that (ii) such lesions can be associated with both facial and vocal deficits in person recognition.


Subjects
Prosopagnosia, Stroke, Follow-Up Studies, Humans, Magnetic Resonance Imaging, Prosopagnosia/pathology, Temporal Lobe
16.
Front Neurosci; 16: 956917, 2022.
Article in English | MEDLINE | ID: mdl-36090287

ABSTRACT

The use of digitally modified stimuli with enhanced diagnostic information to improve verbal communication in children with sensory or central handicaps was pioneered by Tallal and colleagues in 1996, who targeted speech comprehension in language-learning impaired children. Today, researchers are aware that successful communication cannot be reduced to linguistic information-it depends strongly on the quality of communication, including non-verbal socio-emotional communication. In children with cochlear implants (CIs), quality of life (QoL) is affected, but this can be related to the ability to recognize emotions in a voice rather than speech comprehension alone. In this manuscript, we describe a family of new methods, termed parameter-specific facial and vocal morphing. We propose that these provide novel perspectives for assessing sensory determinants of human communication, but also for enhancing socio-emotional communication and QoL in the context of sensory handicaps, via training with digitally enhanced, caricatured stimuli. Based on promising initial results with various target groups including people with age-related macular degeneration, people with low abilities to recognize faces, older people, and adult CI users, we discuss chances and challenges for perceptual training interventions for young CI users based on enhanced auditory stimuli, as well as perspectives for CI sound processing technology.

17.
Soc Cogn Affect Neurosci; 17(12): 1145-1154, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-35522247

ABSTRACT

Our ability to infer a speaker's emotional state depends on the processing of acoustic parameters such as fundamental frequency (F0) and timbre. Yet, how these parameters are processed and integrated to inform emotion perception remains largely unknown. Here we pursued this issue using a novel parameter-specific voice morphing technique to create stimuli with emotion modulations in only F0 or only timbre. We used these stimuli together with fully modulated vocal stimuli in an event-related potential (ERP) study in which participants listened to and identified stimulus emotion. ERPs (P200 and N400) and behavioral data converged in showing that both F0 and timbre support emotion processing but do so differently for different emotions: Whereas F0 was most relevant for responses to happy, fearful and sad voices, timbre was most relevant for responses to voices expressing pleasure. Together, these findings offer original insights into the relative significance of different acoustic parameters for early neuronal representations of speaker emotion and show that such representations are predictive of subsequent evaluative judgments.


Subjects
Speech Perception, Voice, Humans, Male, Female, Electroencephalography, Evoked Potentials, Emotions/physiology, Auditory Perception/physiology, Speech Perception/physiology
18.
Healthcare (Basel); 10(4), 2022 Apr 12.
Article in English | MEDLINE | ID: mdl-35455891

ABSTRACT

Since COVID-19 became a pandemic, everyday life has seen dramatic changes affecting individuals, families, and children with and without autism. Among other things, these changes entail more time at home, digital forms of communication, school closures, and reduced support and intervention. Here, we assess the effects of the pandemic on quality of life for school-age autistic and neurotypical children and adolescents. First, we provide a comprehensive review of the current relevant literature. Next, we report original data from a survey conducted in several countries, assessing activities, well-being, and social life in families with autism, and their changes over time. We focus on differences between children with and without autism from within the same families, and on different outcomes for children with high- or low-functioning autism. While individuals with autism scored lower in emotional and social functioning than their neurotypical siblings, both groups of children showed comparable decreases in well-being and increases in anxiety, compared to before the pandemic. By contrast, decreases in adaptability were significantly more pronounced in autistic children and adolescents compared to neurotypical children and adolescents. Overall, although individual families reported some positive effects of pandemic restrictions, our data provide no evidence that these generalize across children and adolescents with autism, or even just to individuals with high-functioning autism. We discuss the increased challenges that need to be addressed to protect the well-being of children and adolescents under pandemic conditions, but also point out opportunities in the present situation that could be used to foster social participation and success in older children and young adults with autism.

19.
Ear Hear; 43(4): 1178-1188, 2022.
Article in English | MEDLINE | ID: mdl-34999594

ABSTRACT

OBJECTIVES: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. DESIGN: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. RESULTS: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. CONCLUSIONS: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
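Discrimination performance along such fearful-angry morph continua is typically summarized by fitting a psychometric function per listener and condition. A sketch with hypothetical response proportions (not data from this study), assuming a simple logistic form:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Logistic psychometric function: p('angry') as a function of the
    position x on a fearful-to-angry morph continuum (0 = fear, 1 = anger).
    pse: point of subjective equality; slope: steepness (sensitivity)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Hypothetical response proportions of one listener across five morph steps
levels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
p_angry = np.array([0.05, 0.20, 0.55, 0.85, 0.95])
(pse, slope), _ = curve_fit(psychometric, levels, p_angry, p0=[0.5, 5.0])
```

A shallower fitted slope in the F0-only condition for CI users, relative to the Timbre-only condition, would correspond to the reported inefficiency of CIs in conveying emotion through F0 alone.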


Subjects
Cochlear Implantation, Cochlear Implants, Music, Speech Perception, Acoustic Stimulation, Auditory Perception, Emotions, Humans, Quality of Life
20.
Neuropsychologia; 165: 108133, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-34971671

ABSTRACT

Recent research suggests a disproportionate reliance on shape information in people with poor face recognition, although texture information appears to be more important for familiar face recognition. Here, we tested a training program with faces that were selectively caricatured in either shape or texture parameters. Forty-eight young adults with poor face recognition skills (1 SD below the mean in at least two of three face processing tests: CFMT, GFMT, BFFT) were pseudo-randomly assigned to either one of two training groups or a control group (n = 16 each). Training comprised six sessions over three weeks. Per session, participants studied ten unfamiliar facial identities whose shape or texture characteristics were caricatured. Before and after training (or waiting, in the control group), all participants completed EEG experiments on face learning and famous face recognition, and behavioral face processing tests. Results showed small but specific training-induced improvements: Whereas shape training improved face matching (training tasks, and to some extent the GFMT), texture training elicited marked improvements in face learning (CFMT). Moreover, for the texture training group the N170 ERP was enhanced for novel faces post-training, suggesting training-induced changes in early markers of face processing. Although further research is necessary, this suggests that parameter-specific caricature training is a promising way to improve performance in people with poor face recognition skills.


Subjects
Facial Recognition, Visual Pattern Recognition, Electroencephalography, Humans, Learning, Pilot Projects, Young Adult