Results 1 - 3 of 3
1.
Brain Res; 1741: 146887, 2020 Aug 15.
Article in English | MEDLINE | ID: mdl-32422128

ABSTRACT

From a baby's cry to a piece of music, we perceive emotions from our auditory environment every day. Many theories bring forward the concept of common neural substrates for the perception of vocal and musical emotions. It has been proposed that, for us to perceive emotions, music recruits emotional circuits that evolved for the processing of biologically relevant vocalizations (e.g., screams, laughs). Although some studies have found similarities between voice and instrumental music in terms of acoustic cues and neural correlates, little is known about their processing time course. To further understand how vocal and instrumental emotional sounds are perceived, we used EEG to compare the neural processing time course of both stimulus types, expressed with varying degrees of complexity (vocal/musical affect bursts and emotion-embedded speech/music). Vocal stimuli in general, as well as musical/vocal bursts, were associated with a more concise sensory trace at initial stages of analysis (smaller N1), although vocal bursts had shorter latencies than the musical ones. As for the P2, vocal affect bursts and emotion-embedded musical stimuli were associated with earlier P2s. These results support the idea that emotional vocal stimuli are differentiated early from other sources and provide insight into the common neurobiological underpinnings of auditory emotions.
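Components such as the N1 and P2 discussed in this abstract are typically quantified by averaging stimulus-locked EEG epochs and reading out peak amplitude and latency in a component window. A minimal sketch of that procedure, using entirely synthetic data (the simulated deflections, windows, and sampling rate are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                           # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.6, 1 / fs)   # epoch from -100 ms to 600 ms

# Simulate 40 epochs: a negative deflection near 100 ms (N1-like) and a
# positive one near 200 ms (P2-like), buried in Gaussian noise.
signal = (-2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))
          + 3.0 * np.exp(-((t - 0.20) ** 2) / (2 * 0.025 ** 2)))
epochs = signal + rng.normal(0, 1.0, size=(40, t.size))

# Averaging across epochs attenuates noise and leaves the ERP.
erp = epochs.mean(axis=0)

def peak_latency(erp, t, lo, hi, polarity):
    """Latency (s) of the most negative (polarity=-1) or most positive
    (polarity=+1) sample within the window [lo, hi]."""
    win = (t >= lo) & (t <= hi)
    idx = np.argmin(erp[win]) if polarity < 0 else np.argmax(erp[win])
    return t[win][idx]

n1 = peak_latency(erp, t, 0.08, 0.14, polarity=-1)
p2 = peak_latency(erp, t, 0.15, 0.28, polarity=+1)
```

Latency differences between conditions, like the shorter N1 latencies for vocal bursts reported above, would then be tested on these per-condition readouts.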


Subjects
Acoustic Stimulation/methods , Auditory Perception/physiology , Emotions/physiology , Music/psychology , Reaction Time/physiology , Voice/physiology , Adult , Electroencephalography/methods , Female , Humans , Male , Time Factors , Young Adult
2.
Biol Psychol; 111: 14-25, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26307467

ABSTRACT

This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100) and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450-700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.


Subjects
Auditory Perception/physiology , Emotions/physiology , Evoked Potentials/physiology , Verbal Behavior/physiology , Adult , Brain/physiology , Electroencephalography , Female , Humans , Linguistics , Male , Voice , Young Adult
3.
Neuroscience; 290: 175-84, 2015 Apr 02.
Article in English | MEDLINE | ID: mdl-25637804

ABSTRACT

Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the time course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the time course of the neural dynamics of auditory processing and reveal how it is impacted by stimulus category and the expertise of participants.
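The group-by-stimulus effect reported here (musicians more responsive to music within the first 100 ms) is the kind of result usually tested by averaging each participant's ERP amplitude over an early time window and comparing group means. A hedged sketch with synthetic data (the window, gains, and subject counts are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 0.5, 1 / fs)
win = t <= 0.100                  # "within the first 100 ms"

def mean_window_amp(group_gain, n_subj=15):
    """Per-subject mean amplitude in the early window, for a group whose
    early response to music is scaled by group_gain."""
    comp = np.exp(-((t - 0.08) ** 2) / (2 * 0.02 ** 2))  # early component
    erps = group_gain * comp + rng.normal(0, 0.2, size=(n_subj, t.size))
    return erps[:, win].mean(axis=1)

musicians = mean_window_amp(group_gain=1.5)
nonmusicians = mean_window_amp(group_gain=1.0)
diff = musicians.mean() - nonmusicians.mean()  # positive if musicians respond more
```

In practice these per-subject window means would feed a between-group statistical test rather than a raw difference of means.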


Subjects
Auditory Perception/physiology , Brain/physiology , Music , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials , Female , Humans , Male , Principal Component Analysis , Professional Competence , Signal Processing, Computer-Assisted , Speech , Time Factors , Young Adult