Results 1 - 4 of 4
1.
J Cogn Neurosci; 27(2): 280-91, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25170793

ABSTRACT

The human voice is the primary carrier of speech but also a fingerprint of a person's identity. Previous neuroimaging studies have revealed that speech and identity recognition are accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with different contributions of its posterior and anterior parts. However, the time point at which vocal and speech processing diverge is currently unknown. Moreover, the exact role of the right STS during voice processing remains unclear, because its behavioral relevance has not yet been established. Here, we used the high temporal resolution of magnetoencephalography together with a speech-task control condition to pinpoint transient behavioral correlates: at 200 msec after stimulus onset, activity in the right anterior STS predicted behavioral voice recognition performance. At the same time point, the posterior right STS showed increased activity during voice identity recognition compared with speech recognition, whereas the left mid STS showed the reverse pattern. In contrast to the highly speech-sensitive left STS, the current results highlight the right STS as a key area for voice identity recognition and show that its anatomical-functional division emerges around 200 msec after stimulus onset. We suggest that this time point marks the speech-independent processing of vocal sounds in the posterior STS and their successful mapping to vocal identities in the anterior STS.
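For readers curious what this kind of brain-behavior correlation amounts to computationally, here is a minimal Python sketch: average per-subject MEG source amplitude in a window around 200 msec in an ROI, then correlate it across subjects with recognition accuracy. This is not the authors' pipeline; all data, array shapes, and the window choice are hypothetical stand-ins.

```python
# Minimal sketch of an across-subject brain-behavior correlation.
# All data below are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_times = 20, 600              # e.g., -100..500 ms at ~1 kHz
times = np.linspace(-0.1, 0.5, n_times)

# Hypothetical stand-ins: ROI time courses (subjects x time) and accuracies.
roi_activity = rng.normal(size=(n_subjects, n_times))
accuracy = rng.uniform(0.5, 1.0, size=n_subjects)

# Average source amplitude in a window around 200 ms for each subject.
window = (times >= 0.18) & (times <= 0.22)
amp_200ms = roi_activity[:, window].mean(axis=1)

# Across-subject correlation between 200 ms activity and behavior.
r, p = stats.pearsonr(amp_200ms, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Averaging over a short window rather than taking a single sample is a common choice to reduce sensitivity to per-subject latency jitter.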


Subjects
Cerebral Cortex/physiology; Pattern Recognition, Physiological/physiology; Speech Perception/physiology; Voice; Acoustic Stimulation; Brain Mapping; Female; Functional Laterality; Humans; Magnetoencephalography; Male; Signal Processing, Computer-Assisted; Young Adult
2.
Neuroimage; 77: 237-45, 2013 Aug 15.
Article in English | MEDLINE | ID: mdl-23563227

ABSTRACT

How do we recognize people who are familiar to us? There is overwhelming evidence that our brains process voice and face in a combined fashion to optimally recognize both who is speaking and what is said. Surprisingly, this combined processing of voice and face seems to occur even if one stream of information is missing. For example, if subjects merely hear a familiar person talking, without seeing their face, visual face-processing areas are active. One reason for this crossmodal activation might be that it is instrumental for early sensory processing of voices, a hypothesis that runs contrary to current models of unisensory perception. Here, we test this hypothesis by harnessing a method with high temporal resolution, magnetoencephalography (MEG), to identify the temporal response profile of the fusiform face area to auditory-only voice recognition. Participants briefly learned a set of voices audio-visually, i.e., together with a talking face. After learning, we measured subjects' MEG signals in response to the auditory-only, now familiar, voices. The results revealed three key mechanisms that characterize the sensory processing of familiar speakers' voices: (i) activation of the face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset; (ii) a temporal facilitation of auditory processing (M200); and (iii) a correlation of this temporal facilitation with recognition performance. These findings suggest that a neural representation of face information is evoked before the identity of the voice is even recognized, and that the brain uses this visual representation to facilitate early sensory processing of auditory-only voices.
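The "temporal facilitation" findings (ii) and (iii) can be illustrated with a short sketch: estimate a per-subject M200 peak latency in each condition, take the latency shift between conditions, and correlate that shift with recognition performance. This is a hedged illustration under assumed, simulated data, not the paper's actual MEG analysis.

```python
# Sketch of an M200 latency-facilitation analysis on simulated data.
# Condition names, window bounds, and signals are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_times = 19, 500
times = np.linspace(0.0, 0.5, n_times)           # 0..500 ms post-onset

def m200_latency(evoked, times):
    """Peak latency (s) within a 150-250 ms search window."""
    win = (times >= 0.15) & (times <= 0.25)
    return times[win][np.argmax(evoked[win])]

# Hypothetical evoked responses (subjects x time) for two conditions.
evoked_familiar = rng.normal(size=(n_subjects, n_times))
evoked_control = rng.normal(size=(n_subjects, n_times))
performance = rng.uniform(0.4, 1.0, size=n_subjects)

# Positive shift = earlier M200 for the familiar (face-learned) voices.
shift = np.array([m200_latency(c, times) - m200_latency(f, times)
                  for f, c in zip(evoked_familiar, evoked_control)])
r, p = stats.pearsonr(shift, performance)
print(f"latency facilitation vs. performance: r = {r:.2f}, p = {p:.3f}")
```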


Subjects
Auditory Perception/physiology; Brain/physiology; Recognition, Psychology/physiology; Visual Perception/physiology; Voice/physiology; Brain Mapping; Face; Female; Humans; Magnetic Resonance Imaging; Magnetoencephalography; Male; Young Adult
3.
Exp Brain Res; 198(2-3): 137-51, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19526359

ABSTRACT

Disparate sensory streams originating from a common underlying event share similar dynamics, and this plays an important part in multisensory integration. Here we investigate audiovisual binding by presenting continuously changing, temporally congruent and incongruent stimuli. Recorded EEG signals are used to quantify spectrotemporal and waveform locking of neural activity to the stimulus dynamics. Spectrotemporal analysis reveals locking to visual stimulus dynamics in both a broad alpha band and the beta band. The properties of these effects suggest they are a correlate of bottom-up processing in the visual system. Waveform locking reveals two cortically distinct processes that lock to visual stimulus dynamics with differing topographies and time lags relative to the stimuli. Most importantly, the strength of both processes is modulated by the congruency of an accompanying auditory stream. In addition, the waveform locking found at occipital electrodes increases over the stimulus duration for visual and congruent audiovisual stimuli. We therefore argue that these effects reflect audiovisual interaction, and propose that spectrotemporal and waveform locking reflect different mechanisms involved in the processing of dynamic audiovisual stimuli.
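As a rough illustration of the two locking measures, the sketch below computes waveform locking as a lagged stimulus-EEG correlation and spectrotemporal locking as coherence between alpha-band power and the stimulus time course. The signals, sampling rate, filter settings, and band limits are assumptions for the demonstration, not the study's parameters.

```python
# Illustration of waveform locking vs. spectral locking on simulated data.
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 250                                   # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)               # one 60 s simulated trial

# Slow, continuously varying stimulus drive; EEG = locked part + noise.
stimulus = signal.sosfilt(
    signal.butter(4, 3, "low", fs=fs, output="sos"), rng.normal(size=t.size))
eeg = 0.5 * stimulus + rng.normal(size=t.size)

# Waveform locking: stimulus-EEG correlation over a range of lags
# (circular shift via np.roll, acceptable for a sketch).
lags = np.arange(-int(0.3 * fs), int(0.3 * fs) + 1)
xcorr = np.array([np.corrcoef(stimulus, np.roll(eeg, -lag))[0, 1]
                  for lag in lags])
best = lags[np.argmax(np.abs(xcorr))] / fs
print(f"peak waveform locking at lag {best * 1000:.0f} ms")

# Spectrotemporal locking: coherence between alpha-band power and stimulus.
sos = signal.butter(4, [8, 12], "bandpass", fs=fs, output="sos")
alpha_power = np.abs(signal.hilbert(signal.sosfilt(sos, eeg))) ** 2
f, coh = signal.coherence(stimulus, alpha_power, fs=fs, nperseg=fs * 4)
print(f"mean coherence below 5 Hz: {coh[f < 5].mean():.2f}")
```

The key conceptual difference this captures is that waveform locking tracks the raw signal's time course, while spectral locking tracks fluctuations of band-limited power, so the two can dissociate.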


Subjects
Auditory Perception/physiology; Brain/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Alpha Rhythm; Beta Rhythm; Electroencephalography; Evoked Potentials, Auditory; Evoked Potentials, Visual; Female; Humans; Male; Neuropsychological Tests; Photic Stimulation; Sound Spectrography; Task Performance and Analysis; Time Factors; Young Adult
4.
PLoS One; 9(1): e86325, 2014.
Article in English | MEDLINE | ID: mdl-24466026

ABSTRACT

It has been proposed that internal simulation of the talking face of visually known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement-sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 controls, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed, in the MRI scanner, by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement-sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between controls and prosopagnosics. This was expected, because previous findings have shown that both groups use the face-movement-sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech, and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
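A simple way to picture a condition-dependent connectivity contrast like the one reported here is to correlate two ROI time series separately per condition and compare the Fisher-transformed correlations. Note this substitutes a plain split-by-condition correlation for a full psychophysiological-interaction model, and the ROI labels and time series below are simulated, not the study's data.

```python
# Sketch: condition-dependent coupling between two simulated ROI signals.
import numpy as np

rng = np.random.default_rng(3)
n_vols = 200                                # hypothetical fMRI volumes
condition = rng.integers(0, 2, size=n_vols).astype(bool)  # face-learned?

# Hypothetical ROI time series: pSTS drives aSTS more in one condition.
psts = rng.normal(size=n_vols)
asts = 0.2 * psts + rng.normal(size=n_vols)
asts[condition] += 0.6 * psts[condition]    # stronger coupling when familiar

def roi_corr(mask):
    """Correlation between the two ROI time series within one condition."""
    return np.corrcoef(psts[mask], asts[mask])[0, 1]

r_face, r_ctrl = roi_corr(condition), roi_corr(~condition)
# Fisher z-transform the difference for a rough effect size.
z_diff = np.arctanh(r_face) - np.arctanh(r_ctrl)
print(f"r(face-learned) = {r_face:.2f}, r(control) = {r_ctrl:.2f}, "
      f"z difference = {z_diff:.2f}")
```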


Subjects
Face/physiology; Prosopagnosia/physiopathology; Speech Intelligibility; Speech Perception; Acoustic Stimulation; Adult; Comprehension; Cues; Female; Functional Neuroimaging; Humans; Magnetic Resonance Imaging; Male; Movement; Temporal Lobe/physiopathology; Visual Perception