Results 1-3 of 3
1.
J Cogn Neurosci; 27(2): 280-91, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25170793

ABSTRACT

The human voice is the primary carrier of speech but also a fingerprint for person identity. Previous neuroimaging studies have revealed that speech and identity recognition are accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with different contributions of its posterior and anterior parts. However, the time point at which vocal and speech processing diverge is currently unknown. Moreover, the exact role of the right STS during voice processing remains unclear, because its behavioral relevance has not yet been established. Here, we used the high temporal resolution of magnetoencephalography and a speech task control to pinpoint transient behavioral correlates: we found, at 200 msec after stimulus onset, that activity in the right anterior STS predicted behavioral voice-recognition performance. At the same time point, the posterior right STS showed increased activity during voice identity recognition relative to speech recognition, whereas the left mid STS showed the reverse pattern. In contrast to the highly speech-sensitive left STS, the current results highlight the right STS as a key area for voice identity recognition and show that its anatomical-functional division emerges around 200 msec after stimulus onset. We suggest that this time point marks the speech-independent processing of vocal sounds in the posterior STS and their successful mapping to vocal identities in the anterior STS.
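
The core brain-behavior result is an across-participant correlation between MEG source activity at ~200 msec and voice-recognition accuracy. A minimal sketch of that kind of test follows; it is not the authors' actual pipeline, and all variable names and data are hypothetical placeholders:

```python
# Minimal sketch: across-participant correlation between MEG source
# amplitude at ~200 ms and behavioral voice-recognition accuracy.
# All arrays are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

n_subjects = 20                          # assumed sample size
rng = np.random.default_rng(0)
# mean source amplitude in right anterior STS, 180-220 ms window, per subject
rats_amp = rng.normal(size=n_subjects)
# proportion of voice identities recognized correctly, per subject
accuracy = rng.uniform(0.5, 1.0, size=n_subjects)

r, p = stats.pearsonr(rats_amp, accuracy)
print(f"brain-behavior correlation: r = {r:.2f}, p = {p:.3f}")
```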


Subjects
Cerebral Cortex/physiology; Pattern Recognition, Physiological/physiology; Speech Perception/physiology; Voice; Acoustic Stimulation; Brain Mapping; Female; Functional Laterality; Humans; Magnetoencephalography; Male; Signal Processing, Computer-Assisted; Young Adult
2.
PLoS One; 9(1): e86325, 2014.
Article in English | MEDLINE | ID: mdl-24466026

ABSTRACT

It has been proposed that internal simulation of the talking face of visually known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement-sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 controls, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement-sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between controls and prosopagnosics, as expected given previous findings that both groups use the face-movement-sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
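
The connectivity analysis described here is in the spirit of a psychophysiological interaction (PPI): testing whether coupling between a seed region (face-movement-sensitive posterior STS) and a target region (anterior STS) changes with a psychological condition (speaker's face known vs. unknown). A minimal sketch of such an interaction test, with entirely synthetic stand-in data rather than the study's fMRI time series:

```python
# Minimal PPI-style sketch: does posterior-STS -> anterior-STS coupling
# increase when the speaker's face is familiar? Synthetic data throughout.
import numpy as np
import statsmodels.api as sm

n_scans = 300
rng = np.random.default_rng(1)
psts = rng.normal(size=n_scans)                 # seed: posterior STS signal
face_known = np.tile([0, 1], n_scans // 2)      # 1 = voice-face trained speaker
# target region: coupled to the seed only in the face-known condition
asts = 0.3 * psts * face_known + rng.normal(size=n_scans)

# design: seed main effect, condition main effect, and their interaction
X = sm.add_constant(np.column_stack([psts, face_known, psts * face_known]))
fit = sm.OLS(asts, X).fit()
print(fit.summary().tables[1])                  # interaction term carries the PPI
```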


Subjects
Face/physiology; Prosopagnosia/physiopathology; Speech Intelligibility; Speech Perception; Acoustic Stimulation; Adult; Comprehension; Cues; Female; Functional Neuroimaging; Humans; Magnetic Resonance Imaging; Male; Movement; Temporal Lobe/physiopathology; Visual Perception
3.
Exp Brain Res; 198(2-3): 137-51, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19526359

ABSTRACT

Disparate sensory streams originating from a common underlying event share similar dynamics, and this plays an important part in multisensory integration. Here we investigate audiovisual binding by presenting continuously changing, temporally congruent and incongruent stimuli. Recorded EEG signals are used to quantify spectrotemporal and waveform locking of neural activity to stimulus dynamics. Spectrotemporal analysis reveals locking to visual stimulus dynamics in both a broad alpha band and the beta band; the properties of these effects suggest they are a correlate of bottom-up processing in the visual system. Waveform locking reveals two cortically distinct processes that lock to visual stimulus dynamics with differing topographies and time lags relative to the stimuli. Most importantly, both are modulated in strength by the congruency of an accompanying auditory stream. In addition, the waveform locking found at occipital electrodes increases over stimulus duration for visual and congruent audiovisual stimuli. Hence we argue that these effects reflect audiovisual interaction, and we propose that spectrotemporal and waveform locking reflect different mechanisms involved in the processing of dynamic audiovisual stimuli.
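
Spectrotemporal locking of this kind is commonly quantified with stimulus-EEG coherence per frequency band, and waveform locking with a lagged cross-correlation between the stimulus time course and the EEG waveform. A minimal sketch under those assumptions, using synthetic stand-in signals rather than the paper's recordings:

```python
# Minimal sketch: quantify "locking" of one EEG channel to a continuously
# changing visual stimulus. Spectrotemporal locking via magnitude-squared
# coherence; waveform locking via lagged cross-correlation. Synthetic data.
import numpy as np
from scipy import signal

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
stim = np.sin(2 * np.pi * 0.7 * t)         # stimulus dynamics time course
# EEG follows the stimulus at a 100 ms lag, buried in noise
eeg = 0.5 * np.roll(stim, int(0.1 * fs)) + rng.normal(size=t.size)

# spectrotemporal locking: coherence averaged within canonical bands
f, coh = signal.coherence(stim, eeg, fs=fs, nperseg=fs * 2)
alpha = coh[(f >= 8) & (f <= 12)].mean()
beta = coh[(f >= 13) & (f <= 30)].mean()

# waveform locking: cross-correlation peak gives the neural time lag
xcorr = signal.correlate(eeg, stim, mode="full")
lag_s = (np.argmax(xcorr) - (t.size - 1)) / fs
print(f"alpha coh {alpha:.2f}, beta coh {beta:.2f}, lag {lag_s * 1000:.0f} ms")
```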


Subjects
Auditory Perception/physiology; Brain/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Alpha Rhythm; Beta Rhythm; Electroencephalography; Evoked Potentials, Auditory; Evoked Potentials, Visual; Female; Humans; Male; Neuropsychological Tests; Photic Stimulation; Sound Spectrography; Task Performance and Analysis; Time Factors; Young Adult