Crossmodal adaptation in right posterior superior temporal sulcus during face-voice emotional integration.
Watson, Rebecca; Latinus, Marianne; Noguchi, Takao; Garrod, Oliver; Crabbe, Frances; Belin, Pascal.
Affiliation
  • Watson R; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands, Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom, rebecca.watson@maastrichtuni
  • Latinus M; Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom, Institut de Neurosciences de la Timone, Unité Mixte de Recherche (UMR) 7289, Centre National de la Recherche Scientifique (CNRS)-Aix-Marseille Université, F-13284 Marseille, France.
  • Noguchi T; Department of Psychology, University of Warwick, Coventry, CV4 7A, United Kingdom.
  • Garrod O; Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom.
  • Crabbe F; Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom.
  • Belin P; Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom, Institut de Neurosciences de la Timone, Unité Mixte de Recherche (UMR) 7289, Centre National de la Recherche Scientifique (CNRS)-Aix-Marseille Université, F-13284 Marseille, France, Intern
J Neurosci; 34(20): 6813-21, 2014 May 14.
Article in English | MEDLINE | ID: mdl-24828635
The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions reflects interleaved populations of unisensory neurons responding to the face or the voice, or rather multimodal neurons receiving input from both modalities, remains unclear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli presented in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion, although face information was weighted more heavily, and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, the fMRI signal in the right pSTS was reduced in response to a stimulus whose facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from the visual and auditory cortices.
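
Below is a minimal Python sketch, on synthetic data, of the logic behind the two analyses the abstract describes: a logistic regression testing whether both cues predict the happy/angry choice (with a larger face weight), and a crossmodal adaptation regressor coding the distance between the current facial emotion and the preceding vocal emotion. All variable names, simulated weights, and data here are hypothetical illustrations, not the authors' actual pipeline.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n_trials = 400

# Morph levels: 0 = fully angry, 1 = fully happy, varied independently
# in the face and the voice (as in the parametric morphing design).
face_morph = rng.uniform(0, 1, n_trials)
voice_morph = rng.uniform(0, 1, n_trials)

# Behavioral integration: both cues drive the happy/angry choice, with a
# larger (assumed) weight on the face, as the abstract reports.
logit = 4.0 * face_morph + 2.0 * voice_morph - 3.0
resp_happy = rng.random(n_trials) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([face_morph, voice_morph])
clf = LogisticRegression().fit(X, resp_happy)
print("face weight:", clf.coef_[0][0], "voice weight:", clf.coef_[0][1])

# Crossmodal adaptation regressor: distance between the current face
# emotion and the preceding voice emotion. Adaptation predicts a reduced
# response when this distance is small, i.e., a positive slope of the
# (toy) pSTS signal on the distance.
cross_dist = np.abs(face_morph[1:] - voice_morph[:-1])
bold = 0.8 * cross_dist + rng.normal(0.0, 0.3, n_trials - 1)
slope = LinearRegression().fit(cross_dist.reshape(-1, 1), bold).coef_[0]
print("crossmodal adaptation slope (expected > 0):", round(slope, 3))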
Full text: 1 Database: MEDLINE Main subject: Auditory Perception / Temporal Lobe / Visual Perception / Emotions Language: English Journal: J Neurosci Year: 2014 Document type: Article
