The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception.
Treille, Avril; Vilain, Coriandre; Sato, Marc.
Affiliation
  • Treille A; CNRS, Département Parole et Cognition, Gipsa-Lab, UMR 5216, Grenoble Université, Grenoble, France.
  • Vilain C; CNRS, Département Parole et Cognition, Gipsa-Lab, UMR 5216, Grenoble Université, Grenoble, France.
  • Sato M; CNRS, Département Parole et Cognition, Gipsa-Lab, UMR 5216, Grenoble Université, Grenoble, France.
Front Psychol; 5: 420, 2014.
Article in English | MEDLINE | ID: mdl-24860533
Recent magnetoencephalographic and electroencephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt through manual tactile contact with the speaker's face. Given the temporal precedence of the haptic and visual signals over the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were compared here during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory-only speech perception. Importantly, the observed latency and amplitude reductions did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information about the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions about sensory predictability in bimodal speech integration must be drawn with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.
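
For readers unfamiliar with this kind of analysis, the core measurement is the amplitude and latency of the N1/P2 auditory evoked responses per condition. Below is a minimal, purely illustrative sketch in Python using the open-source MNE library; the file name, event codes, channel, filter settings and peak windows are assumptions chosen for the example, not details of the authors' actual pipeline.

    import numpy as np
    import mne

    # All file names, event codes, channel choices and analysis windows
    # below are illustrative assumptions, not taken from the study.
    raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
    raw.filter(1.0, 30.0)  # typical band-pass for ERP analysis

    events = mne.find_events(raw)
    event_id = {"audio_only": 1, "audio_visual": 2, "audio_haptic": 3}

    # Epoch around stimulus onset with a pre-stimulus baseline.
    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                        baseline=(None, 0), preload=True)

    def peak_in_window(evoked, tmin, tmax, sign):
        """Latency and amplitude of the largest signed deflection in a window."""
        cropped = evoked.copy().crop(tmin, tmax)
        trace = sign * cropped.data[0]
        idx = int(np.argmax(trace))
        return cropped.times[idx], sign * trace[idx]

    for cond in event_id:
        # Average all epochs of one condition at a fronto-central site.
        evoked = epochs[cond].average().pick("Cz")
        n1_lat, n1_amp = peak_in_window(evoked, 0.08, 0.16, sign=-1)  # N1: negative peak
        p2_lat, p2_amp = peak_in_window(evoked, 0.15, 0.25, sign=+1)  # P2: positive peak
        print(f"{cond}: N1 {n1_amp * 1e6:.1f} uV at {n1_lat * 1e3:.0f} ms, "
              f"P2 {p2_amp * 1e6:.1f} uV at {p2_lat * 1e3:.0f} ms")

Comparing the printed N1/P2 latencies and amplitudes across the three conditions mirrors, in miniature, the bimodal-versus-unimodal contrast the abstract describes.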
Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Language: En | Journal: Front Psychol | Year: 2014 | Document type: Article | Affiliation country: France | Country of publication: Switzerland