Visual scanning patterns of a talking face when evaluating phonetic information in a native and non-native language.
Deng, Xizi; McClay, Elise; Jastrzebski, Erin; Wang, Yue; Yeung, H Henny.
Affiliation
  • Deng X; Department of Linguistics, Simon Fraser University, Burnaby BC, Canada.
  • McClay E; Department of Linguistics, Simon Fraser University, Burnaby BC, Canada.
  • Jastrzebski E; Department of Linguistics, Simon Fraser University, Burnaby BC, Canada.
  • Wang Y; Department of Linguistics, Simon Fraser University, Burnaby BC, Canada.
  • Yeung HH; Department of Linguistics, Simon Fraser University, Burnaby BC, Canada.
PLoS One ; 19(5): e0304150, 2024.
Article in En | MEDLINE | ID: mdl-38805447
ABSTRACT
When comprehending speech, listeners can use visual cues from a talker's face to enhance auditory comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are cued primarily by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye-tracking, we studied how perceivers' visual scanning of different regions of a talking face predicted accuracy in a task targeting either segmental or prosodic information, and asked how this was influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face and judged whether that video matched the first or the second audio sentence (or whether both sentences were the same). First, increased looking to the mouth predicted correct responses only in non-native language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when the auditory sentences differed solely in prosodic information, not when they differed in segmental information. Third, in correct trials, saccade amplitude was significantly greater in native-language trials than in non-native trials, indicating more tightly focused fixations in the latter. Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native than a native language across all analyses; notably, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.
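The three reported findings rest on standard area-of-interest (AOI) gaze measures: proportion of looking to the mouth, latency to first fixate the mouth, and mean saccade amplitude. Below is a minimal Python sketch of how such measures are commonly computed from raw gaze samples; it is an illustration only, not the authors' analysis code, and the rectangular mouth AOI, fixed sampling rate, and all function names are assumptions.

```python
import numpy as np

# Illustrative AOI-based gaze measures (hypothetical helpers, not the
# authors' pipeline). gaze_xy: (n_samples, 2) array of screen coordinates.

def mouth_looking_proportion(gaze_xy, mouth_aoi):
    """Proportion of gaze samples inside a rectangular mouth AOI
    given as (x0, y0, x1, y1) in screen coordinates."""
    x0, y0, x1, y1 = mouth_aoi
    x, y = gaze_xy[:, 0], gaze_xy[:, 1]
    inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    return inside.mean()

def latency_to_mouth(gaze_xy, mouth_aoi, sample_rate_hz):
    """Time in seconds of the first sample inside the mouth AOI,
    or None if the mouth was never fixated on this trial."""
    x0, y0, x1, y1 = mouth_aoi
    x, y = gaze_xy[:, 0], gaze_xy[:, 1]
    hits = np.flatnonzero((x >= x0) & (x <= x1) & (y >= y0) & (y <= y1))
    return hits[0] / sample_rate_hz if hits.size else None

def mean_saccade_amplitude(fixation_centers):
    """Mean Euclidean distance between consecutive fixation centers;
    smaller values indicate more tightly clustered fixations."""
    steps = np.diff(np.asarray(fixation_centers, dtype=float), axis=0)
    return np.linalg.norm(steps, axis=1).mean()
```

On this reading, the abstract's third result (greater saccade amplitude in native trials) would correspond to larger consecutive fixation-to-fixation distances, consistent with broader scanning of the face rather than concentrated mouth-looking.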
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Speech Perception / Phonetics / Language Limits: Adult / Female / Humans / Male Language: En Journal: PLoS One Journal subject: SCIENCE / MEDICINE Year: 2024 Document type: Article Affiliation country: Canada