Visual-Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification.
J Speech Lang Hear Res; 65(11): 4096-4111, 2022 Nov 17.
Article | Language: English | MEDLINE | ID: mdl-36279876
PURPOSE: Previous studies have demonstrated that tone identification is facilitated when auditory tones are integrated with visual information depicting their pitch contours (hereafter, the visual effect). This study investigates this visual effect under combined visual-auditory integration with high- and low-variability speech and examines whether prior tonal-language learning experience shapes the strength of the effect.

METHOD: Thirty Mandarin-naïve listeners, 25 second-language learners of Mandarin, and 30 native Mandarin listeners completed a tone identification task in which they judged whether an auditory tone rose or fell in pitch. Moving arrows depicted the pitch contours of the auditory tones. A priming paradigm was used in which the target auditory tones were primed by four multimodal conditions: no stimuli (A-V-), visual-only stimuli (A-V+), auditory-only stimuli (A+V-), and both auditory and visual stimuli (A+V+).

RESULTS: For Mandarin-naïve listeners, the visual effect on accuracy under cross-modal integration (A+V+ vs. A+V-) was superior to the unimodal approach (A-V+ vs. A-V-), as evidenced by a higher d′ for A+V+ than for A+V-. However, this was not the case for response time. Additionally, the visual effect on accuracy and response time under the unimodal approach occurred only for high-variability speech, not for low-variability speech. Across the three listener groups, the less tonal-language learning experience a listener had, the stronger the visual effect.

CONCLUSION: Our study revealed both the advantages and disadvantages of the visual effect and the joint contribution of visual-auditory integration and high-variability speech to facilitating tone perception via the process of speech symbolization and categorization.

SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21357729
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Speech Perception / Language
Study type: Diagnostic_studies
Limits: Humans
Country/Region as subject: Asia
Language: English
Journal: J Speech Lang Hear Res
Journal subject: Audiology / Speech-Language Pathology
Year: 2022
Document type: Article
Affiliation country: China