ABSTRACT
BACKGROUND: Recent models of speech production suggest a link between speech production and perception. Persons who stutter are known to have deficits in sensorimotor timing and to exhibit auditory processing problems. Most earlier studies have focused on assessing temporal ordering in adults who stutter (AWS), but limited attempts have been made to document temporal resolution abilities in AWS. METHODS: A group of 16 AWS and 16 age- and gender-matched adults who do not stutter (AWNS) was recruited for the study. Temporal resolution abilities were assessed using the Gap Detection Test and the temporal modulation transfer function (TMTF). RESULTS: The results revealed significant differences in the TMTF between AWS and AWNS, but no differences were found in gap detection thresholds. CONCLUSIONS: The results suggest that sensory representations of temporal modulations are compromised in AWS, which may affect the programming of rhythmic movements during speech planning.
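TMTF paradigms of the kind described above typically measure detection thresholds for sinusoidally amplitude-modulated noise as a function of modulation rate. A minimal sketch of such a stimulus follows; the function name, sampling rate, and seeding are illustrative, not taken from the study:

```python
import numpy as np

def am_noise(dur_s, fm_hz, depth, fs=44100, seed=1):
    # Gaussian noise carrier with sinusoidal amplitude modulation;
    # depth is the modulation index m in [0, 1] (m = 1 is full modulation)
    t = np.arange(int(dur_s * fs)) / fs
    carrier = np.random.default_rng(seed).standard_normal(t.size)
    modulator = 1.0 + depth * np.sin(2.0 * np.pi * fm_hz * t)
    return carrier * modulator
```

A threshold at each modulation rate is then the smallest depth a listener can distinguish from unmodulated noise; plotting threshold against rate gives the TMTF.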
Subjects
Stuttering , Adult , Auditory Perception , Humans , Movement , Speech , Stuttering/diagnosis

ABSTRACT
This mini review is aimed at a clinician-scientist seeking to understand the role of oscillations in neural processing and their functional relevance in speech and music perception. We present an overview of neural oscillations, methods used to study them, and their functional relevance with respect to music processing, aging, hearing loss, and disorders affecting speech and language. We first review the oscillatory frequency bands and their associations with speech and music processing. Next we describe commonly used metrics for quantifying neural oscillations, briefly touching upon the still-debated mechanisms underpinning oscillatory alignment. Following this, we highlight key findings from research on neural oscillations in speech and music perception, as well as contributions of this work to our understanding of disordered perception in clinical populations. Finally, we conclude with a look toward the future of oscillatory research in speech and music perception, including promising methods and potential avenues for future work. We note that the intention of this mini review is not to systematically review all literature on cortical tracking of speech and music. Rather, we seek to provide the clinician-scientist with foundational information that can be used to evaluate and design research studies targeting the functional role of oscillations in speech and music processing in typical and clinical populations.
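Among the commonly used metrics for quantifying neural oscillations mentioned above is the phase-locking value (PLV), which indexes the consistency of the phase relation between two signals. A minimal illustrative implementation (not drawn from the review itself):

```python
import numpy as np

def phase_locking_value(phases_a, phases_b):
    # PLV: magnitude of the mean unit phasor of the phase difference.
    # 1 = perfectly consistent phase relation across samples, 0 = uniform (no locking)
    diff = np.asarray(phases_a) - np.asarray(phases_b)
    return np.abs(np.mean(np.exp(1j * diff)))
```

In practice the instantaneous phases would first be extracted per frequency band (e.g., via a band-pass filter and Hilbert transform), a step omitted here for brevity.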
RESUMO
Purpose Listeners shift their listening strategies between lower level acoustic information and higher level semantic information to maximize speech intelligibility in challenging listening conditions. Although increasing task demands via acoustic degradation modulates lexical-semantic processing, the neural mechanisms underlying different listening strategies are unclear. The current study examined the extent to which encoding of lower level acoustic cues is modulated by task demand, and its associations with lexical-semantic processes. Method Electroencephalography was acquired while participants listened to sentences in the presence of four-talker babble that contained either higher or lower probability final words. Task difficulty was modulated by the time available to process responses. Cortical tracking of speech (a neural correlate of acoustic temporal envelope processing) was estimated using temporal response functions. Results Task difficulty did not affect cortical tracking of the temporal envelope of speech under challenging listening conditions. Neural indices of lexical-semantic processing (N400 amplitudes) were larger with increased task difficulty. No correlations were observed between cortical tracking of the temporal envelope of speech and lexical-semantic processes, even after controlling for the effect of individualized signal-to-noise ratios. Conclusions Cortical tracking of the temporal envelope of speech and semantic processing are differentially influenced by task difficulty. While increased task demands modulated higher level semantic processing, cortical tracking of the temporal envelope of speech may be influenced by task difficulty primarily when demand is manipulated via the acoustic properties of the stimulus, consistent with an emerging perspective in speech perception.
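Temporal response functions of the kind described above are commonly estimated by ridge-regularized regression of the EEG onto time-lagged copies of the speech envelope. A minimal sketch under that assumption (the function name, non-negative lag convention, and regularization value are illustrative, not from the study):

```python
import numpy as np

def estimate_trf(envelope, eeg, lags, lam=1.0):
    # Design matrix: one column per (non-negative, in samples) time lag
    # of the stimulus envelope, zero-padded at the start
    n = envelope.size
    X = np.zeros((n, len(lags)))
    for j, L in enumerate(lags):
        X[L:, j] = envelope[:n - L]
    # Ridge solution: w = (X'X + lam*I)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
```

The returned weight vector is the TRF: the channel's estimated impulse response to the envelope at each lag. Real analyses cross-validate the regularization parameter per participant.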
Subjects
Semantics , Speech Perception , Acoustic Stimulation , Acoustics , Electroencephalography , Evoked Potentials , Female , Humans , Male , Speech Intelligibility

ABSTRACT
Purpose Adults who stutter (AWS) exhibit compromised phonological working memory abilities and impaired central auditory processing, especially during overt speech production tasks. However, such tasks are sensitive to the language disturbances already documented in AWS. In this study, therefore, monosyllables were used to rule out language effects, and auditory working memory ability was evaluated in AWS using the n-back task; specifically, the auditory sensory input stage of the working memory mechanism was evaluated. Method Thirty-two participants, 16 AWS and 16 adults who do not stutter (AWNS), performed behavioral auditory 1-back and 2-back tasks. Long latency responses were also recorded during no-back and 2-back conditions from 64 electrode sites. Results No significant differences were found between the groups in any of the behavioral parameters (reaction time, accuracy, false alarm rate, or d'). N1 amplitude modulation was noted in AWNS but was absent in AWS. Segmentation analysis showed a left hemisphere-oriented topographical distribution in the N2 region in AWS irrespective of condition, whereas the scalp topography in AWNS was right hemisphere-oriented with involvement of parietal channels. Timing differences between AWS and AWNS were also observed in how long each topographical distribution lasted across the analysis time window. Conclusion The results suggest altered neural pathways and hemispheric differences during auditory working memory tasks in AWS.
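The sensitivity index d' reported among the behavioral parameters is conventionally computed as z(hit rate) − z(false-alarm rate), with z the inverse standard normal CDF. A minimal sketch (any correction applied for floor/ceiling rates of 0 or 1 is not specified in the abstract and is omitted here):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # d' = z(H) - z(FA); both rates must lie strictly between 0 and 1
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

Equal hit and false-alarm rates give d' = 0 (no sensitivity); larger values indicate better discrimination of targets from non-targets in the n-back task.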
Subjects
Auditory Diseases, Central/physiopathology , Cerebral Cortex/physiopathology , Memory, Short-Term/physiology , Stuttering/physiopathology , Adolescent , Adult , Case-Control Studies , Cognition , Electroencephalography , Female , Humans , Male , Phonetics , Reaction Time , Speech , Young Adult

ABSTRACT
BACKGROUND AND OBJECTIVES: The influence of the visual stimulus on the auditory component in the perception of auditory-visual (AV) consonant-vowel syllables has been demonstrated in different languages. Inherent properties of the unimodal stimuli are known to modulate AV integration. The present study investigated how the amount of McGurk effect (an outcome of AV integration) varies across three different consonant combinations in the Kannada language. The influence of unimodal syllable identification on the amount of McGurk effect was also examined. SUBJECTS AND METHODS: Twenty-eight individuals performed an AV identification task with ba/ga, pa/ka, and ma/ṇa consonant combinations in AV congruent, AV incongruent (McGurk combination), audio alone, and visual alone conditions. Cluster analysis was performed on the identification scores for the incongruent stimuli to classify the individuals into two groups: one with high and the other with low McGurk scores. Differences in the audio alone and visual alone scores between these groups were compared. RESULTS: The results showed significantly higher McGurk scores for ma/ṇa compared to the ba/ga and pa/ka combinations in both the high and low McGurk score groups. No significant difference was noted between the ba/ga and pa/ka combinations in either group. Identification of /ṇa/ presented in the visual alone condition correlated negatively with higher McGurk scores. CONCLUSIONS: The results suggest that the final percept following AV integration is not exclusively explained by unimodal identification of the syllables; other factors may also contribute to the final percept.
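The cluster split into high- and low-McGurk groups can be illustrated with a simple one-dimensional two-means procedure; this is a sketch of the general idea, not the authors' exact analysis, and the function name is invented for illustration:

```python
import numpy as np

def split_high_low(scores, iters=50):
    # 1-D two-means clustering: assign each score to the nearer of two
    # centroids (initialized at the min and max) and iterate.
    # Returns a boolean mask marking the "high" group.
    x = np.asarray(scores, dtype=float)
    lo, hi = x.min(), x.max()
    for _ in range(iters):
        high = np.abs(x - hi) < np.abs(x - lo)
        lo, hi = x[~high].mean(), x[high].mean()
    return high
```

Once the two groups are formed, their audio alone and visual alone identification scores can be compared, as the study describes.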