Results 1 - 10 of 10
1.
J Speech Lang Hear Res ; 66(9): 3382-3398, 2023 09 13.
Article in English | MEDLINE | ID: mdl-37647655

ABSTRACT

PURPOSE: Previous studies have shown that individuals who stutter exhibit abnormal speech perception, in addition to disfluent production, compared with their nonstuttering peers. This study investigated whether adult Chinese-speaking stutterers can still use knowledge of the statistical regularities embedded in their native language to recognize spoken words and, if so, how much acoustic information is needed to trigger this knowledge. METHOD: Seventeen stutterers and 20 typical, nonstuttering controls participated in a gating experiment. All participants listened to monosyllabic words consisting of syllables and lexical tones, segmented into eight successive gates. The words differed in syllable token frequency and syllable-tone co-occurrence probability, in line with a corpus of spoken Chinese. Correct syllable-only, correct tone-only, correct syllable-tone word, and correct syllable-incorrect tone responses were compared between the two groups using mixed-effects models. RESULTS: Stutterers were less accurate overall than controls, producing fewer correct syllables, tones, and syllable-tone combinations as words. However, stutterers showed consistent and reliable perceptual patterns triggered by the statistical properties of speech: like the nonstuttering controls, they responded more accurately to high-frequency syllables and high-probability tones and made similar tone errors. CONCLUSIONS: Stutterers' atypical speech perception is not due to a lack of statistical learning. Stutterers were able to perceive spoken words with phonological tones based on statistical regularities embedded in their native speech. This finding echoes previous production studies of stuttering and lends some support to a link between perception and production. Implications for the pathology, diagnosis, and treatment of stuttering are discussed.


Subjects
Stuttering, Adult, Humans, Language, Linguistics, Speech, Acoustics
2.
J Speech Lang Hear Res ; 66(7): 2461-2477, 2023 07 12.
Article in English | MEDLINE | ID: mdl-37267445

ABSTRACT

PURPOSE: Previous studies have shown that individuals with congenital amusia exhibit deficient pitch processing across music and language domains. This study investigated whether adult Chinese-speaking listeners with amusia could still learn Thai lexical tones from the statistical distribution of stimulus frequencies, that is, via distributional learning, despite their degraded lexical tone perception. METHOD: Following a pretest-training-posttest design, 21 amusics and 23 typical, musically intact listeners were assigned to bimodal and unimodal distribution conditions. Listeners were asked to discriminate minimal pairs of the Thai mid-level tone and falling tone, superimposed on variable base syllables and uttered by different speakers. Perceptual accuracy for each test session, and improvement from pretest to posttest, were analyzed between the two groups using generalized mixed-effects models. RESULTS: When discriminating Thai lexical tones, amusics were less accurate than typical listeners. Nonetheless, as with the control listeners, perceptual gains from pretest to posttest were observed in bimodally but not unimodally trained amusics, on both trained and untrained test words. CONCLUSIONS: Amusics are able to learn lexical tones in a second or foreign language. This extends previous research by showing that amusics' distributional learning of linguistic pitch remains largely preserved despite their degraded pitch processing. The manifestations of amusia in speech are therefore unlikely to result from an abnormal statistical learning mechanism. The study also offers a heuristic approach for future work applying this paradigm to treatment aimed at mitigating the pitch-processing disorder in amusia.


Subjects
Auditory Perceptual Disorders, Deafness, Music, Speech Perception, Adult, Humans, Pitch Perception, Language, Auditory Perceptual Disorders/diagnosis, Acoustic Stimulation
3.
J Acoust Soc Am ; 153(5): 3117, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37232583

ABSTRACT

Congenital amusia is an innate and lifelong deficit of music processing. This study investigated whether adult listeners with amusia could still learn pitch-related musical chords from the statistical distribution of stimulus frequencies, i.e., via distributional learning. Following a pretest-training-posttest design, 18 amusics and 19 typical, musically intact listeners were assigned to bimodal and unimodal conditions that differed in the distribution of the stimuli. Participants' task was to discriminate chord minimal pairs, which were transposed to a novel microtonal scale. Accuracy rates for each test session were collected and compared between the two groups using generalized mixed-effects models. Results showed that amusics were less accurate than typical listeners in all comparisons, corroborating previous findings. Importantly, amusics, like typical listeners, demonstrated perceptual gains from pretest to posttest in the bimodal condition (but not the unimodal condition). The findings reveal that amusics' distributional learning of music remains largely preserved despite their deficient music processing. Implications of the results for statistical learning and for intervention programs to mitigate amusia are discussed.


Subjects
Auditory Perceptual Disorders, Deafness, Hearing Loss, Music, Adult, Humans, Pitch Perception, Acoustic Stimulation, Learning, Auditory Perceptual Disorders/diagnosis
4.
Cogn Res Princ Implic ; 7(1): 89, 2022 10 04.
Article in English | MEDLINE | ID: mdl-36194295

ABSTRACT

Face masks affect the transmission of speech and obscure facial cues. Here, we examine how this reduction in acoustic and facial information affects a listener's understanding of speech prosody. English sentence pairs that differed in their intonational (statement/question) and emotional (happy/sad) prosody were created. These pairs were recorded by a masked and an unmasked speaker and manipulated to contain audio or not, resulting in a continuum from typical unmasked speech with audio (easiest) to masked speech without audio (hardest). English listeners (N = 129) were tested on their discrimination of these statement/question and happy/sad pairs. We also collected six individual difference measures previously reported to affect various linguistic processes: Autism Spectrum Quotient, musical background, phonological short-term memory (digit span, 2-back), and congruence task (flanker, Simon) behavior. The results indicated that masked statement/question and happy/sad prosodies were harder to discriminate than unmasked prosodies. Masks can therefore make it more difficult to understand a speaker's intended intonation or emotion. Importantly, listeners differed considerably in their ability to understand prosody. When wearing a mask, speakers should try to speak more clearly and loudly, if possible, and make their intentions and emotions explicit to the listener.


Subjects
Speech Perception, Speech, Emotions, Individuality, Masks
5.
J Acoust Soc Am ; 151(2): 992, 2022 02.
Article in English | MEDLINE | ID: mdl-35232077

ABSTRACT

Speech contrasts are signaled by multiple acoustic dimensions, but these dimensions are not equally diagnostic. Moreover, the relative diagnosticity, or weight, of acoustic dimensions can shift across communicative contexts in both speech perception and speech production. However, the literature remains unclear on whether, and if so how, talkers adjust their speech to emphasize different acoustic dimensions as communicative demands change. Here, we examine the interplay of flexible cue weights in speech production and perception for amplitude and duration, secondary non-spectral acoustic dimensions of phonated Mandarin Chinese lexical tone, across natural speech and whispering, which eliminates the primary acoustic dimension, the fundamental frequency contour. Phonated and whispered Mandarin productions from native talkers revealed enhancement of both duration and amplitude cues in whispered compared with phonated speech. When nonspeech amplitude-modulated noises modeled these patterns of enhancement, identification of the noises as Mandarin lexical tone categories was more accurate than identification of noises modeling the amplitude and duration cues of phonated speech. Thus, speakers exaggerate secondary cues in whispered speech, and listeners make use of this information. The enhancement, however, is not symmetric across the four Mandarin lexical tones, indicating possible constraints on its realization.


Subjects
Speech Perception, Speech, China, Cues, Phonetics, Pitch Perception
6.
J Speech Lang Hear Res ; 65(1): 53-69, 2022 01 12.
Article in English | MEDLINE | ID: mdl-34860571

ABSTRACT

PURPOSE: Individuals with congenital amusia exhibit degraded speech perception. This study examined whether adult Mandarin Chinese listeners with amusia were still able to extract the statistical regularities of Mandarin speech sounds, despite their degraded speech perception. METHOD: Using the gating paradigm with monosyllabic syllable-tone words, we tested 19 Mandarin-speaking amusics and 19 musically intact controls. Listeners heard increasingly longer fragments of the acoustic signal across eight duration-blocked gates. The stimuli varied in syllable token frequency and syllable-tone co-occurrence probability. Correct syllable-tone word, correct syllable-only, correct tone-only, and correct syllable-incorrect tone responses were compared between the two groups using mixed-effects models. RESULTS: Amusics were less accurate than controls in correct word, correct syllable-only, and correct tone-only responses. Amusics, however, showed consistent patterns of top-down processing: like the control listeners, they responded more accurately to high-frequency syllables and high-probability tones and made similar tone errors. CONCLUSIONS: Amusics are able to learn syllable and tone statistical regularities from the language input. This extends previous work by showing that amusics can track phonological segment and pitch cues despite their degraded speech perception. The observed speech deficits in amusics are therefore not due to an abnormal statistical learning mechanism. These results support rehabilitation programs aimed at improving amusics' sensitivity to pitch.


Subjects
Auditory Perceptual Disorders, Speech Perception, Acoustic Stimulation, Adult, Humans, Language, Phonetics, Pitch Perception, Speech Perception/physiology
7.
JASA Express Lett ; 1(4): 045202, 2021 04.
Article in English | MEDLINE | ID: mdl-36154204

ABSTRACT

This study examined how phonetic categorization in a second language (L2) is jointly affected by perceptual abilities and lexical knowledge. Adult L1 Mandarin Chinese listeners and L1 English-L2 Mandarin learners performed a phonetic categorization task. The stimuli varied the F0 contour along a continuum, resulting in four different tonal word/nonword endpoint combinations. Both L1 and L2 listeners categorized more ambiguous tokens as words than as nonwords, thus demonstrating a lexical bias in their behavior, i.e., the Ganong effect. Non-phonetic, linguistic information can thus modify L2 phonetic categorization of lexical tones. This effect, however, can be constrained by the listener's pitch perception abilities.


Subjects
Speech Perception, Speech, Adult, Humans, Language, Phonetics, Pitch Perception
8.
Front Psychol ; 11: 214, 2020.
Article in English | MEDLINE | ID: mdl-32161560

ABSTRACT

Spoken word recognition involves a perceptual tradeoff between reliance on the incoming acoustic signal and knowledge about likely sound categories and their co-occurrences as words. This study examined how adult second language (L2) learners navigate between acoustic-based and knowledge-based spoken word recognition when listening to highly variable, multi-talker truncated speech, and whether this perceptual tradeoff changes as L2 listeners gradually become more proficient after multiple months of structured classroom learning. First language (L1) Mandarin Chinese listeners and L1 English-L2 Mandarin adult listeners took part in a gating experiment. The L2 listeners were tested twice: once at the start of their intermediate/advanced L2 language class and again 2 months later. L1 listeners were tested only once. Participants were asked to identify syllable-tone words that varied in syllable token frequency (high/low according to a spoken word corpus) and syllable-conditioned tonal probability (most/least probable in speech given the syllable). The stimuli were recorded by 16 different talkers and presented at eight gates, ranging from onset only (gate 1) through onset plus 40-ms increments (gates 2 through 7) to the full word (gate 8). Mixed-effects regression modeling was used to compare performance with our previous study, which used single-talker stimuli (Wiener et al., 2019). The results indicated that multi-talker speech caused both L1 and L2 listeners to rely more heavily on knowledge-based processing of tone. L1 listeners were able to draw on distributional knowledge of syllable-tone probabilities in early gates and switch to predominantly acoustic-based processing when more of the signal was available. In contrast, L2 listeners, with their limited experience with talker normalization, were less able to transition effectively from probability-based to acoustic-based processing. Moreover, for the L2 listeners, reliance on such distributional information appeared to be conditioned by the nature of the acoustic signal: single-talker speech did not produce the same pattern of probability-based tone processing, suggesting that knowledge-based processing of L2 speech may only occur under certain acoustic conditions, such as multi-talker speech.

9.
Lang Speech ; 61(4): 632-656, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29560782

ABSTRACT

This study examines the perceptual trade-off between knowledge of a language's statistical regularities and reliance on the acoustic signal during L2 spoken word recognition. We test how early learners track and make use of segmental and suprasegmental cues, and their relative frequencies, during non-native word recognition. English learners of Mandarin were taught an artificial tonal language in which a tone's informativeness for word identification varied according to neighborhood density. The stimuli mimicked Mandarin's uneven distribution of syllable+tone combinations by varying syllable frequency and the probability of particular tones co-occurring with a particular syllable. Use of statistical regularities was measured by four-alternative forced-choice judgments and by eye fixations to target and competitor symbols. Half of the participants were trained on one speaker (low speaker variability), while the other half were trained on four speakers. After four days of learning, the results confirmed that tones are processed according to their informativeness. Eye movements to the newly learned symbols demonstrated that L2 learners use tonal probabilities at an early stage of word recognition, regardless of speaker variability. The amount of variability in the signal, however, influenced the time course of recovery from incorrect anticipatory looks: participants exposed to low speaker variability recovered from incorrect probability-based predictions of tone more rapidly than participants exposed to greater variability. These results motivate two conclusions: early L2 learners track the distribution of segmental and suprasegmental co-occurrences and make predictions accordingly during spoken word recognition; and when the acoustic input is more variable because of multi-speaker input, listeners rely more on their knowledge of tone-syllable co-occurrence frequency distributions and less on the incoming acoustic signal.


Subjects
Language, Learning, Multilingualism, Phonetics, Speech Perception, China, Cues, Female, Eye Fixation, Humans, Male, Recognition (Psychology), Young Adult
10.
Lang Speech ; 59(Pt 1): 59-82, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27089806

ABSTRACT

Previous studies have shown that when speakers of European languages are asked to turn nonwords into words by altering either a vowel or a consonant, they tend to treat vowels as more mutable than consonants. These results inspired the universal vowel mutability hypothesis: listeners learn to cope with vowel variability because vowel information constrains lexical selection less tightly, and allows more potential candidates, than consonant information does. The present study extends the word reconstruction paradigm to Mandarin Chinese, a Sino-Tibetan language that makes use of lexically contrastive tone. Native speakers listened to word-like nonwords (e.g., su3) and were asked to change them into words by manipulating a single consonant (e.g., tu3), vowel (e.g., si3), or tone (e.g., su4). Additionally, items were presented in a fourth condition in which participants could change any part. Reaction times and responses were recorded. Results revealed that participants responded faster and more accurately in the free response and tonal change conditions. Unlike in previous reconstruction studies on European languages, where vowels were changed faster and more often than consonants, in Mandarin changes to both vowels and consonants were overshadowed by changes to tone, which was the preferred modification to the stimulus nonwords, while changes to vowels were the slowest and least accurate. Our findings show that the universal vowel mutability hypothesis does not hold for a tonal language, that Mandarin tonal information is lower priority than consonants and vowels, and that vowel information most tightly constrains Mandarin lexical access.


Subjects
Asian People, Multilingualism, Phonetics, Psycholinguistics, Semantics, Speech Acoustics, Adult, Decision Making, Female, Humans, Linear Models, Male, Reaction Time, Young Adult