Results 1 - 20 of 52
1.
JASA Express Lett ; 4(2)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38350077

ABSTRACT

Measuring how well human listeners recognize speech under varying environmental conditions (speech intelligibility) is a challenge for theoretical, technological, and clinical approaches to speech communication. The current gold standard, human transcription, is time- and resource-intensive. Recent advances in automatic speech recognition (ASR) systems raise the possibility of automating intelligibility measurement. This study tested four state-of-the-art ASR systems with second-language speech-in-noise and found that one, Whisper, performed at or above human listener accuracy. However, the content of Whisper's responses diverged substantially from human responses, especially at lower signal-to-noise ratios, suggesting both opportunities and limitations for ASR-based speech intelligibility modeling.
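
As a rough illustration of the approach described above, the sketch below transcribes a noisy recording with Whisper and scores word matches against the known target sentence. It assumes the openai-whisper package; the file name, target sentence, and scoring rule are hypothetical stand-ins, not the study's procedure.

```python
# Sketch: ASR-based intelligibility scoring with Whisper.
# Assumes openai-whisper (pip install -U openai-whisper);
# "speech_in_noise.wav" and the target sentence are hypothetical.
import whisper


def word_accuracy(reference: str, hypothesis: str) -> float:
    """Proportion of reference words found anywhere in the hypothesis."""
    ref = reference.lower().split()
    hyp = {w.strip(".,!?") for w in hypothesis.lower().split()}
    return sum(w in hyp for w in ref) / max(len(ref), 1)


model = whisper.load_model("base")  # larger checkpoints trade speed for accuracy
result = model.transcribe("speech_in_noise.wav")  # hypothetical input file
target = "the boy ran down the path"  # hypothetical target sentence
print(f"Transcript: {result['text']}")
print(f"Word accuracy vs. target: {word_accuracy(target, result['text']):.2f}")
```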


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Noise/adverse effects , Speech Intelligibility/physiology , Speech Recognition Software , Recognition, Psychology
3.
J Acoust Soc Am ; 154(3): 1601-1613, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37698438

ABSTRACT

Recent work on perceptual learning for speech has suggested that while high-variability training typically results in generalization, low-variability exposure can sometimes be sufficient for cross-talker generalization. We tested predictions of a similarity-based account, according to which generalization depends on training-test talker similarity rather than on exposure to variability. We compared perceptual adaptation to second-language (L2) speech following single- or multiple-talker training with a round-robin design in which four L2 English talkers from four different first-language (L1) backgrounds served as both training and test talkers. After exposure to 60 L2 English sentences in one training session, cross-talker/cross-accent generalization was possible (but not guaranteed) following either multiple- or single-talker training, with variation across training-test talker pairings. Contrary to predictions of the similarity-based account, adaptation was not consistently better for identical than for mismatched training-test talker pairings, and generalization patterns were asymmetrical across training-test talker pairs. Acoustic analyses also revealed a dissociation between phonetic similarity and cross-talker/cross-accent generalization. Notably, variation in adaptation and generalization was related to variation in training-phase intelligibility. Together with prior evidence, these data suggest that perceptual learning for speech may benefit from some combination of exposure to talker variability, training-test similarity, and high training-phase intelligibility.


Subject(s)
Learning , Speech , Phonetics , Generalization, Psychological , Language
4.
Biling (Camb Engl) ; 25(1): 148-162, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35340908

ABSTRACT

Inspired by information-theoretic analyses of L1 speech and language, this study proposes that L1 and L2 speech exhibit distinct information encoding and transmission profiles in the temporal domain. Both the number and average duration of acoustic syllables (i.e., intensity peaks in the temporal envelope) were automatically measured from L1 and L2 recordings of standard texts in English, French, and Spanish. Across languages, L2 acoustic syllables were greater in number (more acoustic syllables/text) and longer in duration (fewer acoustic syllables/second). While substantial syllable reduction (fewer acoustic than orthographic syllables) was evident in both L1 and L2 speech, L2 speech generally exhibited less syllable reduction, resulting in low information density (more syllables with less information/syllable). Low L2 information density compounded low L2 speech rate, yielding a very low L2 information transmission rate (i.e., less information/second). Overall, this cross-language comparison establishes low information transmission rate as a language-general, distinguishing feature of L2 speech.
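
Acoustic syllables are defined above as intensity peaks in the temporal envelope. Below is a minimal sketch of that measurement, assuming soundfile and scipy and a hypothetical mono recording "speech.wav"; the 10 Hz envelope cutoff and peak-picking settings are illustrative assumptions, not the study's exact parameters.

```python
# Sketch: count acoustic syllables as peaks in the temporal envelope,
# then derive acoustic syllables/second.
import numpy as np
import soundfile as sf
from scipy.signal import butter, filtfilt, find_peaks

audio, fs = sf.read("speech.wav")  # hypothetical mono input file

# Amplitude envelope: rectify, then low-pass below typical syllable rates.
b, a = butter(2, 10 / (fs / 2), btype="low")  # 10 Hz cutoff (assumption)
envelope = filtfilt(b, a, np.abs(audio))

# Envelope peaks stand in for acoustic syllables.
min_gap = int(0.1 * fs)  # >= 100 ms between syllable peaks (assumption)
peaks, _ = find_peaks(envelope, distance=min_gap,
                      height=0.1 * envelope.max())  # ignore low-level ripples

duration = len(audio) / fs
print(f"Acoustic syllables: {len(peaks)}")
print(f"Rate: {len(peaks) / duration:.2f} acoustic syllables/second")
```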

5.
Ear Hear ; 43(2): 605-619, 2022.
Article in English | MEDLINE | ID: mdl-34619687

ABSTRACT

OBJECTIVES: The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to fully elucidate the importance of subcortical neural synchrony for the FFR and SIN recognition. DESIGN: Case series. Three children with auditory neuropathy (two males with neuropathy attributed to hyperbilirubinemia, one female with a rare missense mutation in the OPA1 gene) were compared to age-matched controls with normal hearing (52 for electrophysiology and 48 for speech recognition testing). Tests included standard audiological evaluations, FFRs, and sentence recognition in noise. The three children with neuropathy had a range of clinical presentations, including moderate sensorineural hearing loss, use of a cochlear implant, and rapidly progressive hearing loss. RESULTS: Children with neuropathy generally had good speech recognition in quiet but substantial difficulties in noise. These SIN difficulties were somewhat mitigated by a clear speaking style and by presenting words in a high semantic context. In the children with neuropathy, FFRs were absent for all tested stimuli. In contrast, age-matched controls had reliable FFRs. CONCLUSION: Subcortical synchrony is subject to multiple forms of disruption, all of which result in a consistent phenotype of an absent FFR and substantial difficulties recognizing SIN. These results support the hypothesis that subcortical synchrony is necessary for the FFR. Thus, in healthy listeners, the FFR may reflect subcortical neural processes important for SIN recognition.


Subject(s)
Hearing Loss, Central , Speech Perception , Female , Humans , Male , Noise , Speech , Speech Perception/physiology
6.
JASA Express Lett ; 1(3): 035201, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33791685

ABSTRACT

Recordings of Spanish and English sentences by switched-dominance bilinguals (SDBs; i.e., L2-dominant Spanish-English bilinguals) and by L1-dominant Spanish and English controls were presented to L1-dominant Spanish and English listeners, respectively. At -4 dB signal-to-noise ratio (SNR), Spanish and English productions by SDBs were equally intelligible, with both reaching L1-dominant control levels. At -8 dB SNR, SDB English intelligibility matched that of L1-dominant English controls, yet SDB Spanish intelligibility was significantly lower than that of L1-dominant Spanish controls. These results emphasize that extended (but not early) exposure is both necessary and sufficient for robust speech learning.
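
The -4 and -8 dB SNR conditions imply scaling the noise relative to the speech before mixing. Here is a minimal numpy sketch of one common way to do that, using an RMS-based definition of SNR; this is an assumption for illustration, not the study's documented mixing procedure.

```python
# Sketch: mix speech and noise at a target SNR in dB (RMS definition assumed).
import numpy as np


def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise RMS ratio equals snr_db, then mix."""
    noise = noise[: len(speech)]  # trim noise to the speech duration
    s_rms = np.sqrt(np.mean(speech**2))
    n_rms = np.sqrt(np.mean(noise**2))
    # Solve 20*log10(s_rms / (gain * n_rms)) = snr_db for the noise gain.
    gain = s_rms / (n_rms * 10 ** (snr_db / 20))
    return speech + gain * noise


# Example: the harder -8 dB SNR condition, with placeholder signals.
fs = 16000
speech = np.random.randn(fs)     # stand-in for one second of speech
noise = np.random.randn(2 * fs)  # stand-in for a noise recording
mixture = mix_at_snr(speech, noise, snr_db=-8.0)
```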

7.
Int J Audiol ; 60(2): 140-150, 2021 02.
Article in English | MEDLINE | ID: mdl-32972283

ABSTRACT

OBJECTIVE: The goal of this study was to assess recognition of foreign-accented speech of varying intelligibility and linguistic complexity in older adults. It is important to understand the factors that influence the recognition of this commonly encountered type of speech in a population that remains understudied in this regard. DESIGN: A repeated measures design was used. Listeners repeated back linguistically simple and complex sentences heard in noise. The sentences were produced by three talkers of varying intelligibility: one native talker of American English, one foreign-accented talker of high intelligibility, and one foreign-accented talker of low intelligibility. Percentage word recognition in sentences was measured. STUDY SAMPLE: Twenty-five older listeners with a range of hearing thresholds participated. RESULTS: We found a robust interaction between talker intelligibility and linguistic complexity. Recognition accuracy was higher for simple versus complex sentences, but only for the native and high-intelligibility foreign-accented talkers. This pattern persisted after effects of working memory capacity and hearing acuity were taken into consideration. CONCLUSION: Older listeners exhibit qualitatively different speech processing strategies for low- versus high-intelligibility foreign-accented talkers. Differences in recognition accuracy for words presented in simple versus complex sentence contexts emerged only for speech over a threshold of intelligibility.


Subject(s)
Hearing Loss , Speech Perception , Aged , Humans , Linguistics , Noise/adverse effects , Speech , Speech Intelligibility
8.
J Acoust Soc Am ; 147(6): 3765, 2020 06.
Article in English | MEDLINE | ID: mdl-32611135

ABSTRACT

Foreign-accented speech recognition is typically tested with linguistically simple materials, which offer a limited window into realistic speech processing. The present study examined the relationship between linguistic structure and talker intelligibility in several sentence-in-noise recognition experiments. Listeners transcribed simple/short and more complex/longer sentences embedded in noise. The sentences were spoken by three talkers of varying intelligibility: one native English speaker, one high-intelligibility non-native speaker, and one low-intelligibility non-native speaker. The effect of linguistic structure on sentence recognition accuracy was modulated by talker intelligibility: accuracy declined with increasing complexity only for the native and high-intelligibility foreign-accented talkers, whereas no such effect was found for the low-intelligibility foreign-accented talker. This pattern emerged across conditions: low and high signal-to-noise ratios, mixed and blocked stimulus presentation, and in the absence of a major cue to prosodic structure, the natural pitch contour of the sentences. Moreover, the pattern generalized to a different set of three talkers matched in intelligibility to the original talkers. Taken together, these results suggest that listeners employ qualitatively different speech processing strategies for low- versus high-intelligibility foreign-accented talkers, with sentence-level linguistic factors emerging only for speech over a threshold of intelligibility. Findings are discussed in the context of alternative accounts.


Subject(s)
Speech Perception , Speech , Linguistics , Noise/adverse effects , Recognition, Psychology , Speech Intelligibility
9.
Languages (Basel) ; 5(4)2020 Dec.
Article in English | MEDLINE | ID: mdl-33732634

ABSTRACT

Both the timing (i.e., when) and amount (i.e., how much) of language exposure affect language-learning outcomes. We compared speech recognition accuracy across three listener groups for whom the order (first versus second) and dominance (dominant versus non-dominant) of two languages, English and Spanish, varied: one group of Spanish heritage speakers (SHS; L2-English dominant, L1-Spanish non-dominant) and two groups of late-onset L2 learners (L1-dominant English/Spanish learners and L1-dominant Spanish/English learners). Sentence-final word recognition accuracy in both English and Spanish was assessed across three "easy" versus "difficult" listening conditions: (1) signal-to-noise ratio (SNR; +5 dB SNR versus 0 dB SNR), (2) sentence predictability (high versus low), and (3) speech style (clear versus plain). Overall, SHS English recognition accuracy was equivalent to that of the L1-dominant English Spanish learners, whereas SHS Spanish recognition accuracy was substantially lower than that of the L1-dominant Spanish English learners. Moreover, while SHS benefitted in both languages from the "easy" listening conditions, they were more adversely affected (i.e., they recognized fewer words) by higher noise and lower predictability in their non-dominant L1 Spanish than in their dominant L2 English. These results identify both a benefit of and a limit on the influence of early exposure. Specifically, the L2-dominant heritage speakers displayed L1-like speech recognition in their dominant L2, as well as generally better recognition in their non-dominant L1 than late-onset L2 learners. Yet, subtle recognition accuracy differences between SHS and L1-dominant listeners emerged under relatively difficult communicative conditions.

10.
Lang Speech ; 60(4): 530-561, 2017 12.
Article in English | MEDLINE | ID: mdl-29216813

ABSTRACT

While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the processing relationships among various indexical and non-linguistic dimensions of speech. Specifically, we examined how a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), relates in processing to talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies seem to be independent of listeners' bilingual status or experience with the languages tested. Moreover, the data reveal processing interference asymmetries, suggesting a processing hierarchy for indexical, non-linguistic speech features.


Subject(s)
Multilingualism , Phonetics , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adolescent , Adult , Attention , Audiometry, Speech , Cues , Female , Humans , Male , Sex Factors , Speech Intelligibility , Young Adult
11.
J Acoust Soc Am ; 141(2): 886, 2017 02.
Article in English | MEDLINE | ID: mdl-28253679

ABSTRACT

Second-language (L2) speech is consistently slower than first-language (L1) speech, and L1 speaking rate varies within and across talkers depending on many individual, situational, linguistic, and sociolinguistic factors. This study asked whether speaking rate is also determined by a language-independent, talker-specific trait such that, across a group of bilinguals, L1 speaking rate significantly predicts L2 speaking rate. Two measurements of speaking rate were automatically extracted from recordings of read and spontaneous speech by English monolinguals (n = 27) and bilinguals from ten L1 backgrounds (n = 86): speech rate (syllables/second) and articulation rate (syllables/second excluding silent pauses). Replicating prior work, L2 speaking rates were significantly slower than L1 speaking rates both across groups (monolinguals' L1 English vs bilinguals' L2 English) and across L1 and L2 within bilinguals. Critically, within the bilingual group, L1 speaking rate significantly predicted L2 speaking rate, suggesting that a significant portion of inter-talker variation in L2 speech derives from inter-talker variation in L1 speech, and that individual variability in L2 spoken language production may be best understood within the context of individual variability in L1 spoken language production.
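
The two rate measures are defined above: speech rate includes silent pauses in the denominator, while articulation rate excludes them. Below is a minimal sketch under stated assumptions (a known syllable count and a simple frame-energy pause detector); the study extracted both measures automatically, and the threshold and frame size here are illustrative.

```python
# Sketch: speech rate vs. articulation rate (syllables/second), given a
# syllable count and an energy-based estimate of silent-pause time.
# The -40 dB silence threshold and 25 ms frames are assumptions.
import numpy as np


def speaking_rates(audio: np.ndarray, fs: int, n_syllables: int,
                   frame_ms: float = 25.0, silence_db: float = -40.0):
    """Return (speech_rate, articulation_rate) for a mono signal."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    # Frame energy in dB relative to the loudest frame.
    energy_db = 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)
    energy_db -= energy_db.max()
    voiced = energy_db > silence_db  # frames counted as speech
    total_s = len(audio) / fs        # includes silent pauses
    phonation_s = max(voiced.sum() * frame_len / fs, 1e-9)  # excludes pauses
    return n_syllables / total_s, n_syllables / phonation_s
```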


Subject(s)
Multilingualism , Phonetics , Speech Acoustics , Voice Quality , Adolescent , Adult , Female , Humans , Male , Speech Production Measurement , Time Factors , Young Adult
12.
J Acoust Soc Am ; 140(5): EL378, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27908062

ABSTRACT

Adaptation to foreign-accented sentences can be guided by knowledge of the lexical content of those sentences, which, being an exact match for the target, provides feedback on all linguistic levels. The extent to which this feedback needs to match the accented sentence was examined by manipulating the degree of match on different linguistic dimensions, including sub-lexical, lexical, and syntactic levels. Both matched and mismatched target-feedback sentence pairs generated greater transcription improvement than non-English speech feedback, indicating that listeners can draw upon sources of linguistic information beyond matching lexical items, such as sub- and supra-lexical information, during adaptation.


Subject(s)
Speech , Adaptation, Physiological , Humans , Male , Phonetics , Speech Perception
13.
J Psycholinguist Res ; 45(5): 1151-60, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26420754

ABSTRACT

This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English-speaking participants listened to target words while looking at four pictures on the screen: a target (e.g., candle), an onset competitor (e.g., candy), a rhyme competitor (e.g., sandal), and an unrelated distractor (e.g., lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition.


Subject(s)
Pattern Recognition, Visual/physiology , Psycholinguistics/methods , Psychomotor Performance/physiology , Recognition, Psychology/physiology , Speech Perception/physiology , Adolescent , Adult , Female , Humans , Male , Time Factors , Young Adult
14.
J Acoust Soc Am ; 138(2): 928-37, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26328708

ABSTRACT

Language acquisition typically involves periods when the learner speaks and listens to the new language, and others when the learner is exposed to the language without consciously speaking or listening to it. Adaptation to variants of a native language occurs under similar conditions. Here, speech learning by adults was assessed following a training regimen that mimicked this common situation of language immersion without continuous active language processing. Experiment 1 focused on the acquisition of a novel phonetic category along the voice-onset-time continuum, while Experiment 2 focused on adaptation to foreign-accented speech. The critical training regimens of each experiment involved alternation between periods of practice with the task of phonetic classification (Experiment 1) or sentence recognition (Experiment 2) and periods of stimulus exposure without practice. These practice and exposure periods yielded little to no improvement separately, but alternation between them generated as much or more improvement as did practicing during every period. Practice appears to serve as a catalyst that enables stimulus exposures encountered both during and outside of the practice periods to contribute to quite distinct cases of speech learning. It follows that practice-plus-exposure combinations may tap a general learning mechanism that facilitates language acquisition and speech processing.


Subject(s)
Language , Learning , Practice, Psychological , Acoustic Stimulation , Adolescent , Education/methods , Educational Measurement , Female , Humans , Male , Phonetics , Psychomotor Performance , Time Factors , Young Adult
15.
PLoS Biol ; 13(7): e1002196, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26172057

ABSTRACT

Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.


Subject(s)
Literacy , Noise , Speech Perception/physiology , Adolescent , Biomarkers , Child , Child, Preschool , Female , Humans , Learning Disabilities/diagnosis , Male
16.
Hear Res ; 328: 34-47, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26113025

ABSTRACT

Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood.


Subject(s)
Auditory Perception , Hearing , Noise/adverse effects , Speech Perception , Acoustic Stimulation , Child, Preschool , Cohort Studies , Electrophysiology , Female , Fourier Analysis , Hearing Tests , Humans , Language , Male , Neurophysiology , Phonetics , Risk Factors , Signal Processing, Computer-Assisted , Speech , Speech Acoustics
17.
Atten Percept Psychophys ; 77(4): 1342-57, 2015 May.
Article in English | MEDLINE | ID: mdl-25772102

ABSTRACT

Speech processing can often take place in adverse listening conditions that involve the mixing of speech and background noise. In this study, we investigated processing dependencies between background noise and indexical speech features, using a speeded classification paradigm (Garner, 1974; Exp. 1), and whether background noise is encoded and represented in memory for spoken words in a continuous recognition memory paradigm (Exp. 2). Whether or not the noise spectrally overlapped with the speech signal was also manipulated. The results of Experiment 1 indicated that background noise and indexical features of speech (gender, talker identity) cannot be completely segregated during processing, even when the two auditory streams are spectrally nonoverlapping. Perceptual interference was asymmetric, whereby irrelevant indexical feature variation in the speech signal slowed noise classification to a greater extent than irrelevant noise variation slowed speech classification. This asymmetry may stem from the fact that speech features have greater functional relevance to listeners, and are thus more difficult to selectively ignore than background noise. Experiment 2 revealed that a recognition cost for words embedded in different types of background noise on the first and second occurrences only emerged when the noise and the speech signal were spectrally overlapping. Together, these data suggest integral processing of speech and background noise, modulated by the level of processing and the spectral separation of the speech and noise.


Subject(s)
Auditory Perception , Memory , Noise , Speech Perception , Female , Humans , Male , Recognition, Psychology , Young Adult
18.
J Am Acad Audiol ; 25(4): 355-66, 2014 Apr.
Article in English | MEDLINE | ID: mdl-25126683

ABSTRACT

BACKGROUND: Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al. (2010a). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. PURPOSE: The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. RESEARCH DESIGN: A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. STUDY SAMPLE: Three listener groups were tested, including monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric mild sloping to moderate sensorineural hearing loss. DATA COLLECTION AND ANALYSIS: Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and listener groups. RESULTS: Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native accented, allowing for improved speech recognition. Varying levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the nonnative English-speaking listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. CONCLUSIONS: Slight modifications between the target and the masker speech allowed monolingual English speakers with normal hearing to improve their recognition of native-accented English, even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker, or determining why monolingual normal-hearing listeners can take advantage of these differences, could help improve speech recognition for those with hearing loss in the future.
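
Matching maskers on their long-term average speech spectrum (LTASS), as described above, can be approximated with a per-bin spectral correction. The sketch below is one reasonable realization, assuming scipy; Welch PSDs plus a zero-phase FFT filter are illustrative choices here, not necessarily the study's exact normalization procedure.

```python
# Sketch: filter one masker so its long-term average spectrum approximates a
# reference masker's.
import numpy as np
from scipy.signal import welch


def match_ltass(signal: np.ndarray, reference: np.ndarray, fs: int,
                nfft: int = 1024) -> np.ndarray:
    """Equalize `signal` toward the long-term spectrum of `reference`."""
    f, p_sig = welch(signal, fs=fs, nperseg=nfft)
    _, p_ref = welch(reference, fs=fs, nperseg=nfft)
    # Per-bin magnitude correction (sqrt because Welch returns power).
    gain = np.sqrt((p_ref + 1e-12) / (p_sig + 1e-12))
    # Apply as a zero-phase filter in the frequency domain.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum *= np.interp(freqs, f, gain)  # interpolate gain onto FFT bins
    return np.fft.irfft(spectrum, n=len(signal))
```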


Subject(s)
Hearing Loss , Language , Perceptual Masking/physiology , Phonetics , Humans , Speech
19.
J Acoust Soc Am ; 136(1): EL26-32, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24993234

ABSTRACT

This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task either in "pure" background conditions, in which all trials had either English or Dutch background babble, or in mixed background conditions, in which the background language varied across trials (i.e., a mix of English and Dutch, or one of these background languages mixed with quiet trials). This design allowed the authors to compare performance on identical trials across pure and mixed conditions. The data reveal that speech-in-speech recognition is sensitive to contextual variation in the target-background language (mis)match, depending on the ease/difficulty of the test trials in relation to the surrounding trials.


Subject(s)
Cues , Noise/adverse effects , Perceptual Masking , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Language , Male , Recognition, Psychology , Speech Acoustics , Voice Quality , Young Adult
20.
J Acoust Soc Am ; 135(6): EL270-6, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24907833

ABSTRACT

This study examined whether language-specific properties may lead to cross-language differences in the degree of phonetic reduction. Rates of syllabic reduction (defined here as reduction in which the number of syllables pronounced is less than expected based on canonical form) in English and Mandarin were compared. The rate of syllabic reduction was higher in Mandarin than in English. Regardless of language, open syllables participated in reduction more often than closed syllables. The prevalence of open syllables was higher in Mandarin than in English, and this phonotactic difference could account for Mandarin's higher rate of syllabic reduction.
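
Syllabic reduction as defined above is a simple count comparison: a token is reduced when it is pronounced with fewer syllables than its canonical form predicts. A minimal sketch with hypothetical counts, purely for illustration:

```python
# Sketch: rate of syllabic reduction over a set of tokens.
# The token list below is hypothetical illustration data, not from the study.
def reduction_rate(tokens: list[tuple[int, int]]) -> float:
    """tokens: (canonical_syllables, pronounced_syllables) per token."""
    reduced = sum(pron < canon for canon, pron in tokens)
    return reduced / len(tokens)


# Hypothetical counts: three of five tokens lose at least one syllable.
tokens = [(3, 2), (2, 2), (4, 3), (1, 1), (3, 1)]
print(f"Syllabic reduction rate: {reduction_rate(tokens):.2f}")  # 0.60
```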


Subject(s)
Multilingualism , Phonetics , Speech Acoustics , Voice Quality , Acoustics , Adult , Female , Humans , Male , Signal Processing, Computer-Assisted , Sound Spectrography , Speech Production Measurement , Time Factors , Young Adult