Results 1 - 15 of 15
1.
J Speech Lang Hear Res ; 62(4): 853-867, 2019 Apr 15.
Article in English | MEDLINE | ID: mdl-30986136

ABSTRACT

PURPOSE: Child phonologists have long been interested in how tightly speech input constrains the speech production capacities of young children, and the question acquires clinical significance when children with hearing loss are considered. Children with sensorineural hearing loss often show differences in the spectral and temporal structures of their speech production, compared to children with normal hearing. The current study was designed to investigate the extent to which this problem can be explained by signal degradation. METHOD: Ten 5-year-olds with normal hearing were recorded imitating 120 three-syllable nonwords presented in unprocessed form and as noise-vocoded signals. Target segments consisted of fricatives, stops, and vowels. Several measures were made: 2 duration measures (voice onset time and fricative length) and 4 spectral measures involving 2 segments (1st and 3rd moments of fricatives and 1st and 2nd formant frequencies for the point vowels). RESULTS: All spectral measures were affected by signal degradation, with vowel production showing the largest effects. Although a change in voice onset time was observed with vocoded signals for /d/, voicing category was not affected. Fricative duration remained constant. CONCLUSIONS: Results support the hypothesis that quality of the input signal constrains the speech production capacities of young children. Consequently, it can be concluded that the production problems of children with hearing loss, including those with cochlear implants, can be explained to some extent by the degradation in the signal they hear. However, experience with both speech perception and production likely plays a role as well.


Subject(s)
Hearing Loss, Sensorineural/physiopathology; Phonetics; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation; Child, Preschool; Female; Humans; Male; Noise; Perceptual Masking/physiology; Reaction Time; Speech Production Measurement
2.
Otol Neurotol ; 37(1): 24-30, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26571408

ABSTRACT

HYPOTHESIS: Adding a low-frequency acoustic signal to the cochlear implant (CI) signal (i.e., bimodal stimulation) for a period of time early in life improves language acquisition. BACKGROUND: Children must acquire sensitivity to the phonemic units of language to develop most language-related skills, including expressive vocabulary, working memory, and reading. Acquiring sensitivity to phonemic structure depends largely on having refined spectral (frequency) representations available in the signal, which does not happen with CIs alone. Combining the low-frequency acoustic signal available through hearing aids with the CI signal can enhance signal quality. A period with this bimodal stimulation has been shown to improve language skills in very young children. This study examined whether these benefits persist into childhood. METHODS: Data were examined for 48 children with CIs implanted under age 3 years, participating in a longitudinal study. All children wore hearing aids before receiving a CI, but upon receiving a first CI, 24 children had at least 1 year of bimodal stimulation (Bimodal group), and 24 children had only electric stimulation subsequent to implantation (CI-only group). Measures of phonemic awareness were obtained at second and fourth grades, along with measures of expressive vocabulary, working memory, and reading. RESULTS: Children in the Bimodal group generally performed better on measures of phonemic awareness, and that advantage was reflected in other language measures. CONCLUSIONS: Having even a brief period of time early in life with combined electric-acoustic input provides benefits to language learning into childhood, likely because of the enhancement in spectral representations provided.


Subject(s)
Cochlear Implants; Language Development; Acoustic Stimulation; Electric Stimulation; Female; Hearing Loss, Sensorineural/psychology; Hearing Loss, Sensorineural/therapy; Humans; Infant; Longitudinal Studies; Male; Memory, Short-Term; Reading; Speech Perception; Vocabulary
3.
J Acoust Soc Am ; 137(5): 2811-22, 2015 May.
Article in English | MEDLINE | ID: mdl-25994709

ABSTRACT

Children need to discover linguistically meaningful structures in the acoustic speech signal. Being attentive to recurring, time-varying formant patterns helps in that process. However, that kind of acoustic structure may not be available to children with cochlear implants (CIs), thus hindering development. The major goal of this study was to examine whether children with CIs are as sensitive to time-varying formant structure as children with normal hearing (NH) by asking them to recognize sine-wave speech. The same materials were presented as speech in noise, as well, to evaluate whether any group differences might simply reflect general perceptual deficits on the part of children with CIs. Vocabulary knowledge, phonemic awareness, and "top-down" language effects were all also assessed. Finally, treatment factors were examined as possible predictors of outcomes. Results showed that children with CIs were as accurate as children with NH at recognizing sine-wave speech, but poorer at recognizing speech in noise. Phonemic awareness was related to that recognition. Top-down effects were similar across groups. Having had a period of bimodal stimulation near the time of receiving a first CI facilitated these effects. Results suggest that children with CIs have access to the important time-varying structure of vocal-tract formants.


Subject(s)
Cochlear Implantation/instrumentation; Cochlear Implants; Correction of Hearing Impairment/instrumentation; Persons With Hearing Impairments/rehabilitation; Speech Perception; Acoustic Stimulation; Acoustics; Age Factors; Audiometry, Speech; Awareness; Case-Control Studies; Child; Child Behavior; Child Language; Cues; Humans; Noise/adverse effects; Pattern Recognition, Physiological; Perceptual Masking; Persons With Hearing Impairments/psychology; Phonetics; Recognition, Psychology; Signal Processing, Computer-Assisted; Sound Spectrography
4.
J Speech Lang Hear Res ; 58(3): 1077-92, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25813201

ABSTRACT

PURPOSE: Children must develop optimal perceptual weighting strategies for processing speech in their first language. Hearing loss can interfere with that development, especially if cochlear implants are required. The three goals of this study were to measure, for children with and without hearing loss: (a) cue weighting for a manner distinction, (b) sensitivity to those cues, and (c) real-world communication functions. METHOD: One hundred and seven children (43 with normal hearing [NH], 17 with hearing aids [HAs], and 47 with cochlear implants [CIs]) performed several tasks: labeling of stimuli from /bɑ/-to-/wɑ/ continua varying in formant and amplitude rise time (FRT and ART), discrimination of ART, word recognition, and phonemic awareness. RESULTS: Children with hearing loss were less attentive overall to acoustic structure than children with NH. Children with CIs, but not those with HAs, weighted FRT less and ART more than children with NH. Sensitivity could not explain cue weighting. FRT cue weighting explained significant amounts of variability in word recognition and phonemic awareness; ART cue weighting did not. CONCLUSION: Signal degradation inhibits access to spectral structure for children with CIs, but cannot explain their delayed development of optimal weighting strategies. Auditory training could strengthen the weighting of spectral cues for children with CIs, thus aiding spoken language acquisition.


Subject(s)
Cues; Hearing Loss; Speech Acoustics; Speech Perception; Acoustic Stimulation; Child; Child Language; Cochlear Implants; Discrimination, Psychological; Hearing Aids; Hearing Loss/psychology; Hearing Loss/rehabilitation; Humans; Language Tests; Neuropsychological Tests; Recognition, Psychology
5.
J Acoust Soc Am ; 136(4): 1845-56, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25324085

ABSTRACT

Cochlear implants have improved speech recognition for deaf individuals, but further modifications are required before performance will match that of normal-hearing listeners. In this study, the hypotheses were tested that (1) implant processing would benefit from efforts to preserve the structure of the low-frequency formants and (2) time-varying aspects of that structure would be especially beneficial. Using noise-vocoded and sine-wave stimuli with normal-hearing listeners, two experiments examined placing boundaries between static spectral channels to optimize representation of the first two formants and preserving time-varying formant structure. Another hypothesis tested in this study was that children might benefit more than adults from strategies that preserve formant structure, especially time-varying structure. Sixty listeners provided data in each experiment: 20 adults, 20 five-year-olds, and 20 seven-year-olds. Materials were consonant-vowel-consonant words, four-word syntactically correct, meaningless sentences, and five-word syntactically correct, meaningful sentences. Results showed that listeners of all ages benefited from having channel boundaries placed to optimize information about the first two formants, and benefited even more from having time-varying structure. Children showed greater gains than adults only for time-varying formant structure. Results suggest that efforts would be well spent trying to design processing strategies that preserve formant structure.


Subject(s)
Cochlear Implants; Persons With Hearing Impairments/rehabilitation; Speech Acoustics; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Age Factors; Audiometry, Speech; Auditory Threshold; Child; Child, Preschool; Humans; Persons With Hearing Impairments/psychology; Recognition, Psychology; Time Factors; Young Adult
6.
J Commun Disord ; 52: 111-33, 2014.
Article in English | MEDLINE | ID: mdl-25307477

ABSTRACT

PURPOSE: This study compared perceptual weighting strategies of children with cochlear implants (CIs) and children with normal hearing (NH), and asked if strategies are explained solely by degraded spectral representations, or if diminished language experience accounts for some of the effect. Relationships between weighting strategies and other language skills were examined. METHOD: One hundred 8-year-olds (49 with NH and 51 with CIs) were tested on four measures: (1) labeling of cop-cob and sa-sha stimuli; (2) discrimination of the acoustic cues to the cop-cob decision; (3) phonemic awareness; and (4) word recognition. RESULTS: No differences in weighting of cues to the cop-cob decision were observed between children with CIs and NH, suggesting that language experience was sufficient for the children with CIs. Differences in weighting of cues to the sa-sha decision were found, but were not entirely explained by auditory sensitivity. Weighting strategies were related to phonemic awareness and word recognition. CONCLUSIONS: More salient cues facilitate stronger weighting of those cues. Nonetheless, individuals differ in how salient cues need to be to capture perceptual attention. Familiarity with stimuli also affects how reliably children attend to acoustic cues. Training should help children with CIs learn to categorize speech sounds with less-salient cues. LEARNING OUTCOMES: After reading this article, the learner should be able to: (1) recognize methods and motivations for studying perceptual weighting strategies in speech perception; (2) explain how signal quality and language experience affect the development of weighting strategies for children with cochlear implants and children with normal hearing; and (3) summarize the importance of perceptual weighting strategies for other aspects of language functioning.


Subject(s)
Cochlear Implants/psychology; Speech Perception; Acoustic Stimulation; Case-Control Studies; Child; Cues; Female; Hearing Tests; Humans; Male; Phonetics; Speech Discrimination Tests
7.
J Speech Lang Hear Res ; 57(2): 566-82, 2014 Apr 01.
Article in English | MEDLINE | ID: mdl-24686722

ABSTRACT

PURPOSE: Several acoustic cues specify any single phonemic contrast. Nonetheless, adult, native speakers of a language share weighting strategies, showing preferential attention to some properties over others. Cochlear implant (CI) signal processing disrupts the salience of some cues: In general, amplitude structure remains readily available, but spectral structure less so. This study asked how well speech recognition is supported if CI users shift attention to salient cues not weighted strongly by native speakers. METHOD: Twenty adults with CIs participated. The /bɑ/-/wɑ/ contrast was used because spectral and amplitude structure varies in correlated fashion for this contrast. Adults with normal hearing weight the spectral cue strongly but the amplitude cue negligibly. Three measurements were made: labeling decisions, spectral and amplitude discrimination, and word recognition. RESULTS: Outcomes varied across listeners: Some weighted the spectral cue strongly, some weighted the amplitude cue, and some weighted neither. Spectral discrimination predicted spectral weighting. Spectral weighting explained the most variance in word recognition. Age of onset of hearing loss predicted spectral weighting but not unique variance in word recognition. CONCLUSION: The weighting strategies of listeners with normal hearing likely support speech recognition best, so efforts in implant design, fitting, and training should focus on developing those strategies.


Subject(s)
Cochlear Implantation/rehabilitation; Cochlear Implants; Cues; Phonetics; Psychoacoustics; Speech Perception; Acoustic Stimulation/methods; Adolescent; Adult; Hearing; Hearing Loss/rehabilitation; Humans; Middle Aged; Multilingualism; Speech Acoustics; Speech Discrimination Tests; Young Adult
8.
Int J Audiol ; 53(4): 270-84, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24456179

ABSTRACT

OBJECTIVE: Using signals processed to simulate speech received through cochlear implants and low-frequency extended hearing aids, this study examined the proposal that low-frequency signals facilitate the perceptual organization of broader, spectrally degraded signals. DESIGN: In two experiments, words and sentences were presented in diotic and dichotic configurations as four-channel noise-vocoded signals (VOC-only), and as those signals combined with the acoustic signal below 0.25 kHz (LOW-plus). Dependent measures were percent correct recognition, and the difference between scores for the two processing conditions given as proportions of recognition scores for VOC-only. The influence of linguistic context was also examined. STUDY SAMPLE: Participants had normal hearing. In all, 40 adults, 40 seven-year-olds, and 20 five-year-olds participated. RESULTS: Participants of all ages showed benefits of adding the low-frequency signal. The effect was greater for sentences than words, but no effect of diotic versus dichotic presentation was found. The influence of linguistic context was similar across age groups, and did not contribute to the low-frequency effect. Listeners who had poorer VOC-only scores showed greater low-frequency effects. CONCLUSION: The benefit of adding a low-frequency signal to a broader, spectrally degraded signal derives in some part from its facilitative influence on perceptual organization of the sensory input.


Subject(s)
Cochlear Implantation/instrumentation; Cochlear Implants; Correction of Hearing Impairment/instrumentation; Persons With Hearing Impairments/rehabilitation; Signal Processing, Computer-Assisted; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Age Factors; Audiometry, Speech; Auditory Threshold; Child; Child, Preschool; Cues; Equipment Design; Humans; Persons With Hearing Impairments/psychology; Recognition, Psychology; Sound Spectrography; Young Adult
9.
Int J Audiol ; 52(8): 513-25, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23834373

ABSTRACT

OBJECTIVE: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children's abilities to recognize speech in noise. DESIGN: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. STUDY SAMPLE: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. RESULTS: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. CONCLUSION: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms.


Subject(s)
Child Language; Hearing Loss/psychology; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/psychology; Recognition, Psychology; Sound Localization; Speech Perception; Acoustic Stimulation; Age Factors; Audiometry, Pure-Tone; Audiometry, Speech; Auditory Threshold; Case-Control Studies; Child; Cochlear Implants; Correction of Hearing Impairment/instrumentation; Female; Hearing Aids; Hearing Loss/rehabilitation; Humans; Male; Persons With Hearing Impairments/rehabilitation; Signal-To-Noise Ratio; Vocabulary
10.
J Speech Lang Hear Res ; 56(2): 427-40, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22992704

ABSTRACT

PURPOSE: Previous research has demonstrated that children weight the acoustic cues to many phonemic decisions differently than do adults and gradually shift those strategies as they gain language experience. However, that research has focused on spectral and duration cues rather than on amplitude cues. In the current study, the authors examined amplitude rise time (ART; an amplitude cue) and formant rise time (FRT; a spectral cue) in the /b/-/w/ manner contrast for adults and children, and related those speech decisions to outcomes of nonspeech discrimination tasks. METHOD: Twenty adults and 30 children (ages 4-5 years) labeled natural and synthetic speech stimuli manipulated to vary ARTs and FRTs, and discriminated nonspeech analogs that varied only by ART in an AX paradigm. RESULTS: Three primary results were obtained. First, listeners in both age groups based speech labeling judgments on FRT, not on ART. Second, the fundamental frequency of the natural speech samples did not influence labeling judgments. Third, discrimination performance for the nonspeech stimuli did not predict how listeners would perform with the speech stimuli. CONCLUSION: Even though both adults and children are sensitive to ART, it was not weighted in phonemic judgments by these typical listeners.


Subject(s)
Loudness Perception; Phonetics; Pitch Perception; Speech Perception; Acoustic Stimulation/methods; Adolescent; Adult; Age Factors; Child, Preschool; Cues; Female; Humans; Male; Psychoacoustics; Sound Spectrography; Speech; Young Adult
11.
J Acoust Soc Am ; 132(6): EL443-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23231206

ABSTRACT

Earlier work using sine-wave and noise-vocoded signals suggests that dynamic spectral structure plays a greater role in speech recognition for children than adults [Nittrouer and Lowenstein (2010). J. Acoust. Soc. Am. 127, 1624-1635], but questions arise concerning whether outcomes can be compared because sine waves and wide noise bands are different in nature. The current study addressed that question using narrow noise bands for both signals, and applying a difference ratio to index the contribution made by dynamic spectral structure. Results replicated earlier findings, supporting the idea that dynamic spectral structure plays a critical role in speech recognition, especially for children.


Subject(s)
Recognition, Psychology; Speech Intelligibility; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Age Factors; Analysis of Variance; Audiometry, Speech; Auditory Threshold; Child; Child, Preschool; Humans; Pattern Recognition, Physiological; Sound Spectrography; Young Adult
12.
J Acoust Soc Am ; 130(5): EL290-6, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22088030

ABSTRACT

Coherence masking protection (CMP) refers to the phenomenon in which a target formant is labeled at lower signal-to-noise levels when presented with a stable cosignal consisting of two other formants than when presented alone. This effect has been reported primarily for adults with first-formant (F1) targets and F2/F3 cosignals, but has also been found for children, in fact in greater magnitude. In this experiment, F2 was the target and F1/F3 was the cosignal. Results showed similar effects for each age group as had been found for F1 targets. Implications for auditory prostheses for listeners with hearing loss are discussed.


Subject(s)
Noise/adverse effects; Perceptual Masking; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Age Factors; Audiometry, Speech; Auditory Threshold; Child; Child, Preschool; Humans; Signal Detection, Psychological; Sound Spectrography; Speech Acoustics; Young Adult
13.
J Commun Disord ; 44(3): 294-314, 2011.
Article in English | MEDLINE | ID: mdl-21329941

ABSTRACT

PURPOSE: Children with speech sound disorder (SSD) and reading disability (RD) have poor phonological awareness, a problem believed to arise largely from deficits in processing the sensory information in speech, specifically individual acoustic cues. However, such cues are details of acoustic structure. Recent theories suggest that listeners also need to be able to integrate those details to perceive linguistically relevant form. This study examined abilities of children with SSD, RD, and SSD+RD not only to process acoustic cues but also to recover linguistically relevant form from the speech signal. METHOD: Ten- to 11-year-olds with SSD (n=17), RD (n=16), SSD+RD (n=17), and Controls (n=16) were tested to examine their sensitivity to (1) voice onset times (VOT); (2) spectral structure in fricative-vowel syllables; and (3) vocoded sentences. RESULTS: Children in all groups performed similarly with VOT stimuli, but children with disorders showed delays on other tasks, although the specifics of their performance varied. CONCLUSION: Children with poor phonemic awareness not only lack sensitivity to acoustic details, but are also less able to recover linguistically relevant forms. This is contrary to one of the main current theories of the relation between spoken and written language development. LEARNING OUTCOMES: Readers will be able to (1) understand the role speech perception plays in phonological awareness, (2) distinguish between segmental and global structure analysis of speech perception, (3) describe differences and similarities in speech perception among children with speech sound disorder and/or reading disability, and (4) recognize the importance of broadening clinical interventions to focus on recognizing structure at all levels of speech analysis.


Subject(s)
Dyslexia/diagnosis; Perceptual Disorders/diagnosis; Phonetics; Speech Perception; Acoustic Stimulation; Awareness; Child; Cues; Dyslexia/psychology; Female; Humans; Male; Perceptual Disorders/psychology; Reaction Time; Vocabulary; Wechsler Scales
14.
J Acoust Soc Am ; 127(3): 1624-35, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20329861

ABSTRACT

The ability to recognize speech involves sensory, perceptual, and cognitive processes. For much of the history of speech perception research, investigators have focused on the first and third of these, asking how much and what kinds of sensory information are used by normal and impaired listeners, as well as how effective amounts of that information are altered by "top-down" cognitive processes. This experiment focused on perceptual processes, asking what accounts for how the sensory information in the speech signal gets organized. Two types of speech signals processed to remove properties that could be considered traditional acoustic cues (amplitude envelopes and sine wave replicas) were presented to 100 listeners in five groups: native English-speaking (L1) adults, 7-, 5-, and 3-year-olds, and native Mandarin-speaking adults who were excellent second-language (L2) users of English. The L2 adults performed more poorly than L1 adults with both kinds of signals. Children performed more poorly than L1 adults but showed disproportionately better performance for the sine waves than for the amplitude envelopes compared to both groups of adults. Sentence context had similar effects across groups, so variability in recognition was attributed to differences in perceptual organization of the sensory information, presumed to arise from native language experience.


Subject(s)
Models, Biological; Phonetics; Speech Perception/physiology; Acoustic Stimulation; Adult; Child; Child, Preschool; Female; Humans; Language; Male; Recognition, Psychology/physiology; Sound Spectrography
15.
J Acoust Soc Am ; 115(4): 1777-90, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15101656

ABSTRACT

Adults whose native languages permit syllable-final obstruents, and show a vocalic length distinction based on the voicing of those obstruents, consistently weight vocalic duration strongly in their perceptual decisions about the voicing of final stops, at least in laboratory studies using synthetic speech. Children, on the other hand, generally disregard such signal properties in their speech perception, favoring formant transitions instead. These age-related differences led to the prediction that children learning English as a native language would weight vocalic duration less than adults, but weight syllable-final transitions more in decisions of final-consonant voicing. This study tested that prediction. In the first experiment, adults and children (eight- and six-year-olds) labeled synthetic and natural CVC words with voiced or voiceless stops in final C position. Predictions were strictly supported for synthetic stimuli only. With natural stimuli it appeared that adults and children alike weighted syllable-offset transitions strongly in their voicing decisions. The predicted age-related difference in the weighting of vocalic duration was seen for these natural stimuli almost exclusively when syllable-final transitions signaled a voiced final stop. A second experiment with adults and children (seven- and five-year-olds) replicated these results with four new sets of natural stimuli. It was concluded that acoustic properties other than vocalic duration might play more important roles in voicing decisions for final stops than commonly asserted, sometimes even taking precedence over vocalic duration.


Subject(s)
Phonetics; Speech Perception; Verbal Behavior; Acoustic Stimulation; Adult; Age Factors; Analysis of Variance; Child; Child, Preschool; Female; Humans; Male; Speech Production Measurement