Results 1 - 20 of 22
1.
J Speech Lang Hear Res ; 63(1): 286-304, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31855606

ABSTRACT

PURPOSE: The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word-nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech. METHOD: Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access. RESULTS: In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word-nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words. CONCLUSIONS: The general analysis of CI users' lexical competition patterns showed only quantitative differences from NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies for processing speech. Individuals' word-nonword sensitivity explained different parts of individual variability than clinical speech perception scores did. These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation. Supplemental Material: https://doi.org/10.23641/asha.11368106


Subject(s)
Cochlear Implants/psychology , Deafness/psychology , Individuality , Phonetics , Speech Perception , Acoustic Stimulation/methods , Adult , Aged , Cochlear Implantation , Deafness/surgery , Female , Humans , Male , Middle Aged , Task Performance and Analysis
2.
J Acoust Soc Am ; 143(5): EL311, 2018 05.
Article in English | MEDLINE | ID: mdl-29857757

ABSTRACT

Adult normal-hearing musicians have previously been shown to perceive music, vocal emotion, and speech in noise better than non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech understanding in noise were measured in young adolescent normal-hearing musicians and non-musicians listening to unprocessed or degraded signals. Unlike in adults, there was no musician effect for vocal emotion identification or speech in noise. Melodic contour identification with degraded signals was significantly better in musicians, suggesting potential benefits of music training for young cochlear-implant users, who experience similar spectro-temporal signal degradations.


Subject(s)
Acoustic Stimulation/methods , Emotions/physiology , Music/psychology , Pitch Perception/physiology , Speech Perception/physiology , Voice/physiology , Adolescent , Auditory Perception/physiology , Child , Female , Humans , Male , Time Factors
3.
IEEE Trans Neural Syst Rehabil Eng ; 26(2): 392-399, 2018 02.
Article in English | MEDLINE | ID: mdl-29432110

ABSTRACT

Electroencephalographic (EEG) recordings provide objective estimates of listeners' cortical processing of sounds and of the status of their speech perception system. For profoundly deaf listeners with cochlear implants (CIs), the applications of EEG are limited because the device adds electric artifacts to the recordings. This restricts the possibilities for neural-based metrics of speech processing in CI users, for instance to gauge cortical reorganization due to an individual's hearing-loss history. This paper describes the characteristics of the CI artifact as recorded with an artificial head substitute, and reports how the artifact is affected by the properties of the acoustic input signal and by the settings of the device. METHODS: We created a brain substitute using agar that simulates the brain's conductivity, placed it in a human skull, and performed EEG recordings with CIs from three different manufacturers. As stimuli, we used simple and complex non-speech stimuli, as well as naturally produced continuous speech. We examined the effect of manipulating device settings in both controlled experimental CI configurations and real clinical maps. RESULTS: Increasing the magnitude of the stimulation current through the device settings also increases the magnitude of the artifact. The artifact recorded in response to speech is smaller in magnitude than that for non-speech stimuli, due to the amplitude modulations inherent in the speech signal. CONCLUSION: The CI EEG artifact for speech appears more difficult to detect than that for simple stimuli. Since the artifact differs across CI users, due to their individual clinical maps, the method presented enables insight into the individual manifestations of the artifact.


Subject(s)
Acoustic Stimulation , Artifacts , Brain/physiology , Cochlear Implants , Electroencephalography/methods , Models, Neurological , Agar , Brain Mapping , Electric Conductivity , Evoked Potentials, Auditory , Humans , Models, Anatomic , Skull
4.
J Acoust Soc Am ; 139(3): EL51-6, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27036287

ABSTRACT

Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise perception, speech-on-speech perception draws on many of the skills that musical training improves, such as pitch perception and stream segregation, as well as higher-level auditory cognitive functions, such as attention. Indeed, although a few non-musicians performed as well as musicians, at the group level there was a strong musician benefit for speech perception in a speech masker. This benefit does not seem to result from better voice processing and could instead be related to better stream segregation or enhanced cognitive functions.


Subject(s)
Music , Noise/adverse effects , Perceptual Masking , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Attention , Audiometry, Speech , Cognition , Cues , Female , Humans , Male , Pitch Perception , Young Adult
5.
J Acoust Soc Am ; 138(3): EL187-92, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428811

ABSTRACT

This study compares two response-time measures of listening effort that can be combined with a clinical speech test for a more comprehensive evaluation of the total listening experience: verbal response times to auditory stimuli (RTaud) and response times to a visual task (RTvis) in a dual-task paradigm. The listening task was presented in five masker conditions: no noise, and two types of noise at each of two fixed intelligibility levels. Both RTaud and RTvis showed effects of noise. However, only RTaud showed an effect of intelligibility. Because of its simplicity of implementation, RTaud may be a useful effort measure for clinical applications.
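For illustration, a verbal response time such as RTaud could be extracted offline from a recorded spoken response with a simple energy threshold, as in the minimal Python sketch below. The frame length and threshold value are assumptions chosen for the example and do not reflect the procedure used in this study.

```python
# Minimal sketch: estimate a verbal response time (RTaud) as the time from
# stimulus offset to the first analysis frame whose RMS exceeds a threshold.
# Frame length and threshold are illustrative assumptions.
import numpy as np

def estimate_verbal_rt(recording, fs, stimulus_offset_s, frame_s=0.01, threshold=0.02):
    """Return the estimated verbal RT in seconds, or None if no speech is found."""
    start = int(stimulus_offset_s * fs)
    frame = int(frame_s * fs)
    post = recording[start:]
    for i in range(len(post) // frame):
        segment = post[i * frame:(i + 1) * frame]
        if np.sqrt(np.mean(segment ** 2)) > threshold:
            return i * frame / fs
    return None

# Synthetic example: 0.8 s of silence followed by a speech-like burst
fs = 16000
response = np.concatenate([np.zeros(int(0.8 * fs)),
                           0.1 * np.random.randn(int(0.5 * fs))])
print(f"Estimated RTaud: {estimate_verbal_rt(response, fs, 0.0):.3f} s")
```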


Subject(s)
Audiometry, Speech/methods , Reaction Time , Speech Intelligibility , Speech Perception , Verbal Behavior , Acoustic Stimulation , Adolescent , Adult , Humans , Noise/adverse effects , Perceptual Masking , Photic Stimulation , Predictive Value of Tests , Reproducibility of Results , Time Factors , Visual Perception , Young Adult
6.
Hear Res ; 328: 24-33, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26117407

ABSTRACT

In complex listening situations, cognitive restoration mechanisms are commonly used to enhance perception of degraded speech with inaudible segments. Profoundly hearing-impaired people with a cochlear implant (CI) show less benefit from such mechanisms. However, both normal-hearing (NH) listeners and CI users do benefit from visual speech cues in these listening situations. In this study we investigated whether an accompanying video of the speaker can enhance the intelligibility of interrupted sentences and the phonemic restoration benefit, measured as the increase in intelligibility when the silent intervals are filled with noise. As in previous studies, a restoration benefit was observed with interrupted speech without spectral degradation (Experiment 1), was absent in acoustic simulations of CIs (Experiment 2), and was present again in simulations of electric-acoustic stimulation (Experiment 3). In all experiments, the additional speech information provided by the complementary visual cues led to higher overall intelligibility; however, these cues did not influence the occurrence or extent of the phonemic restoration benefit of filler noise. The results imply that the visual cues do not act synergistically with the filler noise, as adding them increased the intelligibility of interrupted sentences equally with or without the filler noise.


Subject(s)
Cues , Speech Intelligibility/physiology , Visual Perception , Acoustic Stimulation/methods , Acoustics , Adolescent , Adult , Audiometry, Speech , Auditory Threshold , Cochlear Implants , Electric Stimulation , Female , Healthy Volunteers , Hearing , Humans , Male , Noise , Speech Perception/physiology , Surveys and Questionnaires , Video Recording , Young Adult
7.
J Acoust Soc Am ; 137(3): 1298-308, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25786943

ABSTRACT

Perception of voice characteristics allows normal-hearing listeners to identify the gender of a speaker and to better segregate speakers from each other in cocktail-party situations. This benefit is largely driven by the perception of two vocal characteristics of the speaker: the fundamental frequency (F0) and the vocal-tract length (VTL). Previous studies have suggested that cochlear implant (CI) users have difficulties in perceiving these cues. The aim of the present study was to investigate possible causes for the limited sensitivity to VTL differences in CI users. Different acoustic simulations of CI stimulation were implemented to characterize the role of spectral resolution in VTL perception, both in terms of the number of channels and the amount of channel interaction. The results indicate that with 12 channels, channel interaction caused by current spread is likely to prevent CI users from perceiving the VTL differences typically found between male and female speakers.


Subject(s)
Acoustic Stimulation/methods , Cochlear Implants , Computer Simulation , Discrimination, Psychological , Speech Acoustics , Speech Perception , Voice Quality , Acoustics , Adult , Age Factors , Audiometry, Speech , Auditory Threshold , Cues , Female , Humans , Male , Middle Aged , Phonetics , Prosthesis Design , Sex Factors , Young Adult
8.
J Acoust Soc Am ; 135(2): EL88-94, 2014 Feb.
Article in English | MEDLINE | ID: mdl-25234920

ABSTRACT

Top-down restoration mechanisms can enhance perception of degraded speech. Even in normal hearing, however, large variability has been observed in how effectively individuals can benefit from these mechanisms. To investigate whether this variability is partially caused by individuals' linguistic and cognitive skills, normal-hearing participants of varying ages were assessed for receptive vocabulary (Peabody Picture Vocabulary Test; PPVT-III-NL), full-scale intelligence (Wechsler Adult Intelligence Scale; WAIS-IV-NL), and top-down restoration of interrupted speech (with silent or noise-filled gaps). Receptive vocabulary was significantly correlated with the other measures, suggesting that linguistic skills are highly involved in the restoration of degraded speech.


Subject(s)
Cognition , Cues , Linguistics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Audiometry, Speech , Female , Humans , Intelligence , Intelligence Tests , Language Tests , Male , Middle Aged , Young Adult
9.
J Acoust Soc Am ; 136(3): 1344, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25190407

ABSTRACT

Normal-hearing (NH) listeners make use of context, speech redundancy, and top-down linguistic processes to perceptually restore inaudible or masked portions of speech. Previous research has shown poorer perception and restoration of interrupted speech in CI users and in NH listeners tested with acoustic simulations of CIs. Three hypotheses were investigated: (1) training with CI simulations of interrupted sentences can teach listeners to use the high-level restoration mechanisms more effectively, (2) the phonemic restoration benefit, an increase in the intelligibility of interrupted sentences once their silent gaps are filled with noise, can be induced with training, and (3) perceptual learning of interrupted sentences can be reflected in clinical speech audiometry. To test these hypotheses, NH listeners were trained with periodically interrupted sentences that were also spectrally degraded with a noise-band vocoder as a CI simulation. Feedback was presented by displaying the sentence text and playing back both the intact and the interrupted CI simulations of the sentence. Training induced no phonemic restoration benefit, and learning did not transfer to speech audiometry measured with words. However, a significant improvement was observed in the overall intelligibility of interrupted, spectrally degraded sentences, with or without filler noise, suggesting possibly better use of restoration mechanisms as a result of training.


Subject(s)
Cues , Learning , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adolescent , Audiometry, Speech , Feedback, Psychological , Female , Humans , Male , Noise/adverse effects , Perceptual Masking , Phonetics , Time Factors , Young Adult
10.
J Acoust Soc Am ; 135(3): EL147-53, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606308

ABSTRACT

Older listeners commonly complain about difficulty in understanding speech in noise. Previous studies have shown an age effect for both speech and steady-noise maskers, with the largest effect for speech maskers. In the present study, speech reception thresholds (SRTs) measured with competing speech, music, and steady-noise maskers differed significantly between young (19 to 26 years) and middle-aged (51 to 63 years) adults. SRT differences were 2.1 dB for competing speech, 0.4 to 1.6 dB for the music maskers, and 0.8 dB for steady noise. The data suggest that aging effects are already evident in middle-aged adults without significant hearing impairment.
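Speech reception thresholds of this kind are usually measured with an adaptive track that converges on 50% intelligibility. The Python sketch below shows one common one-up/one-down rule; the step size, starting SNR, number of trials, and the simulated listener are illustrative assumptions, not the procedure reported in this study.

```python
# Minimal sketch of a one-up/one-down adaptive SRT track: the SNR is lowered
# after a correct response and raised after an incorrect one, converging near
# the 50% intelligibility point. All parameters are illustrative assumptions.
import math
import random

def run_srt_track(prob_correct_at, start_snr=0.0, step_db=2.0, n_trials=30):
    """prob_correct_at(snr) returns the probability of a correct response."""
    snr, reversals, last_dir = start_snr, [], None
    for _ in range(n_trials):
        correct = random.random() < prob_correct_at(snr)
        direction = -1 if correct else +1              # harder after a correct trial
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)                      # record reversal points
        snr += direction * step_db
        last_dir = direction
    # SRT estimate: mean SNR at the reversals (fall back to final SNR if none)
    return sum(reversals) / len(reversals) if reversals else snr

# Simulated listener whose 50% point lies at -6 dB SNR
listener = lambda snr: 1.0 / (1.0 + math.exp(-(snr + 6.0)))
print(f"Estimated SRT: {run_srt_track(listener):.1f} dB SNR")
```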


Subject(s)
Music , Noise/adverse effects , Perceptual Masking , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Audiometry, Speech , Auditory Threshold , Comprehension , Humans , Middle Aged , Sound Spectrography , Speech Intelligibility , Time Factors , Young Adult
11.
J Acoust Soc Am ; 135(3): EL159-65, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606310

ABSTRACT

Musicians have been shown to perceive pitch and timbre cues in speech and music better than non-musicians. It is unclear whether this "musician advantage" persists under conditions of spectro-temporal degradation, as experienced by cochlear-implant (CI) users. In this study, gender categorization was measured in normal-hearing musicians and non-musicians listening to acoustic CI simulations. Recordings of Dutch words were synthesized to systematically vary fundamental frequency, vocal-tract length, or both, creating voices ranging from the female source talker to a synthesized male talker. Results showed an overall musician effect, mainly because musicians weighted fundamental frequency more heavily than non-musicians in the CI simulations.


Subject(s)
Cochlear Implants , Cues , Music , Pitch Perception , Speech Acoustics , Speech Perception , Acoustic Stimulation , Audiometry, Speech , Auditory Threshold , Female , Humans , Male , Noise/adverse effects , Perceptual Masking , Sex Factors , Signal Processing, Computer-Assisted , Sound Spectrography , Speech Intelligibility
12.
Hear Res ; 309: 113-23, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24368138

ABSTRACT

In noisy listening conditions, intelligibility of degraded speech can be enhanced by top-down restoration. Cochlear implant (CI) users have difficulty understanding speech in noisy environments. This could partially be due to reduced top-down restoration of speech, which may be related to the changes that the electrical stimulation imposes on the bottom-up cues. We tested this hypothesis using the phonemic restoration (PhR) paradigm, in which speech interrupted with periodic silent intervals is perceived as illusorily continuous (continuity illusion, or CoI) and becomes more intelligible (PhR benefit) when the interruptions are filled with noise bursts. Using meaningful sentences, both CoI and PhR benefit were measured in CI users and compared with those of normal-hearing (NH) listeners presented with normal speech and with 8-channel noise-band vocoded speech acoustically simulating CIs. CI users showed different patterns in both PhR benefit and CoI compared to the NH results with or without noise-band vocoding. However, they were able to use top-down restoration under certain test conditions. This observation supports the idea that changes in bottom-up cues can impose changes on the top-down processes needed to enhance intelligibility of degraded speech. The knowledge that CI users seem able to use restoration under the right circumstances could be exploited in patient rehabilitation and product development.
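For readers unfamiliar with the CI simulation mentioned above, the Python sketch below outlines a generic N-channel noise-band vocoder: the signal is split into frequency bands, each band's envelope is extracted and used to modulate band-limited noise, and the channels are summed. The filter orders, envelope cutoff, and logarithmic band spacing are assumptions chosen for illustration and may differ from the processing used in this study.

```python
# Minimal sketch of an N-channel noise-band vocoder used as an acoustic CI
# simulation. Band edges, filter orders, and envelope cutoff are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cutoff=160.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)        # log-spaced band edges
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)                      # analysis band
        envelope = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        carrier = sosfiltfilt(band_sos, np.random.randn(len(x)))
        out += envelope * carrier                            # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)               # normalize

# Example: vocode one second of a synthetic harmonic "vowel"
fs = 16000
t = np.arange(fs) / fs
vowel = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 10))
vocoded = noise_vocode(vowel, fs, n_channels=8)
```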


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Speech Perception , Acoustic Stimulation , Adult , Aged , Audiometry, Speech , Case-Control Studies , Cues , Female , Humans , Male , Middle Aged , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Signal Processing, Computer-Assisted , Speech Intelligibility , Young Adult
13.
J Neurosci Methods ; 222: 207-12, 2014 Jan 30.
Article in English | MEDLINE | ID: mdl-24269251

ABSTRACT

BACKGROUND: Amplitude modulation (AM) detection is a measure of temporal processing that has been correlated with cochlear implant (CI) users' speech understanding. For CI users, AM stimuli have been shown to be louder than steady-state (non-AM) stimuli presented at the same reference current level, suggesting that unwanted loudness cues might contribute to CI users' AM sensitivity as measured in a modulation detection task. In this paper, a new method is introduced to dynamically control unwanted AM loudness cues when adaptively measuring modulation detection thresholds (MDTs) in CI users. METHODS: MDTs were adaptively measured in 9 CI subjects using a three-alternative forced-choice procedure, with and without dynamic control of unwanted AM loudness cues. To control for AM loudness cues during the MDT task, the level of the steady-state (non-AM) stimuli was increased to match the loudness of the AM stimulus using a non-linear amplitude scaling function, obtained by first loudness-balancing non-AM stimuli to AM stimuli at various modulation depths. To further protect against unwanted loudness cues, ±0.75 dB of level roving was also applied to all stimuli during the MDT task. RESULTS: Absolute MDTs were generally poorer when unwanted AM loudness cues were controlled. However, the effects of modulation frequency and presentation level on modulation sensitivity were fundamentally unchanged by the availability of AM loudness cues. CONCLUSIONS: The data suggest that the present method of controlling for unwanted AM loudness cues might better represent CI users' MDTs, without changing the fundamental effects of modulation frequency and presentation level on CI users' modulation sensitivity.
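As an acoustic illustration of the kind of stimuli involved, the Python sketch below generates a sinusoidally amplitude-modulated signal with modulation depth m and applies the ±0.75 dB level roving mentioned above. The noise carrier, duration, and modulation frequency are assumptions for the example; the study itself used electric pulse trains delivered directly through the implant.

```python
# Illustrative acoustic analogue of an AM-detection stimulus: a noise carrier
# modulated by 1 + m*sin(2*pi*fm*t), with random level roving of +/-0.75 dB.
# Carrier type and parameter values are illustrative assumptions.
import numpy as np

def am_stimulus(fs=16000, dur=0.5, fm=10.0, depth=0.25, rove_db=0.75):
    t = np.arange(int(dur * fs)) / fs
    carrier = np.random.randn(len(t))                        # noise carrier
    modulator = 1.0 + depth * np.sin(2 * np.pi * fm * t)     # AM with depth m
    rove = np.random.uniform(-rove_db, rove_db)              # level roving in dB
    return carrier * modulator * 10 ** (rove / 20.0)

steady = am_stimulus(depth=0.0)    # steady-state (non-AM) interval
target = am_stimulus(depth=0.25)   # AM interval of a 3AFC trial
```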


Subject(s)
Auditory Perception , Cochlear Implants , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Cues , Deafness/therapy , Electric Stimulation , Female , Humans , Male , Middle Aged , Nonlinear Dynamics , Psychoacoustics , Time Perception
14.
J Acoust Soc Am ; 134(5): 3844-52, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24180793

ABSTRACT

Speech perception skills in cochlear-implant users are often measured with simple speech materials. In children, it is crucial to fully characterize linguistic development, and this requires linguistically more meaningful materials. The authors propose using the comprehension of reflexives and pronouns, as these specific skills are acquired at different ages. According to the literature, normal-hearing children show adult-like comprehension of reflexives at age 5, while their comprehension of pronouns only reaches adult-like levels around age 10. To provide normative data, groups of younger children (5 to 8 years old), older children (10 and 11 years old), and adults were tested under conditions with or without spectral degradation, which simulated cochlear-implant speech transmission with four and eight channels. The results without degradation confirmed the different ages of acquisition of reflexives and pronouns. Adding spectral degradation reduced overall performance; however, it did not change the general pattern observed with non-degraded speech. This finding confirms that these linguistic milestones can also be measured in cochlear-implanted children, despite the reduced quality of sound transmission. Thus, the results of the study have implications for clinical practice, as they could contribute to setting realistic expectations and therapeutic goals for children who receive a cochlear implant.


Subject(s)
Child Language , Speech Acoustics , Speech Intelligibility , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Age Factors , Audiometry, Speech , Child , Child, Preschool , Cochlear Implants , Comprehension , Humans , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Predictive Value of Tests , Sound Spectrography , Young Adult
15.
PLoS One ; 8(3): e58149, 2013.
Article in English | MEDLINE | ID: mdl-23469266

ABSTRACT

The intelligibility of periodically interrupted speech improves once the silent gaps are filled with noise bursts. This improvement has been attributed to phonemic restoration, a top-down repair mechanism that helps intelligibility of degraded speech in daily life. Two hypotheses were investigated using perceptual learning of interrupted speech. If different cognitive processes played a role in restoring interrupted speech with and without filler noise, the two forms of speech would be learned at different rates and with different perceived mental effort. If the restoration benefit were an artificial outcome of using the ecologically invalid stimulus of speech with silent gaps, this benefit would diminish with training. Two groups of normal-hearing listeners were trained, one with interrupted sentences with the filler noise and the other without. Feedback was provided with auditory playback of the unprocessed and processed sentences, as well as a visual display of the sentence text. Training increased overall performance significantly; however, the restoration benefit did not diminish. The increase in intelligibility and the decrease in perceived mental effort were relatively similar between the groups, implying similar cognitive mechanisms for the restoration of the two types of interruptions. Training effects were also generalizable, as both groups improved their performance with the form of speech they were not trained with, and retainable. Because of the null results and the relatively small number of participants (10 per group), further research is needed before firm conclusions can be drawn. Nevertheless, training with interrupted speech seems to be effective, stimulating participants to use top-down restoration more actively and efficiently. This finding further implies the potential of this training approach as a rehabilitative tool for hearing-impaired and elderly populations.
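The two interruption conditions can be sketched as speech gated by a periodic square wave, with the off portions either left silent or filled with noise, as in the Python example below. The 1.5 Hz interruption rate and 50% duty cycle follow the related study in entry 20; the filler-noise level and the stand-in signal are illustrative assumptions.

```python
# Minimal sketch of periodically interrupted speech, with the silent gaps
# optionally filled with noise. Rate/duty cycle follow entry 20; the filler
# level and the stand-in "speech" signal are illustrative assumptions.
import numpy as np

def interrupt(speech, fs, rate_hz=1.5, duty=0.5, fill_noise=False):
    t = np.arange(len(speech)) / fs
    gate = ((t * rate_hz) % 1.0) < duty                      # periodic on/off gate
    out = speech * gate
    if fill_noise:
        rms = np.sqrt(np.mean(speech ** 2))
        noise = np.random.randn(len(speech)) * rms           # speech-level noise
        out = out + noise * (~gate)                          # fill the silent gaps
    return out

fs = 16000
speech = 0.1 * np.random.randn(4 * fs)                       # stand-in for a sentence
silent_gaps = interrupt(speech, fs, fill_noise=False)
noise_filled = interrupt(speech, fs, fill_noise=True)
```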


Subject(s)
Auditory Perception/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Threshold , Female , Hearing Loss, Sensorineural/psychology , Hearing Loss, Sensorineural/therapy , Humans , Noise
16.
J Speech Lang Hear Res ; 56(4): 1075-84, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23275424

ABSTRACT

PURPOSE: Fitting a cochlear implant (CI) for optimal speech perception does not necessarily optimize listening effort. This study aimed to show that listening effort may change between CI processing conditions for which speech intelligibility remains constant. METHOD: Nineteen normal-hearing participants listened to CI simulations with varying numbers of spectral channels. A dual-task paradigm combining an intelligibility task with either a linguistic or nonlinguistic visual response-time (RT) task measured intelligibility and listening effort. The simultaneously performed tasks compete for limited cognitive resources; changes in effort associated with the intelligibility task are reflected in changes in RT on the visual task. A separate self-report scale provided a subjective measure of listening effort. RESULTS: All measures showed significant improvements with increasing spectral resolution up to 6 channels. However, only the RT measure of listening effort continued improving up to 8 channels. The effects were stronger for RTs recorded during listening than for RTs recorded between listening. CONCLUSION: The results suggest that listening effort decreases with increased spectral resolution. Moreover, these improvements are best reflected in objective measures of listening effort, such as RTs on a secondary task, rather than intelligibility scores or subjective effort measures.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness/rehabilitation , Prosthesis Fitting , Speech Discrimination Tests , Speech Perception , Acoustic Stimulation/methods , Adult , Female , Hearing , Humans , Linguistics , Male , Photic Stimulation/methods , Reaction Time , Young Adult
17.
J Speech Lang Hear Res ; 55(6): 1788-801, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22992710

ABSTRACT

PURPOSE: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. METHOD: Identification of the Dutch front vowels /i, y, e, Y/, which share all features other than height and lip-rounding, was measured for congruent and incongruent audiovisual conditions. The audio channel was systematically degraded by adding noise, increasing the reliance on visual cues. RESULTS: The height feature was carried more robustly through the auditory channel and the lip-rounding feature through the visual channel. Hence, congruent audiovisual presentation enhanced identification, while incongruent presentation led to perceptual fusions and thus decreased identification. CONCLUSIONS: Visual cues influence the identification of congruent as well as incongruent audiovisual vowels. Incongruent visual information results in perceptual fusions, demonstrating that the McGurk effect can be instigated by long phonemes such as vowels. This result extends to the incongruent presentation of the less reliably visually perceived height feature. The findings stress the importance of audiovisual congruency in communication devices, such as cochlear implants and videoconferencing tools, where the auditory signal may be degraded.


Subject(s)
Illusions/physiology , Perceptual Masking/physiology , Phonetics , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Cues , Female , Humans , Lipreading , Male , Models, Biological , Noise , Photic Stimulation/methods , Psychoacoustics , Speech/physiology , Young Adult
18.
J Neurosci ; 32(23): 8024-34, 2012 Jun 06.
Article in English | MEDLINE | ID: mdl-22674277

ABSTRACT

Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we simultaneously assessed the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise due to the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.
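As a rough illustration of the dependent neural measure, the Python sketch below estimates narrow-band power around 4 Hz in single-trial EEG epochs with the FFT. The epoch length, bandwidth, and analysis method are assumptions; the study's actual time-frequency pipeline is not reproduced here.

```python
# Illustrative estimate of 4 Hz band power in single-trial EEG epochs via the
# FFT. Epoch length, bandwidth, and method are assumptions, not the authors'
# analysis pipeline.
import numpy as np

def band_power(epochs, fs, f_center=4.0, bandwidth=1.0):
    """epochs: (n_trials, n_samples) array; returns mean band power per trial."""
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(epochs, axis=1)) ** 2 / n
    in_band = (freqs >= f_center - bandwidth / 2) & (freqs <= f_center + bandwidth / 2)
    return spectra[:, in_band].mean(axis=1)

fs, n_trials = 250, 40
epochs = np.random.randn(n_trials, 2 * fs)                   # 2-second synthetic epochs
power_4hz = band_power(epochs, fs)                           # one value per trial
```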


Subject(s)
Auditory Cortex/physiology , Hearing/physiology , Noise , Speech Perception/physiology , Acoustic Stimulation , Adaptation, Psychological/physiology , Adult , Data Interpretation, Statistical , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Humans , Illusions/psychology , Male , Principal Component Analysis , Psychomotor Performance/physiology , Psychophysics , Young Adult
19.
J Acoust Soc Am ; 128(4): EL169-74, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968321

ABSTRACT

The brain can restore missing speech segments using linguistic knowledge and context. The phonemic restoration effect is commonly quantified by the increase in intelligibility of interrupted speech when the silent gaps are filled with noise bursts. In normal hearing, the restoration effect is negatively correlated with the baseline scores for interrupted speech; listeners with poorer baseline performance show more benefit from restoration. A reanalysis of data from Baskent et al. [(2010). Hear. Res. 260, 54-62] showed that the correlations for listeners with mild and moderate hearing impairment differed from those for normal hearing. This analysis further indicates that hearing impairment may affect top-down restoration of speech.


Subject(s)
Hearing Loss, Sensorineural/psychology , Phonetics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold , Hearing Loss, Sensorineural/physiopathology , Humans , Middle Aged , Young Adult
20.
Hear Res ; 270(1-2): 127-33, 2010 Dec 01.
Article in English | MEDLINE | ID: mdl-20817081

ABSTRACT

Recognition of periodically interrupted sentences (interruption rate 1.5 Hz, 50% duty cycle) was investigated under conditions of spectral degradation, implemented with a noise-band vocoder, with and without additional unprocessed low-pass filtered speech (cutoff frequency 500 Hz). Intelligibility of interrupted speech decreased with increasing spectral degradation. For all spectral degradation conditions, however, adding the unprocessed low-pass filtered speech enhanced intelligibility. The improvement at 4 and 8 channels was larger than at 16 and 32 channels (on average 19% and 8%, respectively). The Articulation Index predicted an improvement of 0.09 on a scale from 0 to 1. Thus, the improvement in the poorest spectral degradation conditions was larger than would be expected from the additional speech information alone. The results therefore imply that fine temporal cues from the unprocessed low-frequency speech, such as additional voice pitch cues, helped perceptual integration of temporally interrupted and spectrally degraded speech, especially when the spectral degradation was severe. Considering the vocoder processing as a cochlear-implant simulation, where implant users' performance is closest to 4- and 8-channel vocoder performance, the results support an additional benefit of low-frequency acoustic input in combined electric-acoustic stimulation for the perception of temporally degraded speech.
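The electric-acoustic combination described above can be sketched as adding unprocessed speech, low-pass filtered at 500 Hz, to the vocoded signal, as in the Python example below. The filter order is an assumption, and the random arrays merely stand in for real speech and for the output of a vocoder such as the one sketched under entry 12.

```python
# Minimal sketch of combining a vocoded (CI-simulation) signal with unprocessed
# low-pass filtered speech (cutoff 500 Hz). Filter order and the stand-in
# signals are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def add_lowpass_speech(original, vocoded, fs, cutoff=500.0, order=4):
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    lowpass = sosfiltfilt(sos, original)                     # unprocessed speech < 500 Hz
    return vocoded + lowpass                                 # combined EAS-like signal

fs = 16000
original = 0.1 * np.random.randn(2 * fs)                     # stand-in for clean speech
vocoded = 0.1 * np.random.randn(2 * fs)                      # stand-in for vocoder output
combined = add_lowpass_speech(original, vocoded, fs)
```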


Subject(s)
Cochlear Implants , Cues , Pitch Perception , Recognition, Psychology , Signal Processing, Computer-Assisted , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Female , Humans , Male , Sound Spectrography , Time Factors , Voice Quality , Young Adult