Results 1 - 20 of 82
1.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subject(s)
Auditory Perception/physiology , Cochlear Implants , Critical Period, Psychological , Language Development , Animals , Auditory Perceptual Disorders/etiology , Brain/growth & development , Cochlear Implantation , Comprehension , Cues , Deafness/congenital , Deafness/physiopathology , Deafness/psychology , Deafness/surgery , Equipment Design , Humans , Language Development Disorders/etiology , Language Development Disorders/prevention & control , Learning/physiology , Neuronal Plasticity , Photic Stimulation
2.
Ear Hear ; 45(4): 969-984, 2024.
Article in English | MEDLINE | ID: mdl-38472134

ABSTRACT

OBJECTIVES: The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN: Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS: The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. 
Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately what it was in the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS: Synchronizing AGCs allowed for listeners to mitigate front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficiencies which were observed when AGCs were not engaged, and which are therefore unrelated to AGC compression.
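The level mechanics at issue can be illustrated with a toy static-compression model (not the actual implant AGC; the threshold and compression ratio below are arbitrary assumptions): independently compressed ears squeeze the louder ear more than the quieter one, shrinking the interaural level difference (ILD), whereas a shared gain preserves it.

```python
def agc_gain_db(level_db, threshold_db=60.0, ratio=3.0):
    """Gain (dB) of a simple static compressor above threshold."""
    if level_db <= threshold_db:
        return 0.0
    # Output grows at 1/ratio above threshold, so the gain is negative.
    return (level_db - threshold_db) * (1.0 / ratio - 1.0)

def output_ild(left_db, right_db, synchronized):
    """ILD (dB) at the processor outputs for a given input level pair."""
    if synchronized:
        # Both processors apply the gain computed from the louder ear.
        g = agc_gain_db(max(left_db, right_db))
        gl = gr = g
    else:
        gl = agc_gain_db(left_db)
        gr = agc_gain_db(right_db)
    return (left_db + gl) - (right_db + gr)

# A source to the left: 75 dB at the left ear, 65 dB at the right (10 dB ILD).
ild_independent = output_ild(75, 65, synchronized=False)  # ILD is compressed
ild_synchronized = output_ild(75, 65, synchronized=True)  # ILD is preserved
```

With these assumed settings the 10 dB input ILD shrinks to about 3.3 dB under independent AGCs but survives intact when the gains are synchronized, which is the distortion the study's synchronization strategy is designed to remove.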


Subject(s)
Cochlear Implants , Sound Localization , Humans , Middle Aged , Male , Female , Aged , Adult , Cochlear Implantation , Head Movements/physiology , Noise , Aged, 80 and over
3.
Ear Hear ; 41(6): 1660-1674, 2020.
Article in English | MEDLINE | ID: mdl-33136640

ABSTRACT

OBJECTIVES: We investigated the ability of single-sided deaf listeners implanted with a cochlear implant (SSD-CI) to (1) determine the front-back and left-right location of sound sources presented from loudspeakers surrounding the listener and (2) use small head rotations to further improve their localization performance. The resulting behavioral data were used for further analyses investigating the value of so-called "monaural" spectral shape cues for front-back sound source localization. DESIGN: Eight SSD-CI patients were tested with their cochlear implant (CI) on and off. Eight normal-hearing (NH) listeners, with one ear plugged during the experiment, and another group of eight NH listeners, with neither ear plugged, were also tested. Gaussian noises of 3-sec duration were band-pass filtered to 2-8 kHz and presented from 1 of 6 loudspeakers surrounding the listener, spaced 60° apart. Perceived sound source localization was tested under conditions where the patients faced forward with the head stationary, and under conditions where they rotated their heads (the permitted rotation range is given in the full-text article). RESULTS: (1) Under stationary listener conditions, unilaterally-plugged NH listeners and SSD-CI listeners (with their CIs both on and off) were nearly at chance in determining the front-back location of high-frequency sound sources. (2) Allowing rotational head movements improved performance in both the front-back and left-right dimensions for all listeners. (3) For SSD-CI patients with their CI turned off, head rotations substantially reduced front-back reversals, and the combination of turning on the CI with head rotations led to near-perfect resolution of front-back sound source location. (4) Turning on the CI also improved left-right localization performance. (5) As expected, NH listeners with both ears unplugged localized to the correct front-back and left-right hemifields both with and without head movements. 
CONCLUSIONS: Although SSD-CI listeners demonstrate a relatively poor ability to distinguish the front-back location of sound sources when their head is stationary, their performance is substantially improved with head movements. Most of this improvement occurs when the CI is off, suggesting that the NH ear does most of the "work" in this regard, though some additional gain is introduced with turning the CI on. During head turns, these listeners appear to primarily rely on comparing changes in head position to changes in monaural level cues produced by the direction-dependent attenuation of high-frequency sounds that result from acoustic head shadowing. In this way, SSD-CI listeners overcome limitations to the reliability of monaural spectral and level cues under stationary conditions. SSD-CI listeners may have learned, through chronic monaural experience before CI implantation, or with the relatively impoverished spatial cues provided by their CI-implanted ear, to exploit the monaural level cue. Unilaterally-plugged NH listeners were also able to use this cue during the experiment to realize approximately the same magnitude of benefit from head turns just minutes after plugging, though their performance was less accurate than that of the SSD-CI listeners, both with and without their CI turned on.
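The monaural head-shadow cue described above can be sketched with a toy level model (a hypothetical cosine head-shadow pattern with invented base and shadow values, not measured data): a front and a back source produce identical levels at a stationary ear but diverge as soon as the head turns, because the rotation moves one source toward the ear axis and the other away from it.

```python
import math

def ear_level_db(source_az, head_az, ear_az=90.0, base_db=60.0, shadow_db=6.0):
    """Toy monaural level at the (left) ear: loudest when the source lies
    on the ear axis, attenuated by head shadow on the far side.
    Azimuths in degrees; 0 = straight ahead of the head."""
    # Angle between the source and the ear axis, wrapped to (-180, 180].
    rel = (source_az - head_az - ear_az + 180.0) % 360.0 - 180.0
    return base_db + shadow_db * math.cos(math.radians(rel))

# Head facing forward: a front (0°) and a back (180°) source give the
# same level at the ear, so front and back are indistinguishable.
front_still = ear_level_db(0, 0)
back_still = ear_level_db(180, 0)

# Rotate the head 30° toward the left ear: the two sources now diverge,
# and the sign of the level change reveals front vs. back.
front_turned = ear_level_db(0, 30)
back_turned = ear_level_db(180, 30)
```

Under this assumed pattern the back source gets louder and the front source softer as the head turns toward the hearing ear, which is the direction-dependent cue the SSD-CI listeners appear to exploit.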


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Head Movements , Humans , Reproducibility of Results
4.
Audiol Neurootol ; 24(5): 264-269, 2019.
Article in English | MEDLINE | ID: mdl-31661682

ABSTRACT

OBJECTIVE: Our aim was to determine the effect of acute changes in cochlear place of stimulation on cochlear implant (CI) sound quality. DESIGN: In Experiment 1, 5 single-sided deaf (SSD) listeners fitted with a long (28-mm) electrode array were tested. Basal shifts in place of stimulation were implemented by turning off the most apical electrodes and reassigning the filters to more basal electrodes. In Experiment 2, 2 SSD patients fitted with a shorter (16.5-mm) electrode array were tested. Both basal and apical shifts in place of stimulation were implemented. The apical shifts were accomplished by current steering and creating a virtual place of stimulation more apical than that of the most apical electrode. RESULTS: Listeners matched basal shifts by shifting, in the normal-hearing ear, the overall spectrum up in frequency and/or increasing voice pitch (F0). Listeners matched apical shifts by shifting down the overall frequency spectrum in the normal-hearing ear. CONCLUSION: One factor determining CI sound quality is the location of stimulation along the cochlear partition.
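The link between place of stimulation and the matched frequency shift is commonly described with the Greenwood place-frequency map. A minimal sketch, using the standard human constants (A = 165.4, a = 2.1, k = 0.88, with distance expressed as a fraction of a 35-mm cochlea), shows why moving the stimulation site basally should be matched by an upward spectral shift; the example distances are illustrative, not the study's electrode positions.

```python
def greenwood_cf_hz(distance_from_apex_mm, length_mm=35.0,
                    A=165.4, a=2.1, k=0.88):
    """Greenwood place-frequency map for the human cochlea: the
    characteristic frequency (Hz) at a given distance from the apex."""
    x = distance_from_apex_mm / length_mm  # fraction of cochlear length
    return A * (10.0 ** (a * x) - k)

# A basal shift raises the characteristic frequency at the stimulated
# site, consistent with listeners matching basal shifts by moving the
# spectrum (and F0) upward in the normal-hearing ear.
apical_site = greenwood_cf_hz(7.0)    # hypothetical site ~7 mm from apex
shifted_site = greenwood_cf_hz(10.0)  # same filter moved 3 mm more basal
```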


Subject(s)
Auditory Perception/physiology , Cochlea/surgery , Cochlear Implantation , Cochlear Implants , Deafness/rehabilitation , Acoustic Stimulation , Female , Hearing Tests , Humans , Male , Middle Aged
5.
Ear Hear ; 40(3): 501-516, 2019.
Article in English | MEDLINE | ID: mdl-30285977

ABSTRACT

OBJECTIVE: The objectives of this study were to assess the effectiveness of various measures of speech understanding in distinguishing performance differences between adult bimodal and bilateral cochlear implant (CI) recipients and to provide a preliminary evidence-based tool guiding clinical decisions regarding bilateral CI candidacy. DESIGN: This study used a multiple-baseline, cross-sectional design investigating speech recognition performance for 85 experienced adult CI recipients (49 bimodal, 36 bilateral). Speech recognition was assessed in a standard clinical test environment with a single loudspeaker using the minimum speech test battery for adult CI recipients as well as with an R-SPACE 8-loudspeaker, sound-simulation system. All participants were tested in three listening conditions for each measure including each ear alone as well as in the bilateral/bimodal condition. In addition, we asked each bimodal listener to provide a yes/no answer to the question, "Do you think you need a second CI?" RESULTS: This study yielded three primary findings: (1) there were no significant differences between bimodal and bilateral CI performance or binaural summation on clinical measures of speech recognition, (2) an adaptive speech recognition task in the R-SPACE system revealed significant differences in performance and binaural summation between bimodal and bilateral CI users, with bilateral CI users achieving significantly better performance and greater summation, and (3) the patient's answer to the question, "Do you think you need a second CI?" held high sensitivity (100% hit rate) for identifying likely bilateral CI candidates and moderately high specificity (77% correct rejection rate) for correctly identifying listeners best suited with a bimodal hearing configuration. 
CONCLUSIONS: Clinics cannot rely on current clinical measures of speech understanding, with a single loudspeaker, to determine bilateral CI candidacy for adult bimodal listeners nor to accurately document bilateral benefit relative to a previous bimodal hearing configuration. Speech recognition in a complex listening environment, such as R-SPACE, is a sensitive and appropriate measure for determining bilateral CI candidacy and also likely for documenting bilateral benefit relative to a previous bimodal configuration. In the absence of an available R-SPACE system, asking patients whether they think they need a second CI is a highly sensitive measure, which may prove clinically useful.


Subject(s)
Cochlear Implantation/methods , Hearing Aids , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Sensorineural/rehabilitation , Speech Perception , Adult , Aged , Aged, 80 and over , Clinical Decision-Making , Cochlear Implants , Female , Humans , Male , Middle Aged , Patient Reported Outcome Measures , Young Adult
6.
Audiol Neurootol ; 23(5): 270-276, 2018.
Article in English | MEDLINE | ID: mdl-30537753

ABSTRACT

OBJECTIVE: Our primary aim was to determine, in a simulation of a crowded restaurant, the value to speech understanding of (i) a unilateral cochlear implant (CI), (ii) a CI plus CROS (contralateral routing of signals) aid system and (iii) bilateral CIs when tested with and without beamforming microphones. DESIGN: The listeners were 7 patients who had used bilateral CIs for an average of 9 years. The listeners were tested with three device configurations (bilateral CI, unilateral CI + CROS, and unilateral CI), two signal processing conditions (without and with beamformers) and with speech either from +90°, -90°, or from the front. Speech understanding scores for the TIMIT sentences were obtained in the 8-loudspeaker R-SPACE™ test environment, which simulates listening in a crowded restaurant. RESULTS: In the unilateral condition, speech understanding, relative to speech directed to the CI ear, fell by 17% when speech was from the front and by 28% when speech was to the side opposite the CI. These deficits were overcome with both CI-CROS and bilateral CIs, and scores for the two devices did not differ significantly for any location of speech input. Beamformer microphones improved speech understanding for speech from the front and depressed speech understanding for speech from the sides for all device configurations. Patients with bilateral CIs and beamformers achieved slightly, but significantly, higher scores for speech from the front than patients with CI-CROS and beamformers. CONCLUSIONS: CI-CROS is a valuable addition to the hardware options available to patients fit with a single CI. For patients fit with bilateral CIs, bilateral beamformers are a valuable addition in the condition of speech coming from in front of the listener. The small differences in performance in the CI-CROS and bilateral CI conditions suggest that patient preference for bilateral CIs is based largely on factors other than speech understanding in noise.
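The front-favoring, side-suppressing behavior of a beamformer can be sketched with a toy first-order differential (cardioid) two-microphone model in a free field; the spacing and frequency below are illustrative assumptions, not the parameters of any clinical processor.

```python
import math

def cardioid_gain(theta_deg, mic_spacing_m=0.01, freq_hz=2000.0, c=343.0):
    """Response magnitude of a two-mic differential beamformer: the rear
    mic signal is delayed by the inter-mic travel time and subtracted,
    placing a null directly behind the listener (theta = 180°)."""
    external_delay = mic_spacing_m * math.cos(math.radians(theta_deg)) / c
    internal_delay = mic_spacing_m / c
    phase = 2.0 * math.pi * freq_hz * (external_delay + internal_delay)
    # Magnitude of the difference of two unit phasors.
    return abs(2.0 * math.sin(phase / 2.0))

front = cardioid_gain(0.0)    # maximum sensitivity toward the front
side = cardioid_gain(90.0)    # reduced sensitivity from the side
rear = cardioid_gain(180.0)   # null toward the rear
```

This monotonic front > side > rear pattern is consistent with the result that beamformers helped for frontal speech and hurt for speech from the sides.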


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization/physiology , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Aged , Female , Humans , Male , Middle Aged , Noise , Restaurants
7.
Ear Hear ; 39(6): 1224-1231, 2018.
Article in English | MEDLINE | ID: mdl-29664750

ABSTRACT

OBJECTIVES: We report on the ability of patients fit with bilateral cochlear implants (CIs) to distinguish the front-back location of sound sources both with and without head movements. At issue was (i) whether CI patients are more prone to front-back confusions than normal hearing listeners for wideband, high-frequency stimuli; and (ii) whether CI patients can utilize dynamic binaural difference cues, in tandem with their own head rotation, to resolve these front-back confusions. Front-back confusions offer a binary metric to gain insight into CI patients' ability to localize sound sources under dynamic conditions not generally measured in laboratory settings where both the sound source and patient are static. DESIGN: Three-second duration Gaussian noise samples were bandpass filtered to 2 to 8 kHz and presented from one of six loudspeaker locations located 60° apart, surrounding the listener. Perceived sound source localization for seven listeners bilaterally implanted with CIs was tested under conditions where the patient faced forward and did not move their head and under conditions where they were encouraged to moderately rotate their head. The same conditions were repeated for 5 of the patients with one implant turned off (the implant at the better ear remained on). A control group of normal hearing listeners was also tested for a baseline of comparison. RESULTS: All seven CI patients demonstrated a high rate of front-back confusions when their head was stationary (41.9%). The proportion of front-back confusions was reduced to 6.7% when these patients were allowed to rotate their head within a range of approximately ±30°. When only one implant was turned on, listeners' localization acuity suffered greatly. In these conditions, head movement or the lack thereof made little difference to listeners' performance. 
CONCLUSIONS: Bilateral implantation can offer CI listeners the ability to track dynamic auditory spatial difference cues and compare these changes to changes in their own head position, resulting in a reduced rate of front-back confusions. This suggests that, for these patients, estimates of auditory acuity based solely on static laboratory settings may underestimate their real-world localization abilities.
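The front-back reversal rate used as the metric here can be expressed as a simple hemifield comparison between target and response azimuths; this is a sketch of the general scoring idea, not the study's exact analysis code, and the example responses are invented.

```python
def is_front_back_reversal(target_az, response_az):
    """True when the response lands in the opposite front/back hemifield
    from the target. Azimuths in degrees, 0 = front; ±90 lie on the
    interaural axis and belong to neither hemifield."""
    def hemifield(az):
        az = (az + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
        if abs(az) == 90.0:
            return None  # on the interaural axis
        return "front" if abs(az) < 90.0 else "back"
    t, r = hemifield(target_az), hemifield(response_az)
    return t is not None and r is not None and t != r

# Invented (target, response) pairs: two reversals out of four trials.
trials = [(30, 150), (30, 30), (150, 150), (-30, -150)]
fbr_rate = sum(is_front_back_reversal(t, r) for t, r in trials) / len(trials)
```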


Subject(s)
Auditory Perception , Cochlear Implants , Head Movements , Sound Localization , Aged , Cues , Female , Hearing , Humans , Male , Middle Aged
8.
Audiol Neurootol ; 21(3): 127-31, 2016.
Article in English | MEDLINE | ID: mdl-27077663

ABSTRACT

OBJECTIVE: Our primary aim was to determine whether listeners in the following patient groups achieve localization accuracy within the 95th percentile of accuracy shown by younger or older normal-hearing (NH) listeners: (1) hearing impaired with bilateral hearing aids, (2) bimodal cochlear implant (CI), (3) bilateral CI, (4) hearing preservation CI, (5) single-sided deaf CI and (6) combined bilateral CI and bilateral hearing preservation. DESIGN: The listeners included 57 young NH listeners, 12 older NH listeners, 17 listeners fit with hearing aids, 8 bimodal CI listeners, 32 bilateral CI listeners, 8 hearing preservation CI listeners, 13 single-sided deaf CI listeners and 3 listeners with bilateral CIs and bilateral hearing preservation. Sound source localization was assessed in a sound-deadened room with 13 loudspeakers arrayed in a 180-degree arc. RESULTS: The root mean square (rms) error for the NH listeners was 6 degrees. The 95th percentile was 11 degrees. Nine of 16 listeners with bilateral hearing aids achieved scores within the 95th percentile of normal. Only 1 of 64 CI patients achieved a score within that range. Bimodal CI listeners scored at a level near chance, as did the listeners with a single CI or a single NH ear. Listeners with (1) bilateral CIs, (2) hearing preservation CIs, (3) single-sided deaf CIs and (4) both bilateral CIs and bilateral hearing preservation, all showed rms error scores within a similar range (mean scores between 20 and 30 degrees of error). CONCLUSION: Modern CIs do not restore a normal level of sound source localization for CI listeners with access to sound information from two ears.
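The rms error statistic used to compare the groups is straightforward to compute; the sketch below uses made-up target/response data purely to illustrate the contrast between near-normal and CI-like accuracy.

```python
import math

def rms_error_deg(targets, responses):
    """Root-mean-square sound-source localization error in degrees."""
    errors = [r - t for t, r in zip(targets, responses)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Invented data: small errors, as for normal hearing, vs. larger errors
# of the size reported for the CI groups (20-30 degrees).
nh_error = rms_error_deg([-60, -30, 0, 30, 60], [-55, -28, 3, 33, 57])
ci_error = rms_error_deg([-60, -30, 0, 30, 60], [-30, -45, 20, 5, 80])
```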


Subject(s)
Cochlear Implantation/methods , Cochlear Implants , Hearing Aids , Hearing Loss/rehabilitation , Sound Localization , Adult , Aged , Auditory Perception , Case-Control Studies , Female , Humans , Male , Middle Aged , Speech Perception , Young Adult
9.
Audiol Neurootol ; 20(3): 166-71, 2015.
Article in English | MEDLINE | ID: mdl-25832907

ABSTRACT

The aim of this article was to study sound source localization by cochlear implant (CI) listeners with low-frequency (LF) acoustic hearing in both the operated ear and in the contralateral ear. Eight CI listeners had symmetrical LF acoustic hearing and 4 had asymmetrical LF acoustic hearing. The effects of two variables were assessed: (i) the symmetry of the LF thresholds in the two ears and (ii) the presence/absence of bilateral acoustic amplification. Stimuli consisted of low-pass, high-pass, and wideband noise bursts presented in the frontal horizontal plane. Localization accuracy was 23° of error for the symmetrical listeners and 76° of error for the asymmetrical listeners. The presence of a unilateral CI used in conjunction with bilateral LF acoustic hearing does not impair sound source localization accuracy, but amplification for acoustic hearing can be detrimental to sound source localization accuracy.


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Loss, Sensorineural/physiopathology , Sound Localization/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Aged , Female , Hearing Tests , Humans , Male , Middle Aged
10.
Audiol Neurootol ; 20(3): 183-8, 2015.
Article in English | MEDLINE | ID: mdl-25896774

ABSTRACT

In this report, we used filtered noise bands to constrain listeners' access to interaural level differences (ILDs) and interaural time differences (ITDs) in a sound source localization task. The samples of interest were listeners with single-sided deafness (SSD) who had been fit with a cochlear implant in the deafened ear (SSD-CI). The comparison samples included listeners with normal hearing and bimodal hearing, i.e., with a cochlear implant in 1 ear and low-frequency acoustic hearing in the other ear. The results indicated that (i) sound source localization was better in the SSD-CI condition than in the SSD condition, (ii) SSD-CI patients rely on ILD cues for sound source localization, (iii) SSD-CI patients show functional localization abilities within 1-3 months after device activation and (iv) SSD-CI patients show better sound source localization than bimodal CI patients but, on average, poorer localization than normal-hearing listeners. One SSD-CI patient showed a level of localization within normal limits. We provide an account for the relative localization abilities of the groups by reference to the differences in access to ILD cues.


Subject(s)
Cochlear Implants , Hearing Loss, Unilateral/physiopathology , Sound Localization/physiology , Acoustic Stimulation , Adult , Cues , Female , Humans , Male , Middle Aged
11.
Audiol Neurootol ; 19(1): 57-71, 2014.
Article in English | MEDLINE | ID: mdl-24356514

ABSTRACT

The purpose of this study was to examine the availability of binaural cues for adult, bilateral cochlear implant (CI) patients, bimodal patients and hearing preservation patients using a multiple-baseline, observational study design. Speech recognition was assessed using the Bamford-Kowal-Bench Speech-in-Noise (BKB-SIN) test as well as the AzBio sentences [Spahr AJ, et al: Ear Hear 2012;33:112-117] presented in a multi-talker babble at a +5 dB signal-to-noise ratio (SNR). Test conditions included speech at 0° with noise presented at 0° (S0N0), 90° (S0N90) and 270° (S0N270). Estimates of summation, head shadow (HS), squelch and spatial release from masking (SRM) were calculated. Though none of the subject groups consistently showed access to binaural cues, the hearing preservation patients exhibited a significant correlation between summation and squelch whereas the bilateral and bimodal participants did not. That is to say, the two effects associated with binaural hearing - summation and squelch - were positively correlated only for the listeners with bilateral acoustic hearing. This finding provides evidence for the supposition that implant recipients with bilateral acoustic hearing have access to binaural cues, which should, in theory, provide greater benefit in noisy listening environments. It is likely, however, that the chosen test environment negatively affected the outcomes. Specifically, the spatially separated noise conditions directed noise toward the microphone (mic) port of the behind-the-ear (BTE) hearing aid and implant processor. Thus, it is possible that in more realistic listening environments for which the diffuse noise is not directed toward the processor/hearing aid mic, hearing preservation patients have binaural cues for improved speech understanding.
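The four effects named above are difference measures computed from condition-wise scores. The sketch below follows one common set of definitions (conventions vary across studies, so treat these as an assumption) with invented percent-correct scores; conditions are indexed by which ear(s) are active and the noise azimuth (90° = right, 270° = left).

```python
def binaural_effects(scores):
    """Common binaural-benefit measures from percent-correct scores,
    indexed by (ears, noise_azimuth) with ears in {'left','right','both'}.
    Definitions follow one frequently used convention; details differ
    across studies."""
    # Summation: benefit of two ears over one when speech and noise
    # are co-located in front.
    summation = scores[("both", 0)] - scores[("right", 0)]
    # Head shadow: benefit of the noise being on the far side of the head.
    head_shadow = scores[("right", 270)] - scores[("right", 90)]
    # Squelch: benefit of adding the ear nearer the noise.
    squelch = scores[("both", 90)] - scores[("left", 90)]
    # Spatial release from masking: benefit of separating speech and noise.
    srm = scores[("both", 90)] - scores[("both", 0)]
    return {"summation": summation, "head_shadow": head_shadow,
            "squelch": squelch, "srm": srm}

# Invented example scores (percent correct) for illustration only.
example = {("right", 0): 50, ("both", 0): 55, ("right", 90): 40,
           ("right", 270): 65, ("left", 90): 60, ("both", 90): 66}
effects = binaural_effects(example)
```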


Subject(s)
Auditory Threshold/physiology , Cochlear Implantation , Cochlear Implants , Hearing Loss, Sensorineural/physiopathology , Speech Perception/physiology , Adult , Aged , Cues , Female , Hearing Loss, Sensorineural/surgery , Hearing Tests , Humans , Male , Middle Aged , Sound Localization/physiology , Young Adult
12.
Audiol Neurootol ; 19(4): 234-8, 2014.
Article in English | MEDLINE | ID: mdl-24992987

ABSTRACT

The aim of this project was to determine for bimodal cochlear implant (CI) patients, i.e. patients with low-frequency hearing in the ear contralateral to the implant, how speech understanding varies as a function of the difference in level between the CI signal and the acoustic signal. The data suggest that (1) acoustic signals perceived as significantly softer than a CI signal can contribute to speech understanding in the bimodal condition, (2) acoustic signals that are slightly softer than, or balanced with, a CI signal provide the largest benefit to speech understanding, and (3) acoustic signals presented at maximum comfortable loudness levels provide nearly as much benefit as signals that have been balanced with a CI signal.


Subject(s)
Cochlear Implants , Deafness/rehabilitation , Signal Detection, Psychological , Speech Perception , Acoustic Stimulation , Aged , Cochlear Implantation , Humans , Middle Aged , Noise
13.
Ear Hear ; 35(4): 410-7, 2014.
Article in English | MEDLINE | ID: mdl-24950254

ABSTRACT

OBJECTIVE: The aims of this study were to (1) detect the presence and edge frequency (fe) of a cochlear dead region in the ear with residual acoustic hearing for bimodal cochlear implant users, and (2) determine whether amplification based on the presence or absence of a dead region would improve speech understanding and sound quality. DESIGN: Twenty-two listeners with a cochlear implant in one ear and residual acoustic hearing in the nonimplanted ear were tested. Eleven listeners had a cochlear dead region in the acoustic-hearing ear and 11 did not. Dead regions were assessed with the threshold-equalizing noise (TEN) and the sweeping noise, psychophysical tuning curve tests. Speech understanding was assessed with monosyllabic words and the AzBio sentences at +10 dB signal-to-noise ratio. Speech- and music-quality judgments were obtained with the Judgment of Sound Quality questionnaire. RESULTS: Using shifted tips of the psychophysical tuning curve as a basis for diagnosis, the TEN had high sensitivity (0.91) and poor specificity (0.55) for this population. The value of fe was lower when estimated with the sweeping noise, psychophysical tuning curve test than with the TEN test. For the listeners with cochlear dead regions, speech understanding, speech quality and music quality were best when no amplification was applied for frequencies within the dead region. For listeners without dead regions, speech understanding was best with full-bandwidth amplification and was reduced when amplification was not applied when the audiometric threshold exceeded 80 dB HL. CONCLUSION: The data from this study suggest that, to improve bimodal benefit for listeners who combine electric and acoustic stimulation, audiologists should routinely test for the presence of cochlear dead regions and determine amplification bandwidth accordingly.
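The reported sensitivity and specificity of the TEN test are computed against the tuning-curve reference standard in the usual way; the sketch below uses hypothetical per-ear outcomes constructed only so that the counts reproduce the reported 0.91 and 0.55 values (10/11 hits, 6/11 correct rejections).

```python
def sensitivity_specificity(test_positive, condition_present):
    """Sensitivity (hit rate) and specificity (correct-rejection rate)
    of a diagnostic test against a reference standard."""
    pairs = list(zip(test_positive, condition_present))
    tp = sum(t and c for t, c in pairs)          # hits
    fn = sum((not t) and c for t, c in pairs)    # misses
    tn = sum((not t) and (not c) for t, c in pairs)  # correct rejections
    fp = sum(t and (not c) for t, c in pairs)    # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data shaped like the reported outcome: 11 ears with a
# dead region (reference: shifted PTC tips) and 11 without.
ten_positive = [True] * 10 + [False] + [True] * 5 + [False] * 6
dead_region = [True] * 11 + [False] * 11
sens, spec = sensitivity_specificity(ten_positive, dead_region)
```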


Subject(s)
Acoustic Stimulation/methods , Cochlea/physiopathology , Cochlear Implantation/methods , Electric Stimulation Therapy/methods , Hearing Aids , Hearing Loss, Sensorineural/surgery , Adult , Aged , Aged, 80 and over , Auditory Threshold , Cochlear Implants , Female , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Middle Aged , Signal-To-Noise Ratio
14.
Ear Hear ; 35(6): 633-40, 2014.
Article in English | MEDLINE | ID: mdl-25127322

ABSTRACT

OBJECTIVES: The aims of this study were (i) to determine the magnitude of the interaural level differences (ILDs) that remain after cochlear implant (CI) signal processing and (ii) to relate the ILDs to the pattern of errors for sound source localization on the horizontal plane. DESIGN: The listeners were 16 bilateral CI patients fitted with MED-EL CIs and 34 normal-hearing listeners. The stimuli were wideband, high-pass, and low-pass noise signals. ILDs were calculated by passing signals, filtered by head-related transfer functions (HRTFs), through a MATLAB simulation of MED-EL signal processing. RESULTS: For the wideband and high-pass signals, maximum ILDs of 15 to 17 dB in the input signal were reduced to 3 to 4 dB after CI signal processing. For the low-pass signal, ILDs were reduced to 1 to 2 dB. For wideband and high-pass signals, the largest ILDs were between 0.4 and 0.7 dB for the ±15 degree speaker locations; between 0.9 and 1.3 dB for the ±30 degree speaker locations; between 2.4 and 2.9 dB for the ±45 degree speaker locations; between 3.2 and 4.1 dB for the ±60 degree speaker locations; and between 2.7 and 3.4 dB for the ±75 degree speaker locations. All of the CI patients in all the stimulus conditions showed poorer localization than the normal-hearing listeners. Localization accuracy for the CI patients was best for the wideband and high-pass signals and was poorest for the low-pass signal. CONCLUSIONS: Localization accuracy was related to the magnitude of the ILD cues available to the normal-hearing listeners and CI patients. The pattern of localization errors for the CI patients was related to the magnitude of the ILD differences among loudspeaker locations. 
The error patterns for the wideband and high-pass signals suggest that, for the conditions of this experiment, patients on average sorted signals on the horizontal plane into four sectors: on each side of the midline, one sector including the 0, 15, and possibly 30 degree speaker locations, and a second sector spanning the 45 to 75 degree speaker locations. The resolution within a sector was relatively poor.
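A broadband ILD of the kind analyzed here is simply the level difference between the two ear signals. In this sketch the HRTF filtering is replaced by ear-specific scalar gains (an assumption for illustration only; a real analysis would convolve the stimulus with measured HRTFs and then run it through the processing simulation).

```python
import math
import random

def ild_db(left, right):
    """Broadband ILD: RMS level of the left signal relative to the
    right, in dB."""
    def rms(x):
        return math.sqrt(sum(s * s for s in x) / len(x))
    return 20.0 * math.log10(rms(left) / rms(right))

# Toy stand-in for HRTF filtering: the same Gaussian noise scaled by
# different ear gains, as for a source well off to the left.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1000)]
left = [0.9 * s for s in noise]
right = [0.3 * s for s in noise]
input_ild = ild_db(left, right)  # 20*log10(3) ≈ 9.54 dB before processing
```

Comparing such input ILDs with those measured at the simulation output is what reveals the 15-17 dB to 3-4 dB compression reported above.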


Subject(s)
Cochlear Implantation/methods , Deaf-Blind Disorders/rehabilitation , Signal Processing, Computer-Assisted , Sound Localization , Adult , Aged , Case-Control Studies , Cochlear Implants , Deaf-Blind Disorders/physiopathology , Female , Humans , Male , Speech Perception , Young Adult
15.
Ear Hear ; 35(4): 418-22, 2014.
Article in English | MEDLINE | ID: mdl-24658601

ABSTRACT

OBJECTIVES: The goal of this study was to create and validate a new set of sentence lists that could be used to evaluate the speech-perception abilities of listeners with hearing loss in cases where adult materials are inappropriate due to difficulty level or content. The authors aimed to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions. DESIGN: The original Pediatric AzBio sentence corpus included 450 sentences recorded from one female talker. All sentences included in the corpus were successfully repeated by kindergarten and first-grade students with normal hearing. The mean intelligibility of each sentence was estimated by processing each sentence through a cochlear implant simulation and calculating the mean percent correct score achieved by 15 normal-hearing listeners. After sorting sentences by mean percent correct scores, 320 sentences were assigned to 16 lists of equivalent difficulty. List equivalency was then validated by presenting all sentence lists, in a novel random order, to adults and children with hearing loss. A final-validation stage examined single-list comparisons from adult and pediatric listeners tested in research or clinical settings. RESULTS: The results of the simulation study allowed for the creation of 16 lists of 20 sentences. The average intelligibility of each list ranged from 78.4 to 78.7%. List equivalency was then validated, when the results of 16 adult cochlear implant users and 9 pediatric hearing aid and cochlear implant users revealed no significant differences across lists. The binomial distribution model was used to account for the inherent variability observed in the lists. This model was also used to generate 95% confidence intervals for one and two list comparisons. 
A retrospective analysis of 361 instances from 78 adult cochlear implant users and 48 instances from 36 pediatric cochlear implant users revealed that the 95% confidence intervals derived from the model captured 94% of all responses (385 of 409). CONCLUSIONS: The cochlear implant simulation was shown to be an effective method for estimating the intelligibility of individual sentences for use in the evaluation of cochlear implant users. Furthermore, the method used for constructing equivalent sentence lists and estimating the inherent variability of the materials has also been validated. Thus, the AzBio Pediatric Sentence Lists are equivalent and appropriate for the assessment of speech-understanding abilities of children with hearing loss as well as adults for whom performance on AzBio sentences is near the floor.
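The binomial logic behind the confidence intervals described above can be sketched numerically. This is an illustrative reconstruction, not the authors' actual model: it treats a 20-item list at a fixed mean intelligibility as a binomial variable and grows the smallest score interval that holds 95% of the probability mass. The 0.785 intelligibility value echoes the list means reported above; all other details are assumptions.

```python
from math import comb

def binomial_interval(p, n, coverage=0.95):
    """Smallest score interval [lo, hi] (items correct out of n) whose
    binomial probability mass reaches `coverage`, assuming each item
    is an independent success with probability `p`."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    lo = hi = max(range(n + 1), key=lambda k: pmf[k])  # start at the mode
    mass = pmf[lo]
    while mass < coverage:
        left = pmf[lo - 1] if lo > 0 else -1.0
        right = pmf[hi + 1] if hi < n else -1.0
        if right >= left:  # grow toward whichever neighbor is more likely
            hi += 1
            mass += pmf[hi]
        else:
            lo -= 1
            mass += pmf[lo]
    return lo, hi

# Hypothetical 20-item list at roughly the ~78.5% mean intelligibility
# reported for the validated lists.
lo, hi = binomial_interval(0.785, 20)
```

An interval built this way widens as the item count shrinks, which is why single-list comparisons need the explicit confidence bounds the abstract describes.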


Subject(s)
Cochlear Implantation , Hearing Aids , Hearing Loss, Sensorineural/surgery , Speech Discrimination Tests/methods , Speech Perception , Adult , Child , Child, Preschool , Female , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Reproducibility of Results , Speech Intelligibility
16.
Front Hum Neurosci ; 18: 1434786, 2024.
Article in English | MEDLINE | ID: mdl-39086377

ABSTRACT

Cochlear implant (CI) systems differ in terms of electrode design and signal processing. It is likely that patients fit with different implant systems will experience different percepts when presented speech via their implant. The sound quality of speech can be evaluated by asking single-sided-deaf (SSD) listeners fit with a CI to modify clean signals presented to their typically hearing ear to match the sound quality of signals presented to their CI ear. In this paper, we describe very close matches to CI sound quality, i.e., similarity ratings of 9.5 to 10 on a 10-point scale, by ten patients fit with a 28 mm electrode array and MED EL signal processing. The modifications required to make close approximations to CI sound quality fell into two groups: one consisted of a restricted frequency bandwidth and spectral smearing, while the second was characterized by a wide bandwidth and no spectral smearing. Both sets of modifications were different from those found for patients with shorter electrode arrays, who chose upshifts in voice pitch and formant frequencies to match CI sound quality. The data from matching-based metrics of CI sound quality document that speech sound quality differs for patients fit with different CIs and among patients fit with the same CI.
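The two modification families reported above, bandwidth restriction and spectral smearing, can be illustrated with a minimal frequency-domain sketch. This is not the matching software the patients used; the cutoff frequency and smearing width below are arbitrary illustrative values.

```python
import numpy as np

def modify(signal, fs, cutoff_hz=None, smear_bins=0):
    """Crude frequency-domain version of the two modification families:
    optional spectral smearing (moving-average blur of the magnitude
    spectrum over `smear_bins` FFT bins) followed by optional bandwidth
    restriction (zeroing the spectrum above `cutoff_hz`)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mag, phase = np.abs(spec), np.angle(spec)
    if smear_bins > 1:
        kernel = np.ones(smear_bins) / smear_bins
        mag = np.convolve(mag, kernel, mode="same")
    if cutoff_hz is not None:
        mag[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(signal))

# Two-tone test signal: a 300 Hz component that survives the processing
# and a 4000 Hz component removed by the bandwidth restriction.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 4000 * t)
y = modify(x, fs, cutoff_hz=2000, smear_bins=9)
```

In a matching task, a listener would adjust parameters like these until the processed clean signal matched the percept in the CI ear.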

17.
Ear Hear ; 34(2): 133-41, 2013.
Article in English | MEDLINE | ID: mdl-23075632

ABSTRACT

OBJECTIVES: Patients with a cochlear implant (CI) in one ear and a hearing aid in the other ear commonly achieve the highest speech-understanding scores when they have access to both electrically and acoustically stimulated information. At issue in this study was whether a measure of auditory function in the hearing-aided ear would predict the benefit to speech understanding when the information from the aided ear was added to the information from the CI. DESIGN: The subjects were 22 bimodal listeners with a CI in one ear and low-frequency acoustic hearing in the nonimplanted ear. The subjects were divided into two groups: one with mild-to-moderate low-frequency loss and one with severe-to-profound loss. Measures of auditory function included (1) audiometric thresholds at 750 Hz or lower, (2) speech-understanding scores (words in quiet and sentences in noise), and (3) spectral-modulation detection (SMD) thresholds. In the SMD task, one stimulus was a flat-spectrum noise and the other was a noise with sinusoidal spectral modulations at 1.0 cycle/octave. RESULTS: Significant correlations were found among all three measures of auditory function and the benefit to speech understanding when the acoustic and electric stimulation were combined. Benefit was significantly correlated with audiometric thresholds (r = -0.814), acoustic speech understanding (r = 0.635), and SMD thresholds (r = -0.895) in the hearing-aided ear. However, only the SMD threshold was significantly correlated with benefit within the group with mild-to-moderate loss (r = -0.828) and within the group with severe-to-profound loss (r = -0.896). CONCLUSIONS: The SMD threshold at 1 cycle/octave has the potential to provide clinicians with information relevant to the question of whether an ear with low-frequency hearing is likely to add to the intelligibility of speech provided by a CI.
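The SMD stimulus pair described in the design (a flat-spectrum noise versus a noise with a sinusoidal spectral ripple at 1 cycle/octave) can be synthesized with a short sketch. The sampling rate, band edges, and 10 dB ripple depth here are assumptions for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rippled_noise(fs=22050, dur=0.5, density=1.0, depth_db=10.0,
                  f_lo=100.0, f_hi=5000.0):
    """Band-limited noise whose log-magnitude spectrum is modulated
    sinusoidally at `density` cycles/octave above `f_lo`.
    depth_db=0 yields the flat-spectrum standard of an SMD task."""
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.zeros_like(freqs)
    octaves[band] = np.log2(freqs[band] / f_lo)
    mag = np.zeros_like(freqs)
    mag[band] = 10 ** ((depth_db / 20.0)
                       * np.sin(2 * np.pi * density * octaves[band]))
    phase = rng.uniform(0, 2 * np.pi, size=freqs.size)  # random-phase noise
    return np.fft.irfft(mag * np.exp(1j * phase), n=n)

standard = rippled_noise(depth_db=0.0)  # flat spectrum
target = rippled_noise(depth_db=10.0)   # 1 cycle/octave ripple
```

An adaptive SMD procedure would reduce the ripple depth of the target until the listener can no longer tell it from the flat standard.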


Subject(s)
Auditory Threshold/physiology , Cochlear Implants , Hearing Aids , Hearing Loss, Sensorineural/therapy , Speech Perception/physiology , Acoustic Stimulation , Aged , Audiometry, Pure-Tone , Cochlear Implantation , Combined Modality Therapy , Humans , Middle Aged
18.
Ear Hear ; 34(2): 245-8, 2013.
Article in English | MEDLINE | ID: mdl-23183045

ABSTRACT

OBJECTIVES: The authors describe the localization and speech-understanding abilities of a patient fit with bilateral cochlear implants (CIs) for whom acoustic low-frequency hearing was preserved in both cochleae. DESIGN: Three signals were used in the localization experiments: low-pass, high-pass, and wideband noise. Speech understanding was assessed with the AzBio sentences presented in noise. RESULTS: Localization accuracy was best in the aided, bilateral acoustic hearing condition, and was poorer in both the bilateral CI condition and when the bilateral CIs were used in addition to bilateral low-frequency hearing. Speech understanding was best when low-frequency acoustic hearing was combined with at least one CI. CONCLUSIONS: The authors found that (1) for sound source localization in patients with bilateral CIs and bilateral hearing preservation, interaural level difference cues may dominate interaural time difference cues and (2) hearing-preservation surgery can be of benefit to patients fit with bilateral CIs.
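The interaural cues weighed against each other in the conclusions can be made concrete with a minimal sketch: ILD as an RMS level ratio in dB and ITD as the peak lag of a cross-correlation. The stereo test signal and its 20-sample delay are invented for illustration.

```python
import numpy as np

def _rms(x):
    return float(np.sqrt(np.mean(np.square(x))))

def ild_db(left, right):
    """Interaural level difference: level of left re right, in dB."""
    return 20.0 * np.log10(_rms(left) / _rms(right))

def itd_seconds(left, right, fs):
    """Interaural time difference from the peak lag of the full
    cross-correlation; positive values mean the left channel leads."""
    xcorr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(xcorr)) - (len(left) - 1)
    return lag / fs

# Toy binaural signal: right channel delayed by 20 samples (~0.45 ms)
# and attenuated by 6 dB relative to the left channel.
fs = 44100
rng = np.random.default_rng(1)
src = rng.standard_normal(fs // 10)
delay = 20
left = np.concatenate([src, np.zeros(delay)])
right = 0.5 * np.concatenate([np.zeros(delay), src])
```

With bilateral CIs, envelope-based processing conveys the level cue far more faithfully than the timing cue, consistent with ILDs dominating ITDs in these listeners.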


Subject(s)
Hearing Loss, Sensorineural/surgery , Sound Localization/physiology , Speech Perception/physiology , Adult , Aged , Case-Control Studies , Cochlear Implantation , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Middle Aged , Treatment Outcome , Young Adult
19.
Ear Hear ; 34(4): 413-25, 2013.
Article in English | MEDLINE | ID: mdl-23446225

ABSTRACT

OBJECTIVE: The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. DESIGN: The present study used a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant (CI) recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an eight-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: CI plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of six English-speaking listeners was also assessed on measures of interaural time difference thresholds for a 250-Hz signal. RESULTS: Small, but significant, improvements in performance (1.7-2.1 dB and 6-10 percentage points) were found for the best-aided condition versus the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of electric and acoustic stimulation (EAS) benefit for speech recognition in diffuse noise. Neither audiometric thresholds in the implanted ear nor the elevation in those thresholds after surgery was reliably related to improvement in speech understanding in reverberation. 
There was a significant correlation between interaural time difference threshold at 250 Hz and EAS-related benefit for the adaptive speech reception threshold. CONCLUSIONS: The findings of this study suggest that (1) preserved low-frequency hearing improves speech understanding for CI recipients, (2) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (3) preservation of binaural timing cues, although poorer than observed for individuals with normal hearing, is possible after unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. The results of this study demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of CI criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.


Subject(s)
Cochlear Implantation/methods , Environment , Hearing Loss, Sensorineural/surgery , Speech Perception , Adult , Aged , Audiometry, Pure-Tone , Auditory Threshold , Electrodes, Implanted , Female , Humans , Male , Middle Aged , Treatment Outcome , Young Adult
20.
Ear Hear ; 33(6): e70-9, 2012.
Article in English | MEDLINE | ID: mdl-22622705

ABSTRACT

OBJECTIVES: It was hypothesized that auditory training would allow bimodal patients to better combine the low-frequency acoustic information provided by a hearing aid with the electric information provided by a cochlear implant, thus maximizing the benefit of combining acoustic (A) and electric (E) stimulation (EAS). DESIGN: Performance in quiet or in the presence of a multitalker babble at +5 dB signal to noise ratio was evaluated in seven bimodal patients before and after auditory training. The performance measures comprised identification of vowels and consonants, consonant-nucleus-consonant words, sentences, voice gender, and emotion. Baseline performance was evaluated in the A-alone, E-alone, and combined EAS conditions once per week for 3 weeks. A phonetic-contrast training protocol was used to facilitate speech perceptual learning. Patients trained at home 1 hour a day, 5 days a week, for 4 weeks with both their cochlear implant and hearing aid devices on. Performance was remeasured after the 4 weeks of training and 1 month after training stopped. RESULTS: After training, there was significant improvement in vowel, consonant, and consonant-nucleus-consonant word identification in the E and EAS conditions. The magnitude of improvement in the E condition was equivalent to that in the EAS condition. The improved performance was largely retained 1 month after training stopped. CONCLUSION: Auditory training, in the form administered in this study, can improve bimodal patients' overall speech understanding by improving E-alone performance.


Subject(s)
Acoustic Stimulation/methods , Cochlear Implantation/rehabilitation , Cochlear Implants , Deafness/rehabilitation , Hearing Aids , Speech Reception Threshold Test , Aged , Combined Modality Therapy , Female , Humans , Male , Middle Aged , Perceptual Masking , Pitch Discrimination , Sound Spectrography , Speech Acoustics , Speech Discrimination Tests