Results 1 - 20 of 81
1.
Ear Hear ; 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38472134

ABSTRACT

OBJECTIVES: The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN: Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS: The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately what it was in the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS: Synchronizing AGCs allowed listeners to mitigate front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficiencies that were observed when AGCs were not engaged and that are therefore unrelated to AGC compression.
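To make the compression argument concrete, the toy sketch below contrasts independent and synchronized AGCs acting on a single pair of ear-level inputs. The 60-dB knee point, 3:1 ratio, and purely static gain rule (no attack or release dynamics) are illustrative assumptions, not the parameters of the clinical processors studied here.

```python
def agc_gain_db(level_db, knee_db=60.0, ratio=3.0):
    """Static compression gain (dB): attenuate the portion of the input
    level that exceeds the knee point by (1 - 1/ratio)."""
    over = max(level_db - knee_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

def apply_agcs(left_db, right_db, synchronized):
    """Post-AGC ear levels; synchronized processors share one gain value
    driven by the louder of the two inputs."""
    if synchronized:
        g = agc_gain_db(max(left_db, right_db))
        return left_db + g, right_db + g
    return left_db + agc_gain_db(left_db), right_db + agc_gain_db(right_db)

# A source toward the right ear: 70 dB right, 64 dB left (a 6-dB ILD).
for sync in (False, True):
    left, right = apply_agcs(64.0, 70.0, synchronized=sync)
    print(f"synchronized={sync}: ILD = {right - left:.1f} dB")
# Independent AGCs shrink the 6-dB ILD to 2 dB; synchronized AGCs preserve it.
```

Under these assumptions the independent processors compress the louder ear more than the quieter one, which is exactly the ILD distortion that synchronization is intended to remove.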

2.
Cochlear Implants Int ; 23(5): 270-279, 2022 09.
Article in English | MEDLINE | ID: mdl-35672886

ABSTRACT

The AzBio sentence test is widely used to assess speech perception pre- and post-cochlear implantation. This study created and validated a Hebrew version of AzBio (HeBio) and tested its intelligibility in background noise. In Experiment 1, 1,000 recorded Hebrew sentences were presented through a five-channel vocoder to 10 normal-hearing (NH) listeners for intelligibility testing. In Experiment 2, HeBio lists were presented to 25 post-lingual cochlear implant (CI) users in four-talker babble noise (4TBN) or in quiet, along with a one-syllable word test. In Experiment 3, 20 NH listeners were presented with eight HeBio lists in two noise conditions [4TBN and speech-shaped noise (SSN)] at four SNRs (+3, 0, -3, and -6 dB). The 33 HeBio lists produced an average of 82% understanding, no inter-list intelligibility differences among NH listeners, and equal intelligibility for CI users. One-syllable word scores predicted 67% of the variance in HeBio scores among CI users. Intelligibility was higher in SSN than in 4TBN, and the mean speech reception threshold (SRT) was more negative for SSN than for 4TBN. HeBio results were similar to those for AzBio, and results obtained with the two noise types were as expected. HeBio is recommended for evaluation of different populations in quiet and in noise.
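For readers unfamiliar with the vocoder manipulation used in Experiment 1, the sketch below shows the generic structure of a noise-excited channel vocoder. The band edges, filter order, and envelope-extraction method are assumptions for illustration; the study's exact vocoder parameters are not given in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs, edges_hz=(200, 500, 1000, 2000, 4000, 8000), order=4):
    """Five-channel noise vocoder: band-filter the input, extract each band's
    envelope, re-impose it on band-limited noise, and sum the channels."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(x), dtype=float)
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                       # channel envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                              # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)            # normalize to avoid clipping

# usage (hypothetical waveform): degraded = noise_vocoder(sentence_waveform, fs=22050)
```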


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Noise, Speech Discrimination Tests/methods
3.
Front Hum Neurosci ; 16: 863891, 2022.
Article in English | MEDLINE | ID: mdl-35399353

ABSTRACT

Patients fit with cochlear implants (CIs) commonly indicate, at the time of device fitting and for some time after, that the speech signal sounds abnormal. A high pitch or timbre is one component of the abnormal percept. In this project, our aim was to determine whether a number of years of CI use reduced perceived upshifts in frequency spectrum and/or voice fundamental frequency. The participants were five individuals who were deaf in one ear and who had normal hearing in the other ear. The deafened ears had been implanted with an 18.5 mm electrode array, which resulted in signal input frequencies being directed to locations in the spiral ganglion (SG) whose characteristic frequencies were between one and two octaves higher than the input frequencies. The patients judged the similarity of a clean signal (a male-voice sentence) presented to their implanted ear and candidate, implant-like signals presented to their normal-hearing (NH) ear. Matches to implant sound quality were obtained, on average, at 8 months after device activation (Time 1) and at 35 months after activation (Time 2). At Time 1, the matches to CI sound quality were characterized, most generally, by upshifts in the frequency spectrum and in voice pitch. At Time 2, for four of the five patients, frequency spectrum values remained elevated. For all five patients, F0 values remained elevated. Overall, the data offer little support for the proposition that, for patients fit with shorter electrode arrays, cortical plasticity nudges the cortical representation of the CI voice toward more normal, or less upshifted, frequency values between 8 and 35 months after device activation. Cortical plasticity may be limited when there are large differences between frequencies in the input signal and the locations in the SG stimulated by those frequencies.
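The size of the place-frequency mismatch described here can be approximated with Greenwood's map of the human cochlea. The sketch below is a rough illustration with hypothetical numbers; it is not the paper's CT-based spiral ganglion estimate, which relies on an SG-specific map rather than the organ-of-Corti map used here.

```python
import math

def greenwood_hz(x_from_apex):
    """Greenwood's place-frequency map for the human cochlea (organ of Corti).
    x_from_apex: proportional distance from the apex (0.0 = apex, 1.0 = base)."""
    return 165.4 * (10 ** (2.1 * x_from_apex) - 0.88)

# Hypothetical illustration: a contact halfway along the cochlea that receives
# a 500-Hz analysis band.
place_hz = greenwood_hz(0.5)     # ~1710 Hz place frequency
input_hz = 500.0                 # hypothetical analysis-band center frequency
print(f"place ~ {place_hz:.0f} Hz, mismatch ~ {math.log2(place_hz / input_hz):.1f} octaves")
```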

4.
J Speech Lang Hear Res ; 64(7): 2811-2824, 2021 07 16.
Article in English | MEDLINE | ID: mdl-34100627

ABSTRACT

Purpose For bilaterally implanted patients, the automatic gain control (AGC) in both left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads. Method Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ± 30° during sound presentation. Results In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs. Conclusion Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients. Supplemental Material https://doi.org/10.23641/asha.14681412.
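A compact way to see the compression-ratio effect mentioned in the rationale: when both processors sit above their knee points and compress independently, a static compressor divides the interaural level difference by the compression ratio. The sketch below is that arithmetic only; it ignores attack and release dynamics, and the ratios shown are not the ones used in the study.

```python
def ild_after_independent_compression(ild_in_db, ratio):
    """Static compressors above their knee points map an input change of
    x dB to x/ratio dB at the output, so an interaural level difference is
    divided by the ratio when the two ears are compressed independently."""
    return ild_in_db / ratio

for ratio in (1.0, 3.0, 12.0):
    out = ild_after_independent_compression(10.0, ratio)
    print(f"ratio {ratio:>4}: a 10-dB ILD becomes {out:.1f} dB")
```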


Subject(s)
Cochlear Implantation, Cochlear Implants, Sound Localization, Speech Perception, Head Movements, Humans
5.
J Am Acad Audiol ; 32(1): 39-44, 2021 01.
Article in English | MEDLINE | ID: mdl-33296930

ABSTRACT

BACKGROUND: Both the Roger remote microphone and on-ear, adaptive beamforming technologies (e.g., Phonak UltraZoom) have been shown to improve speech understanding in noise for cochlear implant (CI) listeners when tested in audio-only (A-only) test environments. PURPOSE: Our aim was to determine if adult and pediatric CI recipients benefited from these technologies in a more common environment, one in which both audio and visual cues were available and when overall performance was high. STUDY SAMPLE: Ten adult CI listeners (Experiment 1) and seven pediatric CI listeners (Experiment 2) were tested. DESIGN: Adults were tested in quiet and in two levels of noise (level 1 and level 2) in A-only and audio-visual (AV) environments. There were four device conditions: (1) an ear canal-level, omnidirectional microphone (T-mic) in quiet, (2) the T-mic in noise, (3) an adaptive directional mic (UltraZoom) in noise, and (4) a wireless, remote mic (Roger Pen) in noise. Pediatric listeners were tested in quiet and in level 1 noise in A-only and AV environments. The test conditions were: (1) a behind-the-ear level omnidirectional mic (processor mic) in quiet, (2) the processor mic in noise, (3) the T-mic in noise, and (4) the Roger Pen in noise. DATA COLLECTION AND ANALYSES: In each test condition, sentence understanding was assessed (percent correct) and ease of listening ratings were obtained. The sentence understanding data were entered into repeated-measures analyses of variance. RESULTS: For both adult and pediatric listeners in the AV test conditions in level 1 noise, performance with the Roger Pen was significantly higher than with the T-mic. For both populations, performance in level 1 noise with the Roger Pen approached the level of baseline performance in quiet. Ease of listening in noise was rated higher in the Roger Pen conditions than in the T-mic or processor mic conditions in both A-only and AV test conditions. CONCLUSION: The Roger remote mic and on-ear directional mic technologies benefit both speech understanding and ease of listening in a realistic laboratory test environment and are likely to do the same in real-world listening environments.


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Child, Humans, Noise, Technology
6.
Ear Hear ; 41(6): 1660-1674, 2020.
Article in English | MEDLINE | ID: mdl-33136640

ABSTRACT

OBJECTIVES: We investigated the ability of single-sided deaf listeners implanted with a cochlear implant (SSD-CI) to (1) determine the front-back and left-right location of sound sources presented from loudspeakers surrounding the listener and (2) use small head rotations to further improve their localization performance. The resulting behavioral data were used for further analyses investigating the value of so-called "monaural" spectral shape cues for front-back sound source localization. DESIGN: Eight SSD-CI patients were tested with their cochlear implant (CI) on and off. Eight normal-hearing (NH) listeners, with one ear plugged during the experiment, and another group of eight NH listeners, with neither ear plugged, were also tested. Gaussian noises of 3-sec duration were band-pass filtered to 2-8 kHz and presented from 1 of 6 loudspeakers surrounding the listener, spaced 60° apart. Perceived sound source localization was tested under conditions where the patients faced forward with the head stationary, and under conditions where they rotated their heads within a limited range (the exact range is specified in the full-text article). RESULTS: (1) Under stationary listener conditions, unilaterally-plugged NH listeners and SSD-CI listeners (with their CIs both on and off) were nearly at chance in determining the front-back location of high-frequency sound sources. (2) Allowing rotational head movements improved performance in both the front-back and left-right dimensions for all listeners. (3) For SSD-CI patients with their CI turned off, head rotations substantially reduced front-back reversals, and the combination of turning on the CI with head rotations led to near-perfect resolution of front-back sound source location. (4) Turning on the CI also improved left-right localization performance. (5) As expected, NH listeners with both ears unplugged localized to the correct front-back and left-right hemifields both with and without head movements. CONCLUSIONS: Although SSD-CI listeners demonstrate a relatively poor ability to distinguish the front-back location of sound sources when their head is stationary, their performance is substantially improved with head movements. Most of this improvement occurs when the CI is off, suggesting that the NH ear does most of the "work" in this regard, though some additional gain is introduced with turning the CI on. During head turns, these listeners appear to primarily rely on comparing changes in head position to changes in monaural level cues produced by the direction-dependent attenuation of high-frequency sounds that result from acoustic head shadowing. In this way, SSD-CI listeners overcome limitations to the reliability of monaural spectral and level cues under stationary conditions. SSD-CI listeners may have learned, through chronic monaural experience before CI implantation, or with the relatively impoverished spatial cues provided by their CI-implanted ear, to exploit the monaural level cue. Unilaterally-plugged NH listeners were also able to use this cue during the experiment to realize approximately the same magnitude of benefit from head turns just minutes after plugging, though their performance was less accurate than that of the SSD-CI listeners, both with and without their CI turned on.
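The "monaural level cue plus head rotation" account can be illustrated with a toy head-shadow model: front and back sources produce the same static level at the open ear but opposite-signed level changes during a head turn. The sinusoidal attenuation shape, 15-dB shadow depth, and 70-dB reference level below are assumptions standing in for a real head-related transfer function.

```python
import math

def right_ear_level_db(source_az_deg, head_az_deg, shadow_db=15.0, base_db=70.0):
    """Toy head-shadow model for a high-frequency source heard by the open
    (right) ear: level peaks for sources at +90 deg re: the nose and dips at
    -90 deg. Stands in for a real HRTF only to show the sign of the cue."""
    rel = math.radians(source_az_deg - head_az_deg)
    return base_db + 0.5 * shadow_db * math.sin(rel)

for src_az, name in ((0, "front"), (180, "back")):
    static = right_ear_level_db(src_az, head_az_deg=0)    # head facing forward
    turned = right_ear_level_db(src_az, head_az_deg=30)   # head turned 30 deg right
    print(f"{name:5s} source: {static:.1f} dB static, "
          f"change {turned - static:+.1f} dB after the turn")
# Front and back give the same static level (the ambiguity) but opposite-signed
# changes during the turn, which is the cue the listeners appear to exploit.
```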


Subject(s)
Cochlear Implantation, Cochlear Implants, Sound Localization, Head Movements, Humans, Reproducibility of Results
7.
J Am Acad Audiol ; 31(7): 547-550, 2020 07.
Article in English | MEDLINE | ID: mdl-32340054

ABSTRACT

BACKGROUND: Previous research has found that when the location of a talker was varied and an auditory prompt indicated the location of the talker, the addition of visual information produced a significant and large improvement in speech understanding for listeners with bilateral cochlear implants (CIs) but not with a unilateral CI. Presumably, the sound-source localization ability of the bilateral CI listeners allowed them to orient to the auditory prompt and benefit from visual information for the subsequent target sentence. PURPOSE: The goal of this project was to assess the robustness of previous research by using a different test environment, a different CI, different test material, and a different response measure. RESEARCH DESIGN: Nine listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant. Auditory-visual (AV) sentence material was presented from loudspeakers and video monitors at 0, +90, and -90 degrees. Each trial started with the presentation of an auditory alerting phrase from one of the three target loudspeakers followed by an AV target sentence from that loudspeaker/monitor. On each trial, the two nontarget monitors showed the speaker mouthing a different sentence. Sentences were presented in noise in four test conditions: one CI, one CI plus vision, bilateral CIs, and bilateral CIs plus vision. RESULTS: Mean percent words correct for the four test conditions were: one CI, 43%; bilateral CI, 60%; one CI plus vision, 52%; and bilateral CI plus vision, 84%. Visual information did not significantly improve performance in the single CI conditions but did improve performance in the bilateral CI conditions. The magnitude of improvement for two CIs versus one CI in the AV condition was approximately twice that for two CIs versus one CI in the auditory condition. CONCLUSIONS: Our results are consistent with previous data showing the large value of bilateral implants in a complex AV listening environment. The results indicate that the value of bilateral CIs for speech understanding is significantly underestimated in standard, auditory-only, single-speaker, test environments.


Subject(s)
Cochlear Implantation, Cochlear Implants, Sound Localization, Speech Perception, Humans, Noise
9.
Acoust Sci Technol ; 41(1): 113-120, 2020 Jan.
Article in English | MEDLINE | ID: mdl-34305431

ABSTRACT

This article reviews data published or presented by the authors on sound source localization when listeners move, drawn from two populations of subjects: normal-hearing listeners and patients fit with cochlear implants (CIs). The overall theme of the review is that sound source localization requires an integration of auditory-spatial and head-position cues and is, therefore, a multisystem process. Research with normal-hearing listeners includes work related to the Wallach Azimuth Illusion and additional aspects of sound source localization when listeners and sound sources rotate. Research with CI patients involves investigations of sound source localization performance by patients fit with a single CI, bilateral CIs, or a CI and a hearing aid (bimodal patients), and by single-sided deaf patients with one normally functioning ear and the other ear fit with a CI. Past research involving stationary CI patients and more recent data on CI patients' use of head rotation to localize sound sources are summarized.

10.
Audiol Neurootol ; 24(5): 264-269, 2019.
Article in English | MEDLINE | ID: mdl-31661682

ABSTRACT

OBJECTIVE: Our aim was to determine the effect of acute changes in cochlear place of stimulation on cochlear implant (CI) sound quality. DESIGN: In Experiment 1, 5 single-sided deaf (SSD) listeners fitted with a long (28-mm) electrode array were tested. Basal shifts in place of stimulation were implemented by turning off the most apical electrodes and reassigning the filters to more basal electrodes. In Experiment 2, 2 SSD patients fitted with a shorter (16.5-mm) electrode array were tested. Both basal and apical shifts in place of stimulation were implemented. The apical shifts were accomplished by current steering, creating a virtual place of stimulation more apical than that of the most apical electrode. RESULTS: Listeners matched basal shifts by shifting, in the normal-hearing ear, the overall spectrum up in frequency and/or increasing voice pitch (F0). Listeners matched apical shifts by shifting down the overall frequency spectrum in the normal-hearing ear. CONCLUSION: One factor determining CI voice quality is the location of stimulation along the cochlear partition.
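Current steering, as used here for the apical shifts, weights simultaneous current across two contacts so that the effective place of stimulation falls between (or, with out-of-phase current, beyond) the physical electrodes. The proportional split below is a generic sketch, not the manufacturer's fitting software, and it does not model the out-of-phase "phantom" variant needed to steer beyond the most apical contact.

```python
def steer_currents(total_current_ua, alpha):
    """Split a pulse's current between two adjacent contacts. alpha = 0
    stimulates electrode A alone, alpha = 1 electrode B alone; intermediate
    values shift the effective place of stimulation between them."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * total_current_ua, alpha * total_current_ua

# Hypothetical example: 500 uA steered mostly toward the more apical contact A.
i_a, i_b = steer_currents(500.0, alpha=0.25)
print(f"electrode A: {i_a:.0f} uA, electrode B: {i_b:.0f} uA")
```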


Subject(s)
Auditory Perception/physiology, Cochlea/surgery, Cochlear Implantation, Cochlear Implants, Deafness/rehabilitation, Acoustic Stimulation, Female, Hearing Tests, Humans, Male, Middle Aged
11.
J Speech Lang Hear Res ; 62(9): 3493-3499, 2019 09 20.
Article in English | MEDLINE | ID: mdl-31415186

ABSTRACT

Purpose Our aim was to make audible for normal-hearing listeners the Mickey Mouse™ sound quality of cochlear implants (CIs) often found following device activation. Method The listeners were 3 single-sided deaf patients fit with a CI who had 6 months or less of CI experience. Computed tomography imaging established the location of each electrode contact in the cochlea and allowed an estimate of the place frequency of the tissue nearest each electrode. For the most apical electrodes, this estimate ranged from 650 to 780 Hz. To determine CI sound quality, a clean signal (a sentence) was presented to the CI ear via a direct-connect cable, and candidate, CI-like signals were presented to the ear with normal hearing via an insert receiver. The listeners rated the similarity of the candidate signals to the sound of the CI on a 1- to 10-point scale, with 10 being a complete match. Results To make the match to CI sound quality, all 3 patients needed an upshift in formant frequencies (300-800 Hz) and a metallic sound quality. Two of the 3 patients also needed an upshift in voice pitch (10-80 Hz) and a muffling of sound quality. Similarity scores ranged from 8 to 9.7. Conclusion The formant frequency upshifts, fundamental frequency upshifts, and metallic sound quality experienced by the listeners can be linked to the relatively basal locations of the electrode contacts and short-duration experience with their devices. The perceptual consequence was not the voice quality of Mickey Mouse™ but rather that of the Munchkins in The Wizard of Oz, for whom both formant frequencies and voice pitch were upshifted. Supplemental Material https://doi.org/10.23641/asha.9341651.


Subject(s)
Auditory Perception, Cochlear Implants, Deafness/physiopathology, Deafness/rehabilitation, Sound, Adult, Female, Humans, Middle Aged
12.
J Neurol Surg B Skull Base ; 80(2): 178-186, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30931226

ABSTRACT

Unilateral severe-to-profound sensorineural hearing loss (SNHL), also known as single-sided deafness (SSD), is a problem that affects both children and adults and can have severe and detrimental effects on multiple aspects of life, including music appreciation, speech understanding in noise, speech and language acquisition, performance in the classroom and/or the workplace, and quality of life. Additionally, the loss of binaural hearing in SSD patients affects those processes that rely on two functional ears, including sound localization, binaural squelch and summation, and the head shadow effect. Over the last decade, there has been increasing interest in cochlear implantation for SSD to restore binaural hearing. Early data are promising that cochlear implantation for SSD can help restore binaural functionality, improve quality of life, and facilitate reversal of neuroplasticity related to auditory deprivation in the pediatric population. Additionally, this new patient population has allowed researchers the opportunity to investigate the age-old question "What does a cochlear implant (CI) sound like?"

13.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subject(s)
Auditory Perception/physiology, Cochlear Implants, Critical Period (Psychology), Language Development, Animals, Auditory Perceptual Disorders/etiology, Brain/growth & development, Cochlear Implantation, Comprehension, Cues (Psychology), Deafness/congenital, Deafness/physiopathology, Deafness/psychology, Deafness/surgery, Equipment Design, Humans, Language Development Disorders/etiology, Language Development Disorders/prevention & control, Learning/physiology, Neuronal Plasticity, Photic Stimulation
14.
Int J Pediatr Otorhinolaryngol ; 118: 128-133, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30623849

ABSTRACT

OBJECTIVE: To evaluate outcomes in pediatric and adolescent patients with single-sided deafness (SSD) undergoing cochlear implantation. METHODS: A retrospective cohort design at two tertiary-level academic cochlear implant centers. The subjects were nine children, aged 1.5 to 15 years, with single-sided deafness (SSD) who had undergone cochlear implantation in the affected ear. Objective outcome measures included speech reception testing in quiet and noise, bimodal speech reception threshold testing in noise, tinnitus suppression, and device usage. RESULTS: Nine pediatric and adolescent patients with SSD were implanted between 2011 and 2017. The median age at implantation was 8.9 years (range, 1.5-15.1), and the children had a median duration of deafness of 2.9 years (range, 0.8-9.5). There was variability in testing measures due to patient age. Median pre-operative aided word recognition scores on the affected side were <30% regardless of the testing paradigm used. Six patients had pre-operative word testing (4 CNC, median score 25%; 2 MLNT, 8% and 17%). Four patients had pre-operative sentence testing (3 AzBio, median score 44%; 1 HINT-C, 57%). The median post-implantation follow-up interval was 12.3 months (range, 3-27.6 months). Six subjects had post-operative word recognition testing (CNC, median 70%; MLNT, 50% and 92%), with a median improvement of 45.5 percentage points. Five subjects had post-operative sentence testing (AzBio, median 82%; HINT, median 76%), with a median improvement of 40.5 percentage points. Eight patients are full-time users of their device. Tinnitus and bimodal speech reception thresholds in noise were improved. CONCLUSION: Pediatric subjects with SSD benefit substantially from cochlear implantation. Objective speech outcome measures are improved in both quiet and noise, and bimodal speech reception thresholds in noise are greatly improved. There is a low rate of device non-use.


Subject(s)
Cochlear Implantation, Unilateral Hearing Loss/surgery, Hearing, Speech Perception, Adolescent, Speech Audiometry, Auditory Threshold, Child, Preschool Child, Female, Humans, Infant, Male, Noise, Postoperative Period, Preoperative Period, Retrospective Studies, Tinnitus/prevention & control, Treatment Outcome
15.
Ear Hear ; 40(3): 501-516, 2019.
Article in English | MEDLINE | ID: mdl-30285977

ABSTRACT

OBJECTIVE: The objectives of this study were to assess the effectiveness of various measures of speech understanding in distinguishing performance differences between adult bimodal and bilateral cochlear implant (CI) recipients and to provide a preliminary evidence-based tool guiding clinical decisions regarding bilateral CI candidacy. DESIGN: This study used a multiple-baseline, cross-sectional design investigating speech recognition performance for 85 experienced adult CI recipients (49 bimodal, 36 bilateral). Speech recognition was assessed in a standard clinical test environment with a single loudspeaker using the minimum speech test battery for adult CI recipients as well as with an R-SPACE 8-loudspeaker, sound-simulation system. All participants were tested in three listening conditions for each measure including each ear alone as well as in the bilateral/bimodal condition. In addition, we asked each bimodal listener to provide a yes/no answer to the question, "Do you think you need a second CI?" RESULTS: This study yielded three primary findings: (1) there were no significant differences between bimodal and bilateral CI performance or binaural summation on clinical measures of speech recognition, (2) an adaptive speech recognition task in the R-SPACE system revealed significant differences in performance and binaural summation between bimodal and bilateral CI users, with bilateral CI users achieving significantly better performance and greater summation, and (3) the patient's answer to the question, "Do you think you need a second CI?" held high sensitivity (100% hit rate) for identifying likely bilateral CI candidates and moderately high specificity (77% correct rejection rate) for correctly identifying listeners best suited with a bimodal hearing configuration. CONCLUSIONS: Clinics cannot rely on current clinical measures of speech understanding, with a single loudspeaker, to determine bilateral CI candidacy for adult bimodal listeners nor to accurately document bilateral benefit relative to a previous bimodal hearing configuration. Speech recognition in a complex listening environment, such as R-SPACE, is a sensitive and appropriate measure for determining bilateral CI candidacy and also likely for documenting bilateral benefit relative to a previous bimodal configuration. In the absence of an available R-SPACE system, asking the patient whether or not s/he thinks s/he needs a second CI is a highly sensitive measure, which may prove clinically useful.
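The reported 100% hit rate and 77% correct-rejection rate for the "Do you think you need a second CI?" question are ordinary sensitivity and specificity calculations; the sketch below shows the computation on made-up responses, not the study's data.

```python
def sensitivity_specificity(said_yes, is_bilateral_candidate):
    """Hit rate (sensitivity) and correct-rejection rate (specificity) for a
    yes/no screening question against a reference classification."""
    pairs = list(zip(said_yes, is_bilateral_candidate))
    hits = sum(1 for y, c in pairs if y and c)
    misses = sum(1 for y, c in pairs if not y and c)
    correct_rej = sum(1 for y, c in pairs if not y and not c)
    false_alarms = sum(1 for y, c in pairs if y and not c)
    return hits / (hits + misses), correct_rej / (correct_rej + false_alarms)

# Invented example: seven listeners, four of whom say "yes".
sens, spec = sensitivity_specificity(
    said_yes=[True, True, True, False, False, False, True],
    is_bilateral_candidate=[True, True, True, False, False, False, False],
)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```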


Subject(s)
Cochlear Implantation/methods, Hearing Aids, Bilateral Hearing Loss/rehabilitation, Sensorineural Hearing Loss/rehabilitation, Speech Perception, Adult, Aged, Aged 80 and over, Clinical Decision-Making, Cochlear Implants, Female, Humans, Male, Middle Aged, Patient-Reported Outcome Measures, Young Adult
16.
J Am Acad Audiol ; 30(8): 731-734, 2019 09.
Article in English | MEDLINE | ID: mdl-30417824

ABSTRACT

BACKGROUND: When cochlear implant (CI) listeners use a directional microphone or beamformer system to improve speech understanding in noise, the gain in understanding for speech presented from the front of the listener coexists with a decrease in speech understanding from the back. One way to maximize the usefulness of these systems is to keep a microphone in the omnidirectional mode in low noise and then switch to directional mode in high noise. PURPOSE: The purpose of this experiment was to assess the levels of speech understanding in noise allowed by a new signal processing algorithm for MED-EL CIs, AutoAdaptive, which operates in the manner described previously. RESEARCH DESIGN: Seven listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant with speech presented from the front and from the back at three noise levels: 45, 55, and 65 dB SPL. DATA COLLECTION AND ANALYSIS: The listeners were seated in the middle of an array of eight loudspeakers. Sentences from the AzBio sentence lists were presented from loudspeakers at 0° or 180° azimuth. Restaurant noise at 45, 55, and 65 dB SPL was presented from all eight loudspeakers. The speech understanding scores (words correct) were subjected to a two-factor (speaker location and noise level) repeated-measures analysis of variance with posttests. RESULTS: The analysis of variance showed a main effect for level and location and a significant interaction. Posttests showed that speech understanding scores from the front and back loudspeakers did not differ significantly at the 45- and 55-dB noise levels but did differ significantly at the 65-dB noise level, with increased scores for signals from the front and decreased scores for signals from the back. CONCLUSIONS: The AutoAdaptive feature provides omnidirectional benefit at low noise levels, i.e., similar levels of speech understanding for talkers in front of and in back of a listener, and beamformer benefit at higher noise levels, i.e., increased speech understanding for signals from the front. The automatic switching feature will be of value to the many patients who prefer not to manually switch programs on their CIs.
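The two-factor repeated-measures analysis of variance described here (speaker location by noise level, with listeners as the repeated factor) can be reproduced in outline with statsmodels' AnovaRM; the data frame below is filled with random placeholder scores rather than the study's results.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects, locations, levels = range(1, 8), ["front", "back"], [45, 55, 65]

# Long format: one row per subject x loudspeaker location x noise level.
rows = [(s, loc, lev, rng.uniform(40, 90))
        for s in subjects for loc in locations for lev in levels]
df = pd.DataFrame(rows, columns=["subject", "location", "noise_db", "pct_correct"])

res = AnovaRM(df, depvar="pct_correct", subject="subject",
              within=["location", "noise_db"]).fit()
print(res.anova_table)   # F and p for location, noise level, and their interaction
```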


Subject(s)
Cochlear Implants, Noise, Speech Perception, Acoustics, Aged, Female, Humans, Male, Middle Aged, Prosthesis Design
17.
Audiol Neurootol ; 23(5): 270-276, 2018.
Article in English | MEDLINE | ID: mdl-30537753

ABSTRACT

OBJECTIVE: Our primary aim was to determine, in a simulation of a crowded restaurant, the value to speech understanding of (i) a unilateral cochlear implant (CI), (ii) a CI plus CROS (contralateral routing of signals) aid system, and (iii) bilateral CIs, when tested with and without beamforming microphones. DESIGN: The listeners were 7 patients who had used bilateral CIs for an average of 9 years. The listeners were tested with three device configurations (bilateral CI, unilateral CI + CROS, and unilateral CI), two signal processing conditions (without and with beamformers), and with speech from +90°, -90°, or the front. Speech understanding scores for the TIMIT sentences were obtained in the 8-loudspeaker R-SPACE™ test environment, which simulates listening in a crowded restaurant. RESULTS: In the unilateral condition, speech understanding, relative to speech directed to the CI ear, fell by 17% when speech was from the front and by 28% when speech was to the side opposite the CI. These deficits were overcome with both CI-CROS and bilateral CIs, and scores for the two devices did not differ significantly for any location of speech input. Beamformer microphones improved speech understanding for speech from the front and depressed speech understanding for speech from the sides for all device configurations. Patients with bilateral CIs and beamformers achieved slightly, but significantly, higher scores for speech from the front than patients with CI-CROS and beamformers. CONCLUSIONS: CI-CROS is a valuable addition to the hardware options available to patients fit with a single CI. For patients fit with bilateral CIs, bilateral beamformers are a valuable addition when speech comes from in front of the listener. The small differences in performance between the CI-CROS and bilateral CI conditions suggest that patient preference for bilateral CIs is based largely on factors other than speech understanding in noise.
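The beamforming microphones compared here are, at their simplest, first-order differential arrays: a rear microphone signal is delayed and subtracted from a front one, yielding a cardioid-like pattern with a null toward the rear. The spacing, frequency, and ideal free-field plane-wave assumptions below are illustrative only and do not reproduce the adaptive algorithms in the commercial processors.

```python
import numpy as np

def diff_beam_response_db(az_deg, spacing_m=0.012, f_hz=2000.0, c=343.0):
    """Idealized first-order differential beamformer: subtract the rear
    microphone, delayed by the acoustic travel time across the spacing,
    from the front microphone (free-field plane-wave assumption)."""
    tau = spacing_m / c                                   # internal delay
    w = 2.0 * np.pi * f_hz
    extra = spacing_m * np.cos(np.deg2rad(az_deg)) / c    # inter-mic arrival delay
    response = np.abs(1.0 - np.exp(-1j * w * (tau + extra)))
    return 20.0 * np.log10(response + 1e-12)

front = diff_beam_response_db(0)
for az in (0, 90, 180):
    print(f"{az:3d} deg: {diff_beam_response_db(az) - front:7.1f} dB re front")
# Roughly 0 dB at the front, a few dB down at the sides, and a deep null at the rear.
```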


Subject(s)
Cochlear Implantation, Cochlear Implants, Sound Localization/physiology, Speech Perception/physiology, Speech/physiology, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Noise, Restaurants
18.
J Speech Lang Hear Res ; 61(5): 1306-1321, 2018 05 17.
Article in English | MEDLINE | ID: mdl-29800361

ABSTRACT

Purpose: The primary purpose of this study was to assess speech understanding in quiet and in diffuse noise for adult cochlear implant (CI) recipients utilizing bimodal hearing or bilateral CIs. Our primary hypothesis was that bilateral CI recipients would demonstrate less effect of source azimuth in the bilateral CI condition due to symmetric interaural head shadow. Method: Sentence recognition was assessed for adult bilateral (n = 25) CI users and bimodal listeners (n = 12) in three conditions: (1) source location certainty regarding fixed target azimuth, (2) source location uncertainty regarding roving target azimuth, and (3) Condition 2 repeated, allowing listeners to turn their heads, as needed. Results: (a) Bilateral CI users exhibited relatively similar performance regardless of source azimuth in the bilateral CI condition; (b) bimodal listeners exhibited higher performance for speech directed to the better hearing ear even in the bimodal condition; (c) the unilateral, better ear condition yielded higher performance for speech presented to the better ear versus speech to the front or to the poorer ear; (d) source location certainty did not affect speech understanding performance; and (e) head turns did not improve performance. The results confirmed our hypothesis that bilateral CI users exhibited less effect of source azimuth than bimodal listeners. That is, they exhibited similar performance for speech recognition irrespective of source azimuth, whereas bimodal listeners exhibited significantly poorer performance with speech originating from the poorer hearing ear (typically the nonimplanted ear). Conclusions: Bilateral CI users overcame ear and source location effects observed for the bimodal listeners. Bilateral CI users have access to head shadow on both sides, whereas bimodal listeners generally have interaural asymmetry in both speech understanding and audible bandwidth limiting the head shadow benefit obtained from the poorer ear (generally the nonimplanted ear). In summary, we found that, in conditions with source location uncertainty and increased ecological validity, bilateral CI performance was superior to bimodal listening.


Subject(s)
Cochlear Implants, Comprehension, Head Movements, Hearing Loss/rehabilitation, Speech Perception, Adult, Aged, Aged 80 and over, Female, Hearing Loss/psychology, Humans, Male, Middle Aged, Noise, Psychoacoustics, Sound Localization, Uncertainty
19.
Ear Hear ; 39(6): 1224-1231, 2018.
Article in English | MEDLINE | ID: mdl-29664750

ABSTRACT

OBJECTIVES: We report on the ability of patients fit with bilateral cochlear implants (CIs) to distinguish the front-back location of sound sources both with and without head movements. At issue were (i) whether CI patients are more prone to front-back confusions than normal hearing listeners for wideband, high-frequency stimuli and (ii) whether CI patients can utilize dynamic binaural difference cues, in tandem with their own head rotation, to resolve these front-back confusions. Front-back confusions offer a binary metric to gain insight into CI patients' ability to localize sound sources under dynamic conditions not generally measured in laboratory settings where both the sound source and patient are static. DESIGN: Three-second duration Gaussian noise samples were bandpass filtered to 2 to 8 kHz and presented from one of six loudspeaker locations spaced 60° apart, surrounding the listener. Perceived sound source localization for seven listeners bilaterally implanted with CIs was tested under conditions where the patient faced forward and did not move their head and under conditions where they were encouraged to moderately rotate their head. The same conditions were repeated for 5 of the patients with one implant turned off (the implant at the better ear remained on). A control group of normal hearing listeners was also tested to provide a baseline for comparison. RESULTS: All seven CI patients demonstrated a high rate of front-back confusions when their head was stationary (41.9%). The proportion of front-back confusions was reduced to 6.7% when these patients were allowed to rotate their head within a range of approximately ±30°. When only one implant was turned on, listeners' localization acuity suffered greatly. In these conditions, head movement or the lack thereof made little difference to listeners' performance. CONCLUSIONS: Bilateral implantation can offer CI listeners the ability to track dynamic auditory spatial difference cues and compare these changes to changes in their own head position, resulting in a reduced rate of front-back confusions. This suggests that, for these patients, estimates of auditory acuity based solely on static laboratory settings may underestimate their real-world localization abilities.
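Scoring a response as a front-back reversal reduces to asking whether the reported loudspeaker lies in the opposite front/back hemifield but on the same left/right side as the target. The sketch below assumes loudspeakers at ±30°, ±90°, and ±150°, one plausible layout for six sources spaced 60° apart, rather than a detail confirmed by the abstract.

```python
def is_front_back_reversal(target_az, response_az):
    """True when the response is on the same left/right side as the target
    but in the opposite front/back hemifield (e.g. +30 deg reported as +150).
    Sources on the interaural axis (+/-90 deg) have no front/back mirror."""
    if abs(target_az) == 90 or abs(response_az) == 90:
        return False
    same_side = (target_az > 0) == (response_az > 0)
    crossed_hemifield = (abs(target_az) < 90) != (abs(response_az) < 90)
    return same_side and crossed_hemifield

trials = [(30, 150), (30, 30), (-150, -30), (90, 90)]   # (target, response) pairs
rate = sum(is_front_back_reversal(t, r) for t, r in trials) / len(trials)
print(f"front-back reversal rate = {rate:.0%}")   # 50% for these example trials
```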


Subject(s)
Auditory Perception, Cochlear Implants, Head Movements, Sound Localization, Aged, Cues (Psychology), Female, Hearing, Humans, Male, Middle Aged
20.
J Am Acad Audiol ; 29(3): 197-205, 2018 03.
Article in English | MEDLINE | ID: mdl-29488870

ABSTRACT

BACKGROUND: Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. PURPOSE: To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. RESEARCH DESIGN: Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). STUDY SAMPLE: Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. INTERVENTION: Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. DATA COLLECTION AND ANALYSIS: In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square (RMS) error. RESULTS: Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. CONCLUSION: The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet.
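The dependent measure in Experiment 2, root-mean-square localization error over the 13-loudspeaker, 180° arc, is a standard calculation; the target-response pairs below are invented for illustration and are not the study's data.

```python
import numpy as np

def rms_error_deg(target_az, response_az):
    """Root-mean-square localization error in degrees."""
    t, r = np.asarray(target_az, float), np.asarray(response_az, float)
    return float(np.sqrt(np.mean((r - t) ** 2)))

# 13 loudspeakers spanning a 180-degree frontal arc are 15 degrees apart.
speakers = np.arange(-90, 91, 15)
responses = speakers + np.array([0, 15, -15, 0, 30, 0, 0, -15, 0, 15, 0, 0, -30])
print(f"RMS error = {rms_error_deg(speakers, responses):.1f} deg")
```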


Subject(s)
Cochlear Implants, Sound Localization, Speech Perception, Aged, Aged 80 and over, Speech Audiometry, Ear Auricle, Female, Humans, Male, Middle Aged, Noise/adverse effects, Prospective Studies, Prosthesis Design