1.
Am J Audiol ; : 1-12, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38052055

ABSTRACT

PURPOSE: The U.S. Food and Drug Administration finalized regulations for over-the-counter hearing aids (OTC-HAs) on August 17, 2022. Little is known about the comparative performance of OTC-HAs and prescription HAs. This study compared amplification accuracy of prescription HAs and direct-to-consumer devices (DTCDs, including personal sound amplification products [PSAPs] and OTC-HAs). METHOD: Eleven devices were programmed to meet prescriptive targets on an acoustic manikin for three degrees of hearing loss. Devices consisted of high- and low-end HAs, PSAPs, and OTC-HAs. Each was tested, and deviations from target were measured with an HA analyzer at every combination of 10 frequencies and low-, average-, and high-level inputs. Accuracy was compared using a multilevel Poisson model with device-specific intercepts controlling for input level, frequency, and device type. RESULTS: For mild-moderate hearing loss, deviations from targets were not statistically different between high- and low-end HAs, but PSAPs (5.50 dB, SE = 0.92 dB) and OTC-HAs (8.83 dB, SE = 1.10 dB) had larger differentials. For flat moderate hearing loss, compared to high-end HAs, average differentials were larger for all device types at all input levels and frequencies (Low HA: 3.82 dB, SE = 1.10 dB; PSAP: 9.24 dB, SE = 1.22 dB; OTC-HA: 8.61 dB, SE = 1.19 dB). For mild sloping to severe hearing loss, compared to high-end HAs, OTC-HAs (9.72 dB, SE = 1.20 dB) and PSAPs (7.34 dB, SE = 1.07 dB) had larger differentials and significant variability at the highest and lowest frequencies. Half (three) of the PSAPs and OTC-HAs met most targets within ±5 dB. CONCLUSIONS: DTCDs were unable to meet prescriptive targets for severe types of hearing loss but could meet them for mild hearing loss. This study provides an examination of the amplification accuracy of currently available hearing devices. More research is needed to determine whether meeting prescriptive targets provides any benefit in outcomes and performance with DTCDs.
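The accuracy analysis named above is a Poisson-type regression of deviation-from-target on device type, input level, and frequency, with device-specific intercepts. Below is a minimal sketch of that kind of analysis, assuming simulated data and illustrative column names; device-specific intercepts are approximated with device-clustered standard errors rather than the study's multilevel model.

```python
# Illustrative sketch only: Poisson regression of absolute deviation from
# prescriptive target (whole dB, treated as a count) on device type, input
# level, and frequency, with cluster-robust standard errors by device.
# All data below are simulated; nothing here is the study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
devices = pd.DataFrame({
    "device_id": np.repeat(np.arange(11), 30),
    "device_type": np.repeat(
        ["high_HA"] * 3 + ["low_HA"] * 2 + ["PSAP"] * 3 + ["OTC_HA"] * 3, 30),
    "input_level": np.tile(np.repeat(["low", "avg", "high"], 10), 11),
    "freq_hz": np.tile(
        [250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000], 33),
})
devices["deviation_db"] = rng.poisson(
    lam=devices["device_type"].map(
        {"high_HA": 3, "low_HA": 4, "PSAP": 8, "OTC_HA": 9}))

model = smf.poisson(
    "deviation_db ~ C(device_type, Treatment('high_HA')) "
    "+ C(input_level) + C(freq_hz)",
    data=devices,
).fit(disp=0, cov_type="cluster", cov_kwds={"groups": devices["device_id"]})
print(model.summary())
```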

2.
Brain Topogr ; 36(5): 686-697, 2023 09.
Article in English | MEDLINE | ID: mdl-37393418

ABSTRACT

BACKGROUND: Functional near-infrared spectroscopy (fNIRS) is a viable non-invasive technique for functional neuroimaging in the cochlear implant (CI) population; however, the effects of acoustic stimulus features on the fNIRS signal have not been thoroughly examined. This study examined the effect of stimulus level on fNIRS responses in adults with normal hearing or bilateral CIs. We hypothesized that fNIRS responses would correlate with both stimulus level and subjective loudness ratings, but that the correlation would be weaker with CIs due to the compression of acoustic input to electric output. METHODS: Thirteen adults with bilateral CIs and 16 with normal hearing (NH) completed the study. Signal-correlated noise, a speech-shaped noise modulated by the temporal envelope of speech stimuli, was used to determine the effect of stimulus level with an unintelligible, speech-like stimulus over the range from soft to loud speech. Cortical activity in the left hemisphere was recorded. RESULTS: Results indicated a positive correlation of cortical activation in the left superior temporal gyrus with stimulus level in both NH and CI listeners, with an additional correlation between cortical activity and perceived loudness for the CI group. The results are consistent with the literature and our hypothesis. CONCLUSIONS: These results support the potential of fNIRS to examine auditory stimulus level effects at a group level and the importance of controlling for stimulus level and loudness in speech recognition studies. Further research is needed to better understand cortical activation patterns for speech recognition as a function of both stimulus presentation level and perceived loudness.
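The signal-correlated noise described above (speech-shaped noise carrying the temporal envelope of the speech) can be approximated by shaping random-phase noise to the speech spectrum and multiplying it by the smoothed speech envelope. The sketch below is one such construction; the file name, filter order, 32-Hz envelope cutoff, and spectral-shaping method are assumptions, not the study's stimulus code.

```python
# Hedged sketch of signal-correlated noise: speech-spectrum-shaped noise
# modulated by the speech's temporal envelope. Parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
import soundfile as sf

speech, fs = sf.read("sentence.wav")            # assumed mono float signal

# 1. Temporal envelope: magnitude of the analytic signal, smoothed below 32 Hz.
sos = butter(4, 32, btype="low", fs=fs, output="sos")
env = np.clip(sosfiltfilt(sos, np.abs(hilbert(speech))), 0, None)

# 2. Speech-shaped noise: keep the speech's magnitude spectrum, randomize phase.
spectrum = np.fft.rfft(speech)
phases = np.exp(1j * np.random.uniform(0, 2 * np.pi, spectrum.size))
noise = np.fft.irfft(np.abs(spectrum) * phases, n=speech.size)

# 3. Impose the envelope on the noise and match the original RMS level.
scn = noise / np.sqrt(np.mean(noise ** 2)) * env
scn *= np.sqrt(np.mean(speech ** 2) / np.mean(scn ** 2))

sf.write("scn.wav", scn, fs)
```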


Subject(s)
Auditory Cortex, Cochlear Implants, Speech Perception, Adult, Humans, Spectroscopy, Near-Infrared/methods, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Acoustic Stimulation
3.
Trends Hear ; 27: 23312165231186040, 2023.
Article in English | MEDLINE | ID: mdl-37415497

ABSTRACT

Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at -90°, -36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target-orientation accuracy and, in turn, auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.


Subject(s)
Sound Localization, Speech Perception, Adult, Humans, Speech, Perceptual Masking, Hearing, Hearing Disorders
4.
J Speech Lang Hear Res ; 66(7): 2450-2460, 2023 07 12.
Article in English | MEDLINE | ID: mdl-37257284

ABSTRACT

PURPOSE: Individuals with hearing impairment have higher risks of mental illnesses. We sought to develop a richer understanding of how the presence of any hearing impairment affects three types of mental health services utilization (MHSU; prescription medication, outpatient services, and inpatient services) and perceived unmet needs for mental health care. We also aimed to identify sociodemographic factors associated with outpatient mental health services use among those with hearing impairment and to discuss potential implications under the U.S. health care system. METHOD: Using secondary data from the 2015-2019 National Survey on Drug Use and Health, our study included U.S. adults aged ≥ 18 years who reported serious mental illnesses (SMIs) in the past year. Multivariable logistic regression was used to examine associations of hearing impairment with MHSU and perceived unmet mental health care needs. RESULTS: The study sample comprised 12,541 adults with SMIs. Prevalence of MHSU (medication: 55.5% vs. 57.5%; outpatient: 37.1% vs. 44.2%; inpatient: 6.6% vs. 7.1%) and unmet needs for mental health care (47.5% vs. 43.3%) were estimated among survey respondents who reported hearing impairment and those who did not, respectively. Those with hearing impairment were significantly less likely to report outpatient MHSU (OR = 0.73, 95% CI [0.60, 0.90]). CONCLUSIONS: MHSU was low while perceived unmet needs for mental health care were high among individuals with SMIs, regardless of hearing status. In addition, patients with hearing impairment were significantly less likely to report outpatient MHSU than their counterparts. Enhancing communication is essential to improve access to mental health care for those with hearing impairment.
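The modeling step named above is a multivariable logistic regression. The sketch below illustrates that step with statsmodels and converts coefficients to odds ratios; the variable names and data are hypothetical, and the NSDUH survey weights and design variables that a real analysis must incorporate are deliberately omitted.

```python
# Hypothetical sketch: multivariable logistic regression of outpatient mental
# health services use on hearing impairment plus covariates. Simulated data;
# real NSDUH analyses must apply survey weights and design-based variance.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "outpatient_mh": rng.integers(0, 2, n),    # past-year outpatient MHSU (0/1)
    "hearing_impair": rng.integers(0, 2, n),   # self-reported hearing impairment
    "age_group": rng.choice(["18-25", "26-49", "50+"], n),
    "sex": rng.choice(["male", "female"], n),
    "insured": rng.integers(0, 2, n),
})

model = smf.logit(
    "outpatient_mh ~ hearing_impair + C(age_group) + C(sex) + insured",
    data=df).fit(disp=0)

conf = model.conf_int()                        # 95% CI for each coefficient
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(conf[0]),
    "97.5%": np.exp(conf[1]),
})
print(or_table)
```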


Subject(s)
Hearing Loss, Mental Disorders, Mental Health Services, Adult, Humans, Cross-Sectional Studies, Mental Disorders/epidemiology, Mental Disorders/therapy, Patient Acceptance of Health Care, Hearing Loss/epidemiology, Health Services Accessibility
5.
Ear Hear ; 41(3): 576-590, 2020.
Article in English | MEDLINE | ID: mdl-31436754

ABSTRACT

OBJECTIVES: Single-sided deafness cochlear-implant (SSD-CI) listeners and bilateral cochlear-implant (BI-CI) listeners gain near-normal levels of head-shadow benefit but limited binaural benefits. One possible reason for these limited binaural benefits is that cochlear places of stimulation tend to be mismatched between the ears. SSD-CI and BI-CI patients might benefit from a binaural fitting that reallocates frequencies to reduce interaural place mismatch. However, this approach could reduce monaural speech recognition and head-shadow benefit by excluding low- or high-frequency information from one ear. This study examined how much frequency information can be excluded from a CI signal in the poorer-hearing ear without reducing head-shadow benefits and how these outcomes are influenced by interaural asymmetry in monaural speech recognition. DESIGN: Speech-recognition thresholds for sentences in speech-shaped noise were measured for 6 adult SSD-CI listeners, 12 BI-CI listeners, and 9 normal-hearing listeners presented with vocoder simulations. Stimuli were presented using nonindividualized in-the-ear or behind-the-ear head-related impulse-response simulations, with speech presented from a 70° azimuth on the poorer-hearing side and noise from 70° on the better-hearing side (i.e., from opposite hemifields), thereby yielding a better signal-to-noise ratio (SNR) at the poorer-hearing ear. Head-shadow benefit was computed as the improvement in bilateral speech-recognition thresholds gained from enabling the CI in the poorer-hearing, better-SNR ear. High- or low-pass filtering was systematically applied to the head-related impulse-response-filtered stimuli presented to the poorer-hearing ear. For the SSD-CI listeners and SSD-vocoder simulations, only high-pass filtering was applied, because the CI frequency allocation would never need to be adjusted downward to frequency-match the ears. For the BI-CI listeners and BI-vocoder simulations, both low- and high-pass filtering were applied. The normal-hearing listeners were tested with two levels of performance to examine the effect of interaural asymmetry in monaural speech recognition (vocoder synthesis-filter slopes: 5 or 20 dB/octave). RESULTS: Mean head-shadow benefit was smaller for the SSD-CI listeners (~7 dB) than for the BI-CI listeners (~14 dB). For SSD-CI listeners, frequencies <1236 Hz could be excluded; for BI-CI listeners, frequencies <886 or >3814 Hz could be excluded from the poorer-hearing ear without reducing head-shadow benefit. Bilateral performance showed greater immunity to filtering than monaural performance, with gradual changes in performance as a function of filter cutoff. Real and vocoder-simulated CI users with larger interaural asymmetry in monaural performance had less head-shadow benefit. CONCLUSIONS: The "exclusion frequency" ranges that could be removed without diminishing head-shadow benefit are interpreted in terms of low importance in the speech intelligibility index and a small head-shadow magnitude at low frequencies. Although groups and individuals with greater performance asymmetry gained less head-shadow benefit, the magnitudes of these factors did not predict the exclusion frequency range. Overall, these data suggest that for many SSD-CI and BI-CI listeners, the frequency allocation for the poorer-ear CI can be shifted substantially without sacrificing head-shadow benefit, at least for energetic maskers. Considering the two ears together as a single system may allow greater flexibility in discarding redundant frequency content from a CI in one ear when designing bilateral programming solutions aimed at reducing interaural frequency mismatch.
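Head-shadow benefit as defined above is simply the drop in speech-recognition threshold (in dB SNR, where lower is better) gained by enabling the CI in the poorer-hearing, better-SNR ear. A minimal worked example with made-up SRT values:

```python
# Minimal sketch of the head-shadow benefit calculation described above.
# The SRT values are invented for illustration.
def head_shadow_benefit(srt_better_ear_alone_db: float,
                        srt_both_ears_db: float) -> float:
    """Lower SRT = better performance, so benefit is the drop in SRT (dB)."""
    return srt_better_ear_alone_db - srt_both_ears_db

# Enabling the poorer-ear CI lowers the SRT from +2 to -12 dB SNR:
print(head_shadow_benefit(2.0, -12.0))   # -> 14.0 dB of head-shadow benefit
```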


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Hearing, Humans, Noise
6.
J Acoust Soc Am ; 145(2): 1129, 2019 02.
Article in English | MEDLINE | ID: mdl-30823825

ABSTRACT

This study developed and tested a real-time processing algorithm designed to degrade sound localization (LocDeg algorithm) without affecting binaural benefits for speech reception in noise. Input signals were divided into eight frequency channels. The odd-numbered channels were mixed between the ears to confuse the direction of interaural cues while preserving interaural cues in the even-numbered channels. The LocDeg algorithm was evaluated for normal-hearing listeners performing sound localization and speech-reception tasks. Results showed that the LocDeg algorithm successfully degraded sound-localization performance without affecting speech-reception performance or spatial release from masking for speech in noise. The LocDeg algorithm did, however, degrade speech-reception performance in a task involving spatially separated talkers in a multi-talker environment, which is thought to depend on differences in perceived spatial location of concurrent talkers. This LocDeg algorithm could be a valuable tool for isolating the importance of sound-localization ability from other binaural benefits in real-world environments.
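A hedged sketch of the channel-mixing idea behind the LocDeg algorithm follows. The filter design (fourth-order Butterworth bands with log-spaced edges between assumed limits of 100 and 8000 Hz), the 50/50 mixing rule, and the offline zero-phase filtering are illustrative assumptions; the published algorithm runs in real time and its exact parameters are not given in this abstract.

```python
# Hypothetical sketch of the LocDeg idea: split each ear's signal into eight
# bands and mix the odd-numbered bands between the ears, leaving the
# even-numbered bands (and their interaural cues) intact.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def loc_deg(left, right, fs, f_lo=100.0, f_hi=8000.0, n_ch=8):
    edges = np.geomspace(f_lo, f_hi, n_ch + 1)      # log-spaced band edges
    out_l = np.zeros_like(left, dtype=float)
    out_r = np.zeros_like(right, dtype=float)
    for ch in range(n_ch):
        sos = butter(4, [edges[ch], edges[ch + 1]], btype="band",
                     fs=fs, output="sos")
        band_l = sosfiltfilt(sos, left)
        band_r = sosfiltfilt(sos, right)
        if ch % 2 == 0:          # "odd-numbered" channels 1, 3, 5, 7 (1-based)
            mix = 0.5 * (band_l + band_r)            # same signal to both ears
            out_l += mix
            out_r += mix
        else:                    # even-numbered channels keep interaural cues
            out_l += band_l
            out_r += band_r
    return out_l, out_r
```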


Subject(s)
Auditory Threshold/physiology, Signal Processing, Computer-Assisted, Sound Localization/physiology, Adult, Algorithms, Female, Humans, Male, Noise, Speech Reception Threshold Test, Young Adult
7.
J Speech Lang Hear Res ; 61(5): 1306-1321, 2018 05 17.
Article in English | MEDLINE | ID: mdl-29800361

ABSTRACT

Purpose: The primary purpose of this study was to assess speech understanding in quiet and in diffuse noise for adult cochlear implant (CI) recipients utilizing bimodal hearing or bilateral CIs. Our primary hypothesis was that bilateral CI recipients would demonstrate less effect of source azimuth in the bilateral CI condition due to symmetric interaural head shadow. Method: Sentence recognition was assessed for adult bilateral CI users (n = 25) and bimodal listeners (n = 12) in three conditions: (1) source location certainty regarding fixed target azimuth, (2) source location uncertainty regarding roving target azimuth, and (3) Condition 2 repeated, allowing listeners to turn their heads as needed. Results: (a) Bilateral CI users exhibited relatively similar performance regardless of source azimuth in the bilateral CI condition; (b) bimodal listeners exhibited higher performance for speech directed to the better hearing ear even in the bimodal condition; (c) the unilateral, better ear condition yielded higher performance for speech presented to the better ear versus speech to the front or to the poorer ear; (d) source location certainty did not affect speech understanding performance; and (e) head turns did not improve performance. The results confirmed our hypothesis that bilateral CI users exhibited less effect of source azimuth than bimodal listeners. That is, they exhibited similar performance for speech recognition irrespective of source azimuth, whereas bimodal listeners exhibited significantly poorer performance with speech originating from the poorer hearing ear (typically the nonimplanted ear). Conclusions: Bilateral CI users overcame ear and source location effects observed for the bimodal listeners. Bilateral CI users have access to head shadow on both sides, whereas bimodal listeners generally have interaural asymmetry in both speech understanding and audible bandwidth, limiting the head shadow benefit obtained from the poorer ear (generally the nonimplanted ear). In summary, we found that, in conditions with source location uncertainty and increased ecological validity, bilateral CI performance was superior to bimodal listening.


Subject(s)
Cochlear Implants, Comprehension, Head Movements, Hearing Loss/rehabilitation, Speech Perception, Adult, Aged, Aged, 80 and over, Female, Hearing Loss/psychology, Humans, Male, Middle Aged, Noise, Psychoacoustics, Sound Localization, Uncertainty
8.
Ear Hear ; 38(5): 521-538, 2017.
Article in English | MEDLINE | ID: mdl-28399064

ABSTRACT

Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest-performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting both a sensitive period in development for the brain networks that subserve these integrative functions and an effect of length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.


Subject(s)
Auditory Perception/physiology, Cochlear Implants, Visual Perception/physiology, Adult, Child, Humans, Reaction Time
9.
Ear Hear ; 37(3): 282-8, 2016.
Article in English | MEDLINE | ID: mdl-26901264

ABSTRACT

OBJECTIVES: The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern observed in adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. DESIGN: Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. RESULTS: The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz with most of the additional benefit gained with energy below 750 Hz. CONCLUSIONS: Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. Should the current results generalize to children with CIs, pediatric CI recipients may derive significant benefit from minimal acoustic hearing (<250 Hz) in the nonimplanted ear and increasing benefit with broader bandwidth. Knowledge of the effect of acoustic bandwidth on bimodal benefit in children may help direct clinical decisions regarding a second CI, continued bimodal hearing, and even optimizing acoustic amplification for the nonimplanted ear.
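The stimulus construction described above pairs a CI simulation in one ear with low-pass filtered speech in the other. The sketch below assumes a noise-vocoder CI simulation, since the simulation type is not specified in this abstract, and the channel count, band edges, and envelope cutoff are likewise illustrative assumptions.

```python
# Hedged sketch of a simulated bimodal stimulus: a noise-vocoded "CI ear" and
# a low-pass filtered "acoustic ear". Vocoder parameters are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_ch=8, f_lo=200.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_ch + 1)
    env_sos = butter(4, 50, btype="low", fs=fs, output="sos")
    out = np.zeros_like(x, dtype=float)
    for ch in range(n_ch):
        band_sos = butter(4, [edges[ch], edges[ch + 1]], btype="band",
                          fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        env = np.clip(sosfiltfilt(env_sos, np.abs(hilbert(band))), 0, None)
        carrier = sosfiltfilt(band_sos, np.random.randn(x.size))
        out += env * carrier                  # envelope-modulated noise band
    return out

def low_pass(x, fs, cutoff_hz):
    sos = butter(6, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Usage (speech and fs assumed to be loaded elsewhere):
# right_ear = noise_vocode(speech, fs)             # simulated CI ear
# left_ear = low_pass(speech, fs, cutoff_hz=250)   # 250-1500 Hz cutoffs tested
```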


Subject(s)
Cochlear Implantation, Cochlear Implants, Deafness/rehabilitation, Noise, Speech Perception, Adult, Child, Computer Simulation, Deafness/physiopathology, Female, Humans, Male, Young Adult
10.
Otol Neurotol ; 37(2): e50-5, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26756155

ABSTRACT

OBJECTIVE: The primary goal was to establish normative data for the Pediatric AzBio "BabyBio," QuickSIN, and BKB-SIN measures in the sound field for children with normal hearing. SETTING: Tertiary care hospital; cochlear implant (CI) program. PATIENTS: Forty-one children with normal hearing were recruited across four age groups (5-6, 7-8, 9-10, and 11-12 yr). INTERVENTIONS: Sentence recognition testing was assessed at four different signal-to-noise ratios (SNRs: +10, +5, 0, and -5 dB) for BabyBio sentences as well as for the BKB-SIN and QuickSIN tests. All measures were presented in the sound field at 60 dBA except QuickSIN, which was presented at 70 dBA. MAIN OUTCOME MEASURES: BabyBio sentence recognition, BKB-SIN SNR-50, and QuickSIN SNR-50 were analyzed to establish sound field norms. RESULTS: BabyBio sentence recognition approached ceiling at all SNRs with mean scores ranging from 86% at -5 dB SNR to 99.3% at +10 dB SNR. Mean QuickSIN SNR-50 was 6.6 dB. Mean BKB-SIN SNR-50 was 1.6 dB with sound field data being consistent with insert earphone normative data in the BKB-SIN manual. Performance for all measures improved with age. CONCLUSION: Children with normal hearing achieve ceiling-level performance for BabyBio sentence recognition at SNRs used for clinical CI testing (≥ 0 dB SNR) and approach ceiling level even at -5 dB SNR. Consistent with previous reports, speech recognition in noise improved with age from 5 to 12 years in children with normal hearing. Thus, speech recognition in noise might also increase in the CI population across the same age range, warranting age-specific norms for CI recipients. Last, the QuickSIN test could be substituted for the BKB-SIN test with appropriate age-normative data.
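For readers unfamiliar with the SNR-50 values reported above, one generic way to estimate an SNR-50 from percent-correct scores at several fixed SNRs is linear interpolation of the psychometric function, sketched below. The BKB-SIN and QuickSIN tests use their own scoring formulas from their manuals, which are not reproduced here, and the example scores are made up.

```python
# Generic sketch: estimate the SNR yielding 50% correct by interpolating
# percent-correct scores measured at several fixed SNRs.
import numpy as np

def snr_50(snrs_db, percent_correct):
    """Linear interpolation of the psychometric function at 50% correct."""
    order = np.argsort(percent_correct)
    return float(np.interp(50.0,
                           np.asarray(percent_correct, dtype=float)[order],
                           np.asarray(snrs_db, dtype=float)[order]))

# Made-up scores at the four SNRs used in the study:
print(snr_50([-5, 0, 5, 10], [40, 65, 88, 99]))   # ≈ -3.0 dB SNR
```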


Subject(s)
Speech Discrimination Tests/standards, Speech Perception, Speech Reception Threshold Test/standards, Child, Child, Preschool, Cochlear Implantation, Cochlear Implants, Female, Humans, Language, Male, Noise, Reference Values, Signal-to-Noise Ratio, Speech, Speech Discrimination Tests/methods, Speech Reception Threshold Test/methods
11.
J Am Acad Audiol ; 26(3): 289-98, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25751696

ABSTRACT

BACKGROUND: Bilateral implant recipients theoretically have access to binaural cues. Research in postlingually deafened adults with cochlear implants (CIs) indicates minimal evidence for true binaural hearing. Congenitally deafened children who experience spatial hearing with bilateral CIs, however, might perceive binaural cues in the CI signal differently. There is limited research examining binaural hearing in children with CIs, and the few published studies are limited by the use of unrealistic speech stimuli and background noise. PURPOSE: The purposes of this study were to (1) replicate our previous study of binaural hearing in postlingually deafened adults with AzBio sentences in prelingually deafened children with the pediatric version of the AzBio sentences, and (2) replicate previous studies of binaural hearing in children with CIs using more open-set sentences and more realistic background noise (i.e., multitalker babble). RESEARCH DESIGN: The study was a within-participant, repeated-measures design. STUDY SAMPLE: The study sample consisted of 14 children with bilateral CIs with at least 25 mo of listening experience. DATA COLLECTION AND ANALYSIS: Speech recognition was assessed using sentences presented in multitalker babble at a fixed signal-to-noise ratio. Test conditions included speech at 0° with noise presented at 0° (S0N0), on the side of the first CI (90° or 270°) (S0N1stCI), and on the side of the second CI (S0N2ndCI) as well as speech presented at 0° with noise presented semidiffusely from eight speakers at 45° intervals. Estimates of summation, head shadow, squelch, and spatial release from masking were calculated. RESULTS: Results of test conditions commonly reported in the literature (S0N0, S0N1stCI, S0N2ndCI) are consistent with results from previous research in adults and children with bilateral CIs, showing minimal summation and squelch but typical head shadow and spatial release from masking. However, bilateral benefit over the better CI with speech at 0° was much larger with semidiffuse noise. CONCLUSIONS: Congenitally deafened children with CIs have similar availability of binaural hearing cues to postlingually deafened adults with CIs within the same experimental design. It is possible that the use of realistic listening environments, such as semidiffuse background noise as in Experiment II, would reveal greater binaural hearing benefit for bilateral CI recipients. Future research is needed to determine whether (1) availability of binaural cues for children correlates with interaural time and level differences, (2) different listening environments are more sensitive to binaural hearing benefits, and (3) differences exist between pediatric bilateral recipients receiving implants in the same or sequential surgeries.
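For readers unfamiliar with the four estimates named above (summation, head shadow, squelch, and spatial release from masking), the sketch below shows how such estimates are commonly derived from fixed-SNR percent-correct scores. The scores, condition labels, and exact contrasts are assumptions based on standard definitions in the bilateral-CI literature, not the study's computations.

```python
# Hypothetical sketch of common binaural-effect estimates from percent-correct
# scores at a fixed SNR. "CI1"/"CI2" denote the first and second implants;
# noise locations are labeled by the side they fall on. All scores are made up.
scores = {
    ("both", "front"): 72, ("both", "CI1_side"): 70, ("both", "CI2_side"): 75,
    ("CI1",  "front"): 66, ("CI1",  "CI1_side"): 52, ("CI1",  "CI2_side"): 71,
    ("CI2",  "front"): 60,
}

# Summation: both ears vs. the better single ear, speech and noise at the front.
summation = scores[("both", "front")] - max(scores[("CI1", "front")],
                                            scores[("CI2", "front")])

# Head shadow (for CI1): noise moved from the CI1 side to the opposite side.
head_shadow = scores[("CI1", "CI2_side")] - scores[("CI1", "CI1_side")]

# Squelch: adding the ear nearer the noise (CI2) when noise is on its side.
squelch = scores[("both", "CI2_side")] - scores[("CI1", "CI2_side")]

# Spatial release from masking: separated vs. co-located noise, both ears on.
srm = scores[("both", "CI2_side")] - scores[("both", "front")]

print(summation, head_shadow, squelch, srm)   # 6 19 4 3 (percentage points)
```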


Subject(s)
Auditory Perception/physiology, Cochlear Implantation, Cochlear Implants, Cues, Deafness/therapy, Hearing Loss, Bilateral/therapy, Adolescent, Child, Child, Preschool, Deafness/physiopathology, Deafness/psychology, Environment, Female, Hearing Loss, Bilateral/physiopathology, Hearing Loss, Bilateral/psychology, Humans, Infant, Male, Noise
12.
J Am Acad Audiol ; 26(2): 145-54, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25690775

ABSTRACT

BACKGROUND: With improved surgical techniques and electrode design, an increasing number of cochlear implant (CI) recipients have preserved acoustic hearing in the implanted ear, thereby resulting in bilateral acoustic hearing. There are currently no guidelines, however, for clinicians with respect to audiometric criteria and the recommendation of amplification in the implanted ear. The acoustic bandwidth necessary to obtain speech perception benefit from acoustic hearing in the implanted ear is unknown. Additionally, it is important to determine if, and in which listening environments, acoustic hearing in both ears provides more benefit than hearing in just one ear, even with limited residual hearing. PURPOSE: The purposes of this study were to (1) determine whether acoustic hearing in an ear with a CI provides as much speech perception benefit as an equivalent bandwidth of acoustic hearing in the nonimplanted ear, and (2) determine whether acoustic hearing in both ears provides more benefit than hearing in just one ear. RESEARCH DESIGN: A repeated-measures, within-participant design was used to compare performance across listening conditions. STUDY SAMPLE: Seven adults with CIs and bilateral residual acoustic hearing (hearing preservation) were recruited for the study. DATA COLLECTION AND ANALYSIS: Consonant-nucleus-consonant word recognition was tested in four conditions: CI alone, CI + acoustic hearing in the nonimplanted ear, CI + acoustic hearing in the implanted ear, and CI + bilateral acoustic hearing. A series of low-pass filters were used to examine the effects of acoustic bandwidth through an insert earphone with amplification. Benefit was defined as the difference among conditions. The benefit of bilateral acoustic hearing was tested in both diffuse and single-source background noise. Results were analyzed using repeated-measures analysis of variance. RESULTS: Similar benefit was obtained for equivalent acoustic frequency bandwidth in either ear. Acoustic hearing in the nonimplanted ear provided more benefit than the implanted ear only in the wideband condition, most likely because of better audiometric thresholds (>500 Hz) in the nonimplanted ear. Bilateral acoustic hearing provided more benefit than unilateral hearing in either ear alone, but only in diffuse background noise. CONCLUSIONS: Results support the use of amplification in the implanted ear if residual hearing is present. The benefit of bilateral acoustic hearing (hearing preservation) should not be tested in quiet or with spatially coincident speech and noise, but rather in spatially separated speech and noise (e.g., diffuse background noise).
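A minimal sketch of the comparison-across-conditions analysis described above follows, using statsmodels' repeated-measures ANOVA; the participant scores, condition labels, and the specific benefit contrast shown are hypothetical.

```python
# Hypothetical sketch: CNC word scores for 7 participants in four listening
# conditions, compared with a one-way repeated-measures ANOVA.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

conditions = ["CI_alone", "CI_plus_nonimplanted", "CI_plus_implanted",
              "CI_plus_bilateral"]
scores = {                                   # % correct, invented values
    "CI_alone":             [48, 52, 60, 55, 40, 63, 50],
    "CI_plus_nonimplanted": [58, 60, 68, 62, 50, 70, 59],
    "CI_plus_implanted":    [55, 59, 66, 60, 47, 69, 57],
    "CI_plus_bilateral":    [62, 65, 72, 66, 55, 74, 63],
}
df = pd.DataFrame([{"subject": s, "condition": c, "score": scores[c][s]}
                   for s in range(7) for c in conditions])

# Benefit defined as a difference among conditions, e.g. bilateral minus CI-alone.
wide = df.pivot(index="subject", columns="condition", values="score")
print((wide["CI_plus_bilateral"] - wide["CI_alone"]).mean())

res = AnovaRM(df, depvar="score", subject="subject", within=["condition"]).fit()
print(res.anova_table)
```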


Subject(s)
Cochlear Implantation, Cochlear Implants, Hearing Loss, Sensorineural/physiopathology, Hearing Loss, Sensorineural/therapy, Hearing Loss, Unilateral/physiopathology, Speech Perception/physiology, Adult, Aged, Aged, 80 and over, Auditory Threshold/physiology, Cohort Studies, Female, Hearing Aids, Hearing Loss, Unilateral/therapy, Humans, Male, Middle Aged
13.
J Am Acad Audiol ; 26(1): 51-8; quiz 109-10, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25597460

ABSTRACT

BACKGROUND: Despite improvements in cochlear implants (CIs), CI recipients continue to experience significant communicative difficulty in background noise. Many potential solutions have been proposed to help increase signal-to-noise ratio in noisy environments, including signal processing and external accessories. To date, however, the effect of microphone location on speech recognition in noise has focused primarily on hearing aid users. PURPOSE: The purpose of this study was (1) to measure physical output for the T-Mic as compared with the integrated behind-the-ear (BTE) processor mic for various source azimuths, and (2) to investigate the effect of CI processor mic location for speech recognition in semi-diffuse noise with speech originating from various source azimuths as encountered in everyday communicative environments. RESEARCH DESIGN: A repeated-measures, within-participant design was used to compare performance across listening conditions. STUDY SAMPLE: A total of 11 adults with Advanced Bionics CIs were recruited for this study. DATA COLLECTION AND ANALYSIS: Physical acoustic output was measured on a Knowles Electronics Manikin for Acoustic Research (KEMAR) for the T-Mic and BTE mic, with broadband noise presented at 0 and 90° (directed toward the implant processor). In addition to physical acoustic measurements, we also assessed recognition of sentences constructed by researchers at Texas Instruments, the Massachusetts Institute of Technology, and the Stanford Research Institute (TIMIT sentences) at 60 dBA for speech source azimuths of 0, 90, and 270°. Sentences were presented in a semi-diffuse restaurant noise originating from the R-SPACE 8-loudspeaker array. Signal-to-noise ratio was determined individually to achieve approximately 50% correct in the unilateral implanted listening condition with speech at 0°. Performance was compared across the T-Mic, 50/50, and the integrated BTE processor mic. RESULTS: The integrated BTE mic provided approximately 5 dB attenuation from 1500-4500 Hz for signals presented at 0° as compared with 90° (directed toward the processor). The T-Mic output was essentially equivalent for sources originating from 0 and 90°. Mic location also significantly affected sentence recognition as a function of source azimuth, with the T-Mic yielding the highest performance for speech originating from 0°. CONCLUSIONS: These results have clinical implications for (1) future implant processor design with respect to mic location, (2) mic settings for implant recipients, and (3) execution of advanced speech testing in the clinic.


Subject(s)
Auditory Threshold, Cochlear Implants, Hearing Loss/surgery, Noise, Sound Localization/physiology, Speech Perception/physiology, Child, Preschool, Female, Hearing Loss/physiopathology, Humans, Male, Prosthesis Design, Signal Processing, Computer-Assisted, Speech Reception Threshold Test
14.
Hear Res ; 312: 28-37, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24607490

ABSTRACT

The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from -90 to +90°. Three listening conditions were tested: bilateral hearing aids, bimodal (implant + contralateral hearing aid), and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100-900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients, ranging from within the normal range to values larger than any ITD present in real-world listening environments (range: 43 to over 1600 µs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear.
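The overall rms localization error reported above is the root-mean-square difference, in degrees, between loudspeaker (target) and response azimuths across trials. A minimal sketch with invented responses:

```python
# Minimal sketch of overall rms localization error; responses are made up.
import numpy as np

def rms_localization_error(target_deg, response_deg):
    t = np.asarray(target_deg, dtype=float)
    r = np.asarray(response_deg, dtype=float)
    return float(np.sqrt(np.mean((r - t) ** 2)))

targets = [-90, -45, 0, 45, 90]
responses = [-30, -50, 10, 80, 60]            # hypothetical bimodal responses
print(rms_localization_error(targets, responses))   # ≈ 34.2°
```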


Subject(s)
Auditory Threshold/physiology, Cochlear Implantation, Cochlear Implants, Hearing/physiology, Reaction Time/physiology, Sound Localization/physiology, Acoustic Stimulation/methods, Aged, Aged, 80 and over, Hearing Aids, Hearing Tests, Humans, Middle Aged, Noise, Speech Perception/physiology
15.
Audiol Neurootol ; 19(3): 151-63, 2014.
Article in English | MEDLINE | ID: mdl-24556850

ABSTRACT

We examined the effects of acoustic bandwidth on bimodal benefit for speech recognition in adults with a cochlear implant (CI) in one ear and low-frequency acoustic hearing in the contralateral ear. The primary aims were to (1) replicate Zhang et al. [Ear Hear 2010;31:63-69] with a steeper filter roll-off to examine the low-pass bandwidth required to obtain bimodal benefit for speech recognition and expand results to include different signal-to-noise ratios (SNRs) and talker genders, (2) determine whether the bimodal benefit increased with acoustic low-pass bandwidth and (3) determine whether an equivalent bimodal benefit was obtained with acoustic signals of similar low-pass and pass band bandwidth, but different center frequencies. Speech recognition was assessed using words presented in quiet and sentences in noise (+10, +5 and 0 dB SNRs). Acoustic stimuli presented to the nonimplanted ear were filtered into the following bands: <125, 125-250, <250, 250-500, <500, 250-750, <750 Hz and wide-band (full, nonfiltered bandwidth). The primary findings were: (1) the minimum acoustic low-pass bandwidth that produced a significant bimodal benefit was <250 Hz for male talkers in quiet and for female talkers in multitalker babble, but <125 Hz for male talkers in background noise, and the observed bimodal benefit did not vary significantly with SNR; (2) the bimodal benefit increased systematically with acoustic low-pass bandwidth up to <750 Hz for a male talker in quiet and female talkers in noise and up to <500 Hz for male talkers in noise, and (3) a similar bimodal benefit was obtained with low-pass and band-pass-filtered stimuli with different center frequencies (e.g. <250 vs. 250-500 Hz), meaning multiple frequency regions contain useful cues for bimodal benefit. Clinical implications are that (1) all aidable frequencies should be amplified in individuals with bimodal hearing, and (2) verification of audibility at 125 Hz is unnecessary unless it is the only aidable frequency.


Subject(s)
Auditory Threshold/physiology, Cochlear Implants, Hearing Loss, Sensorineural/physiopathology, Hearing Loss, Unilateral/physiopathology, Speech Perception/physiology, Acoustic Stimulation, Adult, Aged, 80 and over, Cochlear Implantation, Female, Hearing Loss, Sensorineural/surgery, Hearing Loss, Unilateral/surgery, Hearing Tests, Humans, Male, Middle Aged
16.
Audiol Neurootol ; 19(1): 57-71, 2014.
Article in English | MEDLINE | ID: mdl-24356514

ABSTRACT

The purpose of this study was to examine the availability of binaural cues for adult, bilateral cochlear implant (CI) patients, bimodal patients and hearing preservation patients using a multiple-baseline, observational study design. Speech recognition was assessed using the Bamford-Kowal-Bench Speech-in-Noise (BKB-SIN) test as well as the AzBio sentences [Spahr AJ, et al: Ear Hear 2012;33:112-117] presented in a multi-talker babble at a +5 dB signal-to-noise ratio (SNR). Test conditions included speech at 0° with noise presented at 0° (S0N0), 90° (S0N90) and 270° (S0N270). Estimates of summation, head shadow (HS), squelch and spatial release from masking (SRM) were calculated. Though none of the subject groups consistently showed access to binaural cues, the hearing preservation patients exhibited a significant correlation between summation and squelch, whereas the bilateral and bimodal participants did not. That is to say, the two effects associated with binaural hearing - summation and squelch - were positively correlated only for the listeners with bilateral acoustic hearing. This finding provides evidence for the supposition that implant recipients with bilateral acoustic hearing have access to binaural cues, which should, in theory, provide greater benefit in noisy listening environments. It is likely, however, that the chosen test environment negatively affected the outcomes. Specifically, the spatially separated noise conditions directed noise toward the microphone (mic) port of the behind-the-ear (BTE) hearing aid and implant processor. Thus, it is possible that in more realistic listening environments for which the diffuse noise is not directed toward the processor/hearing aid mic, hearing preservation patients have binaural cues for improved speech understanding.


Subject(s)
Auditory Threshold/physiology, Cochlear Implantation, Cochlear Implants, Hearing Loss, Sensorineural/physiopathology, Speech Perception/physiology, Adult, Aged, Cues, Female, Hearing Loss, Sensorineural/surgery, Hearing Tests, Humans, Male, Middle Aged, Sound Localization/physiology, Young Adult