Results 1 - 20 of 38
1.
Front Pediatr ; 11: 1252452, 2023.
Article in English | MEDLINE | ID: mdl-38078311

ABSTRACT

Introduction: This study evaluated the ability of children (8-12 years) with mild bilateral or unilateral hearing loss (MBHL/UHL) listening unaided, or normal hearing (NH) to locate and understand talkers in varying auditory/visual acoustic environments. Potential differences across hearing status were examined. Methods: Participants heard sentences presented by female talkers from five surrounding locations in varying acoustic environments. A localization-only task included two conditions (auditory only, visually guided auditory) in three acoustic environments (favorable, typical, poor). Participants were asked to locate each talker. A speech perception task included four conditions [auditory-only, visually guided auditory, audiovisual, auditory-only from 0° azimuth (baseline)] in a single acoustic environment. Participants were asked to locate talkers, then repeat what was said. Results: In the localization-only task, participants were better able to locate talkers and looking times were shorter with visual guidance to talker location. Correct looking was poorest and looking times longest in the poor acoustic environment. There were no significant effects of hearing status/age. In the speech perception task, performance was highest in the audiovisual condition and was better in the visually guided and auditory-only conditions than in the baseline condition. Although audiovisual performance was best overall, children with MBHL or UHL performed more poorly than peers with NH. Better-ear pure-tone averages for children with MBHL had a greater effect on keyword understanding than did poorer-ear pure-tone averages for children with UHL. Conclusion: Although children could locate talkers more easily and quickly with visual information, finding locations alone did not improve speech perception. 
Best speech perception occurred in the audiovisual condition; however, poorer performance by children with MBHL or UHL suggested that being able to see talkers did not overcome reduced auditory access. Children with UHL exhibited better speech perception than children with MBHL, supporting benefits of NH in at least one ear.

2.
Semin Hear ; 44(Suppl 1): S36-S48, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36970648

ABSTRACT

Numerous studies have shown that children with mild bilateral (MBHL) or unilateral hearing loss (UHL) experience speech perception difficulties in poor acoustics. Much of the research in this area has been conducted via laboratory studies using speech-recognition tasks with a single talker and presentation via earphones and/or from a loudspeaker located directly in front of the listener. Real-world speech understanding is more complex, however, and these children may need to exert greater effort than their peers with normal hearing to understand speech, potentially impacting progress in a number of developmental areas. This article discusses issues and research relative to speech understanding in complex environments for children with MBHL or UHL and implications for real-world listening and understanding.

3.
J Am Acad Audiol ; 2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36577441

ABSTRACT

BACKGROUND: Remote-microphone (RM) systems are designed to reduce the impact of poor acoustics on speech understanding. However, there is limited research examining the effects of adding reverberation to noise on speech understanding when using hearing aids (HAs) and RM systems. Given the significant challenges posed by environments with poor acoustics for children who are hard of hearing, we evaluated the ability of a novel RM system to address the effects of noise and reverberation. PURPOSE: We assessed the effect of a recently developed RM system on aided speech perception of children who were hard of hearing in noise and reverberation and how their performance compared to peers with "normal" hearing. The effect of aided speech audibility on sentence recognition when using an RM system also was assessed. STUDY SAMPLE: Twenty-two children with mild to severe hearing loss and 17 children with "normal" hearing (7-18 years) participated. DATA COLLECTION AND ANALYSIS: An adaptive procedure was used to determine the signal-to-noise ratio for 50 and 95% correct sentence recognition in noise and noise plus reverberation (RT 300 ms). Linear mixed models were used to examine the effect of listening conditions on speech recognition with RMs for children who were hard of hearing compared to children with "normal" hearing and the effects of aided audibility on performance across all listening conditions for children who were hard of hearing. RESULTS: Children who were hard of hearing had poorer speech recognition for HAs alone than for HAs plus RM. Regardless of hearing status, children had poorer speech recognition in noise plus reverberation than in noise alone. Children who were hard of hearing had poorer speech recognition than peers with "normal" hearing when using HAs alone but comparable or better speech recognition with HAs plus RM. Children with better aided audibility with the HAs showed better speech recognition with the HAs alone and with HAs plus RM.
CONCLUSIONS: Providing HAs that maximize speech audibility and coupling them with RM systems has the potential to improve communication access and outcomes for children who are hard of hearing in environments with noise and reverberation.
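The abstract above does not specify the tracking rule behind its adaptive procedure. Purely as an illustrative sketch, a one-down/one-up staircase, which converges on the 50%-correct point, could look like the following (all names and parameter values are hypothetical, not taken from the study):

```python
def estimate_snr50(respond, start_snr=10.0, step=2.0, n_reversals=8):
    """Sketch of a one-down/one-up adaptive staircase.

    `respond(snr)` returns True when the listener repeats the sentence
    correctly at that signal-to-noise ratio. The SNR drops after a
    correct response and rises after an error, so the track oscillates
    around the 50%-correct point (SNR50).
    """
    snr, direction, reversals = start_snr, None, []
    while len(reversals) < n_reversals:
        new_direction = -1 if respond(snr) else +1
        if direction is not None and new_direction != direction:
            reversals.append(snr)  # the track changed direction here
        direction = new_direction
        snr += new_direction * step
    # Discard the first reversals, average the rest to estimate SNR50.
    return sum(reversals[2:]) / len(reversals[2:])
```

A 95%-correct target, as also measured in the study, would instead need an asymmetric up-down rule or weighted step sizes; the actual step sizes and stopping rules used in the study may differ from this sketch.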

4.
J Speech Lang Hear Res ; 63(7): 2468-2482, 2020 07 20.
Article in English | MEDLINE | ID: mdl-32574079

ABSTRACT

Objective The purpose of this study was to evaluate the effects of hearing aid-based rerouting systems (remote microphone [RM] and contralateral routing of signals [CROS]) on speech recognition and comprehension for children with limited usable hearing unilaterally. A secondary purpose was to evaluate students' perceptions of CROS benefits in classrooms. Method Twenty children aged 10-16 years with limited useable hearing in one ear completed tasks of sentence recognition and comprehension in a laboratory. For both tasks, speech was presented from one of four loudspeakers in an interleaved fashion. Speech loudspeakers were either midline, monaural direct, or monaural indirect, and noise loudspeakers surrounded the participant. Throughout testing, the RM was always near the midline loudspeaker. Six established users of CROS systems completed a newly developed questionnaire that queried experiences in diverse listening situations. Results There were no effects of RM or CROS use on performance for speech presented from front or monaural direct loudspeakers. However, for monaural indirect loudspeakers, CROS improved sentence recognition and RM impaired recognition. In the comprehension task, CROS improved comprehension by 11 rationalized arcsine units, but RM did not affect comprehension. Questionnaire results demonstrated that students report CROS benefits for talkers in the front and from the side, but not for situations requiring localization. Conclusions The results support CROS benefits without CROS disadvantages in a laboratory environment that reflects a dynamic classroom. Thus, CROS systems have the potential to improve hearing in contemporary classrooms for students, especially if there is only a single microphone.
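The 11-unit comprehension benefit above is reported in rationalized arcsine units (RAU), a variance-stabilizing transform of percent-correct scores commonly attributed to Studebaker (1985). A minimal sketch of the commonly cited form, with a hypothetical function name:

```python
import math

def rau(correct, n):
    """Rationalized arcsine transform of `correct` out of `n` trials.

    The arcsine step stabilizes variance near 0% and 100%; the linear
    rescaling makes mid-range values track percent correct (roughly
    -23 at 0 correct and roughly 123 at n correct).
    """
    theta = (math.asin(math.sqrt(correct / (n + 1)))
             + math.asin(math.sqrt((correct + 1) / (n + 1))))
    return (146.0 / math.pi) * theta - 23.0
```

Near 50% the transformed score is close to the raw percentage, so an 11-RAU difference is comparable in size to an 11-point difference in mid-range percent correct.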


Subject(s)
Hearing Aids; Sound Localization; Speech Perception; Child; Comprehension; Humans; Noise; Speech
5.
Am J Audiol ; 29(2): 244-258, 2020 Jun 08.
Article in English | MEDLINE | ID: mdl-32250641

ABSTRACT

Purpose The primary purpose of this study was to explore the efficacy of using virtual reality (VR) technology in hearing research with children by comparing speech perception abilities in a typical laboratory environment and a simulated VR classroom environment. Method The study included 48 final participants (40 children and eight young adults). The study design utilized a speech perception task in conjunction with a localization demand in auditory-only (AO) and auditory-visual (AV) conditions. Tasks were completed in simulated classroom acoustics in both a typical laboratory environment and in a virtual classroom environment accessed using an Oculus Rift head-mounted display. Results Speech perception scores were higher for AV conditions over AO conditions across age groups. In addition, interaction effects of environment (i.e., laboratory environment and VR classroom environment) and visual accessibility (i.e., AV vs. AO) indicated that children's performance on the speech perception task in the VR classroom was more similar to their performance in the laboratory environment for AV tasks than it was for AO tasks. AO tasks showed improvement in speech perception scores from the laboratory to the VR classroom environment, whereas AV conditions showed little significant change. Conclusion These results suggest that VR head-mounted displays are a viable research tool in AV tasks for children, increasing flexibility for audiovisual testing in a typical laboratory environment.


Subject(s)
Acoustic Stimulation/methods; Photic Stimulation/methods; Speech Perception; Virtual Reality; Adult; Child; Female; Hearing Tests/methods; Humans; Male; Schools; Sound Localization; Young Adult
6.
Lang Speech Hear Serv Sch ; 51(1): 55-67, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31913801

ABSTRACT

Purpose Because of uncertainty about the level of hearing where hearing aids should be provided to children, the goal of the current study was to develop audibility-based hearing aid candidacy criteria based on the relationship between unaided hearing and language outcomes in a group of children with hearing loss who did not wear hearing aids. Method Unaided hearing and language outcomes were examined for 52 children with mild-to-severe hearing losses. A group of 52 children with typical hearing matched for age, nonverbal intelligence, and socioeconomic status was included as a comparison group representing the range of optimal language outcomes. Two audibility-based criteria were considered: (a) the level of unaided hearing where unaided children with hearing loss fell below the median for children with typical hearing and (b) the level of unaided hearing where the slope of language outcomes changed significantly based on an iterative, piecewise regression modeling approach. Results The level of unaided audibility for children with hearing loss that was associated with differences in language development from children with typical hearing or based on the modeling approach varied across outcomes and criteria but converged at an unaided speech intelligibility index of 80. Conclusions Children with hearing loss who have unaided speech intelligibility index values less than 80 may be at risk for delays in language development without hearing aids. The unaided speech intelligibility index potentially could be used as a clinical criterion for hearing aid fitting candidacy for children with hearing loss.
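The study's second criterion locates the unaided speech intelligibility index (reported there on a 0-100 scale) where the slope of the language-outcome function changes. The published analysis used an iterative, piecewise regression model; purely to illustrate the idea, a grid search over candidate breakpoints that fits a separate least-squares line on each side could look like this (all names hypothetical):

```python
def fit_breakpoint(x, y, candidates):
    """Return the candidate breakpoint giving the lowest total squared
    error when a separate least-squares line is fit on each side."""
    def sse(points):
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((a - mx) ** 2 for a in xs)
        slope = (sum((a - mx) * (b - my) for a, b in points) / sxx
                 if sxx else 0.0)
        return sum((b - (my + slope * (a - mx))) ** 2 for a, b in points)

    best_bp, best_err = None, None
    for bp in candidates:
        left = [(a, b) for a, b in zip(x, y) if a <= bp]
        right = [(a, b) for a, b in zip(x, y) if a > bp]
        if len(left) < 2 or len(right) < 2:
            continue  # need points on both sides to fit two lines
        err = sse(left) + sse(right)
        if best_err is None or err < best_err:
            best_bp, best_err = bp, err
    return best_bp
```

This sketch omits much of what the real analysis would include (iterative refinement of the knot, uncertainty around it, and covariates), but it shows how a slope-change criterion can be reduced to a model-comparison search.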


Subject(s)
Hearing Aids; Hearing Loss, Bilateral/rehabilitation; Hearing Tests/standards; Language Development; Speech Intelligibility; Speech Perception; Acoustics; Audiometry; Child; Child, Preschool; Deafness; Female; Humans; Intelligence; Language; Male; Treatment Outcome
7.
Lang Speech Hear Serv Sch ; 51(1): 98-102, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31913804

ABSTRACT

Purpose This epilogue discusses messages that we can take forward from the articles in the forum. A common theme throughout the forum is the ongoing need for research. The forum begins with evidence of potential progressive hearing loss in infants with mild bilateral hearing loss, who may be missed by current newborn hearing screening protocols, and supports the need for consensus regarding early identification in this population. Consensus regarding management similarly is a continuing need. Three studies add to the growing body of evidence that children with mild bilateral or unilateral hearing loss are at risk for difficulties in speech understanding in adverse environments, as well as delays in language and cognition, and that difficulties may persist beyond early childhood. Ambivalence regarding if and when children with mild bilateral or unilateral hearing loss should be fitted with personal amplification also impacts management decisions. Two articles address current evidence and support the need for further research into factors influencing decisions regarding amplification in these populations. A third article examines new criteria to determine hearing aid candidacy in children with mild hearing loss. The final contribution in this forum discusses listening-related fatigue in children with unilateral hearing loss. The absence of research specific to this population is evidence for the need for further investigation. Ongoing research that addresses difficulties experienced by children with mild bilateral and unilateral hearing loss and potential management options can help guide us toward interventions that are specific for the needs of these children.


Subject(s)
Audiology/methods; Hearing Aids; Hearing Loss, Bilateral/epidemiology; Hearing Loss, Bilateral/rehabilitation; Hearing Loss, Unilateral/epidemiology; Hearing Loss, Unilateral/rehabilitation; Speech; Child; Child, Preschool; Hearing Loss, Bilateral/diagnosis; Hearing Loss, Unilateral/diagnosis; Humans; Infant; Severity of Illness Index
8.
Ear Hear ; 41(4): 790-803, 2020.
Article in English | MEDLINE | ID: mdl-31584502

ABSTRACT

OBJECTIVES: Unilateral hearing loss increases the risk of academic and behavioral challenges for school-aged children. Previous research suggests that remote microphone (RM) systems offer the most consistent benefits for children with unilateral hearing loss in classroom environments relative to other nonsurgical interventions. However, generalizability of previous laboratory work is limited because of the specific listening situations evaluated, which often included speech and noise signals originating from the side. In addition, early studies focused on speech recognition tasks requiring limited cognitive engagement. However, those laboratory conditions do not reflect characteristics of contemporary classrooms, which are cognitively demanding and typically include multiple talkers of interest in relatively diffuse background noise. The purpose of this study was to evaluate the potential effects of rerouting amplification systems, specifically a RM system and a contralateral routing of signal (CROS) system, on speech recognition and comprehension of school-age children in a laboratory environment designed to emulate the dynamic characteristics of contemporary classrooms. It was expected that listeners would benefit from the CROS system when the head shadow limits audibility (e.g., monaural indirect listening). It was also expected that listeners would benefit from the RM system only when the RM was near the talker of interest. DESIGN: Twenty-one children (10 to 14 years, M = 11.86) with normal hearing participated in laboratory tests of speech recognition and comprehension. Unilateral hearing loss was simulated by presenting speech-shaped masking noise to one ear via an insert earphone. Speech stimuli were presented from 1 of 4 loudspeakers located at either 0°, +45°, -90°, and -135° or 0°, -45°, +90°, and +135°. Cafeteria noise was presented from separate loudspeakers surrounding the listener. 
Participants repeated sentences (sentence recognition) and also answered questions after listening to an unfamiliar story (comprehension). They were tested unaided, with a RM system (microphone near the front loudspeaker), and with a CROS system (ear-level microphone on the ear with simulated hearing loss). RESULTS: Relative to unaided listening, both rerouting systems reduced sentence recognition performance for most signals originating near the ear with normal hearing (monaural direct loudspeakers). Only the RM system improved speech recognition for midline signals, which were near the RM. Only the CROS system significantly improved speech recognition for signals originating near the ear with simulated hearing loss (monaural indirect loudspeakers). Although the benefits were generally small (approximately 6.5 percentage points), the CROS system also improved comprehension scores, which reflect overall listening across all four loudspeakers. Conversely, the RM system did not improve comprehension scores relative to unaided listening. CONCLUSIONS: Benefits of the CROS system in this study were small, specific to situations where speech is directed toward the ear with hearing loss, and relative only to a RM system utilizing one microphone. Although future study is warranted to evaluate the generalizability of the findings, the data demonstrate both CROS and RM systems are nonsurgical interventions that have the potential to improve speech recognition and comprehension for children with limited useable unilateral hearing in dynamic, noisy classroom situations.


Subject(s)
Hearing Aids; Auditory Perception; Child; Hearing; Humans; Noise; Speech Perception
9.
Front Neurosci ; 13: 1093, 2019.
Article in English | MEDLINE | ID: mdl-31680828

ABSTRACT

Objectives: Children with hearing loss listen and learn in environments with noise and reverberation, but perform more poorly in noise and reverberation than children with normal hearing. Even with amplification, individual differences in speech recognition are observed among children with hearing loss. Few studies have examined the factors that support speech understanding in noise and reverberation for this population. This study applied the theoretical framework of the Ease of Language Understanding (ELU) model to examine the influence of auditory, cognitive, and linguistic factors on speech recognition in noise and reverberation for children with hearing loss. Design: Fifty-six children with hearing loss and 50 age-matched children with normal hearing who were 7-10 years-old participated in this study. Aided sentence recognition was measured using an adaptive procedure to determine the signal-to-noise ratio for 50% correct (SNR50) recognition in steady-state speech-shaped noise. SNR50 was also measured with noise plus a simulation of 600 ms reverberation time. Receptive vocabulary, auditory attention, and visuospatial working memory were measured. Aided speech audibility indexed by the Speech Intelligibility Index was measured through the hearing aids of children with hearing loss. Results: Children with hearing loss had poorer aided speech recognition in noise and reverberation than children with typical hearing. Children with higher receptive vocabulary and working memory skills had better speech recognition in noise and noise plus reverberation than peers with poorer skills in these domains. Children with hearing loss with higher aided audibility had better speech recognition in noise and reverberation than peers with poorer audibility. Better audibility was also associated with stronger language skills. Conclusions: Children with hearing loss are at considerable risk for poor speech understanding in noise and in conditions with noise and reverberation. 
Consistent with the predictions of the ELU model, children with stronger vocabulary and working memory abilities performed better than peers with poorer skills in these domains. Better aided speech audibility was associated with better recognition in noise and noise plus reverberation conditions for children with hearing loss. Speech audibility had direct effects on speech recognition in noise and reverberation and cumulative effects on speech recognition in noise through a positive association with language development over time.
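The Speech Intelligibility Index used above weights band-by-band audibility by each band's importance for speech. A heavily simplified sketch of that idea follows; the full ANSI S3.5 calculation also handles masking, level distortion, and standardized band-importance tables, none of which appear here:

```python
def sii(band_snr_db, band_importance):
    """Simplified SII: clip each band's SNR into [-15, +15] dB, map it
    linearly to an audibility of 0..1, and weight by band importance
    (weights assumed to sum to 1). Returns a value between 0 and 1."""
    index = 0.0
    for snr, weight in zip(band_snr_db, band_importance):
        audibility = min(max((snr + 15.0) / 30.0, 0.0), 1.0)
        index += weight * audibility
    return index
```

An index near 1.0 means essentially all speech information is audible; the hearing aid candidacy criterion in entry 6 above corresponds to 0.80 on this 0-1 scale (reported there as 80).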

10.
Int J Audiol ; 58(12): 805-815, 2019 12.
Article in English | MEDLINE | ID: mdl-31486692

ABSTRACT

Objective: Provide recommendations to audiologists for the management of children with unilateral hearing loss (UHL) and for needed research that can lend further insight into important unanswered questions.Design: An international panel of experts on children with UHL was convened following a day and a half of presentations on the same. The evidence reviewed for this parameter was gathered through web-based literature searches specifically designed for academic and health care resources, recent systematic reviews of literature, and new research presented at the conference that underwent peer review for publication by the time of this writing.Study sample: Expert opinions and electronic databases including Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Library, Education Resources Information Centre (ERIC), Google Scholar, PsycINFO, PubMed, ScienceDirect, and Turning Research into Practice (TRIP) Database.Results: The resulting practice parameter requires a personalised, family-centred process: (1) routine surveillance of speech-language, psychosocial, auditory, and academic or pre-academic development; (2) medical assessments for determination of aetiology of hearing loss; (3) assessment of hearing technologies; and (4) considerations for family-centred counselling.Conclusions: This practice parameter provides guidance to clinical audiologists on individualising the management of children with UHL. In addition, the paper concludes with recommendations for research priorities.


Subject(s)
Hearing Loss, Unilateral/therapy; Child; Hearing Aids; Hearing Loss, Unilateral/diagnosis; Hearing Tests; Humans
11.
Ear Hear ; 39(4): 783-794, 2018.
Article in English | MEDLINE | ID: mdl-29252979

ABSTRACT

OBJECTIVES: Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. DESIGN: Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. RESULTS: Behavioral task performance was higher for children with NH than for either group of children with hearing loss. 
There were no differences in performance between children with UHL and children with MBHL. Eye-tracker analysis revealed that children with NH looked more at the screens overall than did children with MBHL or UHL, though individual differences were greater in the groups with hearing loss. Listeners in all groups spent a small proportion of time looking at relevant screens as talkers spoke. Although looking was distributed across all screens, there was a bias toward the right side of the display. There was no relationship between overall looking behavior and performance on the task. CONCLUSIONS: The present study examined the processing of audiovisual speech in the context of a naturalistic task. Results demonstrated that children distributed their looking to a variety of sources during the task, but that children with NH were more likely to look at screens than were those with MBHL/UHL. However, all groups looked at the relevant talkers as they were speaking only a small proportion of the time. Despite variability in looking behavior, listeners were able to follow the audiovisual instructions and children with NH demonstrated better performance than children with MBHL/UHL. These results suggest that performance on some challenging multi-talker audiovisual tasks is not dependent on visual fixation to relevant talkers for children with NH or with MBHL/UHL.


Subject(s)
Fixation, Ocular; Hearing Loss, Bilateral/physiopathology; Hearing Loss, Unilateral/physiopathology; Speech Perception; Visual Perception; Case-Control Studies; Child; Child Behavior; Female; Humans; Male; Severity of Illness Index; Task Performance and Analysis
12.
J Am Acad Audiol ; 28(9): 823-837, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28972471

ABSTRACT

BACKGROUND: Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL). PURPOSE: To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL. RESEARCH DESIGN: Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification. STUDY SAMPLE: Fourteen children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL. INTERVENTION: Participants listened to speech processed by a hearing aid simulator that amplified input signals to fit a prescriptive target fitting procedure. DATA COLLECTION AND ANALYSIS: Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT. RESULTS: Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. 
There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age. CONCLUSION: Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds.


Subject(s)
Hearing Aids; Hearing Loss, Sensorineural/physiopathology; Speech Acoustics; Speech Perception/physiology; Adolescent; Adult; Aged; Child; Hearing/physiology; Hearing Loss, Sensorineural/psychology; Hearing Loss, Sensorineural/rehabilitation; Humans; Middle Aged; Signal-to-Noise Ratio
13.
Ear Hear ; 38(3): e180-e192, 2017.
Article in English | MEDLINE | ID: mdl-28045838

ABSTRACT

OBJECTIVES: The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. DESIGN: Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. RESULTS: Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. 
Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. CONCLUSIONS: The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.


Subject(s)
Hearing Loss; Speech Perception; Auditory Threshold; Case-Control Studies; Child; Child, Preschool; Female; Humans; Language; Male
14.
J Speech Lang Hear Res ; 59(5): 1218-1232, 2016 10 01.
Article in English | MEDLINE | ID: mdl-27784030

ABSTRACT

Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Method: Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL, UHL, or NH (Experiment 2) performed consonant identification and word and sentence recognition in background noise. Percentage correct performance and verbal response time (VRT) were assessed (onset time, total duration). Results: In general, speech recognition improved as signal-to-noise ratio (SNR) increased both for children with NH and children with MBHL or UHL. The groups did not differ on measures of VRT. Onset times were longer for incorrect than for correct responses. For correct responses only, there was a general increase in VRT with decreasing SNR. Conclusions: Findings indicate poorer sentence recognition in children with NH and MBHL or UHL as SNR decreases. VRT results suggest that greater effort was expended when processing stimuli that were incorrectly identified. Increasing VRT with decreasing SNR for correct responses also supports greater effort in poorer acoustic conditions. The absence of significant hearing status differences suggests that VRT was not differentially affected by MBHL, UHL, or NH for children in this study.


Subject(s)
Hearing Loss, Bilateral/psychology , Hearing Loss, Unilateral/psychology , Noise , Pattern Recognition, Physiological , Speech Perception , Child , Child, Preschool , Female , Humans , Linear Models , Male , Neuropsychological Tests
15.
J Speech Lang Hear Res ; 59(1): 110-21, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26540194

ABSTRACT

PURPOSE: This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. METHOD: Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. RESULTS: Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. CONCLUSIONS: The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.
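The quantity being compared above, masking release, is the difference between speech reception thresholds (SRTs) in the two noise types. A minimal sketch, with hypothetical SRT values rather than data from this study:

```python
# Masking release: the benefit (in dB) from replacing steady-state noise
# with modulated noise, computed as the SRT difference. Lower (more
# negative) SNRs indicate better recognition.

def masking_release(srt_unmodulated_db, srt_modulated_db):
    """A positive value means the listener tolerated a less favorable SNR
    in modulated noise, i.e., 'listened in the dips' of the masker."""
    return srt_unmodulated_db - srt_modulated_db

# Hypothetical listener: SRT of -4 dB SNR in steady noise and -9 dB SNR
# in modulated noise yields 5 dB of masking release.
print(masking_release(-4.0, -9.0))  # 5.0
```

On this definition, the small average value reported above (about 1 dB) corresponds to SRTs that barely differ between the two maskers.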


Subject(s)
Hearing Loss , Speech Perception , Acoustic Stimulation/methods , Adolescent , Adult , Aged , Aging/psychology , Auditory Threshold , Child , Female , Hearing Loss/psychology , Hearing Tests , Humans , Language Tests , Male , Middle Aged , Noise/adverse effects , Pattern Recognition, Physiological , Sex Characteristics , Speech Acoustics , Young Adult
16.
J Am Acad Audiol ; 26(2): 128-37, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25690773

ABSTRACT

BACKGROUND: For the last decade, the importance of providing amplification up to 9-10 kHz has been supported by multiple studies involving children and adults. The extent to which a listener with hearing loss can benefit from bandwidth expansion is dependent on the audibility of high-frequency cues. The American National Standards Institute (ANSI) devised a standard method for measuring and reporting hearing aid bandwidth for quality-control purposes. However, ANSI bandwidth measurements were never intended to reflect the true frequency range that is audible for a speech stimulus for a person with hearing loss. PURPOSE: The purpose of this study was to (1) determine the maximum audible frequency of conventional hearing aids using a speech signal as the input through the hearing aid microphone for different degrees of hearing loss, (2) examine how the maximum audible frequency changes when the input stimulus is presented through hearing assistance technology (HAT) systems with cross-coupling of manufacturers' transmitters and receivers, and (3) evaluate how the maximum audible frequency compares with the upper limit of the ANSI bandwidth measure. RESEARCH DESIGN: Eight behind-the-ear hearing aids from five hearing aid manufacturers were selected based on a range of ANSI bandwidth upper frequency limits. Three audiometric configurations with varied degrees of high-frequency hearing loss were programmed into each hearing aid. Hearing aid responses were measured with the International Speech Test Signal (ISTS), broadband noise, and a short speech token (/asa/) as stimuli presented through a loudspeaker. HAT devices from three manufacturers were used to create five HAT scenarios. These instruments were coupled to the hearing aid programmed for the audiogram that provided the highest maximum audible frequency in the hearing aid analysis. The response from each HAT scenario was obtained using the same three stimuli as during the hearing aid analysis. 
STUDY SAMPLE: All measurements were collected in an audiometric sound booth on a Knowles Electronics Manikin for Acoustic Research (KEMAR). DATA COLLECTION AND ANALYSIS: A custom computer program was used to record responses from KEMAR. The maximum audible frequency was defined as the highest frequency at which the long-term average speech spectrum (LTASS) intersected the audiogram. RESULTS: The average maximum audible frequency measured through KEMAR ranged from 3.5 kHz to beyond 8 kHz and varied significantly across devices, audiograms, and stimuli. The specified upper limit of the ANSI bandwidth was not predictive of the maximum audible frequency across conditions. For most HAT systems, the maximum audible frequency in the hearing aid plus HAT condition was equivalent to that of the hearing aid alone in the same measurement configuration. In some cases, however, the HAT system imposed a lower maximum audible frequency than the hearing-aid-only condition. CONCLUSIONS: The maximum audible frequency of behind-the-ear hearing aids depends on the degree of hearing loss, the amplification device, and the stimulus input. Estimating the maximum audible frequency as the frequency where the speech spectrum intersects the audiogram in the high frequencies can assist clinicians in deciding which device or configuration of devices provides the greatest access to high-frequency information, as well as whether frequency-lowering technology should be used.
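The analysis definition above (maximum audible frequency as the highest point where the LTASS intersects the audiogram) can be sketched as a comparison at each analysis frequency. The frequencies, levels, and function name below are illustrative assumptions; a real implementation would interpolate between audiometric frequencies rather than pick from a discrete grid:

```python
# Sketch of estimating the maximum audible frequency: the highest frequency
# at which the aided long-term average speech spectrum (LTASS) still meets
# or exceeds the listener's threshold. All levels are assumed to be in
# comparable dB SPL units measured in the ear canal; values are invented.

def max_audible_frequency(freqs_hz, ltass_db, thresholds_db):
    """Return the highest frequency where the LTASS is at or above threshold,
    or None if speech is inaudible at every analysis frequency."""
    audible = [f for f, s, t in zip(freqs_hz, ltass_db, thresholds_db) if s >= t]
    return max(audible) if audible else None

freqs = [500, 1000, 2000, 4000, 6000, 8000]
ltass = [65, 62, 58, 52, 46, 40]        # aided speech levels
thresholds = [30, 35, 45, 50, 55, 60]   # sloping high-frequency loss

print(max_audible_frequency(freqs, ltass, thresholds))  # 4000
```

With this sloping-loss example, speech remains audible through 4 kHz but falls below threshold at 6 kHz and above, so frequency-lowering technology might merit consideration for cues above 4 kHz.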


Subject(s)
Audiometry , Auditory Perception/physiology , Hearing Aids , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Speech Perception/physiology , Acoustic Stimulation , Hearing Loss/therapy , Humans
17.
Ear Hear ; 36(1): 136-44, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25170780

ABSTRACT

OBJECTIVES: While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition comprehension and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. DESIGN: Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic headtracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each presented audio-only by a single talker either from the loudspeaker at 0 degree azimuth or randomly from the five loudspeakers. RESULTS: Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL. 
CONCLUSIONS: The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.


Subject(s)
Child Behavior , Hearing Loss/physiopathology , Noise , Schools , Speech Perception/physiology , Acoustics , Audiometry, Pure-Tone , Auditory Threshold , Case-Control Studies , Child , Humans , Severity of Illness Index , Sound Localization/physiology
18.
J Am Acad Audiol ; 25(10): 983-98, 2014.
Article in English | MEDLINE | ID: mdl-25514451

ABSTRACT

BACKGROUND: Preference for speech and music processed with nonlinear frequency compression (NFC) and two controls (restricted bandwidth [RBW] and extended bandwidth [EBW] hearing aid processing) was examined in adults and children with hearing loss. PURPOSE: The purpose of this study was to determine if stimulus type (music, sentences), age (children, adults), and degree of hearing loss influence listener preference for NFC, RBW, and EBW. RESEARCH DESIGN: Design was a within-participant, quasi-experimental study. Using a round-robin procedure, participants listened to amplified stimuli that were (1) frequency lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the RBW of conventional hearing aid processing, or (3) low-pass filtered at 11 kHz to simulate EBW amplification. The examiner and participants were blinded to the type of processing. Using a two-alternative forced-choice task, participants selected the preferred music or sentence passage. STUDY SAMPLE: Participants included 16 children (ages 8-16 yr) and 16 adults (ages 19-65 yr) with mild to severe sensorineural hearing loss. INTERVENTION: All participants listened to speech and music processed using a hearing aid simulator fit to the Desired Sensation Level algorithm v5.0a. RESULTS: Children and adults did not differ in their preferences. For speech, participants preferred EBW to both NFC and RBW. Participants also preferred NFC to RBW. Preference was not related to the degree of hearing loss. For music, listeners did not show a preference. However, participants with greater hearing loss preferred NFC to RBW more than participants with less hearing loss. Conversely, participants with greater hearing loss were less likely to prefer EBW to RBW. CONCLUSIONS: Both age groups preferred access to high-frequency sounds, as demonstrated by their preference for either the EBW or NFC conditions over the RBW condition. 
Preference for EBW can be limited for those with greater degrees of hearing loss, but participants with greater hearing loss may be more likely to prefer NFC. Further investigation using participants with more severe hearing loss may be warranted.
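The round-robin, two-alternative forced-choice procedure described above amounts to pairing each processing condition with every other and randomizing presentation order within each pair. A minimal sketch of such a schedule; the pairing and randomization details are assumptions for illustration, not the study's actual protocol:

```python
# Sketch of a round-robin two-alternative forced-choice (2AFC) schedule
# for the three processing conditions (NFC, RBW, EBW). With both examiner
# and participant blinded to processing type, the listener simply picks
# the preferred interval on each trial.
import itertools
import random

conditions = ["NFC", "RBW", "EBW"]

def round_robin_pairs(conds, seed=0):
    """Every condition meets every other once; presentation order is
    randomized within each pair and across pairs."""
    rng = random.Random(seed)
    pairs = []
    for a, b in itertools.combinations(conds, 2):
        pair = [a, b]
        rng.shuffle(pair)          # which interval comes first
        pairs.append(tuple(pair))
    rng.shuffle(pairs)             # trial order
    return pairs

print(round_robin_pairs(conditions))
```

Three conditions yield three pairwise trials per stimulus, which is what makes comparisons such as "EBW preferred to RBW" and "NFC preferred to RBW" separately testable.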


Subject(s)
Acoustic Stimulation/methods , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Adolescent , Adult , Aged , Audiology/instrumentation , Child , Female , Humans , Male , Matched-Pair Analysis , Middle Aged , Music , Young Adult
19.
Am J Audiol ; 23(3): 326-36, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25036922

ABSTRACT

PURPOSE: This study examined children's ability to follow audio-visual instructions presented in noise and reverberation. METHOD: Children (8-12 years of age) with normal hearing followed instructions in noise or noise plus reverberation. Performance was compared for a single talker (ST), multiple talkers speaking one at a time (MT), and multiple talkers with competing comments from other talkers (MTC). Working memory was assessed using measures of digit span. RESULTS: Performance was better for children in noise than for those in noise plus reverberation. In noise, performance for ST was better than for either MT or MTC, and performance for MT was better than for MTC. In noise plus reverberation, performance for ST and MT was better than for MTC, but there were no differences between ST and MT. Digit span did not account for significant variance in the task. CONCLUSIONS: Overall, children performed better in noise than in noise plus reverberation. However, differing patterns across conditions for the 2 environments suggested that the addition of reverberation may have affected performance in a way that was not apparent in noise alone. Continued research is needed to examine the differing effects of noise and reverberation on children's speech understanding.


Subject(s)
Comprehension , Noise/adverse effects , Speech Perception , Acoustics , Child , Female , Humans , Male , Memory, Short-Term
20.
J Educ Audiol ; 20: 24-33, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-26478719

ABSTRACT

Audiovisual cues can improve speech perception in adverse acoustical environments when compared to auditory cues alone. In classrooms, where acoustics often are less than ideal, the availability of visual cues has the potential to benefit children during learning activities. The current study evaluated the effects of looking behavior on speech understanding of children (8-11 years) and adults during comprehension and sentence repetition tasks in a simulated classroom environment. For the comprehension task, results revealed an effect of looking behavior (looking required versus looking not required) for older children and adults only. Within the looking-behavior conditions, age effects also were evident. There was no effect of looking behavior for the sentence-repetition task (looking versus no looking) but an age effect also was found. The current findings suggest that looking behavior may impact speech understanding differently depending on the task and the age of the listener. In classrooms, these potential differences should be taken into account when designing learning tasks.
