Results 1 - 20 of 68
1.
Brain Commun ; 6(3): fcae175, 2024.
Article in English | MEDLINE | ID: mdl-38846536

ABSTRACT

Over the first years of life, the brain undergoes substantial organization in response to environmental stimulation. In a silent world, this organization may promote vision by (i) recruiting resources from the auditory cortex and (ii) making the visual cortex more efficient. It is unclear when such changes occur and how adaptive they are, questions that children with cochlear implants can help address. Here, we examined children aged 7-18 years: 50 had cochlear implants, with delayed or age-appropriate language abilities, and 25 had typical hearing and language. High-density electroencephalography and functional near-infrared spectroscopy were used to evaluate cortical responses to a low-level visual task. Evidence for a 'weaker visual cortex response' and 'less synchronized or less inhibitory activity of auditory association areas' in the implanted children with language delays suggests that cross-modal reorganization can be maladaptive and does not necessarily strengthen the dominant visual sense.

2.
Percept Mot Skills ; 131(1): 74-105, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37977135

ABSTRACT

Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing, but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activities within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and we sought to understand whether this response was related to their language development. Participants were 75 school-aged children, including 50 with CI (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain, as children squeezed the back triggers of a joystick that vibrated or not with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (supposedly an irrelevant region) was deactivated in this task, particularly for children with CI who had good language skills when compared to those with CI who had language delays. The presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS to examine cognitive functions related to language in children with CI.
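The HbO/HbR measures referenced in this abstract come from standard fNIRS processing: optical-density changes at two wavelengths are converted to chromophore concentration changes via the modified Beer-Lambert law. The sketch below illustrates that conversion only; the extinction coefficients, pathlength factor, and source-detector distance are illustrative placeholders, not values from this study.

```python
# Minimal sketch of fNIRS optical-density-to-hemoglobin conversion via the
# modified Beer-Lambert law. Coefficients below are illustrative assumptions.
import numpy as np

def delta_hb(d_od_760, d_od_850, distance_cm=3.0, dpf=6.0):
    """Return (dHbO, dHbR) in arbitrary concentration units from optical-density
    changes at two wavelengths (~760 nm and ~850 nm channels assumed)."""
    # Rows: wavelengths; columns: [HbO, HbR] extinction coefficients (illustrative).
    ext = np.array([[1486.0, 3843.0],    # ~760 nm: HbR absorbs more than HbO
                    [2526.0, 1798.0]])   # ~850 nm: HbO absorbs more than HbR
    d_od = np.array([d_od_760, d_od_850]) / (distance_cm * dpf)
    d_hbo, d_hbr = np.linalg.solve(ext, d_od)
    return d_hbo, d_hbr

# Example: a pattern consistent with activation (HbO rises while HbR falls).
print(delta_hb(0.02, 0.05))
```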


Subject(s)
Cochlear Implantation; Cochlear Implants; Deafness; Child; Humans; Spectroscopy, Near-Infrared/methods; Cochlear Implantation/methods; Deafness/surgery; Hemoglobins
3.
J Speech Lang Hear Res ; 66(2): 765-774, 2023 02 13.
Article in English | MEDLINE | ID: mdl-36724767

ABSTRACT

PURPOSE: The present brain-behavior study examined whether sensory registration or neural inhibition processes explained variability in the behavioral most comfortable level (MCL) and background noise level (BNL) components of the acceptable noise level (ANL) measure. METHOD: A traditional auditory gating paradigm was used to evoke neural responses to pairs of pure-tone stimuli in 32 adult listeners with normal hearing. Relationships between behavioral ANL, MCL, and BNL components and cortical responses to each of the paired stimuli were analyzed using linear mixed-effects regression analyses. RESULTS: Neural responses elicited by Stimulus 2 in the gating paradigm significantly predicted the computed ANL response. The MCL component was significantly associated with responses elicited by Stimulus 1 of the pair. The BNL component of the ANL was significantly associated with neural responses to both Stimulus 1 and Stimulus 2. CONCLUSIONS: The results suggest that neural processes related to neural inhibition support the ANL and the BNL component, while neural stimulus registration properties are associated with the MCL a listener chooses. These findings suggest that differential neural mechanisms underlie the separate MCL and BNL components of the ANL response.
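A hedged sketch of the kind of linear mixed-effects analysis the abstract describes: predicting a behavioral measure (ANL, MCL, or BNL) from cortical response amplitudes to Stimulus 1 and Stimulus 2 of the gating pair, with a random intercept per listener. The column names and data file are hypothetical, not taken from the study.

```python
# Sketch of a mixed-effects regression relating gated cortical responses to
# behavioral noise-acceptance measures; data layout is assumed, not the authors'.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gating_behavior.csv")  # hypothetical long-format data

# One model per behavioral outcome; repeated observations nest within subject.
for outcome in ["ANL", "MCL", "BNL"]:
    model = smf.mixedlm(f"{outcome} ~ amp_stim1 + amp_stim2",
                        data=df, groups=df["subject"])
    result = model.fit()
    print(outcome, result.params, result.pvalues, sep="\n")
```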


Subject(s)
Noise; Speech Perception; Adult; Humans; Speech Perception/physiology; Auditory Threshold/physiology
4.
J Am Acad Audiol ; 33(3): 142-148, 2022 03.
Article in English | MEDLINE | ID: mdl-36216041

ABSTRACT

PURPOSE: Cochlear implant (CI) recipients often experience speech recognition difficulty in noise in small group settings with multiple talkers. In traditional remote microphone systems, one talker wears a remote microphone that wirelessly delivers speech to the CI processor. This system cannot transmit signals from multiple talkers in a small group. However, remote microphone systems with multiple microphones allowing for adaptive beamforming may be beneficial for small group situations with multiple talkers. Specifically, a remote microphone with an adaptive multiple-microphone beamformer may be placed in the center of the small group, and the beam (i.e., polar lobe) may be automatically steered toward the direction associated with the most favorable speech-to-noise ratio. The signal from the remote microphone can then be wirelessly delivered to the CI sound processor. Alternatively, each of the talkers in a small group may use a remote microphone that is part of a multi-talker network that wirelessly delivers the remote microphone signal to the CI sound processor. The purpose of this study was to compare the potential benefit of an adaptive multiple-microphone beamformer remote microphone system and a multi-talker network remote microphone system. METHOD: Twenty recipients, ages 12 to 84 years, with Advanced Bionics CIs completed sentence-recognition-in-noise tasks while seated at a desk surrounded by three loudspeakers at 0, 90, and 270 degrees. These loudspeakers randomly presented the target speech while competing noise was presented from four loudspeakers located in the corners of the room. Testing was completed in three conditions: 1) CI alone, 2) a remote microphone system with an adaptive multiple-microphone beamformer, and 3) a multi-talker network remote microphone system, each with five different signal levels (15 total conditions). RESULTS: Significant differences were found across all signal levels and technology conditions. Relative to the CI alone, sentence recognition improvements ranged from 14-23 percentage points with the adaptive multiple-microphone beamformer and 27-47 percentage points with the multi-talker network, with superior performance for the latter remote microphone system. CONCLUSIONS: Both remote microphone systems significantly improved speech recognition in noise for CI recipients when listening in small group settings, but the multi-talker network provided superior performance.
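The steering behavior the abstract describes (a beam automatically pointed at the direction with the most favorable speech-to-noise ratio) can be illustrated with a toy example: form a simple delay-and-sum beam for several candidate directions and keep the one with the highest estimated SNR. This is not the manufacturer's algorithm; the array geometry, sample rate, and crude SNR estimator are simplifying assumptions.

```python
# Toy illustration of SNR-driven beam steering (not the commercial implementation).
import numpy as np

FS = 16000                              # sample rate (Hz), assumed
C = 343.0                               # speed of sound (m/s)
MIC_X = np.array([-0.01, 0.0, 0.01])    # 3 microphones on a 2 cm line (assumed)

def delay_and_sum(signals, angle_deg):
    """signals: (n_mics, n_samples). Steer toward angle_deg with integer-sample delays."""
    delays = MIC_X * np.cos(np.deg2rad(angle_deg)) / C   # seconds, far-field assumption
    shifts = np.round(delays * FS).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, shift in zip(signals, shifts):
        out += np.roll(sig, -shift)
    return out / signals.shape[0]

def estimated_snr_db(beam, noise_floor_power):
    """Crude SNR proxy: beam power relative to an assumed noise-floor power."""
    return 10 * np.log10(np.mean(beam ** 2) / noise_floor_power)

def steer_to_best_direction(signals, noise_floor_power, candidates=(0, 90, 180, 270)):
    beams = {a: delay_and_sum(signals, a) for a in candidates}
    best = max(beams, key=lambda a: estimated_snr_db(beams[a], noise_floor_power))
    return best, beams[best]
```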


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Adolescent; Adult; Aged; Aged, 80 and over; Child; Humans; Middle Aged; Noise; Prosthesis Design; Young Adult
5.
J Commun Disord ; 99: 106252, 2022.
Article in English | MEDLINE | ID: mdl-36007485

ABSTRACT

INTRODUCTION: Auditory challenges are both common and disruptive for autistic children, and evidence suggests that listening difficulties may be linked to academic underachievement (Ashburner, Ziviani & Rodger, 2008). Such deficits may also contribute to issues with attention, behavior, and communication (Ashburner et al., 2008; Riccio, Cohen, Garrison & Smith, 2005). The present study aims to summarize the auditory challenges of autistic children with normal pure-tone hearing thresholds and perceived listening difficulties seen at auditory-ASD clinics in the US and Australia. METHODS: Data were compiled on a comprehensive, auditory-focused test battery in a large clinical sample of school-age autistic children with normal pure-tone hearing seen at these clinics to date (N = 71, 6-14 years). Measures included a parent-reported auditory sensory processing questionnaire and tests of speech recognition in noise, binaural integration, attention, auditory memory, and listening comprehension. Individual test performance was compared to normative data from children with no listening difficulties. RESULTS: Over 40% of patients exhibited significantly reduced speech recognition in noise and abnormal dichotic integration that were not attributed to deficits in attention. The majority of patients (86%) performed abnormally on at least one auditory measure, suggesting that functional auditory issues can exist in autistic patients despite normal pure-tone sensitivity. CONCLUSION: Including functional listening measures during audiological evaluations may improve clinicians' ability to detect and manage the auditory challenges impacting this population. Learner Outcomes: 1) Readers will be able to describe the auditory difficulties experienced by some patients with autism spectrum disorder (ASD). 2) Readers will be able to describe clinical measures potentially useful for detecting listening difficulties in high-functioning autistic children.


Subject(s)
Autistic Disorder; Speech Perception; Attention; Auditory Perception; Child; Hearing Tests; Humans; Noise
6.
J Am Acad Audiol ; 33(2): 66-74, 2022 02.
Article in English | MEDLINE | ID: mdl-35512843

ABSTRACT

BACKGROUND: Children with hearing loss frequently experience difficulty understanding speech in the presence of noise. Although remote microphone systems are likely to be the most effective solution to improve speech recognition in noise, this study focuses on the evaluation of hearing aid noise management technologies, including directional microphones, adaptive noise reduction (ANR), and frequency-gain shaping. These technologies can improve children's speech recognition, listening comfort, and/or sound quality in noise. However, the individual contributions of these technologies, as well as the effect of hearing aid microphone mode on localization abilities in children, are unknown. PURPOSE: The objectives of this study were to (1) compare children's speech recognition and subjective perceptions across five hearing aid noise management technology conditions and (2) compare localization abilities across three hearing aid microphone modes. RESEARCH DESIGN: A single-group, repeated measures design was used to evaluate performance differences and subjective ratings. STUDY SAMPLE: Fourteen children with mild to moderately severe hearing loss. DATA COLLECTION AND ANALYSIS: Children's sentence recognition, listening comfort, sound quality, and localization were assessed in a room with an eight-loudspeaker array. RESULTS AND CONCLUSION: The use of adaptive directional microphone technology improves children's speech recognition in noise when the signal of interest arrives from the front and is spatially separated from the competing noise. In contrast, the use of adaptive directional microphone technology may result in a decrease in speech recognition in noise when the signal of interest arrives from behind. The use of a microphone mode that mimics the natural directivity of the unaided auricle provides a slight improvement in speech recognition in noise compared with omnidirectional use, with limited decrement in speech recognition in noise when the signal of interest arrives from behind. The use of ANR and frequency-gain shaping provides no change in children's speech recognition in noise. The use of adaptive directional microphone technology, ANR, and frequency-gain shaping improves children's listening comfort, perceived ability to understand speech in noise, and overall listening experience. Children prefer to use each of these noise management technologies regardless of whether the signal of interest arrives from the front or from behind. The use of adaptive directional microphone technology does not result in a decrease in children's localization abilities when compared with the omnidirectional condition. The best localization performance occurred with use of the microphone mode that mimicked the directivity of the unaided auricle.


Subject(s)
Hearing Aids; Hearing Loss, Sensorineural; Hearing Loss; Speech Perception; Child; Hearing Loss, Sensorineural/rehabilitation; Humans; Noise; Technology
7.
Laryngoscope ; 132 Suppl 1: S1-S10, 2022 01.
Article in English | MEDLINE | ID: mdl-34013978

ABSTRACT

OBJECTIVES: Utilize a multi-institutional outcomes database to determine expected performance for adult cochlear implant (CI) users. Estimate the percentage of patients who are high performers and achieve performance plateau. STUDY DESIGN: Retrospective database study. METHODS: Outcomes from 9,448 implantations were mined to identify 804 adult, unilateral recipients who had one preoperative and at least one postoperative consonant-nucleus-consonant (CNC) word score. Results were examined to determine percent-correct CNC word recognition preoperatively and at 1, 3, 6, 12, and 24 months after activation. Outcomes from 318 similar patients who also had at least three postoperative CNC word scores were examined. Linear mixed-effects regression was used to examine CNC word performance over time. The time when each patient achieved maximum performance was recorded as a surrogate for time of performance plateau. Patients were assigned as candidates for less intense follow-up if they were high performers and achieved performance plateau. RESULTS: Among 804 patients with at least one postoperative score, CNC score improved at all time intervals. Average performance after the 3-month time interval was 47.2% to 51.5%, indicating a CNC ≥ 50% cutoff for high performers. Among 318 patients with at least three postoperative scores, performance improved from 1 to 3 (P = .001), 3 to 6 (P = .001), and 6 to 12 (P = .01) months. Scores from the 12- and 24-month intervals did not significantly differ (P = .09). By 12 months after activation, 59.7% of patients were considered candidates for less intense follow-up. CONCLUSION: Findings suggest that CNC ≥ 50% is a reasonable cutoff to separate high performers from low performers. Within 12 months after activation, 59.7% of patients were good candidates for less intense follow-up. LEVEL OF EVIDENCE: 3 Laryngoscope, 132:S1-S10, 2022.
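The decision rule the abstract motivates (CNC >= 50% as the high-performer cutoff, the time of maximum score as a surrogate for the performance plateau, and both criteria met by 12 months flagging a candidate for less intense follow-up) can be written out directly. The data layout below is hypothetical.

```python
# Sketch of the high-performer / plateau classification suggested by the abstract.
from dataclasses import dataclass

@dataclass
class Recipient:
    patient_id: str
    cnc_by_month: dict[int, float]   # e.g., {1: 32.0, 3: 45.0, 6: 52.0, 12: 55.0}

def plateau_month(r: Recipient) -> int:
    """Month of maximum CNC score, used as a surrogate for the performance plateau."""
    return max(r.cnc_by_month, key=r.cnc_by_month.get)

def less_intense_followup_candidate(r: Recipient, cutoff=50.0, by_month=12) -> bool:
    month = plateau_month(r)
    return r.cnc_by_month[month] >= cutoff and month <= by_month

example = Recipient("A01", {1: 32.0, 3: 45.0, 6: 52.0, 12: 55.0, 24: 54.0})
print(plateau_month(example), less_intense_followup_candidate(example))
```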


Subject(s)
Aftercare/methods; Cochlear Implantation; Cochlear Implants; Adolescent; Adult; Aftercare/standards; Aged; Aged, 80 and over; Cochlear Implantation/methods; Databases as Topic; Hearing Tests; Humans; Middle Aged; Outcome Assessment, Health Care; Retrospective Studies; Young Adult
8.
J Am Acad Audiol ; 33(4): 196-205, 2022 04.
Article in English | MEDLINE | ID: mdl-34758503

ABSTRACT

BACKGROUND: For children with hearing loss, the primary goal of hearing aids is to provide improved access to the auditory environment within the limits of hearing aid technology and the child's auditory abilities. However, there are limited data examining aided speech recognition at very low (40 decibels A [dBA]) and low (50 dBA) presentation levels. PURPOSE: Due to the paucity of studies exploring aided speech recognition at low presentation levels for children with hearing loss, the present study aimed to (1) compare aided speech recognition at different presentation levels between groups of children with "normal" hearing and hearing loss, (2) explore the effects of aided pure tone average and aided Speech Intelligibility Index (SII) on aided speech recognition at low presentation levels for children with hearing loss ranging in degree from mild to severe, and (3) evaluate the effect of increasing low-level gain on aided speech recognition of children with hearing loss. RESEARCH DESIGN: In phase 1 of this study, a two-group, repeated-measures design was used to evaluate differences in speech recognition. In phase 2 of this study, a single-group, repeated-measures design was used to evaluate the potential benefit of additional low-level hearing aid gain for low-level aided speech recognition of children with hearing loss. STUDY SAMPLE: The first phase of the study included 27 school-age children with mild to severe sensorineural hearing loss and 12 school-age children with "normal" hearing. The second phase included eight children with mild to moderate sensorineural hearing loss. INTERVENTION: Prior to the study, children with hearing loss were fitted binaurally with digital hearing aids. Children in the second phase were fitted binaurally with digital study hearing aids and completed a trial period with two different gain settings: (1) gain required to match hearing aid output to prescriptive targets (i.e., primary program), and (2) a 6-dB increase in overall gain for low-level inputs relative to the primary program. In both phases of this study, real-ear verification measures were completed to ensure the hearing aid output matched prescriptive targets. DATA COLLECTION AND ANALYSIS: Phase 1 included monosyllabic word recognition and syllable-final plural recognition at three presentation levels (40, 50, and 60 dBA). Phase 2 compared speech recognition performance for the same test measures and presentation levels with two differing gain prescriptions. CONCLUSION: In phase 1 of the study, aided speech recognition was significantly poorer in children with hearing loss at all presentation levels. Higher aided SII in the better ear (55 dB sound pressure level input) was associated with higher Consonant-Nucleus-Consonant word recognition at a 40 dBA presentation level. In phase 2, increasing the hearing aid gain for low-level inputs provided a significant improvement in syllable-final plural recognition at very low-level inputs and resulted in a nonsignificant trend toward better monosyllabic word recognition at very low presentation levels. Additional research is needed to document the speech recognition difficulties children with hearing aids may experience with low-level speech in the real world as well as the potential benefit or detriment of providing additional low-level hearing aid gain.
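For readers unfamiliar with the aided Speech Intelligibility Index referenced in this abstract, the underlying idea is band-by-band audibility weighted by a band-importance function. The sketch below is a simplified illustration of that idea only; the importance weights and the 30-dB speech dynamic range are simplifying assumptions, not the full ANSI S3.5 procedure or the study's values.

```python
# Simplified band-importance-weighted audibility, in the spirit of the SII.
def simplified_sii(aided_speech_levels, thresholds, importance):
    """All inputs are dicts keyed by band center frequency (Hz); levels in dB."""
    sii = 0.0
    for band, weight in importance.items():
        sensation_level = aided_speech_levels[band] - thresholds[band]
        audibility = min(max(sensation_level / 30.0, 0.0), 1.0)  # assumed 30-dB range
        sii += weight * audibility
    return sii

importance = {250: 0.15, 500: 0.25, 1000: 0.25, 2000: 0.20, 4000: 0.15}  # illustrative
aided_speech = {250: 55, 500: 52, 1000: 48, 2000: 42, 4000: 38}          # example dB SPL
thresholds = {250: 30, 500: 35, 1000: 40, 2000: 45, 4000: 50}            # example dB SPL
print(round(simplified_sii(aided_speech, thresholds, importance), 2))
```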


Subject(s)
Deafness; Hearing Aids; Hearing Loss, Sensorineural; Hearing Loss; Speech Perception; Child; Humans; Hearing Loss/rehabilitation; Hearing Loss, Sensorineural/rehabilitation; Speech Intelligibility
9.
J Am Acad Audiol ; 32(6): 379-385, 2021 06.
Article in English | MEDLINE | ID: mdl-34731905

ABSTRACT

BACKGROUND: Neurological, structural, and behavioral abnormalities are widely reported in individuals with autism spectrum disorder (ASD); yet there are no objective markers to date. We postulated that by using dominant and nondominant ear data, underlying differences in auditory evoked potentials (AEPs) between ASD and control groups can be recognized. PURPOSE: The primary purpose was to identify if significant differences exist in AEPs recorded from dominant and nondominant ear stimulation in (1) children with ASD and their matched controls, (2) adults with ASD and their matched controls, and (3) a combined child and adult ASD group and control group. The secondary purpose was to explore the association between the significant findings of this study with those obtained in our previous study that evaluated the effects of auditory training on AEPs in individuals with ASD. RESEARCH DESIGN: Factorial analysis of variance with interaction was performed. STUDY SAMPLE: Forty subjects with normal hearing between the ages of 9 and 25 years were included. Eleven children and 9 adults with ASD were age- and gender-matched with neurotypical peers. DATA COLLECTION AND ANALYSIS: Auditory brainstem responses (ABRs) and auditory late responses (ALRs) were recorded. Adult and child ASD subjects were compared with non-ASD adult and child control subjects, respectively. The combined child and adult ASD group was compared with the combined child and adult control group. RESULTS: No significant differences in ABR latency or amplitude were observed between ASD and control groups. ALR N1 amplitude in the dominant ear was significantly smaller for the ASD adult group compared with their control group. Combined child and adult data showed significantly smaller amplitude for ALR N1 and longer ALR P2 latency in the dominant ear for the ASD group compared with the control group. In our earlier study, the top predictor of behavioral improvement following auditory training was ALR N1 amplitude in the dominant ear. Correspondingly, the ALR N1 amplitude in the dominant ear yielded group differences in the current study. CONCLUSIONS: ALR peak N1 amplitude is proposed as the most feasible AEP marker in the evaluation of ASD.


Subject(s)
Autism Spectrum Disorder; Acoustic Stimulation; Adolescent; Adult; Child; Evoked Potentials, Auditory; Evoked Potentials, Auditory, Brain Stem; Humans; Young Adult
10.
J Am Acad Audiol ; 32(7): 433-444, 2021 07.
Article in English | MEDLINE | ID: mdl-34847584

ABSTRACT

BACKGROUND: Considerable variability exists in the speech recognition abilities achieved by children with cochlear implants (CIs) due to varying demographic and performance variables including language abilities. PURPOSE: This article examines the factors associated with speech recognition performance of school-aged children with CIs who were grouped by language ability. RESEARCH DESIGN: This is a single-center cross-sectional study with repeated measures for subjects across two language groups. STUDY SAMPLE: Participants included two groups of school-aged children, ages 7 to 17 years, who received unilateral or bilateral CIs by 4 years of age. The High Language group (N = 26) had age-appropriate spoken-language abilities, and the Low Language group (N = 24) had delays in their spoken-language abilities. DATA COLLECTION AND ANALYSIS: Group comparisons were conducted to examine the impact of demographic characteristics on word recognition in quiet and sentence recognition in quiet and noise. RESULTS: Speech recognition in quiet and noise was significantly poorer in the Low Language compared with the High Language group. Greater hours of implant use and better adherence to auditory-verbal (AV) therapy appointments were associated with higher speech recognition in quiet and noise. CONCLUSION: To ensure maximal speech recognition in children with low-language outcomes, professionals should develop strategies to ensure that families support full-time CI use and have the means to consistently attend AV appointments.


Subject(s)
Cochlear Implants; Speech; Adolescent; Child; Cross-Sectional Studies; Humans; Schools
11.
Am J Audiol ; 30(3): 481-496, 2021 Sep 10.
Article in English | MEDLINE | ID: mdl-34106734

ABSTRACT

Purpose Meta-analyses were conducted to compare pre- to postoperative speech recognition improvements and postoperative scores after cochlear implantation in younger (< 60 years) and older (> 60 years) adults. Method Studies were identified with electronic databases and through manual search of the literature. In the primary analyses, effect sizes between pre- and postoperative scores for each age group were calculated using a formula appropriate for repeated-measures designs. Using the effect sizes, two separate meta-analyses using a random-effects restricted maximum likelihood model were conducted for experiments using word and sentence recognition stimuli in quiet. Secondary meta-analyses were conducted to examine average postimplant percent-correct word recognition, sentence recognition, and speech recognition in noise in studies that included both older and younger age groups. Traditional Hedges's g effect sizes were calculated between the two groups. Results For the primary analyses, experiments using word and sentence recognition stimuli yielded significant, large effect sizes for the younger and older adult cochlear implant recipients, with no significant differences between the older and younger age groups. However, the secondary meta-analyses of postoperative scores suggested significant differences between age groups for stimuli in quiet and noise. Conclusions Although older and younger adults with implants achieve the same magnitude of pre- to postimplant speech recognition benefit in quiet, the overall postoperative speech recognition outcomes in quiet and noise are superior in younger compared with older adults. Strategies to mitigate these group differences are critical for ensuring optimal outcomes in elderly individuals who are candidates for cochlear implants.
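The two effect-size pieces the abstract names can be sketched as follows: Hedges's g for a between-group comparison and a random-effects pooled estimate across studies. For brevity the pooling uses the DerSimonian-Laird moment estimator as a stand-in for the restricted maximum likelihood model reported in the article; all inputs are illustrative.

```python
# Sketch: bias-corrected between-group effect size and random-effects pooling.
import numpy as np

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Bias-corrected standardized mean difference between two independent groups."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sd_pooled
    return d * (1 - 3 / (4 * (n1 + n2) - 9))   # small-sample correction

def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooled effect and its standard error (stand-in for REML)."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_star = 1 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    return pooled, np.sqrt(1 / np.sum(w_star))

print(hedges_g(70, 15, 40, 60, 18, 35))
print(random_effects_pool([0.8, 1.1, 0.6], [0.04, 0.06, 0.05]))
```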


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Aged; Humans; Noise
12.
Lang Speech Hear Serv Sch ; 52(3): 889-898, 2021 07 07.
Article in English | MEDLINE | ID: mdl-34185568

ABSTRACT

Purpose The COVID-19 pandemic introduced new educational challenges for students, teachers, and caregivers due to the changed and varied learning environments, use of face masks, and social distancing requirements. These challenges are particularly pronounced for students with hearing loss who often require specific accommodations to allow for equal access to the curriculum. The purpose of this study was to document the potential difficulties that students with hearing loss faced during the pandemic and to generate recommendations to promote learning and engagement based on findings. Method A qualitative survey was designed to document the frequency of various learning situations (i.e., in person, remote virtual, and blended), examine the accessibility of technology and course content, and quantify hearing issues associated with safety measures and technology use in school-age students with hearing loss. Survey questions were informed from key educational issues reported in published articles and guidelines. The survey was completed by 416 educational personnel who work with students with hearing loss. Results Respondents indicated that most of their schools were providing remote or blended (in-person and remote) learning consisting of synchronous and asynchronous learning. Common accommodations for students with hearing loss were only provided some of the time with the exception of sign language interpreters, which were provided for almost all students who required them. According to the respondents, both students and caregivers reported issues or discomfort with the technology required for remote learning. Conclusion To ensure that students with hearing loss are provided equal access to the curriculum, additional accommodations should be considered to address issues arising from pandemic-related changes to school and learning practices including closed captioning, transcripts/notes, recordings of lectures, sign language interpreters, student check-ins, and family-directed resources to assist with technology issues.


Subject(s)
Education of Hearing Disabled; Hearing Loss; Learning; Teaching; Adolescent; COVID-19; Child; Child, Preschool; Curriculum; Humans; Male; Masks; Pandemics; Persons With Hearing Impairments; Schools; Students
13.
J Am Acad Audiol ; 32(3): 180-185, 2021 03.
Article in English | MEDLINE | ID: mdl-33873219

ABSTRACT

BACKGROUND: Cochlear implant (CI) recipients frequently experience difficulty understanding speech over the telephone and rely on hearing assistive technology (HAT) to improve performance. Bilateral inter-processor audio streaming via near-field magnetic induction is an advanced technology incorporated within a hearing aid or CI processor that can deliver telephone audio signals captured at one sound processor to the sound processor at the opposite ear. To date, limited data exist examining the efficacy of this technology for improving speech understanding on the telephone in CI users. PURPOSE: The primary objective of this study was to examine telephone speech recognition outcomes in bilateral CI recipients in a bilateral inter-processor audio streaming condition (DuoPhone) compared with a monaural condition (i.e., telephone listening with one sound processor) in quiet and in background noise. Outcomes in the monaural and bilateral conditions using either a telecoil or T-Mic2 technology were also assessed. The secondary aim was to examine how deactivating microphone input in the contralateral processor in the bilateral wireless streaming conditions, and thereby modifying the signal-to-noise ratio, affected speech recognition in noise. RESEARCH DESIGN: A repeated-measures design was used to evaluate speech recognition performance in quiet and competing noise with the telephone signal transmitted acoustically or via the telecoil to the ipsilateral sound processor microphone in monaural and bilateral wireless streaming listening conditions. STUDY SAMPLE: Nine bilateral CI users with Advanced Bionics HiRes 90K and/or CII devices were included in the study. DATA COLLECTION AND ANALYSIS: The effects of phone input (monaural [DuoPhone Off] vs. bilateral [DuoPhone On]) and processor input (T-Mic2 vs. telecoil) on word recognition in quiet and noise were assessed using separate repeated-measures analyses of variance. The effect of deactivating the contralateral device's microphone on speech recognition outcomes for the T-Mic2 DuoPhone conditions was assessed using paired Student's t-tests. RESULTS: Telephone speech recognition was significantly better in the bilateral inter-processor streaming conditions relative to the monaural conditions in both quiet and noise. Speech recognition outcomes were similar in quiet and noise when using the T-Mic2 and telecoil in the monaural and bilateral conditions. For the acoustic DuoPhone conditions using the T-Mic2, speech recognition in noise was significantly better when the microphone of the contralateral processor was disabled. CONCLUSION: Inter-processor audio streaming allows for bilateral listening on the telephone and produces better speech recognition in quiet and in noise compared with monaural listening conditions for adult CI recipients.


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Adult; Hearing; Humans; Telephone
14.
J Speech Lang Hear Res ; 64(4): 1404-1412, 2021 04 14.
Article in English | MEDLINE | ID: mdl-33755510

ABSTRACT

Purpose Auditory sensory gating is a neural measure of inhibition and is typically measured with a click or tonal stimulus. This electrophysiological study examined if stimulus characteristics and the use of speech stimuli affected auditory sensory gating indices. Method Auditory event-related potentials were elicited using natural speech, synthetic speech, and nonspeech stimuli in a traditional auditory gating paradigm in 15 adult listeners with normal hearing. Cortical responses were recorded at 64 electrode sites, and peak amplitudes and latencies to the different stimuli were extracted. Individual data were analyzed using repeated-measures analysis of variance. Results Significant gating of P1-N1-P2 peaks was observed for all stimulus types. N1-P2 cortical responses were affected by stimulus type, with significantly less neural inhibition of the P2 response observed for natural speech compared to nonspeech and synthetic speech. Conclusions Auditory sensory gating responses can be measured using speech and nonspeech stimuli in listeners with normal hearing. The results of the study indicate the amount of gating and neural inhibition observed is affected by the spectrotemporal characteristics of the stimuli used to evoke the neural responses.
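The "amount of gating" the abstract refers to is commonly quantified in paired-stimulus paradigms as the ratio (or difference) of the response amplitude to the second stimulus relative to the first, computed per component (P1, N1, P2) and per stimulus type. A minimal sketch, with peak amplitudes assumed to have been extracted already and values chosen purely for illustration:

```python
# Sketch of a conventional S2/S1 gating ratio; smaller values mean stronger gating.
def gating_ratio(amp_s1: float, amp_s2: float) -> float:
    return amp_s2 / amp_s1

peaks = {  # (S1, S2) peak amplitudes in microvolts, illustrative only
    "natural_speech": {"P2": (4.0, 3.2)},
    "synthetic_speech": {"P2": (4.1, 2.0)},
    "nonspeech": {"P2": (3.8, 1.9)},
}
for stim, components in peaks.items():
    for comp, (s1, s2) in components.items():
        print(stim, comp, round(gating_ratio(s1, s2), 2))
```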


Subject(s)
Auditory Cortex; Speech Perception; Acoustic Stimulation; Adult; Evoked Potentials, Auditory; Humans; Sensory Gating; Speech
15.
J Am Acad Audiol ; 31(9): 680-689, 2020 10.
Article in English | MEDLINE | ID: mdl-33316826

ABSTRACT

BACKGROUND: Auditory-processing deficits are common in children and adults who are diagnosed with autism spectrum disorder (ASD). These deficits are evident across multiple domains as exhibited by the results from subjective questionnaires from parents, teachers, and individuals with ASD and from behavioral auditory-processing testing. PURPOSE: Few studies compare subjective and behavioral performance of adults and children diagnosed with ASD using commercially available tests of auditory processing. The primary goal of the present study is to compare the performance of adults and children with ASD to age-matched, neurotypical peers. The secondary goal is to examine the effect of age on auditory-processing performance in individuals with ASD relative to age-matched peers. RESEARCH DESIGN: A four-group, quasi-experimental design with repeated measures was used in this study. STUDY SAMPLE: Forty-two adults and children were separated into four groups of participants: (1) 10 children with ASD ages 14 years or younger; (2) 10 age-matched, neurotypical children; (3) 11 adolescents and young adults with ASD ages 16 years and older; and (4) 11 age-matched, neurotypical adolescents or young adults. DATA COLLECTION AND ANALYSIS: Data from each participant were collected in one test session. Data were analyzed with analysis of variance (ANOVA), repeated measures ANOVA, or nonparametric analyses. Effect sizes were calculated to compare performance between those with ASD and those who were neurotypical within each age group. RESULTS: Across all the questionnaires and the majority of the behavioral test measures, participants with ASD had significantly poorer ratings or auditory-processing performance than age-matched, neurotypical peers. Adults had more favorable performance than children on several of the test measures. Medium to large effect sizes corroborated the significant results. CONCLUSION: Overall, the questionnaires and behavioral tests used in this study were sensitive to detecting auditory-processing differences between individuals diagnosed with ASD and those who are considered neurotypical. On most test measures, children performed more poorly than adults. The findings in this study support that both children and adults with ASD exhibit auditory-processing difficulties. Appropriate school and work accommodations will be necessary to ensure appropriate access to speech in challenging environments.


Subject(s)
Autism Spectrum Disorder; Adolescent; Auditory Perception; Child; Humans; Speech; Young Adult
16.
Semin Hear ; 41(4): 277-290, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33364677

ABSTRACT

School classrooms are noisy and reverberant environments, and the poor acoustics can be a barrier to successful learning in children, particularly those with multiple disabilities, auditory processing issues, and hearing loss. A new set of listening challenges has been imposed by the recent global pandemic and subsequent online learning requirements. The goal of this article is to review the impact of poor acoustics on the performance of children with auditory processing issues, mild hearing loss, and unilateral hearing loss. In addition, we will summarize the evidence in support of remote microphone technology use by these populations.

17.
J Am Acad Audiol ; 31(9): 666-673, 2020 10.
Article in English | MEDLINE | ID: mdl-33225433

ABSTRACT

BACKGROUND: Cochlear implant qualifying criteria for adult patients with public insurance policies are stricter than the labeled manufacturer criteria. It remains unclear whether insurance payer status affects timely access to implants for adult patients who could derive benefit from the devices. PURPOSE: This study examined whether insurance payer status affected access to cochlear implant services and longitudinal speech-perception outcomes in adult cochlear implant recipients. RESEARCH DESIGN: Retrospective cross-sectional study. STUDY SAMPLE: Sixty-eight data points were queried from the Health Insurance Portability and Accountability Act-Secure, Encrypted, Research Management and Evaluation Solution database, which consists of 12,388 de-identified data points from adult and pediatric cochlear implant recipients. DATA ANALYSIS: Linear mixed-effects models were used to determine whether insurance payer status affected timely access to cochlear implants and whether payer status predicted longitudinal postoperative speech-perception scores in quiet and noise. RESULTS: Results from linear mixed-effects regression models indicated that insurance payer status was a significant predictor of behavioral speech-perception scores in quiet and in background noise, with patients with public insurance experiencing poorer outcomes. In addition, extended wait time to receive a cochlear implant was predicted to significantly decrease speech-perception outcomes for patients with public insurance. CONCLUSION: This study documented that patients covered by public health insurance wait longer to receive cochlear implants and experience poorer postoperative speech-perception outcomes. These results have important clinical implications regarding cochlear implant candidacy criteria and intervention protocols.


Subject(s)
Cochlear Implantation; Cochlear Implants; Insurance; Speech Perception; Adult; Child; Cross-Sectional Studies; Humans; Retrospective Studies; Speech
18.
Am J Audiol ; 29(4): 851-861, 2020 Dec 09.
Article in English | MEDLINE | ID: mdl-32966101

ABSTRACT

Purpose This retrospective study used a cochlear implant registry to determine how performing speech recognition candidacy testing in quiet versus noise influenced patient selection, speech recognition, and self-report outcomes. Method Database queries identified 1,611 cochlear implant recipients who were divided into three implant candidacy qualifying groups based on preoperative speech perception scores (≤ 40% correct) on the AzBio sentence test: quiet qualifying group, +10 dB SNR qualifying group, and +5 dB SNR qualifying group. These groups were evaluated for demographic and preoperative hearing characteristics. Repeated-measures analysis of variance was used to compare pre- and postoperative performance on the AzBio in quiet and noise with qualifying group as a between-subjects factor. For a subset of recipients, pre- to postoperative changes on the Speech, Spatial and Qualities of Hearing Scale were also evaluated. Results Of the 1,611 patients identified as cochlear implant candidates, 63% of recipients qualified in quiet, 10% qualified in a +10 dB SNR, and 27% qualified in a +5 dB SNR. Postoperative speech perception scores in quiet and noise significantly improved for all qualifying groups. Across qualifying groups, the greatest speech perception improvements were observed when tested in the same qualifying listening condition. For a subset of patients, the total Speech, Spatial and Qualities of Hearing Scale ratings improved significantly as well. Conclusion Patients who qualified for cochlear implantation in quiet or background noise test conditions showed significant improvement in speech perception and quality of life scores, especially when the qualifying noise condition was used to track performance.


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Hearing; Humans; Quality of Life; Registries; Retrospective Studies
19.
J Am Acad Audiol ; 31(2): 96-104, 2020 02.
Article in English | MEDLINE | ID: mdl-31267957

ABSTRACT

BACKGROUND: Identifying objective changes following an auditory training program is central to the assessment of the program's efficacy. PURPOSE: This study aimed (1) to objectively determine the efficacy of a 12-week auditory processing training (APT) program in individuals with autism spectrum disorder using auditory evoked potentials (AEPs) and (2) to identify the top central AEP predictors of the overall score on the Test of Auditory Processing Skills-3 (TAPS-3), the primary behavioral outcome measure of the APT program published in our earlier article. RESEARCH DESIGN: A one-group pretraining, posttraining design was used. STUDY SAMPLE: The sample included 15 children and young adults diagnosed with autism spectrum disorder. Participants underwent the APT program consisting of computerized dichotic training, one-on-one therapist-directed auditory training, and the use of remote microphone technology at home and in the classroom. DATA COLLECTION AND ANALYSIS: All participants underwent pre- and posttraining auditory brain stem responses (ABRs), complex auditory brain stem responses (cABRs), and auditory late responses (ALRs). Test results from ABRs and ALRs were grouped based on scores obtained in their dominant and nondominant ears. Paired t-tests were used to assess the efficacy of the training program, and least absolute shrinkage and selection operator regression was used to assess the relationship between ALRs and the TAPS-3 overall summed raw score reported in our earlier article. RESULTS AND CONCLUSIONS: When compared with pretraining results, posttraining results showed shorter ABR latencies and larger amplitudes. The cABRs showed decreased latencies of the frequency following waves, a reduction in pitch error, and enhancement of pitch strength and phase shift. ALR results indicated shorter latencies and larger amplitudes. Our earlier article showed that the TAPS-3 overall score was significantly higher after training. This study showed that the top three ALR predictors of TAPS-3 outcomes were P1 amplitude in the dominant ear, and N1 amplitude in the dominant and nondominant ears.
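The variable-selection step described here (least absolute shrinkage and selection operator regression identifying the top AEP predictors of the TAPS-3 overall score) can be sketched as follows: a LASSO fit shrinks uninformative coefficients to zero, and the surviving predictors are the "top" ones. The feature names, data file, and cross-validated penalty below are hypothetical, not the study's actual variables.

```python
# Hedged sketch of LASSO-based predictor selection for a behavioral outcome.
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("alr_taps3.csv")                      # hypothetical data file
features = ["P1_amp_dom", "N1_amp_dom", "N1_amp_nondom",
            "P2_lat_dom", "P2_lat_nondom"]             # assumed column names
X = StandardScaler().fit_transform(df[features])       # standardize so penalties are comparable
y = df["taps3_overall"]

model = LassoCV(cv=5).fit(X, y)                        # penalty chosen by cross-validation
selected = [f for f, coef in zip(features, model.coef_) if abs(coef) > 0]
print("Retained ALR predictors:", selected)
```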


Subject(s)
Auditory Perception/physiology; Autism Spectrum Disorder/physiopathology; Evoked Potentials, Auditory, Brain Stem/physiology; Adolescent; Child; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Reaction Time/physiology; Young Adult
20.
J Am Acad Audiol ; 31(1): 50-60, 2020 01.
Article in English | MEDLINE | ID: mdl-31429403

ABSTRACT

BACKGROUND: Children with hearing loss often experience difficulty understanding speech in noisy and reverberant classrooms. Traditional remote microphone use, in which the teacher wears a remote microphone that captures her speech and wirelessly delivers it to radio receivers coupled to a child's hearing aids, is often ineffective for small-group listening and learning activities. A potential solution is to place a remote microphone in the middle of the desk used for small-group learning situations to capture the speech of the peers around the desk and wirelessly deliver the speech to the child's hearing aids. PURPOSE: The objective of this study was to compare speech recognition of children using hearing aids across three conditions: (1) hearing aid in an omnidirectional microphone mode (HA-O), (2) hearing aid with automatic activation of a directional microphone (HA-ADM) (i.e., the hearing aid automatically switches in noisy environments from omnidirectional mode to a directional mode with a cardioid polar plot pattern), and (3) HA-ADM with simultaneous use of a remote microphone (RM) in a "Small Group" mode (HA-ADM-RM). The Small Group mode is designed to pick up multiple near-field talkers. An additional objective of this study was to compare the subjective listening preferences of children between the HA-ADM and HA-ADM-RM conditions. RESEARCH DESIGN: A single-group, repeated measures design was used to evaluate performance differences obtained in the three technology conditions. Sentence recognition in noise was assessed in a classroom setting with each technology, while sentences were presented at a fixed level from three different loudspeakers surrounding a desk (0, 90, and 270° azimuth) at which the participant was seated. This arrangement was intended to simulate a small-group classroom learning activity. STUDY SAMPLE: Fifteen children with moderate to moderately severe hearing loss. DATA COLLECTION AND ANALYSIS: Speech recognition was evaluated in the three hearing technology conditions, and subjective auditory preference was evaluated in the HA-ADM and HA-ADM-RM conditions. RESULTS: The use of the remote microphone system in the Small Group mode resulted in a statistically significant improvement in sentence recognition in noise of 24 and 21 percentage points compared with the HA-O and HA-ADM conditions, respectively (individual benefit ranged from -8.6 to 61.1 and 3.4 to 44 percentage points, respectively). There was not a significant difference in sentence recognition in noise between the HA-O and HA-ADM conditions when the remote microphone system was not in use. Eleven of the 14 participants who completed the subjective rating scale reported at least a slight preference for the use of the remote microphone system in the Small Group mode. CONCLUSIONS: Objective and subjective measures of sentence recognition indicated that use of remote microphone technology with the Small Group mode may improve hearing performance in small-group learning activities. Sentence recognition in noise improved by 24 percentage points compared to the HA-O condition, and children expressed a preference for the use of the remote microphone Small Group technology regarding listening comfort, sound quality, speech intelligibility, background noise reduction, and overall listening experience.


Subject(s)
Deafness/rehabilitation; Hearing Aids; Speech Perception; Adolescent; Auditory Threshold; Child; Equipment Design; Humans; Noise/adverse effects