ABSTRACT
Cochlear implants (CIs) are neuroprosthetic devices that can provide hearing to deaf people [1]. Despite the benefits offered by CIs, the time taken for hearing to be restored and perceptual accuracy after long-term CI use remain highly variable [2,3]. CI use is believed to require neuroplasticity in the central auditory system, and differential engagement of neuroplastic mechanisms might contribute to the variability in outcomes [4-7]. Despite extensive studies on how CIs activate the auditory system [4,8-12], the understanding of CI-related neuroplasticity remains limited. One potent factor enabling plasticity is the neuromodulator noradrenaline, released from the brainstem locus coeruleus (LC). Here we examine behavioural responses and neural activity in the LC and auditory cortex of deafened rats fitted with multi-channel CIs. The rats were trained on a reward-based auditory task and showed considerable individual differences in learning rates and maximum performance. LC photometry predicted when CI subjects began responding to sounds and their longer-term perceptual accuracy. Optogenetic LC stimulation produced faster learning and higher long-term accuracy. Auditory cortical responses to CI stimulation reflected behavioural performance, with enhanced responses to rewarded stimuli and decreased distinction between unrewarded stimuli. Adequate engagement of central neuromodulatory systems is thus a potential clinically relevant target for optimizing neuroprosthetic device use.
Subjects
Cochlear Implants, Deafness, Locus Coeruleus, Animals, Rats, Cochlear Implantation, Deafness/physiopathology, Deafness/therapy, Hearing/physiology, Learning/physiology, Locus Coeruleus/cytology, Locus Coeruleus/physiology, Neuronal Plasticity, Norepinephrine/metabolism, Auditory Cortex/cytology, Auditory Cortex/physiology, Auditory Cortex/physiopathology, Neurons/physiology, Reward, Optogenetics, Photometry
ABSTRACT
OBJECTIVES: Despite performing well in standard clinical assessments of speech perception, many cochlear implant (CI) users report experiencing significant difficulties when listening in real-world environments. We hypothesize that this disconnect may be related, in part, to the limited ecological validity of tests that are currently used clinically and in research laboratories. The challenges that arise from degraded auditory information provided by a CI, combined with the listener's finite cognitive resources, may lead to difficulties when processing speech material that is more demanding than the single words or single sentences that are used in clinical tests. DESIGN: Here, we investigate whether speech identification performance and processing effort (indexed by pupil dilation measures) are affected when CI users or normal-hearing control subjects are asked to repeat two sentences presented sequentially instead of just one sentence. RESULTS: Response accuracy was minimally affected in normal-hearing listeners, but CI users showed a wide range of outcomes, from no change to decrements of up to 45 percentage points. The amount of decrement was not predictable from the CI users' performance in standard clinical tests. Pupillometry measures tracked closely with task difficulty in both the CI group and the normal-hearing group, even though the latter had speech perception scores near ceiling levels for all conditions. CONCLUSIONS: Speech identification performance is significantly degraded in many (but not all) CI users in response to input that is only slightly more challenging than standard clinical tests; specifically, when two sentences are presented sequentially before requesting a response, instead of presenting just a single sentence at a time. This potential "2-sentence problem" represents one of the simplest possible scenarios that go beyond presentation of the single words or sentences used in most clinical tests of speech perception, and it raises the possibility that even good performers in single-sentence tests may be seriously impaired by other ecologically relevant manipulations. The present findings also raise the possibility that a clinical version of a 2-sentence test may provide actionable information for counseling and rehabilitating CI users, and for people who interact with them closely.
Subjects
Cochlear Implants, Speech Perception, Humans, Male, Female, Adult, Middle Aged, Aged, Case-Control Studies, Pupil/physiology, Young Adult, Cochlear Implantation
ABSTRACT
Binaural unmasking, a key feature of normal binaural hearing, refers here to the improvement in the intelligibility of masked speech when added masking facilitates the perceived separation of target and masker. A question relevant for cochlear implant users with single-sided deafness (SSD-CI) is whether binaural unmasking can still be achieved if the additional masking is spectrally degraded and shifted. CIs restore some aspects of binaural hearing to these listeners, although binaural unmasking remains limited. Notably, these listeners may experience a mismatch between the frequency information perceived through the CI and that perceived by their normal-hearing ear. Using acoustic simulations of SSD-CI with normal-hearing listeners, the present study confirms a previous simulation finding that binaural unmasking is severely limited when the interaural frequency mismatch between the input frequency range and the simulated place of stimulation exceeds 1-2 mm. The present study also shows that binaural unmasking is largely retained when the input frequency range is adjusted to match the simulated place of stimulation, even at the expense of removing low-frequency information. This result bears implications for the mechanisms driving the type of binaural unmasking observed in the present study and for mapping the frequency range of the CI speech processor in SSD-CI users.
Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Speech Perception, Deafness/diagnosis, Hearing, Humans
ABSTRACT
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine its meaning. However, the reduced acoustic richness of the signal delivered by a cochlear implant (CI) raises the question of whether CI users have difficulty using sentence prosody to detect syntactic clause boundaries within sentences, or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed that congruent prosody was associated with superior sentence recall and reduced processing effort, as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to both recall accuracy and processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary and of processing effort.
Subjects
Cochlear Implants, Speech Perception, Language, Mental Recall, Speech
ABSTRACT
OBJECTIVES: Cochlear implants (CIs) restore speech perception in quiet, but they also eliminate or distort many acoustic cues that are important for music enjoyment. Unfortunately, quantifying music enjoyment by CI users has been difficult because comparisons must rely on their recollection of music before they lost their hearing. Here, we aimed to assess music enjoyment in CI users using a readily interpretable reference based on acoustic hearing. The comparison was done by testing "single-sided deafness" (SSD) patients who have normal hearing (NH) in one ear and a CI in the other ear. The study also aimed to assess binaural musical enjoyment, with the reference being the experience of hearing with a single NH ear. Three experiments assessed the effect of adding different kinds of input to the second ear: electrical, vocoded, or unmodified. DESIGN: In experiment 1, music enjoyment in SSD-CI users was investigated using a modified version of the MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor) method. Listeners rated their enjoyment of song segments on a scale of 0 to 200, where 100 represented the enjoyment obtained from a song segment presented to the NH ear, 0 represented a highly degraded version of the same song segment presented to the same ear, and 200 represented enjoyment subjectively rated as twice as good as the 100 reference. Stimuli consisted of acoustic-only, electric-only, and combined acoustic-and-electric conditions, as well as other conditions with low-pass-filtered acoustic stimuli. Acoustic stimulation was provided by headphone to the NH ear, and electric stimulation was provided by direct audio input to the subject's speech processor. In experiment 2, the task was repeated using NH listeners who received vocoded stimuli instead of electric stimuli. Experiment 3 tested the effect of adding the same unmodified song segment to the second ear, also in NH listeners. RESULTS: Music presented through the CI only was very unpleasant, with an average rating of 20. Surprisingly, the combination of the unpleasant CI signal in one ear with acoustic stimulation in the other ear was rated more enjoyable (mean = 123) than acoustic stimulation alone. Presentation of the same monaural musical signal to both ears in NH listeners resulted in even greater enhancement of the experience compared with presentation to a single ear (mean = 159). Repeating the experiment with a vocoder in one ear of NH listeners resulted in interference rather than enhancement. CONCLUSIONS: Music enjoyment from electric stimulation is extremely poor relative to a readily interpretable NH baseline for CI-SSD listeners. However, the combination of this unenjoyable signal presented through a CI and an unmodified acoustic signal presented to an NH (or near-NH) contralateral ear results in enhanced music enjoyment with respect to the acoustic signal alone. Remarkably, this two-ear enhancement experienced by CI-SSD listeners represents a substantial fraction of the two-ear enhancement seen in NH listeners. This unexpected benefit of electroacoustic auditory stimulation will have to be considered in theoretical accounts of music enjoyment and may facilitate the quest to enhance music enjoyment in CI users.
Subjects
Cochlear Implantation, Cochlear Implants, Music, Speech Perception, Acoustic Stimulation, Auditory Perception, Humans
ABSTRACT
OBJECTIVES: (1) To determine the effect of hearing aid (HA) bandwidth on bimodal speech perception in a group of unilateral cochlear implant (CI) patients with diverse degrees and configurations of hearing loss in the nonimplanted ear, and (2) to determine whether there are demographic and audiometric characteristics that would help identify the appropriate HA bandwidth for a bimodal patient. DESIGN: Participants were 33 experienced bimodal device users with postlingual hearing loss. Twenty-three of them had better speech perception with the CI than the HA (CI>HA group) and 10 had better speech perception with the HA than the CI (HA>CI group). Word recognition in sentences (AzBio sentences at +10 dB signal-to-noise ratio presented at 0° azimuth) and in isolation [CNC (consonant-nucleus-consonant) words] was measured in unimodal conditions [CI alone or HAWB, which indicates HA alone in the wideband (WB) condition] and in bimodal conditions (BMWB, BM2k, BM1k, and BM500) as the bandwidth of an actual HA was reduced from WB to 2 kHz, 1 kHz, and 500 Hz. Linear mixed-effects modeling was used to quantify the relationship between speech recognition and listening condition and to assess how audiometric or demographic covariates might influence this relationship in each group. RESULTS: For the CI>HA group, AzBio scores were significantly higher (on average) in all bimodal conditions than in the best unimodal condition (CI alone) and were highest in the BMWB condition. For CNC scores, on the other hand, there was no significant improvement over the CI-alone condition in any of the bimodal conditions. The opposite pattern was observed in the HA>CI group. CNC word scores were significantly higher in the BM2k and BMWB conditions than in the best unimodal condition (HAWB), but none of the bimodal conditions were significantly better than the best unimodal condition for AzBio sentences (and some of the restricted bandwidth conditions were actually worse). Demographic covariates did not interact significantly with bimodal outcomes, but some of the audiometric variables did. For CI>HA participants with a flatter audiometric configuration and better mid-frequency hearing, bimodal AzBio scores were significantly higher than the CI-alone score with the WB setting (BMWB) but not with other bandwidths. In contrast, CI>HA participants with more steeply sloping hearing loss and poorer mid-frequency thresholds (≥82.5 dB) had significantly higher bimodal AzBio scores in all bimodal conditions, and the BMWB condition did not differ significantly from the restricted bandwidth conditions. HA>CI participants with mild low-frequency hearing loss showed the highest levels of bimodal improvement over the best unimodal condition on CNC words. They were also less affected by HA bandwidth reduction than HA>CI participants with poorer low-frequency thresholds. CONCLUSIONS: The pattern of bimodal performance as a function of HA bandwidth was consistent with the degree and configuration of hearing loss both for patients with CI>HA performance and for those with HA>CI performance. Our results support fitting the HA for all bimodal patients with the widest bandwidth consistent with effective audibility.
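As a rough illustration of the analysis approach named above (linear mixed-effects modeling of speech scores across listening conditions), the following Python sketch fits a random-intercept model with statsmodels; the data frame, column names, and scores are invented placeholders, not the study's dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder long-format data: one row per subject x listening condition
# (scores are invented for illustration only).
df = pd.DataFrame({
    "subject":   ["s1"] * 3 + ["s2"] * 3 + ["s3"] * 3 + ["s4"] * 3,
    "condition": ["CI", "BM500", "BMWB"] * 4,
    "azbio":     [55, 60, 68, 40, 47, 58, 62, 63, 70, 35, 41, 50],
})

# Random intercept per subject; fixed effect of listening condition.
model = smf.mixedlm("azbio ~ condition", df, groups=df["subject"])
print(model.fit().summary())
```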
Subjects
Cochlear Implantation, Hearing Aids, Hearing Loss, Bilateral/rehabilitation, Speech Perception, Aged, Cochlear Implants, Female, Humans, Male, Middle Aged
ABSTRACT
Cochlear implant (CI) recipients have difficulty understanding speech in noise even at moderate signal-to-noise ratios. Knowing the mechanisms they use to understand speech in noise may facilitate the search for better speech processing algorithms. In the present study, a computational model is used to assess whether CI users' vowel identification in noise can be explained by formant frequency cues (F1 and F2). Vowel identification was tested with 12 unilateral CI users in quiet and in noise. Formant cues were measured from vowels in each condition, specific to each subject's speech processor. Noise distorted the location of vowels in the F2 vs F1 plane in comparison to quiet. The model that best fit subjects' data in quiet produced predictions in noise that were, on average, within 8% of actual scores. Predictions in noise were much better when assuming that subjects used a priori knowledge regarding how formant information is degraded in noise (experiment 1). However, the model's best fit to subjects' confusion matrices in noise was worse than in quiet, suggesting that CI users utilize formant cues to identify vowels in noise, but to a different extent than in quiet (experiment 2).
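A minimal sketch of the general idea of predicting vowel confusions from F1/F2 formant cues with a nearest-template decision rule and Monte Carlo cue perturbation; this is not the authors' model, and the vowel templates and noise parameters below are illustrative assumptions.

```python
import numpy as np

# Illustrative F1/F2 templates (Hz) for four vowels; values are assumptions,
# not measurements from any subject's speech processor.
TEMPLATES = {"i": (300, 2300), "ae": (700, 1700), "a": (750, 1100), "u": (350, 900)}

def predict_confusions(cue_sd=(60.0, 150.0), n_trials=5000, seed=0):
    """Monte Carlo confusion matrix from a nearest-template decision rule.

    cue_sd: standard deviations of Gaussian perturbation on (F1, F2),
            standing in for formant degradation in noise.
    """
    rng = np.random.default_rng(seed)
    vowels = list(TEMPLATES)
    centers = np.array([TEMPLATES[v] for v in vowels], dtype=float)
    confusion = np.zeros((len(vowels), len(vowels)))
    for i in range(len(vowels)):
        # Perturb the presented vowel's formants, then pick the nearest template.
        noisy = centers[i] + rng.normal(0.0, cue_sd, size=(n_trials, 2))
        dists = np.linalg.norm(noisy[:, None, :] - centers[None, :, :], axis=2)
        picks = dists.argmin(axis=1)
        for j in range(len(vowels)):
            confusion[i, j] = np.mean(picks == j)
    return vowels, confusion

if __name__ == "__main__":
    vowels, conf = predict_confusions()
    print(vowels)
    print(conf.round(2))
```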
Subjects
Cochlear Implantation/instrumentation, Cochlear Implants, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Speech Acoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adult, Aged, Algorithms, Cues, Electric Stimulation, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology, Signal Processing, Computer-Assisted
ABSTRACT
Cochlear implants are neuroprosthetic devices that provide hearing to deaf patients, although outcomes are highly variable even with prolonged training and use. The central auditory system must process cochlear implant signals, but it is unclear how neural circuits adapt, or fail to adapt, to such inputs. Knowledge of these mechanisms is required for the development of next-generation neuroprosthetics that interface with existing neural circuits and enable synaptic plasticity to improve perceptual outcomes. Here, we describe a new system for cochlear implant insertion, stimulation, and behavioral training in rats. Significant hearing loss was first confirmed in all animals using physiological and behavioral criteria. We developed a surgical approach for multichannel (2- or 8-channel) array insertion, comparable to implantation procedures and insertion depths in humans. Peripheral and cortical responses to stimulation were used to program the implant objectively. Animals fitted with implants learned to use them for an auditory-dependent task that assesses frequency detection and recognition in a background of environmental and self-generated noise, and they ceased responding appropriately to sounds when the implant was temporarily inactivated. This physiologically calibrated and behaviorally validated system provides a powerful opportunity to study the neural basis of neuroprosthetic device use and plasticity.
Subjects
Cochlear Implantation/methods, Cochlear Implants, Hearing Loss/rehabilitation, Hearing Loss/surgery, Recovery of Function/physiology, Action Potentials/physiology, Animals, Disease Models, Animal, Electric Stimulation, Evoked Potentials, Auditory, Brain Stem/physiology, Female, Functional Laterality, Microradiography, Rats, Rats, Sprague-Dawley, Temporal Bone/diagnostic imaging, Temporal Bone/pathology, Temporal Bone/physiopathology
ABSTRACT
Although speech signals are encoded in the cochlea and conveyed to the central auditory structures, little is known about the neural mechanisms involved in these processes. The purpose of this study was to understand the encoding of formant cues and how it relates to vowel recognition in listeners. Neural representations of formants may differ across listeners; however, it was hypothesized that neural patterns could still predict vowel recognition. To test the hypothesis, the frequency-following response (FFR) and vowel recognition were obtained from 38 normal-hearing listeners using four different vowels, allowing direct comparisons between behavioral and neural data in the same individuals. The FFR was employed because it provides an objective and physiological measure of neural activity that can reflect formant encoding. A mathematical model was used to describe vowel confusion patterns based on the neural responses to vowel formant cues. The major findings were (1) there were large variations in the accuracy of vowel formant encoding across listeners as indexed by the FFR, (2) these variations were systematically related to vowel recognition performance, and (3) the mathematical model of vowel identification successfully predicted good versus poor vowel identification performers based exclusively on physiological data.
Subjects
Recognition, Psychology/physiology, Speech Perception/physiology, Adult, Aged, Cochlea/physiology, Cues, Evoked Potentials, Auditory/physiology, Female, Humans, Male, Middle Aged, Models, Neurological, Monte Carlo Method, Perceptual Masking/physiology, Phonetics, Speech Acoustics, Young Adult
ABSTRACT
OBJECTIVES: Commercially available cochlear implant systems attempt to deliver frequency information down to a few hundred hertz, but the electrode arrays are not designed to reach the most apical regions of the cochlea, which correspond to these low frequencies. This may cause a mismatch between the frequencies presented by a cochlear implant electrode array and the frequencies represented at the corresponding location in a normal-hearing cochlea. In the present study, the mismatch between the frequency presented at a given cochlear angle and the frequency expected by an acoustic-hearing ear at the corresponding angle is examined for the cochlear implant systems most commonly used in the United States. DESIGN: The angular insertion of each of the electrodes on four different electrode arrays (MED-EL Standard, MED-EL Flex28, Advanced Bionics HiFocus 1J, and Cochlear Contour Advance) was estimated from X-ray imaging. For the angular location of each electrode on each array, the predicted spiral ganglion frequency was estimated and compared with the center frequency provided by the corresponding electrode under the manufacturer's default frequency-to-electrode allocation. RESULTS: Differences across devices were observed in the place of stimulation for frequencies below 650 Hz. Longer electrode arrays (i.e., the MED-EL Standard and Flex28) showed smaller deviations from the spiral ganglion map than the other electrode arrays. For insertion angles up to approximately 270°, the frequencies presented at a given location were typically about an octave below what would be expected from a spiral ganglion frequency map, and the deviations were larger for angles deeper than 270°. For frequencies above 650 Hz, the frequency-to-angle relationship was consistent across all four electrode models. CONCLUSIONS: A mismatch was observed between the predicted frequency and the default frequency provided by every electrode on all electrode arrays. The mismatch can be reduced by changing the default frequency allocations, inserting electrodes deeper into the cochlea, or allowing cochlear implant users to adapt to the mismatch. Further studies are required to fully assess the clinical significance of the frequency mismatch.
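A hedged sketch of the kind of place-versus-map comparison described above, using the Greenwood organ-of-Corti function as a stand-in for a place-frequency map (the spiral ganglion map used in the study differs, particularly apically); the angle-to-distance conversion and the electrode angles and default center frequencies below are illustrative assumptions.

```python
import math

def greenwood_hz(fraction_from_apex):
    """Greenwood (1990) place-frequency function for the human organ of Corti.
    fraction_from_apex: 0.0 at the apex, 1.0 at the base."""
    return 165.4 * (10 ** (2.1 * fraction_from_apex) - 0.88)

def angle_to_fraction(angle_deg, total_angle_deg=900.0):
    """Crude linear mapping from insertion angle (measured from the base) to
    fractional distance from the apex -- an assumption for illustration only."""
    return max(0.0, 1.0 - angle_deg / total_angle_deg)

# Hypothetical electrode angles (deg) and default map center frequencies (Hz).
electrodes = [(45, 7938), (135, 4362), (270, 1624), (405, 605), (540, 333)]

for angle, default_hz in electrodes:
    place_hz = greenwood_hz(angle_to_fraction(angle))
    mismatch_oct = math.log2(default_hz / place_hz)
    print(f"{angle:4d} deg: place ~{place_hz:7.0f} Hz, "
          f"map {default_hz:5d} Hz, mismatch {mismatch_oct:+.2f} oct")
```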
Subjects
Cochlear Implantation/methods, Cochlear Implants, Deafness/rehabilitation, Pitch Perception, Spiral Ganglion, Electrodes, Implanted, Humans
ABSTRACT
Objectives: To provide a level-adjusted correction to the current standard relating anatomical cochlear place to characteristic frequency in humans, and to re-evaluate anatomical frequency mismatch in cochlear implant (CI) recipients considering this correction. It is hypothesized that a level-adjusted place-frequency function may represent a more accurate tonotopic benchmark for CIs in comparison to the current standard. Design: The present analytical study compiled data from fifteen previous animal studies that reported iso-intensity responses from cochlear structures at different stimulation levels. Extracted outcome measures were characteristic frequencies and centroid-based best frequencies at 70 dB SPL input from 47 specimens spanning a broad range of cochlear locations. A simple relationship was used to transform these measures to human estimates of characteristic and best frequencies, and non-linear regression was applied to these estimates to determine how the standard human place-frequency function should be adjusted to reflect best frequency rather than characteristic frequency. The proposed level-adjusted correction was then compared to average place-frequency positions of commonly used CI devices when programmed with clinical settings. Results: The present study showed that the best frequency at 70 dB SPL (BF70) tends to shift away from the characteristic frequency (CF). The amount of shift was statistically significant (signed-rank test z = 5.143, p < 0.001), but the amount and direction of shift depended on cochlear location. At cochlear locations up to 600° from the base, BF70 shifted downwards in frequency relative to CF by about 4 semitones on average. Beyond 600° from the base, BF70 shifted upwards in frequency relative to CF by about 6 semitones on average. In terms of spread (90% prediction interval), the shift between CF and BF70 ranged from essentially no shift to nearly an octave. With the new level-adjusted frequency-place function, the amount of anatomical frequency mismatch for devices programmed with standard-of-care settings is less extreme than originally thought, and may be nonexistent for all but the most apical electrodes. Conclusions: The present study validates the current standard for relating cochlear place to characteristic frequency, and introduces a level-adjusted correction for how best frequency shifts away from characteristic frequency at moderately loud stimulation levels. This correction may represent a more accurate tonotopic reference for CIs. To the extent that it does, its implementation may potentially enhance perceptual accommodation and speech understanding in CI users, thereby improving CI outcomes and contributing to advancements in the programming and clinical management of CIs.
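For reference, the CF-to-BF70 shifts summarized above can be expressed in semitones with a one-line conversion; the example frequencies are illustrative, not values from the compiled animal data.

```python
import math

def shift_semitones(cf_hz, bf70_hz):
    """Signed shift from characteristic frequency (CF) to best frequency at
    70 dB SPL (BF70), in semitones; negative means BF70 lies below CF."""
    return 12.0 * math.log2(bf70_hz / cf_hz)

# Illustrative values only: a basal-to-mid place shifting down ~4 semitones,
# and an apical place shifting up ~6 semitones.
print(round(shift_semitones(1000.0, 794.0), 1))   # ~ -4.0
print(round(shift_semitones(250.0, 353.0), 1))    # ~ +6.0
```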
ABSTRACT
Objective: The process of resident recruitment is costly, and our surgical residency program expends significant time on the resident selection process while balancing general duties and responsibilities. The aim of our study was to explore the relationship between otolaryngology-head and neck surgery (OHNS) residents' National Residency Matching Program (NRMP) rank-list position at our institution and their subsequent residency performance. Study Design: Retrospective cohort study. Setting: Single-site institution. Methods: We retrospectively reviewed 7 consecutive resident classes (2011-2017) at a single tertiary OHNS residency program. We reviewed each resident's absolute rank order in the NRMP matches. Measures of residency performance included overall faculty evaluation during postgraduate year 5 (PGY5), annual in-service examination scores (scaled score), and the number of manuscripts published in peer-reviewed journals. Correlations between NRMP rank order and subsequent residency performance were assessed using Spearman's rho correlation coefficients (ρ). Results: Twenty-eight residents entered residency training between 2011 and 2017. The average rank position of the trainees during this study was 9.7 (range: 1-22). We found no significant correlation between rank order and faculty evaluation during PGY5 (ρ = 0.097, P = .625) or the number of publications (ρ = -0.256, P = .189). Additionally, when assessing the association between rank order and annual Otolaryngology Training Examination scaled scores, no statistically significant relationship was found between the two (P > .05). Conclusion: Our results showed no significant correlations between OHNS rank order and various measures of success in residency training, which aligns with the existing literature. Further investigation of this relationship should be conducted to confirm the generalizability of our findings.
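A minimal illustration of the Spearman rank-correlation analysis described above, using SciPy; the rank positions and evaluation scores are made-up placeholders, not study data.

```python
from scipy.stats import spearmanr

# Placeholder values: NRMP rank-list position and a residency performance
# measure (e.g., PGY5 faculty evaluation) for a few hypothetical residents.
rank_position = [1, 4, 7, 10, 14, 18, 22]
pgy5_evaluation = [4.2, 4.5, 4.1, 4.6, 4.3, 4.4, 4.0]

rho, p_value = spearmanr(rank_position, pgy5_evaluation)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```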
ABSTRACT
Background/Objective: Bilaterally implanted cochlear implant (CI) users do not consistently have access to interaural time differences (ITDs), which are crucial for restoring the ability to localize sounds and understand speech in noisy environments. Lack of access to ITDs is partly due to the lack of communication between clinical processors across the ears and partly because processors must use relatively high rates of stimulation to encode envelope information. Speech understanding is best at higher stimulation rates, but sensitivity to ITDs in the timing of pulses is best at low stimulation rates. Methods: We implemented a practical "mixed rate" strategy that encodes ITD information using a low stimulation rate on some channels and speech information using high rates on the remaining channels. The strategy was tested using a bilaterally synchronized research processor, the CCi-MOBILE. Nine bilaterally implanted CI users were tested on speech understanding and were asked to judge the location of a sound based on ITDs encoded with this strategy. Results: Performance was similar in both tasks between the control strategy and the new strategy. Conclusions: We discuss the benefits and drawbacks of the sound coding strategy and provide guidelines for using synchronized processors to develop future strategies.
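A schematic sketch of the channel-assignment idea behind a mixed-rate strategy: a few channels run at a low rate to carry pulse-timing ITDs while the rest run at a high rate for envelope/speech coding. The channel count, rates, and choice of ITD channels are illustrative assumptions, not the parameters used with the CCi-MOBILE processor.

```python
from dataclasses import dataclass

@dataclass
class ChannelConfig:
    channel: int        # electrode/channel index (1 = most apical here)
    rate_pps: int       # per-channel stimulation rate, pulses per second
    carries_itd: bool   # low-rate channels convey pulse-timing ITDs

def mixed_rate_map(n_channels=16, low_rate=100, high_rate=900, n_itd_channels=4):
    """Assign a low rate to a few (here, apical) channels for ITD coding and a
    high rate to the remaining channels for envelope coding -- illustrative only."""
    config = []
    for ch in range(1, n_channels + 1):
        itd = ch <= n_itd_channels
        config.append(ChannelConfig(ch, low_rate if itd else high_rate, itd))
    return config

for c in mixed_rate_map():
    print(c)
```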
ABSTRACT
OBJECTIVE: To characterize transimpedance matrix (TIM) heatmap patterns in patients at risk of labyrinthine abnormality to better understand accuracy and possible TIM limitations. STUDY DESIGN: Retrospective review of TIM patterns, preoperative, and postoperative imaging. SETTING: Tertiary referral center. PATIENTS: Patients undergoing cochlear implantation with risk of labyrinthine abnormality. INTERVENTION: None. RESULTS: Seventy-seven patients were evaluated. Twenty-five percent (n = 19) of patients had a TIM pattern variant identified. These variants were separated into 10 novel categories. Overall, 9% (n = 6) of electrodes were malpositioned on intraoperative x-ray, of which 50% (n = 3) were underinserted, 17% (n = 1) were overinserted, 17% (n = 1) had a tip foldover, and 17% (n = 1) had a coiled electrode. The proportion of patients with a variant TIM pattern and a normal x-ray was 18% (n = 14), and the proportion of patients with a normal TIM pattern and malposition noted on x-ray was 3% (n = 2; both were electrode underinsertions that were recognized due to open circuits and surgical visualization). A newly defined skip heat pattern was identified in patients with IP2/Mondini malformation and interscalar septum width <0.5 mm at the cochlear pars ascendens of the basal turn. CONCLUSIONS: This study defines novel patterns for TIM heatmap characterization to facilitate collaborative and comparative research moving forward. In doing so, it highlights a new pattern termed skip heat, which corresponds with a deficient interscalar septum at the cochlear pars ascendens of the basal turn in patients with IP2 malformation. Overall, the data assist the surgeon in better understanding the implications and limitations of TIM patterns in groups of patients at risk of labyrinthine abnormalities.
Subjects
Cochlear Implantation, Cochlear Implants, Electric Impedance, Humans, Cochlear Implantation/methods, Retrospective Studies, Female, Male, Child, Preschool, Child, Infant, Inner Ear/abnormalities, Inner Ear/diagnostic imaging, Adolescent, Adult, Middle Aged
ABSTRACT
Neurobiological correlates of adaptation to spectrally degraded speech were investigated with fMRI before and after exposure to a portable real-time speech processor that implements an acoustic simulation model of a cochlear implant (CI). The speech processor, in conjunction with isolating insert earphones and a microphone to capture environmental sounds, was worn by participants over a two-week chronic exposure period. fMRI and behavioral speech comprehension testing were conducted before and after this two-week period. After using the simulator for 2 h each day, participants significantly improved in word and sentence recognition scores. fMRI showed that these improvements were accompanied by changes in patterns of neuronal activation. In particular, we found additional recruitment of visual, motor, and working memory areas after the perceptual training period. These findings suggest that the human brain is able to adapt in a short period of time to a degraded auditory signal in a natural learning environment, and they give insight into how a CI might interact with the central nervous system. This paradigm can be extended to investigate the neural correlates of new rehabilitation, training, and signal processing strategies non-invasively in normal-hearing listeners, in order to improve CI patient outcomes.
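A minimal noise-band vocoder of the kind commonly used to simulate CI processing acoustically; this is a generic sketch, not the portable real-time processor used in the study, and the channel count, filter orders, and corner frequencies are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0, env_cut=160.0):
    """Generic noise-band vocoder: analyze a 1-D numpy signal into log-spaced
    bands, extract each band's envelope, and use it to modulate band-limited
    noise carriers."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    env_sos = butter(2, env_cut / (fs / 2), btype="low", output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for k in range(n_channels):
        band_sos = butter(4, [edges[k] / (fs / 2), edges[k + 1] / (fs / 2)],
                          btype="band", output="sos")
        band = sosfiltfilt(band_sos, signal)
        env = sosfiltfilt(env_sos, np.abs(band))   # rectify and smooth envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += np.clip(env, 0.0, None) * carrier
    # Match overall RMS to the input.
    return out * (np.sqrt(np.mean(signal**2)) / (np.sqrt(np.mean(out**2)) + 1e-12))
```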
Subjects
Adaptation, Physiological/physiology, Brain Mapping, Brain/physiology, Cochlear Implants, Speech Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult
ABSTRACT
OBJECTIVES: The purpose of this study was to determine how the bandwidth of the hearing aid (HA) fitting affects bimodal speech recognition of listeners with a cochlear implant (CI) in one ear and severe-to-profound hearing loss in the unimplanted ear (but with residual hearing sufficient for wideband amplification using National Acoustic Laboratories Revised, Profound [NAL-RP] prescriptive guidelines; unaided thresholds no poorer than 95 dB HL through 2000 Hz). DESIGN: Recognition of sentence material in quiet and in noise was measured with the CI alone and with CI plus HA as the amplification provided by the HA in the high and mid-frequency regions was systematically reduced from the wideband condition (NAL-RP prescription). Modified bandwidths included upper frequency cutoffs of 2000, 1000, or 500 Hz. RESULTS: On average, significant bimodal benefit was obtained when the HA provided amplification at all frequencies with aidable residual hearing. Limiting the HA bandwidth to only low-frequency amplification (below 1000 Hz) did not yield significant improvements in performance over listening with the CI alone. CONCLUSIONS: These data suggest the importance of providing amplification across as wide a frequency region as permitted by audiometric thresholds in the HA used by bimodal users.
Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Speech Perception, Aged, Aged, 80 and over, Speech Audiometry, Auditory Threshold, Combined Modality Therapy, Hearing, Humans, Middle Aged, Noise, Sound Localization, Treatment Outcome
ABSTRACT
OBJECTIVES: Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This may be especially important for post-lingually deafened recipients of cochlear implants (CIs), who must adapt to a signal in which there may be a mismatch between the frequencies of the input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and, if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the present investigation was to test the feasibility of using real-time selection of frequency tables to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatch, even at the expense of reducing the information provided by that frequency table. DESIGN: Thirty-four normal-hearing adults listened to a noise-vocoded acoustic simulation of a CI and adjusted the frequency table in real time until they obtained a frequency table that sounded "most intelligible" to them. The use of an acoustic simulation was essential to this study because it allowed the authors to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch. After obtaining a self-selected table, the authors measured consonant-nucleus-consonant word-recognition scores with that self-selected table and with two other frequency tables: a "frequency-matched" table that matched the analysis filters to the noisebands of the noise-vocoder simulation, and a "right information" table that is similar to that used in most CI speech processors, but that in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. RESULTS: Listeners tended to select a table that was very close to, but shifted slightly lower in frequency than, the frequency-matched table. The real-time selection process took on average 2 to 3 min for each trial, and the between-trial variability was comparable with that previously observed with closely related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. CONCLUSIONS: Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and for finding a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than on some other factor. The within-subject variability of the real-time selection procedure was comparable with that of a genetic algorithm, and the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure.
Subjects
Acoustic Stimulation/methods, Audiology/methods, Auditory Perception/physiology, Cochlear Implants/standards, Deafness/rehabilitation, Speech Perception/physiology, Adult, Cochlear Implantation/methods, Cochlear Implantation/standards, Computer Simulation, Feasibility Studies, Female, Humans, Male, Middle Aged
ABSTRACT
Introduction: In spite of its apparent ease, comprehension of spoken discourse is a complex linguistic and cognitive operation. The difficulty of this operation can increase when the speech is degraded, as it is for cochlear implant users. However, the additional challenges imposed by degraded speech may be mitigated to some extent by the linguistic context and the pace of presentation. Methods: An experiment is reported in which young adults with age-normal hearing recalled discourse passages heard as clear speech or with noise-band vocoding used to simulate the sound of speech produced by a cochlear implant. Passages varied in inter-word predictability and were presented either without interruption or in a self-pacing format that allowed the listener to control the rate at which the information was delivered. Results: Results showed that discourse heard as clear speech was recalled better than vocoded discourse, that passages with higher average inter-word predictability were recalled better than those with lower average inter-word predictability, and that self-paced passages were recalled better than those heard without interruption. Of special interest was the semantic hierarchy effect: the tendency for listeners to recall main ideas better than mid-level information or details from a passage, taken as an index of listeners' ability to understand the meaning of a passage. The data revealed a significant effect of inter-word predictability, in that passages with lower predictability showed an attenuated semantic hierarchy effect relative to higher-predictability passages. Discussion: Results are discussed in terms of broadening cochlear implant outcome measures beyond current clinical measures that focus on single-word and sentence repetition.