ABSTRACT
Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus-as the prime example of auditory phantom perception-we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing based on adaptive stochastic resonance. The neural noise increase can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (brain's expectations) and likelihood (bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
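The Bayesian shift described above can be illustrated with the standard conjugate-Gaussian update, in which the posterior (percept) mean is a precision-weighted average of the prior and likelihood means. A minimal Python sketch with arbitrary illustrative numbers, not parameters from any fitted model:

```python
# Illustrative Gaussian prior-likelihood fusion: raising the likelihood's mean
# and precision (1/variance) pulls the posterior (the percept) toward it.
def gaussian_posterior(mu_prior, var_prior, mu_like, var_like):
    """Posterior mean and variance for a Gaussian prior and Gaussian likelihood."""
    w = var_like / (var_prior + var_like)           # weight given to the prior mean
    mu_post = w * mu_prior + (1 - w) * mu_like      # precision-weighted average
    var_post = (var_prior * var_like) / (var_prior + var_like)
    return mu_post, var_post

# Weak, imprecise spontaneous activity: the percept stays near the "silence" prior.
print(gaussian_posterior(mu_prior=0.0, var_prior=1.0, mu_like=1.0, var_like=4.0))
# Amplified neural noise (higher mean, lower variance): the percept shifts toward
# the bottom-up signal, mimicking the emergence of a phantom sound.
print(gaussian_posterior(mu_prior=0.0, var_prior=1.0, mu_like=2.0, var_like=0.25))
```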
Subjects
Hearing Loss; Tinnitus; Humans; Tinnitus/psychology; Bayes Theorem; Artificial Intelligence; Auditory Perception; Auditory Pathways
ABSTRACT
Tinnitus is a sound heard by 15% of the general population in the absence of any external sound. Because external sounds can sometimes mask tinnitus, tinnitus is assumed to affect the perception of external sounds, leading to hypotheses such as "tinnitus filling in the temporal gap" in animal models and "tinnitus inducing hearing difficulty" in human subjects. Here we compared performance in temporal, spectral, intensive, masking, and speech-in-noise perception tasks between 45 human listeners with chronic tinnitus (18 females and 27 males with a range of ages and degrees of hearing loss) and 27 young, normal-hearing listeners without tinnitus (11 females and 16 males). After controlling for age, hearing loss, and stimulus variables, we discovered that, contrary to the widely held assumption, tinnitus does not interfere with the perception of external sounds in 32 of the 36 measures. We interpret the present result to reflect a bottom-up pathway for the external sound and a separate top-down pathway for tinnitus. We propose that these two perceptual pathways can be independently modulated by attention, which leads to the asymmetrical interaction between external and internal sounds, and several other puzzling tinnitus phenomena such as the discrepancy between rated and matched tinnitus loudness. The present results suggest not only a need for new theories involving attention and central noise in animal tinnitus models but also a shift in focus from treating tinnitus to managing its comorbid conditions when addressing complaints about hearing difficulty in individuals with tinnitus. SIGNIFICANCE STATEMENT: Tinnitus, or ringing in the ears, is a neurologic disorder that affects 15% of the general population. Here we discovered an asymmetrical relationship between tinnitus and external sounds: although external sounds have been widely used to cover up tinnitus, tinnitus does not impair, and sometimes even improves, the perception of external sounds. This counterintuitive discovery contradicts the general belief held by scientists, clinicians, and even individuals with tinnitus themselves, who often report hearing difficulty, especially in noise. We attribute the counterintuitive discovery to two independent pathways: the bottom-up perception of external sounds and the top-down perception of tinnitus. Clinically, the present work suggests a shift in focus from treating tinnitus itself to treating its comorbid conditions and secondary effects.
Subjects
Auditory Perception; Speech Perception; Tinnitus/psychology; Adult; Attention; Auditory Pathways/physiopathology; Chronic Disease; Discrimination, Psychological; Female; Hearing Loss/psychology; Humans; Male; Middle Aged; Noise; Tinnitus/physiopathology; Young Adult
ABSTRACT
Wenhui Mao and coauthors discuss possible implications of the COVID-19 pandemic for health aspirations in low- and middle-income countries.
Subjects
COVID-19 Vaccines/administration & dosage; COVID-19/mortality; COVID-19/prevention & control; Global Health/trends; Universal Health Insurance/trends; Health Services Accessibility/trends; Humans; Mortality/trends
ABSTRACT
OBJECTIVES: Electric stimulation is used to treat a number of neurologic disorders such as epilepsy and depression. However, delivering the required current to far-field neural targets is often ineffective because of current spread through low-impedance pathways. Here, the specific aims are to develop an empirical measure for current passing through the human head and to optimize stimulation strategies for targeting deeper structures, including the auditory nerve, by utilizing the cochlear implant (CI). MATERIALS AND METHODS: Outward input/output (I/O) functions were obtained by CI stimulation and recording scalp potentials in five CI subjects. Conversely, inward I/O functions were obtained by noninvasive transcranial electric stimulation (tES) and recording intracochlear potentials using the onboard recording capability of the CI. RESULTS: I/O measures indicate substantial current spread, with a maximum of 2.2% gain recorded at the inner ear target during tES (mastoid-to-mastoid electrode configuration). Similarly, CI stimulation produced a maximum of 1.1% gain at the scalp electrode nearest the CI return electrode. Gain varied with electrode montage according to a point source model that accounted for distances between the stimulating and recording electrodes. Within the same electrode montages, current gain patterns varied across subjects suggesting the importance of tissue properties, geometry, and electrode positioning. CONCLUSION: These results provide a novel objective measure of electric stimulation in the human head, which can help to optimize stimulation parameters that improve neural excitation of deep structures by reducing the influence of current spread. CONFLICT OF INTEREST: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
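The point-source model mentioned in the results is not fully specified in the abstract; a common idealization treats the head as a homogeneous, isotropic volume conductor in which the potential from a current point source falls off as I/(4·pi·sigma·r). The Python sketch below uses assumed conductivity and electrode separations purely to illustrate how recorded potential, and hence relative gain, scales with stimulating-recording distance:

```python
import numpy as np

def point_source_potential(current_amp, sigma, r):
    """Potential (V) at distance r (m) from a point current source (A)
    in a homogeneous, isotropic medium of conductivity sigma (S/m)."""
    return current_amp / (4 * np.pi * sigma * r)

# Hypothetical values for illustration only (not measured parameters).
sigma = 0.33        # S/m, an often-assumed bulk head-tissue conductivity
current = 100e-6    # 100 uA injected
for r_cm in (2.0, 5.0, 10.0):    # assumed stimulating-recording separations
    v = point_source_potential(current, sigma, r_cm / 100.0)
    print(f"r = {r_cm:4.1f} cm -> potential ~ {v * 1e3:.2f} mV")
```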
Subjects
Cochlear Implants; Electric Stimulation; Electrodes, Implanted; Head; Humans
ABSTRACT
OBJECTIVES: Electro-acoustic stimulation (EAS) enhances speech and music perception in cochlear-implant (CI) users who have residual low-frequency acoustic hearing. For CI users who do not have low-frequency acoustic hearing, tactile stimulation may be used in a similar fashion to residual low-frequency acoustic hearing to enhance CI performance. Previous studies showed that electro-tactile stimulation (ETS) enhanced speech recognition in noise and tonal language perception for CI listeners. Here, we examined the effect of ETS on melody recognition in both musician and nonmusician CI users. DESIGN: Nine musician and eight nonmusician CI users were tested in a melody recognition task with or without rhythmic cues in three testing conditions: CI only (E), tactile only (T), and combined CI and tactile stimulation (ETS). RESULTS: Overall, the combined electrical and tactile stimulation enhanced melody recognition performance in CI users by 9 percentage points. Two additional findings were observed. First, musician CI users outperformed nonmusician CI users in melody recognition, but the size of the enhancement effect was similar between the two groups. Second, the ETS enhancement was significantly higher with nonrhythmic melodies than with rhythmic melodies in both groups. CONCLUSIONS: These findings suggest that, independent of musical experience, the size of the ETS enhancement depends on integration efficiency between tactile and auditory stimulation, and that the mechanism of the ETS enhancement is improved electric pitch perception. The present study supports the hypothesis that tactile stimulation can be used to improve pitch perception in CI users.
Subjects
Cochlear Implantation; Cochlear Implants; Music; Speech Perception; Acoustic Stimulation; Humans; Pitch Perception
ABSTRACT
To examine difficulties experienced by cochlear implant (CI) users when perceiving non-native speech, intelligibility of non-native speech was compared in conditions with single and multiple alternating talkers. Compared with listeners with normal hearing, CI users showed no rapid talker-dependent adaptation, and their performance was approximately 40% lower following increased exposure in both talker conditions. Results suggest that lower performance for CI users may stem from combined effects of limited spectral resolution, which diminishes perceptible differences across accents, and limited access to talker-specific acoustic features of speech, which reduces the ability to adapt to non-native speech in a talker-dependent manner.
Subjects
Cochlear Implantation; Cochlear Implants; Speech Perception; Cognition; Speech
ABSTRACT
OBJECTIVE: The present study aimed to measure bimodal benefits and probe their underlying mechanisms in Mandarin-speaking cochlear implant (CI) subjects who had contralateral residual acoustic hearing. DESIGN: The subjects recognised words or phonemes from the Mandarin Lexical Neighborhood Test in noise at a 10-dB signal-to-noise ratio (SNR) with acoustic stimulation, electric stimulation or the combined bimodal stimulation. STUDY SAMPLE: Thirteen Mandarin-speaking subjects wore a CI in one ear and had residual acoustic hearing in the contralateral ear. Six of the subjects (5.2-13.0 years) had pre-lingual onset of severe hearing loss, and seven of them (8.6-45.8 years) had post-lingual onset of severe hearing loss. RESULTS: Both groups of subjects produced a significant bimodal benefit in word recognition in noise. Consonants and tones accounted for the bimodal benefit. The bimodal integration efficiency was negatively correlated with the duration of deafness in the implanted ear for vowel recognition but positively correlated with CI or bimodal experience for consonant recognition. CONCLUSIONS: The present results support preservation of residual acoustic hearing, early cochlear implantation and continuous use of bimodal hearing for subjects who have significant residual hearing in the non-implanted ear.
Subjects
Cochlear Implantation/instrumentation; Cochlear Implants; Hearing Loss/rehabilitation; Hearing; Language; Persons With Hearing Impairments/rehabilitation; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Audiometry, Speech; Child; Child, Preschool; China; Electric Stimulation; Hearing Loss/diagnosis; Hearing Loss/physiopathology; Hearing Loss/psychology; Humans; Middle Aged; Persons With Hearing Impairments/psychology; Phonetics; Recognition, Psychology; Speech Acoustics; Speech Intelligibility
ABSTRACT
OBJECTIVE: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time and to compare these objective electrophysiological measures to subjective behavioral thresholds in young normal-hearing and older subjects. PARTICIPANTS: Three groups of subjects included a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s). EFRs were analyzed as a function of the AM depth. In condition 2, auditory steady-state responses were recorded to fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A 3-AFC (alternative forced-choice) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold. RESULTS: Across all ages, the fixed AM depth auditory steady-state response and swept AM EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly (not significantly) higher behavioral AM detection thresholds than younger subjects. AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups. The O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range. In the young normal-hearing group, the EFR phase did not differ with AM depth, whereas in the older group, EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (or phase slope) was significantly correlated with the pure-tone threshold at 4 kHz. CONCLUSIONS: EFRs can be recorded using either the swept modulation depth or the discrete AM depth techniques. Sweep recordings may provide additional valuable information at suprathreshold intensities including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects, suggesting that aging affects the ability of the auditory system to encode subtle differences in the depth of AM. The phase-slope differences are likely related to differences in low- and high-frequency contributions to EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present individual subjects who did not suffer from apparent temporal processing deficits.
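For concreteness, the swept-depth stimulus of condition 1 can be synthesized as below, using the parameters stated in the abstract (41 Hz modulation, depth ramped from 2% to 100% at 5%/s); the sampling rate, white-noise carrier details, and normalization are assumptions:

```python
import numpy as np

fs = 32000                                         # assumed sampling rate (Hz)
fm = 41.0                                          # modulation rate (Hz)
depth_start, depth_end, ramp = 0.02, 1.00, 0.05    # 2% -> 100% at 5%/s
dur = (depth_end - depth_start) / ramp             # ~19.6 s sweep

t = np.arange(int(dur * fs)) / fs
depth = depth_start + ramp * t                     # linearly swept AM depth
carrier = np.random.randn(t.size)                  # white-noise carrier
stimulus = (1.0 + depth * np.sin(2 * np.pi * fm * t)) * carrier
stimulus /= np.max(np.abs(stimulus))               # normalize before playback
```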
Subjects
Aging/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Hearing/physiology; Adolescent; Adult; Aged; Aged, 80 and over; Audiometry, Pure-Tone; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Middle Aged; Young Adult
ABSTRACT
Long-term loudness perception of a sound has been presumed to depend on the spatial distribution of activated auditory nerve fibers as well as their temporal firing pattern. The relative contributions of those two factors were investigated by measuring loudness adaptation to sinusoidally amplitude-modulated 12-kHz tones. The tones had a total duration of 180 s and were either unmodulated or 100%-modulated at one of three frequencies (4, 20, or 100 Hz), and additionally varied in modulation depth from 0% to 100% at the 4-Hz frequency only. Every 30 s, normal-hearing subjects estimated the loudness of one of the stimuli played at 15 dB above threshold in random order. Without any amplitude modulation, the loudness of the unmodulated tone after 180 s was only 20% of the loudness at the onset of the stimulus. Amplitude modulation systematically reduced the amount of loudness adaptation, with the 100%-modulated stimuli, regardless of modulation frequency, maintaining on average 55%-80% of the loudness at onset after 180 s. Because the present low-frequency amplitude modulation produced minimal changes in long-term spectral cues affecting the spatial distribution of excitation produced by a 12-kHz pure tone, the present result indicates that neural synchronization is critical to maintaining loudness perception over time.
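For reference, the sinusoidally amplitude-modulated tone described above has the standard form below, with carrier frequency f_c = 12 kHz, modulation frequency f_m of 4, 20, or 100 Hz, and modulation depth m between 0 and 1 (overall level scaling omitted):

```latex
s(t) = \left[\, 1 + m \sin(2\pi f_m t) \,\right] \sin(2\pi f_c t),
\qquad f_c = 12\ \mathrm{kHz}, \quad f_m \in \{4, 20, 100\}\ \mathrm{Hz}, \quad 0 \le m \le 1 .
```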
Subjects
Adaptation, Physiological; Auditory Pathways/physiology; Loudness Perception/physiology; Acoustic Stimulation; Adult; Auditory Threshold/physiology; Female; Habituation, Psychophysiologic; Humans; Male; Nerve Fibers/physiology; Pitch Perception/physiology; Psychoacoustics; Sound
ABSTRACT
Abnormal auditory adaptation is a standard clinical tool for diagnosing auditory nerve disorders due to acoustic neuromas. In the present study we investigated auditory adaptation in auditory neuropathy owing to disordered function of inner hair cell ribbon synapses (temperature-sensitive auditory neuropathy) or auditory nerve fibres. Subjects were tested when afebrile for (i) psychophysical loudness adaptation to comfortably loud sustained tones; and (ii) physiological adaptation of auditory brainstem responses to clicks as a function of their position in brief 20-click stimulus trains (#1, 2, 3, …, 20). Results were compared with those of normal-hearing listeners and listeners with other forms of hearing impairment. Subjects with ribbon synapse disorder had abnormally increased magnitude of loudness adaptation to both low (250 Hz) and high (8000 Hz) frequency tones. Subjects with auditory nerve disorders had normal loudness adaptation to low frequency tones; all but one had abnormal adaptation to high frequency tones. Adaptation was both more rapid and of greater magnitude in ribbon synapse than in auditory nerve disorders. Auditory brainstem response measures of adaptation in ribbon synapse disorder showed Wave V to the first click in the train to be abnormal both in latency and amplitude, and these abnormalities increased in magnitude or Wave V was absent to subsequent clicks. In contrast, auditory brainstem responses in four of the five subjects with neural disorders were absent to every click in the train. The fifth subject had normal latency and abnormally reduced amplitude of Wave V to the first click and abnormal or absent responses to subsequent clicks. Thus, dysfunction of both synaptic transmission and auditory neural function can be associated with abnormal loudness adaptation, and the magnitude of the adaptation is significantly greater with ribbon synapse than with neural disorders.
Subjects
Acoustic Stimulation/methods; Adaptation, Physiological/physiology; Cochlear Nerve/pathology; Hair Cells, Auditory, Inner/physiology; Hyperacusis/physiopathology; Adolescent; Adult; Aged; Auditory Perception/physiology; Child; Cochlear Nerve/physiology; Female; Hearing Disorders/diagnosis; Hearing Disorders/physiopathology; Humans; Hyperacusis/diagnosis; Loudness Perception/physiology; Male; Middle Aged; Young Adult
ABSTRACT
Introduction: Objectively predicting speech intelligibility is important in both telecommunication and human-machine interaction systems. The classic method relies on the signal-to-noise ratio (SNR) to predict speech intelligibility. One exception is clear speech, in which a talker intentionally articulates as if speaking to someone who has hearing loss or is from a different language background. As a result, at the same SNR, clear speech produces higher intelligibility than conversational speech. Despite numerous efforts, no objective metric can successfully predict the clear speech benefit at the sentence level. Methods: We proposed a Syllable-Rate-Adjusted-Modulation (SRAM) index to predict the intelligibility of clear and conversational speech. The SRAM used speech segments as short as 1 s and estimated their modulation power above the syllable rate. We compared SRAM with three reference metrics: envelope-regression-based speech transmission index (ER-STI), hearing-aid speech perception index version 2 (HASPI-v2) and short-time objective intelligibility (STOI), and five automatic speech recognition systems: Amazon Transcribe, Microsoft Azure Speech-To-Text, Google Speech-To-Text, wav2vec2 and Whisper. Results: SRAM outperformed the three reference metrics (ER-STI, HASPI-v2 and STOI) and the five automatic speech recognition systems. Additionally, we demonstrated the important role of syllable rate in predicting speech intelligibility by comparing SRAM with the total modulation power (TMP) that was not adjusted by the syllable rate. Discussion: SRAM can potentially help understand the characteristics of clear speech, screen speech materials with high intelligibility, and convert conversational speech into clear speech.
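The exact SRAM computation is not given in this summary; the Python sketch below only illustrates the underlying idea of measuring envelope-modulation power above a syllable-rate cutoff (the Hilbert-envelope and FFT choices, sampling rate, and 4 Hz cutoff are assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.signal import hilbert

def modulation_power_above(x, fs, cutoff_hz):
    """Fraction of envelope-modulation power above cutoff_hz (illustrative)."""
    env = np.abs(hilbert(x))                  # broadband temporal envelope
    env = env - env.mean()                    # remove DC before the spectrum
    power = np.abs(np.fft.rfft(env)) ** 2     # modulation power spectrum
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    return power[freqs > cutoff_hz].sum() / power.sum()

# Example with 1 s of noise standing in for speech; a real syllable rate
# would be estimated from the utterance itself before being used as cutoff.
fs = 16000
x = np.random.randn(fs)
print(modulation_power_above(x, fs, cutoff_hz=4.0))
```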
ABSTRACT
Attention plays an important role not only in the awareness and perception of tinnitus but also in its interactions with external sounds. Recent evidence suggests that attention is heightened in the tinnitus brain, likely as a result of relatively local cortical changes specific to deafferentation sites or global changes that help maintain normal cognitive capabilities in individuals with hearing loss. However, most electrophysiological studies have used passive listening paradigms to probe the tinnitus brain and produced mixed results in terms of finding a distinctive biomarker for tinnitus. Here, we designed a selective attention task, in which human adults attended to one of two interleaved tonal (500 Hz and 5 kHz) sequences. In total, 16 tinnitus (5 females) and 13 age- and hearing-matched control (8 females) subjects participated in the study, with the tinnitus subjects matching the tinnitus pitch to 5.4 kHz on average (range = 1.9-10.8 kHz). Cortical responses were recorded in both passive and attentive listening conditions, producing no differences in P1, N1, and P2 between the tinnitus and control subjects under any conditions. However, a different pattern of results emerged when the difference was examined between the attended and unattended responses. This attention-modulated cortical response was significantly greater in the tinnitus than in the control subjects: 3.9 times greater for N1 at 5 kHz (95% CI: 2.9 to 5.0, p = 0.007, ηp² = 0.24) and 3.0 times greater for P2 at 500 Hz (95% CI: 1.9 to 4.5, p = 0.026, ηp² = 0.17). We interpreted the greater N1 modulation as local neural changes specific to the tinnitus frequency and the greater P2 modulation as global changes related to hearing loss. These two cortical measures were used to differentiate between the tinnitus and control subjects, producing 83.3% sensitivity and 76.9% specificity (AUC = 0.81, p = 0.006). These results suggest that the tinnitus brain is more plastic than that of the matched non-tinnitus controls and that the attention-modulated cortical response can be developed as a clinically meaningful biomarker for tinnitus.
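As an illustration of how two attention-modulated cortical measures could be combined into a single diagnostic score and summarized with an AUC, the Python sketch below uses synthetic placeholder values, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical attention-modulation features per subject:
# [N1 modulation at 5 kHz, P2 modulation at 500 Hz], larger on average
# in the simulated "tinnitus" group than in the simulated "control" group.
tinnitus = rng.normal(loc=[3.9, 3.0], scale=1.0, size=(16, 2))
controls = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(13, 2))
X = np.vstack([tinnitus, controls])
y = np.array([1] * 16 + [0] * 13)

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
print("in-sample AUC:", roc_auc_score(y, scores))   # illustration only
```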
ABSTRACT
Although the telephone band (0.3-3 kHz) provides sufficient information for speech recognition, the contribution of the non-telephone band (<0.3 and >3 kHz) is unclear. To investigate its contribution, speech intelligibility and talker identification were evaluated using consonants, vowels, and sentences. The non-telephone band produced relatively good intelligibility for consonants (76.0%) and sentences (77.4%), but not vowels (11.5%). The non-telephone band supported good talker identification only with sentences (74.5%), but not vowels (45.8%) or consonants (10.8%). Furthermore, the non-telephone band cannot produce satisfactory speech intelligibility in noise at the sentence level, suggesting the importance of full-band access in realistic listening.
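To make the band definitions concrete, a telephone-band signal and its non-telephone complement can be approximated with simple Butterworth filters; the filter order and sampling rate below are arbitrary choices rather than those used in the study:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                                    # assumed sampling rate (Hz)
band = (300.0, 3000.0)                        # telephone band (Hz)

sos_tel = butter(4, band, btype="bandpass", fs=fs, output="sos")
sos_non = butter(4, band, btype="bandstop", fs=fs, output="sos")

x = np.random.randn(fs)                       # 1 s of noise standing in for speech
telephone_band = sosfiltfilt(sos_tel, x)      # keeps 0.3-3 kHz
non_telephone_band = sosfiltfilt(sos_non, x)  # keeps <0.3 and >3 kHz
```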
Subjects
Speech Intelligibility; Speech Perception; Humans; Speech Perception/physiology; Male; Female; Telephone; Adult; Young Adult; Phonetics; Noise
ABSTRACT
OBJECTIVE: To develop a novel and highly efficient framework that decodes Inferior Colliculus (IC) neural activities for phoneme recognition. METHODS: We propose using Hyperdimensional Computing (HDC) to support an efficient phoneme recognition algorithm, in contrast to widely applied Deep Neural Networks (DNN). The high-dimensional representation and operations in HDC are rooted in human brain functionalities and naturally parallelizable, showing the potential for efficient neural activity analysis. Our proposed method includes a spatially and temporally aware HDC encoder that effectively captures global and local patterns. As part of our framework, we deploy the lightweight HDC-based algorithm on a highly customizable and flexible hardware platform, i.e., Field Programmable Gate Arrays (FPGA), for optimal algorithm speedup. To evaluate our method, we recorded IC neural activities in gerbils while playing the sounds of different phonemes. RESULTS: We compare our proposed method with multiple baseline machine learning algorithms in recognition quality and learning efficiency, across different hardware platforms. The results show that our method generally achieves better classification quality than the best-performing baseline. Compared to the Deep Residual Neural Network (i.e., ResNet), our method shows speedups of up to 74×, 67×, and 210× on CPU, GPU, and FPGA, respectively. We achieve up to 15% (10%) higher accuracy in consonant (vowel) classification than ResNet. CONCLUSION: By leveraging brain-inspired HDC for IC neural activity encoding and phoneme classification, we achieve orders of magnitude runtime speedup while improving accuracy in various challenging task settings. SIGNIFICANCE: Decoding IC neural activities is an important step toward a better understanding of the human auditory system. However, these responses from the central auditory system are noisy and contain high variance, demanding large-scale datasets and iterative model fine-tuning. The proposed HDC-based framework is more scalable and viable for future real-world deployment thanks to its fast training and overall better quality.
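The authors' spatially and temporally aware HDC encoder is not reproduced here; the Python sketch below only illustrates the basic hyperdimensional-computing ingredients such methods build on (random bipolar hypervectors, binding by element-wise multiplication, bundling by summation, and nearest-prototype classification by cosine similarity), applied to toy data:

```python
import numpy as np

D = 10000                                    # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    return rng.choice([-1, 1], size=D)       # random bipolar hypervector

def encode(sample, channel_hvs):
    """Bind each channel's sign to its channel hypervector, then bundle."""
    bound = [np.sign(v) * hv for v, hv in zip(sample, channel_hvs)]
    return np.sign(np.sum(bound, axis=0))    # bundled (majority-sign) vector

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy data: 8 "recording channels" and two phoneme classes.
channel_hvs = [random_hv() for _ in range(8)]
train = {"a": rng.normal(1.0, 1.0, (20, 8)), "i": rng.normal(-1.0, 1.0, (20, 8))}
prototypes = {c: np.sign(sum(encode(s, channel_hvs) for s in samples))
              for c, samples in train.items()}

test_hv = encode(rng.normal(1.0, 1.0, 8), channel_hvs)
print("predicted class:",
      max(prototypes, key=lambda c: cosine(test_hv, prototypes[c])))
```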
Subjects
Algorithms; Gerbillinae; Inferior Colliculi; Inferior Colliculi/physiology; Animals; Machine Learning; Signal Processing, Computer-Assisted; Neural Networks, Computer; Deep Learning; Pattern Recognition, Automated/methods
ABSTRACT
Understanding speech in noise is difficult for most cochlear implant (CI) users. Speech-in-noise segregation cues are well understood for acoustic hearing but not for electric hearing. This study investigated the effects of stimulation rate and onset delay on synthetic vowel-in-noise recognition in CI subjects. In experiment I, synthetic vowels were presented at 50, 145, or 795 pulse/s and noise at the same three rates, yielding nine combinations. Recognition improved significantly if the noise had a lower rate than the vowel, suggesting that listeners can use temporal gaps in the noise to detect a synthetic vowel. This hypothesis is supported by accurate prediction of synthetic vowel recognition using a temporal integration window model. A similar trend was observed in normal-hearing subjects tested with lower rates. Experiment II found that, for CI subjects, a vowel onset delay improved performance if the noise had a lower or higher rate than the synthetic vowel. These results show that differing rates or onset times can improve synthetic vowel-in-noise recognition, indicating a need to develop speech processing strategies that encode or emphasize these cues.
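The temporal integration window model is not specified in this summary; the Python sketch below is only a generic illustration of the idea that integrating pulse activity over a short sliding window reveals moments where low-rate noise leaves gaps relative to higher-rate vowel pulses (all parameters are hypothetical):

```python
import numpy as np

fs = 10000                                    # time resolution for pulse trains (Hz)
dur = 0.5                                     # seconds
win = int(0.01 * fs)                          # assumed 10 ms integration window

def pulse_train(rate_pps):
    x = np.zeros(int(dur * fs))
    x[np.arange(int(dur * rate_pps)) * fs // rate_pps] = 1.0
    return x

vowel = pulse_train(795)                      # high-rate "vowel" pulses
noise = pulse_train(50)                       # low-rate "noise" pulses
kernel = np.ones(win)
v_int = np.convolve(vowel, kernel, mode="same")   # windowed pulse counts
n_int = np.convolve(noise, kernel, mode="same")

# Fraction of windows in which vowel activity clearly exceeds noise activity:
print(np.mean(v_int > 2 * n_int))
```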
Subjects
Cochlear Implantation/instrumentation; Cochlear Implants; Correction of Hearing Impairment/psychology; Cues; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/rehabilitation; Recognition, Psychology; Speech Perception; Acoustic Stimulation; Adult; Aged; Analysis of Variance; Audiometry, Speech; Case-Control Studies; Comprehension; Electric Stimulation; Female; Humans; Loudness Perception; Male; Middle Aged; Pattern Recognition, Physiological; Persons With Hearing Impairments/psychology; Prosthesis Design; Psychoacoustics; Speech Acoustics; Speech Intelligibility; Time Factors
ABSTRACT
Across bilateral cochlear implants, contralateral threshold shift has been investigated as a function of the separation between the masking and probe electrodes. For contralateral electric masking, maximum threshold elevations occurred when the masker and probe electrodes were approximately place-matched across ears. The amount of masking diminished with increasing masker-probe electrode separation. Place-dependent masking occurred in both sequentially implanted ears and was not affected by the masker intensity or the time delay from the masker onset. When compared to previous contralateral masking results in normal hearing, the similarities between place-dependent central masking patterns suggest comparable mechanisms of overlapping excitation in the central auditory nervous system.
Subjects
Auditory Perception; Cochlear Implantation/instrumentation; Cochlear Implants; Correction of Hearing Impairment/psychology; Perceptual Masking; Persons With Hearing Impairments/rehabilitation; Adolescent; Adult; Aged; Analysis of Variance; Audiometry, Pure-Tone; Auditory Threshold; Child; Electric Stimulation; Humans; Loudness Perception; Middle Aged; Persons With Hearing Impairments/psychology; Psychoacoustics; Time Factors
ABSTRACT
The addition of low-passed (LP) speech or even a tone following the fundamental frequency (F0) of speech has been shown to benefit speech recognition for cochlear implant (CI) users with residual acoustic hearing. The mechanisms underlying this benefit are still unclear. In this study, eight bimodal subjects (CI users with acoustic hearing in the non-implanted ear) and eight simulated bimodal subjects (using vocoded and LP speech) were tested on vowel and consonant recognition to determine the relative contributions of acoustic and phonetic cues, including F0, to the bimodal benefit. Several listening conditions were tested (CI/Vocoder, LP, T(F0-env), CI/Vocoder + LP, CI/Vocoder + T(F0-env)). Compared with CI/Vocoder performance, LP significantly enhanced both consonant and vowel perception, whereas a tone following the F0 contour of target speech and modulated with an amplitude envelope of the maximum frequency of the F0 contour (T(F0-env)) enhanced only consonant perception. Information transfer analysis revealed a dual mechanism in the bimodal benefit: The tone representing F0 provided voicing and manner information, whereas LP provided additional manner, place, and vowel formant information. The data in actual bimodal subjects also showed that the degree of the bimodal benefit depended on the cutoff and slope of residual acoustic hearing.
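Information transfer analysis in this literature typically means Miller-and-Nicely-style relative transmitted information computed from feature confusion matrices; assuming that formulation, a minimal Python sketch is:

```python
import numpy as np

def relative_information_transfer(confusions):
    """Transmitted information T(x;y) / H(x) from a stimulus-by-response
    confusion-count matrix (rows: presented categories, cols: responses)."""
    p = confusions / confusions.sum()
    px = p.sum(axis=1, keepdims=True)         # stimulus probabilities
    py = p.sum(axis=0, keepdims=True)         # response probabilities
    nz = p > 0
    t = np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))   # mutual information
    h = -np.sum(px[px > 0] * np.log2(px[px > 0]))        # stimulus entropy
    return t / h

# Hypothetical 2x2 voicing confusion matrix (voiced vs. voiceless):
print(relative_information_transfer(np.array([[45.0, 5.0], [10.0, 40.0]])))
```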
Subjects
Cochlear Implants; Hearing/physiology; Phonetics; Speech Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Aged; Aged, 80 and over; Auditory Threshold/physiology; Case-Control Studies; Computer Simulation; Female; Hearing Loss, Sensorineural/physiopathology; Humans; Male; Middle Aged; Noise; Perceptual Masking/physiology; Recognition, Psychology/physiology; Young Adult
ABSTRACT
The cochlear implant has been the most successful neural prosthesis, with one million users globally. Researchers used the source-filter model and speech vocoder to design the modern multi-channel implants, allowing implantees to achieve 70%-80% correct sentence recognition in quiet, on average. Researchers also used the cochlear implant to help understand basic mechanisms underlying loudness, pitch, and cortical plasticity. While front-end processing advances have improved speech recognition in noise, speech recognition in quiet with a unilateral implant has plateaued since the early 1990s. This lack of progress calls for re-designing the cochlear stimulating interface and for collaboration with the general neurotechnology community.
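For readers new to the field, the speech vocoder mentioned above is conventionally built as a bank of bandpass filters whose envelopes modulate band-limited carriers; the noise-excited Python sketch below uses arbitrary channel counts and corner frequencies rather than any specific device's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(x, fs, n_channels=8, lo=100.0, hi=7000.0, env_cut=50.0):
    """Minimal noise-excited channel vocoder (illustrative parameters only)."""
    edges = np.geomspace(lo, hi, n_channels + 1)        # log-spaced band edges
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, (f1, f2), btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)
        env = sosfiltfilt(env_sos, np.abs(band))        # rectify + low-pass envelope
        carrier = sosfiltfilt(band_sos, np.random.randn(x.size))
        out += np.clip(env, 0.0, None) * carrier        # envelope-modulated band noise
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
speech = np.random.randn(fs)     # stand-in signal; replace with a real recording
vocoded = noise_vocoder(speech, fs)
```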
Subjects
Cochlear Implantation; Cochlear Implants; Speech Perception; Noise; Recognition, Psychology
ABSTRACT
PURPOSE: With the rapid development of new technologies and resources, many avenues exist to adapt and grow as a profession. Embracing change can lead to growth, evolution, and new opportunities. Audiologists have the potential to harness many of these technological advancements to improve patient health care. Adoption and incorporation of these new technologies will likely benefit educational experiences, research methods, clinical practice, and clinical outcomes. METHOD: This commentary highlights some historical perspectives and accepted practices while illustrating opportunities to embrace new ideas and technologies. We also provide examples of how such adoption may yield positive outcomes. Specifically, we address embracing technology in audiology education, how artificial intelligence may influence patient performance in realistic listening scenarios, the convergence between hearing aids and consumer electronics, and the emergence of audiology telehealth services and their inclusion in clinical practice. Models of change are also discussed and related to audiology. CONCLUSION: This commentary aims to be a call to action for the entire profession of audiology to consider conscientiously the adoption of useful, evidence-based technological advancements in education, research, and clinical practice.
Subjects
Audiology; Hearing Aids; Artificial Intelligence; Audiologists; Audiology/methods; Educational Status; Humans
ABSTRACT
Animal studies have discovered that noise, even at levels that produce no permanent threshold shift, may cause cochlear damage and selective nerve degeneration. A hallmark of such damage, or synaptopathy, is a recovered threshold but a reduced suprathreshold amplitude of the auditory brainstem response (ABR) wave I. The objective of the present study was to evaluate whether the ABR wave I amplitude or slope can be used to diagnose tinnitus in humans. A total of 43 human subjects, consisting of 21 with tinnitus and 22 without tinnitus, participated in the study. The subjects were on average 44 ± 24 (standard deviation) years old and 16 were female; a subgroup of 19 were young adults with normal audiograms from 125 to 8000 Hz. The ABR was measured using ear-canal recording tiptrodes for clicks and for 1000, 4000, and 8000 Hz tone bursts at 30, 50, and 70 dB nHL. Compared with control subjects, tinnitus subjects did not show reduced ABR wave I amplitude or slope, either in the entire group of 21 tinnitus subjects or in the subset with normal audiograms. Despite the small sample size and diverse tinnitus population, the present result suggests that low signal-to-noise ratios in non-invasive measurement of the ABR limit its clinical utility in diagnosing tinnitus in humans.