Results 1 - 9 of 9
1.
Ear Hear; 45(2): 451-464, 2024.
Article in English | MEDLINE | ID: mdl-38062570

ABSTRACT

OBJECTIVES: Motivated by the growing need for hearing screening in China, the present study had two objectives: first, to develop and validate a new test, the Chinese Zodiac-in-noise (ZIN) test, for large-scale hearing screening in China; second, to conduct a large-scale remote hearing screening in China using the newly developed ZIN test. DESIGN: The ZIN test was developed following a procedure similar to that of the digits-in-noise test, but it emphasizes consonant recognition by using the 12 zodiac animals of traditional Chinese culture as speech materials. It measures the speech reception threshold (SRT) with triplets of Chinese zodiac animals in speech-shaped noise using an adaptive procedure. RESULTS: Normative data were obtained from a group of 140 normal-hearing listeners, and the test was validated against pure-tone audiometry in 116 listeners with various hearing abilities. The ZIN test has a reference SRT of -11.0 ± 1.6 dB in normal-hearing listeners with a test-retest variability of 1.7 dB and can be completed in 3 minutes. The ZIN SRT is highly correlated with the better-ear pure-tone threshold (r = 0.82). With a cutoff value of -7.7 dB, the ZIN test has a sensitivity of 0.85 and a specificity of 0.94 for detecting a hearing loss of 25 dB HL or more in the better ear. A large-scale remote hearing screening involving 30,552 participants was then performed using the ZIN test. It found a hearing loss proportion of 21.0% across the study sample, rising to 57.1% among participants aged over 60 years. Age and gender were also associated with hearing loss: older individuals and males were more likely to have hearing loss. CONCLUSIONS: The Chinese ZIN test is a valid and efficient solution for large-scale hearing screening in China. Its remote application may improve access to hearing screening and enhance public awareness of hearing health.
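The adaptive triplet tracking described above can be sketched in a few lines; the one-down/one-up rule, 2 dB step, trial count, and averaging window here are illustrative assumptions rather than the exact ZIN protocol, and `recognize_triplet` is a hypothetical callback standing in for the listener's response.

```python
def zin_srt_track(recognize_triplet, n_trials=24, start_snr=0.0, step_db=2.0):
    """Adaptive SNR track in the style of digits-in-noise tests.

    recognize_triplet(snr_db) -> True if the listener repeats all three
    zodiac animals of the triplet correctly at that SNR.
    Returns the SRT estimate: the mean SNR of the later trials, after
    the track has descended from the easy starting level.
    """
    snr, presented = start_snr, []
    for _ in range(n_trials):
        presented.append(snr)
        # One-down/one-up rule converges on the 50%-correct point.
        snr += -step_db if recognize_triplet(snr) else step_db
    srt_trials = presented[6:]  # discard the initial approach phase
    return sum(srt_trials) / len(srt_trials)
```

For example, an idealized listener who always succeeds above -11 dB SNR yields an SRT estimate of -11 dB, matching the reference value reported for normal-hearing listeners.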


Subject(s)
Deafness, Hearing Loss, Speech Perception, Aged, Male, Humans, Middle Aged, Speech, Noise, Hearing Loss/diagnosis, Audiometry, Pure-Tone/methods, Auditory Threshold, Hearing, Speech Reception Threshold Test/methods
2.
Article in English | MEDLINE | ID: mdl-37159306

ABSTRACT

Perception with electric neuroprostheses can, in principle, be simulated using properly designed physical stimuli. Here, we examined a new acoustic vocoder model for electric hearing with cochlear implants (CIs) and hypothesized that comparable speech encoding can lead to comparable perceptual patterns for CI and normal-hearing (NH) listeners. Speech signals were encoded using FFT-based signal processing stages, including band-pass filtering, temporal envelope extraction, maxima selection, and amplitude compression and quantization. These stages were implemented in the same manner by an Advanced Combination Encoder (ACE) strategy in CI processors and by Gaussian-Enveloped Tone (GET) or Noise (GEN) vocoders for NH listeners. Adaptive speech reception thresholds (SRTs) in noise were measured using four Mandarin sentence corpora. Initial consonant (11 monosyllables) and final vowel (20 monosyllables) recognition were also measured. Naïve NH listeners were tested using speech vocoded with the proposed GET/GEN vocoders as well as with conventional vocoders (controls). Experienced CI listeners were tested using their everyday processors. Results showed that: 1) there was a significant training effect on GET-vocoded speech perception; and 2) the GEN-vocoded scores (SRTs with four corpora and consonant and vowel recognition scores), as well as the phoneme-level confusion pattern, matched the CI scores better than the controls did. The findings suggest that the same signal encoding implementations can lead to similar perceptual patterns across multiple perception tasks simultaneously. This study highlights the importance of faithfully replicating all signal processing stages when modeling perceptual patterns in sensory neuroprostheses. This approach has the potential to enhance our understanding of CI perception and accelerate the engineering of prosthetic interventions. The GET/GEN MATLAB program is freely available at https://github.com/BetterCI/GETVocoder.
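The Gaussian-enveloped-tone idea can be illustrated with a minimal sketch in which each selected channel envelope sample becomes a short tone burst at the channel's center frequency; the 4 ms burst length, envelope width, and frame layout are assumptions for illustration only, and the authors' actual GET/GEN implementation is the linked MATLAB repository.

```python
import numpy as np

def get_pulse(fc, amp, dur_s, fs=16000):
    """One Gaussian-enveloped tone: an acoustic stand-in for a single
    CI electrical pulse (envelope width chosen for illustration)."""
    n = int(dur_s * fs)
    t = (np.arange(n) - n / 2) / fs          # centered time axis
    sigma = dur_s / 6.0                      # Gaussian envelope std
    return amp * np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2 * np.pi * fc * t)

def get_vocode(env_frames, center_freqs, frame_rate, fs=16000):
    """Overlap-add one tone burst per nonzero channel envelope sample.

    env_frames: rows are time frames, columns are channels (i.e. the
    output of band-pass filtering, envelope extraction, and maxima
    selection in an ACE-like front end).
    """
    hop = int(fs / frame_rate)
    out = np.zeros(hop * len(env_frames) + fs // 100)
    for i, frame in enumerate(env_frames):
        for ch, a in enumerate(frame):
            if a > 0:
                p = get_pulse(center_freqs[ch], a, 0.004, fs)
                out[i * hop:i * hop + len(p)] += p
    return out
```

The Gaussian envelope gives a controlled time-frequency trade-off, which is what distinguishes this style of vocoder from conventional noise- or sine-carrier vocoders.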


Subject(s)
Cochlear Implants, Speech Perception, Humans, Hearing, Acoustics, Speech Intelligibility, Acoustic Stimulation
3.
Front Psychol; 13: 1026116, 2022.
Article in English | MEDLINE | ID: mdl-36324794

ABSTRACT

Although pitch is considered the primary cue for discriminating lexical tones, there are secondary cues, such as loudness contour and duration, that may allow some cochlear implant (CI) tone discrimination even when pitch cues are severely degraded. To isolate pitch cues from other cues, we developed a new disyllabic word stimulus set (Di) whose primary (pitch) and secondary (loudness) cues vary independently. The Di set consists of 270 disyllabic words, each having a distinct meaning depending on the perceived tone. Thus, listeners who hear the primary pitch cue clearly may hear a different meaning from listeners who struggle with the pitch cue and must rely on the secondary loudness contour. A lexical tone recognition experiment compared Di with a monosyllabic set of natural recordings. Seventeen CI users and eight normal-hearing (NH) listeners took part in the experiment. Results showed that CI users had poorer pitch cue encoding, and their tone recognition performance with the Di corpus was significantly influenced by the "missing" or "confusing" secondary cues. Pitch-contour-based tone recognition remains far from satisfactory for CI users compared with NH listeners, even if some CI users appear to integrate multiple cues to achieve high scores. The Di corpus can be used to examine the pitch recognition performance of CI users and the effectiveness of pitch-cue-based Mandarin tone enhancement strategies. The Di corpus is freely available online: https://github.com/BetterCI/DiTone.

4.
Article in English | MEDLINE | ID: mdl-36044501

ABSTRACT

The temporal-limits-encoder (TLE) strategy has been proposed to enhance the representation of temporal fine structure (TFS) in cochlear implants (CIs); TFS is vital for many aspects of sound perception but is typically discarded by most modern CI strategies. TLE works by computing an envelope modulator that lies within the temporal pitch limits of CI electric hearing. This paper examines the TFS information encoded by TLE and evaluates the salience and usefulness of this information for CI users. Two experiments compared pitch perception with TLE against the widely used Advanced Combination Encoder (ACE) strategy. Experiment 1 investigated whether TLE processing improved pitch discrimination compared with ACE. Experiment 2 parametrically examined the effect of changing the lower frequency limit of the TLE modulator on pitch ranking. In both experiments, F0 difference limens were measured with synthetic harmonic complex tones using an adaptive procedure. Signal analysis of the outputs of the TLE and ACE strategies showed that TLE introduces important temporal pitch cues that are not available with ACE. Results showed an improvement in pitch discrimination with TLE when the acoustic input had a lower F0. No significant effect of the lower frequency limit was observed for pitch ranking, though a lower limit tended to provide better outcomes. These results suggest that the envelope modulation introduced by TLE can improve pitch perception for CI listeners.
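A crude sketch of the temporal-limits idea, as read from the description above: the band signal is mixed down so its fluctuations fall below the temporal pitch limit of electric hearing, then low-pass filtered and half-wave rectified to give a nonnegative modulator. The mixing frequency, FFT-based low-pass, and 300 Hz limit are assumptions, not the published algorithm.

```python
import numpy as np

def tle_modulator(band_sig, band_center_hz, fs, pitch_limit_hz=300.0):
    """Derive an envelope modulator that keeps slow TFS variations.

    Mixes the band signal down so its dominant components land inside
    the CI temporal pitch range, then low-pass filters and rectifies.
    """
    n = np.arange(len(band_sig))
    shift_hz = max(band_center_hz - pitch_limit_hz / 2, 0.0)
    mixed = band_sig * np.cos(2 * np.pi * shift_hz * n / fs)   # mix down
    spec = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(band_sig), 1.0 / fs)
    spec[freqs > pitch_limit_hz] = 0.0                         # crude low-pass
    return np.maximum(np.fft.irfft(spec, len(band_sig)), 0.0)  # rectify
```

Unlike a plain envelope detector, this modulator's fluctuation rate follows the band's fine structure, which is what carries the temporal pitch cues discussed above.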


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Acoustic Stimulation, Cochlear Implantation/methods, Cues, Humans, Pitch Perception
5.
Front Med (Lausanne); 8: 740123, 2021.
Article in English | MEDLINE | ID: mdl-34820392

ABSTRACT

The cochlea plays a key role in converting acoustic vibration into the neural stimulation from which the brain perceives sound. A cochlear implant (CI) is an auditory prosthesis that replaces damaged cochlear hair cells to achieve this acoustic-to-neural conversion. However, the CI is a very coarse bionic imitation of the normal cochlea. The highly resolved time-frequency-intensity information transmitted by the normal cochlea, which is vital to high-quality auditory perception such as speech perception in challenging environments, cannot be guaranteed by CIs. Although recipients of state-of-the-art commercial CI devices achieve good speech perception in quiet backgrounds, they usually suffer from poor speech perception in noisy environments. Therefore, noise suppression or speech enhancement (SE) is one of the most important technologies for CIs. In this study, we review recent progress in deep learning (DL), mostly neural-network (NN)-based, SE front ends for CIs and discuss how the hearing properties of CI recipients could be exploited to optimize DL-based SE. In particular, different loss functions for supervising the NN training are introduced, and a set of objective and subjective experiments is presented. Results verify that CI recipients are more sensitive to residual noise than to SE-induced speech distortion, consistent with common knowledge in CI research. Furthermore, speech reception threshold (SRT) tests in noise demonstrate that the intelligibility of the denoised speech can be significantly improved when the NN is trained with a loss function biased toward noise suppression rather than one giving equal weight to noise residue and speech distortion.
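The asymmetric-loss idea in the last sentence can be sketched as a weighted squared error in which residual-noise errors (the estimate keeps too much energy) are penalized more heavily than speech-distortion errors (the estimate removes too much); the linear-domain formulation and the weight value are illustrative assumptions, not the loss from the study.

```python
def biased_se_loss(target, estimate, alpha=0.8):
    """Asymmetric MSE over, e.g., per-frame magnitudes or mask values.

    d > 0 means under-suppression (noise residue), weighted by alpha;
    d < 0 means over-suppression (speech distortion), weighted by
    1 - alpha. alpha > 0.5 biases training toward noise suppression.
    """
    loss = 0.0
    for t, e in zip(target, estimate):
        d = e - t
        w = alpha if d > 0 else (1.0 - alpha)
        loss += w * d * d
    return loss / len(target)
```

With alpha = 0.8, leaving a unit of noise in costs four times as much as distorting a unit of speech, pushing the network toward stronger suppression.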

6.
Front Neurosci; 14: 301, 2020.
Article in English | MEDLINE | ID: mdl-32372902

ABSTRACT

The cochlea "translates" the in-air vibrational acoustic "language" into the spikes of neural "language" that are transmitted to the brain for auditory understanding and perception. During this intracochlear "translation", high resolution in the time-frequency-intensity domains guarantees high-quality neural input to the brain, which is vital for our outstanding hearing abilities. However, cochlear implants (CIs) have coarse artificial coding and interfaces, and CI users face more challenges in common acoustic environments than their normal-hearing (NH) peers. Noise from sound sources a listener has no interest in may be ignored by NH listeners but may distract a CI user. We discuss CI noise-suppression techniques and introduce noise management for a new implant system. The monaural signal-to-noise ratio (SNR) estimation-based noise suppression algorithm "eVoice", incorporated in the processors of the Nurotron® Enduro™, was evaluated in two speech perception experiments. The results show that speech intelligibility in stationary speech-shaped noise can be significantly improved with eVoice; the mean speech reception threshold decrease in the present study was 2.2 dB. Similar results have been observed in other CI devices with single-channel noise reduction techniques. Nurotron already has more than 10,000 users, and eVoice is a starting point for noise management in the new system. Future steps on non-stationary-noise suppression, spatial source separation, bilateral hearing, microphone configuration, and environment specification are warranted. The existing evidence, including our research, suggests that noise-suppression techniques should be applied in CI systems. The artificial hearing of CI listeners requires more advanced signal processing techniques to reduce listening effort and increase intelligibility in noisy settings.
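The abstract describes eVoice only as a monaural SNR-estimation-based suppressor, so as a generic stand-in, a classic Wiener-style gain rule computed from an SNR estimate looks like this (the spectral floor value is an illustrative assumption):

```python
def wiener_gain(snr_linear, floor=0.1):
    """Per-frequency suppression gain from an estimated a-priori SNR.

    The Wiener rule snr / (1 + snr) attenuates low-SNR bins strongly;
    the floor limits attenuation to reduce musical-noise artifacts.
    """
    g = snr_linear / (1.0 + snr_linear)
    return max(g, floor)
```

Applied per frequency bin, gains near 1 pass speech-dominated bins through, while noise-dominated bins are attenuated toward the floor.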

7.
Hear Res; 374: 58-68, 2019 Mar 15.
Article in English | MEDLINE | ID: mdl-30732921

ABSTRACT

Faster speech may facilitate more efficient communication, but speech that is too fast becomes unintelligible. The maximum speeds at which Mandarin words remained intelligible in a sentence context were quantified for normal-hearing (NH) and cochlear implant (CI) listeners by measuring time-compression thresholds (TCTs) in an adaptive staircase procedure. In Experiment 1, both original and CI-vocoded time-compressed speech from the MSP (Mandarin speech perception) and MHINT (Mandarin hearing in noise test) corpora was presented to 10 NH subjects over headphones. In Experiment 2, original time-compressed speech was presented to 10 CI subjects and another 10 NH subjects through a loudspeaker in a soundproof room. Sentences were time-compressed without changing their spectral profile and were presented up to three times within a single trial. At the end of each trial, the number of correctly identified words in the sentence was scored, and a 50% word-recognition threshold was tracked in the psychophysical procedure. The observed median TCTs were very similar for MSP and MHINT speech. For NH listeners, median TCTs were around 16.7 syllables/s for normal speech, and 11.8 and 8.6 syllables/s for 8- and 4-channel tone-carrier vocoded speech, respectively. For CI listeners, TCTs were only around 6.8 syllables/s. The interquartile range of the TCTs within each cohort was smaller than 3.0 syllables/s. Speech reception thresholds in noise, also measured in Experiment 2, were strongly correlated with TCTs for CI listeners. In conclusion, Mandarin sentence TCTs were around 16.7 syllables/s for most NH subjects but rarely faster than 10.0 syllables/s for CI listeners, quantitatively illustrating the upper limits of fast-speech information processing with CIs.
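A staircase like the one described, tracking a 50% word-recognition point on a speech-rate variable, can be sketched as follows; the multiplicative step, reversal rule, and averaging window are illustrative assumptions, and `word_score` is a hypothetical callback returning the fraction of words repeated correctly at a given rate.

```python
def tct_track(word_score, start_rate=5.0, factor=1.25, n_rev=8):
    """One-down/one-up staircase on speech rate in syllables/s.

    The rate increases after a correct trial (>= 50% of words) and
    decreases otherwise, so the track oscillates around the
    time-compression threshold. Returns the geometric mean of the
    last 6 reversal points.
    """
    rate, reversals, going_up = start_rate, [], True
    while len(reversals) < n_rev:
        correct = word_score(rate) >= 0.5
        if correct != going_up:      # direction change marks a reversal
            reversals.append(rate)
            going_up = correct
        rate = rate * factor if correct else rate / factor
    gm = 1.0
    for r in reversals[-6:]:
        gm *= r
    return gm ** (1.0 / 6)
```

A multiplicative step and geometric averaging are common choices for rate-like variables, whose perceptual effect tends to scale proportionally rather than additively.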


Subject(s)
Auditory Threshold/physiology, Cochlear Implants, Language, Speech Intelligibility/physiology, Acoustic Stimulation, Adult, Algorithms, Child, Cochlear Implants/statistics & numerical data, Female, Healthy Volunteers, Humans, Male, Psychoacoustics, Signal Processing, Computer-Assisted, Speech Acoustics, Speech Perception/physiology, Time Factors, Young Adult
8.
IEEE Trans Neural Syst Rehabil Eng; 25(6): 641-649, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27448366

ABSTRACT

Lexical tone recognition with current cochlear implants (CIs) remains unsatisfactory because of significantly degraded pitch-related acoustic cues, which dominate tone recognition in normal-hearing (NH) listeners. Several secondary cues (e.g., amplitude contour, duration, and spectral envelope) that influence tone recognition in NH listeners and CI users have been studied. This work proposes a loudness contour manipulation algorithm, Loudness-Tone (L-Tone), to investigate the effects of loudness contour on Mandarin tone recognition and the effectiveness of using the loudness cue to enhance tone recognition for CI users. With L-Tone, the intensity of sound samples is multiplied by gain values determined by instantaneous fundamental frequencies (F0s) and pre-defined gain-F0 mapping functions. Perceptual experiments were conducted with a four-channel noise-band vocoder simulation in NH listeners and with CI users. The results suggest that 1) loudness contour is a useful secondary cue for Mandarin tone recognition, especially when pitch cues are significantly degraded, and 2) L-Tone can improve Mandarin tone recognition in both simulated and actual CI hearing without a significant negative effect on vowel and consonant recognition. L-Tone is a promising algorithm for incorporation into real-time CI processing and off-line CI rehabilitation training software.
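A pre-defined gain-F0 mapping of the kind described can be sketched as a simple linear function; the F0 range, gain range, and linear shape are illustrative assumptions, not the mappings used in the study.

```python
def l_tone_gain(f0, f0_min=75.0, f0_max=300.0, g_min=0.5, g_max=2.0):
    """Map instantaneous F0 to an intensity gain (higher F0 -> louder),
    so the loudness contour mirrors the pitch contour."""
    if f0 <= 0:                  # unvoiced frame: leave the level alone
        return 1.0
    x = min(max((f0 - f0_min) / (f0_max - f0_min), 0.0), 1.0)
    return g_min + x * (g_max - g_min)

def apply_l_tone(frame_levels, f0_track):
    """Scale each frame's level by the gain mapped from its F0."""
    return [s * l_tone_gain(f0) for s, f0 in zip(frame_levels, f0_track)]
```

A rising tone (Tone 2) then receives a rising loudness contour and a falling tone (Tone 4) a falling one, giving listeners with degraded pitch cues a redundant loudness cue.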


Subject(s)
Cochlear Implants, Deafness/physiopathology, Deafness/rehabilitation, Language, Pitch Perception, Sound Spectrography/methods, Speech Perception, Acoustic Stimulation/methods, Adolescent, Adult, Auditory Perception, China, Cues, Deafness/diagnosis, Female, Humans, Male, Young Adult
9.
J Acoust Soc Am; 139(1): 301-10, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26827026

ABSTRACT

Temporal envelope-based signal processing strategies are widely used in cochlear-implant (CI) systems. It is well recognized that the inability to convey temporal fine structure (TFS) in the stimuli limits CI users' performance, but it is still unclear how to deliver TFS effectively. The recently proposed temporal limits encoder (TLE) strategy derives an amplitude modulator for generating stimuli coded by an interleaved-sampling strategy. The TLE modulator contains information related to the original temporal envelope and a slowly varying TFS component from the band signal. In this paper, theoretical analyses demonstrate the superiority of TLE over two existing strategies: the clinically available continuous-interleaved-sampling (CIS) strategy and the experimental harmonic-single-sideband-encoder strategy. Perceptual experiments with vocoder simulations in normal-hearing listeners compare the performance of TLE and CIS on two tasks (Mandarin speech reception in babble noise and tone recognition in quiet). The performance of the TLE modulator is mostly better than (for most tone-band vocoders) or comparable to (for noise-band vocoders) that of the CIS modulator on both tasks. This work implies that there is potential for improving the representation of TFS with CIs by using a TLE strategy.


Subject(s)
Cochlear Implants, Noise, Recognition, Psychology/physiology, Speech Perception/physiology, Algorithms, China, Female, Healthy Volunteers, Humans, Language, Male, Perceptual Masking/physiology, Phonetics, Pitch Discrimination/physiology, Signal-to-Noise Ratio, Sound Spectrography