Results 1 - 5 of 5
1.
Hear Res ; 391: 107969, 2020 06.
Article in English | MEDLINE | ID: mdl-32320925

ABSTRACT

Speech recognition in noisy environments remains a challenge for cochlear implant (CI) recipients. Unwanted charge interactions between current pulses, both within and between electrode channels, are likely to impair performance. Here we investigate the effect of reducing the number of current pulses on speech perception. This was achieved by implementing a psychoacoustic temporal-masking model in which current pulses in each channel were passed through a temporal integrator to identify and remove pulses that were less likely to be perceived by the recipient. The decision criterion of the temporal integrator was varied to control the percentage of pulses removed in each condition. In experiment 1, speech in quiet was processed with a standard Continuous Interleaved Sampling (CIS) strategy and with 25, 50, and 75% of pulses removed. In experiment 2, performance was measured for speech in noise with the CIS reference and with 50 and 75% of pulses removed. Speech intelligibility in quiet revealed no significant difference between the reference and test conditions. For speech in noise, results showed a significant improvement of 2.4 dB when 50% of pulses were removed, and performance with 75% of pulses removed was not significantly different from the reference. Further, by reducing the overall number of current pulses by 25, 50, and 75% while accounting for the increase in charge needed to compensate for the resulting decrease in loudness, estimated average power savings of 21.15, 40.95, and 63.45%, respectively, would be possible for this set of listeners. In conclusion, removing temporally masked pulses may improve speech perception in noise and result in substantial power savings.
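
The paper's pulse-removal implementation is not given in the abstract; the sketch below only illustrates the general idea under stated assumptions: each channel's pulse train feeds a leaky temporal integrator, and a pulse is dropped when its amplitude falls below a fraction (the decision criterion) of the current integrator state. The function name, time constant, and criterion value are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def remove_masked_pulses(pulse_amps, pulse_times, tau=0.005, criterion=0.5):
    """Drop pulses judged to be temporally masked (illustrative sketch).

    pulse_amps  : per-pulse current amplitudes for one electrode channel
    pulse_times : pulse onset times in seconds (monotonically increasing)
    tau         : decay time constant of the leaky integrator (assumed value)
    criterion   : fraction of the integrator state a pulse must exceed to be
                  kept; raising it removes a larger percentage of pulses
    """
    keep = np.ones(len(pulse_amps), dtype=bool)
    integrator, last_t = 0.0, None
    for i, (amp, t) in enumerate(zip(pulse_amps, pulse_times)):
        if last_t is not None:
            integrator *= np.exp(-(t - last_t) / tau)  # decay since last pulse
        if amp < criterion * integrator:
            keep[i] = False          # pulse is masked by preceding stimulation
        else:
            integrator += amp        # retained pulses add to the masking state
        last_t = t
    return keep

# Example: a 1000-pps pulse train with a fluctuating envelope
times = np.arange(0, 0.05, 0.001)
amps = 0.5 + 0.5 * np.abs(np.sin(2 * np.pi * 40 * times))
kept = remove_masked_pulses(amps, times)
print(f"removed {100 * (1 - kept.mean()):.0f}% of pulses")
```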


Subject(s)
Cochlear Implantation/instrumentation; Cochlear Implants; Hearing Loss/therapy; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/rehabilitation; Speech Perception; Acoustic Stimulation; Aged; Aged, 80 and over; Electric Stimulation; Hearing; Hearing Loss/diagnosis; Hearing Loss/physiopathology; Hearing Loss/psychology; Humans; Loudness Perception; Male; Middle Aged; Persons With Hearing Impairments/psychology; Speech Intelligibility
2.
Sci Rep ; 9(1): 11428, 2019 08 06.
Article in English | MEDLINE | ID: mdl-31388053

ABSTRACT

Cochlear implant (CI) users receive only limited sound information through their implant, which means that they struggle to understand speech in noisy environments. Recent work has suggested that combining the electrical signal from the CI with a haptic signal that provides crucial missing sound information ("electro-haptic stimulation"; EHS) could improve speech-in-noise performance. The aim of the current study was to test whether EHS could enhance speech-in-noise performance in CI users using: (1) a tactile signal derived using an algorithm that could be applied in real time, (2) a stimulation site appropriate for a real-world application, and (3) a tactile signal that could readily be produced by a compact, portable device. We measured speech intelligibility in multi-talker noise with and without vibro-tactile stimulation of the wrist in CI users, before and after a short training regime. No effect of EHS was found before training, but after training EHS increased the percentage of words correctly identified by an average of 8.3 percentage points, with some users improving by more than 20 percentage points. Our approach could offer an inexpensive and non-invasive means of improving speech-in-noise performance in CI users.
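
The abstract does not describe the tactile-signal algorithm itself; one plausible real-time approach, sketched below purely as an assumption, is to extract the low-frequency amplitude envelope of the noisy audio and use it to modulate a vibration carrier near the skin's most sensitive frequency range. The function name, carrier frequency, and envelope cutoff are illustrative choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import lfilter

def haptic_signal(audio, fs, carrier_hz=230.0, env_cutoff_hz=30.0):
    """Map audio to a wrist-vibration waveform (illustrative sketch).

    audio         : mono samples of the (noisy) speech signal
    fs            : sample rate in Hz
    carrier_hz    : vibration frequency, chosen near the skin's peak
                    sensitivity (assumed value)
    env_cutoff_hz : one-pole low-pass cutoff used to smooth the envelope
    """
    # Amplitude envelope: rectification followed by a cheap, causal one-pole low-pass
    alpha = np.exp(-2 * np.pi * env_cutoff_hz / fs)
    env = lfilter([1 - alpha], [1, -alpha], np.abs(audio))
    # Modulate a sinusoidal carrier that a compact actuator could reproduce
    t = np.arange(len(audio)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)

# Example with a synthetic stand-in for noisy speech
fs = 16000
noisy_speech = np.random.randn(fs)   # 1 s of noise as a placeholder
vibration = haptic_signal(noisy_speech, fs)
```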


Subject(s)
Acoustic Stimulation/methods; Cochlear Implants; Electric Stimulation/methods; Hearing Loss/rehabilitation; Speech Perception/physiology; Acoustic Stimulation/instrumentation; Adult; Aged; Audiometry, Speech; Auditory Threshold/physiology; Electric Stimulation/instrumentation; Female; Hearing Loss/diagnosis; Humans; Male; Middle Aged; Noise/adverse effects; Persons With Hearing Impairments/rehabilitation; Treatment Outcome
3.
Trends Hear ; 22: 2331216518797838, 2018.
Article in English | MEDLINE | ID: mdl-30222089

ABSTRACT

Many cochlear implant (CI) users achieve excellent speech understanding in acoustically quiet conditions but most perform poorly in the presence of background noise. An important contributor to this poor speech-in-noise performance is the limited transmission of low-frequency sound information through CIs. Recent work has suggested that tactile presentation of this low-frequency sound information could be used to improve speech-in-noise performance for CI users. Building on this work, we investigated whether vibro-tactile stimulation can improve speech intelligibility in multi-talker noise. The signal used for tactile stimulation was derived from the speech-in-noise using a computationally inexpensive algorithm. Eight normal-hearing participants listened to CI simulated speech-in-noise both with and without concurrent tactile stimulation of their fingertip. Participants' speech recognition performance was assessed before and after a training regime, which took place over 3 consecutive days and totaled around 30 min of exposure to CI-simulated speech-in-noise with concurrent tactile stimulation. Tactile stimulation was found to improve the intelligibility of speech in multi-talker noise, and this improvement was found to increase in size after training. Presentation of such tactile stimulation could be achieved by a compact, portable device and offer an inexpensive and noninvasive means for improving speech-in-noise performance in CI users.
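
The CI simulation presented to the normal-hearing listeners is not detailed in the abstract; a common way to simulate CI listening, shown below only as an assumed illustration, is a noise vocoder that extracts band envelopes and uses them to re-modulate band-limited noise. The number of bands, frequency range, and filter settings are illustrative choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(audio, fs, n_bands=8, lo=100.0, hi=7000.0):
    """Noise-vocoder CI simulation (illustrative sketch, not the study's code)."""
    edges = np.geomspace(lo, hi, n_bands + 1)       # log-spaced band edges
    out = np.zeros_like(audio)
    for b in range(n_bands):
        sos = butter(4, [edges[b], edges[b + 1]], btype="bandpass",
                     fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        env = np.abs(hilbert(band))                 # band envelope (unsmoothed here)
        carrier = sosfiltfilt(sos, np.random.randn(len(audio)))  # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)      # normalise to avoid clipping

fs = 16000
speech_in_noise = np.random.randn(fs)               # placeholder input signal
simulated = noise_vocoder(speech_in_noise, fs)
```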


Subject(s)
Acoustic Stimulation/methods; Cochlear Implantation/methods; Hearing Loss/surgery; Speech Intelligibility/physiology; Speech Perception/physiology; Adult; Algorithms; Audiometry, Speech/methods; Auditory Perception/physiology; Auditory Threshold/physiology; Cochlear Implants; Female; Humans; Male; Noise; Sampling Studies; Sensitivity and Specificity; Simulation Training; Sound Localization/physiology; Young Adult
4.
J Acoust Soc Am ; 141(3): 1985, 2017 03.
Article in English | MEDLINE | ID: mdl-28372043

ABSTRACT

Machine-learning-based approaches to speech enhancement have recently shown great promise for improving speech intelligibility for hearing-impaired listeners. Here, the performance of three machine-learning algorithms and one classical algorithm, Wiener filtering, was compared. Two algorithms based on neural networks were examined, one using a previously reported feature set and one using a feature set derived from an auditory model. The third machine-learning approach was a dictionary-based sparse-coding algorithm. Speech intelligibility and quality scores were obtained for participants with mild-to-moderate hearing impairments listening to sentences in speech-shaped noise and multi-talker babble following processing with the algorithms. Intelligibility and quality scores were significantly improved by each of the three machine-learning approaches, but not by the classical approach. The largest improvements in both speech intelligibility and quality were obtained with the neural network using the feature set based on auditory modeling. Furthermore, neural-network-based techniques appeared more promising than dictionary-based sparse coding in terms of performance and ease of implementation.
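
For context on the classical baseline, the sketch below shows a generic textbook STFT-domain Wiener filter with a decision-directed a priori SNR estimate; it is not the specific implementation evaluated in the paper, and it assumes the noise spectrum can be estimated from an initial noise-only segment.

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(noisy, fs, noise_frames=10, alpha=0.98):
    """STFT-domain Wiener filter with decision-directed SNR (textbook sketch)."""
    f, t, X = stft(noisy, fs=fs, nperseg=512)
    # Assumption: the first few frames contain noise only
    noise_psd = np.mean(np.abs(X[:, :noise_frames]) ** 2, axis=1)
    gain_prev, X_prev = np.ones_like(noise_psd), X[:, 0]
    out = np.zeros_like(X)
    for m in range(X.shape[1]):
        snr_post = np.abs(X[:, m]) ** 2 / (noise_psd + 1e-12)
        # Decision-directed a priori SNR estimate
        snr_prio = (alpha * np.abs(gain_prev * X_prev) ** 2 / (noise_psd + 1e-12)
                    + (1 - alpha) * np.maximum(snr_post - 1, 0))
        gain = snr_prio / (1 + snr_prio)            # Wiener gain
        out[:, m] = gain * X[:, m]
        gain_prev, X_prev = gain, X[:, m]
    _, enhanced = istft(out, fs=fs, nperseg=512)
    return enhanced

fs = 16000
noisy = np.random.randn(2 * fs)                     # placeholder noisy speech
clean_estimate = wiener_enhance(noisy, fs)
```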


Subject(s)
Hearing Aids; Hearing Loss/rehabilitation; Machine Learning; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/rehabilitation; Signal Processing, Computer-Assisted; Speech Intelligibility; Speech Perception; Acoustic Stimulation; Aged; Audiometry, Speech; Electric Stimulation; Female; Hearing Loss/diagnosis; Hearing Loss/psychology; Humans; Male; Middle Aged; Neural Networks, Computer; Persons With Hearing Impairments/psychology; Recognition, Psychology
5.
Hear Res ; 344: 183-194, 2017 02.
Article in English | MEDLINE | ID: mdl-27913315

ABSTRACT

Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features, and feeds them to a neural network to estimate which frequency channels contain more perceptually important information (higher signal-to-noise ratio, SNR). This estimate is used to attenuate noise-dominated and retain speech-dominated CI channels for electrical stimulation, as in traditional n-of-m CI coding strategies. The proposed algorithm was evaluated by measuring the speech-in-noise performance of 14 CI users in three types of background noise. Two NNSE algorithms were compared: a speaker-dependent algorithm, which was trained on the target speaker used for testing, and a speaker-independent algorithm, which was trained on different speakers. Relative to the unprocessed condition, significant improvements in the intelligibility of speech in stationary and fluctuating noises were found for the speaker-dependent algorithm in all three noise types and for the speaker-independent algorithm in two of the three noise types. The NNSE algorithms used noise-specific neural networks that generalized to novel segments of the same noise type and worked over a range of SNRs. The proposed algorithm has the potential to improve the intelligibility of speech in noise for CI users while meeting the requirements of low computational complexity and processing delay for application in CI devices.
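
The abstract outlines the processing chain but not the implementation; the sketch below illustrates only the final channel-selection step, assuming a trained network has already produced per-channel SNR estimates for one stimulation frame. Channels judged noise-dominated are attenuated and at most m channels are retained, as in n-of-m coding; the function name, threshold, m, and attenuation depth are illustrative values.

```python
import numpy as np

def select_channels(envelopes, snr_estimates_db, m=8,
                    snr_threshold_db=0.0, attenuation_db=20.0):
    """Attenuate noise-dominated CI channels for one frame (illustrative sketch).

    envelopes        : per-channel envelope values for the current frame
    snr_estimates_db : per-channel SNR estimates from the (assumed) trained network
    m                : maximum number of channels to stimulate (n-of-m)
    snr_threshold_db : channels below this estimated SNR are attenuated
    attenuation_db   : gain reduction applied to rejected channels
    """
    atten = 10 ** (-attenuation_db / 20)
    gains = np.ones(len(envelopes))
    gains[snr_estimates_db < snr_threshold_db] = atten   # reject noise-dominated channels
    kept = np.where(gains == 1.0)[0]
    if len(kept) > m:                                    # keep only the m largest envelopes
        drop = kept[np.argsort(envelopes[kept])[:-m]]
        gains[drop] = atten
    return envelopes * gains

# Example frame: 22 channels with random envelopes and SNR estimates
rng = np.random.default_rng(0)
env = rng.random(22)
snr = rng.uniform(-10, 15, 22)
stimulated = select_channels(env, snr)
```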


Subject(s)
Cochlear Implantation/instrumentation; Cochlear Implants; Neural Networks, Computer; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/rehabilitation; Signal Processing, Computer-Assisted; Speech Intelligibility; Speech Perception; Acoustic Stimulation; Acoustics; Adult; Aged; Aged, 80 and over; Algorithms; Audiometry, Speech; Comprehension; Electric Stimulation; Humans; Middle Aged; Persons With Hearing Impairments/psychology; Prosthesis Design; Sound Spectrography; Young Adult