Results 1 - 6 of 6
1.
PLoS Biol; 19(10): e3001439, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34669696

ABSTRACT

The ability to navigate "cocktail party" situations by focusing on sounds of interest over irrelevant, background sounds is often considered in terms of cortical mechanisms. However, subcortical circuits such as the pathway underlying the medial olivocochlear (MOC) reflex modulate the activity of the inner ear itself, supporting the extraction of salient features from the auditory scene prior to any cortical processing. To understand the contribution of auditory subcortical nuclei and the cochlea in complex listening tasks, we made physiological recordings along the auditory pathway while listeners engaged in detecting non(sense) words in lists of words. Both naturally spoken speech and intrinsically noisy, vocoded speech (filtering that mimics processing by a cochlear implant, CI) significantly activated the MOC reflex, but this was not the case for speech in background noise, which engaged midbrain and cortical resources more. A model of the initial stages of auditory processing reproduced specific effects of each form of speech degradation, providing a rationale for goal-directed gating of the MOC reflex based on enhancing the representation of the energy envelope of the acoustic waveform. Our data reveal the coexistence of two strategies in the auditory system that may facilitate speech understanding in situations where the signal is either intrinsically degraded or masked by extrinsic acoustic energy. Whereas intrinsically degraded streams recruit the MOC reflex to improve the peripheral representation of speech cues, extrinsically masked streams rely more on higher auditory centres to denoise signals.
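For readers unfamiliar with the vocoded-speech manipulation mentioned above, the following is a minimal noise-vocoder sketch in Python. It is not the study's processing chain; the channel count, filter order, band edges and sample rate are illustrative assumptions.

```python
# Minimal noise-vocoder sketch: each band's fine structure is replaced by noise
# carrying that band's energy envelope, loosely mimicking CI-style processing.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, band_edges_hz):
    """Return a noise-vocoded version of x using the given band edges (Hz)."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                 # energy envelope of the band
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                        # envelope-modulated noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)

# Example: 8 logarithmically spaced channels between 100 Hz and 8 kHz (assumptions).
fs = 22050
edges = np.geomspace(100, 8000, 9)
speech = np.random.default_rng(1).standard_normal(fs)  # stand-in for a speech signal
vocoded = noise_vocode(speech, fs, edges)
```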


Subject(s)
Brain Stem/physiology, Reflex/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adolescent, Adult, Auditory Cortex/physiology, Behavior, Cochlea/physiology, Computer Simulation, Female, Humans, Male, Models, Biological, Neurons/physiology, Noise, Task Performance and Analysis, Young Adult
2.
Article in English | MEDLINE | ID: mdl-34232883

ABSTRACT

Electroencephalogram (EEG)-based neurofeedback has been widely studied for tinnitus therapy in recent years. Most existing research relies on experts' cognitive prediction, and studies based on machine learning and deep learning are either data-hungry or do not generalize well to new subjects. In this paper, we propose a robust, data-efficient model for distinguishing tinnitus from the healthy state based on EEG-based tinnitus neurofeedback. We propose a trend descriptor, a feature extractor with lower fineness, to reduce the effect of electrode noise on EEG signals, and a siamese encoder-decoder network, boosted in a supervised manner, to learn accurate alignment and to acquire high-quality transferable mappings across subjects and EEG signal channels. Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms when analyzing subjects' EEG neurofeedback to 90 dB and 100 dB sounds, achieving an accuracy of 91.67%-94.44% in predicting tinnitus and control subjects in a subject-independent setting. Ablation studies across mixed subjects and parameter settings show that the method's performance is stable.
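To make the siamese encoder-decoder idea concrete, here is a small sketch in PyTorch of a shared encoder with a reconstruction decoder and a contrastive alignment term. It is not the paper's architecture: the layer sizes, loss weighting, margin and feature dimensionality are illustrative assumptions.

```python
# Sketch: a siamese encoder-decoder over pairs of EEG feature vectors.
import torch
import torch.nn as nn

class SiameseEncoderDecoder(nn.Module):
    def __init__(self, n_features=64, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, xa, xb):
        za, zb = self.encoder(xa), self.encoder(xb)   # shared weights for both inputs
        return za, zb, self.decoder(za), self.decoder(zb)

def loss_fn(xa, xb, same_label, model, margin=1.0):
    za, zb, ra, rb = model(xa, xb)
    recon = nn.functional.mse_loss(ra, xa) + nn.functional.mse_loss(rb, xb)
    d = torch.norm(za - zb, dim=1)
    # pull same-class pairs together, push different-class pairs beyond the margin
    contrast = torch.where(same_label, d**2, torch.clamp(margin - d, min=0)**2).mean()
    return recon + contrast

# Illustrative usage with random data standing in for trend-descriptor features.
xa, xb = torch.randn(8, 64), torch.randn(8, 64)
same = torch.randint(0, 2, (8,)).bool()
loss = loss_fn(xa, xb, same, SiameseEncoderDecoder())
```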


Subject(s)
Neurofeedback, Tinnitus, Algorithms, Electroencephalography, Humans, Machine Learning, Tinnitus/diagnosis
3.
J Acoust Soc Am; 141(3): 1985, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28372043

ABSTRACT

Machine-learning-based approaches to speech enhancement have recently shown great promise for improving speech intelligibility for hearing-impaired listeners. Here, the performance of three machine-learning algorithms and one classical algorithm, Wiener filtering, was compared. Two algorithms based on neural networks were examined, one using a previously reported feature set and one using a feature set derived from an auditory model. The third machine-learning approach was a dictionary-based sparse-coding algorithm. Speech intelligibility and quality scores were obtained for participants with mild-to-moderate hearing impairments listening to sentences in speech-shaped noise and multi-talker babble following processing with the algorithms. Intelligibility and quality scores were significantly improved by each of the three machine-learning approaches, but not by the classical approach. The largest improvements in both speech intelligibility and quality were obtained with the neural network using the feature set based on auditory modeling. Furthermore, neural-network-based techniques appeared more promising than dictionary-based sparse coding in terms of both performance and ease of implementation.
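As a point of reference for the classical baseline named above, the following is a minimal sketch of a Wiener-style spectral gain. It is not the study's implementation: the STFT settings are arbitrary, and the noise spectrum is assumed to be estimated from a noise-only lead-in, whereas the machine-learning approaches in the study learn the gain or mask from features instead.

```python
# Sketch: Wiener gain per time-frequency bin from a fixed noise PSD estimate.
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(noisy, fs, noise_psd, nperseg=512):
    f, t, X = stft(noisy, fs=fs, nperseg=nperseg)
    noisy_psd = np.abs(X) ** 2
    snr = np.maximum(noisy_psd / (noise_psd[:, None] + 1e-12) - 1.0, 0.0)  # crude SNR estimate
    gain = snr / (snr + 1.0)                                               # Wiener gain per bin
    _, enhanced = istft(gain * X, fs=fs, nperseg=nperseg)
    return enhanced

# Illustrative usage: the first half-second is assumed to be noise only.
fs = 16000
rng = np.random.default_rng(0)
noisy = rng.standard_normal(2 * fs)                    # stand-in for noisy speech
_, _, N = stft(noisy[:fs // 2], fs=fs, nperseg=512)
noise_psd = np.mean(np.abs(N) ** 2, axis=1)
clean_est = wiener_enhance(noisy, fs, noise_psd)
```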


Subject(s)
Hearing Aids, Hearing Loss/rehabilitation, Machine Learning, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Signal Processing, Computer-Assisted, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Aged, Audiometry, Speech, Electric Stimulation, Female, Hearing Loss/diagnosis, Hearing Loss/psychology, Humans, Male, Middle Aged, Neural Networks, Computer, Persons With Hearing Impairments/psychology, Recognition, Psychology
4.
Hear Res; 344: 183-194, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27913315

ABSTRACT

Speech understanding in noisy environments is still one of the major challenges for cochlear implant (CI) users in everyday life. We evaluated a speech enhancement algorithm based on neural networks (NNSE) for improving speech intelligibility in noise for CI users. The algorithm decomposes the noisy speech signal into time-frequency units, extracts a set of auditory-inspired features and feeds them to a neural network, which estimates which frequency channels contain the more perceptually important information (higher signal-to-noise ratio, SNR). This estimate is used to attenuate noise-dominated channels and retain speech-dominated CI channels for electrical stimulation, as in traditional n-of-m CI coding strategies. The proposed algorithm was evaluated by measuring the speech-in-noise performance of 14 CI users with three types of background noise. Two NNSE algorithms were compared: a speaker-dependent algorithm, trained on the target speaker used for testing, and a speaker-independent algorithm, trained on different speakers. Relative to the unprocessed condition, significant improvements in the intelligibility of speech in stationary and fluctuating noises were found for the speaker-dependent algorithm in all noise types and for the speaker-independent algorithm in two of the three noise types. The NNSE algorithms used noise-specific neural networks that generalized to novel segments of the same noise type and worked over a range of SNRs. The proposed algorithm has the potential to improve the intelligibility of speech in noise for CI users while meeting the requirements of low computational complexity and processing delay for application in CI devices.
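The n-of-m gating step described above can be sketched in a few lines. This is not the evaluated NNSE system; the per-channel SNR values stand in for the neural network's output, and the channel count, n, and attenuation are illustrative assumptions.

```python
# Sketch: keep the n most speech-dominated of m CI channels in one stimulation frame.
import numpy as np

def select_n_of_m(envelopes, snr_estimates, n=8, attenuation=0.0):
    """envelopes, snr_estimates: arrays of length m (one value per CI channel)."""
    keep = np.argsort(snr_estimates)[-n:]      # indices of the n highest-SNR channels
    gated = envelopes * attenuation            # attenuate noise-dominated channels
    gated[keep] = envelopes[keep]              # retain speech-dominated channels
    return gated

# Example with m = 22 channels (an assumption, not the study's device).
rng = np.random.default_rng(0)
env = rng.uniform(0, 1, 22)
snr = rng.normal(0, 5, 22)    # stand-in for the network's per-channel SNR estimate
stim = select_n_of_m(env, snr, n=8)
```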


Subject(s)
Cochlear Implantation/instrumentation, Cochlear Implants, Neural Networks, Computer, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Signal Processing, Computer-Assisted, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Acoustics, Adult, Aged, Aged, 80 and over, Algorithms, Audiometry, Speech, Comprehension, Electric Stimulation, Humans, Middle Aged, Persons With Hearing Impairments/psychology, Prosthesis Design, Sound Spectrography, Young Adult
5.
Trends Hear; 19, 2015 Dec 30.
Article in English | MEDLINE | ID: mdl-26721926

ABSTRACT

Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and in the modulated envelopes of high-frequency sounds is considered comparable, particularly for envelopes shaped to transmit a fidelity of temporal information similar to that normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor (to the point of discrimination thresholds being unattainable) compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing are carrier-frequency dependent. Here, we assessed listeners' sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were obtained even for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners; at 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered.
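The class of stimulus studied here can be sketched as follows: a high-frequency carrier whose sinusoidal amplitude modulation is delayed at one ear while the carrier phase is identical in both ears, so only an envelope ITD is conveyed. The carrier and modulation rates below match values mentioned in the abstract; the duration, ITD value and lack of onset ramping are illustrative assumptions.

```python
# Sketch: sinusoidally amplitude-modulated high-frequency tone with an envelope-only ITD.
import numpy as np

fs = 48000
dur = 0.3
t = np.arange(int(fs * dur)) / fs
fc, fm = 4000.0, 500.0            # 4-kHz carrier, 500-Hz modulation rate
itd = 100e-6                      # 100-microsecond envelope ITD (illustrative)

carrier = np.sin(2 * np.pi * fc * t)
env_left = 0.5 * (1 + np.sin(2 * np.pi * fm * t))
env_right = 0.5 * (1 + np.sin(2 * np.pi * fm * (t - itd)))   # envelope delayed at the right ear

left = env_left * carrier
right = env_right * carrier       # same carrier phase in both ears: the ITD lives in the envelope only
```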


Subject(s)
Acoustic Stimulation/methods, Hearing/physiology, Loudness Perception/physiology, Reaction Time/physiology, Sound Localization/physiology, Analysis of Variance, Auditory Pathways/physiology, Auditory Threshold/physiology, Female, Humans, Male, Noise/prevention & control, Pitch Discrimination/physiology, Reference Values, Sampling Studies, Sensitivity and Specificity
6.
J Acoust Soc Am; 133(4): 2288-2300, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23556596

ABSTRACT

At high frequencies, interaural time differences (ITDs) are conveyed by the sound envelope. Sensitivity to envelope ITDs depends crucially on the envelope shape. Reverberation degrades the envelope shape, reducing the modulation depth of the envelope and the slope of its flanks. Reverberation also reduces the envelope interaural coherence (i.e., the similarity of the envelopes at the two ears). The current study investigates the extent to which these changes affect sensitivity to envelope ITDs. The first experiment measured ITD discrimination thresholds at low and high frequencies in a simulated room. The stimulus was either a low-frequency narrowband noise or the same noise transposed to a higher frequency. The results suggest that the effect of reverberation on ITD thresholds was multiplicative. Given that the threshold without reverberation was larger for the transposed stimulus than for the low-frequency stimulus, this meant that, in absolute terms, the thresholds for the transposed stimulus increased much more with reverberation than those for the low-frequency stimulus. Three further experiments indicated that the effect of reverberation on envelope ITD thresholds was due to the combined effect of the reduction in envelope modulation depth and slope and the decrease in envelope interaural coherence.
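One simple way to quantify the envelope interaural coherence mentioned above is the peak of the normalized interaural cross-correlation of the Hilbert envelopes; the metric used in the study may differ, so the sketch below is only an assumed illustration.

```python
# Sketch: envelope interaural coherence as the peak normalized cross-correlation
# of the mean-removed Hilbert envelopes at the two ears.
import numpy as np
from scipy.signal import hilbert

def envelope_interaural_coherence(left, right):
    el = np.abs(hilbert(left));  el -= np.mean(el)
    er = np.abs(hilbert(right)); er -= np.mean(er)
    xcorr = np.correlate(el, er, mode="full")
    norm = np.sqrt(np.sum(el**2) * np.sum(er**2))
    return np.max(xcorr) / norm   # near 1 for closely matched envelopes, lower with reverberation

# Example: identical envelopes give 1.0; adding independent noise lowers the value.
rng = np.random.default_rng(0)
sig = rng.standard_normal(4800)
print(envelope_interaural_coherence(sig, sig))
print(envelope_interaural_coherence(sig, sig + rng.standard_normal(4800)))
```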


Subject(s)
Auditory Perception, Cues (Psychology), Time Perception, Acoustic Stimulation, Adult, Analysis of Variance, Audiometry, Auditory Threshold, Discrimination, Psychological, Female, Humans, Male, Psychoacoustics, Time Factors, Vibration, Young Adult