1.
J Neurosci ; 40(12): 2562-2572, 2020 03 18.
Article in English | MEDLINE | ID: mdl-32094201

ABSTRACT

When selectively attending to a speech stream in multi-talker scenarios, low-frequency cortical activity is known to synchronize selectively to fluctuations in the attended speech signal. Older listeners with age-related sensorineural hearing loss (presbycusis) often struggle to understand speech in such situations, even when wearing a hearing aid. Yet, it is unclear whether a peripheral hearing loss degrades the attentional modulation of cortical speech tracking. Here, we used psychoacoustics and electroencephalography (EEG) in male and female human listeners to examine potential effects of hearing loss on EEG correlates of speech envelope synchronization in cortex. Behaviorally, older hearing-impaired (HI) listeners showed degraded speech-in-noise recognition and reduced temporal acuity compared with age-matched normal-hearing (NH) controls. During EEG recordings, we used a selective attention task with two spatially separated simultaneous speech streams in which both NH and HI listeners showed high speech recognition performance. Low-frequency (<10 Hz) envelope-entrained EEG responses were enhanced in the HI listeners, both for the attended speech and for tone sequences modulated at slow rates (4 Hz) during passive listening. Responses to the ignored stream were reduced relative to the attended stream in both HI and NH listeners, allowing the attended target to be classified from single-trial EEG data with similarly high accuracy in the two groups. However, despite robust attention-modulated speech entrainment, the HI listeners rated the competing speech task as more difficult. These results suggest that the speech-in-noise problems experienced by older HI listeners are not necessarily associated with degraded attentional selection.

SIGNIFICANCE STATEMENT People with age-related sensorineural hearing loss often struggle to follow speech in the presence of competing talkers. It is currently unclear whether hearing impairment degrades the ability to use selective attention to suppress distracting speech in situations where the distractor is well segregated from the target. Here, we report amplified envelope-entrained cortical EEG responses to attended speech and to simple tones modulated at speech rates (4 Hz) in listeners with age-related hearing loss. Critically, despite increased self-reported listening difficulties, cortical synchronization to speech mixtures was robustly modulated by selective attention in listeners with hearing loss. This allowed the attended talker to be classified from single-trial EEG responses with high accuracy in both older hearing-impaired listeners and age-matched normal-hearing controls.
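The core of this kind of envelope-based attention decoding can be illustrated with a toy simulation. All signals, mixing weights, and rates below are invented for illustration; this is not the study's data or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                                  # EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Toy low-frequency envelopes for two competing talkers (~4 Hz, roughly
# the syllable rate); real envelopes would be extracted from the audio,
# e.g., by rectification and low-pass filtering below 10 Hz.
env_attended = 1 + np.sin(2 * np.pi * 4 * t)
env_ignored = 1 + np.cos(2 * np.pi * 4 * t)

# Toy "cortical" signal that tracks the attended envelope more strongly
# than the ignored one, plus background EEG noise.
eeg = 1.0 * env_attended + 0.3 * env_ignored + rng.normal(0, 1, t.size)

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

r_att = pearson(eeg, env_attended)
r_ign = pearson(eeg, env_ignored)
# Single-trial attention decoding: pick the stream whose envelope
# correlates best with the recorded EEG.
decoded = "talker 1" if r_att > r_ign else "talker 2"
```

Because the simulated EEG tracks the attended envelope more strongly, the correlation-based classifier selects the attended talker, mirroring the classification logic used on real single-trial EEG.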


Subject(s)
Attention/physiology; Cortical Synchronization; Hearing Loss, Sensorineural/physiopathology; Hearing Loss, Sensorineural/psychology; Acoustic Stimulation; Aged; Electroencephalography; Evoked Potentials, Auditory; Female; Humans; Male; Middle Aged; Psychoacoustics; Psychomotor Performance; Recognition, Psychology; Speech Perception
2.
J Acoust Soc Am ; 146(4): 2562, 2019 10.
Article in English | MEDLINE | ID: mdl-31671986

ABSTRACT

Four existing speech intelligibility models with different theoretical assumptions were used to predict previously published behavioural data. Those data showed that complex tones with pitch-related periodicity are far less effective maskers of speech than aperiodic noise. This so-called masker-periodicity benefit (MPB) far exceeded the fluctuating-masker benefit (FMB) obtained from slow masker envelope fluctuations. In contrast, the normal-hearing listeners hardly benefitted from periodicity in the target speech. All tested models consistently underestimated MPB and FMB, while most of them also overestimated the intelligibility of vocoded speech. To understand these shortcomings, the internal signal representations of the models were analysed in detail. The best-performing model, the correlation-based version of the speech-based envelope power spectrum model (sEPSMcorr), combined an auditory processing front end with a modulation filterbank and a correlation-based back end. This model was then modified to further improve the predictions. The resulting second version of the sEPSMcorr outperformed the original model with all tested maskers and accounted for about half the MPB, which can be attributed to reduced modulation masking caused by the periodic maskers. However, as the sEPSMcorr2 failed to account for the other half of the MPB, the results also indicate that future models should consider the contribution of pitch-related effects, such as enhanced stream segregation, to further improve their predictive power.
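The sEPSM family of models decides intelligibility from the signal-to-noise ratio in the envelope (modulation) domain rather than in the audio domain. A minimal sketch of that core quantity follows; the band edges, toy envelopes, and floor constant are invented for illustration and this is not the published model.

```python
import numpy as np

def mod_power(env, fs, f_lo, f_hi):
    """Power of the normalized (AC/DC) envelope within one modulation band."""
    env = env / env.mean() - 1.0
    spec = np.abs(np.fft.rfft(env)) ** 2 / env.size ** 2
    freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return float(spec[band].sum())

def snr_env_db(env_mix, env_noise, fs, f_lo=2.0, f_hi=8.0):
    """Envelope-domain SNR: excess envelope power of the speech-plus-noise
    mixture over the noise alone, relative to the noise envelope power."""
    p_mix = mod_power(env_mix, fs, f_lo, f_hi)
    p_noise = mod_power(env_noise, fs, f_lo, f_hi)
    excess = max(p_mix - p_noise, 1e-4 * p_mix)   # floor keeps the ratio finite
    return float(10 * np.log10(excess / p_noise))

fs = 100                                   # envelope sampling rate (Hz)
t = np.arange(1000) / fs
env_steady = 1.0 + 0.1 * np.sin(2 * np.pi * 3 * t)   # weakly modulated masker
env_fluct = 1.0 + 0.3 * np.sin(2 * np.pi * 3 * t)    # strongly modulated masker
speech_mod = 0.5 * np.sin(2 * np.pi * 4 * t)         # speech envelope component

snr_weak_masker = snr_env_db(env_steady + speech_mod, env_steady, fs)
snr_strong_masker = snr_env_db(env_fluct + speech_mod, env_fluct, fs)
```

A masker with stronger intrinsic envelope fluctuations yields a lower envelope-domain SNR for the same speech component, which is the modulation-masking effect the abstract invokes to explain part of the masker-periodicity benefit.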


Subject(s)
Perceptual Masking; Periodicity; Speech Intelligibility; Speech; Acoustic Stimulation; Humans; Male; Models, Theoretical; Noise; Psychoacoustics; Signal Processing, Computer-Assisted; Sound Spectrography
3.
Sci Rep ; 9(1): 10404, 2019 07 18.
Article in English | MEDLINE | ID: mdl-31320656

ABSTRACT

It remains unclear whether musical training is associated with improved speech understanding in a noisy environment, with different studies reaching differing conclusions. Even in those studies that have reported an advantage for highly trained musicians, it is not known whether the benefits measured in laboratory tests extend to more ecologically valid situations. This study aimed to establish whether musicians are better than non-musicians at understanding speech in a background of competing speakers or speech-shaped noise under more realistic conditions, involving sounds presented in space via a spherical array of 64 loudspeakers, rather than over headphones, with and without simulated room reverberation. The study also included experiments testing fundamental frequency discrimination limens (F0DLs), interaural time difference limens (ITDLs), and attentive tracking. Sixty-four participants (32 non-musicians and 32 musicians) were tested, with the two groups matched in age, sex, and IQ as assessed with Raven's Advanced Progressive Matrices. There was a significant benefit of musicianship for F0DLs, ITDLs, and attentive tracking. However, speech scores were not significantly different between the two groups. The results suggest no musician advantage for understanding speech in background noise or competing talkers under a variety of conditions.


Subject(s)
Speech Perception/physiology; Speech/physiology; Acoustic Stimulation/methods; Attention/physiology; Child; Child, Preschool; Female; Humans; Male; Music; Noise; Pitch Discrimination/physiology
4.
Trends Hear ; 23: 2331216519825930, 2019.
Article in English | MEDLINE | ID: mdl-30755108

ABSTRACT

It has been suggested that the most important factor for obtaining high speech intelligibility in noise with cochlear implant (CI) recipients is to preserve the low-frequency amplitude modulations of speech across time and frequency by, for example, minimizing the amount of noise in the gaps between speech segments. In contrast, it has also been argued that the transient parts of the speech signal, such as speech onsets, provide the most important information for speech intelligibility. The present study investigated the relative impact of these two factors on the potential benefit of noise reduction for CI recipients by systematically introducing noise estimation errors within speech segments, speech gaps, and the transitions between them. The introduction of these noise estimation errors directly induces errors in the noise reduction gains within each of these regions. Speech intelligibility in both stationary and modulated noise was then measured using a CI simulation tested on normal-hearing listeners. The results suggest that minimizing noise in the speech gaps can improve intelligibility, at least in modulated noise. However, significantly larger improvements were obtained when both the noise in the gaps was minimized and the speech transients were preserved. These results imply that the ability to identify the boundaries between speech segments and speech gaps may be one of the most important factors for a noise reduction algorithm because knowing the boundaries makes it possible to minimize the noise in the gaps as well as enhance the low-frequency amplitude modulations of the speech.
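The manipulation described above can be sketched as follows. The Wiener-style gain rule and the log-normal error model are illustrative assumptions, not the study's exact procedure, but they show why errors in the noise estimate matter most inside speech gaps: there, a perfect estimate drives the gain to zero, and any error lets noise leak through.

```python
import numpy as np

def frame_gains(mix_power, noise_power_true, err_db, rng):
    """Wiener-style per-frame noise-reduction gains computed from a noise
    estimate carrying a log-normal error with `err_db` dB standard
    deviation (a sketch of injecting noise-estimation errors)."""
    err = rng.normal(0.0, err_db, np.size(mix_power))
    pn_est = noise_power_true * 10.0 ** (err / 10.0)   # erroneous noise estimate
    ps_est = np.maximum(mix_power - pn_est, 0.0)       # estimated speech power
    return ps_est / (ps_est + pn_est)

# Frames inside speech gaps contain noise only: the mixture power equals
# the true noise power, so a perfect estimate yields zero gain.
gap_power = np.full(1000, 0.1)
g_exact = frame_gains(gap_power, gap_power, 0.0, np.random.default_rng(1))
g_err = frame_gains(gap_power, gap_power, 6.0, np.random.default_rng(1))
```

With an exact noise estimate the gap gains are all zero (noise fully suppressed); with 6 dB estimation errors, many gap frames receive substantial gain, i.e., noise fills the gaps, which is the degradation the study controls by region.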


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Intelligibility; Acoustic Stimulation; Adult; Algorithms; Auditory Threshold; Electric Stimulation; Hearing Tests; Humans; Noise; Speech Perception
5.
J Assoc Res Otolaryngol ; 20(3): 263-277, 2019 06.
Article in English | MEDLINE | ID: mdl-30693416

ABSTRACT

Several studies have shown that musical training leads to improved fundamental frequency (F0) discrimination for young listeners with normal hearing (NH). It is unclear whether a comparable effect of musical training occurs for listeners whose sensory encoding of F0 is degraded. To address this question, the effect of musical training was investigated for three groups of listeners (young NH, older NH, and older listeners with hearing impairment, HI). In a first experiment, F0 discrimination was investigated using complex tones that differed in harmonic content and phase configuration (sine, positive, or negative Schroeder). Musical training was associated with significantly better F0 discrimination of complex tones containing low-numbered harmonics for all groups of listeners. Part of this effect was caused by the fact that musicians were more robust than non-musicians to harmonic roving. Despite the benefit relative to their non-musician counterparts, the older musicians, with or without HI, performed worse than the young musicians. In a second experiment, binaural sensitivity to temporal fine structure (TFS) cues was assessed for the same listeners by estimating the highest frequency at which an interaural phase difference was perceived. Performance was better for musicians in all groups of listeners, and the use of TFS cues was degraded for the two older groups of listeners. These findings suggest that musical training is associated with an enhancement of both the encoding of TFS cues and F0 discrimination in young and older listeners with or without HI, although the musicians' benefit decreased with increasing hearing loss. Additionally, models of the auditory periphery and midbrain were used to examine the effect of HI on F0 encoding. The model predictions reflected the worsening in F0 discrimination with increasing HI and accounted for up to 80% of the variance in the data.


Subject(s)
Hearing Loss/rehabilitation; Models, Biological; Music Therapy; Pitch Discrimination; Psychoacoustics; Adult; Aged; Humans; Middle Aged; Music; Young Adult
6.
Trends Hear ; 22: 2331216518807400, 2018.
Article in English | MEDLINE | ID: mdl-30384803

ABSTRACT

Pure-tone audiometry still represents the main measure to characterize individual hearing loss and the basis for hearing-aid fitting. However, the perceptual consequences of hearing loss are typically associated not only with a loss of sensitivity but also with a loss of clarity that is not captured by the audiogram. A detailed characterization of a hearing loss may be complex and needs to be simplified to efficiently explore the specific compensation needs of the individual listener. Here, it is hypothesized that any listener's hearing profile can be characterized along two dimensions of distortion: Type I and Type II. While Type I can be linked to factors affecting audibility, Type II reflects non-audibility-related distortions. To test this hypothesis, the individual performance data from two previous studies were reanalyzed using an unsupervised-learning technique to identify extreme patterns in the data, thus forming the basis for different auditory profiles. Next, a decision tree was determined to classify the listeners into one of the profiles. The analysis provides evidence for the existence of four profiles in the data. The most significant predictors for profile identification were related to binaural processing, auditory nonlinearity, and speech-in-noise perception. This approach could be valuable for analyzing other data sets to select the most relevant tests for auditory profiling and propose more efficient hearing-deficit compensation strategies.
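A hypothetical two-dimensional decision rule in the spirit of the Type I/Type II distortion space might look as follows. The thresholds, feature names, and profile labels here are invented for illustration; the study derives its actual decision tree from data.

```python
def classify_profile(audibility_loss_db, distortion_score):
    """Toy decision tree over the two hypothesized distortion dimensions.
    `audibility_loss_db` stands in for Type I (audibility-related)
    distortion and `distortion_score` for Type II (non-audibility-related)
    distortion; both names and cut points are illustrative."""
    high_type1 = audibility_loss_db > 30      # marked audibility deficit
    high_type2 = distortion_score > 0.5       # marked supra-threshold deficit
    if high_type1 and high_type2:
        return "Profile C"   # large distortions of both types
    if high_type1:
        return "Profile B"   # mainly audibility-related distortion
    if high_type2:
        return "Profile D"   # mainly non-audibility-related distortion
    return "Profile A"       # near-normal on both dimensions
```

The point of such a tree is clinical efficiency: a few threshold tests on the most predictive measures (here, two toy scores) route a listener into one of four profiles, each of which could then be paired with a compensation strategy.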


Subject(s)
Auditory Perception; Hearing Loss/diagnosis; Hearing Tests/methods; Hearing; Persons With Hearing Impairments/psychology; Acoustic Stimulation; Audiometry, Pure-Tone; Auditory Threshold; Case-Control Studies; Decision Support Techniques; Decision Trees; Hearing Loss/classification; Hearing Loss/physiopathology; Hearing Loss/psychology; Hearing Tests/statistics & numerical data; Humans; Noise/adverse effects; Perceptual Masking; Predictive Value of Tests; Speech Perception; Speech Reception Threshold Test; Unsupervised Machine Learning
7.
Trends Hear ; 22: 2331216518800870, 2018.
Article in English | MEDLINE | ID: mdl-30311552

ABSTRACT

There is conflicting evidence about the relative benefit of slow- and fast-acting compression for speech intelligibility. It has been hypothesized that fast-acting compression improves audibility at low signal-to-noise ratios (SNRs) but may distort the speech envelope at higher SNRs. The present study investigated the effects of compression with a nearly instantaneous attack time but either fast (10 ms) or slow (500 ms) release times on consonant identification in hearing-impaired listeners. Consonant-vowel speech tokens were presented at a range of presentation levels in two conditions: in the presence of interrupted noise and in quiet (with the compressor "shadow-controlled" by the corresponding mixture of speech and noise). These conditions were chosen to disentangle the effects of consonant audibility and noise-induced forward masking on speech intelligibility. A small but systematic intelligibility benefit of fast-acting compression was found in both the quiet and the noisy conditions for the lower speech levels. No detrimental effects of fast-acting compression were observed when the speech level exceeded the level of the noise. These findings suggest that fast-acting compression provides an audibility benefit in fluctuating interferers when compared with slow-acting compression while not substantially affecting the perception of consonants at higher SNRs.
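The attack/release behavior contrasted above can be sketched with a one-pole gain smoother operating on the level envelope. The threshold, ratio, and the dB-domain implementation are illustrative choices, not the study's compressor.

```python
import numpy as np

def compress_gain(env_db, fs, release_ms, threshold_db=50.0, ratio=3.0):
    """Gain track (in dB) of a compressor with near-instantaneous attack
    and a one-pole release smoother. Threshold and ratio are illustrative."""
    # Static (steady-state) gain: attenuate levels above the threshold.
    static = np.minimum(0.0, (threshold_db - env_db) * (1.0 - 1.0 / ratio))
    alpha = np.exp(-1.0 / (fs * release_ms / 1000.0))
    gain = np.empty_like(static)
    state = static[0]
    for i, g in enumerate(static):
        if g < state:          # level rose: attack (near-)instantaneously
            state = g
        else:                  # level fell: recover with the release time
            state = alpha * state + (1.0 - alpha) * g
        gain[i] = state
    return gain

fs = 1000                                  # envelope sampling rate (Hz)
# 70 dB burst for 100 ms, then the level drops to 40 dB for 400 ms.
env_db = np.concatenate([np.full(100, 70.0), np.full(400, 40.0)])
g_fast = compress_gain(env_db, fs, release_ms=10)    # "fast-acting"
g_slow = compress_gain(env_db, fs, release_ms=500)   # "slow-acting"
```

After the level drop, the fast compressor restores gain almost immediately (improving audibility of soft segments in the dips), while the slow compressor is still attenuating hundreds of milliseconds later, which is the trade-off the study measures.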


Subject(s)
Acoustic Stimulation/methods; Hearing Aids; Hearing Loss, Sensorineural/rehabilitation; Sound Spectrography/methods; Speech Intelligibility/physiology; Adult; Aged; Case-Control Studies; Female; Hearing Loss, Sensorineural/diagnosis; Humans; Male; Phonetics; Prosthesis Design; Reference Values; Signal-To-Noise Ratio; Speech Reception Threshold Test; Young Adult
8.
Trends Hear ; 22: 2331216518789302, 2018.
Article in English | MEDLINE | ID: mdl-30062913

ABSTRACT

Validating hearing-aid fittings in prelingual infants is challenging because typical measures (aided audiometry, etc.) are impossible with infants. One objective alternative uses an aided auditory steady-state response (ASSR) measurement. To make an appropriate measurement, the hearing aid's signal-processing features must be activated (or deactivated) as if the ASSR stimulus were real speech. Rather than manipulating the hearing-aid settings to achieve this, an ASSR stimulus with speech-like properties was developed. This promotes clinical simplicity and face validity of the validation. The stimulus consists of narrow-band CE-Chirps®, modified to mimic the International Speech Test Signal (ISTS). This study examines the cost of introducing the speech-like features into the ASSR stimulus. Thus, 90 to 100 Hz ASSRs were recorded to the ISTS-modified stimulus as well as an equivalent stimulus without the ISTS modification, presented through insert earphones to 10 young normal-hearing subjects. Noise-corrected ASSR magnitudes and clinically relevant detection times were estimated and analyzed with mixed-model analyses of variance. As a supplement, the observed changes to the ASSR magnitudes were compared with an objective characterization of the stimuli based on modulation power. The main findings were a reduction in ASSR magnitude of 4 dB and an increase in detection time by a factor of 1.5 for the ISTS-modified stimulus compared with the unmodified stimulus. Detection rates were unaffected given sufficient recording time. For clinical use of the hearing-aid validation procedure, the key metric is the detection time. While this varied considerably across subjects, the observed 50% mean increase corresponds to less than 1 min of additional recording time.


Subject(s)
Acoustic Stimulation/methods; Hearing Aids; Prosthesis Fitting; Speech; Acoustic Impedance Tests/methods; Adult; Auditory Threshold; Denmark; Hearing; Humans; Infant; Young Adult
9.
Hear Res ; 367: 161-168, 2018 09.
Article in English | MEDLINE | ID: mdl-30006111

ABSTRACT

The ability to segregate sounds from different sound sources is thought to depend on the perceptual salience of differences between the sounds, such as differences in frequency or fundamental frequency (F0). F0 discrimination of complex tones is better for tones with low harmonics than for tones that only contain high harmonics, suggesting greater pitch salience for the former. This leads to the expectation that the sequential stream segregation (streaming) of complex tones should be better for tones with low harmonics than for tones with only high harmonics. However, the results of previous studies are conflicting about whether this is the case. The goals of this study were to determine the effect of harmonic rank on streaming and to establish whether streaming is related to F0 discrimination. Thirteen young normal-hearing participants were tested. Streaming was assessed for pure tones and complex tones containing harmonics with various ranks using sequences of ABA triplets, where A and B differed in frequency or in F0. The participants were asked to try to hear two streams and to indicate when they heard one and when they heard two streams. F0 discrimination was measured for the same tones that were used as A tones in the streaming experiment. Both streaming and F0 discrimination worsened significantly with increasing harmonic rank. There was a significant relationship between streaming and F0 discrimination, indicating that good F0 discrimination is associated with good streaming. This supports the idea that the extent of stream segregation depends on the salience of the perceptual difference between successive sounds.
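The ABA triplet paradigm described above can be made concrete with a small signal-generation sketch. All durations, frequencies, and the Hann ramping below are illustrative choices, not the study's exact stimulus parameters.

```python
import numpy as np

def aba_sequence(f_a, f_b, fs=16000, tone_ms=100, gap_ms=20, n_triplets=3):
    """ABA- triplet sequence (A tone, B tone, A tone, silent slot) of the
    kind used in streaming experiments; all parameters are illustrative."""
    def tone(freq):
        t = np.arange(int(fs * tone_ms / 1000)) / fs
        return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # ramped
    gap = np.zeros(int(fs * gap_ms / 1000))
    silence = np.zeros(int(fs * tone_ms / 1000))                  # the "-" slot
    triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap,
                              tone(f_a), gap, silence])
    return np.tile(triplet, n_triplets)

# A and B differ in frequency for pure tones; for complex tones they
# would differ in F0 with harmonics starting at a chosen rank.
seq = aba_sequence(400.0, 500.0)
```

When the A-B difference is salient, listeners eventually hear the A and B tones split into two streams (a galloping "A-A" rhythm plus a slower B stream); with high-rank harmonics the weaker pitch salience makes this split harder, which is the study's central comparison.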


Subject(s)
Discrimination, Psychological; Pitch Discrimination; Acoustic Stimulation; Adult; Audiometry, Pure-Tone; Auditory Threshold; Cues; Female; Humans; Male; Time Factors; Young Adult
10.
Int J Audiol ; 57(5): 345-353, 2018 05.
Article in English | MEDLINE | ID: mdl-28971715

ABSTRACT

OBJECTIVE: The aims were to 1) establish which of four algorithms for estimating the residual noise level and signal-to-noise ratio (SNR) in auditory brainstem responses (ABRs) performs better in terms of post-average wave-V peak latency and amplitude errors and 2) determine whether SNR or noise floor is a better stop criterion when the outcome measure is peak latency or amplitude. DESIGN: The performance of the algorithms was evaluated by numerical simulations using an ABR template combined with electroencephalographic (EEG) recordings obtained without sound stimulus. The suitability of a fixed SNR versus a fixed noise floor stop criterion was assessed when variations in the wave-V waveform shape reflecting inter-subject variation were introduced. STUDY SAMPLE: Over 100 hours of raw EEG noise was recorded from 17 adult subjects, under different conditions (e.g. sleep or movement). RESULTS: ABR feature accuracy was similar for the four algorithms. However, it was shown that a fixed noise floor leads to higher ABR wave-V amplitude accuracy; conversely, a fixed SNR yields higher wave-V latency accuracy. CONCLUSION: Similar performance suggests the use of the less computationally complex algorithms. Different stop criteria are recommended depending on whether the ABR peak latency or the amplitude is the outcome measure of interest.
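The quantities being estimated can be sketched with the simplest member of this estimator family: the residual noise in an ensemble average estimated from the across-epoch variance (a "single-point" style estimator). This is an illustrative sketch, not a reconstruction of any of the four algorithms compared in the study.

```python
import numpy as np

def average_with_noise_estimate(epochs):
    """Ensemble average of ABR epochs plus a residual-noise estimate based
    on the variance across epochs; a generic sketch of this estimator
    family."""
    epochs = np.asarray(epochs)                # shape: (n_epochs, n_samples)
    avg = epochs.mean(axis=0)
    # The across-epoch variance divided by the number of epochs estimates
    # the residual noise power left in the ensemble average.
    noise_power = float(epochs.var(axis=0, ddof=1).mean() / epochs.shape[0])
    signal_power = float(np.mean(avg ** 2))
    snr_db = 10 * np.log10(max(signal_power - noise_power, 1e-12) / noise_power)
    return avg, noise_power, snr_db

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0.0, np.pi, 200))    # toy wave-V-like deflection
epochs = template + rng.normal(0.0, 1.0, (500, 200))
avg, res_noise, snr_db = average_with_noise_estimate(epochs)
```

A recording would stop once `res_noise` falls below a fixed noise floor, or once `snr_db` exceeds a fixed SNR criterion; the study's point is that these two stop rules favor amplitude accuracy and latency accuracy, respectively.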


Subject(s)
Algorithms; Auditory Threshold/physiology; Data Accuracy; Electroencephalography/statistics & numerical data; Evoked Potentials, Auditory, Brain Stem/physiology; Acoustic Stimulation/psychology; Adult; Female; Humans; Male; Middle Aged; Noise; Observer Variation; Young Adult
11.
J Acoust Soc Am ; 142(5): 3216, 2017 11.
Article in English | MEDLINE | ID: mdl-29195458

ABSTRACT

This study investigated the influence of hearing-aid (HA) and cochlear-implant (CI) processing on consonant perception in normal-hearing (NH) listeners. Measured data were compared to predictions obtained with a speech perception model [Zaar and Dau (2017). J. Acoust. Soc. Am. 141, 1051-1064] that combines an auditory processing front end with a correlation-based template-matching back end. In terms of HA processing, effects of strong nonlinear frequency compression and impulse-noise suppression were measured in 10 NH listeners using consonant-vowel stimuli. Regarding CI processing, the consonant perception data from DiNino et al. [(2016). J. Acoust. Soc. Am. 140, 4404-4418] were considered, which were obtained with noise-vocoded vowel-consonant-vowel stimuli in 12 NH listeners. The inputs to the model were the same stimuli as were used in the corresponding experiments. The model predictions obtained for the two data sets showed a large agreement with the perceptual data both in terms of consonant recognition and confusions, demonstrating the model's sensitivity to supra-threshold effects of hearing-instrument signal processing on consonant perception. The results could be useful for the evaluation of hearing-instrument processing strategies, particularly when combined with simulations of individual hearing impairment.
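The correlation-based template-matching back end can be sketched as follows. The templates here are random stand-ins for the model's internal auditory representations, and the decision rule is reduced to its bare essentials; this illustrates the general idea only, not the published model.

```python
import numpy as np

def template_match(representation, templates):
    """Correlation-based back end: choose the consonant whose stored
    template correlates best with the internal representation of the
    presented token."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))
    scores = {label: corr(representation, tpl)
              for label, tpl in templates.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
# Toy "internal representations" of three consonant-vowel tokens.
templates = {c: rng.normal(size=50) for c in ("ba", "da", "ga")}
token = templates["da"] + 0.2 * rng.normal(size=50)   # noisy /da/ token
label, scores = template_match(token, templates)
```

Because the winning label is simply the best-correlating template, the same machinery that predicts recognition also predicts confusions: when processing distorts the representation toward another template, the model errs in the same direction listeners do.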


Subject(s)
Cochlear Implantation/instrumentation; Cochlear Implants; Hearing Aids; Persons With Hearing Impairments/rehabilitation; Signal Processing, Computer-Assisted; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Adult; Audiometry, Speech; Auditory Threshold; Electric Stimulation; Humans; Persons With Hearing Impairments/psychology; Phonetics; Prosthesis Design
12.
J Acoust Soc Am ; 141(4): 2556, 2017 04.
Article in English | MEDLINE | ID: mdl-28464692

ABSTRACT

This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners' spatial perception in a reverberant environment. Three compression schemes-independent compression at each ear, linked compression between the two ears, and "spatially ideal" compression operating solely on the dry source signal-were considered using virtualized speech and noise bursts. Listeners indicated the location and extent of their perceived sound images on the horizontal plane. Linear processing was considered as the reference condition. The results showed that both independent and linked compression resulted in more diffuse and broader sound images as well as internalization and image splits, whereby more image splits were reported for the noise bursts than for speech. Only the spatially ideal compression provided the listeners with a spatial percept similar to that obtained with linear processing. The same general pattern was observed for both listener groups. An analysis of the interaural coherence and direct-to-reverberant ratio suggested that the spatial distortions associated with independent and linked compression resulted from enhanced reverberant energy. Thus, modifications of the relation between the direct and the reverberant sound should be avoided in amplification strategies that attempt to preserve the natural sound scene while restoring loudness cues.
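The interaural coherence analysis mentioned above can be computed as the maximum of the normalized interaural cross-correlation within a physiologically plausible lag range (±1 ms is a common choice; the signals below are invented).

```python
import numpy as np

def interaural_coherence(left, right, fs, max_lag_ms=1.0):
    """Maximum normalized interaural cross-correlation within +/-1 ms,
    a standard measure of how compact vs. diffuse a binaural signal is
    perceived."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    l = (left - left.mean()) / left.std()
    r = (right - right.mean()) / right.std()
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            c = float(np.mean(l[lag:] * r[:-lag]))
        elif lag < 0:
            c = float(np.mean(l[:lag] * r[-lag:]))
        else:
            c = float(np.mean(l * r))
        best = max(best, c)
    return best

rng = np.random.default_rng(0)
fs = 8000
x = rng.normal(size=4000)                          # diotic noise token
ic_same = interaural_coherence(x, x, fs)           # identical at both ears
ic_indep = interaural_coherence(x, rng.normal(size=4000), fs)
```

Identical ear signals give a coherence near 1 (a compact image), while decorrelated signals, as produced for example by enhanced reverberant energy reaching the two ears, give low coherence and are associated with broad, diffuse, or split images.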


Subject(s)
Correction of Hearing Impairment/instrumentation; Environmental Exposure/adverse effects; Hearing Aids; Hearing Loss, Sensorineural/rehabilitation; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/psychology; Sound Localization; Speech Perception; Acoustic Stimulation; Adult; Aged; Aged, 80 and over; Audiometry, Pure-Tone; Audiometry, Speech; Auditory Threshold; Case-Control Studies; Cues; Equipment Design; Female; Hearing; Hearing Loss, Sensorineural/diagnosis; Hearing Loss, Sensorineural/physiopathology; Hearing Loss, Sensorineural/psychology; Humans; Loudness Perception; Male; Middle Aged; Signal Processing, Computer-Assisted; Vibration
13.
Neuroimage ; 156: 435-444, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28412441

ABSTRACT

Selectively attending to one speaker in a multi-speaker scenario is thought to synchronize low-frequency cortical activity to the attended speech signal. In recent studies, reconstruction of speech from single-trial electroencephalogram (EEG) data has been used to decode which talker a listener is attending to in a two-talker situation. It is currently unclear how this generalizes to more complex sound environments. Behaviorally, speech perception is robust to the acoustic distortions that listeners typically encounter in everyday life, but it is unknown whether this is mirrored by a noise-robust neural tracking of attended speech. Here we used advanced acoustic simulations to recreate real-world acoustic scenes in the laboratory. In virtual acoustic realities with varying amounts of reverberation and number of interfering talkers, listeners selectively attended to the speech stream of a particular talker. Across the different listening environments, we found that the attended talker could be accurately decoded from single-trial EEG data irrespective of the different distortions in the acoustic input. For highly reverberant environments, speech envelopes reconstructed from neural responses to the distorted stimuli resembled the original clean signal more than the distorted input. With reverberant speech, we observed a late cortical response to the attended speech stream that encoded temporal modulations in the speech signal without its reverberant distortion. Single-trial attention decoding accuracies based on 40-50 s long blocks of data from 64 scalp electrodes were equally high (80-90% correct) in all considered listening environments and remained statistically significant using down to 10 scalp electrodes and short (<30 s) unaveraged EEG segments. In contrast to the robust decoding of the attended talker, decoding of the unattended talker deteriorated with the acoustic distortions. These results suggest that cortical activity tracks an attended speech signal in a way that is invariant to acoustic distortions encountered in real-life sound environments. Noise-robust attention decoding additionally suggests a potential utility of stimulus reconstruction techniques in attention-controlled brain-computer interfaces.
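Stimulus reconstruction of this kind is typically a regularized backward model mapping time-lagged EEG channels to the speech envelope. A toy sketch with simulated data follows; the lag range, regularization weight, and the 5-sample simulated neural latency are invented, and for brevity the decoder is fit and evaluated on the same simulated data (real analyses use separate training and test trials).

```python
import numpy as np

rng = np.random.default_rng(0)
n, ch = 2000, 8                           # time samples, EEG channels

def smooth_env(x):
    # Slow, envelope-like signal: white noise smoothed over 20 samples.
    return np.convolve(x, np.ones(20) / 20, mode="same")

env_att = smooth_env(rng.normal(size=n))  # attended-speech envelope
env_ign = smooth_env(rng.normal(size=n))  # ignored-speech envelope

# Simulated EEG: each channel reflects the attended envelope with a
# 5-sample neural latency, plus background noise.
mix = rng.normal(size=ch)
eeg = 0.5 * rng.normal(size=(n, ch))
eeg[5:] += np.outer(env_att[:-5], mix)

def lagged_matrix(eeg, lags):
    """Design matrix of time-lagged EEG; negative lags use EEG samples
    recorded *after* the envelope sample (neural response latency)."""
    n_s, n_ch = eeg.shape
    cols = []
    for lag in lags:
        shifted = np.zeros((n_s, n_ch))
        if lag >= 0:
            shifted[lag:] = eeg[:n_s - lag]
        else:
            shifted[:lag] = eeg[-lag:]
        cols.append(shifted)
    return np.hstack(cols)

def fit_decoder(eeg, envelope, lags, lam=100.0):
    """Backward model: ridge regression from lagged EEG to the envelope."""
    X = lagged_matrix(eeg, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ envelope)

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

lags = list(range(-9, 1))
w = fit_decoder(eeg, env_att, lags)
rec = lagged_matrix(eeg, lags) @ w        # reconstructed envelope
r_att = pearson(rec, env_att)
r_ign = pearson(rec, env_ign)
# Attention decoding: the reconstruction matches the attended envelope best.
```

Classifying the talker with the higher reconstruction correlation is the single-trial decoding step; the study's finding is that this comparison remains reliable even when the acoustic input is heavily distorted.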


Subject(s)
Attention/physiology; Auditory Cortex/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Electroencephalography; Female; Humans; Male; Noise; Young Adult
14.
J Acoust Soc Am ; 141(3): 1739, 2017 03.
Article in English | MEDLINE | ID: mdl-28372055

ABSTRACT

Consonant-vowel (CV) perception experiments provide valuable insights into how humans process speech. Here, two CV identification experiments were conducted in a group of hearing-impaired (HI) listeners, using 14 consonants followed by the vowel /ɑ/. The CVs were presented in quiet and with added speech-shaped noise at signal-to-noise ratios of 0, 6, and 12 dB. The HI listeners were provided with two different amplification schemes for the CVs. In the first experiment, a frequency-independent amplification (flat-gain) was provided and the CVs were presented at the most-comfortable loudness level. In the second experiment, a frequency-dependent prescriptive gain was provided. The CV identification results showed that, while the average recognition error score obtained with the frequency-dependent amplification was lower than that obtained with the flat-gain, the main confusions made by the listeners on a token basis remained the same in a majority of the cases. An entropy measure and an angular distance measure were proposed to assess the highly individual effects of the frequency-dependent gain on the consonant confusions in the HI listeners. The results suggest that the proposed measures, in combination with a well-controlled phoneme speech test, may be used to assess the impact of hearing-aid signal processing on speech intelligibility.


Subject(s)
Acoustic Stimulation/methods; Audiometry, Speech/methods; Hearing Aids; Hearing Loss, Sensorineural/rehabilitation; Persons With Hearing Impairments/rehabilitation; Phonetics; Speech Acoustics; Speech Perception; Voice Quality; Aged; Electric Stimulation; Equipment Design; Hearing Loss, Sensorineural/diagnosis; Hearing Loss, Sensorineural/psychology; Humans; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/psychology; Predictive Value of Tests; Recognition, Psychology; Signal Processing, Computer-Assisted; Speech Intelligibility
15.
PLoS One ; 12(3): e0174776, 2017.
Article in English | MEDLINE | ID: mdl-28355275

ABSTRACT

It is well known that pure-tone audiometry does not sufficiently describe individual hearing loss (HL) and that additional measures beyond pure-tone sensitivity might improve the diagnostics of hearing deficits. Specifically, forward masking experiments to estimate basilar-membrane (BM) input-output (I/O) functions have been proposed. However, such measures are very time consuming. The present study investigated possible modifications of the temporal masking curve (TMC) paradigm to improve time and measurement efficiency. In experiment 1, estimates of the knee point (KP) and compression ratio (CR) of individual BM I/Os were derived without considering the corresponding individual "off-frequency" TMC. While accurate estimation of KPs was possible, it is difficult to ensure that the tested dynamic range is sufficient. Therefore, in experiment 2, a TMC-based paradigm, referred to as the "gap method", was tested. In contrast to the standard TMC paradigm, the masker level was kept fixed and the "gap threshold" was obtained, such that the masker just masks a low-level (12 dB sensation level) signal. It is argued that this modification allows for better control of the tested stimulus level range, which appears to be the main drawback of the conventional TMC method. The results from the present study were consistent with the literature when estimating KP levels, but showed some limitations regarding the estimation of the CR values. Perspectives and limitations of both approaches are discussed.


Subject(s)
Audiometry, Pure-Tone/methods; Basilar Membrane/physiopathology; Hearing Loss, Sensorineural/physiopathology; Perceptual Masking/physiology; Acoustic Stimulation/methods; Adult; Auditory Threshold; Basilar Membrane/physiology; Female; Hearing Loss, Sensorineural/diagnosis; Humans; Male; Middle Aged; Reproducibility of Results; Sensitivity and Specificity; Time Factors
16.
J Acoust Soc Am ; 138(3): 1253-67, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428764

ABSTRACT

Responses obtained in consonant perception experiments typically show a large variability across stimuli of the same phonetic identity. The present study investigated the influence of different potential sources of this response variability. A distinction was made between source-induced variability, referring to perceptual differences caused by acoustical differences in the speech tokens and/or the masking-noise tokens, and receiver-related variability, referring to perceptual differences caused by within- and across-listener uncertainty. Consonant-vowel combinations consisting of 15 consonants followed by the vowel /i/ were spoken by two talkers and presented to eight normal-hearing listeners, both in quiet and in white noise at six different signal-to-noise ratios. The obtained responses were analyzed with respect to the different sources of variability using a measure of the perceptual distance between responses. The speech-induced variability across and within talkers and the across-listener variability were substantial and of similar magnitude. The noise-induced variability, obtained with time-shifted realizations of the same random process, was smaller but still significantly larger than the within-listener variability, which represented the smallest effect. The results have implications for the design of consonant perception experiments and provide constraints for future models of consonant perception.


Subject(s)
Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Audiometry, Speech, Auditory Threshold, Female, Healthy Volunteers, Humans, Male, Noise/adverse effects, Perceptual Masking, Recognition (Psychology), Speech Intelligibility, Time Factors, Young Adult
17.
Trends Hear; 18, 2014 Nov 23.
Article in English | MEDLINE | ID: mdl-25421087

ABSTRACT

For patients with residual hearing in one ear and a cochlear implant (CI) in the opposite ear, interaural place-pitch mismatches might be partly responsible for the large variability in individual benefit. Behavioral pitch-matching between the two ears has been suggested as a way to individualize the fitting of the frequency-to-electrode map but is rather tedious and unreliable. Here, an alternative method using two-formant vowels was developed and tested. The interaural spectral shift was inferred by comparing vowel spaces, measured by presenting the first formant (F1) to the nonimplanted ear and the second (F2) to either side. The method was first evaluated with eight normal-hearing listeners and vocoder simulations before being tested with 11 CI users. Average vowel distributions across subjects showed a similar pattern when F2 was presented to either side, suggesting acclimatization to the frequency map and indicating that interaural frequency-place mismatches can in principle be derived from such vowel spaces. However, individual vowel spaces with F2 presented to the implant did not allow a reliable estimation of the interaural mismatch, and the method remains limited by difficulties in the bimodal fusion of the two formants.


Subject(s)
Cochlear Implantation/instrumentation, Cochlear Implants, Correction of Hearing Impairment/instrumentation, Persons With Hearing Impairments/rehabilitation, Pitch Perception, Signal Processing, Computer-Assisted, Speech Acoustics, Speech Perception, Acoustic Stimulation, Adult, Aged, Audiometry, Pure-Tone, Auditory Threshold, Case-Control Studies, Humans, Middle Aged, Persons With Hearing Impairments/psychology, Speech Reception Threshold Test, Young Adult
18.
J Acoust Soc Am ; 135(4): EL179-85, 2014 Apr.
Article in English | MEDLINE | ID: mdl-25236151

ABSTRACT

The premise of this study is that models of hearing in general, and of individual hearing impairment in particular, can be improved by using speech test results as an integral part of the modeling process. A conceptual iterative procedure is presented which, for an individual, considers measures of sensitivity, cochlear compression, and phonetic confusions within the Diagnostic Rhyme Test (DRT) framework. The suggested approach is exemplified with data from three hearing-impaired listeners and with results obtained from models of each individual's hearing impairment. The work reveals that the DRT data provide valuable information about the damaged periphery and that the non-speech and speech data are complementary in obtaining the best model for an individual.


Subject(s)
Audiometry, Speech, Hearing Loss, Sensorineural/diagnosis, Hearing Loss, Sensorineural/psychology, Models, Psychological, Persons With Hearing Impairments/psychology, Psychoacoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adult, Humans, Middle Aged, Noise/adverse effects, Perceptual Masking, Predictive Value of Tests, Young Adult
19.
J Acoust Soc Am ; 135(1): 323-33, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24437772

ABSTRACT

The perceptual organization of two-tone sequences into auditory streams was investigated using a modeling framework consisting of an auditory pre-processing front end [Dau et al., J. Acoust. Soc. Am. 102, 2892-2905 (1997)] combined with a temporal coherence-analysis back end [Elhilali et al., Neuron 61, 317-329 (2009)]. Two experimental paradigms were considered: (i) stream segregation as a function of tone repetition time (TRT) and frequency separation (Δf) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present study supported the hypothesis that forward masking enhances the ability to perceptually segregate spectrally close tone sequences. Furthermore, the modeling suggested that effects of neural adaptation and processing through modulation-frequency-selective filters may enhance the sensitivity to onset asynchrony of spectral components, facilitating the listeners' ability to segregate temporally overlapping sounds into separate auditory objects. Overall, the modeling framework may be useful for studying the contributions of bottom-up auditory features to "primitive" grouping, also in more complex acoustic scenarios than those considered here.


Subject(s)
Auditory Pathways/physiology, Perceptual Masking, Pitch Perception, Time Perception, Acoustic Stimulation, Adaptation, Psychological, Adult, Audiometry, Computer Simulation, Humans, Models, Neurological, Pattern Recognition, Physiological, Psychoacoustics, Sound Spectrography, Time Factors, Young Adult
20.
J Acoust Soc Am ; 135(1): 407-20, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24437781

ABSTRACT

Studies investigating speech-on-speech masking effects commonly use closed-set speech materials such as the coordinate response measure [Bolia et al. (2000). J. Acoust. Soc. Am. 107, 1065-1066]. However, these studies typically result in very low (i.e., negative) speech recognition thresholds (SRTs) when the competing speech signals are spatially separated. To achieve higher SRTs that correspond more closely to natural communication situations, an open-set, low-context, multi-talker speech corpus was developed. Three sets of 268 unique Danish sentences were created, and each set was recorded with one of three professional female talkers. The intelligibility of each sentence in the presence of speech-shaped noise was measured. For each talker, 200 approximately equally intelligible sentences were then selected and systematically distributed into 10 test lists. Test list homogeneity was assessed in a setup with a frontal target sentence and two concurrent masker sentences at ±50° azimuth. For a group of 16 normal-hearing listeners and a group of 15 elderly (linearly aided) hearing-impaired listeners, overall SRTs of, respectively, +1.3 dB and +6.3 dB target-to-masker ratio were obtained. The new corpus was found to be very sensitive to inter-individual differences and produced consistent results across test lists. The corpus is publicly available.


Subject(s)
Audiometry, Speech/methods, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Age Factors, Aged, Aged, 80 and over, Auditory Threshold, Case-Control Studies, Denmark, Female, Humans, Male, Middle Aged, Psychoacoustics, Reproducibility of Results, Sound Spectrography, Young Adult