Results 1 - 20 of 56
1.
J Acoust Soc Am; 155(4): 2482-2491, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38587430

ABSTRACT

Despite a vast literature on how speech intelligibility is affected by hearing loss and advanced age, remarkably little is known about the perception of talker-related information in these populations. Here, we assessed the ability of listeners to detect whether a change in talker occurred while listening to and identifying sentence-length sequences of words. Participants were recruited in four groups that differed in their age (younger/older) and hearing status (normal/impaired). The task was conducted in quiet or in a background of same-sex two-talker speech babble. We found that age and hearing loss had detrimental effects on talker change detection, in addition to their expected effects on word recognition. We also found subtle differences in the effects of age and hearing loss for trials in which the talker changed vs trials in which the talker did not change. These findings suggest that part of the difficulty encountered by older listeners, and by listeners with hearing loss, when communicating in group situations, may be due to a reduced ability to identify and discriminate between the participants in the conversation.


Subject(s)
Deafness, Hearing Loss, Humans, Hearing Loss/diagnosis, Speech Intelligibility
2.
J Acoust Soc Am; 150(2): 1076, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34470293

ABSTRACT

This study aimed at predicting individual differences in speech reception thresholds (SRTs) in the presence of symmetrically placed competing talkers for young listeners with sensorineural hearing loss. An existing binaural model incorporating the individual audiogram was revised to handle severe hearing losses by (a) taking as input the target speech level at SRT in a given condition and (b) introducing a floor in the model to limit extreme negative better-ear signal-to-noise ratios. The floor value was first set using SRTs measured with stationary and modulated noises. The model was then used to account for individual variations in SRTs found in two previously published data sets that used speech maskers. The model accounted well for the variation in SRTs across listeners with hearing loss, based solely on differences in audibility. When considering listeners with normal hearing, the model could predict the best SRTs, but not the poorer SRTs, suggesting that other factors limit performance when audibility (as measured with the audiogram) is not compromised.
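The floor on the better-ear signal-to-noise ratio described above can be illustrated with a minimal sketch; the function name, the band-wise representation, and the example floor value are assumptions for illustration, not the fitted parameters of the published model.

```python
import numpy as np

def better_ear_snr_with_floor(snr_left_db, snr_right_db, floor_db=-20.0):
    """Per-band better-ear SNR limited from below by a floor.

    snr_left_db, snr_right_db: arrays of band-wise SNRs (dB) at the two ears.
    floor_db: hypothetical floor value; the value actually fitted in the
    study (from SRTs in stationary and modulated noise) is not given here.
    """
    better_ear = np.maximum(snr_left_db, snr_right_db)  # take the better ear per band
    return np.maximum(better_ear, floor_db)             # cap extreme negative SNRs

# Example: a band where both ears are near -30 dB SNR is limited to the floor.
print(better_ear_snr_with_floor(np.array([-5.0, -30.0]), np.array([-8.0, -25.0])))
```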


Subject(s)
Speech Intelligibility, Speech Perception, Auditory Threshold, Individuality, Noise/adverse effects, Speech Reception Threshold Test
3.
J Acoust Soc Am; 147(2): 798, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32113297

ABSTRACT

Negative masking (NM) is a ubiquitous finding in near-threshold psychophysics in which the detectability of a near-threshold signal improves when the signal is added to a copy of itself, i.e., a pedestal or masker. One interpretation of NM suggests that the pedestal acts as an informative cue, thereby reducing uncertainty and improving performance relative to detection in its absence. The purpose of this study was to test this hypothesis. Intensity discrimination thresholds were measured for 100-ms, 1000-Hz near-threshold tones. In the reference condition, thresholds were measured in quiet (no masker other than the pedestal). In comparison conditions, thresholds were measured in the presence of one of two additional maskers: a notched-noise masker or a random-frequency multitone masker. The additional maskers were intended to cause different amounts of uncertainty and, in turn, to differentially influence NM. The results were generally consistent with an uncertainty-based interpretation of NM: NM was found both in quiet and in notched noise, yet it was eliminated by the multitone masker. A competing interpretation of NM based on nonlinear transduction does not account for all of the results. Profile analysis may have been a factor in performance, which suggests that NM may be attributable to, or influenced by, multiple mechanisms.


Asunto(s)
Ruido , Enmascaramiento Perceptual , Umbral Auditivo , Ruido/efectos adversos , Psicofísica , Incertidumbre
4.
J Acoust Soc Am; 145(1): 440, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30710924

ABSTRACT

The ability to identify the words spoken by one talker masked by two or four competing talkers was tested in young-adult listeners with sensorineural hearing loss (SNHL). In a reference/baseline condition, masking speech was colocated with target speech, target and masker talkers were female, and the masker was intelligible. Three comparison conditions included replacing female masker talkers with males, time-reversal of masker speech, and spatial separation of sources. All three variables produced significant release from masking. To emulate energetic masking (EM), stimuli were subjected to ideal time-frequency segregation retaining only the time-frequency units where target energy exceeded masker energy. Subjects were then tested with these resynthesized "glimpsed stimuli." For either two or four maskers, thresholds only varied about 3 dB across conditions suggesting that EM was roughly equal. Compared to normal-hearing listeners from an earlier study [Kidd, Mason, Swaminathan, Roverud, Clayton, and Best, J. Acoust. Soc. Am. 140, 132-144 (2016)], SNHL listeners demonstrated both greater energetic and informational masking as well as higher glimpsed thresholds. Individual differences were correlated across masking release conditions suggesting that listeners could be categorized according to their general ability to solve the task. Overall, both peripheral and central factors appear to contribute to the higher thresholds for SNHL listeners.
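Ideal time-frequency segregation, as used above to emulate energetic masking, amounts to keeping only those time-frequency units of the mixture in which target energy dominates. The sketch below illustrates this under stated assumptions (STFT analysis, a 0-dB local criterion, equal-length target and masker, and the mixture formed as target plus masker); it is not the study's actual processing chain.

```python
import numpy as np
from scipy.signal import stft, istft

def glimpsed_stimulus(target, masker, fs, lc_db=0.0, nperseg=512):
    """Illustrative ideal time-frequency segregation (ITFS): keep only the
    time-frequency units of the mixture where target energy exceeds masker
    energy by a local criterion (lc_db), then resynthesize a 'glimpsed'
    waveform. target and masker are equal-length arrays; parameter values
    are assumptions, not those of the cited study."""
    _, _, T = stft(target, fs, nperseg=nperseg)
    _, _, M = stft(masker, fs, nperseg=nperseg)
    tmr_db = 20 * np.log10(np.abs(T) + 1e-12) - 20 * np.log10(np.abs(M) + 1e-12)
    mask = tmr_db > lc_db                              # ideal binary mask
    _, glimpsed = istft((T + M) * mask, fs, nperseg=nperseg)
    return glimpsed
```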


Asunto(s)
Pérdida Auditiva Sensorineural/fisiopatología , Percepción del Habla , Adolescente , Adulto , Umbral Auditivo , Femenino , Pérdida Auditiva Sensorineural/psicología , Humanos , Masculino , Enmascaramiento Perceptual
5.
J Acoust Soc Am; 143(2): 1085, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29495693

ABSTRACT

The ability to identify who is talking is an important aspect of communication in social situations and, while empirical data are limited, it is possible that a disruption to this ability contributes to the difficulties experienced by listeners with hearing loss. In this study, talker identification was examined under both quiet and masked conditions. Subjects were grouped by hearing status (normal hearing/sensorineural hearing loss) and age (younger/older adults). Listeners first learned to identify the voices of four same-sex talkers in quiet, and then talker identification was assessed (1) in quiet, (2) in speech-shaped, steady-state noise, and (3) in the presence of a single, unfamiliar same-sex talker. Both younger and older adults with hearing loss, as well as older adults with normal hearing, generally performed more poorly than younger adults with normal hearing, although large individual differences were observed in all conditions. Regression analyses indicated that both age and hearing loss were predictors of performance in quiet, and there was some evidence for an additional contribution of hearing loss in the presence of masking. These findings suggest that both hearing loss and age may affect the ability to identify talkers in "cocktail party" situations.


Asunto(s)
Envejecimiento/psicología , Pérdida Auditiva/psicología , Ruido/efectos adversos , Enmascaramiento Perceptual , Personas con Deficiencia Auditiva/psicología , Reconocimiento en Psicología , Acústica del Lenguaje , Percepción del Habla , Calidad de la Voz , Estimulación Acústica , Adulto , Factores de Edad , Anciano , Anciano de 80 o más Años , Audiometría del Habla , Boston , Femenino , Audición , Pérdida Auditiva/diagnóstico , Pérdida Auditiva/fisiopatología , Humanos , Masculino , Persona de Mediana Edad , South Carolina , Adulto Joven
6.
Ear Hear; 39(4): 756-769, 2018.
Article in English | MEDLINE | ID: mdl-29252977

ABSTRACT

OBJECTIVES: The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. DESIGN: Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and 30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. RESULTS: Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some individuals showed BEAM benefits relative to KEMAR. Under dynamic conditions, BEAM and BEAMAR performance dropped significantly immediately following a target location transition. However, performance recovered by the second word in the sequence and was sustained until the next transition. CONCLUSIONS: When performance was assessed using an auditory-visual word congruence task, the benefits of beamforming reported previously were generally preserved under dynamic conditions in which the target source could move unpredictably from one location to another (i.e., performance recovered rapidly following source transitions) while the observer steered the beamforming via eye gaze, for both young NH and young HI groups.


Asunto(s)
Diseño de Equipo , Fijación Ocular , Audífonos , Pérdida Auditiva/rehabilitación , Adolescente , Adulto , Atención , Estudios de Casos y Controles , Femenino , Humanos , Masculino , Procesamiento Espacial , Percepción del Habla , Adulto Joven
7.
J Acoust Soc Am; 142(4): EL369, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29092558

ABSTRACT

A hearing-aid strategy that combines a beamforming microphone array in the high frequencies with natural binaural signals in the low frequencies was examined. This strategy attempts to balance the benefits of beamforming (improved signal-to-noise ratio) with the benefits of binaural listening (spatial awareness and location-based segregation). The crossover frequency was varied from 200 to 1200 Hz, and performance was compared to full-spectrum binaural and beamformer conditions. Speech intelligibility in the presence of noise or competing speech was measured in listeners with and without hearing loss. Results showed that the optimal crossover frequency depended on the listener and the nature of the interference.
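The hybrid strategy described above can be sketched as a simple crossover: natural binaural signals below the crossover frequency, beamformer output above it. Filter type, order, zero-phase filtering, and a single-channel beamformer feed to both ears are illustrative assumptions, not the published signal chain.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def hybrid_output(binaural_lr, beamformer, fs, crossover_hz=800.0, order=4):
    """Combine natural binaural lows with beamformer highs at a crossover.

    binaural_lr: (2, n) array of left/right natural-binaural signals.
    beamformer:  (n,) single-channel beamformer output (fed to both ears).
    """
    sos_lp = butter(order, crossover_hz, btype='lowpass', fs=fs, output='sos')
    sos_hp = butter(order, crossover_hz, btype='highpass', fs=fs, output='sos')
    low = sosfiltfilt(sos_lp, binaural_lr, axis=-1)   # keeps binaural (spatial) cues
    high = sosfiltfilt(sos_hp, beamformer)            # keeps the SNR benefit
    return low + high[np.newaxis, :]                  # (2, n) hybrid ear signals
```

Varying `crossover_hz` (here between roughly 200 and 1200 Hz) trades spatial cues against SNR benefit, which is the manipulation examined in the abstract above.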


Asunto(s)
Corrección de Deficiencia Auditiva/instrumentación , Señales (Psicología) , Audífonos , Pérdida Auditiva Bilateral/rehabilitación , Pérdida Auditiva Sensorineural/rehabilitación , Personas con Deficiencia Auditiva/rehabilitación , Percepción del Habla , Estimulación Acústica , Adulto , Audiometría del Habla , Estudios de Casos y Controles , Comprensión , Diseño de Equipo , Femenino , Audición , Pérdida Auditiva Bilateral/diagnóstico , Pérdida Auditiva Bilateral/fisiopatología , Pérdida Auditiva Bilateral/psicología , Pérdida Auditiva Sensorineural/diagnóstico , Pérdida Auditiva Sensorineural/fisiopatología , Pérdida Auditiva Sensorineural/psicología , Humanos , Masculino , Ruido/efectos adversos , Enmascaramiento Perceptual , Personas con Deficiencia Auditiva/psicología , Procesamiento de Señales Asistido por Computador , Localización de Sonidos , Inteligibilidad del Habla
8.
Trends Hear; 21: 2331216517722304, 2017.
Article in English | MEDLINE | ID: mdl-28758567

ABSTRACT

The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of "real-world" communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question-answer pairs that were embedded in a mixture of competing conversations. The participant's task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a "dynamic" condition in which the target stimulus moved between three locations, and a "fixed" condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate.


Asunto(s)
Audífonos , Pérdida Auditiva Sensorineural/rehabilitación , Orientación Espacial , Diseño de Prótesis , Inteligibilidad del Habla , Adulto , Corrección de Deficiencia Auditiva/instrumentación , Femenino , Humanos , Masculino , Psicometría , Percepción del Habla , Adulto Joven
9.
J Acoust Soc Am; 141(1): 81, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28147587

ABSTRACT

In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the "symmetric masker" paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple "glimpsing" model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible.


Asunto(s)
Pérdida Auditiva Bilateral/psicología , Pérdida Auditiva Sensorineural/psicología , Localización de Sonidos , Inteligibilidad del Habla , Percepción del Habla , Estimulación Acústica , Adulto , Audiometría del Habla , Vías Auditivas/fisiopatología , Estudios de Casos y Controles , Señales (Psicología) , Femenino , Audición , Pérdida Auditiva Bilateral/diagnóstico , Pérdida Auditiva Bilateral/fisiopatología , Pérdida Auditiva Sensorineural/diagnóstico , Pérdida Auditiva Sensorineural/fisiopatología , Humanos , Masculino , Enmascaramiento Perceptual , Adulto Joven
10.
Trends Hear; 20, 2016 Nov 24.
Article in English | MEDLINE | ID: mdl-27888257

ABSTRACT

This report introduces a new speech task based on simple questions and answers. The task differs from a traditional sentence recall task in that it involves an element of comprehension and can be implemented in an ongoing fashion. It also contains two target items (the question and the answer) that may be associated with different voices and locations to create dynamic listening scenarios. A set of 227 questions was created, covering six broad categories (days of the week, months of the year, numbers, colors, opposites, and sizes). All questions and their one-word answers were spoken by 11 female and 11 male talkers. In this study, listeners were presented with question-answer pairs and asked to indicate whether the answer was true or false. Responses were given as simple button or key presses, which are quick to make and easy to score. Two preliminary experiments are presented that illustrate different ways of implementing the basic task. In the first experiment, question-answer pairs were presented in speech-shaped noise, and performance was compared across subjects, question categories, and time, to examine the different sources of variability. In the second experiment, sequences of question-answer pairs were presented amidst competing conversations in an ongoing, spatially dynamic listening scenario. Overall, the question-and-answer task appears to be feasible and could be implemented flexibly in a number of different ways.


Asunto(s)
Comprensión , Percepción del Habla , Habla , Femenino , Humanos , Masculino , Ruido
11.
J Acoust Soc Am; 140(1): 132, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27475139

ABSTRACT

Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from masking, included different-sex target and masker talkers, time-reversal of the masker speech, and spatial separation of the maskers from the target. Significant release from masking was found for all comparison conditions. To determine whether these reductions in masking could be attributed to differences in energetic masking, ideal time-frequency segregation (ITFS) processing was applied so that the time-frequency units where the masker energy dominated the target energy were removed. The remaining target-dominated "glimpses" were reassembled as the stimulus. Speech reception thresholds measured using these resynthesized ITFS-processed stimuli were the same for the reference and comparison conditions supporting the conclusion that the amount of energetic masking across conditions was the same. These results indicated that the large release from masking found under all comparison conditions was due primarily to a reduction in informational masking. Furthermore, the large individual differences observed generally were correlated across the three masking release conditions.


Asunto(s)
Enmascaramiento Perceptual , Inteligibilidad del Habla/fisiología , Percepción del Habla/fisiología , Adulto , Umbral Auditivo/fisiología , Femenino , Audición , Humanos , Masculino , Factores Sexuales , Habla , Factores de Tiempo , Adulto Joven
12.
J Neurosci; 36(31): 8250-7, 2016 Aug 03.
Article in English | MEDLINE | ID: mdl-27488643

ABSTRACT

While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called "cocktail party" problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intelligibility, an effect known as spatial release from masking (SRM). This study assessed the contributions of two stimulus properties that vary with separation of sound sources, binaural envelope (ENV) and temporal fine structure (TFS), to SRM in normal-hearing (NH) human listeners. Target speech was presented from the front and speech maskers were either colocated with or symmetrically separated from the target in azimuth. The target and maskers were presented either as natural speech or as "noise-vocoded" speech in which the intelligibility was conveyed only by the speech ENVs from several frequency bands; the speech TFS within each band was replaced with noise carriers. The experiments were designed to preserve the spatial cues in the speech ENVs while retaining/eliminating them from the TFS. This was achieved by using the same/different noise carriers in the two ears. A phenomenological auditory-nerve model was used to verify that the interaural correlations in TFS differed across conditions, whereas the ENVs retained a high degree of correlation, as intended. Overall, the results from this study revealed that binaural TFS cues, especially for frequency regions below 1500 Hz, are critical for achieving SRM in NH listeners. Potential implications for studying SRM in hearing-impaired listeners are discussed. SIGNIFICANCE STATEMENT: Acoustic signals received by the auditory system pass first through an array of physiologically based band-pass filters. Conceptually, at the output of each filter, there are two principal forms of temporal information: slowly varying fluctuations in the envelope (ENV) and rapidly varying fluctuations in the temporal fine structure (TFS). The importance of these two types of information in everyday listening (e.g., conversing in a noisy social situation; the "cocktail-party" problem) has not been established. This study assessed the contributions of binaural ENV and TFS cues for understanding speech in multiple-talker situations. Results suggest that, whereas the ENV cues are important for speech intelligibility, binaural TFS cues are critical for perceptually segregating the different talkers and thus for solving the cocktail party problem.
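The noise-vocoding manipulation described above (preserving band envelopes while replacing the temporal fine structure with noise carriers) can be sketched as follows. Band edges, filter orders, and envelope smoothing are illustrative assumptions rather than the study's parameters; using the same carrier at both ears keeps the carrier TFS interaurally correlated, while independent carriers remove that correlation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, edges_hz, carrier):
    """Sketch of a noise vocoder: impose the band envelopes (ENV) of x on a
    noise carrier, discarding the original temporal fine structure (TFS).

    x: input speech (float array); carrier: noise of the same length as x;
    edges_hz: band-edge frequencies, e.g. [100, 400, 1000, 2500, 6000].
    """
    out = np.zeros_like(x)
    sos_env = butter(2, 50.0, btype='lowpass', fs=fs, output='sos')  # envelope smoothing
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        sos_band = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        env = np.abs(hilbert(sosfiltfilt(sos_band, x)))      # band envelope of the speech
        env = sosfiltfilt(sos_env, env)
        out += env * sosfiltfilt(sos_band, carrier)          # envelope imposed on noise TFS
    return out
```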


Asunto(s)
Estimulación Acústica/métodos , Patrones de Reconocimiento Fisiológico/fisiología , Recreación , Localización de Sonidos/fisiología , Percepción del Habla/fisiología , Aglomeración , Señales (Psicología) , Femenino , Humanos , Masculino , Ruido , Análisis y Desempeño de Tareas , Adulto Joven
13.
Trends Hear; 20, 2016 Apr 08.
Article in English | MEDLINE | ID: mdl-27059627

ABSTRACT

Individuals with sensorineural hearing loss (SNHL) often experience more difficulty with listening in multisource environments than do normal-hearing (NH) listeners. While the peripheral effects of sensorineural hearing loss certainly contribute to this difficulty, differences in central processing of auditory information may also contribute. To explore this issue, it is important to account for peripheral differences between NH and these hearing-impaired (HI) listeners so that central effects in multisource listening can be examined. In the present study, NH and HI listeners performed a tonal pattern identification task at two distant center frequencies (CFs), 850 and 3500 Hz. In an attempt to control for differences in the peripheral representations of the stimuli, the patterns were presented at the same sensation level (15 dB SL), and the frequency deviation of the tones comprising the patterns was adjusted to obtain equal quiet pattern identification performance across all listeners at both CFs. Tonal sequences were then presented at both CFs simultaneously (informational masking conditions), and listeners were asked either to selectively attend to a source (CF) or to divide attention between CFs and identify the pattern at a CF designated after each trial. There were large differences between groups in the frequency deviations necessary to perform the pattern identification task. After compensating for these differences, there were small differences between NH and HI listeners in the informational masking conditions. HI listeners showed slightly greater performance asymmetry between the low and high CFs than did NH listeners, possibly due to central differences in frequency weighting between groups.


Asunto(s)
Trastornos de la Audición/diagnóstico , Patrones de Reconocimiento Fisiológico , Enmascaramiento Perceptual , Personas con Deficiencia Auditiva/psicología , Percepción del Habla , Encuestas y Cuestionarios , Estimulación Acústica , Adulto , Anciano , Anciano de 80 o más Años , Audiometría , Estudios de Casos y Controles , Evaluación de la Discapacidad , Femenino , Audífonos , Trastornos de la Audición/psicología , Trastornos de la Audición/terapia , Humanos , Masculino , Persona de Mediana Edad , Variaciones Dependientes del Observador , Personas con Deficiencia Auditiva/rehabilitación , Valor Predictivo de las Pruebas , Reproducibilidad de los Resultados , Adulto Joven
14.
Adv Exp Med Biol; 894: 83-91, 2016.
Article in English | MEDLINE | ID: mdl-27080649

ABSTRACT

Hearing loss has been shown to reduce speech understanding in spatialized multitalker listening situations, leading to the common belief that spatial processing is disrupted by hearing loss. This paper describes related studies from three laboratories that explored the contribution of reduced target audibility to this deficit. All studies used a stimulus configuration in which a speech target presented from the front was masked by speech maskers presented symmetrically from the sides. Together these studies highlight the importance of adequate stimulus audibility for optimal performance in spatialized speech mixtures and suggest that reduced access to target speech information might explain a substantial portion of the "spatial" deficit observed in listeners with hearing loss.


Asunto(s)
Pérdida Auditiva/fisiopatología , Inteligibilidad del Habla , Estimulación Acústica , Adulto , Anciano , Umbral Auditivo , Humanos , Enmascaramiento Perceptual
16.
Trends Hear; 19, 2015 Jun 30.
Article in English | MEDLINE | ID: mdl-26126896

ABSTRACT

The benefit provided to listeners with sensorineural hearing loss (SNHL) by an acoustic beamforming microphone array was determined in a speech-on-speech masking experiment. Normal-hearing controls were tested as well. For the SNHL listeners, prescription-determined gain was applied to the stimuli, and performance using the beamformer was compared with that obtained using bilateral amplification. The listener identified speech from a target talker located straight ahead (0° azimuth) in the presence of four competing talkers that were either colocated with, or spatially separated from, the target. The stimuli were spatialized using measured impulse responses and presented via earphones. In the spatially separated masker conditions, the four maskers were arranged symmetrically around the target at ±15° and ±30° or at ±45° and ±90°. Results revealed that masked speech reception thresholds for spatially separated maskers were higher (poorer) on average for the SNHL than for the normal-hearing listeners. For most SNHL listeners in the wider masker separation condition, lower thresholds were obtained through the microphone array than through bilateral amplification. Large intersubject differences were found in both listener groups. The best masked speech reception thresholds overall were found for a hybrid condition that combined natural and beamforming listening in order to preserve localization for broadband sources.
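The headphone spatialization mentioned above (rendering each source by convolution with measured impulse responses) reduces to a short sketch. The function and variable names are hypothetical, and prescription gain and the beamformer processing are outside its scope.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(source, ir_left, ir_right):
    """Render a dry source at a measured position by convolving it with the
    left/right impulse responses for that position; returns a (2, m) array."""
    return np.stack([fftconvolve(source, ir_left), fftconvolve(source, ir_right)])

# A multi-talker scene is then the sum of each talker's signal convolved with
# the impulse-response pair measured for its azimuth (e.g., ±15°, ±30°).
```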


Asunto(s)
Acústica , Audífonos , Pérdida Auditiva Sensorineural/rehabilitación , Ruido/efectos adversos , Enmascaramiento Perceptual , Personas con Deficiencia Auditiva/rehabilitación , Percepción del Habla , Estimulación Acústica , Adulto , Umbral Auditivo , Estudios de Casos y Controles , Señales (Psicología) , Diseño de Equipo , Pérdida Auditiva Sensorineural/diagnóstico , Pérdida Auditiva Sensorineural/psicología , Humanos , Personas con Deficiencia Auditiva/psicología , Psicoacústica , Reconocimiento en Psicología , Procesamiento de Señales Asistido por Computador , Espectrografía del Sonido , Inteligibilidad del Habla , Prueba del Umbral de Recepción del Habla , Adulto Joven
17.
Sci Rep; 5: 11628, 2015 Jun 26.
Article in English | MEDLINE | ID: mdl-26112910

ABSTRACT

Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of 'informational masking' (IM) while keeping the amount of 'energetic masking' (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker "cocktail party" environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced "speech-in-noise" perception by musicians.


Asunto(s)
Audición/fisiología , Música , Enmascaramiento Perceptual/fisiología , Percepción del Habla/fisiología , Estimulación Acústica , Adulto , Análisis de Varianza , Umbral Auditivo/fisiología , Humanos , Individualidad , Ruido , Psicoacústica , Desempeño Psicomotor/fisiología , Distribución Aleatoria , Semántica , Adulto Joven
18.
J Acoust Soc Am; 137(2): EL213-9, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25698053

ABSTRACT

When competing speech sounds are spatially separated, listeners can make use of the ear with the better target-to-masker ratio. Recent studies showed that listeners with normal hearing are able to efficiently make use of this "better-ear," even when it alternates between left and right ears at different times in different frequency bands, which may contribute to the ability to listen in spatialized speech mixtures. In the present study, better-ear glimpsing in listeners with bilateral sensorineural hearing impairment, who perform poorly in spatialized speech mixtures, was investigated. The results suggest that this deficit is not related to better-ear glimpsing.
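Better-ear glimpsing, as studied above, can be illustrated by computing, for each time-frequency unit, which ear offers the higher target-to-masker ratio (TMR), so that the "better ear" may alternate across frequency bands and over time. The function names and STFT settings below are assumptions for illustration, not the study's analysis.

```python
import numpy as np
from scipy.signal import stft

def better_ear_tmr(target_lr, masker_lr, fs, nperseg=512):
    """For each time-frequency unit, return which ear has the higher
    target-to-masker ratio and that better-ear TMR (dB).

    target_lr, masker_lr: (2, n) arrays of left/right target and masker signals.
    """
    def power(sig):
        _, _, Z = stft(sig, fs, nperseg=nperseg, axis=-1)
        return np.abs(Z) ** 2                      # shape (2, n_freq, n_frames)
    tmr = 10 * np.log10(power(target_lr) + 1e-12) - 10 * np.log10(power(masker_lr) + 1e-12)
    best_ear = np.argmax(tmr, axis=0)              # 0 = left, 1 = right, per T-F unit
    best_tmr = np.max(tmr, axis=0)                 # better-ear TMR per T-F unit
    return best_ear, best_tmr
```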


Asunto(s)
Señales (Psicología) , Pérdida Auditiva Bilateral/psicología , Pérdida Auditiva Sensorineural/psicología , Ruido/efectos adversos , Enmascaramiento Perceptual , Personas con Deficiencia Auditiva/psicología , Percepción del Habla , Estimulación Acústica , Adaptación Psicológica , Adolescente , Adulto , Audiometría del Habla , Umbral Auditivo , Estudios de Casos y Controles , Femenino , Pérdida Auditiva Bilateral/diagnóstico , Pérdida Auditiva Sensorineural/diagnóstico , Humanos , Masculino , Psicometría , Inteligibilidad del Habla , Adulto Joven
19.
J Acoust Soc Am; 135(2): 766-77, 2014 Feb.
Article in English | MEDLINE | ID: mdl-25234885

ABSTRACT

This study examined the ability of listeners to utilize syntactic structure to extract a target stream of speech from among competing sounds. Target talkers were identified by voice or location, which was held constant throughout a test utterance, and paired with correct or incorrect (random word order) target sentence syntax. Both voice and location provided reliable cues for identifying target speech even when other features varied unpredictably. The target sentences were masked either by predominantly energetic maskers (noise bursts) or by predominantly informational maskers (similar speech in random word order). When the maskers were noise bursts, target sentence syntax had relatively minor effects on identification performance. However, when the maskers were other talkers, correct target sentence syntax resulted in significantly better speech identification performance than incorrect syntax. Furthermore, conformance to correct syntax alone was sufficient to accurately identify the target speech. The results were interpreted as supporting the idea that the predictability of the elements comprising streams of speech, as manifested by syntactic structure, is an important factor in binding words together into coherent streams. Furthermore, these findings suggest that predictability is particularly important for maintaining the coherence of an auditory stream over time under conditions high in informational masking.


Asunto(s)
Señales (Psicología) , Ruido/efectos adversos , Enmascaramiento Perceptual , Fonética , Acústica del Lenguaje , Percepción del Habla , Calidad de la Voz , Estimulación Acústica , Audiometría del Habla , Umbral Auditivo , Humanos , Reconocimiento en Psicología , Inteligibilidad del Habla , Factores de Tiempo
20.
J Acoust Soc Am; 134(2): 1215-31, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23927120

ABSTRACT

This study examined the ability of human listeners to detect the presence and judge the strength of a statistical dependency among the elements comprising sequences of sounds. The statistical dependency was imposed by specifying transition matrices that determined the likelihood of occurrence of the sound elements. Markov chains were constructed from these transition matrices having states that were pure tones/noise bursts that varied along the stimulus dimensions of frequency and/or interaural time difference. Listeners reliably detected the presence of a statistical dependency in sequences of sounds varying along these stimulus dimensions. Furthermore, listeners were able to discriminate the relative strength of the dependency in pairs of successive sound sequences. Random variation along an irrelevant stimulus dimension had small but significant adverse effects on performance. A much greater decrement in performance was found when the sound sequences were concurrent. Likelihood ratios were computed based on the transition matrices to specify Ideal Observer performance for the experimental conditions. Preliminary modeling efforts were made based on degradations of Ideal Observer performance intended to represent human observer limitations. This experimental approach appears to be useful for examining auditory "stream" formation and maintenance over time based on the predictability of the constituent sound elements.
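The sequence generation and ideal-observer statistic described above can be sketched with a small example: a Markov chain drawn from a transition matrix, and a log-likelihood ratio comparing "dependency present" against "successive elements independent and equally likely." The exact formulation used in the study may differ; the 2-state matrix here is hypothetical.

```python
import numpy as np

def generate_sequence(transition, length, rng):
    """Generate a sequence of element indices from a Markov transition matrix
    (rows = current state, columns = next-state probabilities)."""
    n = transition.shape[0]
    seq = [rng.integers(n)]
    for _ in range(length - 1):
        seq.append(rng.choice(n, p=transition[seq[-1]]))
    return np.array(seq)

def log_likelihood_ratio(seq, transition):
    """Log-likelihood ratio for 'statistical dependency present' (Markov chain
    with the given transition matrix) versus 'no dependency' (successive
    elements equally likely)."""
    n = transition.shape[0]
    ll_markov = np.sum(np.log(transition[seq[:-1], seq[1:]]))
    ll_independent = (len(seq) - 1) * np.log(1.0 / n)
    return ll_markov - ll_independent

rng = np.random.default_rng(0)
T = np.array([[0.8, 0.2], [0.2, 0.8]])   # hypothetical 2-state transition matrix
s = generate_sequence(T, 20, rng)
print(log_likelihood_ratio(s, T))        # larger values favor 'dependency present'
```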


Asunto(s)
Percepción Auditiva , Modelos Estadísticos , Enmascaramiento Perceptual , Detección de Señal Psicológica , Estimulación Acústica , Análisis de Varianza , Audiometría de Tonos Puros , Humanos , Juicio , Funciones de Verosimilitud , Cadenas de Markov , Ruido/efectos adversos , Psicoacústica , Factores de Tiempo