Results 1 - 20 of 88
1.
J Acoust Soc Am ; 155(4): 2482-2491, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38587430

ABSTRACT

Despite a vast literature on how speech intelligibility is affected by hearing loss and advanced age, remarkably little is known about the perception of talker-related information in these populations. Here, we assessed the ability of listeners to detect whether a change in talker occurred while listening to and identifying sentence-length sequences of words. Participants were recruited in four groups that differed in their age (younger/older) and hearing status (normal/impaired). The task was conducted in quiet or in a background of same-sex two-talker speech babble. We found that age and hearing loss had detrimental effects on talker change detection, in addition to their expected effects on word recognition. We also found subtle differences in the effects of age and hearing loss for trials in which the talker changed vs trials in which the talker did not change. These findings suggest that part of the difficulty encountered by older listeners, and by listeners with hearing loss, when communicating in group situations, may be due to a reduced ability to identify and discriminate between the participants in the conversation.


Subject(s)
Deafness; Hearing Loss; Humans; Hearing Loss/diagnosis; Speech Intelligibility
2.
J Acoust Soc Am ; 154(4): 2191-2202, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37815410

ABSTRACT

Psychophysical experiments explored how the repeated presentation of a context, consisting of an adaptor and a target, induces plasticity in the localization of an identical target presented alone on interleaved trials. The plasticity, and its time course, was examined both in a classroom and in an anechoic chamber. Adaptors and targets were 2 ms noise clicks and listeners were tasked with localizing the targets while ignoring the adaptors (when present). The context was either simple, consisting of a single-click adaptor and a target, or complex, containing either a single-click or an eight-click adaptor that varied from trial to trial. The adaptor was presented either from a frontal or a lateral location, fixed within a run. The presence of context caused responses to the isolated targets to be displaced up to 14° away from the adaptor location. This effect was stronger and slower if the context was complex, growing over the 5 min duration of the runs. Additionally, the simple context buildup had a slower onset in the classroom. Overall, the results illustrate that sound localization is subject to slow adaptive processes that depend on the spatial and temporal structure of the context and on the level of reverberation in the environment.


Subject(s)
Sound Localization; Sound Localization/physiology; Noise/adverse effects; Time Factors
3.
Proc Natl Acad Sci U S A ; 115(14): E3286-E3295, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29555752

ABSTRACT

Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.


Subject(s)
Attention/physiology; Auditory Perception/physiology; Auditory Threshold/physiology; Discrimination, Psychological; Hearing Loss, Sensorineural/physiopathology; Hearing Loss, Sensorineural/psychology; Space Perception/physiology; Adult; Case-Control Studies; Female; Humans; Male; Middle Aged; Models, Theoretical; Young Adult
4.
J Acoust Soc Am ; 150(2): 1311, 2021 08.
Article in English | MEDLINE | ID: mdl-34470281

ABSTRACT

Previous studies have shown that for high-rate click trains and low-frequency pure tones, interaural time differences (ITDs) at the onset of stimulus contribute most strongly to the overall lateralization percept (receive the largest perceptual weight). Previous studies have also shown that when these stimuli are modulated, ITDs during the rising portion of the modulation cycle receive increased perceptual weight. Baltzell, Cho, Swaminathan, and Best [(2020). J. Acoust. Soc. Am. 147, 3883-3894] measured perceptual weights for a pair of spoken words ("two" and "eight"), and found that word-initial phonemes receive larger weight than word-final phonemes, suggesting a "word-onset dominance" for speech. Generalizability of this conclusion was limited by a coarse temporal resolution and limited stimulus set. In the present study, temporal weighting functions (TWFs) were measured for four spoken words ("two," "eight," "six," and "nine"). Stimuli were partitioned into 30-ms bins, ITDs were applied independently to each bin, and lateralization judgements were obtained. TWFs were derived using a hierarchical regression model. Results suggest that "word-initial" onset dominance does not generalize across words and that TWFs depend in part on acoustic changes throughout the stimulus. Two model-based predictions were generated to account for observed TWFs, but neither could fully account for the perceptual data.


Subject(s)
Sound Localization; Acoustic Stimulation; Judgment; Speech
5.
J Acoust Soc Am ; 150(2): 1076, 2021 08.
Article in English | MEDLINE | ID: mdl-34470293

ABSTRACT

This study aimed at predicting individual differences in speech reception thresholds (SRTs) in the presence of symmetrically placed competing talkers for young listeners with sensorineural hearing loss. An existing binaural model incorporating the individual audiogram was revised to handle severe hearing losses by (a) taking as input the target speech level at SRT in a given condition and (b) introducing a floor in the model to limit extreme negative better-ear signal-to-noise ratios. The floor value was first set using SRTs measured with stationary and modulated noises. The model was then used to account for individual variations in SRTs found in two previously published data sets that used speech maskers. The model accounted well for the variation in SRTs across listeners with hearing loss, based solely on differences in audibility. When considering listeners with normal hearing, the model could predict the best SRTs, but not the poorer SRTs, suggesting that other factors limit performance when audibility (as measured with the audiogram) is not compromised.
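The floor mechanism in (b) lends itself to a short sketch. The function below computes per-band better-ear signal-to-noise ratios and clips them at a floor; the array shapes, band structure, and the -20 dB default are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def better_ear_snr(target_lr_db, masker_lr_db, floor_db=-20.0):
    """Per-band better-ear SNR with a floor limiting extreme negative
    values. A sketch of the revised binaural model's floor idea only;
    the paper set the floor value from SRTs measured in stationary and
    modulated noises."""
    target = np.asarray(target_lr_db, dtype=float)   # shape (2, n_bands): left/right levels in dB
    masker = np.asarray(masker_lr_db, dtype=float)
    snr = target - masker                            # per-ear, per-band SNR in dB
    best = np.max(snr, axis=0)                       # pick the better ear in each band
    return np.maximum(best, floor_db)                # apply the floor
```

For example, a band whose better-ear SNR is -30 dB would be limited to the floor value, preventing a few extreme bands from dominating the prediction.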


Subject(s)
Speech Intelligibility; Speech Perception; Auditory Threshold; Individuality; Noise/adverse effects; Speech Reception Threshold Test
6.
Proc Natl Acad Sci U S A ; 114(36): 9743-9748, 2017 09 05.
Article in English | MEDLINE | ID: mdl-28827336

ABSTRACT

Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects.


Subject(s)
Auditory Perception/physiology; Acoustic Stimulation; Adult; Attentional Bias/physiology; Auditory Cortex/physiology; Cues; Electroencephalography; Evoked Potentials/physiology; Female; Humans; Male; Models, Neurological; Sound Localization/physiology; Young Adult
7.
J Acoust Soc Am ; 147(3): 1469, 2020 03.
Article in English | MEDLINE | ID: mdl-32237797

ABSTRACT

Spatial perception is an important part of a listener's experience and ability to function in everyday environments. However, the current understanding of how well listeners can locate sounds is based on measurements made using relatively simple stimuli and tasks. Here the authors investigated sound localization in a complex and realistic environment for listeners with normal and impaired hearing. A reverberant room containing a background of multiple talkers was simulated and presented to listeners in a loudspeaker-based virtual sound environment. The target was a short speech stimulus presented at various azimuths and distances relative to the listener. To ensure that the target stimulus was detectable to the listeners with hearing loss, masked thresholds were first measured on an individual basis and used to set the target level. Despite this compensation, listeners with hearing loss were less accurate at locating the target, showing increased front-back confusion rates and higher root-mean-square errors. Poorer localization was associated with poorer masked thresholds and with more severe low-frequency hearing loss. Localization accuracy in the multitalker background was lower than in quiet and also declined for more distant targets. However, individual accuracy in noise and quiet was strongly correlated.


Subject(s)
Hearing Loss, Sensorineural; Hearing Loss; Sound Localization; Speech Perception; Hearing; Hearing Loss, Sensorineural/diagnosis; Hearing Tests; Humans; Speech
8.
J Acoust Soc Am ; 148(5): 3246, 2020 11.
Article in English | MEDLINE | ID: mdl-33261378

ABSTRACT

This work aims to predict speech intelligibility against harmonic maskers. Unlike noise maskers, harmonic maskers (including speech) have a harmonic structure that may allow for a release from masking based on fundamental frequency (F0). Mechanisms, such as spectral glimpsing and harmonic cancellation, have been proposed to explain F0 segregation, but their relative contributions and ability to predict behavioral data have not been explored. A speech intelligibility model was developed that includes both spectral glimpsing and harmonic cancellation. The model was used to fit the data of two experiments from Deroche, Culling, Chatterjee, and Limb [J. Acoust. Soc. Am. 135, 2873-2884 (2014)], in which speech reception thresholds were measured for stationary harmonic maskers varying in their F0 and degree of harmonicity. Key model parameters (jitter in the masker F0, shape of the cancellation filter, frequency limit for cancellation, and signal-to-noise ratio ceiling) were optimized by maximizing the correspondence between the predictions and data. The model was able to accurately describe the effects associated with varying the masker F0 and harmonicity. Across both experiments, the correlation between data and predictions was 0.99, and the mean and largest absolute prediction errors were lower than 0.5 and 1 dB, respectively.
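The harmonic-cancellation idea can be illustrated with a delay-and-subtract comb filter, which nulls energy at F0 and its harmonics. This is only a generic sketch of the mechanism; the paper treats the shape of the cancellation filter as a fitted model parameter.

```python
import numpy as np

def cancel_harmonics(x, fs, f0):
    """Delay-and-subtract comb filter: y[n] = x[n] - x[n - round(fs/f0)].
    Components at f0 and its integer multiples are delayed by exactly one
    period and cancel; energy between harmonics is largely preserved."""
    delay = int(round(fs / f0))   # one fundamental period in samples
    y = np.copy(x)
    y[delay:] -= x[:-delay]       # subtract the one-period-delayed signal
    return y
```

Applied to a masker with a steady F0, such a filter suppresses the harmonic masker energy while leaving an inharmonic target comparatively intact, which is the proposed source of F0-based masking release.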


Subject(s)
Speech Intelligibility; Speech Perception; Acoustic Stimulation; Auditory Threshold; Hearing; Noise/adverse effects; Perceptual Masking
9.
J Acoust Soc Am ; 147(2): EL144, 2020 02.
Article in English | MEDLINE | ID: mdl-32113285

ABSTRACT

This study tested the hypothesis that adding noise to a speech mixture may cause both energetic masking by obscuring parts of the target message and informational masking by impeding the segregation of competing voices. The stimulus was the combination of two talkers-one target and one masker-presented either in quiet or in noise. Target intelligibility was measured in this mixture and for conditions in which the speech was "glimpsed" in order to quantify the energetic masking present. The results suggested that the addition of background noise exacerbated informational masking, primarily by increasing the sparseness of the speech.


Subject(s)
Speech Perception; Speech; Noise/adverse effects; Perceptual Masking; Time Factors
10.
J Acoust Soc Am ; 147(3): 1648, 2020 03.
Article in English | MEDLINE | ID: mdl-32237827

ABSTRACT

Ideal time-frequency segregation (ITFS) is a signal processing technique that may be used to estimate the energetic and informational components of speech-on-speech masking. A core assumption of ITFS is that it roughly emulates the effects of energetic masking (EM) in a speech mixture. Thus, when speech identification thresholds are measured for ITFS-processed stimuli and compared to thresholds for unprocessed stimuli, the difference can be attributed to informational masking (IM). Interpreting this difference as a direct metric of IM, however, is complicated by the fine time-frequency (T-F) resolution typically used during ITFS, which may yield target "glimpses" that are too narrow/brief to be resolved by the ear in the mixture. Estimates of IM, therefore, may be inflated because the full effects of EM are not accounted for. Here, T-F resolution was varied during ITFS to determine if/how estimates of IM depend on processing resolution. Speech identification thresholds were measured for speech and noise maskers after ITFS. Reduced frequency resolution yielded poorer thresholds for both masker types. Reduced temporal resolution did so for noise maskers only. Results suggest that processing resolution strongly influences estimates of IM and implies that current approaches to predicting masked speech intelligibility should be modified to account for IM.
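The core ITFS step, retaining only time-frequency units where the target dominates the masker, can be sketched as an ideal binary mask computed over magnitude spectrograms. The 0 dB local criterion and the spectrogram inputs are generic assumptions; the study's manipulation was the T-F resolution at which such masks are computed.

```python
import numpy as np

def ideal_binary_mask(target_tf, masker_tf, lc_db=0.0):
    """Ideal binary mask: 1 where the target's local SNR exceeds the
    local criterion `lc_db`, else 0. `target_tf` and `masker_tf` are
    magnitude spectrograms of the same shape; multiplying the mixture
    spectrogram by this mask retains only the target 'glimpses'."""
    t = np.abs(target_tf) + 1e-12           # small offset avoids log(0)
    m = np.abs(masker_tf) + 1e-12
    local_snr_db = 20.0 * np.log10(t / m)   # per-unit SNR in dB
    return (local_snr_db > lc_db).astype(float)
```

Coarser T-F resolution corresponds to computing this mask over wider frequency bands or longer frames, which merges narrow glimpses into units the ear can plausibly resolve.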


Subject(s)
Speech Intelligibility; Speech Perception; Noise/adverse effects; Perceptual Masking; Speech
11.
J Acoust Soc Am ; 147(3): 1546, 2020 03.
Article in English | MEDLINE | ID: mdl-32237845

ABSTRACT

Listeners with sensorineural hearing loss routinely experience less spatial release from masking (SRM) in speech mixtures than listeners with normal hearing. Hearing-impaired listeners have also been shown to have degraded temporal fine structure (TFS) sensitivity, a consequence of which is degraded access to interaural time differences (ITDs) contained in the TFS. Since these "binaural TFS" cues are critical for spatial hearing, it has been hypothesized that degraded binaural TFS sensitivity accounts for the limited SRM experienced by hearing-impaired listeners. In this study, speech stimuli were noise-vocoded using carriers that were systematically decorrelated across the left and right ears, thus simulating degraded binaural TFS sensitivity. Both (1) ITD sensitivity in quiet and (2) SRM in speech mixtures spatialized using ITDs (or binaural release from masking; BRM) were measured as a function of TFS interaural decorrelation in young normal-hearing and hearing-impaired listeners. This allowed for the examination of the relationship between ITD sensitivity and BRM over a wide range of ITD thresholds. This paper found that, for a given ITD sensitivity, hearing-impaired listeners experienced less BRM than normal-hearing listeners, suggesting that binaural TFS sensitivity can account for only a modest portion of the BRM deficit in hearing-impaired listeners. However, substantial individual variability was observed.
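The carrier decorrelation can be illustrated with the standard two-noise mixing construction, in which the interaural correlation is set directly by a mixing coefficient. This is a generic sketch of the manipulation, not the study's vocoder implementation.

```python
import numpy as np

def decorrelated_carriers(n_samples, rho, rng=None):
    """Generate left/right noise carriers whose interaural correlation
    equals `rho` by mixing two independent Gaussian noises:
    right = rho*a + sqrt(1 - rho^2)*b. rho=1 gives identical carriers
    (intact binaural TFS); rho<1 simulates degraded TFS sensitivity."""
    rng = rng or np.random.default_rng(0)
    a = rng.standard_normal(n_samples)
    b = rng.standard_normal(n_samples)
    left = a
    right = rho * a + np.sqrt(1.0 - rho**2) * b
    return left, right
```

Sweeping `rho` from 1 toward 0 degrades the reliability of the ITD cues carried by the vocoder fine structure, which is the independent variable in the experiment described above.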


Subject(s)
Deafness; Hearing Loss, Sensorineural; Hearing Loss; Speech Perception; Auditory Threshold; Hearing; Hearing Loss, Sensorineural/diagnosis; Humans; Noise/adverse effects; Perceptual Masking; Speech
12.
J Acoust Soc Am ; 147(6): 3883, 2020 06.
Article in English | MEDLINE | ID: mdl-32611137

ABSTRACT

Numerous studies have demonstrated that the perceptual weighting of interaural time differences (ITDs) is non-uniform in time and frequency, leading to reports of spectral and temporal "dominance" regions. It is unclear, however, how these dominance regions apply to spectro-temporally complex stimuli such as speech. The authors report spectro-temporal weighting functions for ITDs in a pair of naturally spoken speech tokens ("two" and "eight"). Each speech token was composed of two phonemes, and was partitioned into eight frequency regions over two time bins (one time bin for each phoneme). To derive lateralization weights, ITDs for each time-frequency bin were drawn independently from a normal distribution with a mean of 0 and a standard deviation of 200 µs, and listeners were asked to indicate whether the speech token was presented from the left or right. ITD thresholds were also obtained for each of the 16 time-frequency bins in isolation. The results suggest that spectral dominance regions apply to speech, and that ITDs carried by phonemes in the first position of the syllable contribute more strongly to lateralization judgments than ITDs carried by phonemes in the second position. The results also show that lateralization judgments are partially accounted for by ITD sensitivity across time-frequency bins.
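The weight-derivation logic can be sketched by regressing left/right judgments on the per-bin ITDs. The study used a more elaborate hierarchical model, so the plain least-squares fit below is a simplified stand-in for illustration only.

```python
import numpy as np

def estimate_weights(itds, responses):
    """Estimate relative perceptual weights by regressing left/right
    judgments (coded -1/+1) on the per-trial, per-bin ITDs (in µs).
    Rows of `itds` are trials; columns are time-frequency bins. For
    zero-mean Gaussian ITD perturbations, the regression coefficients
    are proportional to the weights an ideal-observer analysis assigns
    to each bin."""
    X = np.asarray(itds, dtype=float)
    y = np.asarray(responses, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares coefficients
    return w / np.sum(np.abs(w))                # normalize to relative weights
```

A bin whose ITD strongly drives the lateralization judgment receives a large normalized weight; a bin the listener effectively ignores receives a weight near zero.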


Subject(s)
Sound Localization; Speech Perception; Acoustic Stimulation; Speech
13.
J Acoust Soc Am ; 147(3): 1562, 2020 03.
Article in English | MEDLINE | ID: mdl-32237858

ABSTRACT

To capture the demands of real-world listening, laboratory-based speech-in-noise tasks must better reflect the types of speech and environments listeners encounter in everyday life. This article reports the development of original sentence materials that were produced spontaneously with varying vocal efforts. These sentences were extracted from conversations between a talker pair (female/male) communicating in different realistic acoustic environments to elicit normal, raised and loud vocal efforts. In total, 384 sentences were extracted to provide four equivalent lists of 16 sentences at the three efforts for the two talkers. The sentences were presented to 32 young, normally hearing participants in stationary noise at five signal-to-noise ratios from -8 to 0 dB in 2 dB steps. Psychometric functions were fitted for each sentence, revealing an average 50% speech reception threshold (SRT50) of -5.2 dB, and an average slope of 17.2%/dB. Sentences were then level-normalised to adjust their individual SRT50 to the mean (-5.2 dB). The sentences may be combined with realistic background noise to provide an assessment method that better captures the perceptual demands of everyday communication.
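Fitting a psychometric function to obtain an SRT50 and slope can be sketched as follows. The logistic form and the brute-force grid fit are generic assumptions for illustration, not the authors' exact fitting procedure.

```python
import numpy as np

def fit_srt50(snrs_db, p_correct):
    """Grid-search fit of a logistic psychometric function
    p(snr) = 1 / (1 + exp(-k * (snr - srt50))) to proportion-correct
    data. Returns the 50% speech reception threshold (SRT50, in dB SNR)
    and the logistic steepness k (per dB)."""
    snrs = np.asarray(snrs_db, dtype=float)
    p = np.asarray(p_correct, dtype=float)
    best_sse, best_srt, best_k = np.inf, None, None
    for srt in np.arange(-10.0, 2.0, 0.05):          # candidate SRT50 values
        for k in np.arange(0.1, 3.0, 0.05):          # candidate steepness values
            pred = 1.0 / (1.0 + np.exp(-k * (snrs - srt)))
            sse = np.sum((pred - p) ** 2)
            if sse < best_sse:
                best_sse, best_srt, best_k = sse, srt, k
    return best_srt, best_k
```

Level-normalizing a sentence then amounts to shifting its presentation level by the difference between its fitted SRT50 and the list mean.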


Subject(s)
Speech Perception; Female; Hearing; Hearing Tests; Humans; Language; Male; Noise/adverse effects; Speech Reception Threshold Test
14.
J Acoust Soc Am ; 145(6): EL508, 2019 06.
Article in English | MEDLINE | ID: mdl-31255153

ABSTRACT

Sensitivity to interaural time differences (ITDs) was measured in two groups of listeners, one with normal hearing and one with sensorineural hearing loss. ITD detection thresholds were measured for pure tones and for speech (a single word), in quiet and in the presence of noise. It was predicted that effects of hearing loss would be reduced for speech as compared to tones due to the redundancy of information across frequency. Thresholds were better overall, and the effects of hearing loss less pronounced, for speech than for tones. There was no evidence that effects of hearing loss were exacerbated in noise.


Subject(s)
Auditory Perception/physiology; Deafness/physiopathology; Hearing Loss/physiopathology; Hearing/physiology; Acoustic Stimulation/methods; Adult; Auditory Threshold/physiology; Female; Hearing Loss, Sensorineural/diagnosis; Humans; Male; Middle Aged; Speech Perception/physiology
15.
J Acoust Soc Am ; 146(5): 3215, 2019 11.
Article in English | MEDLINE | ID: mdl-31795657

ABSTRACT

When a target talker speaks in the presence of competing talkers, the listener must not only segregate the voices but also understand the target message based on a limited set of spectrotemporal regions ("glimpses") in which the target voice dominates the acoustic mixture. Here, the hypothesis that a broad audible bandwidth is more critical for these sparse representations of speech than it is for intact speech is tested. Listeners with normal hearing were presented with sentences that were either intact, or progressively "glimpsed" according to a competing two-talker masker presented at various levels. This was achieved by using an ideal binary mask to exclude time-frequency units in the target that would be dominated by the masker in the natural mixture. In each glimpsed condition, speech intelligibility was measured for a range of low-pass conditions (cutoff frequencies from 500 to 8000 Hz). Intelligibility was poorer for sparser speech, and the bandwidth required for optimal intelligibility increased with the sparseness of the speech. The combined effects of glimpsing and bandwidth reduction were well captured by a simple metric based on the proportion of audible target glimpses retained. The findings may be relevant for understanding the impact of high-frequency hearing loss on everyday speech communication.
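A metric based on the proportion of audible target glimpses retained might take the following form. The specific formulation below, counting mask units whose frequency falls below the low-pass cutoff, is a guess at the paper's metric, offered purely for illustration.

```python
import numpy as np

def glimpse_proportion(mask, freqs_hz, cutoff_hz):
    """Proportion of target glimpses (mask == 1 time-frequency units)
    that remain audible after low-pass filtering at `cutoff_hz`.
    `mask` has shape (n_freqs, n_frames); `freqs_hz` gives the center
    frequency of each row."""
    mask = np.asarray(mask, dtype=bool)
    audible = np.asarray(freqs_hz)[:, None] <= cutoff_hz  # per-row audibility
    total = mask.sum()
    return float((mask & audible).sum() / total) if total else 0.0
```

Under this formulation, sparser speech concentrates its information in fewer glimpses, so losing the high-frequency glimpses to a low cutoff removes a larger share of the usable evidence, consistent with the bandwidth effect reported above.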

16.
J Acoust Soc Am ; 145(1): 440, 2019 01.
Article in English | MEDLINE | ID: mdl-30710924

ABSTRACT

The ability to identify the words spoken by one talker masked by two or four competing talkers was tested in young-adult listeners with sensorineural hearing loss (SNHL). In a reference/baseline condition, masking speech was colocated with target speech, target and masker talkers were female, and the masker was intelligible. Three comparison conditions included replacing female masker talkers with males, time-reversal of masker speech, and spatial separation of sources. All three variables produced significant release from masking. To emulate energetic masking (EM), stimuli were subjected to ideal time-frequency segregation retaining only the time-frequency units where target energy exceeded masker energy. Subjects were then tested with these resynthesized "glimpsed stimuli." For either two or four maskers, thresholds only varied about 3 dB across conditions suggesting that EM was roughly equal. Compared to normal-hearing listeners from an earlier study [Kidd, Mason, Swaminathan, Roverud, Clayton, and Best, J. Acoust. Soc. Am. 140, 132-144 (2016)], SNHL listeners demonstrated both greater energetic and informational masking as well as higher glimpsed thresholds. Individual differences were correlated across masking release conditions suggesting that listeners could be categorized according to their general ability to solve the task. Overall, both peripheral and central factors appear to contribute to the higher thresholds for SNHL listeners.


Subject(s)
Hearing Loss, Sensorineural/physiopathology; Speech Perception; Adolescent; Adult; Auditory Threshold; Female; Hearing Loss, Sensorineural/psychology; Humans; Male; Perceptual Masking
17.
Ear Hear ; 39(4): 756-769, 2018.
Article in English | MEDLINE | ID: mdl-29252977

ABSTRACT

OBJECTIVES: The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. DESIGN: Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and 30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. RESULTS: Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some individuals showed BEAM benefits relative to KEMAR. Under dynamic conditions, BEAM and BEAMAR performance dropped significantly immediately following a target location transition. However, performance recovered by the second word in the sequence and was sustained until the next transition. CONCLUSIONS: When performance was assessed using an auditory-visual word congruence task, the benefits of beamforming reported previously were generally preserved under dynamic conditions in which the target source could move unpredictably from one location to another (i.e., performance recovered rapidly following source transitions) while the observer steered the beamforming via eye gaze, for both young NH and young HI groups.


Subject(s)
Equipment Design; Fixation, Ocular; Hearing Aids; Hearing Loss/rehabilitation; Adolescent; Adult; Attention; Case-Control Studies; Female; Humans; Male; Spatial Processing; Speech Perception; Young Adult
18.
J Acoust Soc Am ; 144(5): 2896, 2018 11.
Article in English | MEDLINE | ID: mdl-30522291

ABSTRACT

Cubick and Dau [(2016). Acta Acust. Acust. 102, 547-557] showed that speech reception thresholds (SRTs) in noise, obtained with normal-hearing listeners, were significantly higher with hearing aids (HAs) than without. Some listeners reported a change in their spatial perception of the stimuli due to the HA processing, with auditory images often being broader and closer to the head or even internalized. The current study investigated whether worse speech intelligibility with HAs might be explained by distorted spatial perception and the resulting reduced ability to spatially segregate the target speech from the interferers. SRTs were measured in normal-hearing listeners with or without HAs in the presence of three interfering talkers or speech-shaped noises. Furthermore, listeners were asked to sketch their spatial perception of the acoustic scene. Consistent with the previous study, SRTs increased with HAs. Spatial release from masking was lower with HAs than without. The effects were similar for noise and speech maskers and appeared to be accounted for by changes to energetic masking. This interpretation was supported by results from a binaural speech intelligibility model. Even though the sketches indicated a change of spatial perception with HAs, no direct link between spatial perception and segregation of talkers could be shown.


Subject(s)
Auditory Perception/physiology; Hearing Aids/adverse effects; Hearing/physiology; Speech Intelligibility/physiology; Adult; Auditory Threshold; Australia/epidemiology; Female; Humans; Male; Noise; Perceptual Masking; Persons With Hearing Impairments/rehabilitation; Persons With Hearing Impairments/statistics & numerical data; Population Groups; Space Perception/physiology; Speech Perception/physiology
19.
J Acoust Soc Am ; 143(2): 1085, 2018 02.
Article in English | MEDLINE | ID: mdl-29495693

ABSTRACT

The ability to identify who is talking is an important aspect of communication in social situations and, while empirical data are limited, it is possible that a disruption to this ability contributes to the difficulties experienced by listeners with hearing loss. In this study, talker identification was examined under both quiet and masked conditions. Subjects were grouped by hearing status (normal hearing/sensorineural hearing loss) and age (younger/older adults). Listeners first learned to identify the voices of four same-sex talkers in quiet, and then talker identification was assessed (1) in quiet, (2) in speech-shaped, steady-state noise, and (3) in the presence of a single, unfamiliar same-sex talker. Both younger and older adults with hearing loss, as well as older adults with normal hearing, generally performed more poorly than younger adults with normal hearing, although large individual differences were observed in all conditions. Regression analyses indicated that both age and hearing loss were predictors of performance in quiet, and there was some evidence for an additional contribution of hearing loss in the presence of masking. These findings suggest that both hearing loss and age may affect the ability to identify talkers in "cocktail party" situations.


Subject(s)
Aging/psychology; Hearing Loss/psychology; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/psychology; Recognition, Psychology; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Adult; Age Factors; Aged; Aged, 80 and over; Audiometry, Speech; Boston; Female; Hearing; Hearing Loss/diagnosis; Hearing Loss/physiopathology; Humans; Male; Middle Aged; South Carolina; Young Adult
20.
Int J Audiol ; 57(3): 221-229, 2018 03.
Article in English | MEDLINE | ID: mdl-28826285

ABSTRACT

OBJECTIVE: The National Acoustic Laboratories Dynamic Conversations Test (NAL-DCT) is a new test of speech comprehension that incorporates a realistic environment and dynamic speech materials that capture certain features of everyday conversations. The goal of this study was to assess the suitability of the test for studying the consequences of hearing loss and amplification in older listeners. DESIGN: Unaided and aided comprehension scores were measured for single-, two- and three-talker passages, along with unaided and aided sentence recall. To characterise the relevant cognitive abilities of the group, measures of short-term working memory, verbal information-processing speed and reading comprehension speed were collected. STUDY SAMPLE: Participants were 41 older listeners with varying degrees of hearing loss. RESULTS: Performance on both the NAL-DCT and the sentence test was strongly driven by hearing loss, but performance on the NAL-DCT was additionally related to a composite cognitive deficit score. Benefits of amplification were measurable but influenced by individual test SNRs. CONCLUSIONS: The NAL-DCT is sensitive to the same factors as a traditional sentence recall test, but in addition is sensitive to the cognitive factors required for speech processing. The test shows promise as a tool for research concerned with real-world listening.


Subject(s)
Aging/psychology; Audiometry, Speech/methods; Hearing Loss, Sensorineural/diagnosis; Persons With Hearing Impairments/psychology; Speech Perception; Acoustic Stimulation; Age Factors; Aged; Aged, 80 and over; Cognition; Comprehension; Correction of Hearing Impairment/instrumentation; Female; Hearing; Hearing Aids; Hearing Loss, Sensorineural/physiopathology; Hearing Loss, Sensorineural/psychology; Hearing Loss, Sensorineural/rehabilitation; Humans; Male; Middle Aged; Noise/adverse effects; Perceptual Masking; Predictive Value of Tests; Reproducibility of Results; Severity of Illness Index; Speech Intelligibility