Results 1 - 20 of 30
1.
Ear Hear ; 44(2): 318-329, 2023.
Article in English | MEDLINE | ID: mdl-36395512

ABSTRACT

OBJECTIVES: Some cochlear implant (CI) users are fitted with a CI in each ear ("bilateral"), while others have a CI in one ear and a hearing aid in the other ("bimodal"). Presently, evaluation of the benefits of bilateral or bimodal CI fitting does not take into account the integration of frequency information across the ears. This study tests the hypothesis that CI listeners, especially bimodal CI users, with a more precise integration of frequency information across ears ("sharp binaural pitch fusion") will derive greater benefit from voice gender differences in a multi-talker listening environment. DESIGN: Twelve bimodal CI users and twelve bilateral CI users participated. First, binaural pitch fusion ranges were measured using the simultaneous, dichotic presentation of reference and comparison stimuli (electric pulse trains for CI ears and acoustic tones for HA ears) in opposite ears, with reference stimuli fixed and comparison stimuli varied in frequency/electrode to find the range perceived as a single sound. Direct electrical stimulation was used in implanted ears through the research interface, which allowed selective stimulation of one electrode at a time, and acoustic stimulation was used in the non-implanted ears through the headphone. Second, speech-on-speech masking performance was measured to estimate masking release by voice gender difference between target and maskers (VGRM). The VGRM was calculated as the difference in speech recognition thresholds of target sounds in the presence of same-gender or different-gender maskers. RESULTS: Voice gender differences between target and masker talkers improved speech recognition performance for the bimodal CI group, but not the bilateral CI group. The bimodal CI users who benefited the most from voice gender differences were those who had the narrowest range of acoustic frequencies that fused into a single sound with stimulation from a single electrode from the CI in the opposite ear. 
There was no similar voice gender difference benefit of narrow binaural fusion range for the bilateral CI users. CONCLUSIONS: The findings suggest that broad binaural fusion reduces the acoustical information available for differentiating individual talkers in bimodal CI users, but not for bilateral CI users. In addition, for bimodal CI users with narrow binaural fusion who benefit from voice gender differences, bilateral implantation could lead to a loss of that benefit and impair their ability to selectively attend to one talker in the presence of multiple competing talkers. The results suggest that binaural pitch fusion, along with an assessment of residual hearing and other factors, could be important for assessing bimodal and bilateral CI users.
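The VGRM measure described in the DESIGN section is simple threshold arithmetic: the difference between speech recognition thresholds with same-gender and different-gender maskers. A minimal sketch (the function name and example thresholds are hypothetical, not taken from the study):

```python
def masking_release(srt_same_gender_db, srt_diff_gender_db):
    """Voice gender release from masking (VGRM): the drop in speech
    recognition threshold when maskers differ in gender from the target.
    Positive values mean the gender difference helped."""
    return srt_same_gender_db - srt_diff_gender_db

# Hypothetical example thresholds (dB target-to-masker ratio):
vgrm = masking_release(srt_same_gender_db=2.0, srt_diff_gender_db=-4.0)
# vgrm == 6.0 dB of release
```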


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Aids , Speech Perception , Humans , Sex Factors
2.
J Acoust Soc Am ; 154(6): 3799-3809, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38109404

ABSTRACT

Computational models are used to predict the performance of human listeners for carefully specified signal and noise conditions. However, there may be substantial discrepancies between the conditions under which listeners are tested and those used for model predictions. Thus, models may predict better performance than exhibited by the listeners, or they may "fail" to capture the ability of the listener to respond to subtle stimulus conditions. This study tested a computational model devised to predict a listener's ability to detect an aircraft in various soundscapes. The model and listeners processed the same sound recordings under carefully specified testing conditions. Details of signal and masker calibration were carefully matched, and the model was tested using the same adaptive tracking paradigm. Perhaps most importantly, the behavioral results were not available to the modeler before the model predictions were presented. Recordings from three different aircraft were used as the target signals. Maskers were derived from recordings obtained at nine locations ranging from very quiet rural environments to suburban and urban settings. Overall, with a few exceptions, model predictions matched the performance of the listeners very well. Discussion focuses on those differences and possible reasons for their occurrence.
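The abstract notes that the model was run through the same adaptive tracking paradigm as the listeners. The exact rule is not specified here; a common choice in detection experiments is a 2-down/1-up staircase, sketched below as an illustration (the deterministic `respond` listener is a toy, not the study's model):

```python
def two_down_one_up(respond, start_db=60.0, step_db=2.0, n_reversals=8):
    """Minimal 2-down/1-up adaptive track; converges near the 70.7%-correct
    point (Levitt rule). `respond(level)` returns True for a correct trial."""
    level, streak, direction = start_db, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:              # two correct in a row -> harder
                streak = 0
                if direction == +1:      # was moving up: a reversal
                    reversals.append(level)
                direction = -1
                level -= step_db
        else:                            # any miss -> easier
            streak = 0
            if direction == -1:          # was moving down: a reversal
                reversals.append(level)
            direction = +1
            level += step_db
    # Threshold estimate: mean of the later reversals
    return sum(reversals[2:]) / len(reversals[2:])

# Deterministic toy listener that detects any level above 40 dB:
estimate = two_down_one_up(lambda level: level > 40.0)
```

With this toy listener the track descends to the 40 dB boundary and oscillates around it, so the reversal average lands just above 40 dB.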


Subject(s)
Perceptual Masking , Speech Perception , Humans , Auditory Threshold , Noise , Aircraft , Computer Simulation
3.
J Neurophysiol ; 123(3): 936-944, 2020 03 01.
Article in English | MEDLINE | ID: mdl-31940239

ABSTRACT

Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation. NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood.
Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
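For context, the textbook maximum-likelihood (inverse-variance) benchmark for combining two unisensory thresholds can be computed directly from the reported values. Note this is only a comparison point, not the paper's frequency-dependent weighting model:

```python
def mle_combined_threshold(t_a, t_v):
    """Textbook maximum-likelihood cue-combination prediction: with
    thresholds proportional to cue noise, 1/T_c^2 = 1/T_a^2 + 1/T_v^2."""
    return (t_a ** -2 + t_v ** -2) ** -0.5

# Reported unisensory thresholds at 0.1 Hz:
# auditory 0.54 deg/s, vestibular 2.00 deg/s.
pred = mle_combined_threshold(0.54, 2.00)
# The reported multisensory threshold at 0.1 Hz (0.39 deg/s) is lower than
# this simple prediction (~0.52 deg/s), which is part of why the paper uses
# a frequency-dependent combination of displacement/velocity/acceleration cues.
```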


Subject(s)
Auditory Perception/physiology , Motion Perception/physiology , Proprioception/physiology , Sensory Thresholds/physiology , Sound Localization/physiology , Space Perception/physiology , Adult , Female , Humans , Male , Young Adult
4.
Ear Hear ; 41(6): 1450-1460, 2020.
Article in English | MEDLINE | ID: mdl-33136622

ABSTRACT

OBJECTIVES: Individuals who use hearing aids (HAs) or cochlear implants (CIs) can experience broad binaural pitch fusion, such that sounds differing in pitch by as much as 3 to 4 octaves are perceptually integrated across ears. Previously, it was shown in HA users that the fused pitch is a weighted average of the two monaural pitches, ranging from equal weighting to dominance by the lower pitch. The goal of this study was to systematically measure the fused pitches in adult CI users, and determine whether CI users experience similar pitch averaging effects as observed in HA users. DESIGN: Twelve adult CI users (Cochlear Ltd, Sydney, Australia) participated in this study: six bimodal CI users, who wear a CI with a contralateral HA, and six bilateral CI users. Stimuli to HA ears were acoustic pure tones, and stimuli to CI ears were biphasic pulse trains delivered to individual electrodes. Fusion ranges, the ranges of frequencies/electrodes in the comparison ear that were fused with a single electrode (electrode 22, 18, 12, or 6) in the reference ear, were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus. Once the fusion ranges were measured, the fused binaural pitch of a reference-pair stimulus combination was measured by finding a pitch match to monaural comparison stimuli presented to the paired stimulus ear. RESULTS: Fusion pitch weighting in CI users varied depending on the pitch difference of the reference-pair stimulus combination, with equal pitch averaging occurring for stimuli closer in pitch and lower pitch dominance occurring for stimuli farther apart in pitch. The averaging region was typically 0.5 to 2.3 octaves around the reference for bimodal CI users and 0.4 to 1.5 octaves for bilateral CI users. In some cases, a bias in the averaging region was observed toward the ear with greater stimulus variability. 
CONCLUSIONS: Fusion pitch weighting effects in CI users were similar to those observed previously in HA users. However, CI users showed greater inter-subject variability in both pitch averaging ranges and bias effects. These findings suggest that binaural pitch averaging could be a common underlying mechanism in hearing-impaired listeners.


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Aids , Hearing Loss , Adult , Australia , Humans
5.
Ear Hear ; 41(6): 1772-1774, 2020.
Article in English | MEDLINE | ID: mdl-33136650

ABSTRACT

OBJECTIVES: Vestibular reflexes have traditionally formed the cornerstone of vestibular evaluation, but perceptual tests have recently gained attention for use in research studies and potential clinical applications. However, the unknown reliability of perceptual thresholds limits their current importance. This is addressed here by establishing the test-retest reliability of vestibular perceptual testing. DESIGN: Perceptual detection thresholds to earth-vertical, yaw-axis rotations were collected in 15 young healthy people. Participants were tested at two time intervals (baseline, 5 to 14 days later) using an adaptive psychophysical procedure. RESULTS: Thresholds to 1 Hz rotations ranged from 0.69 to 2.99°/s (mean: 1.49°/s; SD: 0.63). They demonstrated an excellent intraclass correlation (0.92; 95% confidence interval: 0.77 to 0.97) with a minimum detectable difference of 0.45°/s. CONCLUSIONS: The excellent test-retest reliability of perceptual vestibular testing supports its use as a research tool and motivates further exploration for its use as a novel clinical technique.
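The reported minimum detectable difference follows from standard test-retest reliability arithmetic (SEM = SD·sqrt(1−ICC); MDD = 1.96·sqrt(2)·SEM). A sketch using the rounded values above, which lands close to, but not exactly on, the reported 0.45°/s:

```python
import math

def minimum_detectable_difference(sd, icc, z=1.96):
    """Standard reliability arithmetic: the standard error of measurement
    is SD*sqrt(1-ICC), and the 95% minimum detectable difference is
    z * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Using the reported values (SD = 0.63 deg/s, ICC = 0.92):
mdd = minimum_detectable_difference(0.63, 0.92)
# ~0.49 deg/s with these rounded inputs; the paper reports 0.45 deg/s,
# presumably from unrounded estimates.
```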


Subject(s)
Vestibule, Labyrinth , Humans , Reflex, Vestibulo-Ocular , Reproducibility of Results
6.
Ear Hear ; 41(6): 1545-1559, 2020.
Article in English | MEDLINE | ID: mdl-33136630

ABSTRACT

OBJECTIVES: Binaural pitch fusion is the perceptual integration of stimuli that evoke different pitches between the ears into a single auditory image. Adults who use hearing aids (HAs) or cochlear implants (CIs) often experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3 to 4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The main goal of this study was to measure binaural pitch fusion in children with different hearing device combinations and compare results across groups and with adults. A second goal was to examine the relationship of binaural pitch fusion to interaural pitch differences or pitch match range, a measure of sequential pitch discriminability. DESIGN: Binaural pitch fusion was measured in children between the ages of 6.1 and 11.1 years with bilateral HAs (n = 9), bimodal CI (n = 10), bilateral CIs (n = 17), as well as normal-hearing (NH) children (n = 21). Depending on device combination, stimuli were pure tones or electric pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus to find the range that fused with the reference stimulus. Interaural pitch match functions were measured using sequential presentation of reference and comparison stimuli, and varying the comparison stimulus to find the pitch match center and range. RESULTS: Children with bilateral HAs had significantly broader binaural pitch fusion than children with NH, bimodal CI, or bilateral CIs. Children with NH and bilateral HAs, but not children with bimodal or bilateral CIs, had significantly broader fusion than adults with the same hearing status and device configuration. 
In children with bilateral CIs, fusion range was correlated with several variables that were also correlated with each other: pure-tone average in the second implanted ear before CI, and duration of prior bilateral HA, bimodal CI, or bilateral CI experience. No relationship was observed between fusion range and pitch match differences or range. CONCLUSIONS: The findings suggest that binaural pitch fusion is still developing in this age range and depends on hearing device combination but not on interaural pitch differences or discriminability.


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Aids , Speech Perception , Adult , Child , Hearing , Hearing Tests , Humans
7.
J Acoust Soc Am ; 147(3): EL246, 2020 03.
Article in English | MEDLINE | ID: mdl-32237828

ABSTRACT

The nature of the visual input that integrates with the audio signal to yield speech processing advantages remains controversial. This study tests the hypothesis that the information extracted for audiovisual integration includes co-occurring suprasegmental dynamic changes in the acoustic and visual signal. English sentences embedded in multi-talker babble noise were presented to native English listeners in audio-only and audiovisual modalities. A significant intelligibility enhancement with the visual analogs congruent to the acoustic amplitude envelopes was observed. These results suggest that dynamic visual modulation provides speech rhythmic information that can be integrated online with the audio signal to enhance speech intelligibility.

8.
J Neurophysiol ; 120(4): 1572-1577, 2018 10 01.
Article in English | MEDLINE | ID: mdl-30020839

ABSTRACT

A single event can generate asynchronous sensory cues due to variable encoding, transmission, and processing delays. To be interpreted as being associated in time, these cues must occur within a limited time window, referred to as a "temporal binding window" (TBW). We investigated the hypothesis that vestibular deficits could disrupt temporal visual-vestibular integration by determining the relationships between vestibular threshold and TBW in participants with normal vestibular function and with vestibular hypofunction. Vestibular perceptual thresholds to yaw rotation were characterized and compared with the TBWs obtained from participants who judged whether a suprathreshold rotation occurred before or after a brief visual stimulus. Vestibular thresholds ranged from 0.7 to 16.5 deg/s and TBWs ranged from 13.8 to 395 ms. Among all participants, TBW and vestibular thresholds were well correlated (R2 = 0.674, P < 0.001), with vestibular-deficient patients having higher thresholds and wider TBWs. Participants reported that the rotation onset needed to lead the light flash by an average of 80 ms for the visual and vestibular cues to be perceived as occurring simultaneously. The wide TBWs in vestibular-deficient participants compared with normal functioning participants indicate that peripheral sensory loss can lead to abnormal multisensory integration. A reduced ability to temporally combine sensory cues appropriately may provide a novel explanation for some symptoms reported by patients with vestibular deficits. Even among normal functioning participants, a high correlation between TBW and vestibular thresholds was observed, suggesting that these perceptual measurements are sensitive to small differences in vestibular function. NEW & NOTEWORTHY While spatial visual-vestibular integration has been well characterized, the temporal integration of these cues is not well understood.
The relationship between sensitivity to whole body rotation and duration of the temporal window of visual-vestibular integration was examined using psychophysical techniques. These parameters were highly correlated for those with normal vestibular function and for patients with vestibular hypofunction. Reduced temporal integration performance in patients with vestibular hypofunction may explain some symptoms associated with vestibular loss.


Subject(s)
Motion Perception , Sensory Thresholds , Vestibule, Labyrinth/physiology , Adult , Female , Humans , Male , Reaction Time , Rotation
9.
Ear Hear ; 39(2): 390-397, 2018.
Article in English | MEDLINE | ID: mdl-28945657

ABSTRACT

OBJECTIVES: Binaural pitch fusion is the fusion of stimuli that evoke different pitches between the ears into a single auditory image. Individuals who use hearing aids or bimodal cochlear implants (CIs) experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3-4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The goal of this study was to determine if adult bilateral CI users also experience broad binaural pitch fusion. DESIGN: Stimuli were pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus to find the range that fused with the reference stimulus. RESULTS: Bilateral CI listeners had binaural pitch fusion ranges varying from 0 to 12 mm (average 6.1 ± 3.9 mm), where 12 mm indicates fusion over all electrodes in the array. No significant correlations of fusion range were observed with any subject factors related to age, hearing loss history, or hearing device history, or with any electrode factors including interaural electrode pitch mismatch, pitch match bandwidth, or within-ear electrode discrimination abilities. CONCLUSIONS: Bilateral CI listeners have abnormally broad fusion, similar to hearing aid and bimodal CI listeners. This broad fusion may explain the variability of binaural benefits for speech perception in quiet and in noise in bilateral CI users.


Subject(s)
Cochlear Implants , Pitch Perception/physiology , Sound Localization/physiology , Speech Perception/physiology , Adolescent , Adult , Age Factors , Deafness/physiopathology , Deafness/rehabilitation , Female , Hearing/physiology , Humans , Male , Middle Aged , Noise , Young Adult
10.
J Acoust Soc Am ; 142(2): 780, 2017 08.
Article in English | MEDLINE | ID: mdl-28863555

ABSTRACT

Both bimodal cochlear implant and bilateral hearing aid users can exhibit broad binaural pitch fusion, the fusion of dichotically presented tones over a broad range of pitch differences between ears [Reiss, Ito, Eggleston, and Wozny. (2014). J. Assoc. Res. Otolaryngol. 15(2), 235-248; Reiss, Eggleston, Walker, and Oh. (2016). J. Assoc. Res. Otolaryngol. 17(4), 341-356; Reiss, Shayman, Walker, Bennett, Fowler, Hartling, Glickman, Lasarev, and Oh. (2017). J. Acoust. Soc. Am. 143(3), 1909-1920]. Further, the fused binaural pitch is often a weighted average of the different pitches perceived in the two ears. The current study was designed to systematically measure these pitch averaging phenomena in bilateral hearing aid users with broad fusion. The fused binaural pitch of the reference-pair tone combination was initially measured by pitch-matching to monaural comparison tones presented to the pair tone ear. The averaged results for all subjects showed two distinct trends: (1) The fused binaural pitch was dominated by the lower-pitch component when the pair tone was either 0.14 octaves below or 0.78 octaves above the reference tone; (2) pitch averaging occurred when the pair tone was between the two boundaries above, with the most equal weighting at 0.38 octaves above the reference tone. Findings from two subjects suggest that randomization or alternation of the comparison ear can eliminate this asymmetry in the pitch averaging range. Overall, these pitch averaging phenomena suggest that spectral distortions and thus binaural interference may arise during binaural stimulation in hearing-impaired listeners with broad fusion.
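Because the boundaries above are expressed in octaves, the weighted-average fused pitch is naturally computed on a log-frequency scale. A minimal sketch with hypothetical example frequencies (not stimuli from the study):

```python
import math

def fused_pitch_hz(f_a, f_b, w_low=0.5):
    """Fused binaural pitch modeled as a weighted average of the two
    monaural pitches on a log-frequency (octave) scale; w_low is the
    weight on the lower-pitched component (1.0 = full low-pitch dominance)."""
    lo, hi = sorted((f_a, f_b))
    octaves = w_low * math.log2(lo) + (1.0 - w_low) * math.log2(hi)
    return 2.0 ** octaves

# Equal weighting of 200 Hz and 264 Hz (~0.4 octaves apart) lands at
# their geometric mean; w_low=1.0 reproduces low-pitch dominance.
fused = fused_pitch_hz(200.0, 264.0)
```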


Subject(s)
Correction of Hearing Impairment/instrumentation , Hearing Aids , Hearing Loss/rehabilitation , Persons With Hearing Impairments/rehabilitation , Pitch Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold , Electric Stimulation , Equipment Design , Female , Hearing , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Hearing Loss/psychology , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology
11.
J Acoust Soc Am ; 141(3): 1909, 2017 03.
Article in English | MEDLINE | ID: mdl-28372056

ABSTRACT

Binaural pitch fusion is the fusion of dichotically presented tones that evoke different pitches between the ears. In normal-hearing (NH) listeners, the frequency range over which binaural pitch fusion occurs is usually <0.2 octaves. Recently, broad fusion ranges of 1-4 octaves were demonstrated in bimodal cochlear implant users. In the current study, it was hypothesized that hearing aid (HA) users would also exhibit broad fusion. Fusion ranges were measured in both NH and hearing-impaired (HI) listeners with hearing losses ranging from mild-moderate to severe-profound, and relationships of fusion range with demographic factors and with diplacusis were examined. Fusion ranges of NH and HI listeners averaged 0.17 ± 0.13 octaves and 1.7 ± 1.5 octaves, respectively. In HI listeners, fusion ranges were positively correlated with a principal component measure of the covarying factors of young age, early age of hearing loss onset, and long durations of hearing loss and HA use, but not with hearing threshold, amplification level, or diplacusis. In NH listeners, no correlations were observed with age, hearing threshold, or diplacusis. The association of broad fusion with early onset, long duration of hearing loss suggests a possible role of long-term experience with hearing loss and amplification in the development of broad fusion.


Subject(s)
Hearing Aids , Hearing Loss/rehabilitation , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Pitch Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold , Dichotic Listening Tests , Female , Hearing Loss/diagnosis , Hearing Loss/psychology , Humans , Male , Middle Aged , Pitch Discrimination , Severity of Illness Index , Young Adult
12.
J Acoust Soc Am ; 138(5): 2848-59, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26627761

ABSTRACT

In psychoacoustics, a multi-channel model has traditionally been used to describe detection improvement for multicomponent signals. This model commonly postulates that energy or information within either the frequency or time domain is transformed into a probabilistic decision variable across the auditory channels, and that their weighted linear summation determines optimum detection performance when compared to a critical value such as a decision criterion. In this study, representative integration-based channel models, specifically focused on signal-processing properties of the auditory periphery are reviewed (e.g., Durlach's channel model). In addition, major limitations of the previous channel models are described when applied to spectral, temporal, and spectrotemporal integration performance by human listeners. Here, integration refers to detection threshold improvements as the number of brief tone bursts in a signal is increased. Previous versions of the multi-channel model underestimate listener performance in these experiments. Further, they are unable to apply a single processing unit to signals which vary simultaneously in time and frequency. Improvements to the previous channel models are proposed by considering more realistic conditions such as correlated signal responses in the auditory channels, nonlinear properties in system performance, and a peripheral processing unit operating in both time and frequency domains.
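The weighted linear summation of channel decision variables that this model family postulates has a simple closed form for independent, equal-variance channels; a sketch of that baseline (the review's point is precisely that this baseline underestimates listener performance):

```python
import math

def combined_dprime(channel_dprimes, weights=None):
    """Linear combination of independent, equal-variance channel decision
    variables: d'_combined = sum(w_i * d_i) / sqrt(sum(w_i^2)). With
    optimal weights (w_i proportional to d_i) this reduces to
    sqrt(sum(d_i^2))."""
    if weights is None:  # optimal weighting
        return math.sqrt(sum(d * d for d in channel_dprimes))
    num = sum(w * d for w, d in zip(weights, channel_dprimes))
    den = math.sqrt(sum(w * w for w in weights))
    return num / den

# Four equally detectable tone bursts (d' = 1 each) combine to d' = 2,
# i.e., detection improves as the square root of the number of components.
combined = combined_dprime([1.0, 1.0, 1.0, 1.0])
```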

13.
Semin Hear ; 45(1): 110-122, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38370520

ABSTRACT

Maintaining balance involves the combination of sensory signals from the visual, vestibular, proprioceptive, and auditory systems. However, physical and biological constraints ensure that these signals are perceived slightly asynchronously. The brain only recognizes them as simultaneous when they occur within a period of time called the temporal binding window (TBW). Aging can prolong the TBW, leading to temporal uncertainty during multisensory integration. This effect might contribute to imbalance in the elderly but has not been examined with respect to vestibular inputs. Here, we compared the vestibular-related TBW in 13 younger and 12 older subjects undergoing 0.5 Hz sinusoidal rotations about the earth-vertical axis. An alternating dichotic auditory stimulus was presented at the same frequency but with the phase varied to determine the temporal range over which the two stimuli were perceived as simultaneous at least 75% of the time, defined as the TBW. The mean TBW among younger subjects was 286 ms (SEM ± 56 ms) and among older subjects was 560 ms (SEM ± 52 ms). TBW was related to vestibular sensitivity among younger but not older subjects, suggesting that a prolonged TBW could be a mechanism for imbalance in elderly people, independent of changes in peripheral vestibular function.

14.
JASA Express Lett ; 3(2): 025203, 2023 02.
Article in English | MEDLINE | ID: mdl-36858994

ABSTRACT

Inputs delivered to different sensory organs provide us with complementary speech information about the environment. The goal of this study was to establish which multisensory characteristics can facilitate speech recognition in noise. The major finding is that the tracking of temporal cues of visual/tactile speech synced with auditory speech can play a key role in speech-in-noise performance. This suggests that multisensory interactions are fundamentally important for speech recognition ability in noisy environments, and they require salient temporal cues. The amplitude envelope, serving as a reliable temporal cue source, can be applied through different sensory modalities when speech recognition is compromised.


Subject(s)
Cues , Speech Perception , Speech
15.
J Audiol Otol ; 27(2): 88-96, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36950808

ABSTRACT

BACKGROUND AND OBJECTIVES: The digits-in-noise (DIN) test was developed as a simple and time-efficient hearing-in-noise test worldwide. The Korean version of the DIN (K-DIN) test was previously validated for both normal-hearing and hearing-impaired listeners. This study aimed to explore the factors influencing the outcomes of the K-DIN test further by analyzing the threshold (representing detection ability) and slope (representing test difficulty) parameters for the psychometric curve fit. SUBJECTS AND METHODS: In total, 35 young adults with normal hearing participated in the K-DIN test under the following four experimental conditions: 1) background noise (digit-shaped vs. pink noise); 2) gender of the speaker (male vs. female); 3) ear side (right vs. left); and 4) digit presentation levels (55, 65, 75, and 85 dB). The digits were presented using the method of constant stimuli. Participant responses to the stimulus trials were used to fit a psychometric function, and the threshold and slope parameters were estimated according to pre-determined criteria. Goodness of fit was assessed using a root-mean-square error calculation. RESULTS: The listener's digit detection ability (threshold) was slightly better with pink noise than with digit-shaped noise, with similar test difficulties (slopes) across the digits. Gender and the tested ear side influenced neither the detection ability nor the task difficulty. Additionally, lower presentation levels (55 and 65 dB) elicited better thresholds than the higher presentation levels (75 and 85 dB); however, the test difficulty varied slightly across the presentation levels. CONCLUSIONS: The K-DIN test can be influenced by stimulus factors. Continued research is warranted to understand the accuracy and reliability of the test better, especially for its use as a promising clinical measure.
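The threshold, slope, and RMSE quantities described above come from fitting a psychometric function to proportion-correct data. A self-contained sketch using a logistic form and a brute-force grid search (the parameter ranges and synthetic data are illustrative, not the study's):

```python
import math

def logistic_psychometric(snr, threshold, slope):
    """Proportion correct as a logistic function of SNR (dB); `threshold`
    is the 50% point and `slope` reflects test difficulty."""
    return 1.0 / (1.0 + math.exp(-slope * (snr - threshold)))

def fit_by_grid(snrs, proportions):
    """Least-squares grid search for (threshold, slope); also returns the
    root-mean-square error of the best fit."""
    best = None
    for t10 in range(-150, 51):          # thresholds -15.0 .. 5.0 dB
        for s10 in range(1, 31):         # slopes 0.1 .. 3.0 per dB
            t, s = t10 / 10.0, s10 / 10.0
            err = [logistic_psychometric(x, t, s) - p
                   for x, p in zip(snrs, proportions)]
            rmse = math.sqrt(sum(e * e for e in err) / len(err))
            if best is None or rmse < best[2]:
                best = (t, s, rmse)
    return best

# Synthetic responses generated from threshold -8 dB, slope 0.8 per dB:
snrs = [-14, -12, -10, -8, -6, -4, -2]
props = [logistic_psychometric(x, -8.0, 0.8) for x in snrs]
threshold, slope, rmse = fit_by_grid(snrs, props)
```

Because the generating parameters lie on the grid, the fit recovers them exactly with zero RMSE; with real (noisy) response data the RMSE quantifies fit accuracy as in the study.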

16.
Front Neurosci ; 17: 1282764, 2023.
Article in English | MEDLINE | ID: mdl-38192513

ABSTRACT

Many previous studies have reported that speech segregation performance in multi-talker environments can be enhanced by two major acoustic cues: (1) voice-characteristic differences between talkers; (2) spatial separation between talkers. Here, the improvement they can provide for speech segregation is referred to as "release from masking." The goal of this study was to investigate how masking release performance with two cues is affected by various target presentation levels. Sixteen normal-hearing listeners participated in the speech recognition in noise experiment. Speech-on-speech masking performance was measured as the threshold target-to-masker ratio needed to understand a target talker in the presence of either same- or different-gender masker talkers to manipulate the voice-gender difference cue. These target-masker gender combinations were tested with five spatial configurations (maskers co-located or 15°, 30°, 45°, and 60° symmetrically spatially separated from the target) to manipulate the spatial separation cue. In addition, those conditions were repeated at three target presentation levels (30, 40, and 50 dB sensation levels). Results revealed that the amount of masking release by either voice-gender difference or spatial separation cues was significantly affected by the target level, especially at the small target-masker spatial separation (±15°). Further, the results showed that the intersection points between two masking release types (equal perceptual weighting) could be varied by the target levels. These findings suggest that the perceptual weighting of masking release from two cues is non-linearly related to the target levels. The target presentation level could be one major factor associated with masking release performance in normal-hearing listeners.

17.
Front Neurosci ; 16: 1031424, 2022.
Article in English | MEDLINE | ID: mdl-36340778

ABSTRACT

A series of our previous studies explored the use of an abstract visual representation of the amplitude envelope cues from target sentences to benefit speech perception in complex listening environments. The purpose of this study was to expand this auditory-visual speech perception to the tactile domain. Twenty adults participated in speech recognition measurements in four different sensory modalities (AO, auditory-only; AV, auditory-visual; AT, auditory-tactile; AVT, auditory-visual-tactile). The target sentences were fixed at 65 dB sound pressure level and embedded within a simultaneous speech-shaped noise masker of varying degrees of signal-to-noise ratios (-7, -5, -3, -1, and 1 dB SNR). The amplitudes of both abstract visual and vibrotactile stimuli were temporally synchronized with the target speech envelope for comparison. Average results showed that adding temporally-synchronized multimodal cues to the auditory signal did provide significant improvements in word recognition performance across all three multimodal stimulus conditions (AV, AT, and AVT), especially at the lower SNR levels of -7, -5, and -3 dB for both male (8-20% improvement) and female (5-25% improvement) talkers. The greatest improvement in word recognition performance (15-19% improvement for males and 14-25% improvement for females) was observed when both visual and tactile cues were integrated (AVT). Another interesting finding in this study is that temporally synchronized abstract visual and vibrotactile stimuli additively stack in their influence on speech recognition performance. Our findings suggest that a multisensory integration process in speech perception requires salient temporal cues to enhance speech recognition ability in noisy environments.

18.
J Audiol Otol ; 26(1): 10-21, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34775699

ABSTRACT

BACKGROUND AND OBJECTIVES: Although the digit-in-noise (DIN) test is simple and quick, little is known about its key components. This study explored those components through a systematic review and meta-analysis. MATERIALS AND METHODS: After six electronic journal databases were screened, 14 studies were selected. For the meta-analysis, the standardized mean difference was used to calculate effect sizes and 95% confidence intervals. RESULTS: The overall meta-analysis showed an effect size of 2.224. In a subgroup analysis, the patient's hearing status had the highest effect size, meaning that the DIN test was significantly sensitive as a screen for hearing loss. In terms of the length of the presented digits, triple digits yielded lower speech recognition thresholds (SRTs) than single or paired digits. Among the types of background noise, speech-spectrum noise yielded lower SRTs than multi-talker babble. Regarding language variance, the DIN test showed better performance in the patient's native language(s) than in other languages. CONCLUSIONS: When uniformly developed and well validated, the DIN test can be a universal tool for hearing screening.
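The effect-size machinery behind such a meta-analysis is compact: a standardized mean difference (Cohen's d with a pooled SD) plus a large-sample variance approximation for its 95% confidence interval. A minimal sketch, with illustrative input values rather than data from the 14 reviewed studies:

```python
import math

def smd_with_ci(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d, pooled SD) and an
    approximate 95% confidence interval, as commonly used when pooling
    two-group outcomes in a meta-analysis."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    # Large-sample variance approximation for d.
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    return d, (d - 1.96 * se, d + 1.96 * se)

# Illustrative example: two groups of 20 with means 10 vs. 8, SD 2 each.
d, ci = smd_with_ci(10, 2, 20, 8, 2, 20)
print(d)  # 1.0
```

Per-study effects computed this way are then combined (e.g., inverse-variance weighted) to produce an overall effect size like the 2.224 reported above.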

19.
Front Neurosci ; 16: 1059639, 2022.
Article in English | MEDLINE | ID: mdl-36507363

ABSTRACT

Voice-gender differences and spatial separation are important cues for auditory object segregation. The goal of this study was to investigate the relationship of voice-gender difference benefit to the breadth of binaural pitch fusion (the perceptual integration of dichotic stimuli that evoke different pitches across ears), and the relationship of spatial separation benefit to localization acuity (the ability to identify the direction of a sound source). Twelve bilateral hearing aid (HA) users (aged 30 to 75 years) and eleven normal-hearing (NH) listeners (aged 36 to 67 years) were tested in the following three experiments. First, speech-on-speech masking performance was measured as the threshold target-to-masker ratio (TMR) needed to understand a target talker in the presence of either same- or different-gender masker talkers. These target-masker gender combinations were tested with two spatial configurations (maskers co-located with the target or 60° symmetrically separated from it) in both monaural and binaural listening conditions. Second, binaural pitch fusion ranges were measured using harmonic tone complexes around a 200-Hz fundamental frequency. Third, absolute localization acuity was measured using broadband (125-8000 Hz) noise and one-third-octave noise bands centered at 500 and 3000 Hz. Voice-gender differences between target and maskers improved TMR thresholds for both listener groups in the binaural condition as well as both monaural (left-ear and right-ear) conditions, with greater benefit in co-located than spatially separated conditions. Voice-gender difference benefit was correlated with the breadth of binaural pitch fusion in the binaural condition, but not in the monaural conditions, ruling out a role of monaural abilities in the relationship between binaural fusion and voice-gender difference benefit. Spatial separation benefit was not significantly correlated with absolute localization acuity. In addition, greater spatial separation benefit was observed in NH listeners than in bilateral HA users, indicating a decreased ability of HA users to benefit from spatial release from masking (SRM). These findings suggest that sharp binaural pitch fusion may be important for maximal speech perception in multi-talker environments for both NH listeners and bilateral HA users.
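The fusion-range stimuli described above are dichotic: a fixed-F0 harmonic complex in the reference ear and a variable-F0 complex in the other ear. A minimal sketch of how such a stereo stimulus could be synthesized; the sampling rate, duration, harmonic count, and F0 values are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

def harmonic_complex(f0, fs=44100, dur=0.5, n_harmonics=4):
    """Equal-amplitude harmonic tone complex (normalized so the peak
    cannot exceed 1.0). Parameters here are illustrative."""
    t = np.arange(int(fs * dur)) / fs
    return sum(np.sin(2 * np.pi * f0 * k * t)
               for k in range(1, n_harmonics + 1)) / n_harmonics

def dichotic_pair(ref_f0=200.0, cmp_f0=210.0, fs=44100):
    """Stereo array: fixed reference complex in one ear, comparison
    complex (varied across trials) in the other."""
    left = harmonic_complex(ref_f0, fs=fs)
    right = harmonic_complex(cmp_f0, fs=fs)
    return np.column_stack([left, right])
```

Sweeping `cmp_f0` away from the 200-Hz reference while the listener reports "one sound" versus "two sounds" traces out the binaural fusion range whose breadth is correlated with voice-gender benefit above.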

20.
Front Psychol ; 12: 626762, 2021.
Article in English | MEDLINE | ID: mdl-33597910

ABSTRACT

Binaural pitch fusion is the perceptual integration of stimuli that evoke different pitches between the ears into a single auditory image. This study was designed to investigate how steady background noise can influence binaural pitch fusion. Binaural fusion ranges, the frequency ranges over which binaural pitch fusion occurred, were measured at three signal-to-noise ratios (+15, +5, and -5 dB SNR) of pink noise and compared with those measured in quiet. The preliminary results show that adding an appropriate amount of noise can reduce binaural fusion ranges, an effect called stochastic resonance. This finding increases our understanding of how specific noise levels can sharpen binaural pitch fusion in normal-hearing individuals. Furthermore, it opens further avenues of research into how this benefit could be used in practice to improve binaural auditory perception.
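Presenting a masker at a fixed SNR, as in the +15/+5/-5 dB conditions above, comes down to scaling the noise so its RMS level sits the desired number of dB below (or above) the signal's. A minimal sketch with an illustrative function name, using random test signals rather than actual pink noise or tonal stimuli:

```python
import numpy as np

def scale_noise_to_snr(signal, noise, snr_db):
    """Scale `noise` so that the RMS-based signal-to-noise ratio of the
    pair equals `snr_db` (positive = signal above noise)."""
    def rms(x):
        return np.sqrt(np.mean(x ** 2))
    target_noise_rms = rms(signal) / (10 ** (snr_db / 20))
    return noise * (target_noise_rms / rms(noise))

# Illustrative check with random signals standing in for stimulus and noise.
rng = np.random.default_rng(0)
sig = rng.standard_normal(10000)
scaled = scale_noise_to_snr(sig, rng.standard_normal(10000), 5.0)
```

The same routine serves all three SNR conditions; only the `snr_db` argument changes, while the signal level stays fixed.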
