Results 1 - 20 of 30
1.
Semin Hear ; 45(1): 110-122, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38370520

ABSTRACT

Maintaining balance involves the combination of sensory signals from the visual, vestibular, proprioceptive, and auditory systems. However, physical and biological constraints ensure that these signals are perceived slightly asynchronously. The brain recognizes them as simultaneous only when they occur within a period of time called the temporal binding window (TBW). Aging can prolong the TBW, leading to temporal uncertainty during multisensory integration. This effect might contribute to imbalance in the elderly but has not been examined with respect to vestibular inputs. Here, we compared the vestibular-related TBW in 13 younger and 12 older subjects undergoing 0.5 Hz sinusoidal rotations about the earth-vertical axis. An alternating dichotic auditory stimulus was presented at the same frequency but with the phase varied to determine the temporal range over which the two stimuli were perceived as simultaneous at least 75% of the time, defined as the TBW. The mean TBW among younger subjects was 286 ms (SEM ± 56 ms) and among older subjects was 560 ms (SEM ± 52 ms). TBW was related to vestibular sensitivity among younger but not older subjects, suggesting that a prolonged TBW could be a mechanism for imbalance in elderly people, independent of changes in peripheral vestibular function.
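
As a worked example of the phase-to-time conversion implied by this design (the phase step sizes are not reported in the abstract, so the numbers here are only illustrative): a phase offset between the auditory and rotational stimuli at frequency f corresponds to a time offset

    \Delta t = \frac{\Delta\varphi}{360^{\circ}} \cdot \frac{1}{f},

so at f = 0.5 Hz (a 2000 ms period) each degree of phase is about 5.6 ms, and the older group's mean TBW of 560 ms spans roughly 560/2000 × 360° ≈ 100° of phase.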

2.
J Acoust Soc Am ; 154(6): 3799-3809, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38109404

ABSTRACT

Computational models are used to predict the performance of human listeners for carefully specified signal and noise conditions. However, there may be substantial discrepancies between the conditions under which listeners are tested and those used for model predictions. Thus, models may predict better performance than exhibited by the listeners, or they may "fail" to capture the ability of the listener to respond to subtle stimulus conditions. This study tested a computational model devised to predict a listener's ability to detect an aircraft in various soundscapes. The model and listeners processed the same sound recordings under carefully specified testing conditions. Details of signal and masker calibration were carefully matched, and the model was tested using the same adaptive tracking paradigm. Perhaps most importantly, the behavioral results were not available to the modeler before the model predictions were presented. Recordings from three different aircraft were used as the target signals. Maskers were derived from recordings obtained at nine locations ranging from very quiet rural environments to suburban and urban settings. Overall, with a few exceptions, model predictions matched the performance of the listeners very well. Discussion focuses on those differences and possible reasons for their occurrence.
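
The abstract does not spell out the adaptive rule, so the sketch below assumes a conventional 2-down/1-up staircase purely to illustrate how the same tracking procedure can drive either a human listener or a computational model; the function name, step size, and stopping criterion are hypothetical choices, not the study's actual implementation.

    # Minimal sketch of a 2-down/1-up adaptive track (assumed rule; the abstract does
    # not specify the paradigm details). `respond(level)` stands in for either the
    # listener's or the model's yes/no detection response at a given level (dB).
    def run_staircase(respond, start_level=60.0, step=4.0, n_reversals=8):
        level, direction, n_correct = start_level, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            if respond(level):
                n_correct += 1
                if n_correct == 2:            # two correct in a row -> make it harder
                    n_correct = 0
                    if direction == +1:
                        reversals.append(level)
                    direction = -1
                    level -= step
            else:                             # one miss -> make it easier
                n_correct = 0
                if direction == -1:
                    reversals.append(level)
                direction = +1
                level += step
        return sum(reversals[-6:]) / 6        # threshold: mean of the last 6 reversals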


Subjects
Perceptual Masking, Speech Perception, Humans, Auditory Threshold, Noise, Aircraft, Computer Simulation
3.
JASA Express Lett ; 3(2): 025203, 2023 02.
Article in English | MEDLINE | ID: mdl-36858994

ABSTRACT

Inputs delivered to different sensory organs provide us with complementary speech information about the environment. The goal of this study was to establish which multisensory characteristics can facilitate speech recognition in noise. The major finding is that the tracking of temporal cues of visual/tactile speech synced with auditory speech can play a key role in speech-in-noise performance. This suggests that multisensory interactions are fundamentally important for speech recognition ability in noisy environments, and they require salient temporal cues. The amplitude envelope, serving as a reliable temporal cue source, can be applied through different sensory modalities when speech recognition is compromised.


Subjects
Cues, Speech Perception, Speech
4.
J Audiol Otol ; 27(2): 88-96, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36950808

ABSTRACT

BACKGROUND AND OBJECTIVES: The digits-in-noise (DIN) test was developed as a simple and time-efficient hearing-in-noise test and is used worldwide. The Korean version of the DIN (K-DIN) test was previously validated for both normal-hearing and hearing-impaired listeners. This study aimed to further explore the factors influencing the outcomes of the K-DIN test by analyzing the threshold (representing detection ability) and slope (representing test difficulty) parameters of the psychometric curve fit. SUBJECTS AND METHODS: In total, 35 young adults with normal hearing participated in the K-DIN test under the following four experimental conditions: 1) background noise (digit-shaped vs. pink noise); 2) gender of the speaker (male vs. female); 3) ear side (right vs. left); and 4) digit presentation levels (55, 65, 75, and 85 dB). The digits were presented using the method of constant stimuli. Participant responses to the stimulus trials were used to fit a psychometric function, and the threshold and slope parameters were estimated according to pre-determined criteria. The accuracy of the fit was assessed using the root-mean-square error. RESULTS: The listener's digit detection ability (threshold) was slightly better with pink noise than with digit-shaped noise, with similar test difficulties (slopes) across the digits. Gender and the tested ear side influenced neither the detection ability nor the task difficulty. Additionally, lower presentation levels (55 and 65 dB) elicited better thresholds than the higher presentation levels (75 and 85 dB); however, the test difficulty varied only slightly across the presentation levels. CONCLUSIONS: The K-DIN test can be influenced by stimulus factors. Continued research is warranted to better understand the accuracy and reliability of the test, especially for its use as a promising clinical measure.
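
The abstract does not specify the psychometric-function form or fitting criteria, so the sketch below is only a minimal illustration of how a threshold and slope can be estimated from percent-correct data and checked with a root-mean-square error; the logistic form, SNR values, and scores are assumptions for illustration.

    # Minimal sketch (not the study's actual fitting code) of estimating threshold
    # and slope from percent-correct data, assuming a logistic psychometric function.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(snr, threshold, slope):
        return 1.0 / (1.0 + np.exp(-(snr - threshold) / slope))

    snr = np.array([-14.0, -12.0, -10.0, -8.0, -6.0, -4.0])      # dB SNR (hypothetical)
    p_correct = np.array([0.05, 0.20, 0.55, 0.80, 0.95, 0.99])   # proportion correct

    (threshold, slope), _ = curve_fit(logistic, snr, p_correct, p0=[-10.0, 1.0])
    rmse = np.sqrt(np.mean((logistic(snr, threshold, slope) - p_correct) ** 2))
    print(f"threshold = {threshold:.1f} dB SNR, slope = {slope:.2f}, RMSE = {rmse:.3f}")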

5.
Ear Hear ; 44(2): 318-329, 2023.
Article in English | MEDLINE | ID: mdl-36395512

ABSTRACT

OBJECTIVES: Some cochlear implant (CI) users are fitted with a CI in each ear ("bilateral"), while others have a CI in one ear and a hearing aid in the other ("bimodal"). Presently, evaluation of the benefits of bilateral or bimodal CI fitting does not take into account the integration of frequency information across the ears. This study tests the hypothesis that CI listeners, especially bimodal CI users, with a more precise integration of frequency information across ears ("sharp binaural pitch fusion") will derive greater benefit from voice gender differences in a multi-talker listening environment. DESIGN: Twelve bimodal CI users and twelve bilateral CI users participated. First, binaural pitch fusion ranges were measured using the simultaneous, dichotic presentation of reference and comparison stimuli (electric pulse trains for CI ears and acoustic tones for HA ears) in opposite ears, with reference stimuli fixed and comparison stimuli varied in frequency/electrode to find the range perceived as a single sound. Direct electrical stimulation was used in implanted ears through the research interface, which allowed selective stimulation of one electrode at a time, and acoustic stimulation was used in the non-implanted ears through the headphone. Second, speech-on-speech masking performance was measured to estimate masking release by voice gender difference between target and maskers (VGRM). The VGRM was calculated as the difference in speech recognition thresholds of target sounds in the presence of same-gender or different-gender maskers. RESULTS: Voice gender differences between target and masker talkers improved speech recognition performance for the bimodal CI group, but not the bilateral CI group. The bimodal CI users who benefited the most from voice gender differences were those who had the narrowest range of acoustic frequencies that fused into a single sound with stimulation from a single electrode from the CI in the opposite ear. There was no similar voice gender difference benefit of narrow binaural fusion range for the bilateral CI users. CONCLUSIONS: The findings suggest that broad binaural fusion reduces the acoustical information available for differentiating individual talkers in bimodal CI users, but not for bilateral CI users. In addition, for bimodal CI users with narrow binaural fusion who benefit from voice gender differences, bilateral implantation could lead to a loss of that benefit and impair their ability to selectively attend to one talker in the presence of multiple competing talkers. The results suggest that binaural pitch fusion, along with an assessment of residual hearing and other factors, could be important for assessing bimodal and bilateral CI users.
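
The VGRM computation described in the design lends itself to a one-line worked example; the two threshold values below are hypothetical and serve only to show the sign convention (a positive VGRM means the voice-gender difference helped).

    # Sketch of the voice-gender release from masking (VGRM) calculation described
    # above; the two speech recognition thresholds are hypothetical values.
    srt_same_gender = -2.0   # dB, target threshold with same-gender maskers
    srt_diff_gender = -8.0   # dB, target threshold with different-gender maskers
    vgrm = srt_same_gender - srt_diff_gender
    print(f"VGRM = {vgrm:.1f} dB of masking release")   # -> 6.0 dB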


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Speech Perception, Humans, Sex Factors
6.
Front Neurosci ; 17: 1282764, 2023.
Article in English | MEDLINE | ID: mdl-38192513

ABSTRACT

Many previous studies have reported that speech segregation performance in multi-talker environments can be enhanced by two major acoustic cues: (1) voice-characteristic differences between talkers; (2) spatial separation between talkers. Here, the improvement they can provide for speech segregation is referred to as "release from masking." The goal of this study was to investigate how masking release performance with two cues is affected by various target presentation levels. Sixteen normal-hearing listeners participated in the speech recognition in noise experiment. Speech-on-speech masking performance was measured as the threshold target-to-masker ratio needed to understand a target talker in the presence of either same- or different-gender masker talkers to manipulate the voice-gender difference cue. These target-masker gender combinations were tested with five spatial configurations (maskers co-located or 15°, 30°, 45°, and 60° symmetrically spatially separated from the target) to manipulate the spatial separation cue. In addition, those conditions were repeated at three target presentation levels (30, 40, and 50 dB sensation levels). Results revealed that the amount of masking release by either voice-gender difference or spatial separation cues was significantly affected by the target level, especially at the small target-masker spatial separation (±15°). Further, the results showed that the intersection points between two masking release types (equal perceptual weighting) could be varied by the target levels. These findings suggest that the perceptual weighting of masking release from two cues is non-linearly related to the target levels. The target presentation level could be one major factor associated with masking release performance in normal-hearing listeners.

7.
Front Neurosci ; 16: 1059639, 2022.
Article in English | MEDLINE | ID: mdl-36507363

ABSTRACT

Voice-gender differences and spatial separation are important cues for auditory object segregation. The goal of this study was to investigate the relationship of voice-gender difference benefit to the breadth of binaural pitch fusion, the perceptual integration of dichotic stimuli that evoke different pitches across ears, and the relationship of spatial separation benefit to localization acuity, the ability to identify the direction of a sound source. Twelve bilateral hearing aid (HA) users (age from 30 to 75 years) and eleven normal hearing (NH) listeners (age from 36 to 67 years) were tested in the following three experiments. First, speech-on-speech masking performance was measured as the threshold target-to-masker ratio (TMR) needed to understand a target talker in the presence of either same- or different-gender masker talkers. These target-masker gender combinations were tested with two spatial configurations (maskers co-located or 60° symmetrically spatially separated from the target) in both monaural and binaural listening conditions. Second, binaural pitch fusion range measurements were conducted using harmonic tone complexes around a 200-Hz fundamental frequency. Third, absolute localization acuity was measured using broadband (125-8000 Hz) noise and one-third octave noise bands centered at 500 and 3000 Hz. Voice-gender differences between target and maskers improved TMR thresholds for both listener groups in the binaural condition as well as both monaural (left ear and right ear) conditions, with greater benefit in co-located than spatially separated conditions. Voice-gender difference benefit was correlated with the breadth of binaural pitch fusion in the binaural condition, but not the monaural conditions, ruling out a role of monaural abilities in the relationship between binaural fusion and voice-gender difference benefits. Spatial separation benefit was not significantly correlated with absolute localization acuity. In addition, greater spatial separation benefit was observed in NH listeners than in bilateral HA users, indicating a decreased ability of HA users to benefit from spatial release from masking (SRM). These findings suggest that sharp binaural pitch fusion may be important for maximal speech perception in multi-talker environments for both NH listeners and bilateral HA users.

8.
Front Neurosci ; 16: 1031424, 2022.
Article in English | MEDLINE | ID: mdl-36340778

ABSTRACT

A series of our previous studies explored the use of an abstract visual representation of the amplitude envelope cues from target sentences to benefit speech perception in complex listening environments. The purpose of this study was to expand this auditory-visual speech perception to the tactile domain. Twenty adults participated in speech recognition measurements in four different sensory modalities (AO, auditory-only; AV, auditory-visual; AT, auditory-tactile; AVT, auditory-visual-tactile). The target sentences were fixed at 65 dB sound pressure level and embedded within a simultaneous speech-shaped noise masker at signal-to-noise ratios of -7, -5, -3, -1, and 1 dB. The amplitudes of both the abstract visual and vibrotactile stimuli were temporally synchronized with the target speech envelope for comparison. Average results showed that adding temporally synchronized multimodal cues to the auditory signal provided significant improvements in word recognition performance across all three multimodal stimulus conditions (AV, AT, and AVT), especially at the lower SNRs of -7, -5, and -3 dB, for both male (8-20% improvement) and female (5-25% improvement) talkers. The greatest improvement in word recognition performance (15-19% for male and 14-25% for female talkers) was observed when both visual and tactile cues were integrated (AVT). Another interesting finding is that temporally synchronized abstract visual and vibrotactile stimuli stack additively in their influence on speech recognition performance. Our findings suggest that multisensory integration in speech perception requires salient temporal cues to enhance speech recognition ability in noisy environments.

9.
J Audiol Otol ; 26(1): 10-21, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34775699

ABSTRACT

BACKGROUND AND OBJECTIVES: Although the digit-in-noise (DIN) test is simple and quick, little is known about its key factors. This study explored the key components of the DIN test through a systematic review and meta-analysis. MATERIALS AND METHODS: After six electronic journal databases were screened, 14 studies were selected. For the meta-analysis, the standardized mean difference was used to calculate effect sizes and 95% confidence intervals. RESULTS: The overall meta-analysis showed an effect size of 2.224. In a subgroup analysis, the patient's hearing status had the highest effect size, meaning that the DIN test was significantly sensitive as a screening measure for hearing loss. In terms of the number of digits presented, triple digits yielded lower speech recognition thresholds (SRTs) than single or paired digits. Among the types of background noise, speech-spectrum noise yielded lower SRTs than multi-talker babble. Regarding language variance, the DIN test showed better performance in the patient's native language(s) than in other languages. CONCLUSIONS: When uniformly developed and well validated, the DIN test can be a universal tool for hearing screening.
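
The abstract does not state which standardized-mean-difference estimator was used, so the sketch below assumes Hedges' g with a common large-sample variance approximation; the function name and all inputs are hypothetical and only illustrate how an effect size with a 95% confidence interval can be computed.

    # Sketch of a standardized-mean-difference effect size with a 95% CI (assumed
    # Hedges' g estimator; inputs are hypothetical SRT means, SDs, and group sizes).
    import math

    def hedges_g(m1, sd1, n1, m2, sd2, n2):
        sd_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
        d = (m1 - m2) / sd_pooled
        g = d * (1 - 3 / (4 * (n1 + n2) - 9))                 # small-sample correction
        var_g = (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))
        half_width = 1.96 * math.sqrt(var_g)
        return g, (g - half_width, g + half_width)

    g, ci = hedges_g(m1=-10.5, sd1=2.2, n1=30, m2=-8.0, sd2=2.0, n2=30)
    print(f"g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")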

10.
J Audiol Otol ; 25(4): 171-177, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34649417

ABSTRACT

BACKGROUND AND OBJECTIVES: The purpose of the present study was to validate the performance and diagnostic efficacy of the Korean digits-in-noise (K-DIN) test in comparison to the Korean speech perception-in-noise (K-SPIN) test, the representative speech-in-noise test in clinical practice. SUBJECTS AND METHODS: Twenty-seven subjects (15 normal-hearing and 12 hearing-impaired listeners) participated. The recorded Korean digits 0-9 were used to form quasi-random digit triplets; 50 target digit triplets were presented at each subject's most comfortable level while speech-shaped background noise was presented at various signal-to-noise ratios (-12.5, -10, -5, or +5 dB). Subjects were instructed to listen to both the target and the noise masker unilaterally and bilaterally through headphones. The K-SPIN test was also conducted using the same procedure as the K-DIN. After percent-correct responses were calculated, K-DIN and K-SPIN results were compared using a Pearson correlation test. RESULTS: Results showed a statistically significant correlation between K-DIN and K-SPIN in all hearing conditions (left: r=0.814, p<0.001; right: r=0.788, p<0.001; bilateral: r=0.727, p<0.001). Moreover, the K-DIN test achieved better testing efficiency, a shorter average test time (5 min vs. 30 min), and easier task performance according to participants' qualitative reports than the K-SPIN test. CONCLUSIONS: In this study, the Korean version of the digit triplet test was validated in both normal-hearing and hearing-impaired listeners. The findings suggest that the K-DIN test can be used as a simple and time-efficient hearing-in-noise test in audiology clinics in Korea.
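
A minimal sketch of the K-DIN vs. K-SPIN comparison step: percent-correct scores from the two tests are correlated with a Pearson test. The scores below are hypothetical; only the procedure is taken from the abstract.

    # Correlating hypothetical percent-correct scores from the two tests.
    from scipy.stats import pearsonr

    kdin_percent_correct  = [92, 88, 76, 95, 60, 81, 70, 89]
    kspin_percent_correct = [90, 85, 70, 96, 55, 78, 72, 91]

    r, p = pearsonr(kdin_percent_correct, kspin_percent_correct)
    print(f"r = {r:.3f}, p = {p:.4f}")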

11.
JASA Express Lett ; 1(8): 084404, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34713273

ABSTRACT

Voice-gender difference and spatial separation between talkers are important cues for speech segregation in multi-talker listening environments. The goal of this study was to investigate the interactions of these two cues to explore how they influence masking release in normal hearing listeners. Speech recognition thresholds in competing speech were measured, and masking release benefits by either voice-gender difference or spatial separation cues were calculated. Results revealed that the masking releases by those two cues are inversely related as a function of spatial separation, with a gender-specific difference of transition between the two types of masking release.

12.
J Speech Lang Hear Res ; 64(7): 2845-2853, 2021 07 16.
Article in English | MEDLINE | ID: mdl-34100628

ABSTRACT

Purpose: This study investigated the effects of visually presented speech envelope information with various modulation rates and depths on audiovisual speech perception in noise. Method: Forty adults (21.25 ± 1.45 years) participated in audiovisual sentence recognition measurements in noise. Target speech sentences were presented auditorily in multi-talker babble noise at a -3 dB SNR. Acoustic amplitude envelopes of the target signals were extracted through low-pass filters with different cutoff frequencies (4, 10, and 30 Hz) and a fixed modulation depth of 100% (Experiment 1), or extracted with various modulation depths (0%, 25%, 50%, 75%, and 100%) and a fixed 10-Hz modulation rate (Experiment 2). The extracted target envelopes modulated the size of a sphere presented as the visual stimulus. Subjects were instructed to attend to both the auditory and visual stimuli of the target sentences and type their answers. Sentence recognition accuracy was compared between audio-only and audiovisual conditions. Results: In Experiment 1, a significant improvement in speech intelligibility was observed when the visual analog (a sphere) was synced with the acoustic amplitude envelope modulated at a 10-Hz rate, compared to the audio-only condition. In Experiment 2, the visual analog with 75% modulation depth resulted in better audiovisual speech perception in noise than the other modulation depths. Conclusion: An abstract visual analog of acoustic amplitude envelopes can be efficiently delivered by the visual system and integrated online with auditory signals to enhance speech perception in noise, independent of particular articulation movements.
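
The envelope manipulation described here (low-pass cutoff sets the modulation rate, with a separate modulation-depth scaling) can be sketched as below. The abstract does not state the envelope extractor, filter order, or depth-scaling rule, so the Hilbert transform, Butterworth filter, linear scaling about the mean, and toy signal are all assumptions for illustration.

    # Sketch of extracting an amplitude envelope at a given modulation rate and depth.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def amplitude_envelope(signal, fs, cutoff_hz=10.0, depth=1.0, order=4):
        env = np.abs(hilbert(signal))                          # broadband envelope
        b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
        env = filtfilt(b, a, env)                              # limit modulation rate
        return env.mean() + depth * (env - env.mean())         # scale modulation depth

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    toy_speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.8 * np.sin(2 * np.pi * 5 * t))
    env_10hz_75pct = amplitude_envelope(toy_speech, fs, cutoff_hz=10.0, depth=0.75)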


Subjects
Speech Perception, Acoustic Stimulation, Adult, Auditory Perception, Humans, Noise, Signal-To-Noise Ratio, Speech, Visual Perception
13.
Front Neurosci ; 15: 678029, 2021.
Article in English | MEDLINE | ID: mdl-34163326

ABSTRACT

Speech perception often takes place in noisy environments, where multiple auditory signals compete with one another. The addition of visual cues such as talkers' faces or lip movements to an auditory signal can help improve the intelligibility of speech in those suboptimal listening environments; this is referred to as the audiovisual benefit. The current study aimed to delineate the signal-to-noise ratio (SNR) conditions under which visual presentations of the acoustic amplitude envelope have their most significant impact on speech perception. Seventeen adults with normal hearing were recruited. Participants were presented with spoken sentences in babble noise in either auditory-only or auditory-visual conditions at SNRs of -7, -5, -3, -1, and 1 dB. The visual stimulus was a sphere that varied in size in sync with the amplitude envelope of the target speech signals. Participants were asked to transcribe the sentences they heard. Results showed a significant improvement in accuracy in the auditory-visual condition versus the audio-only condition at SNRs of -3 and -1 dB, but no improvement at the other SNRs. These results show that dynamic temporal visual information can benefit speech perception in noise, and that the optimal facilitative effect of the visual amplitude envelope is observed over an intermediate SNR range.

14.
Front Psychol ; 12: 626762, 2021.
Article in English | MEDLINE | ID: mdl-33597910

ABSTRACT

Binaural pitch fusion is the perceptual integration of stimuli that evoke different pitches between the ears into a single auditory image. This study was designed to investigate how steady background noise can influence binaural pitch fusion. Binaural fusion ranges, the frequency ranges over which binaural pitch fusion occurred, were measured at three signal-to-noise ratios (+15, +5, and -5 dB SNR) of pink noise and compared with those measured in quiet. The preliminary results show that the addition of an appropriate amount of noise can reduce binaural fusion ranges, an effect called stochastic resonance. This finding increases our understanding of how specific noise levels can sharpen binaural pitch fusion in normal-hearing individuals. Furthermore, it opens further avenues of research into how this benefit might be used in practice to improve binaural auditory perception.

15.
JASA Express Lett ; 1(10): 105202, 2021 10.
Article in English | MEDLINE | ID: mdl-36154217

ABSTRACT

The present investigation tested whether there is cross-interference between current electromagnetic articulography (EMA) and cochlear implants (CIs). In an initial experiment, we calibrated EMA sensors with and without a CI present in the EMA field, and measured impedances of all CI electrodes when in and out of the EMA field. In a subsequent experiment, head reference sensor positions were recorded during a speaking task for a normal-hearing talker with and without a CI present in the EMA field. Results revealed minimal interference between the devices, suggesting that EMA is a promising method for assessing speech motor skills in CI users.


Subjects
Cochlear Implantation, Cochlear Implants, Electromagnetic Phenomena, Motor Skills, Speech
16.
Front Neurosci ; 15: 725093, 2021.
Article in English | MEDLINE | ID: mdl-35087369

ABSTRACT

In multi-talker listening environments, the overlap of different voice streams may distort each source's individual message, causing deficits in comprehension. Voice characteristics, such as pitch and timbre, are major dimensions of auditory perception and play a vital role in grouping and segregating incoming sounds based on their acoustic properties. The current study investigated how pitch and timbre cues (determined by fundamental frequency, notated as F0, and spectral slope, respectively) affect perceptual integration and segregation of complex-tone sequences within an auditory streaming paradigm. Twenty normal-hearing listeners participated in a traditional auditory streaming experiment using two alternating sequences of harmonic tone complexes A and B with manipulated F0 and spectral slope. Grouping ranges, the F0/spectral slope ranges over which auditory grouping occurs, were measured for various F0/spectral slope differences between tones A and B. Results demonstrated that grouping ranges were maximal in the absence of F0/spectral slope differences between tones A and B and decreased by a factor of two as the differences increased to ±1 semitone in F0 and ±1 dB/octave in spectral slope. In other words, increased differences in either F0 or spectral slope allowed listeners to more easily distinguish between the harmonic stimuli, and thus group them together less. These findings suggest that pitch/timbre difference cues play an important role in how we perceive harmonic sounds in an auditory stream, reflecting our ability to group or segregate human voices in a multi-talker listening environment.
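
For reference, the semitone spacing used here converts to frequency as

    f = f_{0} \cdot 2^{\,n/12},

so a ±1-semitone F0 difference is a factor of 2^{±1/12} ≈ 1.059 or 0.944 (about a 5.9% shift); for a purely hypothetical 200-Hz reference F0 (the abstract does not give the reference), tone B would fall near 211.9 Hz or 188.8 Hz.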

17.
Front Pediatr ; 9: 771192, 2021.
Article in English | MEDLINE | ID: mdl-34976894

ABSTRACT

Prenatal infections can have adverse effects on an infant's hearing, speech, and language development. Congenital cytomegalovirus (CMV) and human immunodeficiency virus (HIV) are two such infections that may lead to these complications, especially when left untreated. CMV is commonly associated with sensorineural hearing loss in children, and it can also be associated with anatomical abnormalities in the central nervous system regions responsible for speech, language, and intellectual acquisition. In terms of speech, language, and hearing, HIV is most associated with conductive and/or sensorineural hearing loss and expressive language deficits. Children born with these infections may benefit from cochlear implantation for severe to profound sensorineural hearing loss and/or speech therapy for speech/language deficits. The simultaneous presence of CMV and HIV in infants has not been thoroughly studied, but one may hypothesize that these speech, language, and hearing deficits would be present with potentially greater severity. Early identification of the infection in combination with early intervention strategies yields better results for these children than no identification or intervention. The purpose of this review was to investigate how congenital CMV and/or HIV may affect hearing, speech, and language development in children, and the importance of early identification for these populations.

18.
Ear Hear ; 41(6): 1450-1460, 2020.
Article in English | MEDLINE | ID: mdl-33136622

ABSTRACT

OBJECTIVES: Individuals who use hearing aids (HAs) or cochlear implants (CIs) can experience broad binaural pitch fusion, such that sounds differing in pitch by as much as 3 to 4 octaves are perceptually integrated across ears. Previously, it was shown in HA users that the fused pitch is a weighted average of the two monaural pitches, ranging from equal weighting to dominance by the lower pitch. The goal of this study was to systematically measure the fused pitches in adult CI users, and determine whether CI users experience similar pitch averaging effects as observed in HA users. DESIGN: Twelve adult CI users (Cochlear Ltd, Sydney, Australia) participated in this study: six bimodal CI users, who wear a CI with a contralateral HA, and six bilateral CI users. Stimuli to HA ears were acoustic pure tones, and stimuli to CI ears were biphasic pulse trains delivered to individual electrodes. Fusion ranges, the ranges of frequencies/electrodes in the comparison ear that were fused with a single electrode (electrode 22, 18, 12, or 6) in the reference ear, were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus. Once the fusion ranges were measured, the fused binaural pitch of a reference-pair stimulus combination was measured by finding a pitch match to monaural comparison stimuli presented to the paired stimulus ear. RESULTS: Fusion pitch weighting in CI users varied depending on the pitch difference of the reference-pair stimulus combination, with equal pitch averaging occurring for stimuli closer in pitch and lower pitch dominance occurring for stimuli farther apart in pitch. The averaging region was typically 0.5 to 2.3 octaves around the reference for bimodal CI users and 0.4 to 1.5 octaves for bilateral CI users. In some cases, a bias in the averaging region was observed toward the ear with greater stimulus variability. CONCLUSIONS: Fusion pitch weighting effects in CI users were similar to those observed previously in HA users. However, CI users showed greater inter-subject variability in both pitch averaging ranges and bias effects. These findings suggest that binaural pitch averaging could be a common underlying mechanism in hearing-impaired listeners.
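
One natural way to formalize the weighted-average fusion pitch described above (the abstract does not state the averaging scale, so the log-frequency form here is an assumption rather than the authors' model) is

    \hat{f} = 2^{\,w \log_{2} f_{1} + (1 - w) \log_{2} f_{2}},

where f1 and f2 are the two monaural pitches, w = 0.5 gives equal averaging, and w approaching 1 (with f1 taken as the lower pitch) gives lower-pitch dominance.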


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Hearing Loss, Adult, Australia, Humans
19.
Ear Hear ; 41(6): 1545-1559, 2020.
Article in English | MEDLINE | ID: mdl-33136630

ABSTRACT

OBJECTIVES: Binaural pitch fusion is the perceptual integration of stimuli that evoke different pitches between the ears into a single auditory image. Adults who use hearing aids (HAs) or cochlear implants (CIs) often experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3 to 4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The main goal of this study was to measure binaural pitch fusion in children with different hearing device combinations and compare results across groups and with adults. A second goal was to examine the relationship of binaural pitch fusion to interaural pitch differences or pitch match range, a measure of sequential pitch discriminability. DESIGN: Binaural pitch fusion was measured in children between the ages of 6.1 and 11.1 years with bilateral HAs (n = 9), bimodal CI (n = 10), bilateral CIs (n = 17), as well as normal-hearing (NH) children (n = 21). Depending on device combination, stimuli were pure tones or electric pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus to find the range that fused with the reference stimulus. Interaural pitch match functions were measured using sequential presentation of reference and comparison stimuli, and varying the comparison stimulus to find the pitch match center and range. RESULTS: Children with bilateral HAs had significantly broader binaural pitch fusion than children with NH, bimodal CI, or bilateral CIs. Children with NH and bilateral HAs, but not children with bimodal or bilateral CIs, had significantly broader fusion than adults with the same hearing status and device configuration. In children with bilateral CIs, fusion range was correlated with several variables that were also correlated with each other: pure-tone average in the second implanted ear before CI, and duration of prior bilateral HA, bimodal CI, or bilateral CI experience. No relationship was observed between fusion range and pitch match differences or range. CONCLUSIONS: The findings suggest that binaural pitch fusion is still developing in this age range and depends on hearing device combination but not on interaural pitch differences or discriminability.


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Speech Perception, Adult, Child, Hearing, Hearing Tests, Humans
20.
Ear Hear ; 41(6): 1772-1774, 2020.
Article in English | MEDLINE | ID: mdl-33136650

ABSTRACT

OBJECTIVES: Vestibular reflexes have traditionally formed the cornerstone of vestibular evaluation, but perceptual tests have recently gained attention for use in research studies and potential clinical applications. However, the unknown reliability of perceptual thresholds limits their current utility. This is addressed here by establishing the test-retest reliability of vestibular perceptual testing. DESIGN: Perceptual detection thresholds for earth-vertical, yaw-axis rotations were collected in 15 young healthy people. Participants were tested at two time points (baseline, and 5 to 14 days later) using an adaptive psychophysical procedure. RESULTS: Thresholds to 1 Hz rotations ranged from 0.69 to 2.99°/s (mean: 1.49°/s; SD: 0.63). They demonstrated an excellent intraclass correlation (0.92; 95% confidence interval: 0.77 to 0.97) with a minimum detectable difference of 0.45°/s. CONCLUSIONS: The excellent test-retest reliability of perceptual vestibular testing supports its use as a research tool and motivates further exploration of its use as a novel clinical technique.
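
For context, a minimum detectable difference of this kind is typically derived from the reliability statistics as

    \mathrm{SEM} = \mathrm{SD}\sqrt{1 - \mathrm{ICC}}, \qquad \mathrm{MDD}_{95} = 1.96\,\sqrt{2}\,\mathrm{SEM};

the abstract does not show the computation, but plugging in the rounded values reported here (SD = 0.63°/s, ICC = 0.92) gives SEM ≈ 0.18°/s and MDD ≈ 0.49°/s, in the same range as the reported 0.45°/s, with the small discrepancy presumably reflecting unrounded inputs.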


Subjects
Vestibule, Labyrinth, Humans, Vestibulo-Ocular Reflex, Reproducibility of Results