Results 1 - 20 of 79
1.
Ear Hear ; 44(2): 318-329, 2023.
Article in English | MEDLINE | ID: mdl-36395512

ABSTRACT

OBJECTIVES: Some cochlear implant (CI) users are fitted with a CI in each ear ("bilateral"), while others have a CI in one ear and a hearing aid in the other ("bimodal"). Presently, evaluation of the benefits of bilateral or bimodal CI fitting does not take into account the integration of frequency information across the ears. This study tests the hypothesis that CI listeners, especially bimodal CI users, with a more precise integration of frequency information across ears ("sharp binaural pitch fusion") will derive greater benefit from voice gender differences in a multi-talker listening environment.

DESIGN: Twelve bimodal CI users and twelve bilateral CI users participated. First, binaural pitch fusion ranges were measured using the simultaneous, dichotic presentation of reference and comparison stimuli (electric pulse trains for CI ears and acoustic tones for HA ears) in opposite ears, with reference stimuli fixed and comparison stimuli varied in frequency/electrode to find the range perceived as a single sound. Direct electrical stimulation was delivered to implanted ears through the research interface, which allowed selective stimulation of one electrode at a time, and acoustic stimulation was delivered to the non-implanted ears through headphones. Second, speech-on-speech masking performance was measured to estimate masking release by voice gender difference between target and maskers (VGRM). The VGRM was calculated as the difference in speech recognition thresholds for target sounds in the presence of same-gender versus different-gender maskers.

RESULTS: Voice gender differences between target and masker talkers improved speech recognition performance for the bimodal CI group, but not the bilateral CI group. The bimodal CI users who benefited the most from voice gender differences were those with the narrowest range of acoustic frequencies that fused into a single sound with stimulation of a single electrode in the CI of the opposite ear. There was no comparable voice gender benefit of a narrow binaural fusion range for the bilateral CI users.

CONCLUSIONS: The findings suggest that broad binaural fusion reduces the acoustic information available for differentiating individual talkers in bimodal CI users, but not in bilateral CI users. In addition, for bimodal CI users with narrow binaural fusion who benefit from voice gender differences, bilateral implantation could lead to a loss of that benefit and impair their ability to selectively attend to one talker in the presence of multiple competing talkers. The results suggest that binaural pitch fusion, along with residual hearing and other factors, could be important to consider when assessing bimodal and bilateral CI users.
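As a concrete illustration of the VGRM metric defined above, masking release is simply the difference between the speech recognition thresholds (SRTs) measured with same-gender and different-gender maskers. The sketch below uses made-up threshold values, not data from the study.

```python
def voice_gender_release_from_masking(srt_same_gender_db, srt_diff_gender_db):
    """VGRM in dB: how much the speech recognition threshold improves when
    target and maskers differ in voice gender. Positive values mean the
    gender difference helped."""
    return srt_same_gender_db - srt_diff_gender_db

# Hypothetical listener: SRT of -2 dB with same-gender maskers and
# -8 dB with different-gender maskers yields 6 dB of masking release.
print(voice_gender_release_from_masking(-2.0, -8.0))  # 6.0
```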


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Aids , Speech Perception , Humans , Sex Factors
2.
J Acoust Soc Am ; 154(1): 379-387, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37462921

ABSTRACT

Auditory difficulties reported by normal-hearing Veterans with a history of blast exposure are primarily thought to stem from processing deficits in the central nervous system. However, previous work on difficulties understanding speech in noise in this patient population has considered only peripheral hearing thresholds in the standard audiometric range. Recent research suggests that variability in extended high-frequency (EHF; >8 kHz) hearing sensitivity may contribute to speech understanding deficits in normal-hearing individuals. Therefore, this work was designed to identify the effects of blast exposure on several common clinical speech understanding measures and EHF hearing sensitivity. This work also aimed to determine whether variability in EHF hearing sensitivity contributes to speech understanding difficulties in normal-hearing blast-exposed Veterans. Data from 41 normal- or near-normal-hearing Veterans with a history of blast exposure and 31 normal- or near-normal-hearing control participants with no history of head injury were employed in this study. Analysis identified an effect of blast exposure on several speech understanding measures but showed no statistically significant differences in EHF thresholds between participant groups. Data showed that variability in EHF hearing sensitivity did not contribute to group-related differences in speech understanding, although study limitations affect the interpretation of these results.


Subject(s)
Speech Perception , Speech , Humans , Auditory Threshold/physiology , Speech Perception/physiology , Hearing/physiology , Hearing Tests
3.
J Acoust Soc Am ; 153(1): 316, 2023 01.
Article in English | MEDLINE | ID: mdl-36732214

ABSTRACT

This study validates a new Spanish-language version of the Coordinate Response Measure (CRM) corpus using a well-established measure of spatial release from masking (SRM). Participants were 96 Spanish-speaking young adults without hearing complaints in Mexico City. To present the Spanish-language SRM test, we created new recordings of the CRM with Spanish-language translations and updated the freely available app (PART; https://ucrbraingamecenter.github.io/PART_Utilities/) to present materials in Spanish. In addition to SRM, we collected baseline data on a battery of non-speech auditory assessments, including detection of frequency modulations, temporal gaps, and modulated broadband noise in the temporal, spectral, and spectrotemporal domains. Data demonstrate that the newly developed speech and non-speech tasks show reliability similar to that previously reported for English-speaking populations. This study demonstrates an approach by which auditory assessment for clinical and basic research can be extended to Spanish-speaking populations for whom testing platforms are not currently available.


Subject(s)
Speech Perception , Speech , Young Adult , Humans , Mexico , Reproducibility of Results , Language , Speech Perception/physiology
4.
J Acoust Soc Am ; 152(2): 807, 2022 08.
Article in English | MEDLINE | ID: mdl-36050190

ABSTRACT

Remote testing of auditory function can be transformative to both basic research and hearing healthcare; however, historically, many obstacles have limited remote collection of reliable and valid auditory psychometric data. Here, we report performance on a battery of auditory processing tests using a remotely administered system, Portable Automatic Rapid Testing. We compare a previously reported dataset collected in a laboratory setting with the same measures obtained using uncalibrated, participant-owned devices in remote settings (experiment 1, n = 40), remotely with and without calibrated hardware (experiment 2, n = 36), and in the laboratory with and without calibrated hardware (experiment 3, n = 58). Results were well-matched across datasets and had similar reliability, but overall performance was slightly worse than published norms. Analyses of potential nuisance factors such as environmental noise, distraction, or lack of calibration failed to provide reliable evidence that these factors contributed to the observed variance in performance. These data indicate the feasibility of remote testing of suprathreshold auditory processing using participants' own devices. Although the current investigation was limited to young participants without hearing difficulties, its outcomes demonstrate the potential for large-scale, remote hearing testing of more hearing-diverse populations both to advance basic science and to establish the clinical viability of auditory remote testing.


Subject(s)
Hearing Loss , Hearing Tests , Auditory Perception , Hearing , Humans , Reproducibility of Results
5.
J Acoust Soc Am ; 151(5): 3116, 2022 05.
Article in English | MEDLINE | ID: mdl-35649891

ABSTRACT

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments at the cost of reduced control of environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors from relatively smaller sample sizes and unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online in the form of a set of Wiki pages and summarized in this report. This report outlines the state of the art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies to demonstrate feasibility in practice.


Subject(s)
Acoustics , Auditory Perception , Attention/physiology , Humans , Prospective Studies , Sound
6.
Ear Hear ; 42(1): 106-121, 2021.
Article in English | MEDLINE | ID: mdl-32520849

ABSTRACT

OBJECTIVES: Veterans who have been exposed to high-intensity blast waves frequently report persistent auditory difficulties such as problems with speech-in-noise (SIN) understanding, even when hearing sensitivity remains normal. However, these subjective reports have proven challenging to corroborate objectively. Here, we sought to determine whether use of complex stimuli and challenging signal contrasts in auditory evoked potential (AEP) paradigms, rather than the traditional use of simple stimuli and easy signal contrasts, improved the ability of these measures to (1) distinguish between blast-exposed Veterans with auditory complaints and neurologically normal control participants, and (2) predict behavioral measures of SIN perception.

DESIGN: A total of 33 adults (aged 19-56 years) took part in this study, including 17 Veterans exposed to high-intensity blast waves within the past 10 years and 16 neurologically normal control participants matched for age and hearing status with the Veteran participants. All participants completed the following test measures: (1) a questionnaire probing perceived hearing abilities; (2) behavioral measures of SIN understanding including the BKB-SIN, the AzBio presented at 0 and +5 dB signal-to-noise ratios (SNRs), and a word-level consonant-vowel-consonant test presented at +5 dB SNR; and (3) electrophysiological tasks involving oddball paradigms in response to simple tones (500 Hz standard, 1000 Hz deviant) and complex speech syllables (/ba/ standard, /da/ deviant) presented in quiet and in four-talker speech babble at an SNR of +5 dB.

RESULTS: Blast-exposed Veterans reported significantly greater auditory difficulties compared to control participants. Behavioral performance on tests of SIN perception was generally, but not significantly, poorer for blast-exposed participants than for controls. Latencies of P3 responses to tone signals were significantly longer among blast-exposed participants compared to control participants regardless of background condition, though responses to speech signals were similar across groups. For cortical AEPs, no significant interactions were found between group membership and either stimulus type or background. P3 amplitudes measured in response to signals in background babble accounted for 30.9% of the variance in subjective auditory reports. Behavioral SIN performance was best predicted by a combination of N1 and P2 responses to signals in quiet, which accounted for 69.6% and 57.4% of the variance on the AzBio at 0 dB SNR and the BKB-SIN, respectively.

CONCLUSIONS: Although blast-exposed participants reported far more auditory difficulties compared to controls, use of complex stimuli and challenging signal contrasts in cortical and cognitive AEP measures failed to reveal larger group differences than responses to simple stimuli and easy signal contrasts. Despite this, only P3 responses to signals presented in background babble were predictive of subjective auditory complaints. In contrast, cortical N1 and P2 responses were predictive of behavioral SIN performance but not subjective auditory complaints, and use of challenging background babble generally did not improve performance predictions. These results suggest that challenging stimulus protocols are more likely to tap into perceived auditory deficits, but may not be beneficial for predicting performance on clinical measures of SIN understanding. Finally, these results should be interpreted with caution since blast-exposed participants did not perform significantly more poorly on tests of SIN perception.


Subject(s)
Speech Perception , Veterans , Adult , Evoked Potentials, Auditory , Humans , Noise , Speech
7.
J Acoust Soc Am ; 149(3): 1434, 2021 03.
Article in English | MEDLINE | ID: mdl-33765775

ABSTRACT

Traditionally, real-time generation of spectro-temporally modulated noise has been performed on a linear amplitude scale, partially due to computational constraints. Experiments often require modulation that is sinusoidal on a logarithmic amplitude scale as a result of the many perceptual and physiological measures which scale linearly with exponential changes in the signal magnitude. A method is presented for computing exponential spectro-temporal modulation, showing that it can be expressed analytically as a sum over linearly offset sidebands with component amplitudes equal to the values of the modified Bessel function of the first kind. This approach greatly improves the efficiency and precision of stimulus generation over current methods, facilitating real-time generation for a broad range of carrier and envelope signals.
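The sideband identity the method exploits can be checked numerically for a single sinusoidal carrier. This is a minimal sketch with illustrative parameters; the paper applies the expansion to spectro-temporally modulated noise built from many carriers.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_k

fs, dur = 48000, 1.0
t = np.arange(int(fs * dur)) / fs
fc, fm, m = 2000.0, 4.0, 1.0   # carrier (Hz), modulation rate (Hz), log-amplitude modulation depth

# Direct construction: modulation that is sinusoidal on a logarithmic amplitude scale.
direct = np.exp(m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# Sideband construction: exp(m cos x) = I_0(m) + 2 * sum_k I_k(m) cos(k x), so the
# exponentially modulated carrier is a sum of components offset by multiples of fm
# with amplitudes given by the modified Bessel functions.
sideband = iv(0, m) * np.cos(2 * np.pi * fc * t)
for k in range(1, 20):  # I_k(m) decays rapidly with k, so a short sum suffices
    sideband += iv(k, m) * (np.cos(2 * np.pi * (fc + k * fm) * t)
                            + np.cos(2 * np.pi * (fc - k * fm) * t))

print(np.max(np.abs(direct - sideband)))  # agrees to numerical precision
```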


Subject(s)
Noise , Acoustic Stimulation
8.
J Acoust Soc Am ; 150(2): 745, 2021 08.
Article in English | MEDLINE | ID: mdl-34470296

ABSTRACT

Frequency modulation (FM) detection at low modulation frequencies is commonly used as an index of temporal fine-structure processing. The present study evaluated the rate of improvement in monaural and dichotic FM across a range of test parameters. In experiment I, dichotic and monaural FM detection was measured as a function of duration and modulator starting phase. Dichotic FM thresholds were lower than monaural FM thresholds and the modulator starting phase had no effect on detection. Experiment II measured monaural FM detection for signals that differed in modulation rate and duration such that the improvement with duration in seconds (carrier) or cycles (modulator) was compared. Monaural FM detection improved monotonically with the number of modulation cycles, suggesting that the modulator is extracted prior to detection. Experiment III measured dichotic FM detection for shorter signal durations to test the hypothesis that dichotic FM relies primarily on the signal onset. The rate of improvement decreased as duration increased, which is consistent with the use of primarily onset cues for the detection of dichotic FM. These results establish that improvement with duration occurs as a function of the modulation cycles at a rate consistent with the independent-samples model for monaural FM, but later cycles contribute less to detection in dichotic FM.
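For reference, a sinusoidal FM tone with an explicit modulator starting phase can be generated as below. The parameters are illustrative, and the opposite-phase construction shown for the dichotic condition is a common convention assumed here, not a statement of the study's exact stimuli.

```python
import numpy as np

fs, dur = 48000, 1.0
t = np.arange(int(fs * dur)) / fs
fc, fm, peak_dev = 500.0, 2.0, 10.0  # carrier (Hz), modulation rate (Hz), peak frequency deviation (Hz)

def fm_tone(start_phase):
    # Instantaneous frequency fc + peak_dev * sin(2*pi*fm*t + start_phase);
    # integrating that frequency trajectory gives the phase function below.
    return np.sin(2 * np.pi * fc * t
                  - (peak_dev / fm) * np.cos(2 * np.pi * fm * t + start_phase))

monaural = fm_tone(start_phase=0.0)
# One common dichotic-FM construction: the same carrier in both ears with the
# modulator inverted in one ear, creating dynamic interaural phase differences.
dichotic = np.column_stack([fm_tone(0.0), fm_tone(np.pi)])
```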


Subject(s)
Cues , Time Perception , Auditory Threshold , Time Factors
9.
J Neurophysiol ; 123(3): 936-944, 2020 03 01.
Article in English | MEDLINE | ID: mdl-31940239

ABSTRACT

Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation.

NEW & NOTEWORTHY: Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
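One simple way such a cue-combination model could be parameterized is sketched below. This is an assumption for illustration only, not the fitted model or weights from the study; it uses the fact that, for sinusoidal rotation at frequency f with peak velocity Ω, peak acceleration is 2πfΩ and peak displacement is Ω/(2πf).

```python
import numpy as np

def vestibular_velocity_threshold(f_hz, w_vel, w_acc):
    """Peak-velocity threshold if detection requires a fixed criterion on a
    weighted sum of peak velocity and peak acceleration cues:
    Omega * (w_vel + 2*pi*f*w_acc) = 1."""
    return 1.0 / (w_vel + 2.0 * np.pi * f_hz * w_acc)

def auditory_velocity_threshold(f_hz, w_vel, w_disp):
    """Analogous form with velocity and displacement cues:
    Omega * (w_vel + w_disp / (2*pi*f)) = 1."""
    return 1.0 / (w_vel + w_disp / (2.0 * np.pi * f_hz))

freqs = np.array([0.1, 0.2, 0.5, 1.0])
# Illustrative weights only; they reproduce the qualitative pattern described
# above (auditory thresholds lower at low frequencies, vestibular at high).
print(vestibular_velocity_threshold(freqs, w_vel=0.2, w_acc=0.15))
print(auditory_velocity_threshold(freqs, w_vel=0.3, w_disp=1.2))
```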


Subject(s)
Auditory Perception/physiology , Motion Perception/physiology , Proprioception/physiology , Sensory Thresholds/physiology , Sound Localization/physiology , Space Perception/physiology , Adult , Female , Humans , Male , Young Adult
10.
J Acoust Soc Am ; 148(2): 526, 2020 08.
Article in English | MEDLINE | ID: mdl-32873000

ABSTRACT

A classic paradigm used to quantify the perceptual weighting of binaural spatial cues requires a listener to adjust the value of one cue, while the complementary cue is held constant. Adjustments are made until the auditory percept appears centered in the head, and the values of both cues are recorded as a trading relation (TR), most commonly in µs interaural time difference per dB interaural level difference. Interestingly, existing literature has shown that TRs differ according to the cue being adjusted. The current study investigated whether cue-specific adaptation, which might arise due to the continuous, alternating presentation of signals during adjustment tasks, could account for this poorly understood phenomenon. Three experiments measured TRs via adjustment and via lateralization of single targets in virtual reality (VR). Targets were 500 Hz pure tones preceded by silence or by adapting trains that held one of the cues constant. VR removed visual anchors and provided an intuitive response technique during lateralization. The pattern of results suggests that adaptation can account for cue-dependent TRs. In addition, VR seems to be a viable tool for psychophysical tasks.
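A trading relation from a single centering adjustment reduces to a ratio of the recorded cue values. The sketch below uses hypothetical numbers, not data from the study.

```python
def trading_relation_us_per_db(itd_us, ild_db):
    """Time-intensity trading relation in microseconds of ITD per dB of ILD,
    taken from the cue values recorded when the auditory image is centered
    (the two cues point toward opposite ears)."""
    return itd_us / ild_db

# Hypothetical adjustment: a 3-dB ILD toward one ear was nulled by a
# 120-microsecond ITD toward the other ear -> trading relation of 40 us/dB.
print(trading_relation_us_per_db(120.0, 3.0))  # 40.0
```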


Subject(s)
Sound Localization , Virtual Reality , Acoustic Stimulation , Cues , Time Factors
11.
J Acoust Soc Am ; 148(4): 1831, 2020 10.
Article in English | MEDLINE | ID: mdl-33138479

ABSTRACT

This study aims to determine the degree to which Portable Automated Rapid Testing (PART), a freely available program running on a tablet computer, is capable of reproducing standard laboratory results. Undergraduate students were assigned to one of three within-subject conditions that examined repeatability of performance on a battery of psychoacoustical tests of temporal fine structure processing, spectro-temporal amplitude modulation, and targets in competition. The repeatability condition examined test/retest with the same system, the headphones condition examined the effects of varying headphones (passive and active noise-attenuating), and the noise condition examined repeatability in the presence of recorded cafeteria noise. In general, performance on the test battery showed high repeatability, even across manipulated conditions, and was similar to that reported in the literature. These data serve as validation that suprathreshold psychoacoustical tests can be made accessible to run on consumer-grade hardware and perform in less controlled settings. This dataset also provides a distribution of thresholds that can be used as a normative baseline against which auditory dysfunction can be identified in future work.


Subject(s)
Hearing Tests/instrumentation , Auditory Threshold , Computers, Handheld , Humans , Noise , Young Adult
12.
J Acoust Soc Am ; 147(2): EL201, 2020 02.
Article in English | MEDLINE | ID: mdl-32113282

ABSTRACT

Measures of signal-in-noise neural encoding may improve understanding of the hearing-in-noise difficulties experienced by many individuals in everyday life. Usually noise results in weaker envelope following responses (EFRs); however, some studies demonstrate EFR enhancements. This experiment tested whether noise-induced enhancements in EFRs are demonstrated with simple 500- and 1000-Hz pure tones amplitude modulated at 110 Hz. Most of the 12 young normal-hearing participants demonstrated enhanced encoding of the 110-Hz fundamental in a noise background compared to quiet; in contrast, responses at the harmonics were decreased in noise relative to quiet conditions. Possible mechanisms of such an enhancement are discussed.
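The stimuli described above are sinusoidally amplitude-modulated (SAM) tones; a minimal generation sketch follows. The sampling rate, duration, and modulation depth are illustrative assumptions, not the study's exact values.

```python
import numpy as np

fs, dur = 48000, 1.0
t = np.arange(int(fs * dur)) / fs
fm, depth = 110.0, 1.0  # modulation rate (Hz) and depth (100% assumed)

def sam_tone(fc_hz):
    """Pure tone at fc_hz amplitude modulated at 110 Hz; the envelope following
    response is measured at the 110-Hz fundamental and its harmonics."""
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc_hz * t)

stim_500 = sam_tone(500.0)
stim_1000 = sam_tone(1000.0)
```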


Subject(s)
Evoked Potentials, Auditory , Noise , Acoustic Stimulation , Adult , Hearing , Humans , Noise/adverse effects
13.
J Acoust Soc Am ; 146(5): 3849, 2019 11.
Article in English | MEDLINE | ID: mdl-31795660

ABSTRACT

Tinnitus is one of the predicted perceptual consequences of cochlear synaptopathy, a type of age-, noise-, or drug-induced auditory damage that has been demonstrated in animal models to cause homeostatic changes in central auditory gain. Although synaptopathy has been observed in human temporal bones, assessment of this condition in living humans is limited to indirect non-invasive measures such as the auditory brainstem response (ABR). In animal models, synaptopathy is associated with a reduction in ABR wave I amplitude at suprathreshold stimulus levels. Several human studies have explored the relationship between wave I amplitude and tinnitus, with conflicting results. This study investigates the hypothesis that reduced peripheral auditory input due to synaptic/neuronal loss is associated with tinnitus. Wave I amplitude data from 193 individuals [43 with tinnitus (22%), 150 without tinnitus (78%)], who participated in up to 3 out of 4 different studies, were included in a logistic regression analysis to estimate the relationship between wave I amplitude and tinnitus at a variety of stimulus levels and frequencies. Statistical adjustment for sex and distortion product otoacoustic emissions (DPOAEs) was included. The results suggest that smaller wave I amplitudes and/or lower DPOAE levels are associated with an increased probability of tinnitus.
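The analysis structure described above can be sketched with a standard logistic regression. The placeholder data, column names, and units below are assumptions for illustration; they are not the study's data or its exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data with the same general structure as the analysis:
# per-participant ABR wave I amplitude, DPOAE level, sex, and tinnitus status.
rng = np.random.default_rng(0)
n = 193
df = pd.DataFrame({
    "wave_i_uv": rng.normal(0.35, 0.10, n),   # wave I amplitude (assumed µV)
    "dpoae_db": rng.normal(10.0, 5.0, n),     # DPOAE level (assumed dB SPL)
    "sex": rng.choice(["F", "M"], n),
    "tinnitus": rng.integers(0, 2, n),        # 1 = reports tinnitus
})

# Logistic regression of tinnitus status on wave I amplitude and DPOAE level,
# adjusted for sex, mirroring the relationship estimated in the abstract.
model = smf.logit("tinnitus ~ wave_i_uv + dpoae_db + C(sex)", data=df).fit(disp=False)
print(model.summary())
```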


Subject(s)
Cochlear Nerve/physiopathology , Evoked Potentials, Auditory, Brain Stem , Tinnitus/physiopathology , Adult , Auditory Perception , Diagnostic Self Evaluation , Female , Humans , Male , Middle Aged , Noise , Synaptic Transmission , Tinnitus/diagnosis
14.
Ear Hear ; 38(1): e13-e21, 2017.
Article in English | MEDLINE | ID: mdl-27556520

ABSTRACT

OBJECTIVE: Spatial release from masking (SRM) can increase speech intelligibility in complex listening environments. The goal of the present study was to document how speech-in-speech stimuli could best be processed to encourage optimum SRM for listeners who represent a range of ages and amounts of hearing loss. We examined the effects of equating stimulus audibility among listeners, presenting stimuli at uniform sensation levels (SLs), and filtering stimuli at two separate bandwidths.

DESIGN: Seventy-one participants completed two speech intelligibility experiments (36 listeners in experiment 1; all 71 in experiment 2) in which a target phrase from the coordinate response measure (CRM) and two masking phrases from the CRM were presented simultaneously via earphones using a virtual spatial array, such that the target sentence was always at a 0-degree azimuth angle and the maskers were either colocated or positioned at ±45 degrees. Experiments 1 and 2 examined the impacts of SL, age, and hearing loss on SRM. Experiment 2 also assessed the effects of stimulus bandwidth on SRM.

RESULTS: Overall, listeners' ability to achieve SRM improved with increased SL. Younger listeners with less hearing loss achieved more SRM than older or hearing-impaired listeners. It was hypothesized that SL and bandwidth would result in dissociable effects on SRM. However, acoustical analysis revealed that effective audible bandwidth, defined as the highest frequency at which the stimulus was audible at both ears, was the best predictor of performance. Thus, increasing SL seemed to improve SRM by increasing the effective bandwidth rather than increasing the level of already audible components.

CONCLUSIONS: Performance for all listeners, regardless of age or hearing loss, improved with an increase in overall SL and/or bandwidth, but the improvement was small relative to the benefits of spatial separation.
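The predictor identified above, effective audible bandwidth, can be computed directly from its definition. The per-ear sensation levels below are hypothetical values for illustration.

```python
import numpy as np

# Hypothetical sensation levels (dB) of the stimulus at each audiometric
# frequency in the two ears; effective audible bandwidth is defined above as
# the highest frequency at which the stimulus is audible in BOTH ears.
freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
sl_left  = np.array([30, 28, 25, 15,  5,  -5])
sl_right = np.array([32, 30, 22, 10,  3, -10])

audible_in_both = (sl_left > 0) & (sl_right > 0)
effective_bandwidth_hz = freqs_hz[audible_in_both].max()
print(effective_bandwidth_hz)  # 4000 Hz for these made-up values
```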


Subject(s)
Hearing Loss/physiopathology , Perceptual Masking , Spatial Processing , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Age Factors , Aged , Audiometry, Pure-Tone , Auditory Threshold , Female , Humans , Male , Middle Aged , Young Adult
15.
Brain Inj ; 31(9): 1183-1187, 2017.
Article in English | MEDLINE | ID: mdl-28981349

ABSTRACT

It has been shown that there is an increased risk for impaired auditory function following traumatic brain injury (TBI) in Veterans. Evidence is strongest in the area of self-report, but behavioural and electrophysiological data have been obtained that are consistent with these complaints. Peripheral and central dysfunction have both been observed. Historically, studies have focused on penetrating head injuries where central injury is more easily documented than in mild closed head injuries, but several recent reports have expanded the literature to include closed head injuries as well. The lack of imaging technology that can identify which closed head injuries are likely to impact auditory function is a significant barrier to accurate diagnosis and rehabilitation. Current behavioural and electrophysiological measures are effective in substantiating the auditory complaints of these patients but leave many questions unanswered. One significant limitation of current approaches is the lack of clear data regarding the potential influence of those mental health comorbidities that are very likely to be present in the Veteran population. In the area of rehabilitation, there are indications that hearing aids and other assistive listening devices may provide benefit, as can auditory training programmes, yet more research needs to be done.


Subject(s)
Brain Injuries, Traumatic/diagnosis , Brain Injuries, Traumatic/therapy , Hearing Loss/diagnosis , Hearing Loss/therapy , Veterans , Blast Injuries/diagnosis , Blast Injuries/epidemiology , Blast Injuries/therapy , Brain Injuries, Traumatic/epidemiology , Hearing Loss/epidemiology , Hearing Tests/methods , Humans , Self Report/standards
16.
J Acoust Soc Am ; 141(2): EL170, 2017 02.
Article in English | MEDLINE | ID: mdl-28253635

ABSTRACT

Interaural differences in time (ITDs) and interaural differences in level (ILDs) contribute to a listener's ability to achieve spatial release from masking (SRM), and help to improve speech intelligibility in noisy environments. In this study, the extent to which ITDs and ILDs contribute to SRM and the relationships with aging and hearing loss were examined. SRM was greatest when stimuli were presented with consistent ITD and ILD, relative to ITD or ILD alone, all of which produced greater SRM than when ITD and ILD cues were in conflict with each other. This pattern was independent of age and hearing loss.


Subject(s)
Cues , Environment , Hearing Loss, Sensorineural/psychology , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Audiometry, Pure-Tone , Auditory Threshold , Female , Hearing Loss, Sensorineural/diagnosis , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Middle Aged , Speech Reception Threshold Test , Time Factors , Young Adult
17.
J Acoust Soc Am ; 141(3): EL185, 2017 03.
Article in English | MEDLINE | ID: mdl-28372125

ABSTRACT

Early reflections have been linked to improved speech intelligibility, while later-arriving reverberant sound has been shown to limit speech understanding. Here, these effects were examined by artificially removing either early reflections or late reflections. Removing late reflections improved performance more for colocated than for spatially separated maskers. Results of a multiple regression analysis suggest that pure-tone average (PTA) is a significant predictor of spatial release from masking (SRM) in all acoustic conditions. Controlling for the effects of PTA, age is a significant predictor of SRM only when early reflections are absent.


Subject(s)
Aging/psychology , Hearing Loss/psychology , Perceptual Masking , Persons With Hearing Impairments/psychology , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Age Factors , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Comprehension , Hearing , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Humans , Motion , Sound , Time Factors , Vibration
18.
Ear Hear ; 37(2): 144-52, 2016.
Article in English | MEDLINE | ID: mdl-26462171

ABSTRACT

OBJECTIVES: Hearing aids are frequently used in reverberant environments; however, relatively little is known about how reverberation affects the processing of signals by modern hearing-aid algorithms. The purpose of this study was to investigate the acoustic and behavioral effects of reverberation and wide-dynamic range compression (WDRC) in hearing aids on consonant identification for individuals with hearing impairment.

DESIGN: Twenty-three listeners with mild to moderate sloping sensorineural hearing loss were tested monaurally under varying degrees of reverberation and WDRC conditions. Listeners identified consonants embedded within vowel-consonant-vowel nonsense syllables. Stimuli were processed to simulate a range of realistic reverberation times and WDRC release times using virtual acoustic simulations. In addition, the effects of these processing conditions were acoustically analyzed using a model of envelope distortion to examine the effects on the temporal envelope.

RESULTS: Aided consonant identification significantly decreased as reverberation time increased. Consonant identification was also significantly affected by WDRC release time. This relationship was such that individuals tended to perform significantly better with longer release times. There was no significant interaction between reverberation and WDRC. The application of the acoustic model to the processed signal showed a close relationship between trends in the behavioral performance and distortion to the temporal envelope resulting from reverberation and WDRC. The results of the acoustic model demonstrated the same trends found in the behavioral data for both reverberation and WDRC.

CONCLUSIONS: Reverberation and WDRC release time both affect aided consonant identification for individuals with hearing impairment, and these condition effects are associated with alterations to the temporal envelope. There was no significant interaction between reverberation and WDRC release time.
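For readers unfamiliar with WDRC release time, the sketch below shows a minimal single-band compressor in which the release time sets how quickly gain recovers after the signal level drops. The parameters are illustrative only; this is not the hearing-aid simulation used in the study.

```python
import numpy as np

def wdrc(x, fs, threshold_db=-40.0, ratio=3.0, attack_ms=5.0, release_ms=100.0):
    """Minimal single-band wide-dynamic range compressor. Levels above
    threshold_db (re full scale) are compressed by `ratio`; attack_ms and
    release_ms control how quickly the level estimate (and hence the gain)
    tracks increases and decreases in signal level."""
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.empty_like(x)
    level = 1e-12
    for i, mag in enumerate(np.abs(x)):
        coeff = attack if mag > level else release   # fast rise, slow decay
        level = coeff * level + (1.0 - coeff) * mag
        env[i] = level
    level_db = 20.0 * np.log10(env + 1e-12)
    gain_db = np.where(level_db > threshold_db,
                       (threshold_db - level_db) * (1.0 - 1.0 / ratio),
                       0.0)
    return x * 10.0 ** (gain_db / 20.0)

fs = 48000
t = np.arange(fs) / fs
# Tone whose level fluctuates (a crude stand-in for a speech envelope).
x = np.sin(2 * np.pi * 1000 * t) * (0.55 + 0.45 * np.sin(2 * np.pi * 4 * t))
y_short_release = wdrc(x, fs, release_ms=12.0)   # gain chases the level dips
y_long_release = wdrc(x, fs, release_ms=800.0)   # gain changes more slowly
```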


Subject(s)
Algorithms , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Speech Perception , Aged , Aged, 80 and over , Data Compression , Female , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Middle Aged
19.
Adv Exp Med Biol ; 894: 83-91, 2016.
Article in English | MEDLINE | ID: mdl-27080649

ABSTRACT

Hearing loss has been shown to reduce speech understanding in spatialized multitalker listening situations, leading to the common belief that spatial processing is disrupted by hearing loss. This paper describes related studies from three laboratories that explored the contribution of reduced target audibility to this deficit. All studies used a stimulus configuration in which a speech target presented from the front was masked by speech maskers presented symmetrically from the sides. Together these studies highlight the importance of adequate stimulus audibility for optimal performance in spatialized speech mixtures and suggest that reduced access to target speech information might explain a substantial portion of the "spatial" deficit observed in listeners with hearing loss.


Subject(s)
Hearing Loss/physiopathology , Speech Intelligibility , Acoustic Stimulation , Adult , Aged , Auditory Threshold , Humans , Perceptual Masking
20.
J Acoust Soc Am ; 140(1): EL73, 2016 07.
Article in English | MEDLINE | ID: mdl-27475216

ABSTRACT

Spatially separating target and masking speech can result in substantial spatial release from masking (SRM) for normal-hearing listeners. In this study, SRM was examined at eight spatial configurations of azimuth angle: maskers co-located with the target (0°) or symmetrically separated by 2°, 4°, 6°, 8°, 10°, 15°, or 30°. Results revealed that different listening groups (young normal-hearing, older normal-hearing, and older hearing-impaired) required different minimum amounts of spatial separation between target and maskers to achieve SRM. The results also indicated that aging was the contributing factor predicting SRM at smaller separations, whereas hearing loss was the contributing factor at larger separations.


Subject(s)
Aging/physiology , Hearing Loss/physiopathology , Perceptual Masking/physiology , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Audiometry, Speech/methods , Auditory Perception , Auditory Threshold , Child , Female , Humans , Male , Middle Aged , Speech Perception