Results 1 - 20 of 10,290
1.
Trends Hear ; 28: 23312165241263485, 2024.
Article in English | MEDLINE | ID: mdl-39099537

ABSTRACT

Older adults with normal hearing or with age-related hearing loss face challenges when listening to speech in noisy environments. To better serve individuals with communication difficulties, precision diagnostics are needed to characterize auditory perceptual and cognitive abilities beyond pure-tone thresholds, as these abilities can be heterogeneous across individuals within the same population. The goal of the present study was to account for this suprathreshold variability and develop characteristic profiles for older adults with normal hearing (ONH) and with hearing loss (OHL). Auditory perceptual and cognitive abilities were tested in ONH (n = 20) and OHL (n = 20) listeners using an abbreviated test battery delivered via portable automated rapid testing. Cluster analyses revealed three main profiles for each group, showing differences in auditory perceptual and cognitive abilities despite similar audiometric thresholds. Analysis of variance showed that the ONH profiles differed in spatial release from masking, speech-in-babble performance, cognition, tone-in-noise detection, and binaural temporal processing, whereas the OHL profiles differed in spatial release from masking, speech-in-babble performance, cognition, and tolerance to background noise. Correlation analyses showed significant relationships between auditory and cognitive abilities in both groups. This study showed that auditory perceptual and cognitive deficits can be present to varying degrees in listeners with audiometrically normal hearing and among listeners with similar degrees of hearing loss. The results underscore the need to take individual differences into consideration and to develop targeted intervention options that go beyond pure-tone thresholds and speech testing.
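As an illustration of the profiling approach described in this abstract, the sketch below clusters a standardized participants-by-measures score matrix and then runs a one-way ANOVA per measure across the resulting profiles. The measure set, the synthetic data, and the choice of k-means with three clusters are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical battery: rows = listeners, columns = suprathreshold measures
# (e.g., spatial release from masking, speech-in-babble SRT, tone-in-noise,
# binaural temporal processing, cognitive score). Values here are random.
rng = np.random.default_rng(0)
scores = rng.normal(size=(20, 5))          # 20 listeners x 5 measures

# Standardize each measure so clustering is not dominated by one scale
z = StandardScaler().fit_transform(scores)

# Three-profile solution, as reported for each group
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)

# One-way ANOVA per measure: do the profiles differ on this ability?
for m in range(z.shape[1]):
    groups = [z[profiles == k, m] for k in range(3)]
    f, p = stats.f_oneway(*groups)
    print(f"measure {m}: F = {f:.2f}, p = {p:.3f}")
```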


Subject(s)
Audiometry, Pure-Tone , Auditory Threshold , Cognition , Noise , Perceptual Masking , Speech Perception , Humans , Male , Cognition/physiology , Female , Aged , Auditory Threshold/physiology , Speech Perception/physiology , Middle Aged , Noise/adverse effects , Acoustic Stimulation , Auditory Perception/physiology , Aged, 80 and over , Hearing/physiology , Age Factors , Case-Control Studies , Presbycusis/diagnosis , Presbycusis/physiopathology , Predictive Value of Tests , Audiology/methods , Individuality , Persons With Hearing Impairments/psychology , Cluster Analysis , Audiometry, Speech/methods
2.
eNeuro ; 11(8)2024 Aug.
Article in English | MEDLINE | ID: mdl-39134409

ABSTRACT

Older listeners often report difficulties understanding speech in noisy environments. It is important to identify where in the auditory pathway hearing-in-noise deficits arise to develop appropriate therapies. We tested how encoding of sounds is affected by masking noise at early stages of the auditory pathway by recording responses of principal cells in the anteroventral cochlear nucleus (AVCN) of aging CBA/CaJ and C57BL/6J mice in vivo. Previous work indicated that masking noise shifts the dynamic range of single auditory nerve fibers (ANFs), leading to elevated tone thresholds. We hypothesized that such threshold shifts could contribute to increased hearing-in-noise deficits with age if susceptibility to masking increased in AVCN units. We tested this by recording the responses of AVCN principal neurons to tones in the presence and absence of masking noise. Surprisingly, we found that masker-induced threshold shifts decreased with age in primary-like units and did not change in choppers. In addition, spontaneous activity decreased in primary-like and chopper units of old mice, with no change in dynamic range or tuning precision. In C57 mice, which undergo early-onset hearing loss, units showed similar changes in threshold and spontaneous rate at younger ages, suggesting they were related to hearing loss and not simply aging. These findings suggest that sound information carried by AVCN principal cells remains largely unchanged with age. Therefore, hearing-in-noise deficits may result from other changes during aging, such as distorted across-channel input from the cochlea and changes in sound coding at later stages of the auditory pathway.
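A toy illustration of how a masker-induced threshold shift could be quantified from single-unit rate-level functions like those recorded here; the detection criterion (spontaneous rate plus two standard deviations) and the synthetic rate-level functions are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

def unit_threshold(levels_db, rates, spont_mean, spont_sd, criterion_sd=2.0):
    """Lowest tone level at which the driven rate exceeds the spontaneous rate
    by a criterion number of standard deviations (illustrative rule)."""
    criterion = spont_mean + criterion_sd * spont_sd
    above = np.where(rates > criterion)[0]
    return levels_db[above[0]] if above.size else np.nan

levels = np.arange(0, 90, 5)                               # dB SPL
# Synthetic rate-level functions for one primary-like unit
rate_quiet = 10 + 90 / (1 + np.exp(-(levels - 30) / 5))
rate_noise = 10 + 90 / (1 + np.exp(-(levels - 45) / 5))    # shifted by masker

thr_quiet = unit_threshold(levels, rate_quiet, spont_mean=10, spont_sd=3)
thr_noise = unit_threshold(levels, rate_noise, spont_mean=10, spont_sd=3)
print(f"masker-induced threshold shift: {thr_noise - thr_quiet:.0f} dB")
```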


Subject(s)
Aging , Cochlear Nucleus , Mice, Inbred C57BL , Mice, Inbred CBA , Noise , Animals , Cochlear Nucleus/physiology , Aging/physiology , Male , Acoustic Stimulation , Neurons/physiology , Female , Auditory Threshold/physiology , Perceptual Masking/physiology , Mice , Action Potentials/physiology
3.
J Acoust Soc Am ; 156(2): 1202-1213, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39158325

ABSTRACT

Band importance functions for speech-in-noise recognition, typically determined in the presence of steady background noise, indicate a negligible role for extended high frequencies (EHFs; 8-20 kHz). However, recent findings indicate that EHF cues support speech recognition in multi-talker environments, particularly when the masker has reduced EHF levels relative to the target. This scenario can occur in natural auditory scenes when the target talker is facing the listener, but the maskers are not. In this study, we measured the importance of five bands from 40 to 20 000 Hz for speech-in-speech recognition by notch-filtering the bands individually. Stimuli consisted of a female target talker recorded from 0° and a spatially co-located two-talker female masker recorded either from 0° or 56.25°, simulating a masker either facing the listener or facing away, respectively. Results indicated peak band importance in the 0.4-1.3 kHz band and a negligible effect of removing the EHF band in the facing-masker condition. However, in the non-facing condition, the peak was broader and EHF importance was higher and comparable to that of the 3.3-8.3 kHz band in the facing-masker condition. These findings suggest that EHFs contain important cues for speech recognition in listening conditions with mismatched talker head orientations.
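A minimal sketch of removing one band from a signal with a zero-phase band-stop (notch) filter, the basic operation behind the band-importance procedure described above. The filter order, band edges, sampling rate, and placeholder signal are assumptions; real stimuli would be speech recordings.

```python
import numpy as np
from scipy import signal

fs = 44100                                  # Hz, assumed sampling rate
rng = np.random.default_rng(1)
speech = rng.standard_normal(fs)            # 1-s placeholder; use real speech

def notch_band(x, fs, lo_hz, hi_hz, order=8):
    """Remove the band [lo_hz, hi_hz] with a zero-phase Butterworth band-stop."""
    sos = signal.butter(order, [lo_hz, hi_hz], btype="bandstop", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, x)

# e.g., remove the 3.3-8.3 kHz band, or the EHF band (8-20 kHz)
no_mid = notch_band(speech, fs, 3300, 8300)
no_ehf = notch_band(speech, fs, 8000, 20000)
```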


Subject(s)
Acoustic Stimulation , Cues , Noise , Perceptual Masking , Recognition, Psychology , Speech Perception , Humans , Female , Speech Perception/physiology , Young Adult , Adult , Male , Audiometry, Speech , Speech Intelligibility , Auditory Threshold , Sound Localization , Speech Acoustics , Sound Spectrography
4.
J Acoust Soc Am ; 156(1): 93-106, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38958486

ABSTRACT

Older adults with hearing loss may experience difficulty recognizing speech in noise due to factors related to attenuation (e.g., reduced audibility and sensation levels, SLs) and distortion (e.g., reduced temporal fine structure, TFS, processing). Furthermore, speech recognition may improve when the amplitude modulation spectrum of the speech and masker are non-overlapping. The current study investigated this by filtering the amplitude modulation spectrum into different modulation rates for speech and speech-modulated noise. The modulation depth of the noise was manipulated to vary the SL of speech glimpses. Younger adults with normal hearing and older adults with normal or impaired hearing listened to natural speech or speech vocoded to degrade TFS cues. Control groups of younger adults were tested on all conditions with spectrally shaped speech and threshold matching noise, which reduced audibility to match that of the older hearing-impaired group. All groups benefitted from increased masker modulation depth and preservation of syllabic-rate speech modulations. Older adults with hearing loss had reduced speech recognition across all conditions. This was explained by factors related to attenuation, due to reduced SLs, and distortion, due to reduced TFS processing, which resulted in poorer auditory processing of speech cues during the dips of the masker.
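A compact sketch of a noise-excited envelope vocoder of the kind used to degrade temporal fine structure while preserving envelope cues; the number of channels, band edges, and envelope cutoff are illustrative assumptions rather than the study's exact parameters.

```python
import numpy as np
from scipy import signal

def noise_vocode(x, fs, n_bands=8, lo=100.0, hi=8000.0, env_cut=64.0):
    """Replace temporal fine structure with noise carriers, keeping band envelopes."""
    edges = np.geomspace(lo, hi, n_bands + 1)        # log-spaced analysis bands
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    env_sos = signal.butter(4, env_cut, btype="low", fs=fs, output="sos")
    for lo_f, hi_f in zip(edges[:-1], edges[1:]):
        band_sos = signal.butter(4, [lo_f, hi_f], btype="bandpass", fs=fs, output="sos")
        band = signal.sosfiltfilt(band_sos, x)
        env = signal.sosfiltfilt(env_sos, np.abs(signal.hilbert(band)))  # envelope
        env = np.maximum(env, 0.0)
        carrier = signal.sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        voc = env * carrier
        # match band RMS so the vocoded band keeps the original band's level
        voc *= np.sqrt(np.mean(band**2) / (np.mean(voc**2) + 1e-12))
        out += voc
    return out
```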


Subject(s)
Acoustic Stimulation , Auditory Threshold , Cues , Noise , Perceptual Masking , Speech Perception , Humans , Speech Perception/physiology , Aged , Noise/adverse effects , Adult , Young Adult , Male , Female , Middle Aged , Age Factors , Recognition, Psychology , Time Factors , Aging/physiology , Presbycusis/physiopathology , Presbycusis/diagnosis , Presbycusis/psychology , Persons With Hearing Impairments/psychology , Aged, 80 and over , Case-Control Studies , Speech Intelligibility
5.
J Acoust Soc Am ; 156(1): 284-298, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38984810

ABSTRACT

This study investigated the effect of different types of phonetic training on potential changes in the production and perception of English vowels by Arabic learners of English. Forty-six Arabic learners of English were randomly assigned to one of three high-variability vowel training programs: Perception training (High Variability Phonetic Training), Production training, and a Hybrid Training program (production and perception training). Pre- and post-tests (vowel identification, category discrimination, speech recognition in noise, and vowel production) showed that all training types led to improvements in perception and production. There was some evidence that improvements were linked to training type: learners in the Perception Training condition improved in vowel identification but not vowel production, while those in the Production Training condition showed only small improvements in performance on perceptual tasks, but greater improvement in production. However, the effects of training modality were moderated by proficiency: high-proficiency learners benefitted more from training than lower-proficiency learners, regardless of training mode.


Subject(s)
Multilingualism , Phonetics , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Acoustics , Learning , Speech Production Measurement , Recognition, Psychology , Perceptual Masking , Noise , Language , Adolescent
6.
J Speech Lang Hear Res ; 67(7): 2454-2472, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38950169

ABSTRACT

PURPOSE: A corpus of English matrix sentences produced by 60 native and nonnative speakers of English was developed as part of a multinational coalition task group. This corpus was tested on a large cohort of U.S. Service members in order to examine the effects of talker nativeness, listener nativeness, masker type, and hearing sensitivity on speech recognition performance in this population. METHOD: A total of 1,939 U.S. Service members (ages 18-68 years) completed this closed-set listening task, including 430 women and 110 nonnative English speakers. Stimuli were produced by native and nonnative speakers of English and were presented in speech-shaped noise and multitalker babble. Keyword recognition accuracy and response times were analyzed. RESULTS: General(ized) linear mixed-effects regression models found that, on the whole, speech recognition performance was lower for listeners who identified as nonnative speakers of English and when listening to speech produced by nonnative speakers of English. Talker and listener effects were more pronounced when listening in a babble masker than in a speech-shaped noise masker. Response times varied as a function of recognition score, with the longest response times found for intermediate levels of performance. CONCLUSIONS: This study found additive effects of talker and listener nonnativeness when listening to speech in background noise. These effects were present in both accuracy and response time measures. No multiplicative effects of talker and listener language background were found. There was little evidence of a negative interaction between talker nonnativeness and hearing impairment, suggesting that these factors may have redundant effects on speech recognition. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.26060191.
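A sketch of the kind of mixed-effects analysis described above, assuming a trial-level data frame with columns for talker and listener nativeness, masker type, listener ID, and (here) log response time; accuracy would instead call for a logistic mixed model. The column names, simulated data, and random-intercept structure are assumptions, and statsmodels' MixedLM is used only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data (random placeholders)
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "listener": rng.integers(0, 100, n).astype(str),
    "talker_native": rng.integers(0, 2, n),
    "listener_native": rng.integers(0, 2, n),
    "masker": rng.choice(["babble", "ssn"], n),
    "log_rt": rng.normal(0.0, 0.3, n),
})

# Linear mixed-effects model with a random intercept per listener
model = smf.mixedlm(
    "log_rt ~ talker_native * listener_native * masker",
    data=df,
    groups=df["listener"],
)
print(model.fit().summary())
```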


Subject(s)
Noise , Perceptual Masking , Speech Intelligibility , Speech Perception , Humans , Female , Adult , Middle Aged , Male , Young Adult , Aged , Adolescent , United States , Perceptual Masking/physiology , Cohort Studies , Language , Military Personnel
7.
J Acoust Soc Am ; 156(1): 511-523, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39013168

ABSTRACT

Echolocating bats rely on precise auditory temporal processing to detect echoes generated by calls that may be emitted at rates reaching 150-200 Hz. High call rates can introduce forward masking perceptual effects that interfere with echo detection; however, bats may have evolved specializations to prevent repetition suppression of auditory responses and facilitate detection of sounds separated by brief intervals. Recovery of the auditory brainstem response (ABR) was assessed in two species that differ in the temporal characteristics of their echolocation behaviors: Eptesicus fuscus, which uses high call rates to capture prey, and Carollia perspicillata, which uses lower call rates to avoid obstacles and forage for fruit. We observed significant species differences in the effects of forward masking on ABR wave 1, in which E. fuscus maintained comparable ABR wave 1 amplitudes when stimulated at intervals of <3 ms, whereas post-stimulus recovery in C. perspicillata required 12 ms. When the intensity of the second stimulus was reduced by 20-30 dB relative to the first, however, C. perspicillata showed greater recovery of wave 1 amplitudes. The results demonstrate that species differences in temporal resolution are established at early levels of the auditory pathway and that these differences reflect auditory processing requirements of species-specific echolocation behaviors.
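One simple way to summarize the forward-masking recovery described above is to fit an exponential recovery of the normalized wave 1 amplitude against the inter-stimulus interval; the data values and the single-exponential form below are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical wave-1 amplitude ratios (second response / unmasked) vs. interval
isi_ms = np.array([1, 2, 3, 6, 12, 24], dtype=float)
ratio = np.array([0.45, 0.60, 0.75, 0.88, 0.97, 1.00])

def recovery(t, tau):
    """Single-exponential recovery toward full amplitude."""
    return 1.0 - np.exp(-t / tau)

(tau,), _ = curve_fit(recovery, isi_ms, ratio, p0=[3.0])
print(f"estimated recovery time constant: {tau:.1f} ms")
```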


Subject(s)
Acoustic Stimulation , Chiroptera , Echolocation , Evoked Potentials, Auditory, Brain Stem , Perceptual Masking , Species Specificity , Animals , Chiroptera/physiology , Acoustic Stimulation/methods , Evoked Potentials, Auditory, Brain Stem/physiology , Time Factors , Male , Female , Auditory Threshold , Auditory Perception/physiology
8.
J Acoust Soc Am ; 156(1): 341-349, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38990038

ABSTRACT

Previous research has shown that learning effects are present for speech intelligibility in temporally modulated (TM) noise, but not in stationary noise. The present study aimed to gain more insight into the factors that might affect the time course (the number of trials required to reach stable performance) and size [the improvement in the speech reception threshold (SRT)] of the learning effect. Two hypotheses were addressed: (1) learning effects are present in both TM and spectrally modulated (SM) noise and (2) the time course and size of the learning effect depend on the amount of masking release caused by either TM or SM noise. Eighteen normal-hearing adults (23-62 years) participated in SRT measurements, in which they listened to sentences in six masker conditions, including stationary, TM, and SM noise conditions. The results showed learning effects in all TM and SM noise conditions, but not for the stationary noise condition. The learning effect was related to the size of masking release: a larger masking release was accompanied by a longer time course of the learning effect and a larger learning effect. The results also indicate that speech is processed differently in SM noise than in TM noise.
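A small sketch of the two quantities discussed above, assuming per-trial SRTs are available for each masker condition; the variable names, example values, and the definition of the learning effect as the first-minus-last-trial SRT difference are assumptions for illustration.

```python
import numpy as np

# Hypothetical SRTs (dB SNR) across successive measurement trials
srt_stationary = np.array([-4.0, -4.2, -4.1, -4.2])
srt_modulated  = np.array([-8.0, -9.5, -10.5, -11.0])   # TM or SM noise

# Masking release: benefit of the fluctuating masker relative to stationary noise
masking_release = srt_stationary[-1] - srt_modulated[-1]

# Learning effect: improvement in SRT from the first to the last trial
learning_effect = srt_modulated[0] - srt_modulated[-1]

print(f"masking release: {masking_release:.1f} dB")
print(f"learning effect in modulated noise: {learning_effect:.1f} dB")
```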


Subject(s)
Acoustic Stimulation , Learning , Noise , Perceptual Masking , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Adult , Young Adult , Male , Speech Perception/physiology , Female , Middle Aged , Speech Reception Threshold Test , Time Factors , Auditory Threshold
9.
Trends Hear ; 28: 23312165241261490, 2024.
Article in English | MEDLINE | ID: mdl-39051703

ABSTRACT

Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was to develop a novel speech-recognition test that combines concepts from different speech-recognition tests to reduce training effects and allow for a large set of speech material. The new test consists of four different words per trial, combined into a meaningful construct with a fixed structure (a "phrase"). Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations and eliminating duplicate (sub-)phrases, a total of 772 phrases remained. Subsequently, the phrases were synthesized using a text-to-speech system. The synthesis significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, speech-recognition scores measured for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNRs) revealed speech-recognition thresholds (SRTs) that varied across phrases by up to 4 dB. The median SRT was -9.1 dB SNR, comparable to existing sentence tests. The psychometric function's slope of 15 percentage points per dB is also comparable and enables efficient use in audiology. In summary, the principle of creating speech material in a modular system has many potential applications.
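The reported SRT and slope fully specify a logistic psychometric function. A worked sketch under the common convention that the slope is the derivative of the function (in proportion correct per dB) at the SRT; the logistic form itself is an assumption, since the abstract does not state the fitted function.

```python
import numpy as np

def psychometric(snr_db, srt_db=-9.1, slope=0.15):
    """Logistic psychometric function; slope is dP/dSNR at the SRT
    (0.15 = 15 percentage points per dB, as reported above)."""
    k = 4.0 * slope                      # logistic rate constant
    return 1.0 / (1.0 + np.exp(-k * (snr_db - srt_db)))

for snr in (-12.0, -9.1, -6.0):
    print(f"SNR {snr:5.1f} dB -> predicted recognition {psychometric(snr):.2f}")
```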


Subject(s)
Recognition, Psychology , Speech Perception , Humans , Male , Female , Adult , Young Adult , Acoustic Stimulation , Speech Reception Threshold Test/methods , Auditory Threshold , Reproducibility of Results , Predictive Value of Tests , Psychometrics , Speech Intelligibility , Signal-To-Noise Ratio , Perceptual Masking
10.
Trends Hear ; 28: 23312165241262517, 2024.
Article in English | MEDLINE | ID: mdl-39051688

ABSTRACT

Listeners with normal audiometric thresholds show substantial variability in their ability to understand speech in noise (SiN). These individual differences have been reported to be associated with a range of auditory and cognitive abilities. The present study addresses the association between SiN processing and the individual susceptibility of short-term memory to auditory distraction (i.e., the irrelevant sound effect [ISE]). In a sample of 67 young adult participants with normal audiometric thresholds, we measured speech recognition performance in a spatial listening task with two interfering talkers (speech-in-speech identification), audiometric thresholds, binaural sensitivity to the temporal fine structure (interaural phase differences [IPD]), serial memory with and without interfering talkers, and self-reported noise sensitivity. Speech-in-speech processing was not significantly associated with the ISE. The most important predictors of high speech-in-speech recognition performance were a large short-term memory span, low IPD thresholds, bilaterally symmetrical audiometric thresholds, and low individual noise sensitivity. Surprisingly, the susceptibility of short-term memory to irrelevant sound accounted for a substantially smaller amount of variance in speech-in-speech processing than the nondisrupted short-term memory capacity. The data confirm the role of binaural sensitivity to the temporal fine structure, although its association with SiN recognition was weaker than in some previous studies. The inverse association between self-reported noise sensitivity and SiN processing deserves further investigation.


Subject(s)
Acoustic Stimulation , Auditory Threshold , Memory, Short-Term , Noise , Perceptual Masking , Recognition, Psychology , Speech Perception , Humans , Noise/adverse effects , Male , Female , Speech Perception/physiology , Young Adult , Memory, Short-Term/physiology , Adult , Speech Intelligibility , Attention/physiology , Adolescent
11.
J Acoust Soc Am ; 156(1): 706-724, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39082692

ABSTRACT

Understanding speech in noisy environments is a challenging task, especially in communication situations with several competing speakers. Despite their ongoing improvement, assistive listening devices and speech processing approaches still do not perform well enough in noisy multi-talker environments, as they may fail to restore the intelligibility of a speaker of interest among competing sound sources. In this study, a quasi-causal deep learning algorithm was developed that can extract the voice of a target speaker, as indicated by a short enrollment utterance, from a mixture of multiple concurrent speakers in background noise. Objective evaluation with computational metrics demonstrated that the speaker-informed algorithm successfully extracts the target speaker from noisy multi-talker mixtures. This was achieved using a single algorithm that generalized to unseen speakers, different numbers of speakers and relative speaker levels, and different speech corpora. Double-blind sentence recognition tests on mixtures of one, two, and three speakers in restaurant noise were conducted with listeners with normal hearing and listeners with hearing loss. Results indicated significant intelligibility improvements with the speaker-informed algorithm of 17% and 31% for people without and with hearing loss, respectively. In conclusion, it was demonstrated that deep learning-based speaker extraction can enhance speech intelligibility in noisy multi-talker environments where uninformed speech enhancement methods fail.
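A toy sketch of the speaker-informed idea described above: an embedding computed from an enrollment utterance conditions a mask-estimation network applied to the mixture spectrogram. The architecture, layer sizes, and spectrogram inputs are illustrative assumptions only and do not reproduce the authors' algorithm.

```python
import torch
import torch.nn as nn

class SpeakerExtractor(nn.Module):
    """Toy speaker-informed extractor: enrollment embedding conditions a
    per-bin mask applied to the mixture magnitude spectrogram."""
    def __init__(self, n_freq=257, emb_dim=128, hidden=256):
        super().__init__()
        self.enroll_net = nn.GRU(n_freq, emb_dim, batch_first=True)
        self.mix_net = nn.GRU(n_freq + emb_dim, hidden, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, mix_mag, enroll_mag):
        # mix_mag, enroll_mag: (batch, time, n_freq) magnitude spectrograms
        _, h = self.enroll_net(enroll_mag)            # h: (1, batch, emb_dim)
        emb = h[-1].unsqueeze(1).expand(-1, mix_mag.size(1), -1)
        feats = torch.cat([mix_mag, emb], dim=-1)     # condition on the speaker
        out, _ = self.mix_net(feats)
        mask = self.mask_head(out)                    # per-bin gain in [0, 1]
        return mask * mix_mag                         # estimated target magnitude

# Usage with random placeholders for mixture and enrollment spectrograms
model = SpeakerExtractor()
est_target = model(torch.rand(2, 100, 257), torch.rand(2, 50, 257))
print(est_target.shape)                               # torch.Size([2, 100, 257])
```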


Subject(s)
Deep Learning , Noise , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Female , Male , Adult , Middle Aged , Hearing Loss/physiopathology , Hearing Loss/psychology , Young Adult , Aged , Algorithms , Hearing , Perceptual Masking
12.
Hear Res ; 451: 109080, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39004016

ABSTRACT

Auditory masking methods originally employed to assess behavioral frequency selectivity have evolved over the years to infer cochlear tuning. Behavioral forward masking thresholds for spectrally notched noise maskers and a fixed, low-level probe tone provide accurate estimates of cochlear tuning. Here, we use this method to investigate the effect of stimulus duration on human cochlear tuning at 500 Hz and 4 kHz. Probes were 20-ms sinusoids at 10 dB sensation level. Maskers were noises with a spectral notch symmetrically and asymmetrically placed around the probe frequency. For seven participants with normal hearing, masker levels at masking threshold were measured in forward masking for various notch widths and for masker durations of 30 and 400 ms. Measurements were fitted assuming rounded exponential filter shapes and the power spectrum model of masking, and equivalent rectangular bandwidths (ERBs) were inferred from the fits. At 4 kHz, masker thresholds were higher for the shorter maskers but ERBs were not significantly different for the two masker durations (ERB(30 ms) = 294 Hz vs. ERB(400 ms) = 277 Hz). At 500 Hz, by contrast, notched-noise curves were shallower for the 30-ms than the 400-ms masker, and ERBs were significantly broader for the shorter masker (ERB(30 ms) = 126 Hz vs. ERB(400 ms) = 55 Hz). We discuss possible factors that may underlie the duration effect at low frequencies and argue that it may not be possible to fully control for those factors. We conclude that tuning estimates are not affected by masker duration at high frequencies but should be measured and interpreted with caution at low frequencies.
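A worked sketch of the rounded-exponential (roex) filter and the ERB it implies. Assuming the symmetric roex(p) form W(g) = (1 + p·g)·exp(−p·g), with g = |f − f0|/f0, the ERB is 4·f0/p, so the reported ERBs map onto the p values computed below. This is the textbook parameterization, not necessarily the exact filter model fitted in the study.

```python
import numpy as np

def roex_weight(f_hz, f0_hz, p):
    """Symmetric rounded-exponential weight W(g) = (1 + p*g) * exp(-p*g),
    with g = |f - f0| / f0 (power spectrum model of masking)."""
    g = np.abs(f_hz - f0_hz) / f0_hz
    return (1.0 + p * g) * np.exp(-p * g)

def p_from_erb(f0_hz, erb_hz):
    """Invert ERB = 4 * f0 / p for the symmetric roex(p) filter."""
    return 4.0 * f0_hz / erb_hz

# p values implied by the ERBs reported above (illustrative back-calculation)
for f0, erb in [(500.0, 55.0), (500.0, 126.0), (4000.0, 277.0), (4000.0, 294.0)]:
    print(f"f0 = {f0:.0f} Hz, ERB = {erb:.0f} Hz -> p = {p_from_erb(f0, erb):.1f}")
```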


Subject(s)
Acoustic Stimulation , Auditory Threshold , Cochlea , Noise , Perceptual Masking , Humans , Cochlea/physiology , Adult , Male , Female , Time Factors , Noise/adverse effects , Young Adult
13.
Hear Res ; 451: 109081, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39004015

ABSTRACT

Speech-in-noise (SIN) perception is a fundamental ability that declines with aging, as does general cognition. We assess whether auditory cognitive ability, in particular short-term memory for sound features, contributes to both. We examined how auditory memory for fundamental sound features, the carrier frequency and amplitude modulation rate of modulated white noise, contributes to SIN perception. We assessed SIN in 153 healthy participants with varying degrees of hearing loss using measures that require single-digit perception (the Digits-in-Noise, DIN) and sentence perception (Speech-in-Babble, SIB). Independent variables were auditory memory and a range of other factors including the Pure Tone Audiogram (PTA), a measure of dichotic pitch-in-noise perception (Huggins pitch), and demographic variables including age and sex. Multiple linear regression models were compared using Bayesian Model Comparison. The best predictor model for DIN included PTA and Huggins pitch (r² = 0.32, p < 0.001), whereas the model for SIB included the addition of auditory memory for sound features (r² = 0.24, p < 0.001). Further analysis demonstrated that auditory memory also explained a significant portion of the variance (28%) in scores for a screening cognitive test for dementia. Auditory memory for non-speech sounds may therefore provide an important predictor of both SIN and cognitive ability.
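A sketch of comparing nested regression models for a speech-in-noise score, using a BIC-approximated Bayes factor as a common stand-in for the Bayesian model comparison described above; the predictor names, simulated data, and the BIC approximation itself are assumptions, not the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 153
df = pd.DataFrame({
    "pta": rng.normal(20, 10, n),        # pure-tone average, dB HL
    "huggins": rng.normal(0, 1, n),      # dichotic pitch-in-noise score
    "aud_memory": rng.normal(0, 1, n),   # auditory memory for sound features
})
df["sib"] = (0.3 * df["pta"] - 0.5 * df["huggins"]
             - 0.8 * df["aud_memory"] + rng.normal(0, 2, n))

m1 = smf.ols("sib ~ pta + huggins", data=df).fit()
m2 = smf.ols("sib ~ pta + huggins + aud_memory", data=df).fit()

# BIC approximation to the Bayes factor in favor of the larger model
log10_bf = (m1.bic - m2.bic) / (2 * np.log(10))
print(f"R^2: m1 = {m1.rsquared:.2f}, m2 = {m2.rsquared:.2f}")
print(f"approx. log10 Bayes factor (m2 over m1): {log10_bf:.1f}")
```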


Subject(s)
Acoustic Stimulation , Cognition , Memory, Short-Term , Noise , Perceptual Masking , Speech Perception , Humans , Female , Male , Noise/adverse effects , Middle Aged , Adult , Aged , Young Adult , Pitch Perception , Bayes Theorem , Aged, 80 and over , Audiometry, Pure-Tone , Hearing , Auditory Threshold , Dichotic Listening Tests
14.
Conscious Cogn ; 123: 103726, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38972288

ABSTRACT

In prosopagnosia, brain lesions impair overt face recognition, but not face detection, and may coexist with residual covert recognition of familiar faces. Previous studies that simulated covert recognition in healthy individuals have impaired face detection as well as recognition, thus not fully mirroring the deficits in prosopagnosia. We evaluated a model of covert recognition based on continuous flash suppression (CFS). Familiar and unfamiliar faces and houses were masked while participants performed two discrimination tasks. With increased suppression, face/house discrimination remained largely intact, but face familiarity discrimination deteriorated. Covert recognition was present across all masking levels, evinced by higher pupil dilation to familiar than unfamiliar faces. Pupil dilation was uncorrelated with overt performance across subjects. Thus, CFS can impede overt face recognition without disrupting covert recognition and face detection, mirroring critical features of prosopagnosia. CFS could be used to uncover shared neural mechanisms of covert recognition in prosopagnosic patients and neurotypicals.


Subject(s)
Facial Recognition , Pupil , Recognition, Psychology , Humans , Facial Recognition/physiology , Adult , Female , Male , Recognition, Psychology/physiology , Young Adult , Pupil/physiology , Perceptual Masking/physiology
15.
JASA Express Lett ; 4(7)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39051871

ABSTRACT

Since its creation, the coordinate response measure (CRM) corpus has been used in hundreds of studies to explore the mechanisms of informational masking in multi-talker situations, as well as in speech-in-noise and auditory attention tasks. Here, we present its French version, with content equivalent to the original English version. Furthermore, an evaluation of speech-on-speech intelligibility in French shows informational masking with result patterns similar to the original English data. This validation of the French CRM corpus supports the use of the CRM for intelligibility testing in French and for cross-language comparisons under masking conditions.


Subject(s)
Language , Speech Intelligibility , Speech Perception , Humans , Speech Perception/physiology , Female , Male , Adult , Perceptual Masking/physiology , France , Young Adult , Noise
16.
J Vis ; 24(7): 15, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39046720

ABSTRACT

Humans can estimate the number of visually presented items without counting. In most studies on numerosity perception, items are uniformly distributed across displays, with identical distributions in central and eccentric parts. However, the neural and perceptual representation of the human visual field differs between the fovea and the periphery. For example, in peripheral vision, there are strong asymmetries with regard to perceptual interferences between visual items. In particular, items arranged radially usually interfere more strongly with each other than items arranged tangentially (the radial-tangential anisotropy). This has been shown for crowding (the deleterious effect of clutter on target identification) and redundancy masking (the reduction of the number of perceived items in repeating patterns). In the present study, we tested how the radial-tangential anisotropy of peripheral vision impacts numerosity perception. In four experiments, we presented displays with varying numbers of discs that were predominantly arranged radially or tangentially, forming strong and weak interference conditions, respectively. Participants were asked to report the number of discs. We found that radial displays were reported as less numerous than tangential displays for all radial and tangential manipulations: weak (Experiment 1), strong (Experiment 2), and when using displays with mixed contrast polarity discs (Experiments 3 and 4). We propose that numerosity perception exhibits a significant radial-tangential anisotropy, resulting from local spatial interactions between items.


Subject(s)
Pattern Recognition, Visual , Humans , Anisotropy , Adult , Male , Female , Young Adult , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Fields/physiology , Perceptual Masking/physiology , Visual Perception/physiology
17.
JASA Express Lett ; 4(6)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38884558

ABSTRACT

Age-related changes in auditory processing may reduce physiological coding of acoustic cues, contributing to older adults' difficulty perceiving speech in background noise. This study investigated whether older adults differed from young adults in patterns of acoustic cue weighting for categorizing vowels in quiet and in noise. All participants relied primarily on spectral quality to categorize /ɛ/ and /æ/ sounds under both listening conditions. However, relative to young adults, older adults exhibited greater reliance on duration and less reliance on spectral quality. These results suggest that aging alters patterns of perceptual cue weights that may influence speech recognition abilities.
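A sketch of estimating perceptual cue weights from categorization responses with a logistic regression on standardized cues, one common way to quantify the reliance on spectral quality versus duration described above; the simulated listener and the interpretation of standardized coefficients as relative weights are assumptions, not the study's exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 400
spectral = rng.uniform(-1, 1, n)     # spectral quality continuum (/eh/ vs /ae/)
duration = rng.uniform(-1, 1, n)     # vowel duration continuum

# Simulated listener who weights spectral quality 3x more than duration
logit = 3.0 * spectral + 1.0 * duration + rng.normal(0, 1, n)
resp = (logit > 0).astype(int)       # 1 = categorized as /ae/

X = StandardScaler().fit_transform(np.column_stack([spectral, duration]))
clf = LogisticRegression().fit(X, resp)

w_spec, w_dur = np.abs(clf.coef_[0])
total = w_spec + w_dur
print(f"relative cue weights: spectral = {w_spec/total:.2f}, "
      f"duration = {w_dur/total:.2f}")
```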


Subject(s)
Cues , Perceptual Masking , Speech Perception , Humans , Speech Perception/physiology , Aged , Young Adult , Female , Male , Adult , Perceptual Masking/physiology , Noise/adverse effects , Aging/physiology , Aging/psychology , Speech Acoustics , Middle Aged , Phonetics , Age Factors , Adolescent
18.
J Vis ; 24(6): 9, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38856981

ABSTRACT

Four experiments were conducted to gain a better understanding of the visual mechanisms related to how integration of partial shape cues provides for recognition of the full shape. In each experiment, letters formed as outline contours were displayed as a sequence of adjacent segments (fragments), each visible during a 17-ms time frame. The first experiment varied the contrast of the fragments. There were substantial individual differences in contrast sensitivity, so stimulus displays in the masking experiments that followed were calibrated to the sensitivity of each participant. Masks were displayed either as patterns that filled the entire screen (full field) or as successive strips that were sliced from the pattern, each strip lying across the location of the letter fragment that had been shown a moment before. The contrast of the masks was varied to be lighter or darker than the letter fragments. Full-field masks, whether light or dark, provided relatively little impairment of recognition, as was the case for mask strips that were lighter than the letter fragments. However, dark strip masks proved to be very effective, with the degree of recognition impairment becoming larger as mask contrast was increased. A final experiment found the strip masks to be most effective when they overlapped the location where the letter fragments had been shown a moment before. They became progressively less effective with increased spatial separation from that location. Results are discussed with extensive reference to potential brain mechanisms for integrating shape cues.


Subject(s)
Contrast Sensitivity , Form Perception , Pattern Recognition, Visual , Perceptual Masking , Photic Stimulation , Humans , Perceptual Masking/physiology , Contrast Sensitivity/physiology , Photic Stimulation/methods , Adult , Pattern Recognition, Visual/physiology , Form Perception/physiology , Male , Female , Cues , Young Adult
19.
Trends Hear ; 28: 23312165241260029, 2024.
Article in English | MEDLINE | ID: mdl-38831646

ABSTRACT

The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become more and more important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
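A toy sketch of the reasoning above: if the eardrum signal is treated as the power sum of the vent-transmitted (direct) path and the amplified path, any SNR improvement in the amplified path is limited by the direct-path "floor" unless ANC attenuates that path. The simple power summation, the path gains, and the dB values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def db_to_pow(db):
    return 10.0 ** (db / 10.0)

def eardrum_snr(env_db, aided_snr_gain_db, aided_gain_db, direct_atten_db):
    """SNR at the eardrum when the direct (vent) path and the amplified path
    are summed in power. A 0 dB environmental SNR is assumed for simplicity."""
    # Direct path: environment attenuated by the vent/ANC, SNR unchanged
    direct_sig = db_to_pow(env_db - direct_atten_db)
    direct_noise = db_to_pow(env_db - direct_atten_db)
    # Amplified path: hearing-aid gain applied, but with improved SNR
    aided_sig = db_to_pow(env_db + aided_gain_db)
    aided_noise = db_to_pow(env_db + aided_gain_db - aided_snr_gain_db)
    return 10 * np.log10((direct_sig + aided_sig) / (direct_noise + aided_noise))

env, snr_gain, gain = 75.0, 6.0, 0.0     # loud scene, 6 dB SNR-improving algorithm
without_anc = eardrum_snr(env, snr_gain, gain, direct_atten_db=0.0)
with_anc = eardrum_snr(env, snr_gain, gain, direct_atten_db=20.0)
print(f"eardrum SNR without ANC: {without_anc:.1f} dB, with ANC: {with_anc:.1f} dB")
```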


Subject(s)
Hearing Aids , Noise , Perceptual Masking , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Computer Simulation , Acoustic Stimulation , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Equipment Design , Signal Processing, Computer-Assisted
20.
Cogn Res Princ Implic ; 9(1): 35, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38834918

ABSTRACT

Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners' language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a "release from masking" from lower-level acoustic differences between languages and talkers, or higher-level cognitive and linguistic factors? To address this question, we created a "one-man bilingual cocktail party" selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin-English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin-English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the "cocktail party" paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerge only in the most demanding target-to-masker contexts. The "one-man bilingual cocktail party" establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin-English bilinguals.


Subject(s)
Attention , Multilingualism , Speech Perception , Humans , Speech Perception/physiology , Adult , Female , Male , Young Adult , Attention/physiology , Perceptual Masking/physiology , Psycholinguistics