Results 1 - 20 of 5,547
1.
J Int Adv Otol ; 20(4): 289-300, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39159037

ABSTRACT

People with single-sided deafness (SSD) or asymmetric hearing loss (AHL) have particular difficulty understanding speech in noisy listening situations and localizing sounds. The objective of this multicenter study was to evaluate the effect of a cochlear implant (CI) in adults with SSD or AHL, particularly regarding sound localization and speech intelligibility, with additional interest in electric-acoustic pitch matching. A prospective longitudinal study was conducted at 7 European tertiary referral centers, including 19 SSD and 16 AHL subjects undergoing cochlear implantation. Sound localization accuracy was investigated in terms of root mean square error and signed bias before and after implantation. Speech recognition in quiet and speech reception thresholds in noise for several spatial configurations were assessed preoperatively and at several post-activation time points. Pitch perception with the CI was tracked using pitch matching. Data up to 12 months post activation were collected. In both SSD and AHL subjects, the CI significantly improved sound localization for sound sources on the implant side, and thus overall sound localization. Speech recognition in quiet with the implant ear improved significantly. In noise, a significant head shadow effect was found for SSD subjects only; however, the evaluation of AHL subjects was limited by the small sample size. No uniform development of pitch perception with the implant ear was observed. The benefits shown in this study confirm and expand the existing body of evidence for the effectiveness of CIs in SSD and AHL. In particular, the improved localization was shown to result from increased localization accuracy on the implant side.
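The two localization metrics named above, root mean square error and signed bias, can be computed directly from target and response azimuths. A minimal sketch in Python, assuming azimuths in degrees (the function and variable names are illustrative, not from the study):

```python
import numpy as np

def localization_metrics(target_az, response_az):
    """RMS error and signed bias between target and response azimuths (degrees)."""
    error = np.asarray(response_az, float) - np.asarray(target_az, float)
    rmse = np.sqrt(np.mean(error ** 2))   # overall localization accuracy
    bias = np.mean(error)                 # systematic shift toward one side
    return rmse, bias

# Illustrative data: responses pulled toward the hearing ear
rmse, bias = localization_metrics([-60, -30, 0, 30, 60], [-20, -5, 5, 25, 40])
print(f"RMSE = {rmse:.1f} deg, signed bias = {bias:.1f} deg")
```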


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Loss, Unilateral , Sound Localization , Speech Perception , Humans , Cochlear Implantation/methods , Male , Sound Localization/physiology , Female , Middle Aged , Speech Perception/physiology , Prospective Studies , Hearing Loss, Unilateral/surgery , Hearing Loss, Unilateral/rehabilitation , Hearing Loss, Unilateral/physiopathology , Follow-Up Studies , Aged , Adult , Europe , Longitudinal Studies , Treatment Outcome , Speech Intelligibility/physiology , Pitch Perception/physiology , Deafness/surgery , Deafness/rehabilitation , Deafness/physiopathology , Noise
3.
Proc Natl Acad Sci U S A ; 121(34): e2411167121, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39136991

ABSTRACT

Evidence is accumulating that the cerebellum's role in the brain is not restricted to motor functions. Rather, cerebellar activity seems to be crucial for a variety of tasks that rely on precise event timing and prediction. Due to its complex structure and importance in communication, human speech requires particularly precise and predictive coordination of neural processes to be successfully comprehended. Recent studies have proposed that the cerebellum is indeed a major contributor to speech processing, but how this contribution is achieved mechanistically remains poorly understood. The current study aimed to reveal a mechanism underlying cortico-cerebellar coordination and to demonstrate its speech specificity. In a reanalysis of magnetoencephalography data, we found that activity in the cerebellum aligned to rhythmic sequences of noise-vocoded speech, irrespective of its intelligibility. We then tested whether these "entrained" responses persist, and how they interact with other brain regions, when a rhythmic stimulus stopped and temporal predictions had to be updated. We found that only intelligible speech produced sustained rhythmic responses in the cerebellum. During this "entrainment echo," but not during rhythmic speech itself, cerebellar activity was coupled with that in the left inferior frontal gyrus, specifically at rates corresponding to the preceding stimulus rhythm. This finding represents evidence for specific cerebellum-driven temporal predictions in speech processing and their relay to cortical regions.
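Alignment ("entrainment") of neural activity to a rhythmic stimulus is often quantified as spectral coherence at the stimulation rate. A minimal sketch with synthetic signals using scipy's coherence estimator (the 3 Hz rate, duration, and signal names are illustrative assumptions, not the study's pipeline):

```python
import numpy as np
from scipy.signal import coherence

fs = 200                                   # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)               # 60 s of data
envelope = np.sin(2 * np.pi * 3 * t)       # 3 Hz speech-rate rhythm (synthetic)
brain = 0.5 * np.sin(2 * np.pi * 3 * t + 0.8) + np.random.randn(t.size)

f, cxy = coherence(envelope, brain, fs=fs, nperseg=4 * fs)
print(f"coherence at 3 Hz: {cxy[np.argmin(np.abs(f - 3.0))]:.2f}")
```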


Subject(s)
Cerebellum , Magnetoencephalography , Humans , Cerebellum/physiology , Male , Female , Adult , Speech Perception/physiology , Young Adult , Speech/physiology , Speech Intelligibility/physiology
4.
J Acoust Soc Am ; 156(2): 1202-1213, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39158325

ABSTRACT

Band importance functions for speech-in-noise recognition, typically determined in the presence of steady background noise, indicate a negligible role for extended high frequencies (EHFs; 8-20 kHz). However, recent findings indicate that EHF cues support speech recognition in multi-talker environments, particularly when the masker has reduced EHF levels relative to the target. This scenario can occur in natural auditory scenes when the target talker is facing the listener, but the maskers are not. In this study, we measured the importance of five bands from 40 to 20 000 Hz for speech-in-speech recognition by notch-filtering the bands individually. Stimuli consisted of a female target talker recorded from 0° and a spatially co-located two-talker female masker recorded either from 0° or 56.25°, simulating a masker either facing the listener or facing away, respectively. Results indicated peak band importance in the 0.4-1.3 kHz band and a negligible effect of removing the EHF band in the facing-masker condition. However, in the non-facing condition, the peak was broader and EHF importance was higher and comparable to that of the 3.3-8.3 kHz band in the facing-masker condition. These findings suggest that EHFs contain important cues for speech recognition in listening conditions with mismatched talker head orientations.
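Band importance was measured by notch-filtering (removing) one band at a time. A minimal sketch of one such notch using a zero-phase Butterworth band-stop filter (the band edges and filter order here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notch_band(signal, fs, f_lo, f_hi, order=8):
    """Remove the f_lo..f_hi band from the signal (zero-phase band-stop)."""
    sos = butter(order, [f_lo, f_hi], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 44100
speech = np.random.randn(fs)                      # 1 s of noise standing in for speech
no_mid = notch_band(speech, fs, 400.0, 1300.0)    # remove the 0.4-1.3 kHz band
```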


Subject(s)
Acoustic Stimulation , Cues , Noise , Perceptual Masking , Recognition, Psychology , Speech Perception , Humans , Female , Speech Perception/physiology , Young Adult , Adult , Male , Audiometry, Speech , Speech Intelligibility , Auditory Threshold , Sound Localization , Speech Acoustics , Sound Spectrography
5.
Am J Otolaryngol ; 45(5): 104400, 2024.
Article in English | MEDLINE | ID: mdl-39094303

ABSTRACT

OBJECTIVES: The aim of this study was to present an institution's experience with cochlear reimplantation (CRI), to assess surgical challenges and post-operative outcomes, and to improve the success rate of CRI. STUDY DESIGN: Retrospective single-institution study. SETTING: Tertiary medical center. METHODS: We retrospectively evaluated data from 76 reimplantation cases treated in a tertiary center between 2001 and 2022. Clinical features including etiology of hearing loss, type of failure, surgical issues, and auditory speech performance were analyzed. Categorical Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores were used to evaluate pre- and post-CRI outcomes. RESULTS: The CRI population comprised 7 patients from our institute and 69 patients referred from other centers. Device failure was the most common reason for CRI (68/76, 89.5 %); in addition, there were 7 medical failures, and 1 case involved both a soft device failure and a medical failure. Medical failures included flap rupture and device extrusion, magnet migration, auditory neuropathy, leukoencephalopathy, foreign-body residue, and meningitis. In 21/76 patients, the electrode technology was upgraded. Time to failure ranged from 0.58 to 13 years, with a mean of 4.97 years. The mean (± SD) CAP and SIR scores before and after CRI were 5.2 ± 1.2 versus 5.5 ± 1.1 and 3.4 ± 1.1 versus 3.5 ± 1.1, respectively. Performance was poor in six patients with severe cochlear malformation, auditory nerve dysplasia, leukoencephalopathy, or epilepsy. CONCLUSION: CRI surgery is a challenging but relatively safe procedure, and most reimplanted patients experience favorable postoperative outcomes. Medical complications and intracochlear damage are the main causes of poor postoperative results. Therefore, adequate preoperative preparation and atraumatic CRI should be carried out for optimal results.


Subject(s)
Cochlear Implantation , Replantation , Humans , Male , Retrospective Studies , Female , Cochlear Implantation/methods , Treatment Outcome , Child , Replantation/methods , Child, Preschool , Adolescent , Adult , Middle Aged , Time Factors , Cochlear Implants , Young Adult , Infant , Speech Intelligibility
6.
Trends Hear ; 28: 23312165241266316, 2024.
Article in English | MEDLINE | ID: mdl-39183533

ABSTRACT

During continuous speech perception, endogenous neural activity becomes time-locked to acoustic stimulus features, such as the speech amplitude envelope. This speech-brain coupling can be decoded using non-invasive brain imaging techniques, including electroencephalography (EEG). Neural decoding may have clinical use as an objective measure of stimulus encoding by the brain, for example during cochlear implant listening, wherein the speech signal is severely spectrally degraded. Yet, interplay between acoustic and linguistic factors may lead to top-down modulation of perception, thereby complicating audiological applications. To address this ambiguity, we assess neural decoding of the speech envelope under spectral degradation with EEG in acoustically hearing listeners (n = 38; 18-35 years old) using vocoded speech. We dissociate sensory encoding from higher-order processing by employing intelligible (English) and non-intelligible (Dutch) stimuli, with auditory attention sustained using a repeated-phrase detection task. Subject-specific and group decoders were trained to reconstruct the speech envelope from held-out EEG data, with decoder significance determined via random permutation testing. Whereas speech envelope reconstruction did not vary by spectral resolution, intelligible speech was associated with better decoding accuracy in general. Results were similar across subject-specific and group analyses, with less consistent effects of spectral degradation in group decoding. Permutation tests revealed possible differences in decoder statistical significance by experimental condition. In general, while robust neural decoding was observed at the individual and group level, variability within participants would most likely prevent the clinical use of such a measure to differentiate levels of spectral degradation and intelligibility on an individual basis.
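Envelope reconstruction of this kind is typically implemented as a backward model: ridge regression from time-lagged EEG channels onto the speech envelope, scored by the correlation between reconstructed and actual envelopes. A minimal sketch (the lag range and regularization strength are illustrative assumptions):

```python
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel into a design matrix."""
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[lag:, lag * n_ch:(lag + 1) * n_ch] = eeg[:n_t - lag]
    return X

def train_decoder(eeg, envelope, max_lag=25, lam=1e3):
    """Ridge regression weights mapping lagged EEG to the speech envelope."""
    X = lag_matrix(eeg, max_lag)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def decoding_accuracy(eeg, envelope, w, max_lag=25):
    """Pearson correlation between reconstructed and actual envelopes."""
    return np.corrcoef(lag_matrix(eeg, max_lag) @ w, envelope)[0, 1]
```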


Subject(s)
Acoustic Stimulation , Electroencephalography , Speech Intelligibility , Speech Perception , Humans , Speech Perception/physiology , Female , Male , Adolescent , Adult , Young Adult , Speech Acoustics , Brain/physiology
7.
J Acoust Soc Am ; 156(1): 93-106, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38958486

ABSTRACT

Older adults with hearing loss may experience difficulty recognizing speech in noise due to factors related to attenuation (e.g., reduced audibility and sensation levels, SLs) and distortion (e.g., reduced temporal fine structure, TFS, processing). Furthermore, speech recognition may improve when the amplitude modulation spectrum of the speech and masker are non-overlapping. The current study investigated this by filtering the amplitude modulation spectrum into different modulation rates for speech and speech-modulated noise. The modulation depth of the noise was manipulated to vary the SL of speech glimpses. Younger adults with normal hearing and older adults with normal or impaired hearing listened to natural speech or speech vocoded to degrade TFS cues. Control groups of younger adults were tested on all conditions with spectrally shaped speech and threshold matching noise, which reduced audibility to match that of the older hearing-impaired group. All groups benefitted from increased masker modulation depth and preservation of syllabic-rate speech modulations. Older adults with hearing loss had reduced speech recognition across all conditions. This was explained by factors related to attenuation, due to reduced SLs, and distortion, due to reduced TFS processing, which resulted in poorer auditory processing of speech cues during the dips of the masker.
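Filtering the amplitude modulation spectrum can be sketched as: extract the envelope, filter it at the desired modulation rates, and reimpose it on the remaining carrier. A minimal illustration (the 4 Hz syllabic-rate cutoff and filter settings are assumptions, not the study's exact processing):

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def keep_slow_modulations(signal, fs, mod_cutoff=4.0):
    """Retain only slow (syllabic-rate) amplitude modulations of a signal."""
    env = np.abs(hilbert(signal))                        # Hilbert envelope
    sos = butter(4, mod_cutoff, btype="low", fs=fs, output="sos")
    env_slow = np.maximum(sosfiltfilt(sos, env), 1e-9)   # low-pass the envelope
    carrier = signal / np.maximum(env, 1e-9)             # temporal fine structure
    return carrier * env_slow                            # reimpose filtered envelope
```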


Subject(s)
Acoustic Stimulation , Auditory Threshold , Cues , Noise , Perceptual Masking , Speech Perception , Humans , Speech Perception/physiology , Aged , Noise/adverse effects , Adult , Young Adult , Male , Female , Middle Aged , Age Factors , Recognition, Psychology , Time Factors , Aging/physiology , Presbycusis/physiopathology , Presbycusis/diagnosis , Presbycusis/psychology , Persons With Hearing Impairments/psychology , Aged, 80 and over , Case-Control Studies , Speech Intelligibility
8.
J Speech Lang Hear Res ; 67(7): 2454-2472, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38950169

ABSTRACT

PURPOSE: A corpus of English matrix sentences produced by 60 native and nonnative speakers of English was developed as part of a multinational coalition task group. This corpus was tested on a large cohort of U.S. Service members in order to examine the effects of talker nativeness, listener nativeness, masker type, and hearing sensitivity on speech recognition performance in this population. METHOD: A total of 1,939 U.S. Service members (ages 18-68 years) completed this closed-set listening task, including 430 women and 110 nonnative English speakers. Stimuli were produced by native and nonnative speakers of English and were presented in speech-shaped noise and multitalker babble. Keyword recognition accuracy and response times were analyzed. RESULTS: General(ized) linear mixed-effects regression models found that, on the whole, speech recognition performance was lower for listeners who identified as nonnative speakers of English and when listening to speech produced by nonnative speakers of English. Talker and listener effects were more pronounced when listening in a babble masker than in a speech-shaped noise masker. Response times varied as a function of recognition score, with the longest response times found for intermediate levels of performance. CONCLUSIONS: This study found additive effects of talker and listener nonnativeness when listening to speech in background noise. These effects were present in both accuracy and response time measures. No multiplicative effects of talker and listener language background were found. There was little evidence of a negative interaction between talker nonnativeness and hearing impairment, suggesting that these factors may have redundant effects on speech recognition. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.26060191.
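An analysis like the one described (fixed effects for talker and listener nativeness and masker type, with random effects per listener) can be sketched with statsmodels. The data file and column names are illustrative assumptions, and a linear mixed model stands in for the paper's general(ized) variants:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: accuracy, talker_native, listener_native, masker, subject
df = pd.read_csv("keyword_scores.csv")    # hypothetical data file
model = smf.mixedlm(
    "accuracy ~ talker_native * listener_native + masker",
    data=df,
    groups=df["subject"],                 # random intercept per listener
).fit()
print(model.summary())
```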


Subject(s)
Noise , Perceptual Masking , Speech Intelligibility , Speech Perception , Humans , Female , Adult , Middle Aged , Male , Young Adult , Aged , Adolescent , United States , Perceptual Masking/physiology , Cohort Studies , Language , Military Personnel
9.
Int J Pediatr Otorhinolaryngol ; 182: 112029, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38972249

ABSTRACT

OBJECTIVE: The present investigation examined how factors such as cleft type, age at primary palatal surgery, diagnosed syndromes, hearing problems, and malocclusions could predict persistent speech difficulties and the need for speech services in school-aged children with cleft palate. METHODS: Participants included 100 school-aged children with cleft palate. The Americleft speech protocol was used to assess the perceptual aspects of speech production. Logistic regression was performed to evaluate the impact of the independent variables (IVs) on the dependent variables (DVs): intelligibility, posterior oral CSCs, audible nasal emission, hypernasality, anterior oral CSCs, and speech therapy required. RESULTS: Sixty-five percent of the children were enrolled in (or had received) speech therapy. The logistic regression model shows a good fit to the data for the need for speech therapy (Hosmer and Lemeshow's χ2(8) = 9.647, p = .291). No IVs were found to have a significant impact on the need for speech therapy. A diagnosed syndrome was associated with poorer intelligibility (Pulkstenis-Robinson's χ2(11) = 7.120, p = .789). Children with diagnosed syndromes have about six times the odds of a higher hypernasality rating (odds ratio = 5.703) than others. Cleft type was significantly associated with audible nasal emission (Fisher's exact p = .006), while malocclusion had a significant association with anterior oral CSCs (Fisher's exact p = .005). CONCLUSIONS: According to the latest data in the Cleft Registry and Audit Network Annual Report for the UK, the majority of children with cleft palate attain typical speech by age five. However, it is crucial to examine the factors that may influence the persistence of speech disorders beyond this age. This understanding is vital for formulating intervention strategies aimed at mitigating the long-term effects of speech disorders as individuals grow older.
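The odds ratios reported above come from regressing each speech outcome on the clinical predictors. A minimal sketch with a binary logistic model as a stand-in (the study's outcomes were ordinal ratings; the file and column names are illustrative assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: hypernasal (0/1), syndrome (0/1), cleft_type, malocclusion
df = pd.read_csv("cleft_speech.csv")      # hypothetical data file
fit = smf.logit("hypernasal ~ syndrome + C(cleft_type) + malocclusion", data=df).fit()
print(np.exp(fit.params))                 # exponentiated coefficients = odds ratios
```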


Subject(s)
Cleft Lip , Cleft Palate , Speech Disorders , Speech Intelligibility , Speech Therapy , Humans , Cleft Palate/complications , Cleft Palate/surgery , Male , Child , Female , Retrospective Studies , Cleft Lip/surgery , Cleft Lip/complications , Speech Disorders/etiology , Speech Therapy/methods , Logistic Models , Speech Production Measurement , Adolescent
10.
J Acoust Soc Am ; 156(1): 341-349, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38990038

ABSTRACT

Previous research has shown that learning effects are present for speech intelligibility in temporally modulated (TM) noise, but not in stationary noise. The present study aimed to gain more insight into the factors that might affect the time course (the number of trials required to reach stable performance) and size [the improvement in the speech reception threshold (SRT)] of the learning effect. Two hypotheses were addressed: (1) learning effects are present in both TM and spectrally modulated (SM) noise and (2) the time course and size of the learning effect depend on the amount of masking release caused by either TM or SM noise. Eighteen normal-hearing adults (23-62 years) participated in SRT measurements, in which they listened to sentences in six masker conditions, including stationary, TM, and SM noise conditions. The results showed learning effects in all TM and SM noise conditions, but not in the stationary noise condition. The learning effect was related to the size of the masking release: a larger masking release was accompanied by both a longer time course and a larger learning effect. The results also indicate that speech is processed differently in SM noise than in TM noise.
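SRT measurements of this kind typically use an adaptive track that converges on the 50% intelligibility point by lowering the SNR after a correct response and raising it after an error. A minimal sketch of the tracking logic (the step size, trial count, and averaging rule are illustrative assumptions):

```python
def track_srt(present_sentence, n_trials=20, start_snr=0.0, step=2.0):
    """1-up/1-down SRT track; present_sentence(snr) returns True if correct."""
    snr, history = start_snr, []
    for _ in range(n_trials):
        snr += -step if present_sentence(snr) else step   # harder after a hit
        history.append(snr)
    return sum(history[-8:]) / 8   # mean of late trials estimates the 50% point
```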


Subject(s)
Acoustic Stimulation , Learning , Noise , Perceptual Masking , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Adult , Young Adult , Male , Speech Perception/physiology , Female , Middle Aged , Speech Reception Threshold Test , Time Factors , Auditory Threshold
11.
JASA Express Lett ; 4(7), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39046893

ABSTRACT

Although the telephone band (0.3-3 kHz) provides sufficient information for speech recognition, the contribution of the non-telephone band (<0.3 and >3 kHz) is unclear. To investigate its contribution, speech intelligibility and talker identification were evaluated using consonants, vowels, and sentences. The non-telephone band produced relatively good intelligibility for consonants (76.0%) and sentences (77.4%), but not vowels (11.5%). The non-telephone band supported good talker identification only with sentences (74.5%), but not vowels (45.8%) or consonants (10.8%). Furthermore, the non-telephone band cannot produce satisfactory speech intelligibility in noise at the sentence level, suggesting the importance of full-band access in realistic listening.


Subject(s)
Speech Intelligibility , Speech Perception , Humans , Speech Perception/physiology , Male , Female , Telephone , Adult , Young Adult , Phonetics , Noise
12.
Trends Hear ; 28: 23312165241261490, 2024.
Article in English | MEDLINE | ID: mdl-39051703

ABSTRACT

Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was to develop a novel speech-recognition test that combines concepts from different speech-recognition tests to reduce training effects and allow for a large set of speech material. Each trial of the new test consists of four different words in a meaningful construct with a fixed structure, a so-called phrase. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations and eliminating duplications of (sub-)phrases, a total of 772 phrases remained. The phrases were then synthesized using a text-to-speech system, which significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, speech-recognition scores measured for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNR) revealed speech-recognition thresholds (SRT) for each phrase varying by up to 4 dB. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The psychometric function's slope of 15 percentage points per dB is also comparable and enables efficient use in audiology. In summary, the principle of creating speech material in a modular system has many potential applications.
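The quoted SRT and slope come from fitting a psychometric function to recognition scores measured at fixed SNRs. A minimal logistic fit (the data points below are synthetic and purely illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt, slope):
    """Logistic proportion-correct vs. SNR; slope is proportion per dB at the SRT."""
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt)))

snrs = np.array([-15.0, -12.0, -9.0, -6.0, -3.0])
scores = np.array([0.08, 0.25, 0.52, 0.81, 0.95])   # illustrative proportions
(srt, slope), _ = curve_fit(psychometric, snrs, scores, p0=(-9.0, 0.15))
print(f"SRT = {srt:.1f} dB SNR, slope = {100 * slope:.0f} percentage points/dB")
```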


Subject(s)
Recognition, Psychology , Speech Perception , Humans , Male , Female , Adult , Young Adult , Acoustic Stimulation , Speech Reception Threshold Test/methods , Auditory Threshold , Reproducibility of Results , Predictive Value of Tests , Psychometrics , Speech Intelligibility , Signal-To-Noise Ratio , Perceptual Masking
13.
Trends Hear ; 28: 23312165241262517, 2024.
Article in English | MEDLINE | ID: mdl-39051688

ABSTRACT

Listeners with normal audiometric thresholds show substantial variability in their ability to understand speech in noise (SiN). These individual differences have been reported to be associated with a range of auditory and cognitive abilities. The present study addresses the association between SiN processing and the individual susceptibility of short-term memory to auditory distraction (i.e., the irrelevant sound effect [ISE]). In a sample of 67 young adult participants with normal audiometric thresholds, we measured speech recognition performance in a spatial listening task with two interfering talkers (speech-in-speech identification), audiometric thresholds, binaural sensitivity to the temporal fine structure (interaural phase differences [IPD]), serial memory with and without interfering talkers, and self-reported noise sensitivity. Speech-in-speech processing was not significantly associated with the ISE. The most important predictors of high speech-in-speech recognition performance were a large short-term memory span, low IPD thresholds, bilaterally symmetrical audiometric thresholds, and low individual noise sensitivity. Surprisingly, the susceptibility of short-term memory to irrelevant sound accounted for a substantially smaller amount of variance in speech-in-speech processing than the nondisrupted short-term memory capacity. The data confirm the role of binaural sensitivity to the temporal fine structure, although its association to SiN recognition was weaker than in some previous studies. The inverse association between self-reported noise sensitivity and SiN processing deserves further investigation.


Subject(s)
Acoustic Stimulation , Auditory Threshold , Memory, Short-Term , Noise , Perceptual Masking , Recognition, Psychology , Speech Perception , Humans , Noise/adverse effects , Male , Female , Speech Perception/physiology , Young Adult , Memory, Short-Term/physiology , Adult , Speech Intelligibility , Attention/physiology , Adolescent
14.
J Acoust Soc Am ; 156(1): 706-724, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39082692

ABSTRACT

Understanding speech in noisy environments is a challenging task, especially in communication situations with several competing speakers. Despite their ongoing improvement, assistive listening devices and speech processing approaches still do not perform well enough in noisy multi-talker environments, as they may fail to restore the intelligibility of a speaker of interest among competing sound sources. In this study, a quasi-causal deep learning algorithm was developed that can extract the voice of a target speaker, as indicated by a short enrollment utterance, from a mixture of multiple concurrent speakers in background noise. Objective evaluation with computational metrics demonstrated that the speaker-informed algorithm successfully extracts the target speaker from noisy multi-talker mixtures. This was achieved using a single algorithm that generalized to unseen speakers, different numbers of speakers and relative speaker levels, and different speech corpora. Double-blind sentence recognition tests on mixtures of one, two, and three speakers in restaurant noise were conducted with listeners with normal hearing and listeners with hearing loss. Results indicated significant intelligibility improvements with the speaker-informed algorithm of 17% and 31% for people without and with hearing loss, respectively. In conclusion, it was demonstrated that deep learning-based speaker extraction can enhance speech intelligibility in noisy multi-talker environments where uninformed speech enhancement methods fail.
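Computational metrics for speaker extraction commonly include the scale-invariant signal-to-distortion ratio (SI-SDR) between the extracted signal and the clean target. A minimal implementation of this standard metric (not necessarily the exact metric set used in the study):

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR (dB) between an extracted signal and its reference."""
    reference = reference - np.mean(reference)
    estimate = estimate - np.mean(estimate)
    scale = np.dot(estimate, reference) / np.dot(reference, reference)
    target = scale * reference              # projection onto the clean target
    noise = estimate - target               # residual distortion and interference
    return 10 * np.log10(np.dot(target, target) / np.dot(noise, noise))
```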


Subject(s)
Deep Learning , Noise , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Female , Male , Adult , Middle Aged , Hearing Loss/physiopathology , Hearing Loss/psychology , Young Adult , Aged , Algorithms , Hearing , Perceptual Masking
15.
Hear Res ; 450: 109076, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38991628

ABSTRACT

As part of a longitudinal study regarding the benefit of early cochlear implantation for children with single-sided deafness (SSD), the current work explored the children's daily device use, potential barriers to full-time device use, and the children's ability to understand speech with the cochlear implant (CI). Data were collected from 20 children with prelingual SSD who received a CI before the age of 2.5 years, from the initial activation of the sound processor until the children were 4.8 to 11.0 years old. Daily device use was extracted from the CI's data logging, while word perception in quiet was assessed using direct audio input to the children's sound processor. The children's caregivers completed a questionnaire about habits, motivations, and barriers to device use. The children with SSD and a CI used their device on average 8.3 h per day, corresponding to 63 % of their time spent awake. All children except one could understand speech through the CI, with an average score of 59 % on a closed-set test and 73 % on an open-set test. More device use was associated with higher speech perception scores. Parents were happy with their decision to pursue a CI for their child. Certain habits, like taking off the sound processor during illness, were associated with lower device use. Providing timely counselling to the children's parents, focused on SSD-specific challenges, may help improve daily device use in these children.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Humans , Cochlear Implantation/instrumentation , Female , Male , Child , Child, Preschool , Time Factors , Longitudinal Studies , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Surveys and Questionnaires , Speech Intelligibility , Hearing Loss, Unilateral/rehabilitation , Hearing Loss, Unilateral/psychology , Hearing Loss, Unilateral/physiopathology , Hearing Loss, Unilateral/surgery , Comprehension , Treatment Outcome , Child Language , Deafness/psychology , Deafness/rehabilitation , Deafness/physiopathology , Deafness/diagnosis , Deafness/surgery , Age Factors , Child Behavior , Motivation , Infant
16.
JASA Express Lett ; 4(7), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39051871

ABSTRACT

Since its creation, the coordinate response measure (CRM) corpus has been applied in hundreds of studies to explore the mechanisms of informational masking in multi-talker situations, as well as in speech-in-noise and auditory attention tasks. Here, we present its French version, with content equivalent to the original English version. Furthermore, an evaluation of speech-on-speech intelligibility in French shows informational masking with result patterns similar to the original English data. This validation of the French CRM corpus supports its use for intelligibility tests in French, and for comparisons with a foreign language under masking conditions.


Subject(s)
Language , Speech Intelligibility , Speech Perception , Humans , Speech Perception/physiology , Female , Male , Adult , Perceptual Masking/physiology , France , Young Adult , Noise
17.
Trends Hear ; 28: 23312165241260621, 2024.
Article in English | MEDLINE | ID: mdl-39053897

ABSTRACT

While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. It is known that listening and postural control are associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise or a large memory load. This study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task), and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty levels of the listening and postural control tasks were manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that interaction with postural control occurs even when cognitive control demands for listening are minimal. Correlational analysis revealed that hearing loss was a better predictor of speech identification and postural control than age.


Subject(s)
Aging , Cognition , Multitasking Behavior , Postural Balance , Speech Perception , Standing Position , Humans , Male , Female , Middle Aged , Speech Perception/physiology , Adult , Aged , Young Adult , Age Factors , Postural Balance/physiology , Multitasking Behavior/physiology , Aging/physiology , Aging/psychology , Acoustic Stimulation , Noise/adverse effects , Speech Intelligibility , Hearing/physiology , Recognition, Psychology
18.
J Acoust Soc Am ; 155(6): 3833-3847, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38884525

ABSTRACT

For cochlear implant (CI) listeners, holding a conversation in noisy and reverberant environments is often challenging. Deep-learning algorithms can potentially mitigate these difficulties by enhancing speech in everyday listening environments. This study compared several deep-learning algorithms with access to one, two unilateral, or six bilateral microphones that were trained to recover speech signals by jointly removing noise and reverberation. The noisy-reverberant speech and an ideal noise reduction algorithm served as lower and upper references, respectively. Objective signal metrics were compared with results from two listening tests, including 15 typical hearing listeners with CI simulations and 12 CI listeners. Large and statistically significant improvements in speech reception thresholds of 7.4 and 10.3 dB were found for the multi-microphone algorithms. For the single-microphone algorithm, there was an improvement of 2.3 dB but only for the CI listener group. The objective signal metrics correctly predicted the rank order of results for CI listeners, and there was an overall agreement for most effects and variances between results for CI simulations and CI listeners. These algorithms hold promise to improve speech intelligibility for CI listeners in environments with noise and reverberation and benefit from a boost in performance when using features extracted from multiple microphones.


Subject(s)
Cochlear Implants , Deep Learning , Noise , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Female , Male , Adult , Middle Aged , Aged , Algorithms , Young Adult , Cochlear Implantation/instrumentation
19.
Trends Hear ; 28: 23312165241260029, 2024.
Article in English | MEDLINE | ID: mdl-38831646

ABSTRACT

The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become more and more important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
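The mechanism described (ANC lowering the vent-transmitted floor so that the amplified path's SNR advantage survives at the eardrum) can be illustrated with simple level arithmetic. A toy sketch in which all levels and gains are illustrative assumptions rather than the paper's model:

```python
import numpy as np

def eardrum_snr(env_db, amp_gain_db, snr_gain_db, anc_db):
    """Power-sum direct (vent) and amplified paths; environment at 0 dB SNR."""
    direct_sig, direct_noise = env_db - anc_db, env_db - anc_db
    amp_sig = env_db + amp_gain_db + snr_gain_db   # amplified path, improved SNR
    amp_noise = env_db + amp_gain_db
    sig = 10 * np.log10(10 ** (direct_sig / 10) + 10 ** (amp_sig / 10))
    noise = 10 * np.log10(10 ** (direct_noise / 10) + 10 ** (amp_noise / 10))
    return sig - noise

# 75 dB SPL environment, device adds 4 dB SNR: without vs. with 10 dB of ANC
for anc in (0, 10):
    print(f"ANC = {anc} dB -> eardrum SNR = {eardrum_snr(75, 0, 4, anc):.1f} dB")
```

With these made-up numbers, more of the amplified path's 4 dB SNR improvement reaches the eardrum when ANC suppresses the direct path, consistent with the mechanism the abstract describes.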


Subject(s)
Hearing Aids , Noise , Perceptual Masking , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Computer Simulation , Acoustic Stimulation , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Equipment Design , Signal Processing, Computer-Assisted
20.
Am J Speech Lang Pathol ; 33(4): 1930-1951, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38838243

ABSTRACT

PURPOSE: This study investigated the effects of the SPEAK OUT! & LOUD Crowd therapy program on speaking rate, percent pause time, intelligibility, naturalness, and communicative participation in individuals with Parkinson's disease (PD). METHOD: Six adults with PD completed 12 individual SPEAK OUT! sessions across four consecutive weeks followed by group-based LOUD Crowd sessions for five consecutive weeks. Most therapy sessions were conducted via telehealth, with two participants completing the SPEAK OUT! portion in person. Speech samples were recorded at six time points: three baseline time points prior to SPEAK OUT!, two post-SPEAK OUT! time points, and one post-LOUD Crowd time point. Acoustic measures of speaking rate and percent pause time and listener ratings of speech intelligibility and naturalness were obtained for each time point. Participant self-ratings of communicative participation were also collected at pre- and posttreatment time points. RESULTS: Results showed significant improvement in communicative participation scores at a group level following completion of the SPEAK OUT! & LOUD Crowd treatment program. Two participants showed a significant decrease in speaking rate and increase in percent pause time following treatment. Changes in intelligibility and naturalness were not statistically significant. CONCLUSIONS: These findings provide preliminary support for the effectiveness of the SPEAK OUT! & LOUD Crowd treatment program in improving communicative participation for people with mild-to-moderate hypokinetic dysarthria secondary to PD. This study is also the first to demonstrate positive effects of this treatment program for people receiving the therapy via telehealth.
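Percent pause time of the kind reported here can be estimated from a recording with simple energy-based silence detection. A minimal sketch (the frame length and threshold are illustrative assumptions):

```python
import numpy as np

def percent_pause_time(signal, fs, frame_ms=25, thresh_db=-40.0):
    """Percentage of frames whose RMS level falls below a silence threshold."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    frames = np.reshape(signal[:n * frame], (n, frame))
    rms_db = 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12)
    rms_db -= rms_db.max()                 # level relative to the loudest frame
    return 100.0 * np.mean(rms_db < thresh_db)
```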


Subject(s)
Parkinson Disease , Speech Intelligibility , Speech Production Measurement , Speech Therapy , Humans , Parkinson Disease/complications , Parkinson Disease/therapy , Male , Female , Aged , Middle Aged , Speech Therapy/methods , Dysarthria/etiology , Dysarthria/therapy , Dysarthria/rehabilitation , Treatment Outcome , Speech Acoustics , Time Factors , Voice Quality , Telemedicine