Results 1 - 20 of 46
1.
Hear Res ; 447: 109023, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733710

ABSTRACT

Limited auditory input, whether caused by hearing loss or by electrical stimulation through a cochlear implant (CI), can be compensated for by the remaining senses. Specifically for CI users, previous studies reported not only improved visual skills, but also altered cortical processing of unisensory visual and auditory stimuli. However, in multisensory scenarios, it is still unclear how auditory deprivation (before implantation) and electrical hearing experience (after implantation) affect cortical audiovisual speech processing. Here, we present a prospective longitudinal electroencephalography (EEG) study which systematically examined the deprivation- and CI-induced alterations of cortical processing of audiovisual words by comparing event-related potentials (ERPs) in postlingually deafened CI users before and after implantation (five weeks and six months of CI use). A group of matched normal-hearing (NH) listeners served as controls. The participants performed a word-identification task with congruent and incongruent audiovisual words, focusing their attention on either the visual (lip movement) or the auditory speech signal. This allowed us to study the (top-down) attention effect on the (bottom-up) sensory cortical processing of audiovisual speech. When compared to the NH listeners, the CI candidates (before implantation) and the CI users (after implantation) exhibited enhanced lipreading abilities and an altered cortical response at the N1 latency range (90-150 ms) that was characterized by a decreased theta oscillation power (4-8 Hz) and a smaller amplitude in the auditory cortex. After implantation, however, the auditory-cortex response gradually increased and developed a stronger intra-modal connectivity. Nevertheless, task efficiency and activation in the visual cortex were significantly modulated in both groups by focusing attention on the visual as compared to the auditory speech signal, with the NH listeners additionally showing an attention-dependent decrease in beta oscillation power (13-30 Hz). In sum, these results suggest remarkable deprivation effects on audiovisual speech processing in the auditory cortex, which partially reverse after implantation. Although even experienced CI users still show distinct audiovisual speech processing compared to NH listeners, pronounced effects of (top-down) direction of attention on (bottom-up) audiovisual processing can be observed in both groups. However, NH listeners but not CI users appear to show enhanced allocation of cognitive resources in visually attended as compared to auditorily attended audiovisual speech conditions, which supports our behavioural observations of poorer lipreading abilities and reduced visual influence on audition in NH listeners as compared to CI users.
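
For illustration, a minimal sketch of how band-limited oscillation power, such as the theta effect (4-8 Hz) in the N1 window reported above, can be estimated from single-trial EEG. The zero-phase band-pass filter, Hilbert-envelope approach, and simulated data are assumptions for demonstration, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(epochs, fs, f_lo=4.0, f_hi=8.0, order=4):
    """Mean oscillatory power in a frequency band (e.g. theta, 4-8 Hz).

    epochs : array (n_trials, n_samples), single-channel EEG
    fs     : sampling rate in Hz
    """
    # Zero-phase band-pass filter to isolate the band of interest
    b, a = butter(order, [f_lo, f_hi], btype="band", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    # Instantaneous power from the analytic signal (Hilbert transform)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2
    return power.mean(axis=0)  # average across trials

# Toy example: 100 simulated trials, 600 ms at 500 Hz
fs = 500
t = np.arange(0, 0.6, 1 / fs)
trials = 0.5 * np.sin(2 * np.pi * 6 * t) + np.random.randn(100, t.size)
theta = band_power(trials, fs)
# Mean theta power in the 90-150 ms (N1) window discussed in the abstract
n1_window = theta[int(0.09 * fs):int(0.15 * fs)].mean()
print(f"Theta power in N1 window: {n1_window:.3f}")
```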


Subject(s)
Acoustic Stimulation , Attention , Cochlear Implantation , Cochlear Implants , Deafness , Electroencephalography , Persons With Hearing Impairments , Photic Stimulation , Speech Perception , Humans , Male , Female , Middle Aged , Cochlear Implantation/instrumentation , Adult , Prospective Studies , Longitudinal Studies , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Deafness/physiopathology , Deafness/rehabilitation , Deafness/psychology , Case-Control Studies , Aged , Visual Perception , Lipreading , Time Factors , Hearing , Evoked Potentials, Auditory , Auditory Cortex/physiopathology , Evoked Potentials
2.
Trends Hear ; 28: 23312165231215916, 2024.
Article in English | MEDLINE | ID: mdl-38284359

ABSTRACT

When presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can often be observed, reflected in better speech recognition compared to the left ear. Considering the left-hemispheric dominance for language, the REA has been explained by superior contralateral pathways (structural models) and language-induced shifts of attention to the right (attentional models). There is some evidence that the REA becomes more pronounced as cognitive load increases. Hence, it is interesting to investigate the REA in static (constant target talker) and dynamic (target changing pseudo-randomly) cocktail-party situations, as the latter is associated with a higher cognitive load than the former. Furthermore, previous research suggests an increasing REA when listening becomes more perceptually challenging. The present study examined the REA by using virtual acoustics to simulate static and dynamic cocktail-party situations, with three spatially separated talkers uttering concurrent matrix sentences. Sentences were presented at low sound pressure levels or processed with a noise vocoder to increase perceptual load. Sixteen young normal-hearing adults participated in the study. The REA was assessed by means of word recognition scores and a detailed error analysis. Word recognition revealed a greater REA for the dynamic than for the static situations, compatible with the view that an increase in cognitive load results in a heightened REA. Also, the REA depended on the type of perceptual load, as indicated by a greater REA for vocoded than for low-level stimuli. The results of the error analysis support both structural and attentional models of the REA.
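
For illustration, a minimal noise-vocoder sketch of the kind of processing named above: the carrier in each analysis band is replaced by envelope-modulated noise. The channel count, log-spaced band edges, and filter settings are assumptions, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Minimal noise vocoder: extract the temporal envelope in each
    analysis band and use it to modulate band-limited white noise."""
    # Log-spaced band edges between f_lo and f_hi
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.randn(len(speech))
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)       # analysis band of the speech
        env = np.abs(hilbert(band))           # temporal envelope of the band
        carrier = sosfiltfilt(sos, noise)     # band-limited noise carrier
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)  # normalise peak level
```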


Subject(s)
Speech Perception , Adult , Humans , Acoustic Stimulation , Ear , Noise
3.
Curr Res Neurobiol ; 3: 100059, 2022.
Article in English | MEDLINE | ID: mdl-36405629

ABSTRACT

Hearing with a cochlear implant (CI) is limited compared to natural hearing. Although CI users may develop compensatory strategies, it is currently unknown whether these extend from auditory to visual functions, and whether compensatory strategies vary between different CI user groups. To better understand the experience-dependent contributions to multisensory plasticity in audiovisual speech perception, the current event-related potential (ERP) study presented syllables in auditory, visual, and audiovisual conditions to CI users with unilateral or bilateral hearing loss, as well as to normal-hearing (NH) controls. Behavioural results revealed shorter audiovisual response times compared to unisensory conditions for all groups. Multisensory integration was confirmed by electrical neuroimaging, including topographic and ERP source analysis, showing a visual modulation of the auditory-cortex response at N1 and P2 latency. However, CI users with bilateral hearing loss showed a distinct pattern of N1 topography, indicating a stronger visual impact on auditory speech processing compared to CI users with unilateral hearing loss and NH listeners. Furthermore, both CI user groups showed delayed auditory-cortex activation, additional recruitment of the visual cortex, and better lip-reading ability compared to NH listeners. In sum, these results extend previous findings by showing distinct multisensory processes not only between NH listeners and CI users in general, but even between CI users with unilateral and bilateral hearing loss. However, the comparably enhanced lip-reading ability and visual-cortex activation in both CI user groups suggest that these visual improvements are evident regardless of the hearing status of the contralateral ear.

4.
Trends Hear ; 26: 23312165221111676, 2022.
Article in English | MEDLINE | ID: mdl-35849353

ABSTRACT

In cocktail party situations, multiple talkers speak simultaneously, which causes listening to be perceptually and cognitively challenging. Such situations can either be static (fixed target talker) or dynamic, meaning the target talker switches occasionally and in a potentially unpredictable way. To shed light on the perceptual and cognitive mechanisms in static and dynamic cocktail party situations, we conducted an analysis of error types that occur during a multi-talker speech recognition test. The error analysis distinguished between misunderstood or omitted words (random errors) and target-masker confusions. To investigate the effects of aging and hearing impairment, we compared data from three listener groups, comprised of young as well as older adults with and without hearing loss. In the static condition, error rates were generally very low, except for the older hearing-impaired listeners. Consistent with the assumption of decreased audibility, they showed a notable amount of random errors. In the dynamic condition, errors increased compared to the static condition, especially immediately following a target talker switch. Those increases were similar for random and confusion errors. The older hearing-impaired listeners showed greater difficulties than the younger adults in trials not preceded by a switch. These results suggest that the load associated with dynamic cocktail party listening affects the ability to focus attention on the talker of interest and the retrieval of words from short-term memory, as indicated by the increased amount of confusion and random errors. This was most pronounced in the older hearing-impaired listeners, suggesting an interplay of perceptual and cognitive mechanisms.
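
For illustration, a sketch of the response classification the error analysis describes: a response word is scored as correct, as a target-masker confusion, or as a random error. The function and scoring rules are illustrative assumptions; the study's exact criteria may differ.

```python
def classify_error(response, target, masker_words):
    """Classify a single response word against the sentence material.

    Returns 'correct', 'confusion' (a word spoken by a masker talker),
    or 'random' (an omission or a word from neither talker).
    """
    if response == target:
        return "correct"
    if response in masker_words:
        return "confusion"
    return "random"

# Toy trial: target said "Peter", maskers said "Kerstin" and "Thomas"
print(classify_error("Kerstin", "Peter", {"Kerstin", "Thomas"}))  # confusion
print(classify_error(None, "Peter", {"Kerstin", "Thomas"}))       # random (omission)
```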


Subject(s)
Hearing Loss , Speech Perception , Aged , Auditory Perception , Cognition , Hearing Loss/diagnosis , Humans , Perceptual Masking
5.
Neuroimage Clin ; 34: 102982, 2022.
Article in English | MEDLINE | ID: mdl-35303598

ABSTRACT

A cochlear implant (CI) is an auditory prosthesis which can partially restore the auditory function in patients with severe to profound hearing loss. However, this bionic device provides only limited auditory information, and CI patients may compensate for this limitation by means of a stronger interaction between the auditory and visual system. To better understand the electrophysiological correlates of audiovisual speech perception, the present study used electroencephalography (EEG) and a redundant target paradigm. Postlingually deafened CI users and normal-hearing (NH) listeners were compared in auditory, visual and audiovisual speech conditions. The behavioural results revealed multisensory integration for both groups, as indicated by shortened response times for the audiovisual as compared to the two unisensory conditions. The analysis of the N1 and P2 event-related potentials (ERPs), including topographic and source analyses, confirmed a multisensory effect for both groups and showed a cortical auditory response which was modulated by the simultaneous processing of the visual stimulus. Nevertheless, the CI users in particular revealed a distinct pattern of N1 topography, pointing to a strong visual impact on auditory speech processing. Apart from these condition effects, the results revealed ERP differences between CI users and NH listeners, not only in N1/P2 ERP topographies, but also in the cortical source configuration. When compared to the NH listeners, the CI users showed an additional activation in the visual cortex at N1 latency, which was positively correlated with CI experience, and a delayed auditory-cortex activation with a reversed, rightward functional lateralisation. In sum, our behavioural and ERP findings demonstrate a clear audiovisual benefit for both groups, and a CI-specific alteration in cortical activation at N1 latency when auditory and visual input is combined. These cortical alterations may reflect a compensatory strategy to overcome the limited CI input, which allows the CI users to improve their lip-reading skills and to approximate the behavioural performance of NH listeners in audiovisual speech conditions. Our results are clinically relevant, as they highlight the importance of assessing the CI outcome not only in auditory-only, but also in audiovisual speech conditions.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Acoustic Stimulation/methods , Auditory Perception/physiology , Evoked Potentials , Humans , Speech , Speech Perception/physiology , Visual Perception/physiology
6.
Front Neurosci ; 16: 1005859, 2022.
Article in English | MEDLINE | ID: mdl-36620447

ABSTRACT

A cochlear implant (CI) can partially restore hearing in individuals with profound sensorineural hearing loss. However, electrical hearing with a CI is limited and highly variable. The current study aimed to better understand the different factors contributing to this variability by examining how age affects cognitive functions and cortical speech processing in CI users. Electroencephalography (EEG) was applied while two groups of CI users (young and elderly; N = 13 each) and normal-hearing (NH) listeners (young and elderly; N = 13 each) performed an auditory sentence categorization task, including semantically correct and incorrect sentences presented either with or without background noise. Event-related potentials (ERPs) representing earlier, sensory-driven processes (N1-P2 complex to sentence onset) and later, cognitive-linguistic integration processes (N400 to semantically correct/incorrect sentence-final words) were compared between the different groups and speech conditions. The results revealed reduced amplitudes and prolonged latencies of auditory ERPs in CI users compared to NH listeners, both at earlier (N1, P2) and later processing stages (N400 effect). In addition to this hearing-group effect, CI users and NH listeners showed a comparable background-noise effect, as indicated by reduced hit rates and reduced (P2) and delayed (N1/P2) ERPs in conditions with background noise. Moreover, we observed an age effect in CI users and NH listeners, with young individuals showing better performance in specific cognitive functions (working memory capacity, cognitive flexibility and verbal learning/retrieval), reduced latencies (N1/P2), decreased N1 amplitudes and an increased N400 effect when compared to the elderly. In sum, our findings extend previous research by showing that the CI users' speech processing is impaired not only at earlier (sensory) but also at later (semantic integration) processing stages, both in conditions with and without background noise. Using objective ERP measures, our study provides further evidence of strong age effects on cortical speech processing, which can be observed in both the NH listeners and the CI users. We conclude that elderly individuals require more effortful processing at sensory stages of speech processing, which, however, appears to come at the cost of the limited resources available for later semantic integration processes.

7.
JASA Express Lett ; 1(7): 075201, 2021 07.
Article in English | MEDLINE | ID: mdl-36154643

ABSTRACT

Situations with multiple competing talkers are especially challenging for listeners with hearing impairment. These "cocktail party" situations can either be static (fixed target talker) or dynamic (changing target talker). Relative to static situations, dynamic listening is typically associated with increased cognitive load and decreased speech recognition ("costs"). This study addressed the role of hearing impairment and cognition in two groups of older listeners with and without hearing loss. In most of the dynamic situations, the costs did not differ between the listener groups. There was no clear evidence that overall costs show an association with the individuals' cognitive abilities.


Subject(s)
Hearing Loss , Speech Perception , Acoustic Stimulation , Cognition , Humans , Perceptual Masking
8.
Front Neurosci ; 15: 725412, 2021.
Article in English | MEDLINE | ID: mdl-35221883

ABSTRACT

The outcome of cochlear implantation is typically assessed by speech recognition tests in quiet and in noise. Many cochlear implant recipients reveal satisfactory speech recognition, especially in quiet situations. However, since cochlear implants provide only limited spectro-temporal cues, the effort associated with understanding speech might be increased. In this respect, measures of listening effort could give important extra information regarding the outcome of cochlear implantation. In order to shed light on this topic and to gain knowledge for clinical applications, we compared speech recognition and listening effort in cochlear implant (CI) recipients and age-matched normal-hearing (NH) listeners while considering potentially influential factors, such as cognitive abilities. Importantly, we estimated speech recognition functions for both listener groups and compared listening effort at similar performance levels. To this end, a subjective listening effort test (adaptive scaling, "ACALES") as well as an objective test (dual-task paradigm) were applied and compared. Regarding speech recognition, CI users needed an about 4 dB better signal-to-noise ratio (SNR) to reach the same performance level of 50% as NH listeners, and an even 5 dB better SNR to reach 80% speech recognition, revealing shallower psychometric functions in the CI listeners. However, when targeting a fixed speech intelligibility of 50 and 80%, respectively, CI users and NH listeners did not differ significantly in terms of listening effort. This applied to both the subjective and the objective estimation. Subjective and objective listening effort outcomes were correlated neither with each other nor with the age or cognitive abilities of the listeners. This study did not give evidence that CI users and NH listeners differ in terms of listening effort, at least when the same performance level is considered. In contrast, both listener groups showed large inter-individual differences in effort determined with the subjective scaling and the objective dual-task. Potential clinical implications of how to assess listening effort as an outcome measure for hearing rehabilitation are discussed.
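
For illustration, a sketch of how speech recognition functions like those described above can be estimated: a logistic psychometric function is fitted to proportion-correct data and the SNRs for 50% and 80% intelligibility are read off. The data points, starting values, and logistic parameterisation are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt50, slope):
    """Recognition probability vs SNR; slope is the derivative at srt50."""
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt50)))

# Hypothetical per-SNR proportion-correct data for one listener
snrs = np.array([-9.0, -6.0, -3.0, 0.0, 3.0])
pc = np.array([0.08, 0.27, 0.55, 0.81, 0.95])

(srt50, slope), _ = curve_fit(logistic, snrs, pc, p0=(-3.0, 0.1))
# SNR needed for 80% correct, from the inverse of the fitted function
srt80 = srt50 + np.log(0.8 / 0.2) / (4.0 * slope)
print(f"SRT50 = {srt50:.1f} dB SNR, SRT80 = {srt80:.1f} dB SNR")
```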

9.
J Speech Lang Hear Res ; 63(12): 4325-4326, 2020 12 14.
Article in English | MEDLINE | ID: mdl-33237832

ABSTRACT

PURPOSE: The purpose of this letter is to compare results by Skuk et al. (2020) with Meister et al. (2016) and to point to a potential general influence of stimulus type. CONCLUSION: Our conclusion is that presenting sentences may give cochlear implant recipients the opportunity to use timbre cues for voice perception. This might not be the case when presenting brief and sparse stimuli such as consonant-vowel-consonant syllables or single words, which were applied in the majority of studies.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Voice , Cues , Humans
10.
Hear Res ; 395: 108020, 2020 09 15.
Article in English | MEDLINE | ID: mdl-32698114

ABSTRACT

Verbal communication often takes place in situations with several simultaneous speakers ("cocktail party listening"). These situations can be static (only one listening target) or dynamic (with alternating targets). In particular, dynamic cocktail party listening is believed to generate extra cognitive load and appears to be particularly demanding for older listeners. Two groups of younger and older listeners with good hearing and normal cognition participated in the present study. Three different, spatially separated talker voices uttering matrix sentences were presented to each listener with varying types and probabilities of target switches. Moreover, several neuropsychological tests were conducted to investigate general cognitive characteristics that may be important for speech understanding in these situations. In a static condition with a priori knowledge of the target talker, both age groups revealed very high speech recognition performance. In comparison, dynamic conditions caused extra costs associated with the need to monitor different potential sound sources and to refocus attention when the target changed. The size of these costs depended on the probability and type of target-talker switches. Again, no significant age-group differences were found. No significant associations between cognitive characteristics and costs could be shown. However, a more fine-grained analysis based on the calculation of general and specific switch costs showed different mechanisms in older and younger listeners. This study confirms that dynamic cocktail party listening is associated with costs that depend on the type and probability of target switches. It extends previous research by showing that the effects of switching type and probability are similar for younger and older listeners with good hearing and good cognitive abilities. It further shows that, despite comparable costs of dynamic listening, mechanisms are different for the two age groups, as switching auditory attention may be preserved with aging, but monitoring different sound sources appears to be more difficult for older adults.
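
For illustration, a sketch of one way general and specific switch costs can be decomposed from recognition scores. The decomposition follows the common task-switching convention (an assumption here; the study's exact definitions are not given in the abstract).

```python
def switch_costs(static_score, dyn_stay_score, dyn_switch_score):
    """Decompose dynamic-listening costs in percentage points.

    general cost  = static - dynamic stay trials   (cost of monitoring)
    specific cost = dynamic stay - switch trials   (cost of refocusing)
    Scores are word-recognition percentages.
    """
    general = static_score - dyn_stay_score
    specific = dyn_stay_score - dyn_switch_score
    return general, specific

# Toy scores (hypothetical percentages)
g, s = switch_costs(static_score=96.0, dyn_stay_score=88.0, dyn_switch_score=74.0)
print(f"general cost: {g:.1f} pp, specific cost: {s:.1f} pp")
```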


Subject(s)
Auditory Perception , Attention , Cognition , Noise , Speech Perception
11.
J Acoust Soc Am ; 147(1): EL19, 2020 01.
Article in English | MEDLINE | ID: mdl-32007021

ABSTRACT

Cochlear implant (CI) recipients are limited in their perception of voice cues, such as the fundamental frequency (F0). This has important consequences for speech recognition when several talkers speak simultaneously. This examination considered the comparison of clear speech and noise-vocoded sentences as maskers. For the speech maskers it could be shown that good CI performers are able to benefit from F0 differences between target and masker. This was due to the fact that a F0 difference of 80 Hz significantly reduced target-masker confusions, an effect that was slightly more pronounced in bimodal than in bilateral users.


Subject(s)
Cochlear Implantation/methods , Perceptual Masking/physiology , Speech Perception/physiology , Speech Reception Threshold Test/methods , Adult , Aged , Cochlear Implantation/standards , Female , Humans , Male , Middle Aged , Speech Reception Threshold Test/standards
12.
J Acoust Soc Am ; 145(3): 1283, 2019 03.
Article in English | MEDLINE | ID: mdl-31067927

ABSTRACT

This study investigated the potential influence of cognitive factors on subjective sound-quality ratings. To this end, 34 older subjects (ages 61-79) with near-normal hearing thresholds rated the perceived sound quality of speech and music stimuli that had been distorted by linear filtering, non-linear processing, and multiband dynamic compression. In addition, all subjects performed the Reading Span Test (RST) to assess working memory capacity (WMC), and the test d2-R (a visual test of letter and symbol identification) was used to assess the subjects' selective and sustained attention. The quality-rating scores, which reflected the susceptibility to signal distortions, were characterized by large interindividual variances. Linear mixed modelling with age, high-frequency pure tone threshold, RST, and d2-R results as independent variables showed that individual speech-quality ratings were significantly related to age and attention. Music-quality ratings were significantly related to WMC. Taking these factors into account might lead to improved sound-quality prediction models. Future studies should, however, address the question of whether these effects are due to procedural mechanisms or actually do show that cognitive abilities mediate sensitivity to sound-quality modifications.
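
For illustration, a sketch of a linear mixed model with age, high-frequency threshold, RST, and d2-R as fixed effects and a random intercept per subject, as the analysis above describes. It uses statsmodels' mixedlm on simulated data; all column names and values are hypothetical stand-ins for the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_stim = 20, 6  # subjects x rated stimuli (illustrative sizes)
subj = np.repeat([f"s{i}" for i in range(n_subj)], n_stim)
age = np.repeat(rng.integers(61, 80, n_subj), n_stim)
hf_pta = np.repeat(rng.normal(30, 8, n_subj), n_stim)   # high-freq threshold
rst = np.repeat(rng.normal(12, 2, n_subj), n_stim)      # Reading Span Test
d2r = np.repeat(rng.normal(100, 10, n_subj), n_stim)    # attention test
# Simulated quality ratings loosely driven by age and attention
rating = 80 - 0.4 * age + 0.2 * d2r + rng.normal(0, 5, n_subj * n_stim)

df = pd.DataFrame(dict(rating=rating, age=age, hf_pta=hf_pta,
                       rst=rst, d2r=d2r, subject=subj))
# Random intercept per subject accounts for repeated ratings
model = smf.mixedlm("rating ~ age + hf_pta + rst + d2r", df,
                    groups=df["subject"])
print(model.fit().summary())
```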

13.
J Acoust Soc Am ; 144(5): EL417, 2018 11.
Article in English | MEDLINE | ID: mdl-30522293

ABSTRACT

This letter describes a dual-task paradigm sensitive to noise masking at favorable signal-to-noise ratios (SNRs). Two competing sentences differing in voice and context cues were presented against noise at SNRs of +2 and +6 dB. Listeners were asked to repeat back words from both competing sentences while prioritizing one of them. Recognition of the high-priority sentences was high and did not depend on the SNR. In contrast, recognition of the low-priority sentences was low and showed a significant SNR effect that was related to the listener's working memory capacity. This suggests that even subtle noise masking causes cognitive load in competing-talker situations.
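
For illustration, a sketch of how speech and noise can be mixed at a target broadband SNR such as the +2 and +6 dB conditions above; the random signals stand in for real recordings.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then return the mixture (signals assumed same length and rate)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Toy check at the +2 dB condition
fs = 16000
speech = np.random.randn(fs)   # stand-in for a recorded sentence
noise = np.random.randn(fs)    # stand-in for the masking noise
mix = mix_at_snr(speech, noise, snr_db=2.0)
```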


Subject(s)
Auditory Perception/physiology , Cognition/physiology , Noise/adverse effects , Speech Perception/physiology , Aged , Cues , Female , Humans , Male , Memory, Short-Term/physiology , Middle Aged , Signal-To-Noise Ratio
14.
Trends Hear ; 22: 2331216518793255, 2018.
Article in English | MEDLINE | ID: mdl-30124111

ABSTRACT

This study examined verbal response times (the duration from stimulus offset to voice onset) as a potential measure of cognitive load during conventional testing of speech-in-noise understanding. Response times were compared with a measure of perceived effort as assessed by listening effort scaling. Three listener groups differing in age and hearing status participated in the study. Testing was done at two target intelligibility levels (80%, 95%) and with two noise types (stationary and fluctuating). Verbal response times reflected effects of intelligibility level, noise type, and listener group. Response times were shorter for 95% compared with 80% target intelligibility, shorter for fluctuating compared with stationary noise, and shorter for young listeners compared with older listeners. Responses were also faster for the older listeners with near normal hearing compared with the older hearing-aid users. In contrast, subjective listening effort scaling predominantly revealed effects of target intelligibility level but did not show consistent noise-type or listener-group effects. These findings show that verbal response times and effort scalings tap into different domains of listening effort. Verbal response times can be easily assessed during conventional speech audiometry and have the potential to show effects beyond performance measures and subjective effort estimates.
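
For illustration, a minimal energy-threshold detector for the voice onset that defines the verbal response time above. The frame length and threshold factor are assumptions; real setups often use calibrated voice keys instead.

```python
import numpy as np

def verbal_response_time(recording, fs, stimulus_offset_s,
                         frame_ms=10.0, thresh_factor=5.0):
    """Estimate voice onset after stimulus offset via short-time energy."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(recording) // frame
    # Mean energy per non-overlapping frame
    energy = (recording[: n_frames * frame] ** 2
              ).reshape(n_frames, frame).mean(axis=1)
    start = int(stimulus_offset_s * 1000 / frame_ms)
    floor = np.median(energy[start:start + 10])   # local noise floor estimate
    above = np.nonzero(energy[start:] > thresh_factor * floor)[0]
    if above.size == 0:
        return None                               # no response detected
    onset_s = (start + above[0]) * frame_ms / 1000
    return onset_s - stimulus_offset_s            # response time in seconds
```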


Subject(s)
Cognition/physiology , Hearing Loss, Sensorineural/physiopathology , Reaction Time , Speech Intelligibility , Speech Perception/physiology , Adult , Aged , Audiometry, Speech , Auditory Threshold/physiology , Female , Humans , Male , Noise , Young Adult
15.
Am J Audiol ; 27(2): 197-207, 2018 Jun 08.
Article in English | MEDLINE | ID: mdl-29536106

ABSTRACT

PURPOSE: This study aimed to investigate whether adults with cochlear implants benefit from a change of fine structure (FS) coding strategies regarding the discrimination of prosodic speech cues, timbre cues, and the identification of natural instruments. The FS processing (FSP) coding strategy was compared to 2 settings of the FS4 strategy. METHOD: A longitudinal crossover, double-blinded study was conducted. This study consisted of 2 parts, with 14 participants in the first part and 12 participants in the second part. Each part lasted 3 months, in which participants were alternately fitted with either the established FSP strategy or 1 of the 2 newly developed FS4 settings. Participants had to complete an intonation identification test; a timbre discrimination test in which 1 of 2 isolated cues changed, either the spectral centroid or the spectral irregularity; and an instrument identification test. RESULTS: A significant effect was seen in the discrimination of spectral irregularity with 1 of the 2 FS4 settings. The improvement was seen in the FS4 setting in which the upper envelope channels had a low stimulation rate. This improvement was not seen with the FS4 setting that had a higher stimulation rate on the envelope channels. CONCLUSIONS: In general, the FSP strategy and the 2 settings of the FS4 strategy provided similar levels of performance in the perception of prosody and timbre cues, as well as in the identification of instruments.
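
For illustration, sketches of the two timbre cues the discrimination test varied. The centroid formula is standard; the irregularity function uses one common definition (sum of squared differences between adjacent partial amplitudes), which is an assumption since the study's exact metric is not given in the abstract.

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

def spectral_irregularity(partial_amps):
    """Sum of squared differences between adjacent partial amplitudes."""
    a = np.asarray(partial_amps, dtype=float)
    return np.sum((a[1:] - a[:-1]) ** 2)

# Toy tone complex: 5 harmonics of 220 Hz with uneven amplitudes
fs, dur = 44100, 0.5
t = np.arange(int(fs * dur)) / fs
amps = [1.0, 0.5, 0.8, 0.3, 0.6]
tone = sum(a * np.sin(2 * np.pi * 220 * (k + 1) * t)
           for k, a in enumerate(amps))
print(f"centroid: {spectral_centroid(tone, fs):.0f} Hz, "
      f"irregularity: {spectral_irregularity(amps):.2f}")
```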


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Cochlear Implants , Deafness/surgery , Pitch Discrimination/physiology , Speech Perception/physiology , Aged , Cochlear Implantation/methods , Cross-Over Studies , Cues , Deafness/diagnosis , Double-Blind Method , Female , Germany , Humans , Longitudinal Studies , Male , Middle Aged , Music , Prospective Studies , Prosthesis Design , Treatment Outcome
16.
Int J Audiol ; 57(sup3): S105-S111, 2018 06.
Article in English | MEDLINE | ID: mdl-28449597

ABSTRACT

OBJECTIVES: Model-based hearing aid development considers the assessment of speech recognition using a master hearing aid (MHA). It is known that aided speech recognition in noise is related to cognitive factors such as working memory capacity (WMC). This relationship might be mediated by hearing aid experience (HAE). The aim of this study was to examine the relationship of WMC and speech recognition with a MHA for listeners with different HAE. DESIGN: Using the MHA, unaided and aided 80% speech recognition thresholds in noise were determined. Individual WMC was assessed using the Verbal Learning and Memory Test (VLMT) and the Reading Span Test (RST). STUDY SAMPLE: Forty-nine hearing aid users with mild to moderate sensorineural hearing loss, divided into three groups differing in HAE. RESULTS: Whereas unaided speech recognition did not show a significant relationship with WMC, a significant correlation could be observed between WMC and aided speech recognition. However, this only applied to listeners with HAE of up to approximately three years, and a consistent weakening of the correlation could be observed with more experience. CONCLUSIONS: Speech recognition scores obtained in acute experiments with an MHA are less influenced by individual cognitive capacity when experienced HA users are taken into account.


Subject(s)
Algorithms , Cognition , Correction of Hearing Impairment/instrumentation , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Hearing , Persons With Hearing Impairments/rehabilitation , Recognition, Psychology , Signal Processing, Computer-Assisted , Speech Perception , Acoustic Stimulation , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Equipment Design , Female , Germany , Hearing Loss, Sensorineural/diagnosis , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Humans , Male , Memory, Short-Term , Middle Aged , Models, Theoretical , Neuropsychological Tests , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Psychoacoustics , Speech Intelligibility
17.
Int J Audiol ; 57(sup3): S55-S61, 2018 06.
Article in English | MEDLINE | ID: mdl-28112001

ABSTRACT

OBJECTIVE: The perceived qualities of nine different single-microphone noise reduction (SMNR) algorithms were evaluated and compared in subjective listening tests with normal hearing and hearing impaired (HI) listeners. DESIGN: Speech samples mixed with traffic noise or party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortions, intrusiveness of background noise, listening effort and overall quality, using a simplified MUSHRA (ITU-R, 2003) assessment method. STUDY SAMPLE: 18 normal hearing and 18 moderately HI subjects participated in the study. RESULTS: Significant differences between the rating behaviours of the two subject groups were observed: While normal hearing subjects clearly differentiated between different SMNR algorithms, HI subjects rated all processed signals very similarly. Moreover, HI subjects rated speech distortions of the unprocessed, noisier signals as being more severe than the distortions of the processed signals, in contrast to normal hearing subjects. CONCLUSIONS: It seems harder for HI listeners to distinguish between additive noise and speech distortions, and/or they might have a different understanding of the term "speech distortion" than normal hearing listeners have. The findings confirm that the evaluation of SMNR schemes for hearing aids should always involve HI listeners.


Subject(s)
Correction of Hearing Impairment/instrumentation , Hearing Aids , Hearing Loss/rehabilitation , Hearing , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Speech Perception , Acoustic Stimulation , Adult , Aged , Auditory Threshold , Case-Control Studies , Equipment Design , Female , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Hearing Loss/psychology , Hearing Tests , Humans , Male , Models, Theoretical , Persons With Hearing Impairments/psychology , Psychoacoustics , Signal Processing, Computer-Assisted , Speech Intelligibility
18.
Int J Audiol ; 57(sup3): S118-S129, 2018 06.
Article in English | MEDLINE | ID: mdl-27875658

ABSTRACT

OBJECTIVE: The aim of the study was to develop a questionnaire for profiling sound preferences and hearing habits, in order to gather additional information usable for the individualisation of hearing aid (HA) fittings and of pre-sets for audio devices. METHODS: We developed a questionnaire consisting of 46 items. A postal survey was conducted with N = 622 users with a mean age of 66 years (47.9% aided with HA, 45.7% female). RESULTS: Seven factors were identified by means of Exploratory and Confirmatory Factor Analyses: F1: 'Annoyance/distraction by background noise', F2: 'Importance of sound quality', F3: 'Noise Sensitivity', F4: 'Avoidance of unpredictable sounds', F5: 'Openness towards loud/new sounds', F6: 'Preferences for warm sounds', and F7: 'Details of environmental sounds/music'. Only the first of these factors was related to the audiogram of the user. None of the factors differed between HA users and non-users. In contrast, gender effects were found, with female respondents preferring warm sounds and being more sensitive to noise. CONCLUSIONS: The sound preference and hearing habits questionnaire (SP-HHQ) is a usable tool for profiling users with respect to sound preferences relevant for HA fitting and pre-sets for audio devices.
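
For illustration, a sketch of an exploratory factor analysis of the kind named above, using scikit-learn on purely synthetic questionnaire responses. The item counts match the abstract; the data, rotation choice, and item scale are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic 46-item responses (5-point scale) for 622 respondents,
# generated from seven latent traits as a stand-in for the survey data
rng = np.random.default_rng(0)
latent = rng.normal(size=(622, 7))
loadings = rng.normal(scale=0.6, size=(7, 46))
items = np.clip(np.rint(3 + latent @ loadings
                        + rng.normal(scale=0.8, size=(622, 46))), 1, 5)

fa = FactorAnalysis(n_components=7, rotation="varimax")
scores = fa.fit_transform(items)     # per-respondent factor scores
print(fa.components_.shape)          # (7, 46): item loadings per factor
```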


Subject(s)
Auditory Perception , Correction of Hearing Impairment/instrumentation , Habits , Hearing Aids , Hearing Loss/rehabilitation , Hearing , Patient Preference , Persons With Hearing Impairments/rehabilitation , Psychometrics , Surveys and Questionnaires , Acoustic Stimulation , Adult , Aged , Auditory Threshold , Equipment Design , Female , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Hearing Loss/psychology , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Predictive Value of Tests , Sex Factors
19.
Ear Hear ; 39(3): 503-516, 2018.
Article in English | MEDLINE | ID: mdl-29068860

ABSTRACT

OBJECTIVES: Watching a talker's mouth is beneficial for speech reception (SR) in many communication settings, especially in noise and when hearing is impaired. Measures of audiovisual (AV) SR can be valuable in the framework of diagnosing or treating hearing disorders. This study addresses the lack of standardized methods in many languages for assessing lipreading, AV gain, and integration. A new method is validated that supplements a German speech audiometric test with visualizations of the synthetic articulation of an avatar, which makes it feasible to lip-sync auditory speech in a highly standardized way. Three hypotheses were formed according to the literature on AV SR obtained with live or filmed talkers. It was tested whether the respective effects could be reproduced with synthetic articulation: (1) cochlear implant (CI) users have a higher visual-only SR than normal-hearing (NH) individuals, and younger individuals obtain higher lipreading scores than older persons. (2) Both CI users and NH listeners gain from AV compared to unimodal (auditory or visual) presentation of sentences in noise. (3) Both CI and NH listeners efficiently integrate complementary auditory and visual speech features. DESIGN: In a controlled, cross-sectional study with 14 experienced CI users (mean age 47.4) and 14 NH individuals (mean age 46.3, similarly broad age distribution), lipreading, AV gain, and integration were assessed with a German matrix sentence test. Visual speech stimuli were synthesized by the articulation of the Talking Head system "MASSY" (Modular Audiovisual Speech Synthesizer), which displayed standardized articulation with respect to the visibility of German phones. RESULTS: In line with the hypotheses and previous literature, CI users had a higher mean visual-only SR than NH individuals (CI, 38%; NH, 12%; p < 0.001). Age was correlated with lipreading such that within each group, younger individuals obtained higher visual-only scores than older persons (r(CI) = -0.54, p = 0.046; r(NH) = -0.78, p < 0.001). Both CI and NH listeners benefitted from AV over unimodal speech, as indexed by calculations of the measures visual enhancement and auditory enhancement (each p < 0.001). Both groups efficiently integrated complementary auditory and visual speech features, as indexed by calculations of the measure integration enhancement (each p < 0.005). CONCLUSIONS: Given the good agreement between results from the literature and the outcome of supplementing an existing validated auditory test with synthetic visual cues, the introduced method can be considered an interesting candidate for clinical and scientific applications to assess measures important for AV SR in a standardized manner. This could be beneficial for optimizing the diagnosis and treatment of individual listening and communication disorders, such as with cochlear implantation.
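
For illustration, the visual and auditory enhancement measures named above in their standard formulation from the AV speech literature (gain normalized by the available headroom); this is an assumption, as the paper's exact definitions are not given in the abstract.

```python
def visual_enhancement(av, a):
    """VE = (AV - A) / (1 - A): AV gain relative to auditory-only headroom.
    Scores are proportions correct in [0, 1)."""
    return (av - a) / (1.0 - a)

def auditory_enhancement(av, v):
    """AE = (AV - V) / (1 - V): AV gain relative to visual-only headroom."""
    return (av - v) / (1.0 - v)

# Toy scores: visual-only 0.38 (cf. the CI group mean above), hypothetical
# auditory-only 0.55 and audiovisual 0.80
print(f"VE = {visual_enhancement(av=0.80, a=0.55):.2f}")
print(f"AE = {auditory_enhancement(av=0.80, v=0.38):.2f}")
```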


Subject(s)
Cochlear Implants , Lipreading , Speech Intelligibility , Speech Perception , Visual Perception , Acoustic Stimulation , Adult , Cross-Sectional Studies , Humans , Middle Aged
20.
J Acoust Soc Am ; 139(6): 3116, 2016 06.
Article in English | MEDLINE | ID: mdl-27369134

ABSTRACT

This study addressed the hypothesis that an improvement in speech recognition due to combined envelope and fine structure cues is greater in the audiovisual than the auditory modality. Normal hearing listeners were presented with envelope vocoded speech in combination with low-pass filtered speech. The benefit of adding acoustic low-frequency fine structure to acoustic envelope cues was significantly greater for audiovisual than for auditory-only speech. It is suggested that this is due to complementary information of the different acoustic and visual cues. The results have potential implications for the assessment of bimodal cochlear implant fittings or electroacoustic stimulation.


Subject(s)
Cues , Recognition, Psychology , Speech Acoustics , Speech Perception , Visual Perception , Acoustic Stimulation , Adult , Audiometry, Speech , Female , Humans , Male , Noise/adverse effects , Perceptual Masking , Photic Stimulation , Young Adult