Results 1 - 20 of 62
1.
Neuroimage ; 277: 120223, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37315772

ABSTRACT

Neural processing of the speech envelope is of crucial importance for speech perception and comprehension. This envelope processing is often investigated by measuring neural synchronization to sinusoidal amplitude-modulated stimuli at different modulation frequencies. However, it has been argued that these stimuli lack ecological validity. Pulsatile amplitude-modulated stimuli, on the other hand, are suggested to be more ecologically valid and efficient, and have increased potential to uncover the neural mechanisms behind some developmental disorders such as dyslexia. Nonetheless, pulsatile stimuli have not yet been investigated in pre-reading and beginning-reading children, a crucial age range for developmental reading research. We performed a longitudinal study to examine the potential of pulsatile stimuli in this age range. Fifty-two typically reading children were tested at three time points from the middle of their last year of kindergarten (5 years old) to the end of first grade (7 years old). Using electroencephalography, we measured neural synchronization to syllable-rate and phoneme-rate sinusoidal and pulsatile amplitude-modulated stimuli. Our results revealed that the pulsatile stimuli significantly enhance neural synchronization at syllable rate, compared to the sinusoidal stimuli. Additionally, the pulsatile stimuli at syllable rate elicited a different hemispheric specialization, more closely resembling natural speech envelope tracking. We postulate that pulsatile stimuli greatly increase EEG data acquisition efficiency compared to the commonly used sinusoidal amplitude-modulated stimuli in research with younger children and in developmental reading research.
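The study contrasts sinusoidal and pulsatile amplitude modulation, but the abstract does not specify how the stimuli were constructed. The sketch below only illustrates the general distinction; the 1 kHz carrier, the 4 Hz syllable rate, and the raised-cosine pulse with a 25% duty cycle are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 44100                                   # sampling rate (Hz)
t = np.arange(int(fs * 2.0)) / fs            # 2 s of stimulus
carrier = np.sin(2 * np.pi * 1000 * t)       # 1 kHz carrier (assumed)

def sinusoidal_am(t, fm):
    """Classic sinusoidal amplitude modulation, 100% modulation depth."""
    return 0.5 * (1 - np.cos(2 * np.pi * fm * t))

def pulsatile_am(t, fm, duty=0.25):
    """Pulsatile envelope: a short raised-cosine pulse repeated at rate fm.
    `duty` is the fraction of each cycle occupied by the pulse (assumed)."""
    phase = (t * fm) % 1.0                   # position within each cycle, 0..1
    env = np.zeros_like(t)
    inside = phase < duty
    env[inside] = 0.5 * (1 - np.cos(2 * np.pi * phase[inside] / duty))
    return env

syllable_rate = 4.0                          # ~4 Hz, a typical syllable rate
stim_sinusoidal = carrier * sinusoidal_am(t, syllable_rate)
stim_pulsatile = carrier * pulsatile_am(t, syllable_rate)
```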


Subject(s)
Dyslexia , Speech Perception , Humans , Child , Child, Preschool , Longitudinal Studies , Acoustic Stimulation/methods , Reading , Electroencephalography
2.
J Assoc Res Otolaryngol ; 23(4): 491-512, 2022 08.
Article in English | MEDLINE | ID: mdl-35668206

ABSTRACT

Cochlear implant (CI) users show limited sensitivity to the temporal pitch conveyed by electric stimulation, contributing to impaired perception of music and of speech in noise. Neurophysiological studies in cats suggest that this limitation is due, in part, to poor transmission of the temporal fine structure (TFS) by the brainstem pathways that are activated by electrical cochlear stimulation. It remains unknown, however, how that neural limit might influence perception in the same animal model. For that reason, we developed non-invasive psychophysical and electrophysiological measures of temporal (i.e., non-spectral) pitch processing in the cat. Normal-hearing (NH) cats were presented with acoustic pulse trains consisting of band-limited harmonic complexes that simulated CI stimulation of the basal cochlea while removing cochlear place-of-excitation cues. In the psychophysical procedure, trained cats detected changes from a base pulse rate to a higher pulse rate. In the scalp-recording procedure, the cortical-evoked acoustic change complex (ACC) and brainstem-generated frequency following response (FFR) were recorded simultaneously in sedated cats for pulse trains that alternated between the base and higher rates. The range of perceptual sensitivity to temporal pitch broadly resembled that of humans but was shifted to somewhat higher rates. The ACC largely paralleled these perceptual patterns, validating its use as an objective measure of temporal pitch sensitivity. The phase-locked FFR, in contrast, showed strong brainstem encoding for all tested pulse rates. These measures demonstrate the cat's perceptual sensitivity to pitch in the absence of cochlear-place cues and may be valuable for evaluating neural mechanisms of temporal pitch perception in the feline animal model of stimulation by a CI or novel auditory prostheses.
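A band-limited harmonic complex that conveys temporal pitch without a resolved place cue can be built by summing cosine-phase harmonics of the pulse rate within a fixed passband. The sketch below is a generic illustration under assumed parameters (2-6 kHz passband, 300 and 360 pps rates); it is not the stimulus generation used in the study.

```python
import numpy as np

fs = 48000
t = np.arange(int(fs * 0.5)) / fs            # 0.5 s pulse train

def bandlimited_pulse_train(f0, lo=2000.0, hi=6000.0):
    """Sum of cosine-phase harmonics of f0 restricted to [lo, hi] Hz.
    Cosine phase makes the waveform pulsatile, so the pulse rate (temporal
    pitch) is conveyed without a resolved component at f0 itself."""
    harmonics = np.arange(np.ceil(lo / f0), np.floor(hi / f0) + 1) * f0
    x = sum(np.cos(2 * np.pi * f * t) for f in harmonics)
    return x / np.max(np.abs(x))

base = bandlimited_pulse_train(f0=300.0)     # base pulse rate (assumed)
target = bandlimited_pulse_train(f0=360.0)   # higher rate to be detected (assumed)
```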


Subject(s)
Cochlear Implantation , Cochlear Implants , Acoustic Stimulation/methods , Animals , Cats , Humans , Pitch Perception/physiology , Psychophysics , Scalp
3.
Eur J Neurosci ; 53(11): 3688-3709, 2021 06.
Article in English | MEDLINE | ID: mdl-33811405

ABSTRACT

Different approaches have been used to extract auditory steady-state responses (ASSRs) from electroencephalography (EEG) recordings, including region-related electrode configurations (electrode level) and the manual placement of equivalent current dipoles (source level). Inherent limitations of these approaches are the assumption of the anatomical origin and the omission of activity generated by secondary sources. Data-driven methods such as independent component analysis (ICA) seem to avoid these limitations, only to face new ones, such as the presence of ASSRs with similar properties in different components and the manual protocol used to select and classify the most relevant components carrying ASSRs. We propose the novel approach of applying a spatial filter to these components in order to extract the most relevant information. We aimed to develop a method based on reproducibility across trials that performs reliably in low signal-to-noise ratio (SNR) scenarios, using denoising source separation (DSS). DSS combined with ICA successfully reduced the number of components and extracted the most relevant ASSR at 4, 10 and 20 Hz stimulation in group- and individual-level analyses of adolescent EEG data. For these low stimulation frequencies, the estimated anatomical sources lay in cortical areas with relatively small dispersion. However, for 40 and 80 Hz, results with regard to the number of components and the anatomical origin were less clear. At all stimulation frequencies, the outcome measures were consistent with the literature, and the partial rejection of inter-subject variability led to more accurate results and higher SNRs. These findings are promising for future applications in group comparisons involving pathologies.
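Reproducibility-biased DSS can be expressed as a generalized eigenvalue problem between the covariance of the raw data and the covariance of the trial average. The following is a minimal, generic sketch of that step, assuming epoched data shaped (trials, channels, samples); it is not the authors' pipeline, which applies the spatial filter to ICA components rather than raw channels.

```python
import numpy as np
from scipy.linalg import eigh

def dss_reproducibility(epochs):
    """Return spatial filters ordered by trial-to-trial reproducibility.
    epochs: array (n_trials, n_channels, n_samples)."""
    n_trials, n_ch, n_samp = epochs.shape
    X = epochs.transpose(1, 0, 2).reshape(n_ch, -1)   # channels x (trials*samples)
    c0 = X @ X.T / X.shape[1]                          # covariance of the raw data
    avg = epochs.mean(axis=0)                          # trial-averaged (evoked) data
    c1 = avg @ avg.T / n_samp                          # covariance of the average
    evals, evecs = eigh(c1, c0)                        # generalized eigenproblem
    order = np.argsort(evals)[::-1]                    # most reproducible first
    return evecs[:, order]

# usage: project epochs onto the first filter to emphasize the phase-locked ASSR
# filters = dss_reproducibility(epochs)
# assr_component = np.tensordot(filters[:, 0], epochs, axes=(0, 1))
```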


Subject(s)
Brain , Electroencephalography , Acoustic Stimulation , Adolescent , Evoked Potentials, Auditory , Humans , Reproducibility of Results , Signal-To-Noise Ratio
4.
Hear Res ; 393: 107993, 2020 08.
Article in English | MEDLINE | ID: mdl-32535277

ABSTRACT

Envelope following responses (EFRs) can be evoked by a wide range of auditory stimuli, but for many stimulus parameters the effect on EFR strength is not fully understood. This complicates the comparison of earlier studies and the design of new studies. Furthermore, the optimal stimulus parameters are unknown. To help resolve this issue, we investigated the effects of four important stimulus parameters and their interactions on the EFR. Responses were measured in 16 normal-hearing subjects, evoked by stimuli with four levels of stimulus complexity (amplitude-modulated noise, artificial vowels, natural vowels and vowel-consonant-vowel combinations), three fundamental frequencies (105 Hz, 185 Hz and 245 Hz), three fundamental frequency contours (upward sweeping, downward sweeping and flat) and three vowel identities (Flemish /a:/, /u:/, and /i:/). We found that EFRs evoked by artificial vowels were on average 4-6 dB SNR larger than responses evoked by the other stimulus complexities, probably because of (unnaturally) strong higher harmonics. Moreover, response amplitude decreased with fundamental frequency, but response SNR remained largely unaffected. Thirdly, fundamental frequency variation within the stimulus did not impact EFR strength, provided the rate of change remained low (which was not the case for the sweeping natural vowels). Finally, the vowel /i:/ appeared to evoke larger response amplitudes than /a:/ and /u:/, but statistical power was too low to confirm this. Vowel-dependent differences in response strength have been suggested to stem from destructive interference between response components. We show how a model of the auditory periphery can simulate these interference patterns and predict response strength. Altogether, the results of this study can guide stimulus choice for future EFR research and practical applications.
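Response strength is reported as an SNR in dB. One common way to estimate it is to compare the FFT power at the fundamental frequency with the mean power of neighbouring noise bins; the sketch below shows that idea for a single-channel epoch. The number of noise bins is an assumption, not the paper's exact analysis.

```python
import numpy as np

def efr_snr_db(x, fs, f0, n_noise_bins=10):
    """SNR (dB) of the spectral component at f0 relative to the mean power
    of neighbouring noise bins. x: single-channel EEG epoch, fs in Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2                 # power spectrum
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))             # bin closest to f0
    noise = np.r_[spec[k - n_noise_bins:k], spec[k + 1:k + 1 + n_noise_bins]]
    return 10 * np.log10(spec[k] / noise.mean())
```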


Subject(s)
Noise , Speech Perception , Speech , Acoustic Stimulation , Hearing Tests , Humans , Noise/adverse effects
5.
Int J Audiol ; 59(5): 341-347, 2020 05.
Article in English | MEDLINE | ID: mdl-31860369

ABSTRACT

Objective: Subjects implanted with a Direct Acoustic Cochlear Implant (DACI) show improvements in their bone conduction (BC) thresholds after surgery. We hypothesised that a new pathway for BC sound is created via the DACI. The aim of this study was to investigate the contribution of this pathway to the cochlear response via measurements of the promontory and round window membrane (RWM) velocities while stimulating with a conventional bone conductor. Design: This was a cadaver head study with a repeated measures design. Study Sample: Eight ears of five fresh-frozen cadaveric whole heads were investigated. Results: After DACI implantation, the promontory and RWM velocities did not change significantly in the frequency range 0.5-2 kHz when the DACI was switched off. Conclusions: No significant changes in the relative vibration magnitude of the RWM after DACI implantation were observed. The improvements in BC thresholds seen in patients implanted with a DACI very likely originate in the changed impedance at the oval window after DACI surgery, leading to a more efficient contribution from the inner ear components to BC sound.


Subject(s)
Bone Conduction/physiology , Cochlear Implants , Round Window, Ear/physiopathology , Acoustic Stimulation , Aged , Aged, 80 and over , Auditory Threshold/physiology , Cadaver , Cochlear Implantation , Ear, Inner/physiopathology , Female , Humans , Male , Postoperative Period , Round Window, Ear/surgery , Vibration
6.
Hear Res ; 380: 22-34, 2019 09 01.
Article in English | MEDLINE | ID: mdl-31170624

ABSTRACT

Auditory steady-state responses (ASSRs) are auditory evoked potentials that reflect phase-locked neural activity to periodic stimuli. ASSRs are often evoked by tones with a modulated envelope, with sinusoidal envelopes being most common. However, it is unclear if and how the shape of the envelope affects ASSR responses. In this study, we used various trapezoidal modulated tones to evoke ASSRs (modulation frequency = 40 Hz) and studied the effect of four envelope parameters: attack time, hold time, decay time and off time. ASSR measurements in 20 normal-hearing subjects showed that envelope shape significantly influenced responses: increased off time and/or increased decay time led to responses with a larger signal-to-noise ratio (SNR). Response phase delay was significantly influenced by attack time and to a lesser degree by off time. We also simulated neural population responses that approximate ASSRs with a model of the auditory periphery (Bruce et al., 2018). The modulation depth of the simulated responses, i.e., the difference between maximum and minimum firing rate, correlated highly with the response SNRs found in the ASSR measurements. Longer decay time and off time enhanced the modulation depth both by decreasing the minimum firing rate and by increasing the maximum firing rate. In conclusion, custom envelopes with long decay and off times provide larger response SNRs; the benefit over the commonly used sinusoidal envelope was in the range of several dB.
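The four parameters define one cycle of the modulation envelope. A minimal sketch of how such a trapezoidal cycle could be assembled and repeated is shown below; the linear ramps and the 5/5/10/5 ms split of a 40 Hz cycle are illustrative choices, not the envelopes tested in the study.

```python
import numpy as np

def trapezoidal_envelope(attack, hold, decay, off, fs=32000):
    """One modulation cycle; cycle length = attack + hold + decay + off (s).
    Repeat the cycle at the modulation frequency to build the full envelope."""
    a = np.linspace(0.0, 1.0, int(attack * fs), endpoint=False)  # linear rise
    h = np.ones(int(hold * fs))                                  # plateau
    d = np.linspace(1.0, 0.0, int(decay * fs), endpoint=False)   # linear fall
    o = np.zeros(int(off * fs))                                  # silent portion
    return np.concatenate([a, h, d, o])

# e.g., a 40 Hz cycle (25 ms) split into 5 ms attack, 5 ms hold, 10 ms decay, 5 ms off
cycle = trapezoidal_envelope(0.005, 0.005, 0.010, 0.005)
envelope = np.tile(cycle, 40)        # 1 s of 40 Hz trapezoidal modulation
```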


Subject(s)
Acoustic Stimulation , Audiometry, Pure-Tone , Auditory Cortex/physiology , Electroencephalography , Evoked Potentials, Auditory , Adolescent , Adult , Auditory Pathways/physiology , Female , Humans , Male , Reaction Time , Time Factors , Young Adult
7.
Cortex ; 113: 128-140, 2019 04.
Article in English | MEDLINE | ID: mdl-30640141

ABSTRACT

In recent studies, phonological deficits in dyslexia have been related to a deficit in the synchronization of neural oscillations to the dynamics of the speech envelope. The speech envelope is characterized by the temporal features of both amplitude modulations and rise times. Previous studies have shown that the dyslexic brain follows the different amplitude modulations in speech inefficiently. However, it remains to be investigated how the envelope's rise time mediates this neural processing. In this study, we examined neural synchronization in students with and without dyslexia using auditory steady-state responses at theta, alpha, beta and low-gamma range oscillations (i.e., 4, 10, 20 and 40 Hz) to stimuli with different envelope rise times. Our results revealed reduced neural synchronization in the alpha, beta and low-gamma frequency ranges in dyslexia. Moreover, atypical neural synchronization was modulated by rise time for alpha and beta oscillations, showing that deficits found at 10 and 20 Hz were only evident when the envelope's rise time was significantly shortened. This impaired tracking of rise time cues may very well lead to the speech and phonological processing difficulties observed in dyslexia.


Subject(s)
Auditory Perception/physiology , Brain Waves/physiology , Brain/physiopathology , Dyslexia/physiopathology , Neurons/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Cortex/physiopathology , Electroencephalography , Female , Humans , Male , Speech Perception/physiology , Time Factors , Young Adult
8.
Trends Hear ; 23: 2331216519866566, 2019.
Article in English | MEDLINE | ID: mdl-32516059

ABSTRACT

The Sound Ear Check (SEC), a language-independent automated tablet-based self-test based on masked recognition of ecological sounds, was developed. In this test, 24 trials of eight different sounds are randomly presented in noise spectrally shaped according to the average frequency spectra of the stimulus sounds, using a 1-up 2-down adaptive procedure. The test was evaluated in adults with normal hearing and hearing loss, and its feasibility was investigated in young children, who are the target population of this test. Following equalization of perceptual difficulty across sounds by applying level adjustments to the individual tokens, a reference curve with a steep slope of 18%/dB was obtained, resulting in a test with a high test-retest reliability of 1 dB. The SEC sound reception threshold was significantly associated with the averaged pure tone threshold (r = .70), as well as with the speech reception threshold for the Digit Triplet Test (r = .79), indicating that the SEC is sensitive to both audibility and signal-to-noise ratio loss. Sensitivity and specificity values of approximately 70% and 80% to detect individuals with mild and moderate hearing loss, respectively, and of approximately 80% to detect individuals with slight speech-in-noise recognition difficulties were obtained. Homogeneity among sounds was verified in children. Psychometric functions fitted to the data indicated a steep slope of 16%/dB, and test-retest reliability of sound reception threshold estimates was 1.3 dB. A reference value of -9 dB signal-to-noise ratio was obtained. Test duration was around 6 minutes, including training and acclimatization.
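A 1-up 2-down staircase converges on the 70.7% correct point of the psychometric function. The sketch below shows the generic rule; the step size, starting SNR, and stopping criterion are assumptions rather than the SEC's actual settings.

```python
def one_up_two_down(run_trial, start_snr=0.0, step=2.0, n_reversals=8):
    """Generic 1-up 2-down adaptive track.
    run_trial(snr) -> True if the listener answered correctly at that SNR.
    Returns the SNRs at which the track reversed direction."""
    snr, correct_in_row, direction = start_snr, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        if run_trial(snr):
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> harder (down)
                correct_in_row = 0
                if direction == 'up':
                    reversals.append(snr)
                direction, snr = 'down', snr - step
        else:                                  # one error -> easier (up)
            correct_in_row = 0
            if direction == 'down':
                reversals.append(snr)
            direction, snr = 'up', snr + step
    return reversals

# threshold estimate: mean of the last few reversal SNRs
```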


Subject(s)
Hearing Loss/diagnosis , Hearing , Noise , Recognition, Psychology , Speech Reception Threshold Test/methods , Acoustic Stimulation/methods , Adolescent , Adult , Aged , Female , Humans , Male , Middle Aged , Reference Values , Reproducibility of Results , Sensitivity and Specificity , Signal-To-Noise Ratio , Speech Perception , Young Adult
9.
Hear Res ; 371: 11-18, 2019 01.
Article in English | MEDLINE | ID: mdl-30439570

ABSTRACT

The understanding of speech in noise relies (at least partially) on spectrotemporal modulation sensitivity. This sensitivity can be measured by spectral ripple tests, which can be administered at different presentation levels. However, it is not known how presentation level affects spectrotemporal modulation thresholds. In this work, we present behavioral data for normal-hearing adults which show that at higher ripple densities (2 and 4 ripples/oct), increasing presentation level led to worse discrimination thresholds. Results of a computational model suggested that the higher thresholds could be explained by a worsening of the spectrotemporal representation in the auditory nerve due to broadening of cochlear filters and neural activity saturation. Our results demonstrate the importance of taking presentation level into account when administering spectrotemporal modulation detection tests.
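Spectral ripple stimuli impose a sinusoidal modulation across log-frequency, optionally drifting over time. The sketch below shows one generic way to synthesize such a stimulus from many log-spaced tones; the component count, passband, ripple depth, and velocity are illustrative assumptions, not the stimuli used in this study.

```python
import numpy as np

def ripple_noise(dur=0.5, fs=44100, density=2.0, velocity=4.0, depth=0.9,
                 f_lo=250.0, f_hi=8000.0, n_comp=400):
    """Sum of log-spaced tones whose amplitudes follow a sinusoid across
    log-frequency (density, ripples/oct) drifting over time (velocity, Hz)."""
    t = np.arange(int(dur * fs)) / fs
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_comp)
    octs = np.log2(freqs / f_lo)                   # component position in octaves
    rng = np.random.default_rng(0)
    x = np.zeros_like(t)
    for f, o in zip(freqs, octs):
        amp = 1.0 + depth * np.sin(2 * np.pi * (density * o + velocity * t))
        x += amp * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return x / np.max(np.abs(x))

stimulus = ripple_noise(density=2.0)    # 2 ripples/oct test condition (assumed)
```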


Subject(s)
Speech Perception/physiology , Acoustic Stimulation , Adult , Auditory Threshold/physiology , Cochlear Nerve/physiology , Female , Humans , Male , Models, Neurological , Models, Psychological , Speech Acoustics , Speech Discrimination Tests/methods , Speech Discrimination Tests/statistics & numerical data , Young Adult
10.
Trends Hear ; 22: 2331216518805363, 2018.
Article in English | MEDLINE | ID: mdl-30334496

ABSTRACT

In Part I, we investigated 40-Hz auditory steady-state response (ASSR) amplitudes for use in objective loudness balancing across the ears in normal-hearing participants and found median across-ear ratios in ASSR amplitudes close to 1. In this part, we further investigated whether the ASSR can be used to estimate binaural loudness balance for listeners with asymmetric hearing, for whom binaural loudness balancing is of particular interest. We tested participants with asymmetric hearing and participants with bimodal hearing, who hear with electrical stimulation through a cochlear implant (CI) in one ear and with acoustical stimulation in the other ear. Behavioral loudness balancing was performed at different percentages of the dynamic range. Acoustical carrier frequencies were 500, 1000, or 2000 Hz, and CI channels were stimulated in apical or middle regions of the cochlea. For both groups, the ASSR amplitudes at balanced loudness levels were similar for the two ears, with median ratios between left and right ear stimulation close to 1. However, individual variability was observed. For participants with asymmetric hearing loss, the difference between the behavioral balanced levels and the ASSR-predicted balanced levels was smaller than 10 dB in 50% and 56% of cases, for 500 Hz and 2000 Hz, respectively. For bimodal listeners, these percentages were 89% and 60%. Apical CI channels yielded significantly better results (median difference near 0 dB) than middle CI channels, which had a median difference of -7.25 dB.


Subject(s)
Auditory Threshold/physiology , Cochlear Implantation/methods , Hearing Aids/statistics & numerical data , Hearing Loss/diagnosis , Hearing Loss/surgery , Acoustic Stimulation/methods , Adult , Aged , Audiometry/methods , Auditory Cortex/diagnostic imaging , Cohort Studies , Electroencephalography/methods , Female , Follow-Up Studies , Hearing Loss/rehabilitation , Humans , Male , Middle Aged , Otoscopy/methods , Prospective Studies , Treatment Outcome , Young Adult
11.
Trends Hear ; 22: 2331216518805352, 2018.
Article in English | MEDLINE | ID: mdl-30334493

ABSTRACT

Psychophysical procedures are used to balance loudness across the ears. However, they can be difficult and require active cooperation. We investigated whether 40-Hz auditory steady-state response (ASSR) amplitudes can be used to objectively estimate the balanced loudness across the ears for a group of young, normal-hearing participants. The 40-Hz ASSRs were recorded using monaural stimuli with carrier frequencies of 500, 1000, or 2000 Hz over a range of levels between 40 and 80 dB SPL. Behavioral loudness balancing was performed for at least one reference level of the left ear. ASSR amplitude growth functions were listener dependent, but median across-ear ratios in ASSR amplitudes were close to 1. The differences between the ASSR-predicted balanced levels and the behaviorally found balanced levels were smaller than 5 dB in 59% of cases and smaller than 10 dB in 85% of cases. The differences between the ASSR-predicted balanced levels and the reference levels were smaller than 5 dB in 54% of cases and smaller than 10 dB in 87% of cases. No clear hemispheric lateralization was found for 40-Hz ASSRs, with the exception of responses evoked by stimulus levels of 40 to 60 dB SPL at 2000 Hz.
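An ASSR-predicted balanced level can be read off by matching amplitudes on the two ears' growth functions. The sketch below shows that matching step with linear interpolation on made-up growth-function values; the numbers are purely illustrative, not data from the study.

```python
import numpy as np

levels = np.array([40, 50, 60, 70, 80])                # stimulus levels (dB SPL)
amp_left = np.array([0.10, 0.18, 0.30, 0.45, 0.62])    # left-ear ASSR amplitudes (µV, made up)
amp_right = np.array([0.08, 0.15, 0.27, 0.41, 0.60])   # right-ear ASSR amplitudes (µV, made up)

def predicted_balanced_level(ref_level_left):
    """Right-ear level whose ASSR amplitude matches the left ear at the reference level."""
    target_amp = np.interp(ref_level_left, levels, amp_left)   # amplitude at the reference
    return np.interp(target_amp, amp_right, levels)            # level giving the same amplitude

print(predicted_balanced_level(60))    # ~62 dB SPL for these illustrative values
```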


Subject(s)
Acoustic Stimulation/methods , Adaptation, Physiological , Auditory Perception/physiology , Hearing/physiology , Noise , Electroencephalography/methods , Female , Healthy Volunteers , Humans , Male , Reference Values , Sampling Studies , Statistics, Nonparametric , Young Adult
12.
Hear Res ; 370: 217-231, 2018 12.
Article in English | MEDLINE | ID: mdl-30213516

ABSTRACT

Acoustic hearing implants, such as direct acoustic cochlear implants (DACIs), can be used to treat profound mixed hearing loss. Electrophysiological responses in DACI subjects are of interest to confirm auditory processing intra-operatively and to assist DACI fitting postoperatively. We present two related studies, focusing on DACI artifacts and on electrophysiological measurements in DACI subjects, respectively. In the first study, we aimed to characterize DACI artifacts in order to study the feasibility of measuring frequency-specific electrophysiological responses in DACI subjects. Measurements of DACI artifacts were collected in a cadaveric head to disentangle possible DACI artifact sources and compared to a constructed DACI artifact template. It is shown that for moderate stimulation levels, DACI artifacts are dominated by the artifact from the radio frequency (RF) communication signal, which can be modeled if the RF encoding protocol is known. In the second study, we investigated the feasibility of measuring intra-operative responses in DACI subjects without applying the RF artifact models. Auditory steady-state and brainstem responses were measured intra-operatively in three DACI subjects, immediately after implantation, to confirm proper DACI functioning and coupling to the inner ear. Intra-operative responses could be measured in two of the three tested subjects. The absence of intra-operative responses in the third subject can possibly be explained by the degree of hearing loss, attenuation of intra-operative responses, the difference between electrophysiological and behavioral thresholds, or a temporary threshold shift due to the DACI surgery. In conclusion, RF artifacts can be modeled, so that electrophysiological responses to frequency-specific stimuli could possibly be measured in DACI subjects, and intra-operative responses can be obtained in DACI subjects.


Subject(s)
Auditory Perception , Brain Stem/physiopathology , Cochlear Implantation/instrumentation , Cochlear Implants , Evoked Potentials, Auditory, Brain Stem , Hearing Loss/rehabilitation , Persons With Hearing Impairments/rehabilitation , Acoustic Stimulation , Aged , Artifacts , Cadaver , Electric Stimulation , Feasibility Studies , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Hearing Loss/psychology , Humans , Intraoperative Neurophysiological Monitoring , Middle Aged , Persons With Hearing Impairments/psychology , Predictive Value of Tests , Prosthesis Fitting , Reproducibility of Results
13.
Int J Audiol ; 57(12): 908-916, 2018 12.
Article in English | MEDLINE | ID: mdl-30261770

ABSTRACT

The speech intelligibility benefit of visual speech cues during oral communication is well established. Therefore, an ecologically valid approach to auditory assessment should include the processing of both auditory and visual speech cues. This study describes the development and evaluation of a virtual human speaker designed to present speech auditory-visually. Male and female virtual human speakers were created and evaluated in two experiments: a visual-only speech reading test of words and sentences and an auditory-visual speech intelligibility sentence test. A group of five hearing, skilled speech reading adults participated in the speech reading test, whereas a group of young normal-hearing participants (N = 35) was recruited for the intelligibility test. Skilled speech readers correctly identified 57 to 67% of the words and sentences uttered by the virtual speakers. The presence of the virtual speaker improved the speech intelligibility of sentences in noise by 1.5 to 2 dB. These results demonstrate the potential applicability of virtual humans in future auditory-visual speech assessment paradigms.


Subject(s)
Speech Acoustics , Speech Intelligibility , Speech Perception , Virtual Reality , Voice Quality , Acoustic Stimulation , Adult , Cues , Female , Humans , Male , Mouth/physiology , Movement , Photic Stimulation , Speech Reception Threshold Test , Visual Perception , Young Adult
14.
Hear Res ; 370: 189-200, 2018 12.
Article in English | MEDLINE | ID: mdl-30131201

ABSTRACT

Peripheral hearing impairment cannot fully account for the speech perception difficulties that emerge with advancing age. As the fluctuating speech envelope bears crucial information for speech perception, changes in temporal envelope processing are thought to contribute to degraded speech perception. Previous research has demonstrated changes in neural encoding of envelope modulations throughout the adult lifespan, either due to age or due to hearing impairment. To date, however, it remains unclear whether such age- and hearing-related neural changes are associated with impaired speech perception. In the present study, we investigated the potential relationship between perception of speech in different types of masking sounds and neural envelope encoding in a normal-hearing and a hearing-impaired adult population including young (20-30 years), middle-aged (50-60 years), and older (70-80 years) people. Our analyses show that enhanced neural envelope encoding is related to worse speech perception: in the cortex for normal-hearing adults and in the brainstem for hearing-impaired adults. This neural-behavioral correlation is found for the three age groups and appears to be independent of the type of masking noise, i.e., background noise or competing speech. These findings provide promising directions for future research aiming to develop advanced rehabilitation strategies for the speech perception difficulties that emerge throughout adult life.


Subject(s)
Auditory Cortex/physiopathology , Brain Stem/physiopathology , Evoked Potentials, Auditory , Hearing Loss/psychology , Persons With Hearing Impairments/psychology , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Aged, 80 and over , Case-Control Studies , Electroencephalography , Evoked Potentials, Auditory, Brain Stem , Female , Hearing , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Humans , Male , Middle Aged , Noise/adverse effects , Perceptual Masking , Time Factors , Young Adult
15.
J Neural Eng ; 15(1): 016006, 2018 02.
Article in English | MEDLINE | ID: mdl-29211684

ABSTRACT

OBJECTIVE: Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and for follow-up of auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. APPROACH: ICA has often been used to remove CI artifacts from the EEG when recording transient auditory responses, such as cortical auditory evoked potentials. Independent components (ICs) corresponding to CI artifacts are then often identified manually. In this study, an ICA-based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. MAIN RESULTS: For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low-level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. SIGNIFICANCE: We hypothesize that ICA is capable of separating CI artifacts and the EASSR when the contralateral hemisphere is EASSR-dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
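Automatic identification of artifactual ICs "based on their spectrum" could, for example, flag components whose power concentrates at stimulation-related frequencies. The sketch below is one such heuristic under assumed settings (frequency list, ±1 bin window, 50% dominance criterion); the paper's actual rule is not given in the abstract.

```python
import numpy as np

def artifactual_ics(ic_signals, fs, artifact_freqs, ratio=0.5):
    """ic_signals: (n_components, n_samples) IC time courses.
    Flags ICs whose power at the listed artifact frequencies (+/- 1 bin)
    exceeds `ratio` of their total power."""
    n = ic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    power = np.abs(np.fft.rfft(ic_signals, axis=1)) ** 2
    flagged = []
    for i, p in enumerate(power):
        peak = 0.0
        for f in artifact_freqs:
            k = int(np.argmin(np.abs(freqs - f)))      # bin closest to f
            peak += p[max(k - 1, 0):k + 2].sum()       # power in a 3-bin window
        if peak / p.sum() > ratio:
            flagged.append(i)
    return flagged
```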


Subject(s)
Acoustic Stimulation/methods , Artifacts , Cochlear Implants , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Principal Component Analysis/methods , Cochlear Implantation/methods , Cochlear Implantation/standards , Cochlear Implants/standards , Databases, Factual , Electric Stimulation/methods , Humans
16.
Ear Hear ; 39(2): 260-268, 2018.
Article in English | MEDLINE | ID: mdl-28857787

ABSTRACT

OBJECTIVES: Auditory steady state responses (ASSRs) are used in clinical practice for objective hearing assessments. The response is called steady state because it is assumed to be stable over time, and because it is evoked by a stimulus with a certain periodicity, which will lead to discrete frequency components that are stable in amplitude and phase over time. However, the stimuli commonly used to evoke ASSRs are also known to be able to induce loudness adaptation behaviorally. Researchers and clinicians using ASSRs assume that the response remains stable over time. This study investigates (1) the stability of ASSR amplitudes over time, within one recording, and (2) whether loudness adaptation can be reflected in ASSRs. DESIGN: ASSRs were measured from 14 normal-hearing participants. The ASSRs were evoked by the stimuli that caused the most loudness adaptation in a previous behavioral study, that is, mixed-modulated sinusoids with carrier frequencies of either 500 or 2000 Hz, a modulation frequency of 40 Hz, and a low sensation level of 30 dB SL. For each carrier frequency and participant, 40 repetitions of 92 sec recordings were made. Two types of analyses were used to investigate the ASSR amplitudes over time: with the more traditionally used Fast Fourier Transform and with a novel Kalman filtering approach. Robust correlations between the ASSR amplitudes and behavioral loudness adaptation ratings were also calculated. RESULTS: Overall, ASSR amplitudes were stable. Over all individual recordings, the median change of the amplitudes over time was -0.0001 µV/s. Based on group analysis, a significant but very weak decrease in amplitude over time was found, with the decrease in amplitude over time around -0.0002 µV/s. Correlation coefficients between ASSR amplitudes and behavioral loudness adaptation ratings were significant but low to moderate, with r = 0.27 and r = 0.39 for the 500 and 2000 Hz carrier frequency, respectively. CONCLUSIONS: The decrease in amplitude of ASSRs over time (92 sec) is small. Consequently, it is safe to use ASSRs in clinical practice, and additional correction factors for objective hearing assessments are not needed. Because only small decreases in amplitudes were found, loudness adaptation is probably not reflected by the ASSRs.
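Amplitude stability over time can be examined by estimating the ASSR amplitude in consecutive epochs with an FFT and fitting a line to the resulting series. The sketch below shows that idea; the 1 s epoch length and the least-squares drift estimate are assumptions, not the paper's exact FFT or Kalman analyses.

```python
import numpy as np

def assr_amplitude_per_epoch(eeg, fs, fm=40.0, epoch_s=1.0):
    """Split a single-channel recording into epochs of epoch_s seconds and
    return the single-sided FFT amplitude at the bin corresponding to fm."""
    n = int(epoch_s * fs)
    epochs = eeg[: len(eeg) // n * n].reshape(-1, n)
    spec = np.abs(np.fft.rfft(epochs, axis=1)) * 2 / n   # single-sided amplitude
    k = int(round(fm * epoch_s))                         # frequency bin of fm
    return spec[:, k]

# drift estimate over the recording (amplitude change per epoch of epoch_s seconds):
# amps = assr_amplitude_per_epoch(eeg, fs)
# slope = np.polyfit(np.arange(len(amps)), amps, 1)[0]
```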


Subject(s)
Auditory Perception/physiology , Hearing Tests/methods , Hearing/physiology , Acoustic Stimulation , Auditory Threshold , Electroencephalography , Female , Humans , Male , Reference Values , Young Adult
17.
Int J Audiol ; 56(7): 464-471, 2017 07.
Article in English | MEDLINE | ID: mdl-28635497

ABSTRACT

OBJECTIVE: Binaural processing can be measured objectively as a desynchronisation of phase-locked neural activity to changes in interaural phase differences (IPDs). This was reported in a magnetoencephalography study for 40 Hz amplitude modulated tones. The goal of this study was to measure this desynchronisation using electroencephalography and explore the outcomes for different modulation frequencies. DESIGN: Auditory steady-state responses (ASSRs) were recorded to pure tones, amplitude modulated at 20, 40 or 80 Hz. IPDs switched between 0 and 180° at fixed time intervals. STUDY SAMPLE: Sixteen young listeners with bilateral normal hearing thresholds (≤25 dB HL at 125-8000 Hz) participated in this study. RESULTS: Significant ASSR phase desynchronisations to IPD changes were detected in 14 out of 16 participants for the 40 Hz modulator, and in 8 and 9 out of 13 participants for the 20 and 80 Hz modulators, respectively. Desynchronisation and restoration of ASSR phase took place significantly faster for 80 Hz than for 40 and 20 Hz. CONCLUSIONS: ASSR desynchronisation to IPD changes was successfully recorded using electroencephalography. It was feasible for 20, 40 and 80 Hz modulators and could be an objective tool to assess processing of changes in binaural information.


Subject(s)
Auditory Cortex/physiology , Cortical Synchronization , Cues , Hearing , Sound Localization , Acoustic Stimulation , Adolescent , Adult , Audiometry, Pure-Tone , Auditory Threshold , Evoked Potentials, Auditory , Female , Humans , Magnetoencephalography , Male , Time Factors , Young Adult
18.
Neuroimage ; 148: 240-253, 2017 03 01.
Article in English | MEDLINE | ID: mdl-28110090

ABSTRACT

Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency amplitude-modulated acoustic signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered across all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noise. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem and the left and right auditory cortex show a higher responsiveness to 40 Hz than to the other modulation frequencies.
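Probabilistic clustering of equivalent-dipole locations with a Gaussian mixture model can be done directly with scikit-learn. The sketch below uses placeholder dipole coordinates and an assumed number of clusters; it only illustrates the clustering step, not the authors' full source-reconstruction pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# dipole_xyz: (n_dipoles, 3) fitted dipole positions across participants (mm).
# Placeholder random data is used here purely so the sketch runs.
rng = np.random.default_rng(1)
dipole_xyz = rng.normal(size=(60, 3)) * 10 + np.array([0.0, -30.0, 10.0])

gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0)
labels = gmm.fit_predict(dipole_xyz)     # cluster assignment for each dipole
centers = gmm.means_                     # candidate source-region centroids (mm)
```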


Subject(s)
Acoustic Stimulation , Auditory Pathways/physiology , Pitch Perception/physiology , Adult , Algorithms , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Cerebral Cortex/diagnostic imaging , Cerebral Cortex/physiology , Cluster Analysis , Electroencephalography , Evoked Potentials, Auditory , Female , Functional Laterality/physiology , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Normal Distribution , Young Adult
19.
IEEE Trans Neural Syst Rehabil Eng ; 25(3): 196-204, 2017 03.
Article in English | MEDLINE | ID: mdl-27071180

ABSTRACT

Auditory steady-state responses (ASSRs) are brain responses to modulated or repetitive stimuli that can be captured in the EEG recording. ASSRs can be used as an objective measure to clinically determine frequency-specific hearing thresholds and to quantify the sensitivity of the auditory system to modulation, and they have been related to speech intelligibility. However, the detection of ASSRs is difficult due to the low signal-to-noise ratio of the responses. Moreover, minimizing measurement time is important for clinical applications. Traditionally, ASSRs are analyzed using discrete Fourier transform (DFT) based methods. We present a Kalman filter based ASSR analysis procedure and illustrate several benefits over traditional DFT based methods. We show on a data set of 320 measurements that the proposed method reaches valid amplitude estimates significantly faster than the state-of-the-art DFT method. Further, we provide two extensions to the proposed method. First, we demonstrate that information from multiple recording electrodes can be incorporated by extending the system model. Secondly, we extend the model to incorporate artifacts from cochlear implant (CI) stimulation and demonstrate that electrically evoked auditory steady-state responses (EASSRs) can be accurately measured.
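A Kalman-filter ASSR tracker can model the cosine and sine coefficients at the modulation frequency as a slowly drifting state that is updated sample by sample. The sketch below is a minimal single-channel version with illustrative noise variances; it is not the paper's implementation, which also covers multi-electrode models and CI artifacts.

```python
import numpy as np

def kalman_assr(y, fs, fm, q=1e-6, r=1.0):
    """y: single-channel EEG samples; fm: modulation frequency (Hz).
    State x = [a, b] holds the cosine/sine coefficients at fm, assumed to
    follow a random walk. Returns the amplitude estimate at every sample."""
    x = np.zeros(2)                      # state: [a (cos), b (sin)]
    P = np.eye(2)                        # state covariance
    Q = q * np.eye(2)                    # process noise (random walk)
    amps = np.empty(len(y))
    for k in range(len(y)):
        t = k / fs
        H = np.array([np.cos(2 * np.pi * fm * t), np.sin(2 * np.pi * fm * t)])
        P = P + Q                        # predict (state mean is unchanged)
        S = H @ P @ H + r                # innovation variance
        K = P @ H / S                    # Kalman gain
        x = x + K * (y[k] - H @ x)       # update with the new sample
        P = P - np.outer(K, H) @ P
        amps[k] = np.hypot(x[0], x[1])   # current ASSR amplitude estimate
    return amps
```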


Subject(s)
Acoustic Stimulation/methods , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Models, Statistical , Pitch Perception/physiology , Signal Processing, Computer-Assisted , Algorithms , Computer Simulation , Female , Humans , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
20.
Hear Res ; 344: 109-124, 2017 02.
Article in English | MEDLINE | ID: mdl-27845259

ABSTRACT

As people grow older, speech perception difficulties become highly prevalent, especially in noisy listening situations. Moreover, it is assumed that speech intelligibility is more affected by background noises that induce a higher cognitive load, i.e., noises that result in informational rather than purely energetic masking. There is ample evidence showing that speech perception problems in aging persons are partly due to hearing impairment and partly due to age-related declines in cognition and suprathreshold auditory processing. In order to develop effective rehabilitation strategies, it is indispensable to know how these different degrading factors act upon speech perception. This implies disentangling effects of hearing impairment versus age and examining the interplay between both factors in different background noises of everyday settings. To that end, we investigated open-set sentence identification in six participant groups: a young (20-30 years), middle-aged (50-60 years), and older cohort (70-80 years), each including persons who had normal audiometric thresholds up to at least 4 kHz, on the one hand, and persons who were diagnosed with elevated audiometric thresholds, on the other hand. All participants were screened for (mild) cognitive impairment. We applied stationary and amplitude modulated speech-weighted noise, which are two types of energetic maskers, and unintelligible speech, which causes informational masking in addition to energetic masking. By means of these different background noises, we could examine speech perception performance in listening situations with a low and a high cognitive load, respectively. Our results indicate that, even when audiometric thresholds are within normal limits up to 4 kHz, irrespective of threshold elevations at higher frequencies, and there is no indication of even mild cognitive impairment, masked speech perception declines by middle age and decreases further into older age. The impact of hearing impairment is as detrimental for young and middle-aged adults as it is for older adults. When the background noise becomes cognitively more demanding, there is a larger decline in speech perception due to age or hearing impairment. Hearing impairment seems to be the main factor underlying speech perception problems in background noises that cause energetic masking. However, in the event of informational masking, which induces a higher cognitive load, age appears to explain a significant part of the communicative impairment as well. We suggest that the degrading effect of age is mediated by deficiencies in temporal processing and central executive functions. This study may contribute to the improvement of auditory rehabilitation programs aiming to prevent aging persons from missing out on conversations, which, in turn, will improve their quality of life.


Subject(s)
Aging/psychology , Hearing Disorders/psychology , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Aged, 80 and over , Audiometry, Speech , Auditory Threshold , Cognitive Aging , Cognitive Dysfunction/psychology , Female , Hearing , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Humans , Male , Middle Aged , Quality of Life , Risk Factors , Young Adult