1.
bioRxiv; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39282463

ABSTRACT

Musical training has been associated with enhanced neural processing of sounds, as measured via the frequency following response (FFR), implying the potential for human subcortical neural plasticity. We conducted a large-scale multi-site preregistered study (n > 260) to replicate and extend the findings underpinning this important relationship. We failed to replicate any of the major findings published previously in smaller studies. Musical training was related neither to enhanced spectral encoding strength of a speech stimulus (/da/) in babble nor to a stronger neural-stimulus correlation. Similarly, the strength of neural tracking of a speech sound with a time-varying pitch was not related to either years of musical training or age of onset of musical training. Our findings provide no evidence for plasticity of early auditory responses based on musical training and exposure.
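To make the neural metrics above concrete, the following is a minimal sketch of a stimulus-to-response correlation of the kind used in FFR work, assuming a trial-averaged FFR waveform and the stimulus waveform at a common sampling rate; the function name, lag range, and all details are illustrative rather than the study's actual pipeline.

```python
import numpy as np

def stimulus_response_correlation(stim, resp, fs, max_lag_ms=15.0):
    """Best Pearson correlation between stimulus and averaged FFR over
    candidate neural lags (illustrative; not the published analysis)."""
    max_lag = int(fs * max_lag_ms / 1000)
    best_r, best_lag = 0.0, 0
    for lag in range(max_lag + 1):          # response lags stimulus by `lag` samples
        s = stim[: len(stim) - lag] if lag else stim
        r = resp[lag : lag + len(s)]
        n = min(len(s), len(r))
        c = np.corrcoef(s[:n], r[:n])[0, 1]
        if abs(c) > abs(best_r):
            best_r, best_lag = c, lag
    return best_r, 1000.0 * best_lag / fs   # correlation and lag in ms
```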

2.
bioRxiv; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39026860

ABSTRACT

Distortion product otoacoustic emissions (DPOAEs) and behavioral audiometry are routinely used for hearing screening and assessment. These measures provide related information about hearing status, as both are sensitive to cochlear pathologies. However, DPOAE testing is quicker and does not require a behavioral response. Despite these practical advantages, DPOAE testing is often limited to screening only low and mid-frequencies. Variation in ear canal acoustics across ears and probe placements has resulted in less reliable measurements of DPOAEs near 4 kHz and above, where standing waves commonly occur. Stimulus calibration in forward pressure level and responses in emitted pressure level can reduce measurement variability. Using these calibrations, this study assessed the correlation between audiometry and DPOAEs in the extended high frequencies, where stimulus calibrations and responses are most susceptible to the effects of standing waves. Behavioral thresholds and DPOAE amplitudes were negatively correlated, and DPOAE amplitudes in emitted pressure level accounted for twice as much variance as amplitudes in sound pressure level. Both measures were correlated with age. These data show that, with appropriate calibration methods, extended high-frequency DPOAEs are sensitive to differences in audiometric thresholds, and they highlight the need to consider calibration techniques in clinical and research applications of DPOAEs.
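As an illustration of the variance-explained comparison reported above (emitted-pressure-level amplitudes accounting for roughly twice the variance of sound-pressure-level amplitudes), here is a sketch on simulated, purely hypothetical data; only the R² comparison logic matters.

```python
import numpy as np
from scipy.stats import linregress

def variance_explained(thresholds_db, dpoae_amp_db):
    """R^2 from a linear regression of behavioral thresholds on DPOAE amplitude."""
    return linregress(dpoae_amp_db, thresholds_db).rvalue ** 2

# Hypothetical data: EPL-referenced amplitudes simulated as less noisy than SPL
rng = np.random.default_rng(0)
thr = rng.normal(20, 10, 80)                 # behavioral thresholds (dB HL)
amp_spl = -0.3 * thr + rng.normal(0, 8, 80)  # noisier SPL-referenced amplitudes
amp_epl = -0.3 * thr + rng.normal(0, 4, 80)  # cleaner EPL-referenced amplitudes
print(variance_explained(thr, amp_spl), variance_explained(thr, amp_epl))
```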

3.
Sci Rep; 14(1): 13241, 2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38853168

ABSTRACT

Cochlear implants (CIs) do not offer the same level of effectiveness in noisy environments as in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise and are ineffective against realistic, non-stationary noise such as multi-talker interference. Recent developments in deep neural network (DNN) algorithms have achieved noteworthy performance in speech enhancement and separation, especially in removing speech noise. However, more work is needed to investigate the potential of DNN algorithms for removing speech noise when tested with CI listeners. Here, we implemented two DNN algorithms that are well suited for applications in speech audio processing: (1) a recurrent neural network (RNN) and (2) SepFormer. The algorithms were trained with a customized dataset (∼30 h) and then tested with thirteen CI listeners. Both the RNN and SepFormer algorithms significantly improved CI listeners' speech intelligibility in noise without compromising the perceived quality of speech overall. These algorithms not only increased intelligibility in stationary non-speech noise but also yielded a substantial improvement in non-stationary noise, where conventional signal processing strategies fall short and offer little benefit. These results show the promise of using DNN algorithms as a solution for listening challenges in multi-talker noise interference.


Subject(s)
Algorithms, Cochlear Implants, Deep Learning, Noise, Speech Intelligibility, Humans, Female, Middle Aged, Male, Speech Perception/physiology, Aged, Adult, Neural Networks, Computer
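For orientation, here is a minimal sketch of a mask-based recurrent enhancer in the general class the abstract describes; this is not the authors' implementation, and the architecture, FFT size, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RNNEnhancer(nn.Module):
    """LSTM predicts a [0, 1] magnitude mask per time-frequency bin (sketch)."""
    def __init__(self, n_fft=512, hidden=256):
        super().__init__()
        n_bins = n_fft // 2 + 1
        self.n_fft = n_fft
        self.lstm = nn.LSTM(n_bins, hidden, num_layers=2, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_bins), nn.Sigmoid())

    def forward(self, wav):                       # wav: (batch, samples)
        win = torch.hann_window(self.n_fft, device=wav.device)
        spec = torch.stft(wav, self.n_fft, hop_length=self.n_fft // 2,
                          window=win, return_complex=True)
        feats = spec.abs().transpose(1, 2)        # (batch, frames, bins)
        h, _ = self.lstm(feats)
        mask = self.mask(h).transpose(1, 2)       # (batch, bins, frames)
        return torch.istft(spec * mask, self.n_fft, hop_length=self.n_fft // 2,
                           window=win, length=wav.shape[-1])

enhanced = RNNEnhancer()(torch.randn(1, 16000))   # untrained; shapes only
```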
4.
Behav Res Methods; 56(3): 1433-1448, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37326771

ABSTRACT

Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of limited control over the acoustics and the inability to perform audiometry to confirm the normal-hearing status of participants. Here, we outline our approach to mitigating these challenges and validate our procedures by comparing web-based measurements to lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from the prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and the co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.


Subject(s)
Auditory Perception, Hearing, Humans, Psychoacoustics, Hearing/physiology, Auditory Perception/physiology, Audiometry, Internet, Auditory Threshold/physiology, Acoustic Stimulation
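Binaural checks of the kind mentioned above typically use dichotic stimuli whose percept collapses unless each ear receives its own channel. Below is a hypothetical sketch of one such stimulus: an interaurally phase-inverted tone that partially cancels over loudspeakers but not over headphones. The frequency, duration, and task design are our assumptions, not necessarily the paper's procedure.

```python
import numpy as np

def antiphase_tone(freq=200.0, dur=1.0, fs=44100, antiphase=True):
    """Stereo tone whose right channel is phase-inverted when antiphase=True.

    Over headphones the diotic and antiphase versions sound equally loud,
    but over loudspeakers the antiphase tone partially cancels, so a
    'pick the quietest interval' task fails without headphones.
    """
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * freq * t)
    right = -left if antiphase else left
    return np.column_stack([left, right]).astype(np.float32)
```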
5.
bioRxiv; 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37790457

ABSTRACT

The auditory system is unique among sensory systems in its ability to phase lock to, and precisely follow, very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues. Here, we circumvented this limitation by leveraging individual differences across 200 participants to systematically compare variations in TFS sensitivity with performance in a range of speech perception tasks. Results suggest that robust TFS sensitivity does not confer additional masking release from pitch or spatial cues, but it appears to confer resilience against the effects of reverberation. Across conditions, we also found that greater TFS sensitivity is associated with faster response times, consistent with reduced listening effort. These findings highlight the perceptual significance of TFS coding for everyday hearing.

6.
Commun Biol; 6(1): 981, 2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37752215

ABSTRACT

The auditory system exhibits exquisite temporal coding in the periphery, which is transformed into a rate-based code in central auditory structures such as auditory cortex. However, the cortex is still able to synchronize, albeit at lower modulation rates, to acoustic fluctuations. The perceptual significance of this cortical synchronization is unknown. We estimated physiological synchronization limits of cortex (in humans with electroencephalography) and of brainstem neurons (in chinchillas) to dynamic binaural cues using a novel system-identification technique, along with parallel perceptual measurements. We find that cortex can synchronize to dynamic binaural cues up to approximately 10 Hz, which aligns well with our measured limits of perceiving dynamic spatial information and of utilizing dynamic binaural cues for spatial unmasking, i.e., measures of binaural sluggishness. We also find that the tracking limit for frequency modulation (FM) is similar to the limit for spatial tracking, demonstrating that this sluggish tracking is a more general perceptual limit that can be accounted for by cortical temporal integration limits.


Subject(s)
Auditory Cortex, Time Perception, Humans, Acoustics, Brain Stem, Cortical Synchronization
7.
Hum Brain Mapp; 44(17): 5810-5827, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37688547

ABSTRACT

Cerebellar differences have long been documented in autism spectrum disorder (ASD), yet the extent to which such differences might impact language processing in ASD remains unknown. To investigate this, we recorded brain activity with magnetoencephalography (MEG) while ASD and age-matched typically developing (TD) children passively processed spoken meaningful English and meaningless Jabberwocky sentences. Using a novel source localization approach that allows higher-resolution MEG source localization of cerebellar activity, we found that, unlike TD children, ASD children showed no difference between evoked responses to meaningful versus meaningless sentences in right cerebellar lobule VI. ASD children also had atypically weak functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and several left-hemisphere sensorimotor and language regions in later time windows. In contrast, ASD children had atypically strong functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and primary auditory cortical areas in an earlier time window. The atypical functional connectivity patterns in ASD correlated with ASD severity and with the ability to inhibit involuntary attention. These findings align with a model in which cerebro-cerebellar speech processing mechanisms in ASD are impacted by aberrant stimulus-driven attention, which could result from atypical temporal information and predictions of auditory sensory events by right cerebellar lobule VI.


Subject(s)
Autism Spectrum Disorder, Child, Humans, Autism Spectrum Disorder/diagnostic imaging, Magnetoencephalography, Cerebellum/diagnostic imaging, Magnetic Resonance Imaging, Brain Mapping
8.
J Neurosci Methods; 398: 109954, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37625650

ABSTRACT

BACKGROUND: Disabling hearing loss affects nearly 466 million people worldwide (World Health Organization). The auditory brainstem response (ABR) is the most common non-invasive clinical measure of evoked potentials, e.g., as an objective measure for universal newborn hearing screening. In research, the ABR is widely used for estimating hearing thresholds and cochlear synaptopathy in animal models of hearing loss. The ABR contains multiple waves representing neural activity across different stages of the peripheral auditory pathway, which arise within the first 10 ms after stimulus onset. Multi-channel (e.g., 32 or more channels) EEG caps provide robust measures for a wide variety of EEG applications in the study of human hearing. However, translational studies using preclinical animal models typically rely on only a few subdermal electrodes. NEW METHOD: We evaluated the feasibility of a 32-channel rodent EEG mini-cap for improving the reliability of ABR measures in chinchillas, a common model of human hearing. RESULTS: After confirming initial feasibility, a systematic experimental design tested five potential sources of variability inherent to the mini-cap methodology. We found that each source of variance minimally affected mini-cap ABR waveform morphology, thresholds, and wave-1 amplitudes. COMPARISON WITH EXISTING METHOD: The mini-cap methodology was statistically more robust and less variable than the conventional subdermal-needle methodology, most notably when analyzing ABR thresholds. Additionally, fewer repetitions were required to produce a robust ABR response when using the mini-cap. CONCLUSIONS: These results suggest the EEG mini-cap can improve translational studies of peripheral auditory evoked responses. Future work will evaluate the potential of the mini-cap to improve the reliability of more centrally evoked (e.g., cortical) EEG responses.


Subject(s)
Deafness, Hearing Loss, Animals, Infant, Newborn, Humans, Evoked Potentials, Auditory, Brain Stem/physiology, Chinchilla, Noise, Reproducibility of Results, Auditory Threshold/physiology, Hearing Loss/diagnosis, Electroencephalography, Acoustic Stimulation
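A minimal sketch of the core ABR computation referenced above: epoch the recording around stimulus onsets, baseline-correct and average, then extract a wave-1 amplitude. The latency window and baseline choices are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np

def abr_average(eeg, trigger_idx, fs, pre_ms=1.0, post_ms=10.0):
    """Average epochs of a 1-D recording around stimulus-onset sample indices."""
    pre, post = int(fs * pre_ms / 1000), int(fs * post_ms / 1000)
    epochs = np.stack([eeg[i - pre : i + post] for i in trigger_idx
                       if i >= pre and i + post <= len(eeg)])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # pre-stimulus baseline
    return epochs.mean(axis=0)[pre:]                       # time 0 = stimulus onset

def wave1_amplitude(avg, fs, window_ms=(1.0, 4.0)):
    """Peak-to-following-trough amplitude in an assumed wave-1 latency window."""
    lo, hi = (int(fs * m / 1000) for m in window_ms)
    peak = lo + int(np.argmax(avg[lo:hi]))
    trough = peak + int(np.argmin(avg[peak:hi]))
    return avg[peak] - avg[trough]
```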
9.
IEEE Trans Pattern Anal Mach Intell; 45(11): 14052-14054, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37402186

ABSTRACT

A recent paper claims that a newly proposed method classifies EEG data recorded from subjects viewing ImageNet stimuli better than two prior methods. However, the analysis used to support that claim is based on confounded data. We repeat the analysis on a large new dataset that is free from that confound. Training and testing on aggregated supertrials derived by summing trials demonstrates that the two prior methods achieve statistically significant above-chance accuracy while the newly proposed method does not.
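The supertrial aggregation described above can be sketched as follows, assuming a trials × channels × samples array with integer class labels; the group size k is an illustrative choice.

```python
import numpy as np

def make_supertrials(trials, labels, k=10, rng=None):
    """Sum groups of k randomly chosen same-class trials into supertrials.

    trials: (n_trials, n_channels, n_samples); labels: (n_trials,).
    Summing boosts class-locked signal relative to noise, so a classifier
    relying on genuine neural information should improve on supertrials.
    """
    rng = rng or np.random.default_rng()
    supers, super_labels = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        for group in np.array_split(idx, max(1, len(idx) // k)):
            supers.append(trials[group].sum(axis=0))
            super_labels.append(c)
    return np.stack(supers), np.array(super_labels)
```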

10.
Sci Rep; 13(1): 10216, 2023 Jun 23.
Article in English | MEDLINE | ID: mdl-37353552

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring the intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and with predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index the different neural processes that together support complex listening.


Subject(s)
Speech Intelligibility, Speech Perception, Speech Perception/physiology, Noise, Auditory Perception, Electroencephalography
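To illustrate the single-trial logic described above, here is a hypothetical sketch: extract trial-wise alpha and beta power and regress behavioral scores on both predictors jointly, so each band's independent contribution can be examined. All data below are random placeholders.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LinearRegression

def band_power(epochs, fs, band):
    """Mean PSD within a frequency band per trial; epochs: (n_trials, n_samples)."""
    f, psd = welch(epochs, fs=fs, nperseg=fs)          # nperseg=fs -> 1-Hz bins
    sel = (f >= band[0]) & (f <= band[1])
    return psd[:, sel].mean(axis=1)

fs = 500
epochs = np.random.randn(200, 2 * fs)                  # placeholder EEG epochs
scores = np.random.rand(200)                           # placeholder trial scores
X = np.log(np.column_stack([band_power(epochs, fs, (7, 15)),    # alpha
                            band_power(epochs, fs, (13, 30))])) # beta
print(LinearRegression().fit(X, scores).coef_)         # per-band contributions
```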
11.
J Acoust Soc Am; 153(4): 2482, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37092950

ABSTRACT

Physiological and psychoacoustic studies of the medial olivocochlear reflex (MOCR) in humans have often relied on long-duration elicitors (>100 ms). This is largely due to previous research using otoacoustic emissions (OAEs), which found multiple MOCR time constants, including time constants in the hundreds of milliseconds, when the reflex was elicited by broadband noise. However, the effect of the duration of a broadband noise elicitor on similar psychoacoustic tasks is currently unknown. The current study measured the effects of ipsilateral broadband noise elicitor duration on psychoacoustic gain reduction estimated from a forward-masking paradigm. Analysis showed that both masker type and elicitor duration were significant main effects, but no interaction was found. Gain reduction time constants were ∼46 ms for the masker-present condition and ∼78 ms for the masker-absent condition (ranging from ∼29 to 172 ms), both similar to the fast time constants reported in the OAE literature (70-100 ms). Maximum gain reduction was seen for elicitor durations of ∼200 ms. This is longer than the 50-ms duration previously found to produce maximum gain reduction with a tonal on-frequency elicitor. Future studies of gain reduction may use 150-200-ms broadband elicitors to maximally or near-maximally stimulate the MOCR.


Subject(s)
Cochlea, Otoacoustic Emissions, Spontaneous, Humans, Psychoacoustics, Cochlea/physiology, Otoacoustic Emissions, Spontaneous/physiology, Reflex/physiology, Time Factors, Acoustic Stimulation, Perceptual Masking/physiology
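The time constants quoted above come from fitting gain reduction as a function of elicitor duration. Here is a minimal sketch with made-up data points, assuming a saturating-exponential buildup (the functional form and values are our assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def gain_reduction(t_ms, g_max, tau_ms):
    """Saturating-exponential buildup of gain reduction with elicitor duration."""
    return g_max * (1 - np.exp(-t_ms / tau_ms))

dur = np.array([12.5, 25, 50, 100, 200, 400])  # elicitor durations (ms), hypothetical
gr = np.array([1.1, 2.0, 3.4, 4.6, 5.3, 5.4])  # gain reduction (dB), hypothetical
(g_max, tau), _ = curve_fit(gain_reduction, dur, gr, p0=(5.0, 50.0))
print(f"tau = {tau:.0f} ms")                   # compare with the ~46-78 ms above
```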
12.
J Autism Dev Disord; 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36932270

ABSTRACT

The auditory steady-state response (ASSR) has been studied as a potential biomarker for abnormal auditory sensory processing in autism spectrum disorder (ASD), with mixed results. Motivated by prior somatosensory findings of group differences in inter-trial coherence (ITC) between ASD and typically developing (TD) individuals at twice the steady-state stimulation frequency, we examined ASSRs at 25 and 50 Hz, and at 43 and 86 Hz, in response to 25-Hz and 43-Hz auditory stimuli, respectively, using magnetoencephalography. Data were recorded from 22 ASD and 31 TD children, ages 6-17 years. ITC measures showed prominent ASSRs at the stimulation and double frequencies, without significant group differences. These results do not support the ASSR as a robust biomarker of abnormal auditory processing in ASD. Furthermore, the previously observed atypical double-frequency somatosensory response in ASD did not generalize to the auditory modality. Thus, the hypothesis of modality-independent abnormal local connectivity in ASD was not supported.
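A minimal sketch of the ITC computation behind these ASSR measures, assuming an epochs array of shape (trials, samples); taking a single FFT bin is a simplification of typical time-frequency pipelines.

```python
import numpy as np

def itc(epochs, fs, freq):
    """Inter-trial coherence at one frequency from per-trial FFT phases.

    ITC = |mean of unit-length phase vectors|, from 0 (random phase)
    to 1 (perfect phase locking across trials).
    """
    n = epochs.shape[-1]
    coeffs = np.fft.rfft(epochs, axis=-1)[:, int(round(freq * n / fs))]
    return np.abs(np.mean(coeffs / np.abs(coeffs)))

# e.g., phase locking at a 43-Hz stimulation rate and its double:
# itc(epochs, fs=1000, freq=43), itc(epochs, fs=1000, freq=86)
```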

13.
Neuroimage Clin; 37: 103336, 2023.
Article in English | MEDLINE | ID: mdl-36724734

ABSTRACT

Individuals with autism spectrum disorder (ASD) commonly display speech processing abnormalities. Binding of acoustic features of speech distributed across different frequencies into coherent speech objects is fundamental in speech perception. Here, we tested the hypothesis that the cortical processing of bottom-up acoustic cues for speech binding may be anomalous in ASD. We recorded magnetoencephalography while ASD children (ages 7-17) and typically developing peers heard sentences of sine-wave speech (SWS) and modulated SWS (MSS) where binding cues were restored through increased temporal coherence of the acoustic components and the introduction of harmonicity. The ASD group showed increased long-range feedforward functional connectivity from left auditory to parietal cortex with concurrent decreased local functional connectivity within the parietal region during MSS relative to SWS. As the parietal region has been implicated in auditory object binding, our findings support our hypothesis of atypical bottom-up speech binding in ASD. Furthermore, the long-range functional connectivity correlated with behaviorally measured auditory processing abnormalities, confirming the relevance of these atypical cortical signatures to the ASD phenotype. Lastly, the group difference in the local functional connectivity was driven by the youngest participants, suggesting that impaired speech binding in ASD might be ameliorated upon entering adolescence.


Subject(s)
Autism Spectrum Disorder, Humans, Autism Spectrum Disorder/diagnostic imaging, Cues, Speech, Magnetoencephalography, Auditory Perception
14.
bioRxiv; 2023 May 22.
Article in English | MEDLINE | ID: mdl-36712081

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring the intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and with predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index the different neural processes that together support complex listening.

15.
Commun Biol; 5(1): 733, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35869142

ABSTRACT

Animal models suggest that cochlear afferent nerve endings may be more vulnerable than sensory hair cells to damage from acoustic overexposure and aging. Because neural degeneration without hair-cell loss cannot be detected by standard clinical audiometry, whether such damage occurs in humans is hotly debated. Here, we address this debate through coordinated experiments in at-risk humans and a wild-type chinchilla model. Cochlear neuropathy leads to large and sustained reductions of the wideband middle-ear muscle reflex in chinchillas. Analogously, human wideband reflex measures revealed distinct damage patterns in middle age and in young individuals with histories of high acoustic exposure. Analysis of an independent large public dataset and additional measurements using clinical equipment corroborated the patterns revealed by our targeted cross-species experiments. Taken together, our results suggest that cochlear neural damage is widespread even in populations with clinically normal hearing.


Subject(s)
Cochlea, Hair Cells, Auditory, Acoustic Stimulation, Animals, Chinchilla, Hair Cells, Auditory/physiology, Hearing, Humans, Middle Aged
16.
J Acoust Soc Am; 151(5): 3116, 2022 May.
Article in English | MEDLINE | ID: mdl-35649891

ABSTRACT

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments, at the cost of reduced control of environmental conditions, less precise calibration, inconsistency in attentional state and/or response behaviors, relatively smaller effective sample sizes, and unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with the goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online as a set of Wiki pages and are summarized in this report. Based on the Wiki and a literature search of papers published in this area since 2020, this report outlines the state of the art of remote testing in auditory-related research as of August 2021 and provides three case studies to demonstrate feasibility in practice.


Subject(s)
Acoustics, Auditory Perception, Attention/physiology, Humans, Prospective Studies, Sound
17.
PLoS Biol; 20(2): e3001541, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35167585

ABSTRACT

Organizing sensory information into coherent perceptual objects is fundamental to everyday perception and communication. In the visual domain, indirect evidence from cortical responses suggests that children with autism spectrum disorder (ASD) have anomalous figure-ground segregation. While auditory processing abnormalities are common in ASD, especially in environments with multiple sound sources, to date, the question of scene segregation in ASD has not been directly investigated in audition. Using magnetoencephalography, we measured cortical responses to unattended (passively experienced) auditory stimuli while parametrically manipulating the degree of temporal coherence that facilitates auditory figure-ground segregation. Results from 21 children with ASD (aged 7-17 years) and 26 age- and IQ-matched typically developing children provide evidence that children with ASD show anomalous growth of cortical neural responses with increasing temporal coherence of the auditory figure. The documented neurophysiological abnormalities did not depend on age, and were reflected both in the response evoked by changes in temporal coherence of the auditory scene and in the associated induced gamma rhythms. Furthermore, the individual neural measures were predictive of diagnosis (83% accuracy) and also correlated with behavioral measures of ASD severity and auditory processing abnormalities. These findings offer new insight into the neural mechanisms underlying auditory perceptual deficits and sensory overload in ASD, and suggest that temporal-coherence-based auditory scene analysis and suprathreshold processing of coherent auditory objects may be atypical in ASD.


Subject(s)
Auditory Perception/physiology, Autism Spectrum Disorder/physiopathology, Cortical Synchronization/physiology, Evoked Potentials, Auditory/physiology, Acoustic Stimulation/methods, Adolescent, Autism Spectrum Disorder/diagnosis, Autism Spectrum Disorder/psychology, Child, Female, Humans, Magnetoencephalography/methods, Male, Reaction Time/physiology
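Stimuli for this kind of temporal-coherence manipulation are often built from random multi-tone chords in which a subset of components repeats across chords (a "stochastic figure-ground" design). The generator below is generic and hypothetical; every parameter is an assumption rather than the study's actual stimulus specification.

```python
import numpy as np

def sfg_stimulus(coherence=4, n_tones=12, n_chords=40, chord_ms=50,
                 fs=44100, fmin=200.0, fmax=7200.0, rng=None):
    """Chord sequence with `coherence` frequency components repeated across
    chords (the 'figure') atop fresh random components (the 'ground')."""
    rng = rng or np.random.default_rng()
    freqs = np.geomspace(fmin, fmax, 60)
    figure = rng.choice(freqs, coherence, replace=False)  # coherent components
    n = int(fs * chord_ms / 1000)
    t = np.arange(n) / fs
    chords = []
    for _ in range(n_chords):
        ground = rng.choice(np.setdiff1d(freqs, figure),
                            n_tones - coherence, replace=False)
        chord = sum(np.sin(2 * np.pi * f * t) for f in np.r_[figure, ground])
        chords.append(chord / n_tones)
    return np.concatenate(chords)
```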
18.
eNeuro; 9(2), 2022.
Article in English | MEDLINE | ID: mdl-35193890

ABSTRACT

Neural phase-locking to temporal fluctuations is a fundamental and unique mechanism by which acoustic information is encoded by the auditory system. The perceptual role of this metabolically expensive mechanism, and of neural phase-locking to temporal fine structure (TFS) in particular, is debated. Although hypothesized, it is unclear whether auditory perceptual deficits in certain clinical populations are attributable to deficits in TFS coding. Efforts to uncover the role of TFS have been impeded by the fact that there are no established assays for quantifying the fidelity of TFS coding at the individual level. While many candidates have been proposed, for an assay to be useful it should not only intrinsically depend on TFS coding, but should also have the property that individual differences in the assay reflect TFS coding per se, over and above other sources of variance. Here, we evaluate a range of behavioral and electroencephalogram (EEG)-based measures as candidate individualized measures of TFS sensitivity. Our comparisons of behavioral and EEG-based metrics suggest that extraneous variables dominate both behavioral scores and EEG amplitude metrics, rendering them ineffective. After adjusting behavioral scores using lapse rates, and extracting latency or percent-growth metrics from the EEG, interaural timing sensitivity measures exhibit robust behavior-EEG correlations. Together with the fact that unambiguous theoretical links can be made between binaural measures and phase-locking to TFS, our results suggest that these "adjusted" binaural assays may be well suited for quantifying individual TFS processing.


Subject(s)
Auditory Perception, Acoustic Stimulation/methods, Humans
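One common way to adjust behavioral scores for lapse rates is sketched below, under the assumption that lapsed trials land at chance; the abstract does not specify the paper's exact adjustment, so this is illustrative only.

```python
def lapse_adjusted(p_obs, lapse, chance=0.5):
    """Correct observed proportion correct for attentional lapses.

    Assumes p_obs = (1 - lapse) * p_true + lapse * chance, with the lapse
    rate estimated separately (e.g., from errors on very easy trials).
    """
    return (p_obs - lapse * chance) / (1.0 - lapse)

# e.g., 85% correct with a 5% lapse rate in a 2AFC task:
# lapse_adjusted(0.85, 0.05)  -> ~0.868
```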
19.
Ear Hear; 43(3): 849-861, 2022.
Article in English | MEDLINE | ID: mdl-34751679

ABSTRACT

OBJECTIVES: Despite the widespread use of noise reduction (NR) in modern digital hearing aids, our neurophysiological understanding of how NR affects speech-in-noise perception, and why its effect is variable, is limited. The current study aimed to (1) characterize the effect of NR on the neural processing of target speech and (2) seek neural determinants of individual differences in the NR effect on speech-in-noise performance, hypothesizing that an individual's own capability to inhibit background noise would inversely predict NR benefits in speech-in-noise perception. DESIGN: Thirty-six adult listeners with normal hearing participated in the study. Behavioral and electroencephalographic responses were simultaneously obtained during a speech-in-noise task in which natural monosyllabic words were presented at three different signal-to-noise ratios, each with NR off and on. A within-subject analysis assessed the effect of NR on cortical evoked responses to target speech in temporal-frontal speech and language brain regions, including the supramarginal gyrus and inferior frontal gyrus in the left hemisphere. In addition, an across-subject analysis related an individual's tolerance to noise, measured as the amplitude ratio of auditory-cortical responses to target speech and background noise, to their speech-in-noise performance. RESULTS: At the group level, in the poorest signal-to-noise ratio condition, NR significantly increased early supramarginal gyrus activity and decreased late inferior frontal gyrus activity, indicating a switch to more immediate lexical access and less effortful cognitive processing, although no improvement in behavioral performance was found. The across-subject analysis revealed that the cortical index of individual noise tolerance significantly correlated with NR-driven changes in speech-in-noise performance. CONCLUSIONS: NR can facilitate speech-in-noise processing despite no improvement in behavioral performance. Findings from the current study also indicate that people with lower noise tolerance are likely to benefit more from NR. Overall, the results suggest that future research should take a mechanistic approach to NR outcomes and individual noise tolerance.


Subject(s)
Hearing Aids, Speech Perception, Adult, Humans, Noise, Signal-To-Noise Ratio, Speech, Speech Perception/physiology
20.
IEEE Trans Pattern Anal Mach Intell; 44(12): 9217-9220, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34665721

ABSTRACT

Neuroimaging experiments in general, and EEG experiments in particular, must take care to avoid confounds. A recent TPAMI paper uses data that suffers from a serious, previously reported confound. We demonstrate that their new model and analysis methods do not remedy this confound, and therefore that their claims of high accuracy and neuroscience relevance are invalid.


Subject(s)
Algorithms, Brain Mapping, Brain Mapping/methods, Brain/diagnostic imaging, Neuroimaging, Learning, Electroencephalography/methods