Results 1 - 20 of 51
1.
PLoS Biol ; 20(2): e3001541, 2022 02.
Article in English | MEDLINE | ID: mdl-35167585

ABSTRACT

Organizing sensory information into coherent perceptual objects is fundamental to everyday perception and communication. In the visual domain, indirect evidence from cortical responses suggests that children with autism spectrum disorder (ASD) have anomalous figure-ground segregation. While auditory processing abnormalities are common in ASD, especially in environments with multiple sound sources, to date, the question of scene segregation in ASD has not been directly investigated in audition. Using magnetoencephalography, we measured cortical responses to unattended (passively experienced) auditory stimuli while parametrically manipulating the degree of temporal coherence that facilitates auditory figure-ground segregation. Results from 21 children with ASD (aged 7-17 years) and 26 age- and IQ-matched typically developing children provide evidence that children with ASD show anomalous growth of cortical neural responses with increasing temporal coherence of the auditory figure. The documented neurophysiological abnormalities did not depend on age, and were reflected both in the response evoked by changes in temporal coherence of the auditory scene and in the associated induced gamma rhythms. Furthermore, the individual neural measures were predictive of diagnosis (83% accuracy) and also correlated with behavioral measures of ASD severity and auditory processing abnormalities. These findings offer new insight into the neural mechanisms underlying auditory perceptual deficits and sensory overload in ASD, and suggest that temporal-coherence-based auditory scene analysis and suprathreshold processing of coherent auditory objects may be atypical in ASD.


Subjects
Auditory Perception/physiology , Autism Spectrum Disorder/physiopathology , Cortical Synchronization/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Adolescent , Autism Spectrum Disorder/diagnosis , Autism Spectrum Disorder/psychology , Child , Female , Humans , Magnetoencephalography/methods , Male , Reaction Time/physiology
2.
Behav Res Methods ; 56(3): 1433-1448, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37326771

ABSTRACT

Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of limited available control of the acoustics, and the inability to perform audiometry to confirm normal-hearing status of participants. Here, we outline our approach to mitigate these challenges and validate our procedures by comparing web-based measurements to lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.


Subjects
Auditory Perception , Hearing , Humans , Psychoacoustics , Hearing/physiology , Auditory Perception/physiology , Audiometry , Internet , Auditory Threshold/physiology , Acoustic Stimulation
3.
Hum Brain Mapp ; 44(17): 5810-5827, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37688547

ABSTRACT

Cerebellar differences have long been documented in autism spectrum disorder (ASD), yet the extent to which such differences might impact language processing in ASD remains unknown. To investigate this, we recorded brain activity with magnetoencephalography (MEG) while ASD and age-matched typically developing (TD) children passively processed spoken meaningful English and meaningless Jabberwocky sentences. Using a novel source localization approach that allows higher resolution MEG source localization of cerebellar activity, we found that, unlike TD children, ASD children showed no difference between evoked responses to meaningful versus meaningless sentences in right cerebellar lobule VI. ASD children also had atypically weak functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and several left-hemisphere sensorimotor and language regions in later time windows. In contrast, ASD children had atypically strong functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and primary auditory cortical areas in an earlier time window. The atypical functional connectivity patterns in ASD correlated with ASD severity and the ability to inhibit involuntary attention. These findings align with a model where cerebro-cerebellar speech processing mechanisms in ASD are impacted by aberrant stimulus-driven attention, which could result from atypical temporal information and predictions of auditory sensory events by right cerebellar lobule VI.


Subjects
Autism Spectrum Disorder , Child , Humans , Autism Spectrum Disorder/diagnostic imaging , Magnetoencephalography , Cerebellum/diagnostic imaging , Magnetic Resonance Imaging , Brain Mapping
4.
J Acoust Soc Am ; 153(4): 2482, 2023 04 01.
Article in English | MEDLINE | ID: mdl-37092950

ABSTRACT

Physiological and psychoacoustic studies of the medial olivocochlear reflex (MOCR) in humans have often relied on long duration elicitors (>100 ms). This is largely due to previous research using otoacoustic emissions (OAEs) that found multiple MOCR time constants, including time constants in the 100s of milliseconds, when elicited by broadband noise. However, the effect of the duration of a broadband noise elicitor on similar psychoacoustic tasks is currently unknown. The current study measured the effects of ipsilateral broadband noise elicitor duration on psychoacoustic gain reduction estimated from a forward-masking paradigm. Analysis showed that both masker type and elicitor duration were significant main effects, but no interaction was found. Gain reduction time constants were ∼46 ms for the masker present condition and ∼78 ms for the masker absent condition (ranging from ∼29 to 172 ms), both similar to the fast time constants reported in the OAE literature (70-100 ms). Maximum gain reduction was seen for elicitor durations of ∼200 ms. This is longer than the 50-ms duration which was found to produce maximum gain reduction with a tonal on-frequency elicitor. Future studies of gain reduction may use 150-200 ms broadband elicitors to maximally or near-maximally stimulate the MOCR.
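The single-exponential buildup of gain reduction with elicitor duration described in this abstract can be sketched with a simple curve fit. The data points, parameter values, and function names below are hypothetical stand-ins for illustration, not the study's actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gain_reduction(duration_ms, g_max, tau_ms):
    """Single-exponential buildup of gain reduction with elicitor duration."""
    return g_max * (1.0 - np.exp(-duration_ms / tau_ms))

# Hypothetical gain-reduction estimates (dB) vs. broadband elicitor duration
durations = np.array([12.5, 25.0, 50.0, 100.0, 200.0, 400.0])
true_curve = gain_reduction(durations, g_max=8.0, tau_ms=46.0)
measured = true_curve + np.random.default_rng(0).normal(0.0, 0.2, durations.size)

# Fit the buildup function to recover the time constant
(g_max_hat, tau_hat), _ = curve_fit(gain_reduction, durations, measured,
                                    p0=(5.0, 50.0))
print(f"estimated time constant: {tau_hat:.0f} ms")
```

With a ~46-ms time constant, the fitted curve is within about 2% of its asymptote by 200 ms, consistent with the observed plateau in maximum gain reduction.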


Subjects
Cochlea , Otoacoustic Emissions, Spontaneous , Humans , Psychoacoustics , Cochlea/physiology , Otoacoustic Emissions, Spontaneous/physiology , Reflex/physiology , Time Factors , Acoustic Stimulation , Perceptual Masking/physiology
5.
PLoS Comput Biol ; 17(2): e1008155, 2021 02.
Article in English | MEDLINE | ID: mdl-33617548

ABSTRACT

Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, which results because these response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components. 
Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity compared to previous metrics, e.g., the correlogram peak-height, (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared to modern perceptually based models of speech intelligibility (e.g., that depend on modulation filter banks), and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
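As a toy illustration of the polarity sum/difference decomposition the framework builds on (assumed details: a rectified 1-kHz carrier with a 50-Hz envelope standing in for real peristimulus-time histograms), averaging responses to opposite stimulus polarities isolates the envelope component, while halving their difference isolates the temporal fine structure:

```python
import numpy as np

fs = 10000.0                      # PSTH sampling rate (Hz)
n = 5000                          # 0.5 s of response
t = np.arange(n) / fs
f_carrier, f_env = 1000.0, 50.0   # hypothetical fine-structure and envelope rates

# Toy PSTHs: half-wave rectified responses that share the envelope but flip
# the fine-structure phase when stimulus polarity is inverted.
env = 0.5 * (1.0 + np.sin(2 * np.pi * f_env * t))
psth_pos = np.maximum(env * np.sin(2 * np.pi * f_carrier * t), 0.0)
psth_neg = np.maximum(-env * np.sin(2 * np.pi * f_carrier * t), 0.0)

sum_psth = 0.5 * (psth_pos + psth_neg)   # envelope-following component
dif_psth = 0.5 * (psth_pos - psth_neg)   # fine-structure component

# Spectral analysis of each component
freqs = np.fft.rfftfreq(n, 1.0 / fs)
sum_peak = freqs[np.abs(np.fft.rfft(sum_psth))[1:].argmax() + 1]  # skip DC
dif_peak = freqs[np.abs(np.fft.rfft(dif_psth)).argmax()]
print(f"sum spectrum peaks at {sum_peak:.0f} Hz, difference at {dif_peak:.0f} Hz")
```

The sum spectrum peaks at the 50-Hz envelope rate and the difference spectrum at the 1-kHz carrier, showing how one set of PSTHs supports spectrally specific analyses of both slow and rapid response components.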


Subjects
Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Models, Neurological , Acoustic Stimulation , Animals , Chinchilla/physiology , Cochlear Nerve/physiology , Computational Biology , Disease Models, Animal , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Humans , Models, Animal , Nonlinear Dynamics , Psychoacoustics , Sound , Spatio-Temporal Analysis , Speech Intelligibility/physiology , Speech Perception/physiology , Translational Research, Biomedical
6.
Ear Hear ; 43(3): 849-861, 2022.
Article in English | MEDLINE | ID: mdl-34751679

ABSTRACT

OBJECTIVES: Despite the widespread use of noise reduction (NR) in modern digital hearing aids, our neurophysiological understanding of how NR affects speech-in-noise perception and why its effect is variable is limited. The current study aimed to (1) characterize the effect of NR on the neural processing of target speech and (2) seek neural determinants of individual differences in the NR effect on speech-in-noise performance, hypothesizing that an individual's own capability to inhibit background noise would inversely predict NR benefits in speech-in-noise perception. DESIGN: Thirty-six adult listeners with normal hearing participated in the study. Behavioral and electroencephalographic responses were simultaneously obtained during a speech-in-noise task in which natural monosyllabic words were presented at three different signal-to-noise ratios, each with NR off and on. A within-subject analysis assessed the effect of NR on cortical evoked responses to target speech in the temporal-frontal speech and language brain regions, including supramarginal gyrus and inferior frontal gyrus in the left hemisphere. In addition, an across-subject analysis related an individual's tolerance to noise, measured as the amplitude ratio of auditory-cortical responses to target speech and background noise, to their speech-in-noise performance. RESULTS: At the group level, in the poorest signal-to-noise ratio condition, NR significantly increased early supramarginal gyrus activity and decreased late inferior frontal gyrus activity, indicating a switch to more immediate lexical access and less effortful cognitive processing, although no improvement in behavioral performance was found. The across-subject analysis revealed that the cortical index of individual noise tolerance significantly correlated with NR-driven changes in speech-in-noise performance. CONCLUSIONS: NR can facilitate speech-in-noise processing despite no improvement in behavioral performance. 
Findings from the current study also indicate that people with lower noise tolerance are more likely to benefit from NR. Overall, results suggest that future research should take a mechanistic approach to NR outcomes and individual noise tolerance.


Subjects
Hearing Aids , Speech Perception , Adult , Humans , Noise , Signal-To-Noise Ratio , Speech , Speech Perception/physiology
7.
J Acoust Soc Am ; 151(5): 3116, 2022 05.
Article in English | MEDLINE | ID: mdl-35649891

ABSTRACT

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments, at the cost of reduced control of environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online in the form of a set of Wiki pages and summarized in this report. This report outlines the state-of-the-art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies to demonstrate feasibility in practice.


Subjects
Acoustics , Auditory Perception , Attention/physiology , Humans , Prospective Studies , Sound
8.
J Acoust Soc Am ; 150(3): 2230, 2021 09.
Article in English | MEDLINE | ID: mdl-34598642

ABSTRACT

A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap. We find that the neural envelope-domain signal-to-noise ratio in target-speech encoding, which is shaped by masker modulations, predicts intelligibility over a range of strategically chosen realistic listening conditions unseen by the predictive model. This provides neurophysiological evidence for modulation masking. Moreover, using high-resolution vocoding to carefully control peripheral envelopes, we show that target-envelope coding fidelity in the brain depends not only on envelopes conveyed by the cochlea, but also on the temporal fine structure (TFS), which supports scene segregation. Our results are consistent with the notion that temporal coherence of sound elements across envelopes and/or TFS influences scene analysis and attentive selection of a target sound. Our findings also inform speech-intelligibility models and technologies attempting to improve real-world speech communication.
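One way to picture the envelope-domain signal-to-noise ratio idea is to regress a simulated envelope-band neural response onto the target and masker envelopes and compare the fitted weights. This is a sketch with made-up envelopes, coefficients, and noise levels, not the paper's actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 100.0, 60.0                      # envelope-band sampling rate, seconds
n = int(fs * dur)

target_env = rng.random(n)                 # stand-ins for target/masker envelopes
masker_env = rng.random(n)

# Toy "neural" envelope-band response: mostly target, some masker, plus noise
response = 1.0 * target_env + 0.3 * masker_env + 0.5 * rng.standard_normal(n)

# Regress the response onto both envelopes; the ratio of fitted weights gives
# an envelope-domain SNR for target-speech encoding.
X = np.column_stack([target_env, masker_env, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
snr_db = 20 * np.log10(abs(coef[0]) / abs(coef[1]))
print(f"envelope-domain SNR ~ {snr_db:.1f} dB")
```

A listener whose cortical response weights the target envelope heavily relative to masker modulations would yield a high value on such a measure, which is the quantity the study relates to intelligibility.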


Subjects
Speech Intelligibility , Speech Perception , Acoustic Stimulation , Acoustics , Auditory Perception , Humans , Perceptual Masking , Signal-To-Noise Ratio
9.
Neuroimage ; 174: 57-68, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29462724

ABSTRACT

The functional significance of resting state networks and their abnormal manifestations in psychiatric disorders are firmly established, as is the importance of the cortical rhythms in mediating these networks. Resting state networks are known to undergo substantial reorganization from childhood to adulthood, but whether distinct cortical rhythms, which are generated by separable neural mechanisms and are often manifested abnormally in psychiatric conditions, mediate maturation differentially, remains unknown. Using magnetoencephalography (MEG) to map frequency band specific maturation of resting state networks from age 7 to 29 in 162 participants (31 independent), we found significant changes with age in networks mediated by the beta (13-30 Hz) and gamma (31-80 Hz) bands. More specifically, gamma band mediated networks followed an expected asymptotic trajectory, but beta band mediated networks followed a linear trajectory. Network integration increased with age in gamma band mediated networks, while local segregation increased with age in beta band mediated networks. Spatially, the hubs that changed in importance with age in the beta band mediated networks had relatively little overlap with those that showed the greatest changes in the gamma band mediated networks. These findings are relevant for our understanding of the neural mechanisms of cortical maturation, in both typical and atypical development.
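Graph measures like the integration and segregation indices behind these findings can be computed directly from an adjacency matrix. The random network below is an illustrative stand-in for a thresholded, band-specific MEG connectivity graph, and the specific metric choices (global efficiency, mean local clustering) are common conventions rather than details taken from the study:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Toy binary undirected graph standing in for a thresholded MEG network
rng = np.random.default_rng(0)
n = 20
A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
A = A + A.T                                # symmetric, no self-loops

# Integration: global efficiency = mean inverse shortest-path length
D = shortest_path(A, unweighted=True)
with np.errstate(divide="ignore"):
    inv_d = 1.0 / D
inv_d[~np.isfinite(inv_d)] = 0.0           # zero out the diagonal (1/0)
integration = inv_d.sum() / (n * (n - 1))

# Segregation: mean local clustering = triangles / possible neighbor pairs
deg = A.sum(axis=1)
triangles = np.diag(A @ A @ A) / 2.0
possible = deg * (deg - 1) / 2.0
segregation = np.where(possible > 0, triangles / np.maximum(possible, 1), 0.0).mean()

print(f"integration={integration:.3f}, segregation={segregation:.3f}")
```

Tracking these two quantities per frequency band across ages is the kind of analysis that distinguishes the gamma-band (rising integration) from beta-band (rising local segregation) trajectories reported here.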


Subjects
Aging , Beta Rhythm , Cerebral Cortex/growth & development , Gamma Rhythm , Adolescent , Adult , Brain Mapping , Child , Female , Humans , Machine Learning , Magnetic Resonance Imaging , Magnetoencephalography , Male , Neural Pathways/growth & development , Young Adult
10.
Hum Brain Mapp ; 39(10): 4094-4104, 2018 10.
Article in English | MEDLINE | ID: mdl-29947148

ABSTRACT

Autism spectrum disorder (ASD) is characterized neurophysiologically by, among other things, functional connectivity abnormalities in the brain. Recent evidence suggests that the nature of these functional connectivity abnormalities might not be uniform throughout maturation. Comparing between adolescents and young adults (ages 14-21) with ASD and age- and IQ-matched typically developing (TD) individuals, we previously documented, using magnetoencephalography (MEG) data, that local functional connectivity in the fusiform face areas (FFA) and long-range functional connectivity between FFA and three higher order cortical areas were all reduced in ASD. Given the findings on abnormal maturation trajectories in ASD, we tested whether these results extend to preadolescent children (ages 7-13). We found that both local and long-range functional connectivity were in fact normal in this younger age group in ASD. Combining the two age groups, we found that local and long-range functional connectivity measures were positively correlated with age in TD, but negatively correlated with age in ASD. Last, we showed that local functional connectivity was the primary feature in predicting age in the ASD group, but not in the TD group. Furthermore, local functional connectivity was only correlated with ASD severity in the older group. These results suggest that the direction of maturation of functional connectivity for processing of faces from childhood to young adulthood is itself abnormal in ASD, and that during the processing of faces, these trajectory abnormalities are more pronounced for local functional connectivity measures than they are for long-range functional connectivity measures.


Subjects
Autism Spectrum Disorder/physiopathology , Cerebral Cortex/physiopathology , Connectome/methods , Facial Recognition/physiology , Human Development/physiology , Magnetoencephalography/methods , Social Perception , Adolescent , Adult , Age Factors , Autism Spectrum Disorder/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Child , Humans , Male , Young Adult
11.
J Neurosci ; 36(13): 3755-64, 2016 Mar 30.
Article in English | MEDLINE | ID: mdl-27030760

ABSTRACT

Evidence from animal and human studies suggests that moderate acoustic exposure, causing only transient threshold elevation, can nonetheless cause "hidden hearing loss" that interferes with coding of suprathreshold sound. Such noise exposure destroys synaptic connections between cochlear hair cells and auditory nerve fibers; however, there is no clinical test of this synaptopathy in humans. In animals, synaptopathy reduces the amplitude of auditory brainstem response (ABR) wave-I. Unfortunately, ABR wave-I is difficult to measure in humans, limiting its clinical use. Here, using analogous measurements in humans and mice, we show that the effect of masking noise on the latency of the more robust ABR wave-V mirrors changes in ABR wave-I amplitude. Furthermore, in our human cohort, the effect of noise on wave-V latency predicts perceptual temporal sensitivity. Our results suggest that measures of the effects of noise on ABR wave-V latency can be used to diagnose cochlear synaptopathy in humans. SIGNIFICANCE STATEMENT: Although there are suspicions that cochlear synaptopathy affects humans with normal hearing thresholds, no one has yet reported a clinical measure that is a reliable marker of such loss. By combining human and animal data, we demonstrate that the latency of auditory brainstem response wave-V in noise reflects auditory nerve loss. This is the first study of human listeners with normal hearing thresholds that links individual differences observed in behavior and auditory brainstem response timing to cochlear synaptopathy. These results can guide development of a clinical test to reveal this previously unknown form of noise-induced hearing loss in humans.
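The latency measure at the heart of this result can be illustrated with a toy waveform: pick out the largest peak in a wave-V search window and compare its timing in quiet versus noise. The waveform shapes, latencies, and window bounds below are invented for illustration only:

```python
import numpy as np

fs = 100000.0                              # sampling rate (Hz)
t = np.arange(0, 0.010, 1.0 / fs) * 1e3    # 0-10 ms time axis, in ms

def toy_abr(peak_ms, amp):
    """Gaussian bump standing in for ABR wave-V."""
    return amp * np.exp(-0.5 * ((t - peak_ms) / 0.4) ** 2)

quiet = toy_abr(5.6, 1.0)
masked = toy_abr(5.9, 0.8)                 # masking noise delays and shrinks wave-V

def wave_v_latency(abr, lo_ms=4.0, hi_ms=8.0):
    """Latency (ms) of the largest peak inside the wave-V search window."""
    win = (t >= lo_ms) & (t <= hi_ms)
    return t[win][abr[win].argmax()]

shift_ms = wave_v_latency(masked) - wave_v_latency(quiet)
print(f"wave-V latency shift in noise: {shift_ms:.2f} ms")
```

It is this noise-induced latency shift, rather than the hard-to-measure wave-I amplitude, that the study proposes as a proxy for cochlear synaptopathy.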


Subjects
Ear, Inner/pathology , Evoked Potentials, Auditory, Brain Stem/physiology , Hearing Loss, Noise-Induced/pathology , Noise , Reaction Time/physiology , Synapses/pathology , Acoustic Stimulation , Adult , Animals , Auditory Perception/physiology , Auditory Threshold/physiology , Disease Models, Animal , Electroencephalography , Female , Hearing Loss, Noise-Induced/physiopathology , Humans , Male , Mice , Otoacoustic Emissions, Spontaneous/physiology , Young Adult
12.
J Neurosci ; 35(5): 2161-72, 2015 Feb 04.
Article in English | MEDLINE | ID: mdl-25653371

ABSTRACT

Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing."


Subjects
Auditory Cortex/physiology , Auditory Perception , Auditory Threshold , Cochlea/physiology , Hearing Loss/physiopathology , Hearing , Adult , Auditory Cortex/physiopathology , Cochlea/physiopathology , Female , Humans , Male , Speech Perception
13.
J Acoust Soc Am ; 138(3): 1637-59, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428802

ABSTRACT

Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses is not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN) and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2-2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing-impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities.
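The general modeling idea — drive a population firing rate and convolve it with a unitary far-field response to obtain a scalp potential — can be caricatured in a few lines. The waveforms and level dependencies below are invented to show the latency-vs-intensity behavior qualitatively; they are not the paper's fitted model:

```python
import numpy as np

fs = 20000.0
t = np.arange(0, 0.015, 1.0 / fs)          # 15 ms simulation window

def unitary_response(t):
    """Hypothetical damped oscillation linking firing rate to far-field voltage."""
    return np.exp(-t / 0.002) * np.sin(2 * np.pi * 600.0 * t)

def population_rate(t, level_db):
    """Toy onset burst: earlier and more synchronous at higher stimulus levels."""
    latency = 0.005 - 0.00002 * level_db   # earlier onset at high levels
    width = 0.0012 - 0.00001 * level_db    # tighter synchrony at high levels
    return np.exp(-0.5 * ((t - latency) / width) ** 2)

# Simulated far-field response = population rate convolved with unitary response
peak_ms = {}
for level in (20.0, 80.0):
    abr = np.convolve(population_rate(t, level), unitary_response(t))[: t.size]
    peak_ms[level] = 1e3 * t[np.abs(abr).argmax()]
    print(f"{level:.0f} dB: dominant peak near {peak_ms[level]:.2f} ms")
```

Even this caricature reproduces the qualitative wave-V behavior the paper targets: the dominant peak arrives earlier at the higher stimulus level.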


Subjects
Brain Stem/physiology , Cochlear Nerve/physiology , Acoustic Stimulation , Basilar Membrane/physiology , Behavioral Sciences , Cochlea/physiology , Evoked Potentials, Auditory, Brain Stem/physiology , Hearing/physiology , Humans , Otoacoustic Emissions, Spontaneous/physiology , Reaction Time/physiology , Vibration
14.
Psychol Res ; 78(3): 349-60, 2014.
Article in English | MEDLINE | ID: mdl-24633644

ABSTRACT

Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and audition, the "unit" on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings.


Subjects
Attention/physiology , Auditory Perception/physiology , Perceptual Masking/physiology , Voice , Acoustic Stimulation , Adult , Female , Humans , Male , Young Adult
15.
Proc Natl Acad Sci U S A ; 108(37): 15516-21, 2011 Sep 13.
Article in English | MEDLINE | ID: mdl-21844339

ABSTRACT

"Normal hearing" is typically defined by threshold audibility, even though everyday communication relies on extracting key features of easily audible sound, not on sound detection. Anecdotally, many normal-hearing listeners report difficulty communicating in settings where there are competing sound sources, but the reasons for such difficulties are debated: Do these difficulties originate from deficits in cognitive processing, or differences in peripheral, sensory encoding? Here we show that listeners with clinically normal thresholds exhibit very large individual differences on a task requiring them to focus spatial selective auditory attention to understand one speech stream when there are similar, competing speech streams coming from other directions. These individual differences in selective auditory attention ability are unrelated to age, reading span (a measure of cognitive function), and minor differences in absolute hearing threshold; however, selective attention ability correlates with the ability to detect simple frequency modulation in a clearly audible tone. Importantly, we also find that selective attention performance correlates with physiological measures of how well the periodic, temporal structure of sounds above the threshold of audibility are encoded in early, subcortical portions of the auditory pathway. These results suggest that the fidelity of early sensory encoding of the temporal structure in suprathreshold sounds influences the ability to communicate in challenging settings. Tests like these may help tease apart how peripheral and central deficits contribute to communication impairments, ultimately leading to new approaches to combat the social isolation that often ensues.


Subjects
Auditory Threshold/physiology , Communication , Hearing/physiology , Acoustics , Attention/physiology , Auditory Cortex/physiology , Humans , Task Performance and Analysis , Time Factors
16.
Sci Rep ; 14(1): 13241, 2024 06 09.
Article in English | MEDLINE | ID: mdl-38853168

ABSTRACT

Cochlear implants (CIs) do not offer the same level of effectiveness in noisy environments as in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise, and are ineffective against realistic, non-stationary noise such as multi-talker interference. Recent developments in deep neural network (DNN) algorithms have achieved noteworthy performance in speech enhancement and separation, especially in removing speech noise. However, more work is needed to investigate the potential of DNN algorithms in removing speech noise when tested with listeners fitted with CIs. Here, we implemented two DNN algorithms that are well suited for applications in speech audio processing: (1) recurrent neural network (RNN) and (2) SepFormer. The algorithms were trained with a customized dataset (∼30 h), and then tested with thirteen CI listeners. Both RNN and SepFormer algorithms significantly improved CI listeners' speech intelligibility in noise without compromising the perceived quality of speech overall. These algorithms not only increased the intelligibility in stationary non-speech noise, but also introduced a substantial improvement in non-stationary noise, where conventional signal processing strategies fall short, offering little benefit. These results show the promise of using DNN algorithms as a solution for listening challenges in multi-talker noise interference.
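The mask-based enhancement such networks perform can be sketched with an oracle ideal-ratio mask; a trained RNN or SepFormer would be predicting a mask like this from the noisy spectrogram alone. The signals, STFT parameters, and SNR metric below are toy stand-ins for illustration:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t) * np.exp(-t * 2)       # toy "speech"
noise = 0.8 * np.random.default_rng(0).standard_normal(fs)  # stationary noise
mix = speech + noise

_, _, S = stft(speech, fs=fs, nperseg=512)
_, _, X = stft(mix, fs=fs, nperseg=512)

# Oracle ideal-ratio mask; a DNN would learn to predict this from X alone
mask = np.abs(S) / (np.abs(S) + np.abs(X - S) + 1e-8)
_, enhanced = istft(mask * X, fs=fs, nperseg=512)

def snr_db(ref, est):
    """SNR of an estimate against a clean reference, in dB."""
    m = min(ref.size, est.size)
    err = est[:m] - ref[:m]
    return 10 * np.log10(np.sum(ref[:m] ** 2) / np.sum(err ** 2))

snr_before = snr_db(speech, mix)
snr_after = snr_db(speech, enhanced)
print(f"SNR before: {snr_before:.1f} dB, after: {snr_after:.1f} dB")
```

The oracle mask sets an upper bound: the closer a learned mask estimator gets to it, the larger the intelligibility benefit available to CI listeners.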


Subjects
Algorithms , Cochlear Implants , Deep Learning , Noise , Speech Intelligibility , Humans , Female , Middle Aged , Male , Speech Perception/physiology , Aged , Adult , Neural Networks, Computer
17.
bioRxiv ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39026860

ABSTRACT

Distortion product otoacoustic emissions (DPOAEs) and behavioral audiometry are routinely used for hearing screening and assessment. These measures provide related information about hearing status, as both are sensitive to cochlear pathologies. However, DPOAE testing is quicker and does not require a behavioral response. Despite these practical advantages, DPOAE testing is often limited to screening only low and mid-frequencies. Variation in ear canal acoustics across ears and probe placements has made DPOAE measurements less reliable near and above 4 kHz, where standing waves commonly occur. Calibrating stimuli in forward pressure level and expressing responses in emitted pressure level can reduce this measurement variability. Using these calibrations, this study assessed the correlation between audiometry and DPOAEs in the extended high frequencies, where stimulus calibrations and responses are most susceptible to standing-wave effects. Behavioral thresholds and DPOAE amplitudes were negatively correlated, and DPOAE amplitudes in emitted pressure level accounted for twice as much variance as amplitudes in sound pressure level. Both measures were correlated with age. These data show that, with appropriate calibration methods, extended high-frequency DPOAEs are sensitive to differences in audiometric thresholds, and they highlight the need to consider calibration techniques in clinical and research applications of DPOAEs.
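The abstract does not restate the measurement convention, but the DPOAE recorded clinically is conventionally the cubic distortion product at 2·f1 − f2, with primary tones set at a frequency ratio of about f2/f1 = 1.22 — an assumption stated here for illustration, not a detail confirmed by this study. A short sketch of the arithmetic for mid- to extended-high-frequency probes:

```python
# Cubic distortion product conventionally measured clinically: 2*f1 - f2,
# with primaries at an assumed ratio f2/f1 = 1.22 (typical, not study-specific).
def dpoae_frequencies(f2_hz, ratio=1.22):
    """Return the lower primary f1 and the 2f1-f2 distortion product."""
    f1 = f2_hz / ratio
    return f1, 2 * f1 - f2_hz

# f2 values spanning the mid to extended high frequencies discussed above
for f2 in (4000, 8000, 12000, 16000):
    f1, fdp = dpoae_frequencies(f2)
    print(f"f2 = {f2:5d} Hz  ->  f1 = {f1:7.1f} Hz, 2f1-f2 = {fdp:7.1f} Hz")
```

Note that the emission emerges well below f2, so probe calibration must be accurate both at the high-frequency primaries (where standing waves are worst) and at the lower distortion-product frequency being recorded.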

18.
Adv Exp Med Biol ; 787: 501-10, 2013.
Article in English | MEDLINE | ID: mdl-23716257

ABSTRACT

We recently showed that listeners with normal hearing thresholds vary in their ability to direct spatial attention and that this ability is related to the fidelity of temporal coding in the brainstem. Here, we recruited additional middle-aged listeners and extended our analysis of the brainstem response, measured using the frequency-following response (FFR). We found that even though age does not predict overall selective attention ability, middle-aged listeners are more susceptible than young adults to the detrimental effects of reverberant energy. We separated the overall FFR into orthogonal envelope and carrier components and used an existing model to predict which auditory channels drive each component. Responses in mid- to high-frequency auditory channels dominate the envelope FFR, while lower-frequency channels dominate the carrier FFR. Importantly, which component of the FFR predicts selective attention performance changes with age. We suggest that early aging degrades peripheral temporal coding at mid to high frequencies, interfering with the coding of envelope interaural time differences (ITDs). We argue that middle-aged listeners, who lack strong temporal envelope coding, have more trouble than young adults following a conversation in a reverberant room because they are forced to rely on fragile carrier ITDs that are susceptible to the degrading effects of reverberation.
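The abstract does not spell out the decomposition used, but one common way to separate envelope- and carrier-locked FFR components is to record responses to opposite-polarity stimuli and take their half-sum and half-difference: carrier-locked activity inverts when the stimulus polarity inverts, while envelope-locked activity does not. A minimal synthetic sketch of that idea (simulated responses, not real FFR data, and not necessarily the chapter's exact method):

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
envelope_part = np.sin(2 * np.pi * 100 * t)        # locked to the 100 Hz envelope
carrier_part = 0.3 * np.sin(2 * np.pi * 600 * t)   # locked to the fine structure

# Carrier-locked activity flips sign with stimulus polarity;
# envelope-locked activity does not.
resp_pos = envelope_part + carrier_part   # response to original polarity
resp_neg = envelope_part - carrier_part   # response to inverted polarity

ffr_env = (resp_pos + resp_neg) / 2   # envelope FFR: carrier cancels
ffr_car = (resp_pos - resp_neg) / 2   # carrier FFR: envelope cancels
```

In this idealized case the half-sum recovers the envelope component exactly and the half-difference recovers the carrier component; with real recordings, noise and partial cancellation make the separation approximate.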


Subjects
Aging/physiology , Attention/physiology , Brain Stem/physiology , Hearing/physiology , Models, Neurological , Sound Localization/physiology , Acoustic Stimulation/methods , Adult , Auditory Threshold/physiology , Behavior/physiology , Humans , Middle Aged , Noise , Space Perception/physiology , Speech Perception/physiology , Time Perception/physiology , Young Adult
19.
J Acoust Soc Am ; 134(1): 384-95, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23862815

ABSTRACT

Two experiments, both presenting diotic, harmonic tone complexes (100 Hz fundamental), were conducted to explore the envelope-related component of the frequency-following response (FFRENV), a measure of synchronous, subcortical neural activity evoked by a periodic acoustic input. Experiment 1 directly compared two common analysis methods: computing the magnitude spectrum and computing the phase-locking value (PLV). Bootstrapping identified which FFRENV frequency components were statistically above the noise floor for each metric and quantified the statistical power of the approaches. Across listeners and conditions, the two methods produced highly correlated results. However, PLV analysis required fewer processing stages to produce readily interpretable results. Moreover, at the fundamental frequency of the input, PLVs were farther above the metric's noise floor than spectral magnitudes were. Having established the advantages of PLV analysis, the efficacy of the approach was further demonstrated by investigating how different acoustic frequencies contribute to FFRENV, analyzing responses to complex tones composed of different acoustic harmonics of 100 Hz (Experiment 2). Results show that FFRENV is dominated by peripheral auditory channels responding to unresolved harmonics, although low-frequency channels driven by resolved harmonics also contribute. These results demonstrate the utility of the PLV for quantifying the strength of FFRENV across conditions.
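The PLV at a given frequency is the magnitude of the mean unit phasor across trials: 1 when the response phase is perfectly consistent from trial to trial, near 0 when it is random. A minimal sketch on synthetic trials (single frequency bin, without the bootstrapped noise-floor step described above; all signal parameters are invented):

```python
import numpy as np

def plv(trials, fs, freq):
    """Phase-locking value at `freq`: magnitude of the mean unit phasor
    across trials (rows of `trials`)."""
    spec = np.fft.rfft(trials, axis=1)
    bin_idx = int(round(freq * trials.shape[1] / fs))
    phases = np.angle(spec[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
fs, n_trials, n_samp = 10000, 200, 1000
t = np.arange(n_samp) / fs
signal = np.sin(2 * np.pi * 100 * t)              # 100 Hz "FFR" component
trials = signal + 2.0 * rng.standard_normal((n_trials, n_samp))

print(plv(trials, fs, 100))   # near 1: phase consistent across trials
print(plv(trials, fs, 230))   # near 0: noise-only bin
```

Because the PLV discards per-trial amplitude and keeps only phase consistency, its noise floor depends only on the trial count, which is one reason it can be interpreted with fewer processing stages than raw spectral magnitudes.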


Subjects
Acoustic Stimulation/methods , Auditory Pathways/physiology , Electroencephalography , Evoked Potentials, Auditory/physiology , Pitch Discrimination/physiology , Signal Processing, Computer-Assisted , Sound Spectrography , Adult , Auditory Threshold/physiology , Cochlear Nerve/physiopathology , Female , Fourier Analysis , Humans , Male , Psychoacoustics , Superior Colliculi/physiopathology , Young Adult
20.
bioRxiv ; 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37790457

ABSTRACT

The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast, cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues. Here, we circumvented this limitation by leveraging individual differences across 200 participants to systematically compare variations in TFS sensitivity with performance on a range of speech perception tasks. Results suggest that robust TFS sensitivity does not confer additional masking release from pitch or spatial cues but appears to confer resilience against the effects of reverberation. Across conditions, we also found that greater TFS sensitivity is associated with faster response times, consistent with reduced listening effort. These findings highlight the perceptual significance of TFS coding for everyday hearing.
