Results 1 - 20 of 402
1.
Trends Hear ; 28: 23312165241264466, 2024.
Article in English | MEDLINE | ID: mdl-39106413

ABSTRACT

This study investigated sound localization abilities in patients with bilateral conductive and/or mixed hearing loss (BCHL) when listening with either one or two middle ear implants (MEIs). Sound localization was measured by asking patients to point, as quickly and accurately as possible, with a head-mounted LED toward the perceived sound direction. Loudspeakers, positioned around the listener within a range of +73°/-73° in the horizontal plane, were not visible to the patients. Broadband (500 Hz-20 kHz) noise bursts (150 ms), roved over a 20-dB range in 10-dB steps, were presented. MEIs stimulate the ipsilateral cochlea only, and therefore the localization response was not affected by crosstalk. Sound localization was better with bilateral MEIs than in the unilateral left and unilateral right conditions. Good sound localization performance was found in the bilaterally aided hearing condition in four patients. In two patients, localization abilities equaled normal-hearing performance. Interestingly, in the unaided condition, when both devices were turned off, subjects could still localize the stimuli presented at the highest sound level. Comparison with data from patients implanted bilaterally with bone-conduction devices demonstrated that localization abilities with MEIs were superior. The measurements demonstrate that patients with BCHL, who use remnant binaural cues in the unaided condition, are able to process binaural cues when listening with bilateral MEIs. We conclude that implantation with two MEIs, each stimulating only the ipsilateral cochlea without crosstalk to the contralateral cochlea, can result in good sound localization abilities, and that this topic needs further investigation.


Subjects
Acoustic Stimulation , Conductive Hearing Loss , Mixed Conductive-Sensorineural Hearing Loss , Ossicular Prosthesis , Sound Localization , Humans , Sound Localization/physiology , Female , Male , Middle Aged , Conductive Hearing Loss/physiopathology , Conductive Hearing Loss/surgery , Conductive Hearing Loss/diagnosis , Conductive Hearing Loss/rehabilitation , Adult , Mixed Conductive-Sensorineural Hearing Loss/physiopathology , Mixed Conductive-Sensorineural Hearing Loss/rehabilitation , Mixed Conductive-Sensorineural Hearing Loss/surgery , Mixed Conductive-Sensorineural Hearing Loss/diagnosis , Aged , Bilateral Hearing Loss/physiopathology , Bilateral Hearing Loss/rehabilitation , Bilateral Hearing Loss/diagnosis , Bilateral Hearing Loss/surgery , Treatment Outcome , Prosthesis Design , Cues (Psychology) , Young Adult , Auditory Threshold , Bone Conduction/physiology
2.
Psychophysiology ; : e14656, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095947

ABSTRACT

The neurological basis for perceptual awareness remains unclear, and theories disagree as to whether sensory cortices per se generate awareness. Critically, neural activity in the sensory cortices is a neural correlate of consciousness (NCC) only if it closely matches the contents of perceptual awareness. Research in vision and touch suggests that contralateral activity in sensory cortices is an NCC. Similarly, research in hearing with two sound sources (left and right) presented over headphones also suggests that a candidate NCC called the auditory awareness negativity (AAN) matches the perceived location of sound. The current study used 13 different sound sources presented over loudspeakers for natural localization cues and measured event-related potentials to a threshold stimulus in a sound localization task. Preregistered Bayesian mixed models provided moderate evidence against an overall AAN and very strong evidence against its lateralization. Because of issues regarding data quantity and quality, exploratory analyses with aggregated data from multiple loudspeakers were conducted. Results provided moderate evidence for an overall AAN and strong evidence against its lateralization. Nonetheless, the interpretations of these results remain inconclusive. Therefore, future research should reduce the number of conditions and/or test over several sessions to procure a sufficient amount of data. Taken at face value, the results may suggest issues with the AAN as an NCC of auditory awareness, as it does not laterally map onto experiences in a free-field auditory environment, in contrast to the NCCs of vision and touch.

3.
Hear Res ; 452: 109094, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39153443

ABSTRACT

Sound localization in the front-back dimension is reported to be challenging, with individual differences. We investigated whether auditory discrimination processing in the brain differs based on front-back sound localization ability. This study conducted an auditory oddball task using speakers in front of and behind the participants. We used event-related brain potentials to examine the deviance detection process between groups that could and could not discriminate front-back sound locations. The results indicated that mismatch negativity (MMN) occurred during the deviance detection process, and P2 amplitude differed between standard and deviant locations in both groups. However, the latency of MMN was shorter in the group that could discriminate front-back sounds than in the group that could not. Additionally, N1 amplitude increased for deviant locations compared to standard ones only in the discriminating group. In conclusion, the sensory memory matching process based on traces of previously presented stimuli (MMN, P2) occurred regardless of discrimination ability. However, the response to changes in the physical properties of sounds (MMN latency, N1 amplitude) differed depending on the ability to discriminate front-back sounds. Our findings suggest that the brain may have different processing strategies for the two directions even without subjective recognition of the front-back direction of incoming sounds.

4.
Sensors (Basel) ; 24(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001008

ABSTRACT

Speaker diarization consists of answering the question of "who spoke when" in audio recordings. In meeting scenarios, the task of labeling audio with the corresponding speaker identities can be further assisted by exploiting spatial features. This work proposes a framework designed to assess the effectiveness of combining speaker embeddings with Time Difference of Arrival (TDOA) values from available microphone sensor arrays in meetings. We extract speaker embeddings using two popular and robust pre-trained models, ECAPA-TDNN and X-vectors, and calculate the TDOA values via the Generalized Cross-Correlation (GCC) method with Phase Transform (PHAT) weighting. Although ECAPA-TDNN outperforms the X-vectors model, we utilize both speaker embedding models to explore the potential of employing a computationally lighter model when spatial information is exploited. Various techniques for combining the spatial-temporal information are examined to determine the best clustering method. The proposed framework is evaluated on two multichannel datasets: the AVLab Speaker Localization dataset and a multichannel dataset (SpeaD-M3C) enriched in the context of the present work with supplementary information from smartphone recordings. Our results strongly indicate that the integration of spatial information can significantly improve the performance of state-of-the-art deep learning diarization models, presenting a 2-3% reduction in DER (diarization error rate) compared to the baseline approach on the evaluated datasets.
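The TDOA estimation step named in this abstract, Generalized Cross-Correlation with PHAT weighting, can be sketched as follows; the sample rate, signals, and function name below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=1):
    """Estimate the time difference of arrival of sig relative to ref
    using Generalized Cross-Correlation with PHAT weighting."""
    n = sig.shape[0] + ref.shape[0]   # FFT length covering the full correlation
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15            # PHAT: keep only the phase information
    cc = np.fft.irfft(R, n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)  # TDOA in seconds

# Toy check: a 5-sample delay at fs = 16 kHz
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.concatenate((np.zeros(5), x))[:4096]   # y lags x by 5 samples
tau = gcc_phat(y, x, fs)                       # expected ~ 5 / 16000 s
```

In an array, each microphone pair yields one such TDOA, and the resulting vector of delays is what the paper combines with speaker embeddings for clustering.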

5.
J Audiol Otol ; 28(3): 203-212, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946331

ABSTRACT

BACKGROUND AND OBJECTIVES: Localization of a sound source in the horizontal plane depends on the listener's interaural comparison of arrival time and level. Hearing loss (HL) can reduce access to these binaural cues, possibly disrupting the localization and memory of spatial information. Thus, this study aimed to investigate the horizontal sound localization performance and the spatial short-term memory in listeners with actual and simulated HL. SUBJECTS AND METHODS: Seventeen listeners with bilateral symmetric HL and 17 listeners with normal hearing (NH) participated in the study. The hearing thresholds of NH listeners were elevated by a spectrally shaped masking noise for the simulations of unilateral hearing loss (UHL) and bilateral hearing loss (BHL). The localization accuracy and errors as well as the spatial short-term memory span were measured in the free field using a set of 11 loudspeakers arrayed over a 150° arc. RESULTS: The localization abilities and spatial short-term memory span did not significantly differ between actual BHL listeners and BHL-simulated NH listeners. Overall, the localization performance with the UHL simulation was approximately twofold worse than that with the BHL simulation, and the hearing asymmetry led to a detrimental effect on spatial memory. The mean localization score as a function of stimulus location in the UHL simulation was less than 30% even for the front (0° azimuth) stimuli and much worse on the side closer to the simulated ear. In the UHL simulation, the localization responses were biased toward the side of the intact ear even when sounds were coming from the front. CONCLUSIONS: Hearing asymmetry induced by the UHL simulation substantially disrupted the localization performance and recall abilities of spatial positions encoded and stored in the memory, due to fewer chances to learn strategies to improve localization. 
The marked effect of hearing asymmetry on sound localization highlights the need for clinical assessments of spatial hearing in addition to conventional hearing tests.

6.
Eur J Neurosci ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085952

ABSTRACT

Sound-source localization is based on spatial cues arising from interactions of sound waves with the torso, head and ears. Here, we evaluated neural responses to free-field sound sources in the central nucleus of the inferior colliculus (CIC), the medial geniculate body (MGB) and the primary auditory cortex (A1) of Mongolian gerbils. Using silicon probes, we recorded from anaesthetized gerbils positioned in the centre of a sound-attenuating, anechoic chamber. We measured rate-azimuth functions (RAFs) with broad-band noise of varying levels presented from loudspeakers spanning 210° in azimuth and characterized RAFs by calculating spatial centroids, Equivalent Rectangular Receptive Fields (ERRFs), steepest slope locations and spatial-separation thresholds. To compare neuronal responses with behavioural discrimination thresholds from the literature we performed a neurometric analysis based on signal-detection theory. All structures demonstrated heterogeneous spatial tuning with a clear dominance of contralateral tuning. However, the relative amount of contralateral tuning decreased from the CIC to A1. In all three structures, spatial tuning broadened with increasing sound level. This effect was strongest in CIC and weakest in A1. Neurometric spatial-separation thresholds compared well with behavioural discrimination thresholds for locations directly in front of the animal. Our findings contrast with those reported for another rodent, the rat, which exhibits homogeneous and sharply delimited contralateral spatial tuning. Spatial tuning in gerbils resembles more closely the tuning reported in A1 of cats, ferrets and non-human primates. Interestingly, gerbils, in contrast to rats, share good low-frequency hearing with carnivores and non-human primates, which may account for the observed spatial tuning properties.

7.
Hear Res ; 451: 109078, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39053298

ABSTRACT

Musicians perform better than non-musicians on a variety of non-musical sound-perception tasks. Whether that musicians' advantage extends to spatial hearing is a topic of increasing interest. Here we investigated one facet of that topic by assessing musicians' and non-musicians' sensitivity to the two primary cues to sound-source location on the horizontal plane: interaural level differences (ILDs) and interaural time differences (ITDs). Specifically, we measured discrimination thresholds for ILDs at 4 kHz (n = 246) and ITDs at 0.5 kHz (n = 137) in participants whose musical-training histories covered a wide range of lengths, onsets, and offsets. For ILD discrimination, when only musical-training length was considered in the analysis, no musicians' advantage was apparent. However, when thresholds were compared between subgroups of non-musicians (<2 years of training) and extreme musicians (≥10 years of training, started ≤ age 7, still playing), a musicians' advantage emerged. Threshold comparisons between the extreme musicians and other subgroups of highly trained musicians (≥10 years of training) further indicated that the advantage required both starting young and continuing to play. In addition, the advantage was larger in males than in females, by some measures, and was not evident in an assessment of learning. For ITD discrimination, in contrast to ILD discrimination, parallel analyses revealed no apparent musicians' advantage. The results suggest that musicianship is associated with greater sensitivity to ILDs, a fundamental sound-localization cue, even though that sensitivity is not central to music; that this musicians' advantage arises, at least in part, from nurture; and that it is governed by a neural substrate where ILDs are processed separately from, and more malleably than, ITDs.
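For context, the two binaural cues measured here can be extracted from a two-ear signal in a few lines; the tone, level difference, and delay below are invented purely for illustration:

```python
import numpy as np

fs = 44100
t = np.arange(int(0.05 * fs)) / fs               # 50 ms of signal

# Hypothetical binaural snapshot: the right ear receives a 500-Hz tone
# earlier and louder than the left ear (source on the right side).
delay = int(round(500e-6 * fs))                   # ~500-microsecond ITD
tone = np.sin(2 * np.pi * 500 * t)
right = tone
left = 0.5 * np.roll(tone, delay)                 # left lags and is 6 dB quieter

# ILD: interaural level difference in dB, from the RMS ratio
rms = lambda x: np.sqrt(np.mean(x ** 2))
ild_db = 20 * np.log10(rms(right) / rms(left))

# ITD: lag at the peak of the interaural cross-correlation
n = len(tone)
lags = np.arange(-(n - 1), n)
itd_est = lags[np.argmax(np.correlate(left, right, mode="full"))] / fs
```

Discrimination thresholds as in the study would then correspond to the smallest change in `ild_db` or `itd_est` a listener can reliably detect.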


Subjects
Acoustic Stimulation , Auditory Threshold , Cues (Psychology) , Music , Sound Localization , Humans , Male , Female , Adult , Young Adult , Time Factors , Adolescent , Hearing , Age Factors , Middle Aged , Psychological Discrimination
8.
Psychol Rep ; : 332941241260246, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38857521

ABSTRACT

When completing a task, the ability to implement behavioral strategies that solve it effectively and with low cognitive demand is extremely adaptive for humans. Such behavior makes it possible to accumulate evidence and test one's own predictions about the external world. In this work, starting from examples in the field of spatial hearing research, I analyze the importance of considering motor strategies in perceptual tasks, and I stress the urgent need to create ecological experimental settings, which are essential for allowing the implementation of such behaviors and for measuring them. In particular, I consider head movements as an example of strategic behavior implemented to solve acoustic space-perception tasks.

9.
Sci Prog ; 107(2): 368504241262195, 2024.
Article in English | MEDLINE | ID: mdl-38872447

ABSTRACT

A vestibular schwannoma is a benign tumor; however, the schwannoma itself and interventions for it can cause sensorineural hearing loss. Most vestibular schwannomas are unilateral tumors that affect hearing only on one side. Attention has focused on improving the quality of life for patients with unilateral hearing loss, and therapeutic interventions to address this issue have been emphasized. Herein, we describe a patient who was a candidate for hearing preservation surgery based on preoperative findings but had nonserviceable hearing after the surgery, according to the Gardner-Robertson classification. Postoperatively, the patient had decreased listening comprehension and a reduced ability to localize sound sources. He was fitted with bilateral hearing aids, and his ability to localize sound sources improved. Although the patient had postoperative nonserviceable hearing on the affected side and age-related hearing loss on the unaffected side, hearing aids in both ears were useful in his daily life. The patient was thus able to maintain a binaural hearing effect, and his ability to localize sound sources improved. This report emphasizes the importance of hearing preservation in vestibular schwannoma treatment; demand for rehabilitation of hearing loss as a postoperative complication can be expected to increase, even when the residual hearing is nonserviceable.


Subjects
Hearing Aids , Acoustic Neuroma , Humans , Acoustic Neuroma/surgery , Male , Middle Aged , Sensorineural Hearing Loss/surgery , Sensorineural Hearing Loss/rehabilitation , Sensorineural Hearing Loss/etiology , Quality of Life , Hearing Loss/etiology , Hearing Loss/surgery , Hearing Loss/rehabilitation , Postoperative Complications/etiology
10.
Front Cell Neurosci ; 18: 1354520, 2024.
Article in English | MEDLINE | ID: mdl-38846638

ABSTRACT

The lateral superior olive (LSO), a prominent integration center in the auditory brainstem, contains a remarkably heterogeneous population of neurons. Ascending neurons, predominantly principal neurons (pLSOs), process interaural level differences for sound localization. Descending neurons (lateral olivocochlear neurons, LOCs) provide feedback into the cochlea and are thought to protect against acoustic overload. The molecular determinants of the neuronal diversity in the LSO are largely unknown. Here, we used patch-seq analysis in mice at postnatal days P10-12 to classify developing LSO neurons according to their functional and molecular profiles. Across the entire sample (n = 86 neurons), genes involved in ATP synthesis were particularly highly expressed, confirming the energy expenditure of auditory neurons. Two clusters were identified, pLSOs and LOCs. They were distinguished by 353 differentially expressed genes (DEGs), most of which were novel for the LSO. Electrophysiological analysis confirmed the transcriptomic clustering. We focused on genes affecting neuronal input-output properties and validated some of them by immunohistochemistry, electrophysiology, and pharmacology. These genes encode proteins such as osteopontin, Kv11.3, and Kvβ3 (pLSO-specific), calcitonin-gene-related peptide (LOC-specific), or Kv7.2 and Kv7.3 (no DEGs). We identified 12 "Super DEGs" and 12 genes showing "Cluster similarity." Collectively, we provide fundamental and comprehensive insights into the molecular composition of individual ascending and descending neurons in the juvenile auditory brainstem and how this may relate to their specific functions, including developmental aspects.

11.
Sensors (Basel) ; 24(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38894232

ABSTRACT

Sound localization is a crucial aspect of human auditory perception. Virtual reality (VR) technologies provide immersive audio platforms that allow human listeners to experience natural sounds based on their ability to localize sound. However, the sound simulations generated by these platforms, which are based on a generic head-related transfer function (HRTF), often lack accuracy in individual sound perception and localization because of significant individual differences in this function. In this study, we aimed to investigate the disparities between the locations of sound sources perceived by users and the locations generated by the platform. Our goal was to determine whether users can be trained to adapt to the platform-generated sound sources. We used the Microsoft HoloLens 2 virtual platform and collected data from 12 subjects across six separate training sessions arranged over 2 weeks. We employed three modes of training to assess their effects on sound localization, in particular the impacts of multimodal error, visual, and sound guidance, in combination with kinesthetic/postural guidance, on the effectiveness of the training. We analyzed the collected data in terms of the training effect between pre- and post-sessions, as well as the retention effect between two separate sessions, based on subject-wise paired statistics. Our findings indicate that the training effect between pre- and post-sessions was statistically significant, in particular when kinesthetic/postural guidance was combined with visual and sound guidance. Conversely, visual error guidance alone was largely ineffective. In contrast, for the retention effect between two separate sessions, we found no statistically meaningful effect for any of the three error-guidance modes over the 2-week training period. These findings can contribute to the improvement of VR technologies by ensuring they are designed to optimize human sound localization abilities.
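The subject-wise paired analysis mentioned in this abstract can be sketched with a paired t-test; the per-subject error values below are simulated, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-subject localization errors (degrees) before and after
# training, for 12 subjects, with a hypothetical ~6-degree benefit.
pre = rng.normal(25.0, 5.0, size=12)
post = pre - rng.normal(6.0, 2.0, size=12)

# Subject-wise paired t-test: does the mean pre/post difference
# differ from zero?
d = pre - post
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Two-tailed critical value for df = 11 at alpha = 0.05
critical = 2.201
significant = abs(t_stat) > critical
```

Pairing each subject with themselves removes between-subject variability, which is why the abstract emphasizes subject-wise paired statistics rather than independent-group comparisons.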


Subjects
Sound Localization , Humans , Sound Localization/physiology , Female , Male , Adult , Virtual Reality , Young Adult , Auditory Perception/physiology , Sound
12.
Brain Sci ; 14(6)2024 May 24.
Article in English | MEDLINE | ID: mdl-38928534

ABSTRACT

Auditory spatial cues contribute to two distinct functions: one leads to explicit localization of sound sources, and the other provides a location-linked representation of sound objects. Behavioral and imaging studies demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented the dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation. The latter involves location-linked encoding of sound objects. We review here the evidence pertaining to brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that changed their locations or remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects involves the left hemisphere strongly and the right hemisphere to a lesser degree. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, emotional valence benefits from location-linked encoding as well.

13.
Audiol Neurootol ; : 1-8, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38697033

ABSTRACT

INTRODUCTION: The aim of this study was to examine how bimodal stimulation affects quality of life (QOL) during the postoperative period following cochlear implantation (CI). These data could potentially provide evidence to encourage more bimodal candidates to continue hearing aid (HA) use after CI. METHODS: In this prospective study, patients completed preoperative and 1-, 3-, and 6-month post-activation QOL surveys on listening effort, speech perception, sound quality/localization, and hearing handicap. Fifteen HA users who were candidates for contralateral CI completed the study (mean age 65.6 years). RESULTS: Patients used both devices at a median rate of 97%, 97%, and 98% of the time at 1, 3, and 6 months, respectively. On average, patients' hearing handicap scores decreased by 16% at 1 month, 36% at 3 months, and 30% at 6 months. Patients' listening effort scores decreased by a mean of 10.8% at 1 month, 12.6% at 3 months, and 18.7% at 6 months. Localization significantly improved by 24.3% at 1 month and remained steady. There was no significant improvement in sound quality scores. CONCLUSION: Bimodal listeners should expect QOL to improve, and listening effort and localization are generally optimized using CI and HA compared to CI alone. Some scores improved at earlier time points than others, suggesting bimodal auditory skills may develop at different rates.

14.
Article in English | MEDLINE | ID: mdl-38797372

ABSTRACT

BACKGROUND AND OBJECTIVE: Sound localization plays a crucial role in our daily lives, enabling us to recognize voices, respond to alarming situations, avoid dangers, and navigate towards specific signals. However, this ability is compromised in patients with Single-Sided Deafness (SSD) and Asymmetric Hearing Loss (AHL), negatively impacting their daily functioning. The main objective of the study was to quantify the degree of sound source localization in patients with single-sided deafness or asymmetric hearing loss using a Cochlear Implant (CI) and to compare between the two subgroups. MATERIALS AND METHODS: This was a prospective, longitudinal, observational, single-center study involving adult patients diagnosed with profound unilateral or asymmetric sensorineural hearing loss who underwent cochlear implantation. Sound localization was assessed in a chamber equipped with seven speakers evenly distributed from -90° to 90°. Stimuli were presented at 1000 Hz and intensities of 65 dB, 70 dB, and 75 dB. Each stimulus was presented only once per speaker, totaling 21 presentations. The number of correct responses at different intensities was recorded, and angular error in degrees was calculated to determine the mean angular distance between the patient-indicated speaker and the speaker presenting the stimulus. Both assessments were conducted preoperatively without a cochlear implant and two years post-implantation. RESULTS: The total sample comprised 20 patients, with 9 assigned to the SSD group and 11 to the AHL group. The Preoperative Pure Tone Average (PTA) in free field was 31.7 dB in the SSD group and 41.8 dB in the AHL group. There was a statistically significant improvement in sound localization ability and angular error with the use of the cochlear implant at all intensities in both SSD and AHL subgroups.
CONCLUSIONS: Cochlear implantation in patients with SSD and AHL enhances sound localization, reducing mean angular error and increasing the number of correct sound localization responses.
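The mean angular-error metric described in the methods can be sketched as follows; the speaker layout follows the description (seven speakers evenly spaced from -90° to 90°), while the response data below are invented:

```python
import numpy as np

# Seven loudspeakers evenly distributed from -90 to +90 degrees
speakers = np.linspace(-90, 90, 7)        # [-90, -60, -30, 0, 30, 60, 90]

def mean_angular_error(presented_idx, responded_idx):
    """Mean absolute angular distance (degrees) between the speaker that
    presented the stimulus and the speaker the patient pointed to."""
    presented = speakers[np.asarray(presented_idx)]
    responded = speakers[np.asarray(responded_idx)]
    return float(np.mean(np.abs(presented - responded)))

# 21 presentations: each of the 7 speakers once per intensity (65/70/75 dB)
presented = np.tile(np.arange(7), 3)
responded = presented.copy()
responded[0] = 2                           # missed -90 deg, pointed at -30 deg
responded[10] = 4                          # missed 0 deg, pointed at +30 deg
error = mean_angular_error(presented, responded)
```

A perfect localizer scores 0°; a patient responding at chance across this arc would accumulate a much larger mean angular distance.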

15.
Cogn Neurodyn ; 18(2): 715-740, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699604

ABSTRACT

Neurons in the medial superior olive (MSO) exhibit high-frequency responses such as subthreshold resonance, which helps to sensitively detect a small difference in the arrival time of sounds between the two ears for precise sound localization. Recently, in addition to the high-frequency depolarization resonance mediated by a low-threshold potassium (IKLT) current, a low-frequency hyperpolarization resonance mediated by a hyperpolarization-activated cation (IH) current has been observed in experiments on MSO neurons, forming double resonances. The complex dynamics underlying the double resonances are studied in an MSO neuron model in the present paper. First, double resonances similar to the experimental observations are simulated when the resting membrane potential lies between the half-activation voltages of the IH and IKLT currents and a stimulation current (IZAP) with large amplitude and exponentially increasing frequency is applied. Second, multiple effective factors that modulate the double resonances are identified. In particular, decreasing the time constant of the IKLT current and increasing the conductances of the IH and IKLT currents can enhance the depolarization resonance frequency for precise sound localization. Last, the different frequency responses of the slow IH and fast IKLT currents in the formation of the resonances are characterized. A middle phase difference between the IZAP and IKLT currents appears at high frequency, and the interaction between the positive part of IZAP and the negative IKLT current forms the depolarization resonance. The interaction between the negative part of IZAP and the positive IH current, with a middle phase difference, results in the hyperpolarization resonance at low frequency. Furthermore, the phase difference between IZAP and the resonance current explains the increase of the depolarization resonance frequency with increasing conductance of the IH or IKLT currents. The results present the dynamical and biophysical mechanisms for the double resonances mediated by two currents in MSO neurons, which is helpful for enhancing the depolarization resonance frequency for precise sound localization.
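The exponentially swept stimulation current (IZAP) and an impedance-based read-out of resonance frequency can be sketched in a model-independent way; all parameter values and function names here are illustrative assumptions, not the paper's model:

```python
import numpy as np

def zap_current(t, f0, f1, T, amp=1.0):
    """Stimulation current whose instantaneous frequency rises
    exponentially from f0 to f1 over duration T (an IZAP-style sweep)."""
    k = (f1 / f0) ** (1.0 / T)
    # phase(t) = 2*pi * integral_0^t f0 * k**s ds
    phase = 2 * np.pi * f0 * (np.power(k, t) - 1.0) / np.log(k)
    return amp * np.sin(phase)

def resonance_frequency(V, I, fs, fmin=0.5, fmax=200.0):
    """Frequency at the peak of the impedance magnitude |V(f)/I(f)|."""
    freqs = np.fft.rfftfreq(len(I), 1.0 / fs)
    Z = np.fft.rfft(V) / np.fft.rfft(I)
    band = (freqs > fmin) & (freqs < fmax)
    return freqs[band][np.argmax(np.abs(Z[band]))]

fs, T = 10000.0, 10.0
t = np.arange(0.0, T, 1.0 / fs)
I = zap_current(t, f0=0.5, f1=200.0, T=T)
```

Feeding `I` into a membrane model and recording the voltage response `V` would then locate the resonance peak via `resonance_frequency(V, I, fs)`; depolarization and hyperpolarization resonances appear as separate impedance peaks at high and low frequency.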

16.
J Neurosci ; 44(21)2024 May 22.
Article in English | MEDLINE | ID: mdl-38664010

ABSTRACT

The natural environment challenges the brain to prioritize the processing of salient stimuli. The barn owl, a sound localization specialist, exhibits a circuit called the midbrain stimulus selection network, dedicated to representing locations of the most salient stimulus in circumstances of concurrent stimuli. Previous competition studies using unimodal (visual) and bimodal (visual and auditory) stimuli have shown that relative strength is encoded in spike response rates. However, open questions remain concerning the effects of auditory-auditory competition on coding. To this end, we presented diverse auditory competitors (concurrent flat noise and amplitude-modulated noise) and recorded neural responses of awake barn owls of both sexes in subsequent midbrain space maps, the external nucleus of the inferior colliculus (ICx) and the optic tectum (OT). While both ICx and OT exhibit a topographic map of auditory space, OT also integrates visual input and is part of the global-inhibitory midbrain stimulus selection network. Through comparative investigation of these regions, we show that while increasing the strength of a competitor sound decreases spike response rates of spatially distant neurons in both regions, relative strength determines spike train synchrony of nearby units only in the OT. Furthermore, changes in synchrony by sound competition in the OT are correlated to gamma-range oscillations of local field potentials associated with input from the midbrain stimulus selection network. The results of this investigation suggest that modulations in spiking synchrony between units by gamma oscillations are an emergent coding scheme representing the relative strength of concurrent stimuli, which may have relevant implications for downstream readout.


Subjects
Acoustic Stimulation , Inferior Colliculi , Sound Localization , Strigiformes , Animals , Strigiformes/physiology , Female , Male , Acoustic Stimulation/methods , Sound Localization/physiology , Inferior Colliculi/physiology , Mesencephalon/physiology , Auditory Perception/physiology , Brain Mapping , Auditory Pathways/physiology , Neurons/physiology , Action Potentials/physiology
17.
Front Neurosci ; 18: 1353413, 2024.
Article in English | MEDLINE | ID: mdl-38562303

ABSTRACT

Background: Patients with age-related hearing loss (ARHL) often struggle with tracking and locating sound sources, but the neural signature associated with these impairments remains unclear. Materials and methods: Using a passive listening task with stimuli from five different horizontal directions in functional magnetic resonance imaging, we defined functional regions of interest (ROIs) of the auditory "where" pathway based on data from the previous literature and on young normal-hearing listeners (n = 20). Then, we investigated associations of the demographic, cognitive, and behavioral features of sound localization with task-based activation and connectivity of the ROIs in ARHL patients (n = 22). Results: We found that increased activation of high-level regions, such as the premotor cortex and inferior parietal lobule, was associated with increased localization accuracy and cognitive function. Moreover, increased connectivity between the left planum temporale and left superior frontal gyrus was associated with increased localization accuracy in ARHL. Increased connectivity between the right primary auditory cortex and right middle temporal gyrus, the right premotor cortex and left anterior cingulate cortex, and the right planum temporale and left lingual gyrus in ARHL was associated with decreased localization accuracy. Among the ARHL patients, the task-dependent brain activation and connectivity of certain ROIs were associated with education, hearing loss duration, and cognitive function. Conclusion: Consistent with the sensory deprivation hypothesis, in ARHL, sound source identification, which requires advanced processing in high-level cortex, is impaired, whereas right-left discrimination, which relies on the primary sensory cortex, is compensated, with a tendency to recruit additional cognitive and attentional resources to the auditory sensory cortex.
Overall, this study expands our understanding of the neural mechanisms contributing to sound localization deficits in ARHL, and the identified activation and connectivity patterns may serve as potential imaging biomarkers for investigating and predicting anomalous sound localization.

18.
Front Hum Neurosci ; 18: 1342931, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-38681742

RESUMO

Objectives: Auditory spatial processing abilities mature throughout childhood and degenerate in older adults. This study aimed to compare the differences in onset cortical auditory evoked potentials (CAEPs) and location-evoked acoustic change complex (ACC) responses among children, adults, and the elderly, and to investigate the impact of aging and development on ACC responses. Design: One hundred and seventeen people were recruited for the study, including 57 typically developed children, 30 adults, and 30 elderly adults. The onset-CAEP evoked by white noise and the ACC evoked by sequential changes in azimuth were recorded. Latencies and amplitudes as a function of azimuth were analyzed using analysis of variance, Pearson correlation analysis, and a multiple linear regression model. Results: The ACC N1'-P2' amplitudes and latencies in adults, P1'-N1' amplitudes in children, and N1' amplitudes and latencies in the elderly were correlated with the angles of the shifts. The N1'-P2' and P2' amplitudes decreased in the elderly compared to adults. In children, the ACC P1'-N1' responses gradually differentiated into the P1'-N1'-P2' complex. Multiple regression analysis showed that N1'-P2' amplitudes (R² = 0.33) and P2' latencies (R² = 0.18) were the two strongest predictors in adults, while in the elderly, N1' latencies (R² = 0.26) explained the most variance. Although the amplitudes of the onset-CAEP differed at some angles, they could not predict angle changes as effectively as ACC responses. Conclusion: The location-evoked ACC responses varied among children, adults, and the elderly. The N1'-P2' amplitudes and P2' latencies in adults and N1' latencies in the elderly explained most of the variance in changes of spatial position. Differentiation of the N1' waveform was observed in children.
Further research should be conducted across all age groups, along with behavioral assessments, to confirm the relationship between aging or immaturity in objective ACC responses and poorer subjective spatial performance. Significance: ACCs evoked by location changes were assessed in adults, children, and the elderly to explore the impact of aging and development on these responses.

19.
HNO ; 72(7): 504-514, 2024 Jul.
Artigo em Alemão | MEDLINE | ID: mdl-38536465

RESUMO

BACKGROUND: Binaural hearing enables better speech comprehension in noisy environments and is necessary for acoustic spatial orientation. This study investigates speech discrimination in noise with spatially separated signal sources and measures sound localization. The aim was to characterize the behavior and reproducibility of two selected measurement techniques that appear suitable for describing the aforementioned aspects of binaural hearing. MATERIALS AND METHODS: Speech reception thresholds (SRT) in noise and test-retest reliability were collected from 55 normal-hearing adults for a spatial loudspeaker setup with angles of ±45° and ±90° using the Oldenburg sentence test. Sound localization was investigated in a semicircle and a full-circle setup (7 and 12 equidistant loudspeakers, respectively). RESULTS: SRTs (S-45N45: -14.1 dB SNR; S45N-45: -16.4 dB SNR; S0N90: -13.1 dB SNR; S0N-90: -13.4 dB SNR) and test-retest reliability (4 to 6 dB SNR) were collected for speech intelligibility in noise with separated signals. The procedural learning effect for this setup could only be mitigated with 120 training sentences. Significantly smaller SRT values, indicating better speech discrimination, were found for the test situation of the right ear compared to the left. RMS localization errors were obtained in the semicircle (1.9°) as well as in the full-circle setup (11.1°). Better results were obtained in the retest of the full-circle setup. CONCLUSION: When using the Oldenburg sentence test in noise with spatially separated signals, a training session of 120 sentences is mandatory in order to minimize the procedural learning effect. Ear-specific SRT values for speech discrimination in noise with separated signal sources are required, probably due to the right-ear advantage. Training is recommended for sound localization in the full-circle setup.
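The RMS localization error reported in degrees above is a standard summary statistic for pointing tasks. A minimal sketch of how such a value is computed, using hypothetical target/response azimuths (the loudspeaker angles and trial data below are illustrative assumptions, not the study's data):

```python
import math

def rms_error(targets_deg, responses_deg):
    """Root-mean-square localization error in degrees:
    sqrt of the mean squared target-response deviation."""
    diffs = [r - t for t, r in zip(targets_deg, responses_deg)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical trials: presented azimuths vs. pointed responses (degrees)
targets = [-90, -60, -30, 0, 30, 60, 90]
responses = [-85, -62, -28, 2, 33, 57, 88]
print(round(rms_error(targets, responses), 1))  # → 2.9
```

Because the deviations are squared before averaging, a few large pointing errors inflate the RMS value more than many small ones, which is one reason full-circle setups (with front-back confusions) yield larger values than semicircle setups.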


Assuntos
Ruído , Localização de Som , Percepção da Fala , Humanos , Localização de Som/fisiologia , Reprodutibilidade dos Testes , Feminino , Adulto , Masculino , Percepção da Fala/fisiologia , Adulto Jovem , Sensibilidade e Especificidade , Teste do Limiar de Recepção da Fala/métodos , Estimulação Acústica/métodos , Testes de Discriminação da Fala/métodos
20.
Trends Hear ; 28: 23312165241235463, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-38425297

RESUMO

Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.
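The virtualization-induced increase in localization blur described above can be approximated as the difference in mean localization error between the VR and loudspeaker conditions of a within-participant design. A simplified sketch with hypothetical per-participant errors (the study's actual metric and data may differ):

```python
def blur_increase(loudspeaker_errors, vr_errors):
    """Mean absolute localization error (degrees) per condition and the
    virtualization-induced increase, VR minus loudspeaker baseline."""
    mean = lambda xs: sum(xs) / len(xs)
    base = mean(loudspeaker_errors)
    vr = mean(vr_errors)
    return base, vr, vr - base

# Hypothetical mean absolute errors (degrees) for four participants,
# measured once per condition in a within-participant design
base, vr, delta = blur_increase([4.0, 5.5, 3.5, 5.0], [7.0, 8.5, 6.0, 8.5])
print(base, vr, delta)  # → 4.5 7.5 3.0
```

Reporting the increase relative to the conventional setup, rather than the raw VR error alone, separates the blur added by virtualization from each participant's baseline localization ability.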


Assuntos
Transtornos da Percepção Auditiva , Localização de Som , Realidade Virtual , Adulto , Humanos , Testes Auditivos