Results 1 - 20 of 4,923
1.
J Acoust Soc Am ; 156(1): 164-175, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38958583

ABSTRACT

Piano tone localization at the performer's listening point is a multisensory process involving audition, vision, and upper limb proprioception. The resulting representation of the auditory scene, especially in experienced pianists, is likely also influenced by their memory of the instrument keyboard. Disambiguating these components is not straightforward, and first requires an analysis of the acoustic tone localization process to assess the role of auditory feedback in forming this scene. This analysis is complicated by the acoustic behavior of the piano, which neither guarantees the activation of the auditory precedence effect during a tone attack nor provides robust interaural differences during the subsequent free evolution of the sound. In a tone localization task using a Disklavier upright piano (which can be operated remotely and configured to have its hammers hit a damper instead of producing a tone), twenty-three expert musicians, including pianists, successfully recognized the angular position of seven evenly distributed notes across the keyboard. The experiment involved listening to either full piano tones or just the key mechanical noise, with no additional feedback from other senses. This result suggests that the key mechanical noise alone activated the localization process, without support from vision or limb proprioception. Since the same noise is present in the onset of the full tones, the key mechanics of our piano created a touch precursor in those tones that may be responsible for their correct angular localization by means of the auditory precedence effect. However, the contribution of pitch cues arriving at the listener after the touch precursor was not measured when full tones were presented. As these cues characterize a note comprehensively, and hence the corresponding key position, an open question remains regarding the contribution of pianists' spatial memory of the instrument keyboard to tone localization.


Subject(s)
Cues , Music , Sound Localization , Humans , Sound Localization/physiology , Adult , Male , Female , Young Adult , Acoustic Stimulation , Proprioception/physiology , Feedback, Sensory/physiology
2.
J Comp Neurol ; 532(7): e25653, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38962885

ABSTRACT

The sound localization behavior of the nocturnally hunting barn owl and its underlying neural computations are a textbook example of neuroethology. Differences in sound timing and level at the two ears are integrated in a series of well-characterized steps, from brainstem to inferior colliculus (IC), resulting in a topographic neural representation of auditory space. An important question for brain evolution remains: how is this specialized case derived from a more plesiomorphic pattern? The present study is the first to match physiology with anatomical subregions in the non-owl avian IC. Single-unit responses in the chicken IC were tested for selectivity to different frequencies and to the binaural difference cues. Their anatomical origin was reconstructed with the help of electrolytic lesions and immunohistochemical identification of different subregions of the IC, based on previous characterizations in owl and chicken. In contrast to the barn owl, there was no distinct differentiation of responses across subregions. We found neural topographies for both binaural cues but no evidence for a coherent representation of auditory space. The results are consistent with previous work in pigeon IC and chicken higher-order midbrain and suggest a plesiomorphic condition of multisensory integration in the midbrain that is dominated by lateral panoramic vision.


Subject(s)
Acoustic Stimulation , Chickens , Cues , Inferior Colliculi , Sound Localization , Animals , Inferior Colliculi/physiology , Chickens/physiology , Sound Localization/physiology , Acoustic Stimulation/methods , Auditory Pathways/physiology , Strigiformes/physiology , Neurons/physiology
3.
Article in Chinese | MEDLINE | ID: mdl-38965850

ABSTRACT

Objectives: To investigate the outcomes of cochlear implantation in Mandarin-speaking cochlear implant (CI) users with single-sided deafness (SSD). Methods: This was a single-center prospective cohort study. Eleven Mandarin-speaking adult SSD patients (6 males, 5 females, aged 24 to 50 years) who underwent cochlear implantation at Capital Medical University Beijing Tongren Hospital between August 2020 and October 2021 were recruited. In a sound field with 7 loudspeakers distributed over 180°, we measured the root-mean-square error (RMSE) of sound source localization preoperatively and at 1, 3, 6, and 12 months after switch-on to assess improvement in localization. The Mandarin Speech Perception (MSP) test was used in the sound field to measure speech reception thresholds (SRTs) in steady-state noise for different speech and noise locations, with the CI off and on, to capture the head shadow effect (SSSDNNH), the binaural summation effect (S0N0), and the squelch effect (S0NSSD). The Tinnitus Handicap Inventory (THI) and a Visual Analogue Scale (VAS) were used to assess changes in tinnitus severity and loudness at each time point. The Speech, Spatial and Qualities of Hearing Scale (SSQ) and the Nijmegen Cochlear Implant Questionnaire (NCIQ) were used to assess the subjective benefits in spatial speech perception and quality of life after implantation. SPSS 19.0 was used for statistical analysis. Results: Hearing thresholds in the poorer ear improved significantly with the CI on compared with CI off. Sound source localization also improved significantly, with statistically significant reductions in RMSE at every follow-up relative to the preoperative baseline (P<0.05). In the SSSDNNH condition, which reflects the head shadow effect, the binaural SRT improved significantly, by 6.5 dB, compared with the unaided condition (t=6.25, P=0.001). There was no significant SRT improvement between the binaural and unaided conditions for S0N0 and S0NSSD (P>0.05). The THI total score and all three subscale scores decreased significantly (P<0.05), and tinnitus VAS scores were significantly lower with binaural hearing than in the unaided condition (P<0.001). The SSQ total score and the speech and spatial subscale scores improved significantly with binaural hearing (P<0.001). NCIQ total scores did not differ between the preoperative and postoperative assessments (P>0.05), although the self-efficacy subscore increased significantly (Z=-2.497, P=0.013). Conclusion: CI can help Mandarin-speaking SSD patients partially restore binaural hearing, improving sound localization and speech recognition in noise. CI in SSD patients can also suppress tinnitus, reduce its loudness, and improve subjective perceptions of spatial hearing and quality of life.
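The RMSE localization metric used in the 7-loudspeaker task above can be sketched in a few lines; the trial data below are hypothetical, with loudspeaker targets and listener responses given as azimuths in degrees.

```python
import math

def localization_rmse(responses_deg, targets_deg):
    """Root-mean-square error between perceived and true source azimuths."""
    errors = [r - t for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical single-session data for 7 loudspeakers spanning 180 degrees:
targets = [-90, -60, -30, 0, 30, 60, 90]
responses = [-60, -60, -30, 0, 30, 30, 60]
print(round(localization_rmse(responses, targets), 1))  # → 19.6
```

Smaller values indicate better localization; identical response and target lists give an RMSE of 0.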


Subject(s)
Cochlear Implantation , Humans , Male , Female , Cochlear Implantation/methods , Adult , Middle Aged , Prospective Studies , Treatment Outcome , Hearing Loss, Unilateral/surgery , Cochlear Implants , Speech Perception , Young Adult , Sound Localization , Tinnitus/surgery , Deafness/surgery , Hearing Aids
4.
Nature ; 631(8019): 118-124, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38898274

ABSTRACT

Locating sound sources such as prey or predators is critical for survival in many vertebrates. Terrestrial vertebrates locate sources by measuring the time delay and intensity difference of sound pressure at each ear [1-5]. Underwater, however, the physics of sound makes interaural cues very small, suggesting that directional hearing in fish should be nearly impossible [6]. Yet, directional hearing has been confirmed behaviourally, although the mechanisms have remained unknown for decades. Several hypotheses have been proposed to explain this remarkable ability, including the possibility that fish evolved an extreme sensitivity to minute interaural differences or that fish might compare sound pressure with particle motion signals [7,8]. However, experimental challenges have long hindered a definitive explanation. Here we empirically test these models in the transparent teleost Danionella cerebrum, one of the smallest vertebrates [9,10]. By selectively controlling pressure and particle motion, we dissect the sensory algorithm underlying directional acoustic startles. We find that both cues are indispensable for this behaviour and that their relative phase controls its direction. Using micro-computed tomography and optical vibrometry, we further show that D. cerebrum has the sensory structures to implement this mechanism. D. cerebrum shares these structures with more than 15% of living vertebrate species, suggesting a widespread mechanism for inferring sound direction.


Subject(s)
Cues , Hearing , Sound Localization , Animals , Hearing/physiology , Sound Localization/physiology , Pressure , Zebrafish/physiology , X-Ray Microtomography , Male , Female , Sound , Vibration , Algorithms
5.
Sensors (Basel) ; 24(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38894232

ABSTRACT

Sound localization is a crucial aspect of human auditory perception. Virtual reality (VR) technologies provide immersive audio platforms that allow listeners to experience natural sounds based on their ability to localize sound. However, the sound simulations generated by these platforms, which are based on a generic head-related transfer function (HRTF), often lack accuracy in individual sound perception and localization because this function varies substantially between individuals. In this study, we investigated the disparities between the sound-source locations perceived by users and the locations generated by the platform, and asked whether users can be trained to adapt to the platform-generated sound sources. We used the Microsoft HoloLens 2 virtual platform and collected data from 12 subjects over six separate training sessions spread across 2 weeks. We employed three training modes to assess their effects on sound localization, in particular the impact of multimodal error guidance, combining visual and sound guidance with kinesthetic/postural guidance, on training effectiveness. We analyzed the data in terms of the training effect between pre- and post-sessions and the retention effect between separate sessions, using subject-wise paired statistics. Our findings indicate that the training effect between pre- and post-sessions was statistically significant, in particular when kinesthetic/postural guidance was combined with visual and sound guidance; visual error guidance alone was largely ineffective. In contrast, we found no statistically significant retention effect between separate sessions for any of the three error-guidance modes over the 2-week training period. These findings can contribute to the improvement of VR technologies by ensuring they are designed to optimize human sound localization abilities.


Subject(s)
Sound Localization , Humans , Sound Localization/physiology , Female , Male , Adult , Virtual Reality , Young Adult , Auditory Perception/physiology , Sound
6.
PLoS One ; 19(6): e0304832, 2024.
Article in English | MEDLINE | ID: mdl-38900820

ABSTRACT

Neurons of the lateral superior olive (LSO) in the auditory brainstem play a fundamental role in binaural sound localization. Previous theoretical studies developed various types of neuronal models to study the physiological functions of the LSO. These models were usually tuned to a small set of physiological data with specific aims in mind. Therefore, it is unclear whether and how they can be related to each other, how widely applicable they are, and which model is suitable for what purposes. In this study, we address these questions for six different single-compartment integrate-and-fire (IF) type LSO models. The models are divided into two groups depending on their subthreshold responses: passive (linear) models with only the leak conductance and active (nonlinear) models with an additional low-voltage-activated potassium conductance that is prevalent in the auditory system. Each of these two groups is further subdivided into three subtypes according to the spike generation mechanism: one with simple threshold-crossing detection and voltage reset, one with threshold-crossing detection plus a current to mimic spike shapes, and one with a depolarizing exponential current for spiking. In our simulations, all six models were driven by identical synaptic inputs and calibrated with common criteria for binaural tuning. The resulting spike rates of the passive models were higher for intense inputs and lower for temporally structured inputs than those of the active models, confirming the active function of the potassium current. Within each passive or active group, the simulated responses resembled each other, regardless of the spike generation type. These results, in combination with an analysis of computational costs, indicate that an active IF model is more suitable than a passive model for accurately reproducing temporal coding in the LSO. Simulating realistic spike shapes with an extended spiking mechanism added relatively small computational costs.
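The passive/active distinction above can be sketched as a single-compartment IF neuron with an optional low-voltage-activated potassium (KLT) conductance and the simplest spike-generation subtype (threshold-crossing detection with voltage reset). All parameter values here are illustrative assumptions, not the fitted values of the six models compared in the study.

```python
import math

def simulate_if(i_stim, t_end=0.02, dt=1e-5, active=False):
    """Single-compartment IF neuron with threshold-crossing spike detection.
    If active=True, adds a low-voltage-activated K+ (KLT) conductance.
    Illustrative parameters only (not fitted to LSO data)."""
    c_m = 30e-12                      # membrane capacitance (F)
    g_leak = 20e-9                    # leak conductance (S)
    e_leak, e_k = -65e-3, -90e-3      # reversal potentials (V)
    g_klt = 60e-9 if active else 0.0  # KLT conductance (S)
    v_thresh, v_reset = -45e-3, -65e-3
    v, w, spikes = e_leak, 0.0, 0
    for _ in range(int(t_end / dt)):
        # KLT activation relaxes toward its voltage-dependent steady state.
        w_inf = 1.0 / (1.0 + math.exp(-(v + 57e-3) / 6e-3))
        w += dt * (w_inf - w) / 1e-3  # activation time constant: 1 ms
        i_ion = g_leak * (v - e_leak) + g_klt * w * (v - e_k)
        v += dt * (i_stim - i_ion) / c_m
        if v >= v_thresh:             # threshold crossing -> spike, then reset
            v, spikes = v_reset, spikes + 1
    return spikes

# The same step current drives sustained firing in the passive model but
# at most a brief onset response in the active model.
passive, act = simulate_if(1.5e-9), simulate_if(1.5e-9, active=True)
print(passive, act)
```

With these toy parameters the KLT conductance activates just below threshold and shunts sustained depolarization, so the active model responds phasically, consistent with the lower spike rates the study reports for active models under intense inputs.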


Subject(s)
Models, Neurological , Superior Olivary Complex , Superior Olivary Complex/physiology , Action Potentials/physiology , Neurons/physiology , Humans , Computer Simulation , Olivary Nucleus/physiology , Animals , Sound Localization/physiology
8.
PLoS One ; 19(5): e0303843, 2024.
Article in English | MEDLINE | ID: mdl-38771860

ABSTRACT

Bayesian models have proven effective in characterizing perception, behavior, and neural encoding across diverse species and systems. The neural implementation of Bayesian inference in the barn owl's sound localization system and behavior has been previously explained by a non-uniform population code model. This model specifies the neural population activity pattern required for a population vector readout to match the optimal Bayesian estimate. While prior analyses focused on trial-averaged comparisons of model predictions with behavior and single-neuron responses, it remains unknown whether this model can accurately approximate Bayesian inference on single trials under varying sensory reliability, a fundamental condition for natural perception and behavior. In this study, we utilized mathematical analysis and simulations to demonstrate that decoding a non-uniform population code via a population vector readout approximates the Bayesian estimate on single trials for varying sensory reliabilities. Our findings provide additional support for the non-uniform population code model as a viable explanation for the barn owl's sound localization pathway and behavior.
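A population vector readout of the kind analyzed above can be sketched as the angle of the rate-weighted vector sum of the neurons' preferred directions. For simplicity this toy uses a uniform grid of preferred azimuths and Gaussian tuning; the paper's point is precisely that a non-uniform population code makes such a readout approximate the Bayesian estimate, so the density and tuning below are illustrative assumptions only.

```python
import math

def population_vector_readout(rates, preferred_deg):
    """Angle of the vector sum of preferred directions weighted by firing rate."""
    x = sum(r * math.cos(math.radians(p)) for r, p in zip(rates, preferred_deg))
    y = sum(r * math.sin(math.radians(p)) for r, p in zip(rates, preferred_deg))
    return math.degrees(math.atan2(y, x))

# Toy population: preferred azimuths every 10 degrees, Gaussian tuning
# around a hypothetical source at -20 degrees azimuth.
preferred = list(range(-90, 91, 10))
true_azimuth, tuning_width = -20.0, 30.0
rates = [math.exp(-0.5 * ((p - true_azimuth) / tuning_width) ** 2) for p in preferred]
print(round(population_vector_readout(rates, preferred), 1))
```

On a single simulated trial, noisy rates would replace the noiseless Gaussian profile, and the question the study addresses is how closely this readout then tracks the Bayesian estimate across reliability levels.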


Subject(s)
Bayes Theorem , Sound Localization , Strigiformes , Animals , Strigiformes/physiology , Sound Localization/physiology , Models, Neurological , Neurons/physiology
9.
Hear Res ; 449: 109036, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38797037

ABSTRACT

Although rats and mice are among the preferred animal models for investigating many characteristics of auditory function, they are rarely used to study an essential aspect of binaural hearing: the ability of animals to localize the sources of low-frequency sounds by detecting the interaural time difference (ITD), that is the difference in the time at which the sound arrives at each ear. In mammals, ITDs are mostly encoded in the medial superior olive (MSO), one of the main nuclei of the superior olivary complex (SOC). Because of their small heads and high frequency hearing range, rats and mice are often considered unable to use ITDs for sound localization. Moreover, their MSO is frequently viewed as too small or insignificant compared to that of mammals that use ITDs to localize sounds, including cats and gerbils. However, recent research has demonstrated remarkable similarities between most morphological and physiological features of mouse MSO neurons and those of MSO neurons of mammals that use ITDs. In this context, we have analyzed the structure and neural afferent and efferent connections of the rat MSO, which had never been studied by injecting neuroanatomical tracers into the nucleus. The rat MSO spans the SOC longitudinally. It is relatively small caudally, but grows rostrally into a well-developed column of stacked bipolar neurons. By placing small, precise injections of the bidirectional tracer biotinylated dextran amine (BDA) into the MSO, we show that this nucleus is innervated mainly by the most ventral and rostral spherical bushy cells of the anteroventral cochlear nucleus of both sides, and by the most ventrolateral principal neurons of the ipsilateral medial nucleus of the trapezoid body. 
The same experiments reveal that the MSO densely innervates the most dorsolateral region of the central nucleus of the inferior colliculus, the central region of the dorsal nucleus of the lateral lemniscus, and the most lateral region of the intermediate nucleus of the lateral lemniscus of its own side. Therefore, the MSO is selectively innervated by, and sends projections to, neurons that process low-frequency sounds. The structural and hodological features of the rat MSO are notably similar to those of the MSO of cats and gerbils. While these similarities raise the question of what functions other than ITD coding the MSO performs, they also suggest that the rat MSO is an appropriate model for future MSO-centered research.


Subject(s)
Auditory Pathways , Axons , Sound Localization , Superior Olivary Complex , Animals , Superior Olivary Complex/physiology , Superior Olivary Complex/anatomy & histology , Auditory Pathways/physiology , Auditory Pathways/anatomy & histology , Axons/physiology , Rats , Male , Dextrans/metabolism , Biotin/analogs & derivatives , Acoustic Stimulation , Efferent Pathways/physiology , Efferent Pathways/anatomy & histology , Olivary Nucleus/physiology , Olivary Nucleus/anatomy & histology , Female , Neuroanatomical Tract-Tracing Techniques , Rats, Wistar
10.
Otol Neurotol ; 45(6): 635-642, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38769110

ABSTRACT

OBJECTIVE: To investigate if cartilage conduction (CC) rerouting devices are noninferior to air-conduction (AC) rerouting devices for single-sided deafness (SSD) patients by measuring objective and subjective performance using speech-in-noise tests that resemble a realistic hearing environment, sound localization tests, and standardized questionnaires. STUDY DESIGN: Prospective, single-subject randomized, crossover study. SETTING: Anechoic room inside a university. PATIENTS: Nine adults between 21 and 58 years of age with severe or profound unilateral sensorineural hearing loss. INTERVENTIONS: Patients' baseline hearing was assessed; they then used both the cartilage conduction contralateral routing of signals device (CC-CROS) and an air-conduction CROS hearing aid (AC-CROS). Patients wore each device for 2 weeks in a randomly assigned order. MAIN OUTCOME MEASURES: Three main outcome measures were 1) speech-in-noise tests, measuring speech reception thresholds; 2) proportion of correct sound localization responses; and 3) scores on the questionnaires, "Abbreviated Profile of Hearing Aid Benefit" (APHAB) and "Speech, Spatial, and Qualities of Hearing Scale" with 12 questions (SSQ-12). RESULTS: Speech reception threshold improved significantly when noise was ambient, and speech was presented from the front or the poor-ear side with both CC-CROS and AC-CROS. When speech was delivered from the better-ear side, AC-CROS significantly improved performance, whereas CC-CROS had no significant effect. Both devices mainly worsened sound localization, whereas the APHAB and SSQ-12 scores showed benefits. CONCLUSION: CC-CROS has noninferior hearing-in-noise performance except when the speech was presented to the better ear under ambient noise. Subjective measures showed that the patients realized the effectiveness of both devices.


Subject(s)
Bone Conduction , Cross-Over Studies , Hearing Aids , Hearing Loss, Sensorineural , Sound Localization , Speech Perception , Humans , Adult , Middle Aged , Male , Female , Sound Localization/physiology , Bone Conduction/physiology , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/rehabilitation , Speech Perception/physiology , Surveys and Questionnaires , Prospective Studies , Hearing Loss, Unilateral/physiopathology , Hearing Loss, Unilateral/rehabilitation , Young Adult , Noise , Treatment Outcome
11.
Hear Res ; 448: 109020, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38763034

ABSTRACT

Combining cochlear implants with binaural acoustic hearing via preserved hearing in the implanted ear(s) is commonly referred to as combined electric and acoustic stimulation (EAS). EAS fittings can provide patients with significant benefit for speech recognition in complex noise, perceived listening difficulty, and horizontal-plane localization as compared to traditional bimodal hearing conditions with contralateral, monaural acoustic hearing. However, EAS benefit varies across patients, and the degree of benefit is not reliably related to the underlying audiogram. Previous research has indicated that EAS benefit for speech recognition in complex listening scenarios and for localization is significantly correlated with patients' sensitivity to binaural cues, namely interaural time differences (ITDs). For pure tones, interaural phase differences (IPDs) and ITDs are two perspectives on the same phenomenon: simple mathematical conversion transforms one into the other, illustrating their inherent interrelation for spatial hearing. However, assessing binaural cue sensitivity is not part of a clinical assessment battery, as psychophysical tasks are time-consuming, require training to reach asymptotic performance, and need specialized programming and software, all of which renders them clinically unfeasible. In this study, we investigated the possibility of objectively measuring binaural cue sensitivity with the acoustic change complex (ACC), elicited by imposing an IPD of varying degrees at the stimulus midpoint. Ten adult listeners with normal hearing were assessed on behavioral and objective measures of binaural cue sensitivity at carrier frequencies of 250 and 1000 Hz. Results suggest that 1) ACC amplitude increases with IPD; 2) ACC-based IPD sensitivity at 250 Hz is significantly correlated with behavioral ITD sensitivity; and 3) participants were more sensitive to IPDs at 250 Hz than at 1000 Hz. Thus, this objective measure of IPD sensitivity may hold clinical application for pre- and postoperative assessment of individuals meeting candidacy indications for cochlear implantation with low-frequency acoustic hearing preservation, as this relatively quick, objective measure may help clinicians identify the patients most likely to benefit from EAS technology.
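The pure-tone IPD/ITD conversion mentioned above is simply ITD = IPD / (360 · f). A minimal sketch, using the study's two carrier frequencies:

```python
def ipd_to_itd_us(ipd_deg, freq_hz):
    """Convert an interaural phase difference (degrees) at a pure-tone
    frequency (Hz) to the equivalent interaural time difference (microseconds)."""
    return ipd_deg * 1e6 / (360.0 * freq_hz)

# The same 90-degree IPD maps to a four-times-larger ITD at 250 Hz
# than at 1000 Hz, one reason the carrier frequency matters here.
print(ipd_to_itd_us(90, 250))   # → 1000.0
print(ipd_to_itd_us(90, 1000))  # → 250.0
```

The inverse conversion (multiply the ITD in seconds by 360 · f) recovers the IPD, which is why sensitivity expressed in either unit reflects the same underlying binaural cue.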


Subject(s)
Acoustic Stimulation , Auditory Threshold , Cochlear Implantation , Cochlear Implants , Cues , Sound Localization , Speech Perception , Humans , Female , Male , Cochlear Implantation/instrumentation , Adult , Middle Aged , Electric Stimulation , Audiometry, Pure-Tone , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Time Factors , Aged , Noise/adverse effects , Perceptual Masking , Young Adult , Hearing , Psychoacoustics
12.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717201

ABSTRACT

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. Speech reception thresholds were measured in three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, utilizing monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane, or front-back and up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and can even complement spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models have difficulty accurately predicting intelligibility in the scenarios explored in this study.


Subject(s)
Cues , Perceptual Masking , Sound Localization , Speech Intelligibility , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Speech Acoustics , Speech Reception Threshold Test , Noise
13.
Curr Biol ; 34(10): 2162-2174.e5, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38718798

ABSTRACT

Humans make use of small differences in the timing of sounds at the two ears-interaural time differences (ITDs)-to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD-within and beyond auditory cortical regions-and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
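One common formalization of a sound-frequency-dependent range of best ITDs is the "pi-limit": best delays confined within half a period of a neuron's best frequency, beyond which interaural phase becomes ambiguous. Whether this exact bound is the constraint found in the study above is an assumption, but it illustrates how an explicitly represented ITD range would shrink with frequency:

```python
def pi_limit_us(best_freq_hz):
    """Half-period bound on best ITDs (the 'pi-limit'), in microseconds."""
    return 5e5 / best_freq_hz  # 0.5 / f seconds, expressed in microseconds

# The bound on usable ITD detectors narrows as best frequency rises.
for f_hz in (250, 500, 1000, 2000):
    print(f_hz, pi_limit_us(f_hz))
```

For a given head size, only part of each bound is physically reachable, which is one way a representation could scale jointly with head size and sound frequency.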


Subject(s)
Auditory Cortex , Cues , Sound Localization , Auditory Cortex/physiology , Humans , Male , Sound Localization/physiology , Animals , Female , Adult , Electroencephalography , Macaca mulatta/physiology , Magnetoencephalography , Acoustic Stimulation , Young Adult , Auditory Perception/physiology
14.
Hear Res ; 447: 109025, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733712

ABSTRACT

Cortical acetylcholine (ACh) release has been linked to various cognitive functions, including perceptual learning. We have previously shown that cortical cholinergic innervation is necessary for accurate sound localization in ferrets, as well as for their ability to adapt with training to altered spatial cues. To explore whether these behavioral deficits are associated with changes in the response properties of cortical neurons, we recorded neural activity in the primary auditory cortex (A1) of anesthetized ferrets in which cholinergic inputs had been reduced by making bilateral injections of the immunotoxin ME20.4-SAP in the nucleus basalis (NB) prior to training the animals. The pattern of spontaneous activity of A1 units recorded in the ferrets with cholinergic lesions (NB ACh-) was similar to that in controls, although the proportion of burst-type units was significantly lower. Depletion of ACh also resulted in more synchronous activity in A1. No changes in thresholds, frequency tuning or in the distribution of characteristic frequencies were found in these animals. When tested with normal acoustic inputs, the spatial sensitivity of A1 neurons in the NB ACh- ferrets and the distribution of their preferred interaural level differences also closely resembled those found in control animals, indicating that these properties had not been altered by sound localization training with one ear occluded. Simulating the animals' previous experience with a virtual earplug in one ear reduced the contralateral preference of A1 units in both groups, but caused azimuth sensitivity to change in slightly different ways, which may reflect the modest adaptation observed in the NB ACh- group. These results show that while ACh is required for behavioral adaptation to altered spatial cues, it is not required for maintenance of the spectral and spatial response properties of A1 neurons.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Basal Forebrain , Ferrets , Animals , Auditory Cortex/metabolism , Auditory Cortex/physiopathology , Basal Forebrain/metabolism , Sound Localization , Acetylcholine/metabolism , Male , Cholinergic Neurons/metabolism , Cholinergic Neurons/pathology , Auditory Pathways/physiopathology , Auditory Pathways/metabolism , Female , Immunotoxins/toxicity , Basal Nucleus of Meynert/metabolism , Basal Nucleus of Meynert/physiopathology , Basal Nucleus of Meynert/pathology , Neurons/metabolism , Auditory Threshold , Adaptation, Physiological , Behavior, Animal
15.
J Acoust Soc Am ; 155(4): 2460-2469, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38578178

ABSTRACT

Head-worn devices (HWDs) interfere with the natural transmission of sound from the source to the ears of the listener, worsening their localization abilities. The localization errors introduced by HWDs have been mostly studied in static scenarios, but these errors are reduced if head movements are allowed. We studied the effect of 12 HWDs on an auditory-cued visual search task, where head movements were not restricted. In this task, a visual target had to be identified in a three-dimensional space with the help of an acoustic stimulus emitted from the same location as the visual target. The results showed an increase in the search time caused by the HWDs. Acoustic measurements of a dummy head wearing the studied HWDs showed evidence of impaired localization cues, which were used to estimate the perceived localization errors using computational auditory models of static localization. These models were able to explain the search-time differences in the perceptual task, showing the influence of quadrant errors in the auditory-aided visual search task. These results indicate that HWDs have an impact on sound-source localization even when head movements are possible, which may compromise the safety and the quality of experience of the wearer.


Subject(s)
Hearing Aids , Sound Localization , Acoustic Stimulation , Head Movements
16.
J Neurosci ; 44(21)2024 May 22.
Article in English | MEDLINE | ID: mdl-38664010

ABSTRACT

The natural environment challenges the brain to prioritize the processing of salient stimuli. The barn owl, a sound localization specialist, exhibits a circuit called the midbrain stimulus selection network, dedicated to representing locations of the most salient stimulus in circumstances of concurrent stimuli. Previous competition studies using unimodal (visual) and bimodal (visual and auditory) stimuli have shown that relative strength is encoded in spike response rates. However, open questions remain concerning the effects of auditory-auditory competition on coding. To this end, we present diverse auditory competitors (concurrent flat noise and amplitude-modulated noise) and record neural responses of awake barn owls of both sexes in subsequent midbrain space maps, the external nucleus of the inferior colliculus (ICx) and optic tectum (OT). While both ICx and OT exhibit a topographic map of auditory space, OT also integrates visual input and is part of the global-inhibitory midbrain stimulus selection network. Through comparative investigation of these regions, we show that while increasing strength of a competitor sound decreases spike response rates of spatially distant neurons in both regions, relative strength determines spike train synchrony of nearby units only in the OT. Furthermore, changes in synchrony by sound competition in the OT are correlated with gamma-range oscillations of local field potentials associated with input from the midbrain stimulus selection network. The results of this investigation suggest that modulations in spiking synchrony between units by gamma oscillations are an emergent coding scheme representing relative strength of concurrent stimuli, which may have relevant implications for downstream readout.


Subject(s)
Acoustic Stimulation , Inferior Colliculi , Sound Localization , Strigiformes , Animals , Strigiformes/physiology , Female , Male , Acoustic Stimulation/methods , Sound Localization/physiology , Inferior Colliculi/physiology , Mesencephalon/physiology , Auditory Perception/physiology , Brain Mapping , Auditory Pathways/physiology , Neurons/physiology , Action Potentials/physiology
17.
Behav Res Methods ; 56(4): 3814-3830, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38684625

ABSTRACT

The ability to detect the absolute location of sensory stimuli can be quantified with either error-based metrics derived from single-trial localization errors or regression-based metrics derived from a linear regression of localization responses on the true stimulus locations. Here we tested the agreement between these two approaches in estimating accuracy and precision in a large sample of 188 subjects who localized auditory stimuli from different azimuthal locations. A subsample of 57 subjects was subsequently exposed to audiovisual stimuli with a consistent spatial disparity before performing the sound localization test again, allowing us to additionally test which of the different metrics best assessed correlations between the amount of crossmodal spatial recalibration and baseline localization performance. First, our findings support a distinction between accuracy and precision. Localization accuracy was mainly reflected in the overall spatial bias and was moderately correlated with precision metrics. However, in our data, the variability of single-trial localization errors (variable error in error-based metrics) and the amount by which the eccentricity of target locations was overestimated (slope in regression-based metrics) were highly correlated, suggesting that intercorrelations between individual metrics need to be carefully considered in spatial perception studies. Second, exposure to spatially discrepant audiovisual stimuli resulted in a shift in bias toward the side of the visual stimuli (ventriloquism aftereffect) but did not affect localization precision. The size of the aftereffect shift in bias was at least partly explainable by nonspecific test repetition effects, highlighting the need to account for inter-individual baseline differences in studies of spatial learning.
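Both families of metrics described above can be computed from the same set of trials. A minimal sketch (function and variable names are my own, not the paper's analysis code):

```python
import numpy as np

def localization_metrics(targets_deg, responses_deg):
    """Error-based and regression-based localization metrics from
    per-trial target azimuths and localization responses."""
    targets = np.asarray(targets_deg, dtype=float)
    responses = np.asarray(responses_deg, dtype=float)
    errors = responses - targets

    # Error-based: constant error (signed bias) and variable error
    # (trial-to-trial precision; sample SD, hence ddof=1).
    constant_error = errors.mean()
    variable_error = errors.std(ddof=1)

    # Regression-based: regress responses on true locations.
    # A slope > 1 means target eccentricity is overestimated;
    # the intercept reflects overall spatial bias.
    slope, intercept = np.polyfit(targets, responses, 1)

    return {"constant_error": constant_error,
            "variable_error": variable_error,
            "slope": float(slope),
            "intercept": float(intercept)}
```

In this toy formulation, a purely linear response pattern makes variable error and slope move together, which illustrates why the two metric families can be highly intercorrelated in practice.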


Subject(s)
Space Perception , Humans , Space Perception/physiology , Female , Male , Adult , Sound Localization , Photic Stimulation , Visual Perception/physiology , Young Adult , Acoustic Stimulation/methods , Auditory Perception/physiology
18.
Am J Audiol ; 33(2): 476-491, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38668699

ABSTRACT

PURPOSE: This project addressed the uses of a loudspeaker array for audiometric measurements. It sought to evaluate a prototype compact array in terms of the reliability of test results across sound booths. METHOD: A seven-loudspeaker array was developed to deliver sounds from -60° to +60° on an arc with a radius of 0.5 m. The system was equipped with a head position sensing system to maintain the listener's head near the optimal test position. Three array systems were distributed to each of the two test sites for within-subject assessments of booth equivalence on tests of sound localization, speech reception in noise, and threshold detection. A total of 36 subjects participated, 18 at each test site. RESULTS: Results showed excellent interbooth consistency on tests of sound localization using speech and noise signals, including conditions in which one or both ears were covered with a muff. Booth consistency was also excellent on sound field threshold measurements for detecting quasi-diffuse noise bands. Nonequivalence was observed in some cases of speech-in-noise tests, particularly with a small one-person booth. Acoustic analyses of in situ loudspeaker responses indicated that some of the nonequivalent comparisons on speech-in-noise tests could be traced to the effects of reflections. CONCLUSIONS: Overall, the results demonstrate the utility and reliability of a compact array for the assessment of localization ability, speech reception in noise, and sound field thresholds. However, the results indicate that researchers and clinicians should be aware of the reflection effects that can influence the results of sound field tests in which signal and noise levels from separate loudspeakers are critical.


Subject(s)
Sound Localization , Humans , Male , Adult , Female , Reproducibility of Results , Equipment Design , Young Adult , Noise , Audiometry/methods , Audiometry/instrumentation , Auditory Threshold , Amplifiers, Electronic , Speech Perception , Middle Aged
19.
Am J Audiol ; 33(2): 442-454, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38557158

ABSTRACT

PURPOSE: This study examined children's ability to perceive speech from multiple locations on the horizontal plane. Children with hearing loss were compared to normal-hearing peers while using amplification with and without advanced noise management. METHOD: Participants were 21 children with normal hearing (9-15 years) and 12 children with moderate symmetrical hearing loss (11-15 years). Word recognition, nonword detection, and word recall were assessed. Stimuli were presented randomly from multiple discrete locations in multitalker noise. Children with hearing loss were fit with devices having separate omnidirectional and noise management programs. The noise management feature is designed to preserve audibility in noise by rapidly analyzing input from all locations and reducing the noise management when speech is detected from locations around the hearing aid user. RESULTS: Significant effects of left/right and front/back lateralization occurred as well as effects of hearing loss and hearing aid noise management. Children with normal hearing experienced a left-side advantage for word recognition and a right-side advantage for nonword detection. Children with hearing loss demonstrated poorer performance overall on all tasks with better word recognition from the back, and word recall from the right, in the omnidirectional condition. With noise management, performance improved from the front compared to the back for all three tasks and from the right for word recognition and word recall. CONCLUSIONS: The shape of children's local speech intelligibility on the horizontal plane is not omnidirectional. It is task dependent and shaped further by hearing loss and hearing aid signal processing. Front/back shifts in children with hearing loss are consistent with the behavior of hearing aid noise management, while the right-side biases observed in both groups are consistent with the effects of specialized speech processing in the left hemisphere of the brain.


Subject(s)
Hearing Aids , Noise , Speech Intelligibility , Speech Perception , Humans , Child , Adolescent , Male , Female , Case-Control Studies , Sound Localization , Hearing Loss, Sensorineural/rehabilitation , Hearing Loss, Sensorineural/physiopathology
20.
PeerJ ; 12: e17104, 2024.
Article in English | MEDLINE | ID: mdl-38680894

ABSTRACT

Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet, most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the time difference between the arrival of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective utilization of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through envelope or interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer compared to NH individuals, and it degrades further at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine structure ITD (ITDFS) or envelope ITD (ITDENV). Multiple EEG responses were analyzed, including the subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller or absent compared to those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although the peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, but these were smaller than those evoked by SAM tones with lower carrier frequencies. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with ASSR amplitudes ordered 40 > 160 > 80 > 320 Hz for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
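The ITDFS versus ITDENV manipulation can be sketched for the SAM-tone case: an interaural delay is applied either to the carrier (fine structure) or to the modulator (envelope). This is an illustrative sketch under simplified assumptions, not the study's stimulus-generation code:

```python
import numpy as np

def sam_tone_pair(fc, fm, dur, fs, itd_fs=0.0, itd_env=0.0):
    """Left/right sinusoidally amplitude-modulated (SAM) tones.

    itd_fs  delays the carrier of the right channel (fine-structure ITD);
    itd_env delays the modulator of the right channel (envelope ITD).
    fc/fm are the carrier/modulation frequencies in Hz, dur the duration
    in seconds, fs the sampling rate in Hz.
    """
    t = np.arange(int(dur * fs)) / fs

    def channel(carrier_delay, env_delay):
        env = 0.5 * (1.0 + np.sin(2 * np.pi * fm * (t - env_delay)))
        return env * np.sin(2 * np.pi * fc * (t - carrier_delay))

    return channel(0.0, 0.0), channel(itd_fs, itd_env)
```

Delaying only the carrier leaves the interaural envelope aligned, so any evoked response to the change isolates fine-structure ITD sensitivity; delaying only the modulator does the converse.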


Subject(s)
Cochlear Implants , Cues , Electroencephalography , Humans , Electroencephalography/methods , Acoustic Stimulation/methods , Sound Localization/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Time Factors