1 - 20 of 48,022
1.
Brain Behav ; 14(5): e3520, 2024 May.
Article En | MEDLINE | ID: mdl-38715412

OBJECTIVE: In previous animal studies, sound enhancement reduced tinnitus perception in cases associated with hearing loss. The aim of this study was to investigate the efficacy of sound enrichment therapy in tinnitus treatment by developing a protocol that includes criteria for the psychoacoustic characteristics of tinnitus to determine whether the etiology is related to hearing loss. METHODS: A total of 96 patients with chronic tinnitus were included in the study. Fifty-two patients were assigned to the study group and 44 to the placebo group, with residual inhibition (RI) outcomes and tinnitus pitches taken into account. Both groups received sound enrichment treatment with different spectrum contents. Tinnitus Handicap Inventory (THI), visual analog scale (VAS), minimum masking level (MML), and tinnitus loudness level (TLL) results were compared before treatment and at 1, 3, and 6 months after treatment. RESULTS: There was a statistically significant difference between the groups in THI, VAS, MML, and TLL scores at every post-treatment time point from the first month onward (p < .01). In the study group, there was a statistically significant decrease in THI, VAS, MML, and TLL scores in the first month (p < .01). This decrease remained statistically significant at the third post-treatment month for THI (p < .05) and at all post-treatment months for VAS-1 (tinnitus severity, p < .05) and VAS-2 (tinnitus discomfort, p < .05). CONCLUSION: In clinical practice, after excluding other factors related to tinnitus etiology, sound enrichment treatment can be effective within a relatively short period of 1 month in tinnitus cases where RI is positive and the tinnitus pitch is matched to a hearing loss of 45-55 dB HL.


Hearing Loss , Tinnitus , Tinnitus/therapy , Humans , Male , Female , Middle Aged , Adult , Hearing Loss/rehabilitation , Hearing Loss/therapy , Treatment Outcome , Aged , Acoustic Stimulation/methods , Sound , Psychoacoustics
2.
J Acoust Soc Am ; 155(5): 3101-3117, 2024 May 01.
Article En | MEDLINE | ID: mdl-38722101

Cochlear implant (CI) users often report being unsatisfied with music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that normal-hearing listeners give musical stimuli higher preference ratings when concurrent vibrotactile stimulation is congruent with the corresponding auditory signal in intensity and timing than when it is incongruent. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals in intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100, based on preference. Almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no difference in preference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile enhancement of music enjoyment could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.
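
For context, one simple way to build an intensity-congruent audio-to-tactile map of the kind described above is to drive a vibrotactile actuator with the low-pass-filtered amplitude envelope of the audio. The sketch below is illustrative only (it is not the mapping used in the study), and the function name and parameter values are assumptions.

```python
# Illustrative sketch (not the study's maps): derive a tactile intensity
# signal from an audio waveform by extracting its amplitude envelope, which
# keeps the tactile channel congruent with the audio in intensity and timing.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def audio_to_tactile_envelope(audio, fs, cutoff_hz=50.0):
    """Return a low-pass-filtered amplitude envelope usable as a
    vibration-amplitude control signal (0..1)."""
    envelope = np.abs(hilbert(audio))              # instantaneous amplitude
    b, a = butter(2, cutoff_hz / (fs / 2))         # smooth to tactile rates
    smooth = filtfilt(b, a, envelope)
    return smooth / (np.max(smooth) + 1e-12)       # normalize for an actuator

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    melody = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
    tactile = audio_to_tactile_envelope(melody, fs)
    print(tactile.shape, round(float(tactile.max()), 2))
```

Because the envelope preserves note onsets, timing congruence follows from intensity congruence in this simplified mapping.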


Acoustic Stimulation , Auditory Perception , Cochlear Implants , Music , Humans , Female , Male , Adult , Middle Aged , Aged , Auditory Perception/physiology , Young Adult , Patient Preference , Cochlear Implantation/instrumentation , Touch Perception/physiology , Vibration , Touch
3.
Cogn Res Princ Implic ; 9(1): 29, 2024 05 12.
Article En | MEDLINE | ID: mdl-38735013

Auditory stimuli that are relevant to a listener have the potential to capture focal attention even when unattended, the listener's own name being a particularly effective stimulus. We report two experiments to test the attention-capturing potential of the listener's own name in normal speech and time-compressed speech. In Experiment 1, 39 participants were tested with a visual word categorization task with uncompressed spoken names as background auditory distractors. Participants' word categorization performance was slower when hearing their own name rather than other names, and in a final test, they were faster at detecting their own name than other names. Experiment 2 used the same task paradigm, but the auditory distractors were time-compressed names. Three compression levels were tested with 25 participants in each condition. Participants' word categorization performance was again slower when hearing their own name than when hearing other names; the slowing was strongest with slight compression and weakest with intense compression. Personally relevant time-compressed speech has the potential to capture attention, but the degree of capture depends on the level of compression. Attention capture by time-compressed speech has practical significance and provides partial evidence for the duplex-mechanism account of auditory distraction.


Attention , Names , Speech Perception , Humans , Attention/physiology , Female , Male , Speech Perception/physiology , Adult , Young Adult , Speech/physiology , Reaction Time/physiology , Acoustic Stimulation
4.
J Acoust Soc Am ; 155(5): 3254-3266, 2024 May 01.
Article En | MEDLINE | ID: mdl-38742964

Testudines are a highly threatened group facing an array of stressors, including alteration of their sensory environment. Underwater noise pollution has the potential to induce hearing loss and disrupt detection of biologically important acoustic cues and signals. To examine the conditions that induce temporary threshold shifts (TTS) in hearing in the freshwater Eastern painted turtle (Chrysemys picta picta), three individuals were exposed to band limited continuous white noise (50-1000 Hz) of varying durations and amplitudes (sound exposure levels ranged from 151 to 171 dB re 1 µPa² s). Control and post-exposure auditory thresholds were measured and compared at 400 and 600 Hz using auditory evoked potential methods. TTS occurred in all individuals at both test frequencies, with shifts of 6.1-41.4 dB. While the numbers of TTS occurrences were equal between frequencies, greater shifts were observed at 600 Hz, a frequency of higher auditory sensitivity, compared to 400 Hz. The onset of TTS occurred at 154 dB re 1 µPa² s for 600 Hz, compared to 158 dB re 1 µPa² s at 400 Hz. The 400-Hz onset and patterns of TTS growth and recovery were similar to those observed in previously studied Trachemys scripta elegans, suggesting TTS may be comparable across Emydidae species.
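
For readers unfamiliar with the exposure metric, sound exposure level (SEL) combines level and duration into a single dose measure. A minimal worked example, with made-up numbers rather than values from the study, is shown below.

```python
# Hedged sketch of the standard relation between a constant noise level and
# its sound exposure level (SEL, dB re 1 uPa^2*s): SEL = SPL_rms + 10*log10(T),
# with T in seconds. The specific levels are illustrative only.
import math

def sel_from_spl(spl_rms_db, duration_s):
    """Sound exposure level for a constant-level exposure of given duration."""
    return spl_rms_db + 10.0 * math.log10(duration_s)

# Example: a 158 dB re 1 uPa (rms) noise band played for 10 s
print(round(sel_from_spl(158.0, 10.0), 1))   # -> 168.0 dB re 1 uPa^2*s
```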


Acoustic Stimulation , Auditory Threshold , Turtles , Animals , Turtles/physiology , Time Factors , Noise/adverse effects , Evoked Potentials, Auditory/physiology , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Noise-Induced/etiology , Male , Female , Hearing/physiology
5.
Trends Hear ; 28: 23312165241246596, 2024.
Article En | MEDLINE | ID: mdl-38738341

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
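
As a rough illustration of the TRF approach described above, the sketch below estimates a linear temporal response function by ridge regression on a lag-expanded, half-wave-rectified stimulus feature. It is a simplified stand-in for the authors' pipeline (which additionally used an auditory nerve model), and all variable names and parameter values are assumptions.

```python
# Minimal ridge-regression TRF sketch: the EEG is modeled as the convolution
# of a rectified stimulus feature with an unknown kernel (the TRF).
import numpy as np

def estimate_trf(feature, eeg, fs, tmin, tmax, lam=1e3):
    """Return lags (s) and TRF weights mapping the stimulus feature to EEG."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = np.column_stack([np.roll(feature, lag) for lag in lags])   # lagged copies
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

if __name__ == "__main__":
    fs, rng = 1000, np.random.default_rng(0)
    speech = rng.standard_normal(20 * fs)
    feature = np.maximum(speech, 0.0)                 # half-wave rectification
    lags_true = np.arange(0, 0.03, 1 / fs)
    kernel = np.exp(-((lags_true - 0.005) ** 2) / (2 * 0.002 ** 2))  # peak at 5 ms
    eeg = np.convolve(feature, kernel)[: len(feature)]
    eeg += 0.5 * rng.standard_normal(len(eeg))        # measurement noise
    lags, trf = estimate_trf(feature, eeg, fs, 0.0, 0.03)
    print(f"estimated peak latency: {lags[np.argmax(trf)] * 1000:.0f} ms")
```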


Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory, Brain Stem , Speech Perception , Humans , Evoked Potentials, Auditory, Brain Stem/physiology , Male , Female , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Young Adult , Auditory Threshold/physiology , Time Factors , Cochlear Nerve/physiology , Healthy Volunteers
6.
J Acoust Soc Am ; 155(5): 3183-3194, 2024 May 01.
Article En | MEDLINE | ID: mdl-38738939

Medial olivocochlear (MOC) efferents modulate outer hair cell motility through specialized nicotinic acetylcholine receptors to support encoding of signals in noise. Transgenic mice lacking the alpha9 subunits of these receptors (α9KOs) have normal hearing in quiet and noise, but lack classic cochlear suppression effects and show abnormal temporal, spectral, and spatial processing. Mice deficient for both the alpha9 and alpha10 receptor subunits (α9α10KOs) may exhibit more severe MOC-related phenotypes. Like α9KOs, α9α10KOs have normal auditory brainstem response (ABR) thresholds and weak MOC reflexes. Here, we further characterized auditory function in α9α10KO mice. Wild-type (WT) and α9α10KO mice had similar ABR thresholds and acoustic startle response amplitudes in quiet and noise, and similar frequency and intensity difference sensitivity. α9α10KO mice had larger ABR Wave I amplitudes than WTs in quiet and noise. Other ABR metrics of hearing-in-noise function yielded conflicting findings regarding α9α10KO susceptibility to masking effects. α9α10KO mice also had larger startle amplitudes in tone backgrounds than WTs. Overall, α9α10KO mice had grossly normal auditory function in quiet and noise, although their larger ABR amplitudes and hyperreactive startles suggest some auditory processing abnormalities. These findings contribute to the growing literature showing mixed effects of MOC dysfunction on hearing.


Acoustic Stimulation , Auditory Threshold , Evoked Potentials, Auditory, Brain Stem , Mice, Knockout , Noise , Receptors, Nicotinic , Reflex, Startle , Animals , Noise/adverse effects , Receptors, Nicotinic/genetics , Receptors, Nicotinic/deficiency , Perceptual Masking , Behavior, Animal , Mice , Mice, Inbred C57BL , Cochlea/physiology , Cochlea/physiopathology , Male , Phenotype , Olivary Nucleus/physiology , Auditory Pathways/physiology , Auditory Pathways/physiopathology , Female , Auditory Perception/physiology , Hearing
7.
Sci Rep ; 14(1): 10422, 2024 05 07.
Article En | MEDLINE | ID: mdl-38710727

Anticipating positive outcomes is a core cognitive function in the process of reward prediction. However, no neurophysiological method objectively assesses reward prediction in basic medical research. In the present study, we established a physiological paradigm using cortical direct current (DC) potential responses in rats to assess reward prediction. This paradigm consisted of five daily 1-h sessions with two tones, wherein the rewarded tone was followed by electrical stimulation of the medial forebrain bundle (MFB) scheduled 1000 ms later, whereas the unrewarded tone was not. On day 1, both tones induced a negative DC shift immediately after auditory responses, persisting up to MFB stimulation. This negative shift progressively increased and peaked on day 4. Starting from day 3, the negative shift from 600 to 1000 ms was significantly larger following the rewarded tone than following the unrewarded tone. This negative DC shift was particularly prominent in the frontal cortex, suggesting its crucial role in discriminative reward prediction. During the extinction sessions, the shift diminished significantly on extinction day 1. These findings suggest that cortical DC potential is related to reward prediction and could be a valuable tool for evaluating animal models of depression, providing a testing system for anhedonia.


Extinction, Psychological , Reward , Animals , Rats , Male , Extinction, Psychological/physiology , Electric Stimulation , Acoustic Stimulation , Medial Forebrain Bundle/physiology , Rats, Sprague-Dawley
8.
Multisens Res ; 37(2): 89-124, 2024 Feb 13.
Article En | MEDLINE | ID: mdl-38714311

Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including improvements in attentional tasks. However, there is little evidence indicating that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes - a fundamental characteristic of natural, everyday life environments. The present study addressed whether video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of N = 130. Overall, the data replicated previous findings of search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that generalization of the advantage of action video game experience to realistic, crossmodal situations should be made with caution, taking gender-related issues into account.


Attention , Video Games , Visual Perception , Humans , Male , Female , Visual Perception/physiology , Young Adult , Adult , Attention/physiology , Auditory Perception/physiology , Photic Stimulation , Adolescent , Reaction Time/physiology , Cues , Acoustic Stimulation
9.
Multisens Res ; 37(2): 143-162, 2024 Apr 30.
Article En | MEDLINE | ID: mdl-38714315

A vital heuristic used when judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), simultaneity judgement (SJ) and simple reaction-time (RT) task and responded to audio-visual trials that were preceded by other audio-visual trials with either a bright or dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ task but not in the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.
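
For illustration, the point of subjective simultaneity (PSS) in a TOJ task is typically estimated by fitting a cumulative Gaussian to the response proportions; the sketch below shows the idea on toy data (the SOAs, proportions, and parameter names are hypothetical, not taken from the study).

```python
# Hedged illustration (not the study's analysis code): estimate the PSS from
# temporal-order-judgement data by fitting a cumulative Gaussian to the
# proportion of "visual first" responses across stimulus onset asynchronies.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, jnd):
    # Probability of responding "visual first" as a function of SOA (ms),
    # where positive SOA means the visual stimulus led the auditory one.
    return norm.cdf(soa, loc=pss, scale=jnd)

# Toy data: SOAs in ms and observed proportions of "visual first" responses.
soas = np.array([-200, -100, -50, 0, 50, 100, 200], float)
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98])

(pss, jnd), _ = curve_fit(psychometric, soas, p_visual_first, p0=(0.0, 50.0))
print(f"PSS = {pss:.1f} ms, JND (sigma) = {jnd:.1f} ms")
```

A shift of the fitted PSS between bright-preceded and dim-preceded trials is what the TOJ result above refers to.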


Acoustic Stimulation , Auditory Perception , Photic Stimulation , Reaction Time , Visual Perception , Humans , Visual Perception/physiology , Auditory Perception/physiology , Male , Female , Reaction Time/physiology , Adult , Young Adult , Judgment/physiology
10.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38715408

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could serve as an indicator of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which comprised a quiet condition and four noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, and -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the four noisy conditions, but not under the quiet condition. Brain activation in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in sensory-motor mapping of sound, especially in noisy conditions, could be a more sensitive measure for age prediction than overt behavioral measures.
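
A minimal sketch of the brain-age predictive modeling idea, assuming region-wise activation values can be arranged as a subjects-by-regions feature matrix, is shown below; it uses cross-validated ridge regression on synthetic data and is not the authors' exact model.

```python
# Illustrative brain-age prediction sketch with assumed dimensions and
# synthetic data; the real study used fNIRS activation per brain region.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 93, 46                 # hypothetical feature layout
age = rng.uniform(20, 70, n_subjects)
activation = rng.standard_normal((n_subjects, n_regions)) + 0.02 * age[:, None]

predicted_age = cross_val_predict(Ridge(alpha=1.0), activation, age, cv=10)
r, _ = pearsonr(age, predicted_age)
mae = np.mean(np.abs(age - predicted_age))
print(f"r = {r:.2f}, MAE = {mae:.1f} years")
```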


Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
11.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article En | MEDLINE | ID: mdl-38693186

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.


Attention , Eye Movements , Magnetoencephalography , Speech Perception , Speech , Humans , Attention/physiology , Eye Movements/physiology , Male , Female , Adult , Young Adult , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Brain/physiology , Eye-Tracking Technology
12.
Trends Hear ; 28: 23312165241253653, 2024.
Article En | MEDLINE | ID: mdl-38715401

This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate to severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen older adults for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT5-2), and between five-digit and three-digit sequences (SRT5-3), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and Corsi Blocks Tapping Test, which evaluate WM and attention capacity. SRT5-2 and SRT5-3 demonstrated significant correlations with the three cognitive function tests (rs ranging from -.705 to -.528). Furthermore, SRT5-2 and SRT5-3 were significantly higher in participants who failed the MoCA-BC screening compared to those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish its effectiveness and efficacy.
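
To make the derived measures concrete, the sketch below computes an SRT5-2-style difference score and its Spearman correlation with a cognitive score on synthetic data; the numbers and variable names are illustrative assumptions, not the study's data.

```python
# Hedged sketch: derive the 5-digit minus 2-digit SRT cost and correlate it
# with a cognitive score using Spearman's rho (illustrative data only).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 81
srt2 = rng.normal(-8.0, 2.0, n)          # SRT for 2-digit sequences (dB SNR)
srt5 = srt2 + rng.normal(4.0, 1.5, n)    # 5-digit sequences are harder
moca = rng.normal(24, 3, n) - 0.8 * (srt5 - srt2)   # toy cognitive scores

srt_5_2 = srt5 - srt2                    # memory-load cost in dB SNR
rho, p = spearmanr(srt_5_2, moca)
print(f"rho = {rho:.2f}, p = {p:.3g}")
```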


Cognition , Cognitive Dysfunction , Hearing Aids , Memory, Short-Term , Humans , Aged , Female , Male , Middle Aged , Aged, 80 and over , Memory, Short-Term/physiology , Cognitive Dysfunction/diagnosis , Noise/adverse effects , Speech Perception/physiology , Speech Reception Threshold Test , Age Factors , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Hearing Loss/rehabilitation , Hearing Loss/diagnosis , Hearing Loss/psychology , Mental Status and Dementia Tests , Memory , Acoustic Stimulation , Predictive Value of Tests , Correction of Hearing Impairment/instrumentation , Auditory Threshold
13.
Article En | MEDLINE | ID: mdl-38691431

In the hippocampus, synaptic plasticity and rhythmic oscillations reflect the cytological basis and the intermediate level of cognition, respectively. Transcranial ultrasound stimulation (TUS) has demonstrated the ability to elicit changes in neural responses. However, the modulatory effect of TUS on synaptic plasticity and rhythmic oscillations has been insufficient in studies to date, which may be attributed to the fact that TUS acts mainly through mechanical forces. To enhance the modulatory effect on synaptic plasticity and rhythmic oscillations, transcranial magneto-acoustic stimulation (TMAS), which induces a coupled electric field together with the ultrasound field of TUS, was applied. The modulatory effects of TMAS and TUS at a pulse repetition frequency of 100 Hz were compared. TMAS/TUS was delivered to C57 mice for 7 days at two ultrasound intensities (3 W/cm² and 5 W/cm²). Behavioral tests, long-term potentiation (LTP), and in vivo local field potentials were used to evaluate the modulatory effects of TUS/TMAS on cognition, synaptic plasticity, and rhythmic oscillations. Protein expression measured by western blotting was used to investigate the mechanisms underlying these beneficial effects. At 5 W/cm², TMAS-induced LTP was 113.4% of that in the sham group and 110.5% of that in the TUS group. Moreover, the relative power of high-gamma oscillations (50-100 Hz) in the TMAS group (1.060 ± 0.155%) was markedly higher than that in the TUS group (0.560 ± 0.114%) and the sham group (0.570 ± 0.088%). TMAS significantly enhanced the synchronization of theta and gamma oscillations as well as theta-gamma cross-frequency coupling, whereas TUS did not show comparable enhancements. TMAS therefore provides an enhanced means of modulating synaptic plasticity and rhythmic oscillations in the hippocampus.
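
As an illustration of how theta-gamma cross-frequency coupling of the kind reported here is commonly quantified, the sketch below computes a mean-vector-length phase-amplitude coupling index from a simulated local field potential. It is a generic method sketch, not the authors' analysis, and the band limits and parameters are assumptions.

```python
# Mean-vector-length phase-amplitude coupling (PAC) between theta phase and
# gamma amplitude, computed on a toy LFP with built-in coupling.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def bandpass(x, fs, lo, hi, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(lfp, fs, theta=(4, 8), gamma=(50, 100)):
    """Normalized mean vector length of gamma amplitude over theta phase."""
    theta_phase = np.angle(hilbert(bandpass(lfp, fs, *theta)))
    gamma_amp = np.abs(hilbert(bandpass(lfp, fs, *gamma)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)

if __name__ == "__main__":
    fs, dur = 1000, 20
    t = np.arange(0, dur, 1 / fs)
    theta_wave = np.sin(2 * np.pi * 6 * t)
    coupled = (1 + theta_wave) * np.sin(2 * np.pi * 70 * t)   # gamma locked to theta
    lfp = theta_wave + 0.3 * coupled
    lfp += 0.5 * np.random.default_rng(0).standard_normal(len(t))
    print(round(pac_mvl(lfp, fs), 3))   # clearly above zero for coupled data
```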


Acoustic Stimulation , Hippocampus , Mice, Inbred C57BL , Transcranial Magnetic Stimulation , Animals , Mice , Transcranial Magnetic Stimulation/methods , Male , Hippocampus/physiology , Neuronal Plasticity/physiology , Cognition/physiology , Long-Term Potentiation/physiology , Ultrasonic Waves , Theta Rhythm/physiology
14.
Sci Rep ; 14(1): 10518, 2024 05 08.
Article En | MEDLINE | ID: mdl-38714827

Previous work assessing the effect of additive noise on the postural control system has found a positive effect of additive white noise on postural dynamics. This study covers two separate experiments that were run sequentially to better understand how the structure of the additive noise signal affects postural dynamics, while also furthering our knowledge of how the intensity of the auditory noise stimulation may elicit this phenomenon. Across the two experiments, we introduced three auditory noise stimuli of varying structure (white, pink, and brown noise). Experiment 1 presented the stimuli at 35 dB, while Experiment 2 presented them at 75 dB. Our findings demonstrate a decrease in the variability of the postural control system regardless of the structure of the noise signal presented, but only for high-intensity auditory stimulation.
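
For reference, white, pink, and brown noise differ only in how power falls off with frequency (flat, 1/f, and 1/f² respectively). The sketch below generates the three noise structures by spectral shaping; it is illustrative and not the study's stimulus-generation code.

```python
# Generate colored noise by shaping the spectrum of white noise.
# exponent = 0 -> white, 1 -> pink, 2 -> brown(ish) noise.
import numpy as np

def colored_noise(n_samples, fs, exponent, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, 1 / fs)
    freqs[0] = freqs[1]                          # avoid division by zero at DC
    spectrum /= freqs ** (exponent / 2.0)        # power falls as 1/f^exponent
    noise = np.fft.irfft(spectrum, n_samples)
    return noise / np.max(np.abs(noise))         # normalize before setting level

fs = 44100   # one second of each noise type
white, pink, brown = (colored_noise(fs, fs, e, np.random.default_rng(e)) for e in (0, 1, 2))
```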


Acoustic Stimulation , Noise , Humans , Female , Male , Adult , Young Adult , Postural Balance/physiology , Color , Posture/physiology , Standing Position
15.
PLoS One ; 19(5): e0303565, 2024.
Article En | MEDLINE | ID: mdl-38781127

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as separate sequences of tones (streams). A 3-class BCI using three tone sequences, which were perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user's right ear. Subjects were requested to attend to one of the three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants at a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the target of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross-validation test, classification accuracy exceeded 80% for five subjects and 75% for nine subjects. For subjects whose accuracy was lower than 75%, either the P300 was also elicited by non-attended streams or the amplitude of the P300 was small. It was concluded that the number of classes in BCI systems based on auditory stream segregation can be increased to three, and that these classes can be detected via a single ear without the aid of any visual modality.
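
A simplified sketch of the Riemannian-geometry classification step, minimum distance to the class-mean covariance matrix under an affine-invariant distance, is given below. It uses a log-Euclidean class mean as an approximation, synthetic toy data, and hypothetical dimensions, so it should be read as an outline rather than the study's implementation.

```python
# Minimum-distance-to-mean classification of EEG trial covariance matrices.
import numpy as np
from scipy.linalg import logm, expm, eigvalsh

def trial_covariance(trial, reg=1e-6):
    """Covariance of one trial, shape (channels, samples), with shrinkage."""
    c = np.cov(trial)
    return c + reg * np.trace(c) / len(c) * np.eye(len(c))

def airm_distance(a, b):
    """Affine-invariant Riemannian distance between two SPD matrices."""
    return np.sqrt(np.sum(np.log(eigvalsh(a, b)) ** 2))

def class_mean(covs):
    """Log-Euclidean mean of a list of SPD matrices (approximate class center)."""
    return expm(np.mean([np.real(logm(c)) for c in covs], axis=0))

def predict(cov, means):
    return int(np.argmin([airm_distance(cov, m) for m in means]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 3 attention classes x 20 trials of 8-channel EEG.
    trials = {k: [rng.standard_normal((8, 500)) * (1 + 0.3 * k) for _ in range(20)]
              for k in range(3)}
    means = [class_mean([trial_covariance(t) for t in trials[k]]) for k in range(3)]
    test = trial_covariance(rng.standard_normal((8, 500)) * 1.6)   # resembles class 2
    print(predict(test, means))
```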


Acoustic Stimulation , Attention , Brain-Computer Interfaces , Electroencephalography , Humans , Male , Female , Electroencephalography/methods , Adult , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Event-Related Potentials, P300/physiology , Electrooculography/methods
16.
BMC Biol ; 22(1): 120, 2024 May 23.
Article En | MEDLINE | ID: mdl-38783286

BACKGROUND: Threat and individual differences in threat-processing bias perception of stimuli in the environment. Yet, their effect on perception of one's own (body-based) self-motion in space is unknown. Here, we tested the effects of threat on self-motion perception using a multisensory motion simulator with concurrent threatening or neutral auditory stimuli. RESULTS: Strikingly, threat had opposite effects on vestibular and visual self-motion perception, leading to overestimation of vestibular, but underestimation of visual self-motions. Trait anxiety tended to be associated with an enhanced effect of threat on estimates of self-motion for both modalities. CONCLUSIONS: Enhanced vestibular perception under threat might stem from shared neural substrates with emotional processing, whereas diminished visual self-motion perception may indicate that a threatening stimulus diverts attention away from optic flow integration. Thus, threat induces modality-specific biases in everyday experiences of self-motion.


Motion Perception , Humans , Motion Perception/physiology , Male , Female , Adult , Young Adult , Visual Perception/physiology , Fear , Anxiety/psychology , Acoustic Stimulation
17.
Hear Res ; 447: 109011, 2024 Jun.
Article En | MEDLINE | ID: mdl-38692015

This study introduces and evaluates the PHAST+ model, part of a computational framework designed to simulate the behavior of auditory nerve fibers in response to the electrical stimulation from a cochlear implant. PHAST+ incorporates a highly efficient method for calculating accommodation and adaptation, making it particularly suited for simulations over extended stimulus durations. The proposed method uses a leaky integrator inspired by classic biophysical nerve models. Through evaluation against single-fiber animal data, our findings demonstrate the model's effectiveness across various stimuli, including short pulse trains with variable amplitudes and rates. Notably, the PHAST+ model performs better than its predecessor, PHAST (a phenomenological model by van Gendt et al.), particularly in simulations of prolonged neural responses. While PHAST+ is optimized primarily on spike rate decay, it shows good behavior on several other neural measures, such as vector strength and degree of adaptation. The future implications of this research are promising. PHAST+ drastically reduces the computational burden to allow the real-time simulation of neural behavior over extended periods, opening the door to future simulations of psychophysical experiments and multi-electrode stimuli for evaluating novel speech-coding strategies for cochlear implants.
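
To convey the leaky-integrator idea in the simplest possible terms, the sketch below lets each emitted spike add to an exponentially decaying adaptation variable that raises the effective threshold for subsequent pulses. It is inspired by the description above but does not reproduce PHAST+; all parameter values are arbitrary assumptions.

```python
# Toy leaky-integrator adaptation model for an electrically stimulated fiber:
# firing becomes sparser over the course of a constant-amplitude pulse train.
import numpy as np

def simulate_fiber(pulse_times_s, pulse_amp, base_threshold=1.0,
                   tau_adapt=0.1, adapt_step=0.2):
    """Return spike times for a pulse train under leaky-integrator adaptation."""
    adaptation, last_t, spikes = 0.0, 0.0, []
    for t in pulse_times_s:
        adaptation *= np.exp(-(t - last_t) / tau_adapt)   # leaky decay
        last_t = t
        if pulse_amp > base_threshold + adaptation:       # threshold crossing
            spikes.append(t)
            adaptation += adapt_step                      # accumulate adaptation
    return np.array(spikes)

pulses = np.arange(0, 0.4, 1 / 1000.0)        # 1000 pulses-per-second train
spikes = simulate_fiber(pulses, pulse_amp=1.5)
print(len(pulses), len(spikes))               # spike rate falls below pulse rate
```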


Action Potentials , Adaptation, Physiological , Cochlear Implants , Cochlear Nerve , Computer Simulation , Electric Stimulation , Models, Neurological , Cochlear Nerve/physiology , Animals , Humans , Time Factors , Cochlear Implantation/instrumentation , Biophysics , Acoustic Stimulation
18.
Hear Res ; 447: 109021, 2024 Jun.
Article En | MEDLINE | ID: mdl-38703432

Understanding the complex pathologies associated with hearing loss is a significant motivation for conducting inner ear research. Lifelong exposure to loud noise, ototoxic drugs, genetic diversity, sex, and aging collectively contribute to human hearing loss. Replicating this pathology in research animals is challenging because hearing impairment has varied causes and different manifestations. A central aspect, however, is the loss of sensory hair cells and the inability of the mammalian cochlea to replace them. Researching therapeutic strategies to rekindle regenerative cochlear capacity, therefore, requires the generation of animal models in which cochlear hair cells are eliminated. This review discusses different approaches to ablate cochlear hair cells in adult mice. We inventoried the cochlear cyto- and histo-pathology caused by acoustic overstimulation, systemic and locally applied drugs, and various genetic tools. The focus is not to prescribe a perfect damage model but to highlight the limitations and advantages of existing approaches and identify areas for further refinement of damage models for use in regenerative studies.


Cochlea , Disease Models, Animal , Hair Cells, Auditory , Regeneration , Animals , Hair Cells, Auditory/pathology , Hair Cells, Auditory/metabolism , Mice , Cochlea/pathology , Cochlea/physiopathology , Humans , Hearing , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Noise-Induced/pathology , Hearing Loss/pathology , Hearing Loss/physiopathology , Acoustic Stimulation
19.
Curr Biol ; 34(10): 2162-2174.e5, 2024 05 20.
Article En | MEDLINE | ID: mdl-38718798

Humans make use of small differences in the timing of sounds at the two ears, interaural time differences (ITDs), to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electroencephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD, within and beyond auditory cortical regions, and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data resolve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
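
The head-size scaling mentioned in the conclusion can be made concrete with the textbook Woodworth spherical-head approximation for the largest available ITD; the example below is not taken from the paper, and the head radii are assumed round numbers.

```python
# Worked example: maximum ITD under the Woodworth spherical-head model,
# ITD = r * (theta + sin(theta)) / c, evaluated for a source at 90 degrees.
import math

def max_itd_us(head_radius_m, speed_of_sound=343.0):
    """Largest ITD (in microseconds) for a given head radius."""
    theta = math.pi / 2                      # source directly to one side
    return 1e6 * head_radius_m * (theta + math.sin(theta)) / speed_of_sound

print(round(max_itd_us(0.0875)))   # human-sized head: roughly 660 us
print(round(max_itd_us(0.055)))    # smaller (macaque-sized) head: roughly 410 us
```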


Auditory Cortex , Cues , Sound Localization , Auditory Cortex/physiology , Humans , Male , Sound Localization/physiology , Animals , Female , Adult , Electroencephalography , Macaca mulatta/physiology , Magnetoencephalography , Acoustic Stimulation , Young Adult , Auditory Perception/physiology
20.
Hear Res ; 447: 109027, 2024 Jun.
Article En | MEDLINE | ID: mdl-38723386

Despite the fact that the cochlear implant (CI) is one of the most successful neuroprosthetic devices for hearing restoration, several aspects still need to be improved. Interactions between stimulating electrodes, caused by current spread within the cochlea, drastically limit the number of discriminable frequency channels and can ultimately result in poor speech perception. One potential solution relies on the use of new pulse shapes, such as asymmetric pulses, which can potentially reduce the current spread within the cochlea. The present study characterized the impact of changing the electrical pulse shape from the standard symmetric biphasic shape to an asymmetric shape by quantifying the evoked firing rate and the spatial activation in the guinea pig primary auditory cortex (A1). At a fixed charge, the firing rate and the spatial activation in A1 decreased by 15% to 25% when asymmetric pulses were used to activate the auditory nerve fibers, suggesting a potential reduction of the spread of excitation inside the cochlea. A strong "polarity-order" effect was found: the reduction was more pronounced when the first phase of the pulse was the high-amplitude cathodic phase. These results suggest that the use of asymmetric pulse shapes in clinical settings could reduce channel interactions in CI users.
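
For intuition about the asymmetric pulses discussed here, charge balance requires the amplitude-duration products of the two phases to match; the arithmetic sketch below uses illustrative values, not parameters from the study.

```python
# Charge-balance arithmetic for a pseudomonophasic (asymmetric) pulse:
# a second phase at 1/ratio the amplitude must last ratio times longer.
def balanced_second_phase(first_amp_ua, first_dur_us, ratio=8):
    """Return (amplitude in uA, duration in us, charge in nC) of the second phase."""
    charge_nc = first_amp_ua * first_dur_us * 1e-3      # uA * us -> nC
    second_amp_ua = first_amp_ua / ratio
    second_dur_us = charge_nc * 1e3 / second_amp_ua
    return second_amp_ua, second_dur_us, charge_nc

amp, dur, q = balanced_second_phase(first_amp_ua=400, first_dur_us=50)
print(amp, dur, q)   # 50 uA for 400 us carries the same 20 nC as 400 uA for 50 us
```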


Auditory Cortex , Cochlear Implants , Electric Stimulation , Animals , Guinea Pigs , Auditory Cortex/physiology , Evoked Potentials, Auditory , Cochlear Nerve/physiopathology , Acoustic Stimulation , Cochlea/surgery , Cochlear Implantation/instrumentation , Action Potentials , Female
...