1 - 20 of 48,036
1.
Hear Res; 447: 109010, 2024 Jun.
Article En | MEDLINE | ID: mdl-38744019

Auditory nerve (AN) function has been hypothesized to deteriorate with age and noise exposure. Here, we perform a systematic review of published studies and find that the evidence for age-related deficits in AN function is largely consistent across the literature, but there are inconsistent findings among studies of noise exposure history. Further, evidence from animal studies suggests that the greatest deficits in AN response amplitudes are found in noise-exposed aged mice, but a test of the interaction between effects of age and noise exposure on AN function has not been conducted in humans. We report a study of our own examining differences in the response amplitude of the compound action potential N1 (CAP N1) between younger and older adults with and without a self-reported history of noise exposure in a large sample of human participants (63 younger adults 18-30 years of age, 103 older adults 50-86 years of age). CAP N1 response amplitudes were smaller in older than younger adults. Noise exposure history did not appear to predict CAP N1 response amplitudes, nor did the effect of noise exposure history interact with age. We then incorporated our results into two meta-analyses of published studies of age and noise exposure history effects on AN response amplitudes in neurotypical human samples. The meta-analyses found that age effects across studies are robust (r = -0.407), but noise exposure effects are weak (r = -0.152). We conclude that noise exposure effects may be highly variable depending on sample characteristics, study design, and statistical approach, and researchers should be cautious when interpreting results. The underlying pathology of age-related and noise-induced changes in AN function is difficult to determine in living humans, creating a need for longitudinal studies of changes in AN function across the lifespan and histological examination of the AN from temporal bones collected post-mortem.
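Pooled effect sizes like the r = -0.407 and r = -0.152 reported above are typically obtained by averaging per-study correlations in Fisher-z space. A minimal sketch, assuming a simple fixed-effect model with inverse-variance weights (n - 3) and hypothetical (r, n) study values; the actual meta-analyses may well have used random-effects weighting:

```python
import math

def pool_correlations(studies):
    """Fixed-effect pooling of correlations via Fisher's z-transform.

    studies: list of (r, n) tuples. Each z = atanh(r) is weighted by
    n - 3, the inverse of its sampling variance, then the weighted
    mean z is back-transformed to a correlation.
    """
    num, den = 0.0, 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher z-transform
        w = n - 3                # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform to r

# Hypothetical per-study (r, n) values, for illustration only
age_studies = [(-0.45, 60), (-0.38, 120), (-0.40, 166)]
pooled = pool_correlations(age_studies)
```

The pooled value necessarily falls between the smallest and largest study correlations, which is a quick sanity check on any implementation.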


Acoustic Stimulation; Cochlear Nerve; Noise; Humans; Noise/adverse effects; Aged; Cochlear Nerve/physiopathology; Middle Aged; Adult; Aged, 80 and over; Age Factors; Young Adult; Adolescent; Aging/physiology; Evoked Potentials, Auditory; Hearing Loss, Noise-Induced/physiopathology; Female; Male; Animals; Action Potentials
2.
J Acoust Soc Am; 155(5): 3101-3117, 2024 May 01.
Article En | MEDLINE | ID: mdl-38722101

Cochlear implant (CI) users often report being unsatisfied by music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that musical stimuli were given higher preference ratings by normal-hearing listeners when concurrent vibrotactile stimulation was congruent in intensity and timing with the corresponding auditory signal than when it was incongruent. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals related to intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100, based on preference. It was shown that almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no difference in preference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile music enjoyment enhancement could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.


Acoustic Stimulation; Auditory Perception; Cochlear Implants; Music; Humans; Female; Male; Adult; Middle Aged; Aged; Auditory Perception/physiology; Young Adult; Patient Preference; Cochlear Implantation/instrumentation; Touch Perception/physiology; Vibration; Touch
3.
Cogn Res Princ Implic; 9(1): 29, 2024 May 12.
Article En | MEDLINE | ID: mdl-38735013

Auditory stimuli that are relevant to a listener have the potential to capture focal attention even when unattended, the listener's own name being a particularly effective stimulus. We report two experiments to test the attention-capturing potential of the listener's own name in normal speech and time-compressed speech. In Experiment 1, 39 participants were tested with a visual word categorization task with uncompressed spoken names as background auditory distractors. Participants' word categorization performance was slower when hearing their own name rather than other names, and in a final test, they were faster at detecting their own name than other names. Experiment 2 used the same task paradigm, but the auditory distractors were time-compressed names. Three compression levels were tested with 25 participants in each condition. Participants' word categorization performance was again slower when hearing their own name than when hearing other names; the slowing was strongest with slight compression and weakest with intense compression. Personally relevant time-compressed speech has the potential to capture attention, but the degree of capture depends on the level of compression. Attention capture by time-compressed speech has practical significance and provides partial evidence for the duplex-mechanism account of auditory distraction.


Attention; Names; Speech Perception; Humans; Attention/physiology; Female; Male; Speech Perception/physiology; Adult; Young Adult; Speech/physiology; Reaction Time/physiology; Acoustic Stimulation
4.
J Acoust Soc Am; 155(5): 3254-3266, 2024 May 01.
Article En | MEDLINE | ID: mdl-38742964

Testudines are a highly threatened group facing an array of stressors, including alteration of their sensory environment. Underwater noise pollution has the potential to induce hearing loss and disrupt detection of biologically important acoustic cues and signals. To examine the conditions that induce temporary threshold shifts (TTS) in hearing in the freshwater Eastern painted turtle (Chrysemys picta picta), three individuals were exposed to band limited continuous white noise (50-1000 Hz) of varying durations and amplitudes (sound exposure levels ranged from 151 to 171 dB re 1 µPa2 s). Control and post-exposure auditory thresholds were measured and compared at 400 and 600 Hz using auditory evoked potential methods. TTS occurred in all individuals at both test frequencies, with shifts of 6.1-41.4 dB. While the numbers of TTS occurrences were equal between frequencies, greater shifts were observed at 600 Hz, a frequency of higher auditory sensitivity, compared to 400 Hz. The onset of TTS occurred at 154 dB re 1 µPa2 s for 600 Hz, compared to 158 dB re 1 µPa2 s at 400 Hz. The 400-Hz onset and patterns of TTS growth and recovery were similar to those observed in previously studied Trachemys scripta elegans, suggesting TTS may be comparable across Emydidae species.
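The exposure metric in this record, sound exposure level (SEL) in dB re 1 µPa²·s, combines sound pressure level and exposure duration under the equal-energy hypothesis: SEL = SPL + 10·log10(duration). A minimal sketch (the numeric values below are illustrative, not the study's stimuli):

```python
import math

def sound_exposure_level(spl_db, duration_s):
    """Cumulative sound exposure level (dB re 1 uPa^2*s) under the
    equal-energy hypothesis: SEL = SPL + 10*log10(duration in s)."""
    return spl_db + 10.0 * math.log10(duration_s)

# A band of continuous noise at 158 dB re 1 uPa presented for 10 s
# accumulates an SEL of 168 dB re 1 uPa^2*s.
sel = sound_exposure_level(158.0, 10.0)
```

Under this hypothesis, halving the level in power terms while doubling the duration leaves the SEL, and hence the assumed TTS risk, unchanged.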


Acoustic Stimulation; Auditory Threshold; Turtles; Animals; Turtles/physiology; Time Factors; Noise/adverse effects; Evoked Potentials, Auditory/physiology; Hearing Loss, Noise-Induced/physiopathology; Hearing Loss, Noise-Induced/etiology; Male; Female; Hearing/physiology
5.
Trends Hear; 28: 23312165241246596, 2024.
Article En | MEDLINE | ID: mdl-38738341

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal-hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphone and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
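A TRF maps an ongoing stimulus representation (e.g. a rectified envelope or auditory nerve model output) to the EEG through a set of time-lagged weights. As a rough illustration of that lag structure only — not the regularized deconvolution estimator used in the study — a toy cross-correlation "TRF" can be sketched as:

```python
def trf_via_crosscorr(stimulus, eeg, max_lag):
    """Toy 'TRF': normalized cross-correlation between a stimulus
    representation and the EEG at each positive lag (in samples).
    Real brainstem TRFs are estimated with regularized (e.g. ridge)
    deconvolution, which also removes stimulus autocorrelation."""
    energy = sum(s * s for s in stimulus) or 1.0
    trf = []
    for lag in range(max_lag):
        num = sum(stimulus[t] * eeg[t + lag]
                  for t in range(len(stimulus) - lag))
        trf.append(num / energy)
    return trf

# Synthetic check: an 'EEG' that is the stimulus delayed by 3 samples
stim = [0.0] * 32
stim[2], stim[10], stim[20] = 1.0, 2.0, 1.0
eeg = [0.0] * 3 + stim[:-3]
trf = trf_via_crosscorr(stim, eeg, 6)  # peaks at lag 3
```

In the synthetic example the recovered function peaks at the known 3-sample delay, analogous to how a subcortical TRF shows a wave V peak at its characteristic latency.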


Acoustic Stimulation; Electroencephalography; Evoked Potentials, Auditory, Brain Stem; Speech Perception; Humans; Evoked Potentials, Auditory, Brain Stem/physiology; Male; Female; Speech Perception/physiology; Acoustic Stimulation/methods; Adult; Young Adult; Auditory Threshold/physiology; Time Factors; Cochlear Nerve/physiology; Healthy Volunteers
6.
J Acoust Soc Am; 155(5): 2934-2947, 2024 May 01.
Article En | MEDLINE | ID: mdl-38717201

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under the above-mentioned underexplored spatial configurations. The speech reception thresholds were measured through three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, utilizing monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane or front-back, up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and could even facilitate spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models encounter difficulties in accurately predicting intelligibility in the scenarios explored in this study.


Cues; Perceptual Masking; Sound Localization; Speech Intelligibility; Speech Perception; Humans; Female; Male; Young Adult; Adult; Speech Perception/physiology; Acoustic Stimulation; Auditory Threshold; Speech Acoustics; Speech Reception Threshold Test; Noise
7.
J Acoust Soc Am; 155(5): 2990-3004, 2024 May 01.
Article En | MEDLINE | ID: mdl-38717206

Speakers can place their prosodic prominence on any locations within a sentence, generating focus prosody for listeners to perceive new information. This study aimed to investigate age-related changes in the bottom-up processing of focus perception in Jianghuai Mandarin by clarifying the perceptual cues and the auditory processing abilities involved in the identification of focus locations. Young, middle-aged, and older speakers of Jianghuai Mandarin completed a focus identification task and an auditory perception task. The results showed that increasing age led to a decrease in listeners' accuracy rate in identifying focus locations, with all participants performing the worst when dynamic pitch cues were inaccessible. Auditory processing abilities did not predict focus perception performance in young and middle-aged listeners but accounted significantly for the variance in older adults' performance. These findings suggest that age-related deteriorations in focus perception can be largely attributed to declined auditory processing of perceptual cues. Poor ability to extract frequency modulation cues may be the most important underlying psychoacoustic factor for older adults' difficulties in perceiving focus prosody in Jianghuai Mandarin. The results contribute to our understanding of the bottom-up mechanisms involved in linguistic prosody processing in aging adults, particularly in tonal languages.


Aging; Cues; Speech Perception; Humans; Middle Aged; Aged; Male; Female; Aging/psychology; Aging/physiology; Young Adult; Adult; Speech Perception/physiology; Age Factors; Speech Acoustics; Acoustic Stimulation; Pitch Perception; Language; Voice Quality; Psychoacoustics; Audiometry, Speech
8.
JASA Express Lett; 4(5), 2024 May 01.
Article En | MEDLINE | ID: mdl-38717467

A long-standing quest in audition concerns understanding relations between behavioral measures and neural representations of changes in sound intensity. Here, we examined relations between aspects of intensity perception and central neural responses within the inferior colliculus of unanesthetized rabbits (by averaging the population's spike count/level functions). We found parallels between the population's neural output and: (1) how loudness grows with intensity; (2) how loudness grows with duration; (3) how discrimination of intensity improves with increasing sound level; (4) findings that intensity discrimination does not depend on duration; and (5) findings that duration discrimination is a constant fraction of base duration.


Inferior Colliculi; Loudness Perception; Animals; Rabbits; Loudness Perception/physiology; Inferior Colliculi/physiology; Acoustic Stimulation/methods; Discrimination, Psychological/physiology; Auditory Perception/physiology; Neurons/physiology
9.
Brain Behav; 14(5): e3520, 2024 May.
Article En | MEDLINE | ID: mdl-38715412

OBJECTIVE: In previous animal studies, sound enhancement reduced tinnitus perception in cases associated with hearing loss. The aim of this study was to investigate the efficacy of sound enrichment therapy in tinnitus treatment by developing a protocol that includes criteria for psychoacoustic characteristics of tinnitus to determine whether the etiology is related to hearing loss. METHODS: A total of 96 patients with chronic tinnitus were included in the study. Fifty-two patients in the study group and 44 patients in the placebo group were compared, taking residual inhibition (RI) outcomes and tinnitus pitches into account. Both groups received sound enrichment treatment with different spectrum contents. The tinnitus handicap inventory (THI), visual analog scale (VAS), minimum masking level (MML), and tinnitus loudness level (TLL) results were compared before and at 1, 3, and 6 months after treatment. RESULTS: There was a statistically significant difference between the groups in THI, VAS, MML, and TLL scores from the first month to all months after treatment (p < .01). For the study group, there was a statistically significant decrease in THI, VAS, MML, and TLL scores in the first month (p < .01). This decrease continued at a statistically significant level in the third month of posttreatment for THI (p < .05) and at all months for VAS-1 (tinnitus severity) (p < .05) and VAS-2 (tinnitus discomfort) (p < .05). CONCLUSION: In clinical practice, after excluding other factors related to the tinnitus etiology, sound enrichment treatment can be effective in tinnitus cases where RI is positive and the tinnitus pitch is matched with a hearing loss between 45 and 55 dB HL in a relatively short period of 1 month.


Hearing Loss; Tinnitus; Tinnitus/therapy; Humans; Male; Female; Middle Aged; Adult; Hearing Loss/rehabilitation; Hearing Loss/therapy; Treatment Outcome; Aged; Acoustic Stimulation/methods; Sound; Psychoacoustics
10.
Multisens Res; 37(2): 89-124, 2024 Feb 13.
Article En | MEDLINE | ID: mdl-38714311

Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including improvements in attentional tasks. However, there is little evidence indicating that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes - a fundamental characteristic of natural, everyday life environments. The present study addressed whether video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of N = 130. Overall, the data replicated previous findings reporting search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, according to the results, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that the generalization of the advantage of AVG experience to realistic, crossmodal situations should be made with caution and considering gender-related issues.


Attention; Video Games; Visual Perception; Humans; Male; Female; Visual Perception/physiology; Young Adult; Adult; Attention/physiology; Auditory Perception/physiology; Photic Stimulation; Adolescent; Reaction Time/physiology; Cues; Acoustic Stimulation
11.
Multisens Res; 37(2): 143-162, 2024 Apr 30.
Article En | MEDLINE | ID: mdl-38714315

A vital heuristic used when making judgements on whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), simultaneity judgement (SJ) and simple reaction-time (RT) task and responded to audio-visual trials that were preceded by other audio-visual trials with either a bright or dim visual stimulus. It was found that the point of subjective simultaneity shifted, due to the visual intensity of the preceding stimulus, in the TOJ, but not SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.


Acoustic Stimulation; Auditory Perception; Photic Stimulation; Reaction Time; Visual Perception; Humans; Visual Perception/physiology; Auditory Perception/physiology; Male; Female; Reaction Time/physiology; Adult; Young Adult; Judgment/physiology
12.
Cereb Cortex; 34(5), 2024 May 02.
Article En | MEDLINE | ID: mdl-38715408

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could indicate aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and four noisy conditions at different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decrease with age under the four noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition could successfully predict individuals' age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activations of regions involved in sensory-motor mapping of sound, especially in noisy conditions, could be more sensitive measures for age prediction than external behavioral measures.
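The noisy conditions in studies like this one are defined by the ratio of speech power to masker power in decibels. A minimal sketch of constructing a mixture at a target SNR by rescaling the masker (a hypothetical helper, not the study's stimulus code):

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise power ratio equals
    snr_db, then return the sample-wise mixture. Power is mean square
    amplitude; SNR(dB) = 10*log10(P_speech / P_noise)."""
    p_s = sum(x * x for x in speech) / len(speech)
    p_n = sum(x * x for x in noise) / len(noise)
    target_p_n = p_s / (10 ** (snr_db / 10.0))  # noise power for target SNR
    g = math.sqrt(target_p_n / p_n)             # amplitude gain on the noise
    return [s + g * v for s, v in zip(speech, noise)]
```

Decreasing snr_db from 10 to -5 raises the masker gain and makes the task harder, mirroring the condition ladder (SNR = 10, 5, 0, -5 dB) used in the study.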


Aging; Brain; Comprehension; Noise; Spectroscopy, Near-Infrared; Speech Perception; Humans; Adult; Speech Perception/physiology; Male; Female; Spectroscopy, Near-Infrared/methods; Middle Aged; Young Adult; Aged; Comprehension/physiology; Brain/physiology; Brain/diagnostic imaging; Aging/physiology; Brain Mapping/methods; Acoustic Stimulation/methods
13.
Nat Commun; 15(1): 3692, 2024 May 01.
Article En | MEDLINE | ID: mdl-38693186

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.


Attention; Eye Movements; Magnetoencephalography; Speech Perception; Speech; Humans; Attention/physiology; Eye Movements/physiology; Male; Female; Adult; Young Adult; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation; Brain/physiology; Eye-Tracking Technology
14.
eNeuro; 11(5), 2024 May.
Article En | MEDLINE | ID: mdl-38702194

Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas P300 is associated with cognitive processes such as updating of working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we explore, with an unsupervised and rigorous source-estimation approach, the underlying cortical generators of MMN and P300, in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. The existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our study reveals that temporal facilitation extends to even the later components of prediction error processing, using EEG experiments. Such knowledge can be of value to clinical research for characterizing the key developmental stages of lifespan aging, schizophrenia, and depression.


Electroencephalography; Event-Related Potentials, P300; Humans; Male; Female; Adult; Electroencephalography/methods; Young Adult; Event-Related Potentials, P300/physiology; Auditory Perception/physiology; Cerebral Cortex/physiology; Acoustic Stimulation/methods; Evoked Potentials/physiology
15.
Hear Res; 447: 109028, 2024 Jun.
Article En | MEDLINE | ID: mdl-38733711

Amplitude modulation is an important acoustic cue for sound discrimination, and humans and animals are able to detect small modulation depths behaviorally. In the inferior colliculus (IC), both firing rate and phase-locking may be used to detect amplitude modulation. How neural representations that detect modulation change with age is poorly understood, including the extent to which age-related changes may be attributed to the inherited properties of ascending inputs to IC neurons. Here, simultaneous measures of local field potentials (LFPs) and single-unit responses were made from the inferior colliculus of Young and Aged rats using both noise and tone carriers in response to sinusoidally amplitude-modulated sounds of varying depths. We found that Young units had higher firing rates than Aged units for noise carriers, whereas Aged units had higher phase-locking (vector strength), especially for tone carriers. Sustained LFPs were larger in Young animals for modulation frequencies 8-16 Hz and comparable at higher modulation frequencies. Onset LFP amplitudes were much larger in Young animals and were correlated with the evoked firing rates, while LFP onset latencies were shorter in Aged animals. Unit neurometric thresholds by synchrony or firing rate measures did not differ significantly across age and were comparable to behavioral thresholds in previous studies, whereas LFP thresholds were lower than behavioral thresholds.
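Vector strength, the phase-locking measure referenced in this abstract, is the resultant length of spike phases relative to the modulation period: 1 means every spike lands at the same modulation phase, 0 means phases are uniformly spread. A minimal sketch:

```python
import math

def vector_strength(spike_times, mod_freq_hz):
    """Vector strength of phase-locking to a modulation frequency.

    Each spike time (in seconds) is mapped to a phase of the modulation
    cycle; the normalized length of the summed unit vectors is returned
    (1.0 = perfect locking, ~0.0 = no locking).
    """
    n = len(spike_times)
    c = sum(math.cos(2 * math.pi * mod_freq_hz * t) for t in spike_times)
    s = sum(math.sin(2 * math.pi * mod_freq_hz * t) for t in spike_times)
    return math.hypot(c, s) / n

# Spikes exactly once per 10 Hz cycle lock perfectly:
vs_locked = vector_strength([0.0, 0.1, 0.2, 0.3], 10.0)  # 1.0
```

Because the measure is amplitude-independent, an Aged unit can show higher vector strength than a Young unit even while firing fewer spikes overall, which is exactly the dissociation the study reports.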


Acoustic Stimulation; Aging; Inferior Colliculi; Animals; Inferior Colliculi/physiology; Aging/physiology; Rats; Age Factors; Auditory Perception/physiology; Male; Auditory Threshold; Evoked Potentials, Auditory; Neurons/physiology; Action Potentials; Reaction Time; Noise/adverse effects; Time Factors; Auditory Pathways/physiology
16.
Nat Commun; 15(1): 4071, 2024 May 22.
Article En | MEDLINE | ID: mdl-38778078

Adaptive behavior requires integrating prior knowledge of action outcomes and sensory evidence for making decisions while maintaining prior knowledge for future actions. As outcome- and sensory-based decisions are often tested separately, it is unclear how these processes are integrated in the brain. In a tone frequency discrimination task with two sound durations and asymmetric reward blocks, we found that neurons in the medial prefrontal cortex of male mice represented the additive combination of prior reward expectations and choices. The sensory inputs and choices were selectively decoded from the auditory cortex irrespective of reward priors and the secondary motor cortex, respectively, suggesting localized computations of task variables are required within single trials. In contrast, all the recorded regions represented prior values that needed to be maintained across trials. We propose localized and global computations of task variables in different time scales in the cerebral cortex.


Auditory Cortex; Choice Behavior; Reward; Animals; Male; Choice Behavior/physiology; Mice; Auditory Cortex/physiology; Neurons/physiology; Prefrontal Cortex/physiology; Acoustic Stimulation; Mice, Inbred C57BL; Cerebral Cortex/physiology; Motor Cortex/physiology; Auditory Perception/physiology
17.
PLoS One; 19(5): e0303565, 2024.
Article En | MEDLINE | ID: mdl-38781127

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of different tones (streams). A 3-class BCI using three tone sequences, which were perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user's right ear. Subjects were requested to attend to one of three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants with a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross validation test, a classification accuracy over 80% for five subjects and over 75% for nine subjects was achieved. For subjects whose accuracy was lower than 75%, either the P300 was also elicited for nonattended streams or the amplitude of P300 was small. It was concluded that BCI systems based on auditory stream segregation can be extended to three classes, and that these classes can be detected via a single ear without the aid of any visual modality.


Acoustic Stimulation; Attention; Brain-Computer Interfaces; Electroencephalography; Humans; Male; Female; Electroencephalography/methods; Adult; Attention/physiology; Acoustic Stimulation/methods; Auditory Perception/physiology; Young Adult; Event-Related Potentials, P300/physiology; Electrooculography/methods
18.
BMC Biol; 22(1): 120, 2024 May 23.
Article En | MEDLINE | ID: mdl-38783286

BACKGROUND: Threat and individual differences in threat-processing bias perception of stimuli in the environment. Yet, their effect on perception of one's own (body-based) self-motion in space is unknown. Here, we tested the effects of threat on self-motion perception using a multisensory motion simulator with concurrent threatening or neutral auditory stimuli. RESULTS: Strikingly, threat had opposite effects on vestibular and visual self-motion perception, leading to overestimation of vestibular, but underestimation of visual self-motions. Trait anxiety tended to be associated with an enhanced effect of threat on estimates of self-motion for both modalities. CONCLUSIONS: Enhanced vestibular perception under threat might stem from shared neural substrates with emotional processing, whereas diminished visual self-motion perception may indicate that a threatening stimulus diverts attention away from optic flow integration. Thus, threat induces modality-specific biases in everyday experiences of self-motion.


Motion Perception; Humans; Motion Perception/physiology; Male; Female; Adult; Young Adult; Visual Perception/physiology; Fear; Anxiety/psychology; Acoustic Stimulation
19.
Curr Biol; 34(10): 2162-2174.e5, 2024 May 20.
Article En | MEDLINE | ID: mdl-38718798

Humans make use of small differences in the timing of sounds at the two ears-interaural time differences (ITDs)-to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD-within and beyond auditory cortical regions-and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
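The head-size scaling of ITDs noted in this abstract is commonly approximated by Woodworth's spherical-head model, ITD = (a/c)(sin θ + θ) for a source at frontal azimuth θ, head radius a, and speed of sound c. A sketch assuming a typical 8.75 cm human head radius (the study itself measured neural responses, not this model):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's frontal-azimuth approximation of the interaural
    time difference (seconds): ITD = (a/c) * (sin(theta) + theta)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

itd_90 = woodworth_itd(90.0)  # roughly 650-660 microseconds
```

Scaling head_radius_m down to a rhesus monkey's smaller head shrinks the maximum ITD proportionally, which is the sense in which the human and monkey detector distributions can look alike once "scaled for head size."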


Auditory Cortex; Cues; Sound Localization; Auditory Cortex/physiology; Humans; Male; Sound Localization/physiology; Animals; Female; Adult; Electroencephalography; Macaca mulatta/physiology; Magnetoencephalography; Acoustic Stimulation; Young Adult; Auditory Perception/physiology
20.
Hear Res; 447: 109027, 2024 Jun.
Article En | MEDLINE | ID: mdl-38723386

Despite the fact that the cochlear implant (CI) is one of the most successful neuro-prosthetic devices for hearing restoration, several aspects still need to be improved. Interactions between stimulating electrodes through current spread occurring within the cochlea drastically limit the number of discriminable frequency channels and thus can ultimately result in poor speech perception. One potential solution relies on the use of new pulse shapes, such as asymmetric pulses, which can potentially reduce the current spread within the cochlea. The present study characterized the impact of changing electrical pulse shapes from the standard symmetric biphasic shape to an asymmetric shape by quantifying the evoked firing rate and the spatial activation in the guinea pig primary auditory cortex (A1). At a fixed charge, the firing rate and the spatial activation in A1 decreased by 15 to 25 % when asymmetric pulses were used to activate the auditory nerve fibers, suggesting a potential reduction of the spread of excitation inside the cochlea. A strong "polarity-order" effect was found as the reduction was more pronounced when the first phase of the pulse was cathodic with high amplitude. These results suggest that the use of asymmetrical pulse shapes in clinical settings can potentially reduce the channel interactions in CI users.
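A charge-balanced asymmetric (pseudomonophasic) pulse of the kind discussed above pairs a short high-amplitude phase with a longer counter-phase at proportionally lower amplitude, so the net injected charge stays zero. A minimal sketch of such a waveform as a sample-per-microsecond amplitude list (parameter values are illustrative, not the study's):

```python
def asymmetric_pulse(amp_ua, ratio, phase_us):
    """Charge-balanced pseudomonophasic pulse.

    A cathodic phase of amplitude -amp_ua lasting phase_us microseconds
    is followed by an anodic phase `ratio` times longer at amp_ua/ratio,
    so the cathodic and anodic charge magnitudes cancel exactly.
    Returns one amplitude sample (in uA) per microsecond.
    """
    cathodic = [-amp_ua] * phase_us
    anodic = [amp_ua / ratio] * (phase_us * ratio)
    return cathodic + anodic

# 100 uA, 50 us cathodic-first phase, anodic phase 4x longer at 25 uA
pulse = asymmetric_pulse(100.0, 4, 50)
```

Keeping the first phase short, cathodic, and high-amplitude matches the "polarity-order" configuration that produced the strongest reduction in cortical spread in this study.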


Auditory Cortex; Cochlear Implants; Electric Stimulation; Animals; Guinea Pigs; Auditory Cortex/physiology; Evoked Potentials, Auditory; Cochlear Nerve/physiopathology; Acoustic Stimulation; Cochlea/surgery; Cochlear Implantation/instrumentation; Action Potentials; Female
...