1 - 20 of 10,395
1.
eNeuro ; 11(5)2024 May.
Article En | MEDLINE | ID: mdl-38702194

Elicited upon violation of regularity in stimulus presentation, the mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas the P300 is associated with cognitive processes such as updating of working memory. To date, MMN and P300 have each been researched extensively because of their potential to serve as clinical markers of consciousness and attention, respectively. Here, we use an unsupervised and rigorous source estimation approach to explore the underlying cortical generators of MMN and P300 in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. Existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our EEG experiments reveal that this temporal facilitation extends to even the later components of prediction error processing. Such knowledge can be of value to clinical research characterizing key stages of lifespan aging, schizophrenia, and depression.
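The abstract does not spell out the estimation procedure, so as a point of reference, here is a minimal per-subject peak-picking sketch in Python (a conventional baseline, not the authors' unsupervised method; the component windows and the erp_peak helper are illustrative assumptions):

```python
# Illustrative per-subject ERP peak estimation (not the authors' method):
# pick the most negative deflection in a canonical MMN window and the most
# positive one in a canonical P300 window of a subject's difference wave.
import numpy as np

def erp_peak(difference_wave, times, window, polarity):
    """Return (latency_s, amplitude) of the extremum inside `window`.

    difference_wave : 1-D array, deviant-minus-standard ERP (volts)
    times           : 1-D array of time stamps in seconds, same length
    window          : (t_min, t_max) search window in seconds
    polarity        : -1 for negative components (MMN), +1 for positive (P300)
    """
    mask = (times >= window[0]) & (times <= window[1])
    segment = difference_wave[mask] * polarity
    idx = np.argmax(segment)                  # extremum of the requested polarity
    return times[mask][idx], difference_wave[mask][idx]

# Example with synthetic data: a 1 s epoch sampled at 500 Hz.
times = np.arange(-0.1, 0.9, 1 / 500)
wave = -2e-6 * np.exp(-((times - 0.15) ** 2) / 0.001)   # MMN-like dip
wave += 4e-6 * np.exp(-((times - 0.35) ** 2) / 0.003)   # P300-like peak
mmn_lat, mmn_amp = erp_peak(wave, times, (0.10, 0.25), polarity=-1)
p300_lat, p300_amp = erp_peak(wave, times, (0.25, 0.50), polarity=+1)
print(f"MMN: {mmn_lat*1e3:.0f} ms, {mmn_amp*1e6:.1f} uV")
print(f"P300: {p300_lat*1e3:.0f} ms, {p300_amp*1e6:.1f} uV")
```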


Electroencephalography; Event-Related Potentials, P300; Humans; Male; Female; Adult; Electroencephalography/methods; Young Adult; Event-Related Potentials, P300/physiology; Auditory Perception/physiology; Cerebral Cortex/physiology; Acoustic Stimulation/methods; Evoked Potentials/physiology
2.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38715408

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses during speech perception could indicate a listener's age. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task comprising a quiet condition and four noisy conditions at different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the four noisy conditions, but not under the quiet condition. Brain activation in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions supporting sensory-motor mapping of sound, especially in noisy conditions, can be a more sensitive measure for age prediction than external behavioral measures.
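For readers unfamiliar with brain-age predictive modeling, the following sketch shows the generic cross-validated regression idea with scikit-learn on synthetic data; the region count, model choice, and hyperparameters are assumptions, not the paper's pipeline:

```python
# Minimal sketch of region-based brain-age prediction: regress chronological
# age on per-region activation with cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 93, 48                    # sizes assumed for illustration
X = rng.normal(size=(n_subjects, n_regions))      # region-wise activation values
age = rng.uniform(20, 70, size=n_subjects)        # chronological age in years

model = Ridge(alpha=1.0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
predicted_age = cross_val_predict(model, X, age, cv=cv)

# Mean absolute error and age-prediction correlation as performance measures.
mae = np.mean(np.abs(predicted_age - age))
r = np.corrcoef(predicted_age, age)[0, 1]
print(f"MAE = {mae:.1f} years, r = {r:.2f}")
```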


Aging; Brain; Comprehension; Noise; Spectroscopy, Near-Infrared; Speech Perception; Humans; Adult; Speech Perception/physiology; Male; Female; Spectroscopy, Near-Infrared/methods; Middle Aged; Young Adult; Aged; Comprehension/physiology; Brain/physiology; Brain/diagnostic imaging; Aging/physiology; Brain Mapping/methods; Acoustic Stimulation/methods
3.
JASA Express Lett ; 4(5)2024 May 01.
Article En | MEDLINE | ID: mdl-38717467

A long-standing quest in audition concerns understanding relations between behavioral measures and neural representations of changes in sound intensity. Here, we examined relations between aspects of intensity perception and central neural responses within the inferior colliculus of unanesthetized rabbits (by averaging the population's spike count/level functions). We found parallels between the population's neural output and: (1) how loudness grows with intensity; (2) how loudness grows with duration; (3) how discrimination of intensity improves with increasing sound level; (4) findings that intensity discrimination does not depend on duration; and (5) findings that duration discrimination is a constant fraction of base duration.
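As a rough illustration of what averaging a population's spike count/level functions involves, here is a synthetic-data sketch (the sigmoidal rate-level form, unit count, and parameters are assumptions for illustration only):

```python
# Sketch of a population rate-level function: average spike-count/level
# functions across units to obtain a population output against which
# perceptual quantities such as loudness growth can be compared.
import numpy as np

rng = np.random.default_rng(1)
levels_db = np.arange(0, 90, 10)                 # stimulus levels (dB SPL)
n_units = 40

# Hypothetical sigmoidal rate-level functions with unit-specific thresholds.
thresholds = rng.uniform(10, 60, size=n_units)
counts = np.array([30 / (1 + np.exp(-(levels_db - th) / 8)) for th in thresholds])
counts = rng.poisson(counts)                     # add Poisson spiking noise

population_output = counts.mean(axis=0)          # population spike count vs level
print(np.c_[levels_db, population_output.round(1)])
```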


Inferior Colliculi; Loudness Perception; Animals; Rabbits; Loudness Perception/physiology; Inferior Colliculi/physiology; Acoustic Stimulation/methods; Discrimination, Psychological/physiology; Auditory Perception/physiology; Neurons/physiology
4.
Brain Behav ; 14(5): e3520, 2024 May.
Article En | MEDLINE | ID: mdl-38715412

OBJECTIVE: In previous animal studies, sound enrichment reduced tinnitus perception in cases associated with hearing loss. The aim of this study was to investigate the efficacy of sound enrichment therapy in tinnitus treatment by developing a protocol that uses the psychoacoustic characteristics of tinnitus to determine whether the etiology is related to hearing loss. METHODS: A total of 96 patients with chronic tinnitus were included in the study. Fifty-two patients in the study group and 44 in the placebo group were evaluated with respect to residual inhibition (RI) outcomes and tinnitus pitch. Both groups received sound enrichment treatment with different spectral content. Tinnitus handicap inventory (THI), visual analog scale (VAS), minimum masking level (MML), and tinnitus loudness level (TLL) results were compared before treatment and at 1, 3, and 6 months after treatment. RESULTS: There was a statistically significant difference between the groups in THI, VAS, MML, and TLL scores at every month after treatment (p < .01). For the study group, there was a statistically significant decrease in THI, VAS, MML, and TLL scores in the first month (p < .01). This decrease remained statistically significant at the third posttreatment month for THI (p < .05) and at all months for VAS-1 (tinnitus severity) (p < .05) and VAS-2 (tinnitus discomfort) (p < .05). CONCLUSION: In clinical practice, after excluding other factors related to tinnitus etiology, sound enrichment treatment can be effective within a relatively short period of 1 month in tinnitus cases where RI is positive and the tinnitus pitch matches a hearing loss between 45 and 55 dB HL.


Hearing Loss; Tinnitus; Tinnitus/therapy; Humans; Male; Female; Middle Aged; Adult; Hearing Loss/rehabilitation; Hearing Loss/therapy; Treatment Outcome; Aged; Acoustic Stimulation/methods; Sound; Psychoacoustics
5.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article En | MEDLINE | ID: mdl-38805517

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music from speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music judgments. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as low-level and basic as AM to distinguish music from speech, a simple principle that invites both neurophysiological and evolutionary experiments and speculations.
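To make the AM manipulation concrete, here is a hedged sketch of how one might synthesize noise with a given peak AM frequency and regularity (the am_noise helper and its jitter scheme are illustrative assumptions, not the authors' synthesis procedure):

```python
# Illustrative synthesis of amplitude-modulated noise: a white-noise carrier
# is multiplied by a raised-cosine envelope with a given peak AM frequency;
# jittering each modulation period lowers AM regularity.
import numpy as np

def am_noise(duration_s, fs, am_hz, jitter=0.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.normal(size=t.size)            # white-noise carrier
    env = np.zeros_like(t)
    period, start = 1.0 / am_hz, 0.0
    while start < duration_s:                    # build envelope cycle by cycle
        this_period = period * (1 + jitter * rng.uniform(-1, 1))
        seg = (t >= start) & (t < start + this_period)
        env[seg] = 0.5 * (1 - np.cos(2 * np.pi * (t[seg] - start) / this_period))
        start += this_period
    return carrier * env

speech_like = am_noise(1.0, 22050, am_hz=5.0)               # ~5 Hz: speech-range AM
music_like = am_noise(1.0, 22050, am_hz=2.0, jitter=0.0)    # slower, regular AM
irregular = am_noise(1.0, 22050, am_hz=2.0, jitter=0.4)     # less regular AM
```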


Acoustic Stimulation; Auditory Perception; Music; Speech Perception; Humans; Male; Female; Adult; Auditory Perception/physiology; Acoustic Stimulation/methods; Speech Perception/physiology; Young Adult; Speech/physiology; Adolescent
6.
PLoS One ; 19(5): e0303565, 2024.
Article En | MEDLINE | ID: mdl-38781127

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of distinct tones (streams). A 3-class BCI using three tone sequences, perceived as three separate tone streams, was investigated and evaluated. Each musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each subject's right ear. Subjects were asked to attend to one of the three streams and to count the number of target stimuli in the attended stream. Sixty-four-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded at a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of each subject's selective attention. P300 activity was elicited by target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by target stimuli in the attended stream. In a 10-fold cross-validation test, classification accuracy over 80% was achieved for five subjects and over 75% for nine subjects. For subjects whose accuracy was lower than 75%, either P300 activity was also elicited for nonattended streams or the P300 amplitude was small. We conclude that the number of classes in BCI systems based on auditory stream segregation can be increased to three, and that these classes can be detected via a single ear without the aid of any visual modality.
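Classification "based on Riemannian geometry" typically follows a well-established recipe; a minimal sketch with the pyriemann package follows (the epoch dimensions and labels are synthetic, and the study's exact pipeline may differ):

```python
# Minimal Riemannian-geometry EEG classification pipeline: project epochs to
# covariance matrices and classify with the minimum-distance-to-mean (MDM)
# classifier under the Riemannian metric.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import XdawnCovariances
from pyriemann.classification import MDM

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 120, 64, 250           # sizes assumed for illustration
X = rng.normal(size=(n_trials, n_channels, n_times))   # epoched EEG
y = rng.integers(0, 3, size=n_trials)                  # attended stream: 0, 1, or 2

clf = make_pipeline(XdawnCovariances(nfilter=4), MDM(metric="riemann"))
scores = cross_val_score(clf, X, y, cv=10)             # 10-fold cross-validation
print(f"accuracy: {scores.mean():.2f}")
```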


Acoustic Stimulation; Attention; Brain-Computer Interfaces; Electroencephalography; Humans; Male; Female; Electroencephalography/methods; Adult; Attention/physiology; Acoustic Stimulation/methods; Auditory Perception/physiology; Young Adult; Event-Related Potentials, P300/physiology; Electrooculography/methods
7.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38700440

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input under ever-changing conditions. Previous studies have highlighted a trade-off between auditory change detection and visual selective attention; however, the relationship between them remains unclear. Here, we recorded electroencephalography signals from 106 healthy adults across three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of the event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship, and a potentially shared mechanism, between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals: such a limit could drive correlated individual differences in auditory change detection and visual selective attention, as well as within-subject competition between the two, with task-based modulation of visual attention causing a within-participant decrease in auditory change detection sensitivity.


Attention; Auditory Perception; Electroencephalography; Visual Perception; Humans; Attention/physiology; Male; Female; Young Adult; Adult; Auditory Perception/physiology; Visual Perception/physiology; Acoustic Stimulation/methods; Photic Stimulation/methods; Evoked Potentials/physiology; Brain/physiology; Adolescent
8.
Codas ; 36(2): e20230048, 2024.
Article Pt, En | MEDLINE | ID: mdl-38695432

PURPOSE: To correlate behavioral assessment results of central auditory processing with a self-perception questionnaire administered after acoustically controlled auditory training. METHODS: The study assessed 10 individuals with a mean age of 44.5 years who had suffered mild traumatic brain injury. They underwent behavioral assessment of central auditory processing and answered the Formal Auditory Training self-perception questionnaire after the therapeutic intervention. The questionnaire addresses auditory perception, understanding of orders, requests to repeat statements, occurrence of misunderstandings, attention span, auditory performance in noisy environments, telephone communication, and self-esteem; patients were asked to indicate how often each listed behavior occurred. RESULTS: Figure-ground, sequential memory for sounds, and temporal processing correlated with improvement in following instructions, fewer requests to repeat statements, increased attention span, and improved communication and understanding on the phone and when watching TV. CONCLUSION: Auditory closure, figure-ground, and temporal processing had improved in the assessment after the acoustically controlled auditory training, and there were fewer auditory behavior complaints.


Auditory Perception; Self Concept; Humans; Adult; Male; Female; Auditory Perception/physiology; Surveys and Questionnaires; Middle Aged; Brain Concussion/psychology; Brain Concussion/rehabilitation; Acoustic Stimulation/methods; Young Adult
9.
eNeuro ; 11(5)2024 May.
Article En | MEDLINE | ID: mdl-38702187

Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that the MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that the MMN is composed of multiple subcomponents, each responding to a different level of temporal regularity. To probe the hypothesized subcomponents, we record human electroencephalography during an auditory local-global oddball paradigm in which the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictability at two hierarchical levels. We find that the size of the MMN is correlated with both probabilities and that the spatiotemporal structure of the MMN can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, one peaking early in the central-frontal area and the other peaking late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of the MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
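As an intuition pump for the two levels of regularity, here is a toy Python sketch of local (transition) versus global (sequence) surprise; the probabilities and the repetition rule are illustrative assumptions, not the paper's quantitative model:

```python
# Toy illustration of the two regularity levels: the local level tracks
# tone-to-tone transition probabilities, the global level tracks the
# probability of whole five-tone sequences; surprise at each level is the
# negative log probability of the observed event.
import numpy as np

p_transition = 0.8      # local: probability that a tone repeats the previous one
p_sequence = 0.9        # global: probability of the frequent sequence type

def local_surprise(seq):
    # Surprise of each transition under the repetition rule (A -> A expected).
    return [-np.log(p_transition if a == b else 1 - p_transition)
            for a, b in zip(seq, seq[1:])]

def global_surprise(seq, frequent=("A", "A", "A", "A", "B")):
    return -np.log(p_sequence if tuple(seq) == frequent else 1 - p_sequence)

print(local_surprise(list("AAAAB")))   # large surprise at the final deviant
print(global_surprise(list("AAAAB")))  # small: this sequence is globally common
```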


Acoustic Stimulation; Electroencephalography; Evoked Potentials, Auditory; Humans; Male; Female; Electroencephalography/methods; Young Adult; Adult; Evoked Potentials, Auditory/physiology; Acoustic Stimulation/methods; Auditory Perception/physiology; Brain/physiology; Brain Mapping; Adolescent
10.
Trends Hear ; 28: 23312165241246596, 2024.
Article En | MEDLINE | ID: mdl-38738341

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment and is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound field, and we assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal-hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrate that subcortical responses to continuous speech can be reliably measured in the sound field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data were sufficient for the TRFs of all participants to show clear wave V peaks for both earphone and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent between earphone and sound-field conditions and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks than earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
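For orientation, linear-TRF estimation boils down to a regularized lagged regression; the following self-contained sketch uses the stimulus-rectification variant (the auditory-nerve-model variant is omitted, and the sampling rate, lag range, and regularization are assumptions):

```python
# Sketch of TRF estimation by ridge-regularized lagged regression: the
# stimulus is half-wave rectified to approximate peripheral non-linearity,
# then regressed onto the EEG at multiple lags.
import numpy as np

def estimate_trf(stimulus, eeg, fs, max_lag_s=0.02, alpha=1e2):
    rectified = np.maximum(stimulus, 0.0)            # half-wave rectification
    n_lags = int(max_lag_s * fs)
    # Design matrix: columns are the rectified stimulus delayed by 0..n_lags-1 samples.
    X = np.column_stack([np.roll(rectified, lag) for lag in range(n_lags)])
    X[:n_lags] = 0.0                                 # discard wrap-around samples
    # Ridge solution: w = (X'X + alpha * I)^{-1} X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

fs = 4000                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
audio = rng.normal(size=fs * 10)                     # 10 s of stand-in "speech"
true_trf = np.exp(-np.arange(40) / 8) * np.sin(np.arange(40) / 3)  # toy kernel
eeg = np.convolve(np.maximum(audio, 0.0), true_trf)[: audio.size]
eeg += rng.normal(scale=5.0, size=audio.size)        # measurement noise
trf = estimate_trf(audio, eeg, fs)                   # estimated 0-20 ms TRF
```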


Acoustic Stimulation; Electroencephalography; Evoked Potentials, Auditory, Brain Stem; Speech Perception; Humans; Evoked Potentials, Auditory, Brain Stem/physiology; Male; Female; Speech Perception/physiology; Acoustic Stimulation/methods; Adult; Young Adult; Auditory Threshold/physiology; Time Factors; Cochlear Nerve/physiology; Healthy Volunteers
11.
J Neurosci ; 44(19)2024 May 08.
Article En | MEDLINE | ID: mdl-38561224

Coordinated neuronal activity has been shown to play an important role in information processing and transmission in the brain. However, current research predominantly focuses on the properties and functions of neuronal coordination in hippocampal and cortical areas, leaving subcortical regions relatively unexplored. In this study, we use single-unit recordings in female Sprague Dawley rats to investigate the properties and functions of groups of neurons exhibiting coordinated activity in the auditory thalamus, the medial geniculate body (MGB). We reliably identify coordinated neuronal ensembles (cNEs), groups of neurons that fire synchronously, in the MGB. We show that cNEs are not the result of false-positive detections or by-products of slow-state oscillations in anesthetized animals. We demonstrate that cNEs in the MGB have enhanced information-encoding properties relative to individual neurons. Their neuronal composition is stable between spontaneous and evoked activity, suggesting limited stimulus-induced ensemble dynamics. These MGB cNE properties are similar to those observed in cNEs in the primary auditory cortex (A1), suggesting that ensembles serve as a ubiquitous mechanism for organizing local networks and play a fundamental role in sensory processing within the brain.


Acoustic Stimulation; Geniculate Bodies; Neurons; Rats, Sprague-Dawley; Animals; Female; Rats; Neurons/physiology; Geniculate Bodies/physiology; Acoustic Stimulation/methods; Auditory Pathways/physiology; Action Potentials/physiology; Auditory Cortex/physiology; Auditory Cortex/cytology; Thalamus/physiology; Thalamus/cytology; Evoked Potentials, Auditory/physiology
12.
PeerJ ; 12: e17104, 2024.
Article En | MEDLINE | ID: mdl-38680894

Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal-hearing (NH) listeners benefit from is the interaural time difference (ITD), i.e., the difference between the arrival times of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective utilization of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through the envelope or through interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer than that of NH individuals, and it degrades further at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidally amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD (ITDFS) or envelope ITD (ITDENV). Multiple EEG responses were analyzed, including subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller or absent compared with those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, but these were smaller than those evoked by lower-carrier-frequency SAM tones. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order 40 > 160 > 80 > 320 Hz for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
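To clarify the ITDFS/ITDENV distinction, here is an illustrative stimulus sketch (the sam_tone helper and its parameters are assumptions based on the description above, not the study's actual stimulus code):

```python
# Sketch of the two ITD manipulations on a SAM tone: a fine-structure ITD
# delays the carrier only, an envelope ITD delays the modulation envelope
# only, producing left/right ear channels.
import numpy as np

def sam_tone(fs, dur, fc, fm, itd_fs=0.0, itd_env=0.0):
    """Return (left, right) channels of a sinusoidally amplitude-modulated tone.

    itd_fs  : fine-structure ITD in seconds (carrier delayed in the right ear)
    itd_env : envelope ITD in seconds (envelope delayed in the right ear)
    """
    t = np.arange(int(dur * fs)) / fs
    def channel(carrier_delay, env_delay):
        env = 0.5 * (1 - np.cos(2 * np.pi * fm * (t - env_delay)))
        return env * np.sin(2 * np.pi * fc * (t - carrier_delay))
    left = channel(0.0, 0.0)
    right = channel(itd_fs, itd_env)
    return left, right

fs = 48000
l1, r1 = sam_tone(fs, 1.0, fc=500, fm=40, itd_fs=500e-6)    # 500 us ITDFS
l2, r2 = sam_tone(fs, 1.0, fc=4000, fm=40, itd_env=500e-6)  # 500 us ITDENV
```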


Cochlear Implants; Cues; Electroencephalography; Humans; Electroencephalography/methods; Acoustic Stimulation/methods; Sound Localization/physiology; Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Time Factors
13.
Cereb Cortex ; 34(4)2024 Apr 01.
Article En | MEDLINE | ID: mdl-38687241

Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bidirectional, whereby higher-level linguistic processes (e.g., semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contributions of acoustic vs. linguistic processes to phoneme analysis. We show (i) that the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) that this modulation incorporates both acoustic and phonetic information. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.


Brain Mapping; Magnetic Resonance Imaging; Phonetics; Speech Perception; Humans; Speech Perception/physiology; Female; Magnetic Resonance Imaging/methods; Male; Adult; Young Adult; Linguistics; Acoustic Stimulation/methods; Comprehension/physiology; Brain/physiology; Brain/diagnostic imaging
14.
Autism Res ; 17(5): 1041-1052, 2024 May.
Article En | MEDLINE | ID: mdl-38661256

Research has shown that children on the autism spectrum and adults with high levels of autistic traits are less sensitive to audiovisual asynchrony than their neurotypical peers. However, this evidence has been limited to simultaneity judgments (SJ), which require participants to consider the timing of two cues together. Given evidence of partly divergent perceptual and neural mechanisms involved in making temporal order judgments (TOJ) and SJ, and given that SJ require a more global type of processing that may be impaired in autistic individuals, here we ask whether the observed differences in audiovisual temporal processing are task and stimulus specific. We examined the ability to detect audiovisual asynchrony in a group of 26 autistic adult males and a group of age- and IQ-matched neurotypical males. Participants were presented with beep-flash, point-light drumming, and face-voice displays with varying degrees of asynchrony and asked to make SJ and TOJ. The results indicated that autistic participants were less able to detect audiovisual asynchrony than the control group, but this effect was specific to SJ and to more complex social stimuli (e.g., face-voice) with stronger semantic correspondence between the cues, requiring a more global type of processing. This indicates that audiovisual temporal processing is not generally different in autistic individuals and that a similar level of performance could be achieved using a more local type of processing, informing multisensory integration theory as well as multisensory training aimed at aiding perceptual abilities in this population.


Auditory Perception; Autistic Disorder; Judgment; Visual Perception; Humans; Male; Judgment/physiology; Adult; Visual Perception/physiology; Auditory Perception/physiology; Young Adult; Autistic Disorder/physiopathology; Photic Stimulation/methods; Cues; Acoustic Stimulation/methods; Time Perception/physiology; Adolescent
15.
Cereb Cortex ; 34(4)2024 Apr 01.
Article En | MEDLINE | ID: mdl-38679480

Existing neuroimaging studies on neural correlates of musical familiarity often employ a familiar vs. unfamiliar contrast analysis. This singular analytical approach reveals associations between explicit musical memory and musical familiarity. However, is the neural activity associated with musical familiarity solely related to explicit musical memory, or could it also be related to implicit musical memory? To address this, we presented 130 song excerpts of varying familiarity to 21 participants. While acquiring their brain activity using functional magnetic resonance imaging (fMRI), we asked the participants to rate the familiarity of each song on a five-point scale. To comprehensively analyze the neural correlates of musical familiarity, we examined it from four perspectives: the intensity of local neural activity, patterns of local neural activity, global neural activity patterns, and functional connectivity. The results from these four approaches were consistent and revealed that musical familiarity is related to the activity of both explicit and implicit musical memory networks. Our findings suggest that: (1) musical familiarity is also associated with implicit musical memory, and (2) there is a cooperative and competitive interaction between the two types of musical memory in the perception of music.


Brain Mapping; Brain; Magnetic Resonance Imaging; Music; Recognition, Psychology; Humans; Music/psychology; Recognition, Psychology/physiology; Male; Female; Young Adult; Adult; Brain/physiology; Brain/diagnostic imaging; Brain Mapping/methods; Auditory Perception/physiology; Acoustic Stimulation/methods
16.
Neurobiol Dis ; 195: 106490, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38561111

The auditory oddball task is a mainstay in research on attention, novelty, and sensory prediction. How this task engages subcortical structures like the subthalamic nucleus and the substantia nigra pars reticulata is unclear. We administered an auditory oddball task while recording single-unit activity (35 units) and local field potentials (57 recordings) from the subthalamic nucleus and substantia nigra pars reticulata of 30 patients with Parkinson's disease undergoing deep brain stimulation surgery. We found tone-modulated and oddball-modulated units in both regions. Population activity differentiated oddball from standard trials from 200 ms to 1000 ms after the tone in both regions. In the substantia nigra, beta-band activity in the local field potential decreased following oddball tones. The oddball-related activity we observe may underlie attention, sensory prediction, or surprise-induced motor suppression.


Acoustic Stimulation; Deep Brain Stimulation; Parkinson Disease; Pars Reticulata; Subthalamic Nucleus; Humans; Subthalamic Nucleus/physiology; Male; Middle Aged; Female; Parkinson Disease/physiopathology; Parkinson Disease/therapy; Aged; Pars Reticulata/physiology; Deep Brain Stimulation/methods; Acoustic Stimulation/methods; Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Substantia Nigra/physiology; Adult
17.
eNeuro ; 11(5)2024 May.
Article En | MEDLINE | ID: mdl-38627064

Infrared neural stimulation (INS) is a promising neuromodulation method for clinical application, in part because of its low invasiveness: INS modulates the activity of neural tissue mainly through temperature changes. Additionally, INS may provide localized brain stimulation with less tissue damage. The inferior colliculus (IC) is a crucial auditory relay nucleus and a potential target for clinical application of INS to treat auditory diseases and develop artificial hearing devices. Here, using continuous INS at low to high power densities, we demonstrate laminar modulation of neural activity in the mouse IC in the presence and absence of sound. We investigated which INS parameters effectively modulate neural activity in a facilitatory or inhibitory manner. A mathematical model of INS-driven brain tissue was first simulated, temperature distributions were numerically estimated, and stimulus parameters were selected from the simulation results. Subsequently, INS was administered to the IC of anesthetized mice, and its modulatory effect on neural activity was measured electrophysiologically. We found that the modulatory effect of INS on spontaneous neural activity was bidirectional, ranging from facilitation to inhibition. The effect on sound-evoked responses was exclusively inhibitory at all examined stimulus intensities. This study thus provides important physiological evidence on the response properties of IC neurons to INS. Overall, INS may support the development of new therapies for neurological disorders and of functional support devices for auditory central processing.


Inferior Colliculi; Infrared Rays; Animals; Inferior Colliculi/physiology; Mice; Male; Photic Stimulation/methods; Acoustic Stimulation/methods; Neurons/physiology; Mice, Inbred C57BL; Models, Neurological; Evoked Potentials, Auditory/physiology
18.
Ann N Y Acad Sci ; 1535(1): 121-136, 2024 May.
Article En | MEDLINE | ID: mdl-38566486

While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and low surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.


Auditory Perception; Emotions; Music; Reward; Music/psychology; Humans; Male; Female; Emotions/physiology; Adult; Auditory Perception/physiology; Pleasure/physiology; Young Adult; Acoustic Stimulation/methods
19.
J Neurosci ; 44(21)2024 May 22.
Article En | MEDLINE | ID: mdl-38664010

The natural environment challenges the brain to prioritize the processing of salient stimuli. The barn owl, a sound localization specialist, possesses a circuit called the midbrain stimulus selection network, dedicated to representing the location of the most salient stimulus when stimuli occur concurrently. Previous competition studies using unimodal (visual) and bimodal (visual and auditory) stimuli have shown that relative stimulus strength is encoded in spike response rates. However, open questions remain concerning the effects of auditory-auditory competition on coding. To this end, we present diverse auditory competitors (concurrent flat noise and amplitude-modulated noise) and record neural responses of awake barn owls of both sexes in two successive midbrain space maps, the external nucleus of the inferior colliculus (ICx) and the optic tectum (OT). While both the ICx and OT exhibit a topographic map of auditory space, the OT also integrates visual input and is part of the global-inhibitory midbrain stimulus selection network. Through comparative investigation of these regions, we show that while increasing the strength of a competitor sound decreases the spike response rates of spatially distant neurons in both regions, relative strength determines spike train synchrony of nearby units only in the OT. Furthermore, changes in synchrony driven by sound competition in the OT are correlated with gamma-range oscillations of local field potentials associated with input from the midbrain stimulus selection network. These results suggest that modulation of spiking synchrony between units by gamma oscillations is an emergent coding scheme representing the relative strength of concurrent stimuli, which may have relevant implications for downstream readout.


Acoustic Stimulation; Inferior Colliculi; Sound Localization; Strigiformes; Animals; Strigiformes/physiology; Female; Male; Acoustic Stimulation/methods; Sound Localization/physiology; Inferior Colliculi/physiology; Mesencephalon/physiology; Auditory Perception/physiology; Brain Mapping; Auditory Pathways/physiology; Neurons/physiology; Action Potentials/physiology
20.
Behav Res Methods ; 56(4): 3814-3830, 2024 Apr.
Article En | MEDLINE | ID: mdl-38684625

The ability to detect the absolute location of sensory stimuli can be quantified with either error-based metrics derived from single-trial localization errors or regression-based metrics derived from a linear regression of localization responses on the true stimulus locations. Here we tested the agreement between these two approaches in estimating accuracy and precision in a large sample of 188 subjects who localized auditory stimuli from different azimuthal locations. A subsample of 57 subjects was subsequently exposed to audiovisual stimuli with a consistent spatial disparity before performing the sound localization test again, allowing us to additionally test which of the different metrics best assessed correlations between the amount of crossmodal spatial recalibration and baseline localization performance. First, our findings support a distinction between accuracy and precision. Localization accuracy was mainly reflected in the overall spatial bias and was moderately correlated with precision metrics. However, in our data, the variability of single-trial localization errors (the variable error in error-based metrics) and the amount by which the eccentricity of target locations was overestimated (the slope in regression-based metrics) were highly correlated, suggesting that intercorrelations between individual metrics need to be carefully considered in spatial perception studies. Second, exposure to spatially discrepant audiovisual stimuli resulted in a shift in bias toward the side of the visual stimuli (the ventriloquism aftereffect) but did not affect localization precision. The size of the aftereffect shift in bias was at least partly explainable by nonspecific test repetition effects, highlighting the need to account for inter-individual baseline differences in studies of spatial learning.
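Since the two metric families are central here, a short worked sketch may help; the simulated response model (slope 1.2, bias of 3 degrees, Gaussian noise) is an assumption for illustration only:

```python
# Worked sketch of the two families of localization metrics: error-based
# (constant and variable error) and regression-based (slope and intercept of
# responses on true azimuths), computed on simulated data.
import numpy as np

rng = np.random.default_rng(4)
targets = np.tile(np.array([-60, -30, 0, 30, 60]), 20)   # azimuths in degrees
# Simulated responses: eccentricity overestimated (slope > 1), small bias, noise.
responses = 1.2 * targets + 3.0 + rng.normal(scale=8.0, size=targets.size)

errors = responses - targets
constant_error = errors.mean()                 # overall spatial bias (accuracy)
variable_error = errors.std(ddof=1)            # trial-to-trial variability (precision)

slope, intercept = np.polyfit(targets, responses, deg=1)
print(f"constant error {constant_error:.1f} deg, variable error {variable_error:.1f} deg")
print(f"regression slope {slope:.2f}, intercept {intercept:.1f} deg")
```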


Space Perception; Humans; Space Perception/physiology; Female; Male; Adult; Sound Localization; Photic Stimulation; Visual Perception/physiology; Young Adult; Acoustic Stimulation/methods; Auditory Perception/physiology
...