Results 1 - 20 of 8,475
1.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38879757

ABSTRACT

Reactions to novelty manifesting as mismatch negativity were studied in the rat brain. During dissociative anesthesia, mismatch negativity-like waves were recorded from the somatosensory cortex using an epidural 32-electrode array. The experimental animals were 7 wild-type Wistar rats and 3 transgenic rats. During high-dose anesthesia, deviant 1,500 Hz tones were presented randomly among many standard 1,000 Hz tones in the oddball paradigm. "Deviant minus standard_before_deviant" difference waves were calculated using both the classical method of Näätänen and the method of cross-correlation of sub-averages. Both methods gave consistent results: an early phasic N40 component and a later tonic N100-200 component (the mismatch negativity itself). The power of the gamma and delta rhythms and the frequency of down-states (periods of suppressed activity) were assessed. In all rats, the amplitude of the tonic component grew with increasing sedation depth, accompanied by a decrease in gamma power, an increase in delta power, and a higher frequency of down-states. The earlier phasic frontocentral component is associated with deviance detection, while the later tonic one over the auditory cortex reflects the orienting reaction. Under anesthesia, this slow mismatch negativity-like wave most likely reflects the tendency of the system to respond to any influences with delta waves, K-complexes, and down-states, or to produce them spontaneously.


Subject(s)
Rats, Wistar , Animals , Male , Acoustic Stimulation/methods , Electroencephalography/methods , Rats , Rats, Transgenic , Anesthetics, Dissociative/administration & dosage , Anesthetics, Dissociative/pharmacology , Evoked Potentials, Auditory/physiology , Somatosensory Cortex/physiology , Gamma Rhythm/physiology , Delta Rhythm/physiology , Delta Rhythm/drug effects
2.
J Acoust Soc Am ; 155(6): 3639-3653, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38836771

ABSTRACT

The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. In this article, building on least-squares deconvolution, we extend the procedure to a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of multi-response deconvolution increases significantly with the number of responses to be deconvolved, which restricts its applicability in practical situations. To alleviate this restriction, we propose performing the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least-squares estimation of the responses with a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.
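The least-squares deconvolution idea in this abstract can be sketched in a few lines. Everything below is a synthetic toy (signal sizes, waveforms, and the 20-sample inter-stimulus interval are invented, and NumPy stands in for the paper's MATLAB/Octave code): a design matrix stacks one block of columns per stimulus category, and a single least-squares solve recovers both overlapping responses jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sizes: responses last 60 samples but stimuli arrive every 20,
# so responses overlap and plain epoch averaging fails.
n_samples, resp_len = 2000, 60
true_resp = [np.sin(2 * np.pi * np.arange(resp_len) / resp_len),  # category 0
             np.hanning(resp_len)]                                # category 1

onsets = np.arange(0, n_samples - resp_len, 20)   # regular onset train
cats = rng.integers(0, 2, size=onsets.size)       # random category per stimulus

# Design matrix with one block of columns per stimulus category
X = np.zeros((n_samples, 2 * resp_len))
for onset, cat in zip(onsets, cats):
    X[onset:onset + resp_len, cat * resp_len:(cat + 1) * resp_len] += np.eye(resp_len)

# Synthetic recording: overlapping responses plus noise
y = X @ np.concatenate(true_resp) + 0.1 * rng.standard_normal(n_samples)

# One least-squares solve deconvolves both category responses at once
est, *_ = np.linalg.lstsq(X, y, rcond=None)
r0, r1 = est[:resp_len], est[resp_len:]
```

The paper's reduced representation space would shrink the column blocks of `X` before solving; the toy above solves in the full space, which is exactly the computational cost the paper seeks to avoid.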


Subject(s)
Acoustic Stimulation , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Humans , Acoustic Stimulation/methods , Male , Adult , Electroencephalography/methods , Female , Least-Squares Analysis , Young Adult , Signal Processing, Computer-Assisted , Reaction Time , Auditory Perception/physiology
3.
J Neurodev Disord ; 16(1): 28, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831410

ABSTRACT

BACKGROUND: In the search for objective tools to quantify neural function in Rett Syndrome (RTT), which are crucial in the evaluation of therapeutic efficacy in clinical trials, recordings of sensory-perceptual functioning using event-related potential (ERP) approaches have emerged as potentially powerful tools. Considerable work points to highly anomalous auditory evoked potentials (AEPs) in RTT. However, an assumption of the typical signal-averaging method used to derive these measures is "stationarity" of the underlying responses - i.e., that neural responses to each input are highly stereotyped. An alternate possibility is that responses to repeated stimuli are highly variable in RTT. If so, this would significantly impact the validity of assumptions about underlying neural dysfunction and likely lead to overestimation of the underlying neuropathology. To assess this possibility, analyses at the single-trial level assessing signal-to-noise ratio (SNR), inter-trial variability (ITV), and inter-trial phase coherence (ITPC) are necessary. METHODS: AEPs were recorded to simple 100 Hz tones from 18 RTT participants and 27 age-matched typically developing (TD) controls (ages: 6-22 years). We applied standard AEP averaging, as well as measures of neuronal reliability at the single-trial level (i.e., SNR, ITV, ITPC). To separate signal-carrying components from non-neural noise sources, we also applied a denoising source separation (DSS) algorithm and then repeated the reliability measures. RESULTS: Substantially increased ITV, lower SNRs, and reduced ITPC were observed in the auditory responses of RTT participants, supporting a "neural unreliability" account. Application of the DSS technique made it clear that non-neural noise sources contribute to overestimation of the extent of processing deficits in RTT. Post-DSS, ITV measures were substantially reduced, so much so that pre-DSS ITV differences between the RTT and TD populations were no longer detected. In the case of SNR and ITPC, DSS substantially improved these estimates in the RTT population, but robust differences between RTT and TD were still fully evident. CONCLUSIONS: To accurately represent the degree of neural dysfunction in RTT using the ERP technique, consideration of response reliability at the single-trial level is highly advised. Non-neural sources of noise lead to overestimation of the degree of pathological processing in RTT, and denoising source separation techniques during signal processing substantially ameliorate this issue.
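The three single-trial reliability measures named in this abstract (SNR, ITV, ITPC) have standard definitions that are easy to sketch. The simulation below is a generic illustration with invented parameters, not the study's data or pipeline: phase-locked trials score high on ITPC and SNR and low on ITV, while trials whose response phase varies randomly do the opposite.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_trials, n_times = 500, 200, 500     # invented: 1 s epochs, 200 trials
t = np.arange(n_times) / fs
f0 = 10.0                                 # probe frequency (Hz)

def single_trial_metrics(trials, f, fs):
    """SNR, inter-trial variability, and inter-trial phase coherence at frequency f."""
    time = np.arange(trials.shape[1]) / fs
    # Project each trial onto a complex exponential at frequency f
    coef = trials @ np.exp(-2j * np.pi * f * time) / trials.shape[1]
    itpc = np.abs(np.mean(coef / np.abs(coef)))            # phase consistency, 0..1
    erp = trials.mean(axis=0)                              # signal-averaged ERP
    itv = trials.std(axis=0).mean()                        # mean across-trial variability
    snr = np.var(erp) / np.var(trials - erp, axis=1).mean()
    return snr, itv, itpc

# Reliable responses: phase-locked 10 Hz wave in noise
reliable = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal((n_trials, n_times))
# Unreliable responses: the same wave with a random phase on every trial
phases = rng.uniform(0, 2 * np.pi, (n_trials, 1))
unreliable = np.sin(2 * np.pi * f0 * t + phases) + 0.5 * rng.standard_normal((n_trials, n_times))

snr_r, itv_r, itpc_r = single_trial_metrics(reliable, f0, fs)
snr_u, itv_u, itpc_u = single_trial_metrics(unreliable, f0, fs)
```

Note how the unreliable condition leaves the single-trial amplitudes intact but destroys the average ERP, which is the averaging artifact the abstract warns about.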


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Rett Syndrome , Humans , Rett Syndrome/physiopathology , Rett Syndrome/complications , Adolescent , Female , Evoked Potentials, Auditory/physiology , Child , Young Adult , Auditory Perception/physiology , Reproducibility of Results , Acoustic Stimulation , Male , Signal-To-Noise Ratio , Adult
4.
Ann N Y Acad Sci ; 1536(1): 167-176, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38829709

ABSTRACT

Time discrimination, a critical aspect of auditory perception, is influenced by numerous factors. Previous research has suggested that musical experience can restructure the brain, thereby enhancing time discrimination. However, this phenomenon remains underexplored. In this study, we seek to elucidate the enhancing effect of musical experience on time discrimination using both behavioral and electroencephalographic (EEG) methods. Additionally, we aim to explore, through brain connectivity analysis, whether increased connectivity in brain regions associated with auditory perception contributes to the time discrimination advantage conferred by musical experience. The results show that the music-experienced group demonstrated higher behavioral accuracy, shorter reaction times, and shorter P3 and mismatch-response latencies than the control group. Furthermore, the music-experienced group had higher connectivity in the left temporal lobe. In summary, our research underscores the positive impact of musical experience on time discrimination and suggests that enhanced connectivity in brain regions linked to auditory perception may underlie this enhancement.


Subject(s)
Auditory Perception , Electroencephalography , Music , Humans , Music/psychology , Male , Auditory Perception/physiology , Female , Adult , Young Adult , Time Perception/physiology , Reaction Time/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Evoked Potentials, Auditory/physiology , Brain/physiology
5.
eNeuro ; 11(6)2024 Jun.
Article in English | MEDLINE | ID: mdl-38834300

ABSTRACT

Following repetitive visual stimulation, post hoc phase analysis finds that visually evoked response magnitudes vary with the cortical alpha oscillation phase that temporally coincides with the sensory stimulus. This approach has not successfully revealed an alpha phase dependence for auditory evoked or induced responses. Here, we test the feasibility of tracking alpha with scalp electroencephalogram (EEG) recordings and play sounds phase-locked to individualized alpha phases in real time using a novel end-point corrected Hilbert transform (ecHT) algorithm implemented on a research device. Based on prior work, we hypothesize that sound-evoked and induced responses vary with the alpha phase at sound onset and the alpha phase that coincides with the early sound-evoked event-related potential (ERP) measured with EEG. Thus, we use each subject's individualized alpha frequency (IAF) and individual auditory ERP latency to define target trough and peak alpha phases that allow an early component of the auditory ERP to align to the estimated poststimulus peak and trough phases, respectively. With this closed-loop and individualized approach, we find opposing alpha phase-dependent effects on the auditory ERP and alpha oscillations that follow stimulus onset. Trough- and peak-phase-locked sounds result in distinct evoked and induced poststimulus alpha level and frequency modulations. Though additional studies are needed to localize the sources underlying these phase-dependent effects, these results suggest a general principle for alpha phase dependence of sensory processing that includes the auditory system. Moreover, this study demonstrates the feasibility of using individualized neurophysiological indices to deliver automated, closed-loop, phase-locked auditory stimulation.
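A rough sketch of the endpoint-phase idea follows. This is not the published ecHT algorithm, only a simplified stand-in (an FFT-based analytic signal with a Gaussian band-pass around an assumed 10 Hz IAF), but it shows how a phase estimate at the end of a buffer can be turned into a wait time for phase-locked sound delivery.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, iaf = 1000.0, 10.0    # sampling rate (Hz) and an assumed individual alpha frequency

def endpoint_phase(buffer, f0, fs, bw=2.0):
    """Estimate oscillation phase at the last sample of a buffer:
    FFT, analytic-signal mask, Gaussian band-pass around f0, inverse FFT.
    A simplified stand-in for the endpoint-corrected Hilbert transform."""
    n = buffer.size
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    mask = np.where(freqs > 0, 2.0, 0.0)        # analytic signal: keep positive freqs
    mask[freqs == 0] = 1.0
    gauss = np.exp(-0.5 * ((np.abs(freqs) - f0) / bw) ** 2)
    analytic = np.fft.ifft(np.fft.fft(buffer) * mask * gauss)
    return np.angle(analytic[-1])               # phase at the buffer's end point

# One second of noisy "alpha" with a known phase trajectory
t = np.arange(int(fs)) / fs
buffer = np.sin(2 * np.pi * iaf * t) + 0.2 * rng.standard_normal(t.size)
phase = endpoint_phase(buffer, iaf, fs)

# Delay (s) until the oscillation next reaches a chosen target phase
target = np.pi / 2
wait = ((target - phase) % (2 * np.pi)) / (2 * np.pi * iaf)
```

In a real closed-loop system the device would schedule the sound `wait` seconds ahead; the published ecHT additionally corrects the spectral edge distortion that a naive Hilbert transform suffers at the buffer endpoint.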


Subject(s)
Acoustic Stimulation , Alpha Rhythm , Electroencephalography , Evoked Potentials, Auditory , Humans , Acoustic Stimulation/methods , Evoked Potentials, Auditory/physiology , Male , Female , Electroencephalography/methods , Alpha Rhythm/physiology , Adult , Young Adult , Brain/physiology , Auditory Perception/physiology , Algorithms , Feasibility Studies
6.
PLoS Biol ; 22(6): e3002665, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38935589

ABSTRACT

Loss of synapses between spiral ganglion neurons and inner hair cells (IHC synaptopathy) leads to an auditory neuropathy called hidden hearing loss (HHL) characterized by normal auditory thresholds but reduced amplitude of sound-evoked auditory potentials. It has been proposed that synaptopathy and HHL result in poor performance in challenging hearing tasks despite a normal audiogram. However, this has only been tested in animals after exposure to noise or ototoxic drugs, which can cause deficits beyond synaptopathy. Furthermore, the impact of supernumerary synapses on auditory processing has not been evaluated. Here, we studied mice in which IHC synapse counts were increased or decreased by altering neurotrophin 3 (Ntf3) expression in IHC supporting cells. As we previously showed, postnatal Ntf3 knockdown or overexpression reduces or increases, respectively, IHC synapse density and suprathreshold amplitude of sound-evoked auditory potentials without changing cochlear thresholds. We now show that IHC synapse density does not influence the magnitude of the acoustic startle reflex or its prepulse inhibition. In contrast, gap-prepulse inhibition, a behavioral test for auditory temporal processing, is reduced or enhanced according to Ntf3 expression levels. These results indicate that IHC synaptopathy causes temporal processing deficits predicted in HHL. Furthermore, the improvement in temporal acuity achieved by increasing Ntf3 expression and synapse density suggests a therapeutic strategy for improving hearing in noise for individuals with synaptopathy of various etiologies.


Subject(s)
Hair Cells, Auditory, Inner , Neurotrophin 3 , Synapses , Animals , Hair Cells, Auditory, Inner/metabolism , Hair Cells, Auditory, Inner/pathology , Synapses/metabolism , Synapses/physiology , Neurotrophin 3/metabolism , Neurotrophin 3/genetics , Mice , Auditory Threshold , Evoked Potentials, Auditory/physiology , Reflex, Startle/physiology , Auditory Perception/physiology , Spiral Ganglion/metabolism , Female , Male , Hearing Loss, Hidden
7.
Adv Exp Med Biol ; 1455: 227-256, 2024.
Article in English | MEDLINE | ID: mdl-38918355

ABSTRACT

The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity such as a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and musical rhythm in particular, and we discuss the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we will discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.


Subject(s)
Auditory Perception , Music , Primates , Humans , Animals , Auditory Perception/physiology , Infant, Newborn , Adult , Primates/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology , Electroencephalography
8.
Eur J Neurosci ; 60(1): 3812-3820, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38711271

ABSTRACT

Regularities in our surroundings lead to predictions about upcoming events. Previous research has shown that omitted sounds during otherwise regular tone sequences elicit frequency-specific neural activity related to the upcoming but omitted tone. We tested whether this neural response depends on the unpredictability of the omission. We therefore recorded magnetoencephalography (MEG) data while participants listened to ordered or random tone sequences with omissions occurring either in an ordered fashion or randomly. Multivariate pattern analysis shows that the frequency-specific neural pattern during omissions within ordered tone sequences occurs independently of the regularity of the omissions. These results suggest that auditory predictions based on sensory experience are not immediately updated by violations of those expectations.


Subject(s)
Acoustic Stimulation , Auditory Perception , Magnetoencephalography , Humans , Male , Female , Magnetoencephalography/methods , Adult , Auditory Perception/physiology , Acoustic Stimulation/methods , Young Adult , Evoked Potentials, Auditory/physiology , Auditory Cortex/physiology
9.
Clin Neurophysiol ; 163: 102-111, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38729074

ABSTRACT

OBJECTIVE: We investigated the role of the transverse temporal gyrus and adjacent cortex (TTG+) in facial expressions and perioral movements. METHODS: In 31 patients undergoing stereo-electroencephalography monitoring, we describe behavioral responses elicited by electrical stimulation within the TTG+. Task-induced high-gamma modulation (HGM), auditory evoked responses, and resting-state connectivity were used to characterize the cortical sites showing different types of responses to electrical stimulation. RESULTS: Changes in facial expressions and perioral movements were elicited by electrical stimulation within the TTG+ in 9 (29%) and 10 (32%) patients, respectively, in addition to the more common language responses (naming interruptions, auditory hallucinations, paraphasic errors). All functional sites showed auditory task-induced HGM and evoked responses, validating their location within the auditory cortex; however, motor sites showed lower peak amplitudes and longer peak latencies compared with language sites. Significant first-degree connections for motor sites included the precentral, anterior cingulate, parahippocampal, and anterior insular gyri, whereas those for language sites included the posterior superior temporal, posterior middle temporal, inferior frontal, supramarginal, and angular gyri. CONCLUSIONS: Multimodal data suggest that the TTG+ may participate in auditory-motor integration. SIGNIFICANCE: The TTG+ likely participates in facial expressions in response to emotional cues during auditory discourse.


Subject(s)
Auditory Cortex , Emotions , Facial Expression , Humans , Male , Female , Adult , Middle Aged , Auditory Cortex/physiology , Emotions/physiology , Evoked Potentials, Auditory/physiology , Electroencephalography , Aged , Young Adult , Electric Stimulation
11.
Otol Neurotol ; 45(6): 643-650, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38769101

ABSTRACT

OBJECTIVE: This study aimed to evaluate the differences in electrically evoked compound action potential (ECAP) thresholds and postoperative mapping current (T) levels between electrode types after cochlear implantation, the correlation between ECAP thresholds and T levels, and the performance of machine learning techniques in predicting postoperative T levels. STUDY DESIGN: Retrospective case review. SETTING: Tertiary hospital. PATIENTS: We reviewed the charts of 124 ears of children with severe-to-profound hearing loss who had undergone cochlear implantation. INTERVENTIONS: We compared ECAP thresholds and T levels from different electrodes, calculated correlations between ECAP thresholds and T levels, and created five prediction models of T levels at switch-on and 6 months after surgery. MAIN OUTCOME MEASURES: The accuracy of prediction in postoperative mapping current (T) levels. RESULTS: The ECAP thresholds of the slim modiolar electrodes were significantly lower than those of the straight electrodes on the apical side. However, there was no significant difference in the neural response telemetry thresholds between the two electrodes on the basal side. Lasso regression achieved the most accurate prediction of T levels at switch-on, and the random forest algorithm achieved the most accurate prediction of T levels 6 months after surgery in this dataset. CONCLUSION: Machine learning techniques could be useful for accurately predicting postoperative T levels after cochlear implantation in children.
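The two model families this abstract compares (lasso regression and random forests) can be benchmarked with a few lines of scikit-learn. The data below are synthetic stand-ins (the ECAP threshold features, age variable, and their linear link to T levels are all invented); only the cross-validated comparison pattern mirrors the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_ears = 124   # matches the cohort size; everything else below is invented

# Hypothetical features: ECAP thresholds at apical/medial/basal sites, plus age at implantation
X = np.column_stack([
    rng.normal(180, 20, n_ears),   # apical ECAP threshold (arbitrary units)
    rng.normal(190, 20, n_ears),   # medial ECAP threshold
    rng.normal(200, 20, n_ears),   # basal ECAP threshold
    rng.uniform(1, 6, n_ears),     # age at implantation (years)
])
# Invented linear link from features to mapping (T) levels, plus noise
y = 0.4 * X[:, 0] + 0.3 * X[:, 2] - 2.0 * X[:, 3] + rng.normal(0, 5, n_ears)

lasso_r2 = cross_val_score(Lasso(alpha=1.0, max_iter=10000), X, y,
                           cv=5, scoring="r2").mean()
rf_r2 = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0), X, y,
                        cv=5, scoring="r2").mean()
print(f"Lasso CV R^2 = {lasso_r2:.2f}, random forest CV R^2 = {rf_r2:.2f}")
```

On the study's real data the two winners differed by time point (lasso at switch-on, random forest at 6 months); cross-validated R^2 or error, as above, is the standard way such a comparison is scored.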


Subject(s)
Cochlear Implantation , Cochlear Implants , Machine Learning , Humans , Cochlear Implantation/methods , Male , Female , Retrospective Studies , Child, Preschool , Child , Infant , Prosthesis Fitting/methods , Evoked Potentials, Auditory/physiology
12.
Biol Futur ; 75(1): 145-158, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38805154

ABSTRACT

The speech multi-feature MMN (mismatch negativity) paradigm offers a means to explore the neurocognitive background of the processing of multiple speech features in a short time by capturing the time-locked electrophysiological activity of the brain known as event-related brain potentials (ERPs). Originating from the pioneering work of Näätänen et al. (Clin Neurophysiol 115:140-144, 2004), this paradigm presents several infrequent deviant stimuli alongside standard ones, each differing in a particular speech feature. In this study, we aimed to refine the multi-feature MMN paradigm used previously to encompass both segmental and suprasegmental (prosodic) features of speech. In the experiment, a two-syllable pseudoword was presented as the standard, and the deviant stimuli included alterations in consonants (deviation by place, or by place and manner, of articulation), vowels (deviation by place or manner of articulation), and the stress pattern of the first syllable of the pseudoword. Results indicated the emergence of MMN components across all segmental and prosodic contrasts, with the expected fronto-central amplitude distribution. Subsequent analyses revealed subtle differences in MMN responses to the deviants, suggesting varying sensitivity to phonetic contrasts. Furthermore, individual differences in MMN amplitudes were noted, partially attributable to participants' musical and language backgrounds. These findings underscore the utility of the multi-feature MMN paradigm for rapid and efficient investigation of the neurocognitive mechanisms underlying speech processing. Moreover, the paradigm demonstrated potential for use in further research on speech processing abilities in various populations.


Subject(s)
Speech Perception , Adult , Female , Humans , Male , Young Adult , Electroencephalography/methods , Evoked Potentials/physiology , Evoked Potentials, Auditory/physiology , Speech Perception/physiology
13.
Sci Rep ; 14(1): 11164, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750185

ABSTRACT

Electrophysiological studies have investigated predictive processing in music by examining event-related potentials (ERPs) elicited by the violation of musical expectations. While several studies have reported that the predictability of stimuli can modulate the amplitude of ERPs, it is unclear how specific the representation of the expected note is. The present study addressed this issue by recording omitted stimulus potentials (OSPs) to avoid contamination of top-down predictive processing with bottom-up sensory processing. Decoding of the omitted content was attempted using a support vector machine (SVM) classifier. ERP responses to the omission of four target notes (E, F, A, and C) at the same position in familiar and unfamiliar melodies were recorded from 25 participants. The results showed that the omission N1 was larger in the familiar-melody condition than in the unfamiliar-melody condition. The decoding accuracy for the four omitted notes was significantly higher in the familiar-melody condition than in the unfamiliar-melody condition. These results suggest that OSPs contain discriminable predictive information and that the higher the predictability, the more specific the representation of the expected note that is generated.
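A minimal sketch of the decoding analysis, with invented data: note-specific "omission response" patterns are simulated for a familiar (informative) condition and an unfamiliar (noise-only) condition, and a linear SVM is scored by cross-validation against the four-way chance level of 25%. Trial counts, feature dimensions, and effect sizes are all assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_per_note, n_feat = 60, 32                  # invented trial counts and EEG feature count
notes = np.repeat(np.arange(4), n_per_note)  # labels for the four omitted notes (E, F, A, C)

def simulate_osp(separation):
    """Omission responses: a note-specific template scaled by `separation`, plus noise."""
    templates = rng.standard_normal((4, n_feat))
    return separation * templates[notes] + rng.standard_normal((notes.size, n_feat))

familiar = simulate_osp(0.6)    # predictable melody: responses carry note identity
unfamiliar = simulate_osp(0.0)  # unpredictable melody: noise only

clf = SVC(kernel="linear")
acc_familiar = cross_val_score(clf, familiar, notes, cv=5).mean()
acc_unfamiliar = cross_val_score(clf, unfamiliar, notes, cv=5).mean()
print(f"decoding accuracy: familiar {acc_familiar:.2f}, "
      f"unfamiliar {acc_unfamiliar:.2f} (chance 0.25)")
```

Cross-validation keeps the accuracy estimate unbiased: in the noise-only condition it hovers at chance rather than inflating, which is the comparison the study relies on.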


Subject(s)
Acoustic Stimulation , Electroencephalography , Music , Humans , Female , Male , Young Adult , Adult , Auditory Perception/physiology , Support Vector Machine , Evoked Potentials, Auditory/physiology , Evoked Potentials/physiology
14.
Am Ann Deaf ; 168(5): 241-257, 2024.
Article in English | MEDLINE | ID: mdl-38766937

ABSTRACT

Our study investigated differences in speech performance and neurophysiological responses between school-age children with unilateral hearing loss (UHL) who were otherwise typically developing and typically developing (TD) controls. We recruited a total of 16 primary school-age children (UHL = 9/TD = 7), who were screened by doctors at Shin Kong Wu-Ho-Su Memorial Hospital. We used the Peabody Picture Vocabulary Test-Revised (PPVT-R) to test word comprehension, and the PPVT-R percentile rank (PR) value was proportional to the auditory memory score (from The Children's Oral Comprehension Test) in both groups. We then assessed the latency and amplitude of the auditory ERP P300 and found that P300 latency in the UHL group was prolonged compared with that in the TD group. Although students with UHL have typical hearing in one ear, our results suggest that long-term UHL might cause atypical organization of the brain areas responsible for auditory processing, or even visual perception, contributing to speech delay and learning difficulties.


Subject(s)
Event-Related Potentials, P300 , Hearing Loss, Unilateral , Humans , Child , Event-Related Potentials, P300/physiology , Male , Female , Hearing Loss, Unilateral/physiopathology , Hearing Loss, Unilateral/rehabilitation , Reaction Time/physiology , Speech Perception/physiology , Evoked Potentials, Auditory/physiology , China , Case-Control Studies , Language , Comprehension
15.
Int J Pediatr Otorhinolaryngol ; 180: 111968, 2024 May.
Article in English | MEDLINE | ID: mdl-38714045

ABSTRACT

AIM & OBJECTIVES: The study aimed to compare P1 latency and P1-N1 amplitude with receptive and expressive language ages in children using a cochlear implant (CI) in one ear and a hearing aid (HA) in the non-implanted ear. METHODS: The study included 30 children, 18 males and 12 females, aged between 48 and 96 months. The age at which the children received the CI ranged from 42 to 69 months. A within-subject research design was used, and participants were selected through purposive sampling. Auditory late latency responses (ALLRs) were assessed using the Intelligent Hearing Systems equipment to measure P1 latency and P1-N1 amplitude. The Assessment Checklist for Speech-Language Skills (ACSLS) was employed to evaluate receptive and expressive language age. Both assessments were conducted after cochlear implantation. RESULTS: A total of 30 children participated in the study, with a mean implant age of 20.03 months (SD: 8.14 months). The mean P1 latency and P1-N1 amplitude were 129.50 ms (SD: 15.05 ms) and 6.93 µV (SD: 2.24 µV), respectively. Correlation analysis revealed no significant association between ALLR measures and receptive or expressive language ages. However, there was a significant negative correlation between P1 latency and implant age (Spearman's rho = -0.371, p = 0.043). CONCLUSIONS: The study suggests that P1 latency, which is indicative of auditory maturation, may not be a reliable marker for predicting language outcomes. It can be concluded that language development is likely influenced by factors beyond auditory maturation alone.


Subject(s)
Cochlear Implants , Language Development , Humans , Male , Female , Child, Preschool , Child , Cochlear Implantation/methods , Reaction Time/physiology , Deafness/surgery , Deafness/rehabilitation , Evoked Potentials, Auditory/physiology , Age Factors , Speech Perception/physiology
16.
J Integr Neurosci ; 23(5): 93, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38812381

ABSTRACT

BACKGROUND: Magnetoencephalography (MEG) is a non-invasive imaging technique for directly measuring the external magnetic field generated by synchronously activated pyramidal neurons in the brain. The optically pumped magnetometer (OPM), with its lower cost, non-cryogenic operation, movability, and user-friendly custom designs, has the potential to transform MEG-based functional neuroimaging. METHODS: An array of OPMs covering opposite sides of a subject's head is placed inside a magnetically shielded room (MSR), and responses evoked from the auditory cortices are measured. RESULTS: High signal-to-noise-ratio auditory evoked response fields (AEFs) were detected by a wearable OPM-MEG system in an MSR, for which a flexible helmet was specially designed to minimize the sensor-to-head distance, along with a set of bi-planar coils developed for background-field and gradient nulling. Neuronal current sources activated in the AEF experiments were localized, and the auditory cortices showed the highest activity. The performance of a hybrid optically pumped magnetometer-magnetoencephalography/electroencephalography (OPM-MEG/EEG) system was also assessed. CONCLUSIONS: The multi-channel OPM-MEG system performs well in a custom-built MSR equipped with bi-planar coils and detects human AEFs with a flexible helmet. Moreover, the similarities and differences between auditory evoked potentials (AEPs) and AEFs are discussed, and the operation of OPM-MEG sensors in conjunction with EEG electrodes provides an encouraging basis for the exploration of hybrid OPM-MEG/EEG systems.


Subject(s)
Auditory Cortex , Electroencephalography , Evoked Potentials, Auditory , Magnetoencephalography , Humans , Magnetoencephalography/instrumentation , Evoked Potentials, Auditory/physiology , Auditory Cortex/physiology , Electroencephalography/instrumentation , Electroencephalography/methods , Adult , Male
17.
Cell Rep ; 43(5): 114172, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38703366

ABSTRACT

Changes in sound-evoked responses in the auditory cortex (ACtx) occur during learning, but how learning alters neural responses in different ACtx subregions and changes their interactions is unclear. To address these questions, we developed an automated training and widefield imaging system to longitudinally track the neural activity of all mouse ACtx subregions during a tone discrimination task. We find that responses in primary ACtx are highly informative of learned stimuli and behavioral outcomes throughout training. In contrast, representations of behavioral outcomes in the dorsal posterior auditory field, learned stimuli in the dorsal anterior auditory field, and inter-regional correlations between primary and higher-order areas are enhanced with training. Moreover, ACtx response changes vary between stimuli, and such differences display lag synchronization with the learning rate. These results indicate that learning alters functional connections between ACtx subregions, inducing region-specific modulations by propagating behavioral information from primary to higher-order areas.


Subject(s)
Auditory Cortex , Discrimination Learning , Auditory Cortex/physiology , Animals , Discrimination Learning/physiology , Mice , Acoustic Stimulation , Auditory Perception/physiology , Male , Female , Mice, Inbred C57BL , Evoked Potentials, Auditory/physiology
18.
Nat Commun ; 15(1): 3941, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729937

ABSTRACT

A relevant question about communication between cortical areas is whether their interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex and between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance, feedback and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Callithrix , Electrocorticography , Animals , Auditory Cortex/physiology , Callithrix/physiology , Male , Female , Evoked Potentials/physiology , Frontal Lobe/physiology , Evoked Potentials, Auditory/physiology , Auditory Perception/physiology , Brain Mapping/methods
19.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38702187

ABSTRACT

Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that MMN is composed of multiple subcomponents, each responding to different levels of temporal regularity. To probe the hypothesized subcomponents in MMN, we record human electroencephalography during an auditory local-global oddball paradigm where the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictabilities at two hierarchical levels. We find that the size of MMN is correlated with both probabilities, and that the spatiotemporal structure of MMN can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, with one peaking early in the central-frontal area and the other late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors that are tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
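The local-global paradigm this abstract refers to dissociates the two regularity levels by nesting them: each trial is a short tone sequence whose final tone is locally standard or deviant, while the block-level frequency of each sequence type sets the global regularity. A minimal generator sketch (parameter names and tone frequencies are illustrative, not taken from the study):

```python
import random

def local_global_block(n_trials=100, p_global_deviant=0.2, seed=0):
    """Sketch of one local-global oddball block.

    Each trial is five tones. In this block the globally frequent
    pattern is the locally deviant xxxxY, so the rare xxxxx trials
    violate the global regularity while obeying the local one.
    Returns a list of (tone_sequence, label) pairs.
    """
    rng = random.Random(seed)
    X, Y = 1000, 1500  # tone frequencies in Hz, arbitrary for the sketch
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_global_deviant:
            trials.append(([X] * 5, "global deviant"))        # xxxxx, rare here
        else:
            trials.append(([X] * 4 + [Y], "global standard"))  # xxxxY, frequent
    return trials
```

Swapping which pattern is frequent across blocks lets the analysis separate responses to local transition violations from responses to global sequence violations.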


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Humans , Male , Female , Electroencephalography/methods , Young Adult , Adult , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/physiology , Brain Mapping , Adolescent
20.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38811162

ABSTRACT

This study compared the impact of spectral and temporal degradation on vocoded speech recognition between early-blind and sighted subjects. The participants included 25 early-blind subjects (30.32 ± 4.88 years; male:female, 14:11) and 25 age- and sex-matched sighted subjects. Tests included monosyllable recognition in noise at various signal-to-noise ratios (-18 to -4 dB), matrix sentence-in-noise recognition, and vocoded speech recognition with different numbers of channels (4, 8, 16, and 32) and temporal envelope cutoff frequencies (50 vs 500 Hz). Cortical-evoked potentials (N2 and P3b) were measured in response to spectrally and temporally degraded stimuli. The early-blind subjects displayed superior monosyllable and sentence recognition compared with sighted subjects (all p < 0.01). In the vocoded speech recognition test, a three-way repeated-measures analysis of variance (two groups × four channels × two cutoff frequencies) revealed significant main effects of group, channel, and cutoff frequency (all p < 0.001). Early-blind subjects showed increased sensitivity to spectral degradation for speech recognition, evident in the significant interaction between group and channel (p = 0.007). N2 responses in early-blind subjects exhibited shorter latency and greater amplitude in the 8-channel condition (p = 0.022 and 0.034, respectively) and shorter latency in the 16-channel condition (p = 0.049) compared with sighted subjects. In conclusion, early-blind subjects demonstrated speech recognition advantages over sighted subjects, even in the presence of spectral and temporal degradation. Spectral degradation had a greater impact on speech recognition in early-blind subjects, while the effect of temporal degradation was similar in both groups.
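The channel-vocoding manipulation in this abstract (varying channel count for spectral degradation and envelope cutoff for temporal degradation) follows the standard noise-vocoder recipe: band-split, extract each band's envelope, low-pass it, and re-impose it on band-limited noise. A minimal sketch under those assumptions, not the authors' stimulus code; parameter names and band edges are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, env_cutoff=50.0,
                 f_lo=100.0, f_hi=8000.0):
    """Minimal noise-vocoder sketch.

    Splits the signal into n_channels log-spaced bands, extracts each
    band's temporal envelope, low-passes it at env_cutoff Hz (e.g. the
    50 vs 500 Hz conditions of the study), and uses it to modulate
    band-limited noise. Fewer channels = more spectral degradation;
    lower env_cutoff = more temporal degradation.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_b, band_a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(band_b, band_a, signal)
        env = np.abs(hilbert(band))                      # temporal envelope
        lp_b, lp_a = butter(4, env_cutoff, btype="low", fs=fs)
        env = np.clip(filtfilt(lp_b, lp_a, env), 0, None)
        carrier = filtfilt(band_b, band_a,
                           rng.standard_normal(len(signal)))
        out += env * carrier                             # modulated noise band
    return out
```

The output preserves the slow amplitude structure of speech within each band while discarding fine spectral detail, which is what makes channel count and envelope cutoff clean knobs for spectral versus temporal degradation.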


Subject(s)
Blindness , Speech Perception , Humans , Male , Female , Speech Perception/physiology , Adult , Blindness/physiopathology , Young Adult , Electroencephalography/methods , Acoustic Stimulation , Recognition, Psychology/physiology , Evoked Potentials, Auditory/physiology