Results 1 - 20 of 18,363
1.
Hum Brain Mapp ; 45(11): e26793, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39037186

ABSTRACT

The auditory system can selectively attend to a target source in complex environments, a phenomenon known as the "cocktail party" effect. However, the spatiotemporal dynamics of electrophysiological activity associated with auditory selective spatial attention (ASSA) remain largely unexplored. In this study, single-source and multiple-source paradigms were designed to simulate different auditory environments, and microstate analysis was introduced to reveal the electrophysiological correlates of ASSA. Furthermore, cortical source analysis was employed to reveal the neural activity regions of these microstates. The results showed that five microstates, MS1 to MS5, could explain the spatiotemporal dynamics of ASSA. Notably, MS2 and MS3 showed significantly lower partial properties in multiple-source situations than in single-source situations, whereas MS4 had shorter and MS5 longer durations in multiple-source than in single-source situations. MS1 showed no significant differences between the two situations. Cortical source analysis showed that the activation regions of these microstates initially transferred from the right temporal cortex to the temporal-parietal cortex, and subsequently to the dorsofrontal cortex. Moreover, neural activity in the single-source situations was greater than in the multiple-source situations for MS2 and MS3, correlating with the N1 and P2 components, with the greatest differences observed in the superior temporal gyrus and inferior parietal lobule. These findings suggest that these specific microstates and their associated activation regions may serve as promising substrates for decoding ASSA in complex environments.
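For orientation, the temporal parameters that typically characterize such microstates (mean duration, coverage, occurrence) can be computed from a per-sample label sequence as in the minimal Python sketch below; this is not the authors' pipeline, and the function names, sampling rate, and synthetic labels are assumptions.

```python
# Minimal sketch (not the authors' pipeline): given a per-sample microstate
# label sequence from a clustering step, compute the standard temporal
# parameters -- mean duration, time coverage, and occurrence rate.
import numpy as np

def microstate_parameters(labels: np.ndarray, sfreq: float) -> dict:
    """labels: 1-D array of microstate indices (one per EEG sample)."""
    params = {}
    change = np.flatnonzero(np.diff(labels)) + 1   # samples where the state switches
    starts = np.r_[0, change]
    ends = np.r_[change, len(labels)]
    seg_labels = labels[starts]
    seg_durs = (ends - starts) / sfreq             # segment durations in seconds
    total_time = len(labels) / sfreq
    for ms in np.unique(labels):
        mask = seg_labels == ms
        params[int(ms)] = {
            "mean_duration_s": float(seg_durs[mask].mean()),
            "coverage": float(seg_durs[mask].sum() / total_time),
            "occurrence_per_s": float(mask.sum() / total_time),
        }
    return params

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_labels = np.repeat(rng.integers(0, 5, size=200), 25)  # 5 states, 250 Hz, 20 s
    print(microstate_parameters(demo_labels, sfreq=250.0))
```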


Subject(s)
Attention , Auditory Perception , Electroencephalography , Evoked Potentials, Auditory , Space Perception , Humans , Male , Attention/physiology , Female , Young Adult , Space Perception/physiology , Evoked Potentials, Auditory/physiology , Adult , Auditory Perception/physiology , Acoustic Stimulation , Brain Mapping
2.
Hear Res ; 450: 109073, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38996530

ABSTRACT

Tinnitus denotes the perception of a non-environmental sound and might result from aberrant auditory prediction. Successful prediction of formal (e.g., type) and temporal sound characteristics facilitates the filtering of irrelevant information, also labelled 'sensory gating' (SG). Here, we explored whether and how parallel manipulations of formal prediction violations and temporal predictability affect SG in persons with and without tinnitus. Age-, education-, and sex-matched persons with and without tinnitus (N = 52) listened to paired-tone oddball sequences varying in formal (standard vs. deviant pitch) and temporal predictability (isochronous vs. random timing). EEG was recorded from 128 channels and data were analyzed by means of temporal-spatial principal component analysis (tsPCA). SG was assessed as amplitude suppression for the second tone in a pair and was observed in P50-like activity in both timing conditions and both groups. Correspondingly, deviants elicited overall larger amplitudes than standards. However, only persons without tinnitus displayed a larger N100-like deviance response in the isochronous compared with the random timing condition. This result might imply that persons with tinnitus do not benefit from temporal predictability in deviance processing to the same extent as persons without tinnitus. Thus, persons with tinnitus might display less temporal sensitivity in auditory processing than persons without tinnitus.
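A paired-tone sensory-gating index of the kind described here is commonly summarized as the ratio of the P50-like response to the second tone over that to the first. The Python sketch below illustrates that computation only; the window limits, variable names, and synthetic data are assumptions, and the study itself used temporal-spatial PCA rather than simple peak picking.

```python
# Hedged sketch of a paired-tone sensory-gating index: the P50-like peak to the
# second tone divided by the peak to the first tone (smaller ratio = stronger gating).
import numpy as np

def gating_ratio(erp_s1, erp_s2, times, window=(0.04, 0.08)):
    """erp_s1/erp_s2: averaged ERPs (volts) time-locked to tone 1 / tone 2;
    times: matching time vector in seconds; window: P50 search window."""
    sel = (times >= window[0]) & (times <= window[1])
    return erp_s2[sel].max() / erp_s1[sel].max()

if __name__ == "__main__":
    t = np.arange(-0.1, 0.3, 0.001)
    s1 = 2e-6 * np.exp(-((t - 0.055) / 0.01) ** 2)   # synthetic P50 to tone 1
    s2 = 1e-6 * np.exp(-((t - 0.055) / 0.01) ** 2)   # suppressed response to tone 2
    print(f"gating ratio: {gating_ratio(s1, s2, t):.2f}")  # ~0.50 for this demo
```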


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Tinnitus , Humans , Tinnitus/physiopathology , Tinnitus/diagnosis , Female , Male , Adult , Middle Aged , Case-Control Studies , Principal Component Analysis , Sensory Gating , Auditory Perception , Time Factors , Young Adult , Aged , Pitch Perception
3.
Sci Rep ; 14(1): 16799, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039107

ABSTRACT

The auditory steady-state response (ASSR) arises when periodic sounds evoke stable responses in auditory networks that reflect the acoustic characteristics of the stimuli, such as the amplitude of the sound envelope. Larger for some stimulus rates than others, the ASSR in the human electroencephalogram (EEG) is notably maximal for sounds modulated in amplitude at 40 Hz. To investigate the local circuit underpinnings of the large ASSR to 40 Hz amplitude-modulated (AM) sounds, we acquired skull EEG and local field potential (LFP) recordings from primary auditory cortex (A1) in the rat during the presentation of 20, 30, 40, 50, and 80 Hz AM tones. The 40 Hz AM tones elicited the largest ASSR in the EEG acquired above auditory cortex and in the LFP acquired from each cortical layer in A1. The large ASSR in the EEG to 40 Hz AM tones was not due to larger instantaneous amplitude of the signals or to greater phase alignment of the LFP across the cortical layers. Instead, it resulted from decreased latency variability (i.e., enhanced temporal consistency) of the 40 Hz response. Statistical models indicated that the EEG signal was best predicted by LFPs in either the most superficial or the deepest cortical layers, suggesting deep-layer coordinators of the ASSR. Overall, our results indicate that the recruitment of non-uniform but more temporally consistent responses across A1 layers underlies the larger ASSR to amplitude-modulated tones at 40 Hz.
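The two quantities contrasted in this abstract, the evoked ASSR amplitude at the modulation rate and the trial-to-trial temporal consistency, can be illustrated with an FFT-based sketch like the one below (inter-trial phase coherence standing in for "temporal consistency"); shapes, names, and demo data are assumptions rather than the study's analysis code.

```python
# Illustrative sketch: quantify the ASSR at the AM rate from epoched data via
# the FFT -- evoked amplitude at the modulation frequency plus inter-trial
# phase coherence (ITPC), an index of temporal consistency across trials.
import numpy as np

def assr_metrics(epochs: np.ndarray, sfreq: float, mod_freq: float):
    """epochs: array (n_trials, n_samples) of single-channel responses."""
    n_trials, n_samples = epochs.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - mod_freq))             # FFT bin nearest the AM rate
    spectra = np.fft.rfft(epochs, axis=1)[:, k]          # complex value per trial
    evoked_amp = np.abs(spectra.mean()) * 2 / n_samples  # amplitude of the trial average
    itpc = np.abs(np.mean(spectra / np.abs(spectra)))    # phase consistency, 0..1
    return evoked_amp, itpc

if __name__ == "__main__":
    sfreq, dur, f = 1000.0, 1.0, 40.0
    t = np.arange(0, dur, 1 / sfreq)
    rng = np.random.default_rng(1)
    trials = np.array([np.sin(2 * np.pi * f * t + rng.normal(0, 0.3))
                       + rng.normal(0, 1, t.size) for _ in range(100)])
    print(assr_metrics(trials, sfreq, f))
```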


Subject(s)
Acoustic Stimulation , Auditory Cortex , Electroencephalography , Evoked Potentials, Auditory , Auditory Cortex/physiology , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Rats , Animals , Male , Auditory Perception/physiology , Humans
4.
J Integr Neurosci ; 23(7): 139, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39082290

ABSTRACT

BACKGROUND: Segments and tone are important sub-syllabic units that play large roles in lexical processing in tonal languages. However, their respective roles remain unclear, and the event-related potential (ERP) technique is well suited to exploring the cognitive mechanisms of lexical processing. METHODS: The high temporal resolution of ERPs allows rapidly changing spoken-language input to be tracked. The present ERP study examined the different roles of segments and tone in Mandarin Chinese lexical processing. An auditory priming experiment was designed that included five types of priming stimuli: consonant mismatch, vowel mismatch, tone mismatch, unrelated mismatch, and identity. Participants were asked to judge whether the target of each prime-target pair was a real Mandarin disyllabic word. RESULTS: Behavioral measures (reaction time and response accuracy) and ERP data were collected. The results differed from those of previous studies, conducted mainly on non-tonal languages such as English, which showed a dominant role of consonants in lexical access. Our results showed that consonants and vowels play comparable roles, whereas tone plays a less important role than consonants and vowels in lexical processing in Mandarin. CONCLUSIONS: These results have implications for understanding the brain mechanisms underlying lexical processing of tonal languages.


Subject(s)
Electroencephalography , Evoked Potentials , Speech Perception , Humans , Male , Female , Young Adult , Speech Perception/physiology , Adult , Evoked Potentials/physiology , Reaction Time/physiology , Brain/physiology , Evoked Potentials, Auditory/physiology , Psycholinguistics , Language
5.
Adv Exp Med Biol ; 1455: 227-256, 2024.
Article in English | MEDLINE | ID: mdl-38918355

ABSTRACT

The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity such as a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and musical rhythm in particular, and we discuss the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we will discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.


Subject(s)
Auditory Perception , Music , Primates , Humans , Animals , Auditory Perception/physiology , Infant, Newborn , Adult , Primates/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology , Electroencephalography
6.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38879757

ABSTRACT

Reactions to novelty, manifested in the mismatch negativity, were studied in the rat brain. During dissociative anesthesia, mismatch negativity-like waves were recorded from the somatosensory cortex using an epidural 32-electrode array. The experimental animals were 7 wild-type Wistar rats and 3 transgenic rats. During high-dose anesthesia, deviant 1,500 Hz tones were presented randomly among many standard 1,000 Hz tones in an oddball paradigm. "Deviant minus standard-before-deviant" difference waves were calculated using both the classical Näätänen method and a method based on cross-correlation of sub-averages. Both methods gave consistent results: an early phasic N40 component and a later tonic N100-200 component (the mismatch negativity itself). The power of the gamma and delta rhythms and the frequency of down-states (periods of suppressed activity) were assessed. In all rats, the amplitude of the tonic component grew with increasing sedation depth. At the same time, gamma power decreased while delta power and the frequency of down-states increased. The earlier phasic frontocentral component is associated with deviance detection, whereas the later tonic component over the auditory cortex reflects the orienting reaction. Under anesthesia, this slow mismatch negativity-like wave most likely reflects the tendency of the system to respond to any influences with delta waves, K-complexes, and down-states, or to produce them spontaneously.
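The "deviant minus standard-before-deviant" difference wave mentioned above can be illustrated with the short Python sketch below; the epoch layout, names, and synthetic data are assumptions, and the cross-correlation-of-sub-averages method is not shown.

```python
# Sketch: average the deviant epochs and subtract the average of the standard
# epochs that immediately precede each deviant in the oddball sequence.
import numpy as np

def mmn_difference_wave(epochs: np.ndarray, is_deviant: np.ndarray) -> np.ndarray:
    """epochs: (n_trials, n_samples) in presentation order; is_deviant: boolean per trial."""
    dev_idx = np.flatnonzero(is_deviant)
    dev_idx = dev_idx[(dev_idx > 0) & (~is_deviant[dev_idx - 1])]  # deviants preceded by a standard
    return epochs[dev_idx].mean(axis=0) - epochs[dev_idx - 1].mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n_trials, n_samples = 400, 300
    is_dev = rng.random(n_trials) < 0.15                 # ~15% deviants
    epochs = rng.normal(0, 1, (n_trials, n_samples))
    epochs[is_dev, 100:150] -= 1.0                       # synthetic negativity on deviant trials
    dw = mmn_difference_wave(epochs, is_dev)
    print("difference wave minimum:", round(float(dw.min()), 2))
```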


Subject(s)
Rats, Wistar , Animals , Male , Acoustic Stimulation/methods , Electroencephalography/methods , Rats , Rats, Transgenic , Anesthetics, Dissociative/administration & dosage , Anesthetics, Dissociative/pharmacology , Evoked Potentials, Auditory/physiology , Somatosensory Cortex/physiology , Gamma Rhythm/physiology , Delta Rhythm/physiology , Delta Rhythm/drug effects
7.
Ann N Y Acad Sci ; 1536(1): 167-176, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38829709

ABSTRACT

Time discrimination, a critical aspect of auditory perception, is influenced by numerous factors. Previous research has suggested that musical experience can restructure the brain, thereby enhancing time discrimination. However, this phenomenon remains underexplored. In this study, we seek to elucidate the enhancing effect of musical experience on time discrimination, utilizing both behavioral and electroencephalogram methodologies. Additionally, we aim to explore, through brain connectivity analysis, the role of increased connectivity in brain regions associated with auditory perception as a potential contributory factor to time discrimination induced by musical experience. The results show that the music-experienced group demonstrated higher behavioral accuracy, shorter reaction time, and shorter P3 and mismatch response latencies as compared to the control group. Furthermore, the music-experienced group had higher connectivity in the left temporal lobe. In summary, our research underscores the positive impact of musical experience on time discrimination and suggests that enhanced connectivity in brain regions linked to auditory perception may be responsible for this enhancement.


Subject(s)
Auditory Perception , Electroencephalography , Music , Humans , Music/psychology , Male , Auditory Perception/physiology , Female , Adult , Young Adult , Time Perception/physiology , Reaction Time/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Evoked Potentials, Auditory/physiology , Brain/physiology
8.
eNeuro ; 11(6)2024 Jun.
Article in English | MEDLINE | ID: mdl-38834300

ABSTRACT

Following repetitive visual stimulation, post hoc phase analysis finds that visually evoked response magnitudes vary with the cortical alpha oscillation phase that temporally coincides with the sensory stimulus. This approach has not successfully revealed an alpha-phase dependence for auditory evoked or induced responses. Here, we test the feasibility of tracking alpha with scalp electroencephalogram (EEG) recordings and playing sounds phase-locked to individualized alpha phases in real time using a novel end-point corrected Hilbert transform (ecHT) algorithm implemented on a research device. Based on prior work, we hypothesize that sound-evoked and induced responses vary with the alpha phase at sound onset and the alpha phase that coincides with the early sound-evoked response potential (ERP) measured with EEG. Thus, we use each subject's individualized alpha frequency (IAF) and individual auditory ERP latency to define target trough and peak alpha phases that allow an early component of the auditory ERP to align with the estimated poststimulus peak and trough phases, respectively. With this closed-loop and individualized approach, we find opposing alpha phase-dependent effects on the auditory ERP and the alpha oscillations that follow stimulus onset. Trough- and peak-phase-locked sounds result in distinct evoked and induced post-stimulus alpha level and frequency modulations. Though additional studies are needed to localize the sources underlying these phase-dependent effects, these results suggest a general principle of alpha phase-dependence of sensory processing that includes the auditory system. Moreover, this study demonstrates the feasibility of using individualized neurophysiological indices to deliver automated, closed-loop, phase-locked auditory stimulation.
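For context, offline alpha phase is commonly estimated with a band-pass filter plus the analytic signal, as in the sketch below; the study's ecHT algorithm additionally corrects the end-point distortion that arises when this is done on a streaming window in real time, and that correction is not implemented here. The IAF, bandwidth, and names are assumptions.

```python
# Offline illustration only: instantaneous alpha phase via band-pass filtering
# and the analytic signal (scipy.signal.hilbert). Not the real-time ecHT method.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_phase(eeg: np.ndarray, sfreq: float, iaf: float = 10.0, bw: float = 2.0):
    """Return instantaneous phase (radians) of the IAF-centred alpha band."""
    b, a = butter(4, [(iaf - bw) / (sfreq / 2), (iaf + bw) / (sfreq / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)          # zero-phase filtering (offline only)
    return np.angle(hilbert(filtered))

if __name__ == "__main__":
    sfreq = 500.0
    t = np.arange(0, 5, 1 / sfreq)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)
    phase = alpha_phase(eeg, sfreq)
    print("phase at 1 s (rad):", round(float(phase[int(sfreq)]), 2))
```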


Subject(s)
Acoustic Stimulation , Alpha Rhythm , Electroencephalography , Evoked Potentials, Auditory , Humans , Acoustic Stimulation/methods , Evoked Potentials, Auditory/physiology , Male , Female , Electroencephalography/methods , Alpha Rhythm/physiology , Adult , Young Adult , Brain/physiology , Auditory Perception/physiology , Algorithms , Feasibility Studies
9.
Neuroreport ; 35(12): 800-804, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-38935073

ABSTRACT

Accurate predictions and the processing of prediction error signals can be important for efficient interaction with the auditory environment. In a reanalysis of data from Simal et al. (2021), who found that informative tones elicited increased N1 and P2 event-related potential components, we sought to identify electrophysiological indicators in the time-frequency domain associated with disambiguation of the hearing context and prediction of forthcoming stimulation. Participants heard two isochronous sequences of pure tones separated by a silent retention interval. A sequence could contain one, three, or five tones. Fifteen participants heard the three load conditions randomly intermixed. In this case, when sequence length was unknown, the second and fourth tones during encoding contained information allowing the prediction of another tone. Other participants heard the sequences blocked by sequence length, so the second and fourth tones of the sequences provided no new information (and hence were not informative). We used wavelet analysis and Hilbert transform methods to analyse the oscillatory activity related to tone informativeness. We found a significant increase in theta (4-7 Hz) amplitude following a tone that was informative and allowed prediction, compared with a tone that carried no predictive information. Previous work suggests that increased theta amplitude is linked with task switching and an increase in cognitive control. We suggest that informative tones recruit higher-level control processes involved in the prediction of upcoming auditory events.
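A single-frequency complex Morlet wavelet transform, averaged across 4-7 Hz, is one generic way to obtain the theta amplitude measure discussed here; the sketch below illustrates the idea with assumed wavelet width, frequencies, and synthetic data, and is not the authors' analysis code.

```python
# Sketch of a complex Morlet wavelet amplitude estimate at a single frequency,
# averaged over the theta band (4-7 Hz) for a time-resolved theta amplitude.
import numpy as np

def morlet_amplitude(signal: np.ndarray, sfreq: float, freq: float, n_cycles: float = 5.0):
    """Instantaneous amplitude of `signal` at `freq` via complex Morlet convolution."""
    sigma_t = n_cycles / (2 * np.pi * freq)                  # temporal std of the Gaussian
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / sfreq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.abs(wavelet).sum()                         # simple amplitude normalization
    return np.abs(np.convolve(signal, wavelet, mode="same"))

if __name__ == "__main__":
    sfreq = 500.0
    time = np.arange(0, 2, 1 / sfreq)
    sig = np.sin(2 * np.pi * 5 * time) * (time > 1)          # theta burst in the 2nd second
    theta_amp = np.mean([morlet_amplitude(sig, sfreq, f) for f in (4, 5, 6, 7)], axis=0)
    print("mean theta amplitude, 1st vs 2nd second:",
          round(float(theta_amp[: int(sfreq)].mean()), 3),
          round(float(theta_amp[int(sfreq):].mean()), 3))
```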


Subject(s)
Auditory Perception , Theta Rhythm , Humans , Male , Female , Young Adult , Theta Rhythm/physiology , Adult , Auditory Perception/physiology , Electroencephalography/methods , Acoustic Stimulation/methods , Evoked Potentials, Auditory/physiology
10.
Int J Pediatr Otorhinolaryngol ; 182: 112001, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38885546

ABSTRACT

INTRODUCTION: Neural response telemetry (NRT) is a standard procedure in cochlear implantation, used mostly to determine the functionality of the implanted device and to check that the auditory nerve responds to stimulation. Correlations between NRT measurements and subjective threshold (T) and maximum comfort (C) levels have been reported, but results are inconsistent, and it is still not clear which NRT measurement is the most useful for predicting fitting levels. PURPOSE: We aimed to investigate which NRT measurement corresponds best to fitting levels. Impedance (IMP), evoked compound action potential (ECAP) threshold, and amplitude growth function (AGF) slope values were included in the study. We also tried to identify the cochlear area at which the connection between NRT measurements and fitting levels is most pronounced. MATERIALS AND METHODS: Thirty-one children implanted with a Cochlear device were included in this retrospective study. IMP, ECAP thresholds, and AGF were obtained intra-operatively and 12 months after surgery at electrodes 5, 11, and 19, as representative of each part of the cochlea. Subjective T and C levels were obtained 12 months after surgery during cochlear implant fitting. RESULTS: ECAP thresholds obtained 12 months after surgery showed statistically significant correlations with both T and C levels at all three selected electrodes. IMP correlated with C levels, while AGF showed a tendency to correlate with T levels; however, these correlations were not statistically significant for all electrodes. CONCLUSION: ECAP threshold measurements correlated with T and C values better than AGF slope and IMP. Measurements obtained twelve months after surgery seem to be more predictive of T and C values than intra-operative measurements. The best correlation between ECAP threshold and T and C values was found at electrode 11, suggesting that NRT measurements in the mid-portion of the cochlea are the most useful for predicting fitting levels.
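For illustration only, a per-electrode correlation between ECAP thresholds and fitting levels could be computed as below, with synthetic numbers and assumed variable names; the authors' actual statistical procedures may differ.

```python
# Illustrative only: correlate ECAP thresholds with behavioural T and C fitting
# levels. Spearman correlation is chosen here for rank-based robustness; the
# study's own statistics may differ. All values are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n_children = 31
ecap_threshold = rng.normal(180, 15, n_children)               # synthetic current levels
t_level = 0.5 * ecap_threshold + rng.normal(0, 8, n_children)  # synthetic T levels
c_level = 0.9 * ecap_threshold + rng.normal(0, 8, n_children)  # synthetic C levels

for name, levels in [("T", t_level), ("C", c_level)]:
    rho, p = spearmanr(ecap_threshold, levels)
    print(f"ECAP vs {name}: rho = {rho:.2f}, p = {p:.3g}")
```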


Subject(s)
Auditory Threshold , Cochlear Implantation , Cochlear Implants , Telemetry , Humans , Cochlear Implantation/methods , Male , Female , Retrospective Studies , Child , Child, Preschool , Auditory Threshold/physiology , Cochlear Nerve/physiology , Evoked Potentials, Auditory/physiology , Prosthesis Fitting/methods , Cochlea/physiology , Infant
11.
J Neural Eng ; 21(4)2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38936392

ABSTRACT

Objective. Presence is an important aspect of user experience in virtual reality (VR). It corresponds to the illusion of being physically located in a virtual environment (VE). This feeling is usually measured through questionnaires that disrupt presence, are subjective, and do not allow for real-time measurement. Electroencephalography (EEG), which measures brain activity, is increasingly used to monitor the state of users, especially while immersed in VR. Approach. In this paper, we present a way of evaluating presence through the measure of the attention dedicated to the real environment via an EEG oddball paradigm. Using breaks in presence, this experimental protocol constitutes an ecological method for the study of presence, as different levels of presence are experienced in an identical VE. Main results. Through analysing the EEG data of 18 participants, a significant increase in the neurophysiological reaction to the oddball, i.e. the P300 amplitude, was found in the low-presence condition compared with the high-presence condition. This amplitude was significantly correlated with the self-reported measure of presence. Using Riemannian geometry to perform single-trial classification, we present a classification algorithm with 79% accuracy in discriminating between the two presence conditions. Significance. Taken together, our results promote the use of EEG and oddball stimuli to monitor presence offline or in real time without interrupting the user in the VE.
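The Riemannian single-trial classification referred to above operates on trial covariance matrices. The sketch below uses a simplified log-Euclidean nearest-class-mean rule as a conceptual stand-in (the authors may have used a different metric or toolbox, e.g. pyriemann); all shapes, names, and data are assumptions.

```python
# Conceptual sketch: classify single EEG trials by their spatial covariance,
# using a log-Euclidean nearest-class-mean rule as a simplified Riemannian-style
# classifier. Synthetic data only.
import numpy as np
from scipy.linalg import logm

def trial_cov(trial: np.ndarray, reg: float = 1e-6) -> np.ndarray:
    """trial: (n_channels, n_samples) epoch -> regularized spatial covariance."""
    c = trial @ trial.T / trial.shape[1]
    return c + reg * np.trace(c) / c.shape[0] * np.eye(c.shape[0])

def fit_class_means(trials, labels):
    logs = np.array([logm(trial_cov(x)).real for x in trials])   # map to log space
    return {lab: logs[labels == lab].mean(axis=0) for lab in np.unique(labels)}

def predict(trial, class_means):
    log_c = logm(trial_cov(trial)).real
    return min(class_means, key=lambda lab: np.linalg.norm(log_c - class_means[lab]))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    make = lambda scale: rng.normal(0, scale, size=(8, 256))     # 8 channels, 256 samples
    X = [make(1.0) for _ in range(20)] + [make(2.0) for _ in range(20)]
    y = np.array([0] * 20 + [1] * 20)                            # two synthetic conditions
    means = fit_class_means(X, y)
    print("predicted:", predict(make(2.0), means))               # expected: 1
```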


Subject(s)
Acoustic Stimulation , Electroencephalography , Virtual Reality , Humans , Male , Female , Electroencephalography/methods , Adult , Young Adult , Acoustic Stimulation/methods , Event-Related Potentials, P300/physiology , Algorithms , Attention/physiology , Evoked Potentials, Auditory/physiology
12.
Neuropsychologia ; 201: 108936, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-38851314

ABSTRACT

It is not clear whether the brain can detect changes in native and non-native speech sounds in both unattended and attended conditions, but this information would be important for understanding the nature of a potential native-language advantage in speech perception. We recorded event-related potentials (ERPs) for changes in duration and in Chinese lexical tone in a repeated vowel /a/ in native speakers of Finnish and Chinese in passive and active listening conditions. ERP amplitudes reflecting deviance detection (mismatch negativity, MMN, and N2b) and attentional shifts towards changes in speech sounds (P3a and P3b) were investigated. In the passive listening condition, duration changes elicited increased amplitude in the MMN latency window for both standard and deviant sounds in the Finnish speakers compared with the Chinese speakers, but no group differences were observed for P3a. In passive listening to lexical tones, P3a amplitude was increased for both standard and deviant stimuli in Chinese speakers compared with Finnish speakers, but the groups did not differ in MMN. In active listening, both tone and duration changes elicited N2b and P3b, but the groups differed only in the pattern of results for the deviant type. The results thus suggest an overall increased sensitivity to native speech sounds, especially in passive listening, whereas the mechanisms of change detection and attentional shifting seem to work well for both native and non-native speech sounds in the attentive mode.


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Speech Perception , Humans , Male , Female , Speech Perception/physiology , Young Adult , Adult , Evoked Potentials, Auditory/physiology , Brain/physiology , Language , Attention/physiology , Phonetics , Reaction Time/physiology , Evoked Potentials/physiology , Brain Mapping
13.
Hear Res ; 450: 109071, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38941694

ABSTRACT

Following adult-onset hearing impairment, crossmodal plasticity can occur within various sensory cortices, often characterized by increased neural responses to visual stimulation in not only the auditory cortex, but also in the visual and audiovisual cortices. In the present study, we used an established model of loud noise exposure in rats to examine, for the first time, whether the crossmodal plasticity in the audiovisual cortex that occurs following a relatively mild degree of hearing loss emerges solely from altered intracortical processing or if thalamocortical changes also contribute to the crossmodal effects. Using a combination of an established pharmacological 'cortical silencing' protocol and current source density analysis of the laminar activity recorded across the layers of the audiovisual cortex (i.e., the lateral extrastriate visual cortex, V2L), we observed layer-specific changes post-silencing in the strength of the residual visual, but not auditory, input in the noise exposed rats with mild hearing loss compared to rats with normal hearing. Furthermore, based on a comparison of the laminar profiles pre- versus post-silencing in both groups, we can conclude that noise exposure caused a re-allocation of the strength of visual inputs across the layers of the V2L cortex, including enhanced visual-evoked activity in the granular layer; findings consistent with thalamocortical plasticity. Finally, we confirmed that audiovisual integration within the V2L cortex depends on intact processing within intracortical circuits, and that this form of multisensory processing is vulnerable to disruption by noise-induced hearing loss. Ultimately, the present study furthers our understanding of the contribution of intracortical and thalamocortical processing to crossmodal plasticity as well as to audiovisual integration under both normal and mildly-impaired hearing conditions.
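Current source density analysis of a laminar LFP profile is classically based on the second spatial derivative across depth, as in the minimal sketch below; electrode spacing, conductivity, and names are assumptions, and the authors' exact CSD variant may differ.

```python
# Minimal sketch of the standard second-spatial-derivative current source
# density (CSD) estimate across a laminar LFP recording.
import numpy as np

def csd_second_derivative(lfp: np.ndarray, spacing_m: float, sigma: float = 0.3):
    """lfp: (n_channels, n_samples), channels ordered superficial -> deep.
    Returns CSD for the interior channels (first and last channel dropped)."""
    d2 = lfp[2:, :] - 2 * lfp[1:-1, :] + lfp[:-2, :]      # second difference along depth
    return -sigma * d2 / spacing_m**2

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    lfp = rng.normal(0, 1e-4, size=(16, 1000))            # 16 contacts, synthetic LFP (V)
    csd = csd_second_derivative(lfp, spacing_m=100e-6)    # assumed 100 µm contact spacing
    print(csd.shape)                                      # (14, 1000)
```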


Subject(s)
Acoustic Stimulation , Auditory Cortex , Disease Models, Animal , Evoked Potentials, Visual , Neuronal Plasticity , Photic Stimulation , Visual Cortex , Animals , Visual Cortex/physiopathology , Auditory Cortex/physiopathology , Male , Hearing Loss, Noise-Induced/physiopathology , Visual Perception , Auditory Perception , Noise/adverse effects , Evoked Potentials, Auditory , Rats , Hearing , Rats, Sprague-Dawley
14.
J Neurodev Disord ; 16(1): 28, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831410

ABSTRACT

BACKGROUND: In the search for objective tools to quantify neural function in Rett Syndrome (RTT), which are crucial in the evaluation of therapeutic efficacy in clinical trials, recordings of sensory-perceptual functioning using event-related potential (ERP) approaches have emerged as potentially powerful tools. Considerable work points to highly anomalous auditory evoked potentials (AEPs) in RTT. However, an assumption of the typical signal-averaging method used to derive these measures is "stationarity" of the underlying responses - i.e., that neural responses to each input are highly stereotyped. An alternate possibility is that responses to repeated stimuli are highly variable in RTT. If so, this will significantly impact the validity of assumptions about underlying neural dysfunction, and likely lead to overestimation of underlying neuropathology. To assess this possibility, analyses at the single-trial level assessing signal-to-noise ratios (SNR), inter-trial variability (ITV) and inter-trial phase coherence (ITPC) are necessary. METHODS: AEPs were recorded to simple 100 Hz tones from 18 RTT participants and 27 age-matched controls (ages 6-22 years). We applied standard AEP averaging, as well as measures of neuronal reliability at the single-trial level (i.e., SNR, ITV, ITPC). To separate signal-carrying components from non-neural noise sources, we also applied a denoising source separation (DSS) algorithm and then repeated the reliability measures. RESULTS: Substantially increased ITV, lower SNRs, and reduced ITPC were observed in auditory responses of RTT participants, supporting a "neural unreliability" account. Application of the DSS technique made it clear that non-neural noise sources contribute to overestimation of the extent of processing deficits in RTT. Post-DSS, ITV measures were substantially reduced, so much so that pre-DSS ITV differences between RTT and typically developing (TD) populations were no longer detected. In the case of SNR and ITPC, DSS substantially improved these estimates in the RTT population, but robust differences between RTT and TD were still fully evident. CONCLUSIONS: To accurately represent the degree of neural dysfunction in RTT using the ERP technique, a consideration of response reliability at the single-trial level is highly advised. Non-neural sources of noise lead to overestimation of the degree of pathological processing in RTT, and denoising source separation techniques during signal processing substantially ameliorate this issue.
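The three single-trial reliability measures named here (SNR, ITV, ITPC) can be illustrated with generic definitions as in the sketch below; the paper's exact formulas, preprocessing, and the DSS step are not reproduced, and the shapes, names, and demo data are assumptions.

```python
# Generic single-trial reliability measures:
#   SNR  = power of the trial average / mean across-trial variance,
#   ITV  = across-trial standard deviation averaged over time,
#   ITPC = phase consistency across trials at a given frequency.
import numpy as np

def reliability_metrics(epochs: np.ndarray, sfreq: float, itpc_freq: float = 10.0):
    """epochs: (n_trials, n_samples) single-channel AEP epochs."""
    evoked = epochs.mean(axis=0)
    noise_var = epochs.var(axis=0, ddof=1).mean()
    snr = (evoked ** 2).mean() / noise_var
    itv = epochs.std(axis=0, ddof=1).mean()
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - itpc_freq))
    spec = np.fft.rfft(epochs, axis=1)[:, k]
    itpc = np.abs(np.mean(spec / np.abs(spec)))
    return {"snr": float(snr), "itv": float(itv), "itpc": float(itpc)}

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.arange(0, 0.5, 1 / 1000)
    clean = 1e-6 * np.sin(2 * np.pi * 10 * t)                    # synthetic evoked component
    noisy_trials = clean + rng.normal(0, 2e-6, size=(100, t.size))
    print(reliability_metrics(noisy_trials, sfreq=1000.0))
```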


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Rett Syndrome , Humans , Rett Syndrome/physiopathology , Rett Syndrome/complications , Adolescent , Female , Evoked Potentials, Auditory/physiology , Child , Young Adult , Auditory Perception/physiology , Reproducibility of Results , Acoustic Stimulation , Male , Signal-To-Noise Ratio , Adult
15.
Brain Lang ; 254: 105439, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38945108

ABSTRACT

Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries and we asked participants to judge whether the prime and target have the same structure. Within musical phrases, prosodic boundaries elicited reduced N1 and enhanced P2 components (relative to the no-boundary condition) and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS compared to non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.


Subject(s)
Electroencephalography , Evoked Potentials , Music , Speech Perception , Humans , Male , Female , Young Adult , Speech Perception/physiology , Adult , Evoked Potentials/physiology , Speech/physiology , Brain/physiology , Acoustic Stimulation , Language , Evoked Potentials, Auditory/physiology
16.
Otol Neurotol ; 45(7): e517-e524, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38918070

ABSTRACT

HYPOTHESES: In newly implanted cochlear implant (CI) users, electrically evoked compound action potentials (eCAPs) and electrocochleography responses (ECochGs) will remain stable over time. Electrode impedances will increase immediately postimplantation due to the initial inflammatory response, before decreasing after CI switch-on and stabilizing thereafter. BACKGROUND: The study of cochlear health (CH) has several applications, including explaining variation in CI outcomes, informing CI programming strategies, and evaluating the safety and efficacy of novel biological treatments for hearing loss. Very early postoperative CH patterns have not previously been intensively explored through longitudinal daily testing. Thanks to technological advances, electrode impedance, eCAP, and ECochG measurements can be independently performed by CI users at home to monitor CH over time. METHODS: A group of newly implanted CI users (N = 7) performed daily impedance, eCAP, and ECochG measurements at home for 3 months, starting from the first day postsurgery, using the Active Insertion Monitoring system by Advanced Bionics. RESULTS: Measurement validity of 93.5, 93.0, and 81.6% for impedances, eCAPs, and ECochGs, respectively, revealed high participant compliance. Impedances increased postsurgery before dropping and stabilizing after switch-on. eCAPs showed good stability, though statistical analyses revealed a very small but significant increase in thresholds over time. Most ECochG thresholds did not reach the liberal signal-to-noise criterion of 2:1, and threshold stability over time was low. CONCLUSION: Newly implanted CI recipients can confidently and successfully perform CH recordings at home, highlighting the valuable role of patients in longitudinal data collection. Electrode impedances and eCAPs are promising objective measurements for evaluating CH in newly implanted CI users.


Subject(s)
Audiometry, Evoked Response , Cochlear Implantation , Cochlear Implants , Electric Impedance , Humans , Cochlear Implantation/methods , Audiometry, Evoked Response/methods , Middle Aged , Female , Male , Aged , Adult , Cochlea/physiopathology , Cochlea/surgery , Postoperative Period , Evoked Potentials, Auditory/physiology , Action Potentials/physiology
17.
Otol Neurotol ; 45(7): 790-797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38923968

ABSTRACT

OBJECTIVES: To assess the clinical utility of spread of excitation (SOE) functions obtained via electrically evoked compound action potentials (eCAP) to 1) identify electrode array tip fold-over, 2) predict electrode placement factors confirmed via postoperative computed tomography (CT) imaging, and 3) predict postoperative speech recognition through the first year post-activation in a large clinical sample. STUDY DESIGN: Retrospective case review. SETTING: Cochlear implant (CI) program at a tertiary medical center. PATIENTS: Two hundred seventy-two ears (238 patients) with Cochlear Ltd. CIs (mean age = 46 yr, range = 9 mo-93 yr, 50% female) implanted between August 2014 and December 2022 were included. MAIN OUTCOME MEASURES: eCAP SOE widths (mm) (probe electrodes 5, 11, and 17), incidence of electrode tip fold-over, CT imaging data (electrode-to-modiolus distance, angular insertion depth, scalar location), and speech recognition outcomes (consonant-nucleus-consonant [CNC], AzBio quiet, and +5 dB SNR) through the first year after CI activation. RESULTS: 1) eCAP SOE demonstrated a sensitivity of 85.7% for identifying tip fold-over instances that were confirmed by CT imaging. In the current dataset, the tip fold-over incidence rate was 3.1% (7 patients), with all instances involving a precurved electrode array. 2) There was a significant positive relationship between eCAP SOE and mean electrode-to-modiolus distance for precurved arrays, and a significant positive relationship between eCAP SOE and angular insertion depth for straight arrays. No relationships between eCAP SOE and scalar location or cochlea diameter were found in this sample. 3) There were no significant relationships between eCAP SOE and speech recognition outcomes for any measure or time point, except for a weak negative correlation between average eCAP SOE widths and CNC word scores at 6 months post-activation for precurved arrays. CONCLUSIONS: In the absence of intraoperative CT or fluoroscopic imaging, eCAP SOE is a reasonable alternative method for identifying electrode array tip fold-over and should be routinely measured intraoperatively, especially for precurved electrode arrays with a sheath.


Subject(s)
Action Potentials , Cochlear Implantation , Cochlear Implants , Speech Perception , Humans , Female , Male , Middle Aged , Aged , Cochlear Implantation/methods , Adult , Retrospective Studies , Aged, 80 and over , Child, Preschool , Young Adult , Adolescent , Child , Infant , Speech Perception/physiology , Action Potentials/physiology , Evoked Potentials, Auditory/physiology , Tomography, X-Ray Computed
18.
Sci Rep ; 14(1): 13114, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38849374

ABSTRACT

Aberrant neuronal circuit dynamics are at the core of complex neuropsychiatric disorders, such as schizophrenia (SZ). Clinical assessment of the integrity of neuronal circuits in SZ has consistently described aberrant resting-state gamma oscillatory activity, decreased auditory-evoked gamma responses, and abnormal mismatch responses. We hypothesized that corticothalamic circuit manipulation could recapitulate SZ circuit phenotypes in rodent models. In this study, we optogenetically inhibited the mediodorsal thalamus-to-prefrontal cortex (MDT-to-PFC) or the PFC-to-MDT projection in rats and assessed circuit function through electrophysiological readouts. We found that MDT-PFC perturbation could not recapitulate SZ-linked phenotypes such as broadband gamma disruption, altered evoked oscillatory activity, and diminished mismatch negativity responses. Therefore, the induced functional impairment of the MDT-PFC pathways cannot account for the oscillatory abnormalities described in SZ.


Subject(s)
Evoked Potentials, Auditory , Optogenetics , Prefrontal Cortex , Thalamus , Animals , Optogenetics/methods , Rats , Prefrontal Cortex/physiology , Male , Thalamus/physiology , Schizophrenia/physiopathology , Neural Pathways , Rats, Sprague-Dawley , Gamma Rhythm/physiology , Limbic System/physiology
19.
PLoS Biol ; 22(6): e3002665, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38935589

ABSTRACT

Loss of synapses between spiral ganglion neurons and inner hair cells (IHC synaptopathy) leads to an auditory neuropathy called hidden hearing loss (HHL) characterized by normal auditory thresholds but reduced amplitude of sound-evoked auditory potentials. It has been proposed that synaptopathy and HHL result in poor performance in challenging hearing tasks despite a normal audiogram. However, this has only been tested in animals after exposure to noise or ototoxic drugs, which can cause deficits beyond synaptopathy. Furthermore, the impact of supernumerary synapses on auditory processing has not been evaluated. Here, we studied mice in which IHC synapse counts were increased or decreased by altering neurotrophin 3 (Ntf3) expression in IHC supporting cells. As we previously showed, postnatal Ntf3 knockdown or overexpression reduces or increases, respectively, IHC synapse density and suprathreshold amplitude of sound-evoked auditory potentials without changing cochlear thresholds. We now show that IHC synapse density does not influence the magnitude of the acoustic startle reflex or its prepulse inhibition. In contrast, gap-prepulse inhibition, a behavioral test for auditory temporal processing, is reduced or enhanced according to Ntf3 expression levels. These results indicate that IHC synaptopathy causes temporal processing deficits predicted in HHL. Furthermore, the improvement in temporal acuity achieved by increasing Ntf3 expression and synapse density suggests a therapeutic strategy for improving hearing in noise for individuals with synaptopathy of various etiologies.


Subject(s)
Hair Cells, Auditory, Inner , Neurotrophin 3 , Synapses , Animals , Hair Cells, Auditory, Inner/metabolism , Hair Cells, Auditory, Inner/pathology , Synapses/metabolism , Synapses/physiology , Neurotrophin 3/metabolism , Neurotrophin 3/genetics , Mice , Auditory Threshold , Evoked Potentials, Auditory/physiology , Reflex, Startle/physiology , Auditory Perception/physiology , Spiral Ganglion/metabolism , Female , Male , Hearing Loss, Hidden
20.
J Acoust Soc Am ; 155(6): 3639-3653, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38836771

ABSTRACT

The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. Based on least squares deconvolution, in this article we extend the procedure to the case of a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of the multi-response deconvolution significantly increases with the number of responses to be deconvolved, which restricts its applicability in practical situations. In order to alleviate this restriction, we propose to perform the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of the multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least squares estimation of the responses with a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.
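The multi-response least-squares deconvolution described here amounts to stacking one lag design block per stimulus category and solving a single linear model. The Python sketch below illustrates that core step only, without the reduced latency-dependent representation, and is not the authors' MATLAB/Octave implementation; names, lengths, and demo data are assumptions.

```python
# Schematic multi-response least-squares deconvolution: build a lag design
# block per stimulus category from its onset samples, stack the blocks, and
# solve for all category responses at once. Dense and unoptimized by design.
import numpy as np

def deconvolve_multi(y: np.ndarray, onsets_per_cat: list, resp_len: int):
    """y: continuous EEG (n_samples,); onsets_per_cat: list of onset-index arrays,
    one per category; resp_len: response length in samples. Returns one response per category."""
    n = y.size
    blocks = []
    for onsets in onsets_per_cat:
        a = np.zeros((n, resp_len))
        for o in onsets:
            stop = min(o + resp_len, n)
            a[o:stop, : stop - o] += np.eye(stop - o)   # place an identity block at each onset
        blocks.append(a)
    design = np.hstack(blocks)
    x, *_ = np.linalg.lstsq(design, y, rcond=None)
    return [x[i * resp_len:(i + 1) * resp_len] for i in range(len(onsets_per_cat))]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n, resp_len = 20000, 300
    true = [np.hanning(resp_len), 0.5 * np.hanning(resp_len)]   # two level-dependent responses
    onsets = [np.sort(rng.choice(n - resp_len, 80, replace=False)) for _ in true]
    y = rng.normal(0, 0.2, n)
    for r, ons in zip(true, onsets):
        for o in ons:
            y[o:o + resp_len] += r                               # overlapping responses
    est = deconvolve_multi(y, onsets, resp_len)
    print("recovered peak amplitudes:", round(float(est[0].max()), 2), round(float(est[1].max()), 2))
```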


Subject(s)
Acoustic Stimulation , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Humans , Acoustic Stimulation/methods , Male , Adult , Electroencephalography/methods , Female , Least-Squares Analysis , Young Adult , Signal Processing, Computer-Assisted , Reaction Time , Auditory Perception/physiology