Results 1 - 20 of 42
1.
Clin Neurophysiol ; 165: 44-54, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38959535

ABSTRACT

OBJECTIVE: This study aimed to evaluate whether the auditory brainstem response (ABR) obtained with a paired-click stimulation paradigm could serve as a tool for detecting cochlear synaptopathy (CS). METHODS: ABRs to single clicks and to paired clicks with various inter-click intervals (ICIs), together with scores for word intelligibility in degraded listening conditions, were obtained from 57 adults with normal hearing. The wave I peak amplitude and the root mean square of the post-wave I response within a window delayed from the wave I peak (referred to as the RMSpost-w1) were calculated for the single- and second-click responses. RESULTS: The wave I peak amplitudes did not correlate with age (except for the second-click responses at an ICI of 7 ms) or with the word intelligibility scores. However, we found that the RMSpost-w1 values for the second-click responses significantly decreased with increasing age. Moreover, the RMSpost-w1 values for the second-click responses at an ICI of 5 ms correlated significantly with the scores for word intelligibility in degraded listening conditions. CONCLUSIONS: The magnitude of the post-wave I response to the second click could serve as a tool for detecting CS in humans. SIGNIFICANCE: Our findings shed new light on analytical methods of ABR for quantifying CS.
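As a rough sketch of the two measures, assuming a single ABR waveform sampled at `fs`; the window latencies, widths, and function name here are illustrative placeholders, not the study's exact parameters:

```python
import numpy as np

def abr_metrics(abr, fs, w1_latency_ms=1.5, search_ms=0.5,
                rms_delay_ms=1.0, rms_win_ms=4.0):
    """Wave I peak amplitude and RMS of the post-wave-I response.

    abr : 1-D ABR waveform with time zero at click onset.
    fs  : sampling rate in Hz.
    All window positions are illustrative defaults.
    """
    t = np.arange(len(abr)) / fs * 1e3                       # time in ms
    # wave I peak: maximum within a search window around its nominal latency
    search = (t >= w1_latency_ms - search_ms) & (t <= w1_latency_ms + search_ms)
    i_peak = np.flatnonzero(search)[np.argmax(abr[search])]
    w1_amp = abr[i_peak]
    # RMS over a window delayed from the wave I peak (the RMSpost-w1 idea)
    start = t[i_peak] + rms_delay_ms
    win = (t >= start) & (t < start + rms_win_ms)
    rms_post_w1 = np.sqrt(np.mean(abr[win] ** 2))
    return w1_amp, rms_post_w1
```

In practice these two values would be computed per listener for the single- and second-click responses and then correlated with age and intelligibility scores.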

2.
J Neurosci ; 43(21): 3876-3894, 2023 05 24.
Article in English | MEDLINE | ID: mdl-37185101

ABSTRACT

Natural sounds contain rich patterns of amplitude modulation (AM), which is one of the essential sound dimensions for auditory perception. The sensitivity of human hearing to AM measured by psychophysics takes diverse forms depending on the experimental conditions. Here, we address with a single framework the questions of why such patterns of AM sensitivity have emerged in the human auditory system and how they are realized by our neural mechanisms. Assuming that optimization for natural sound recognition has taken place during human evolution and development, we examined its effect on the formation of AM sensitivity by optimizing a computational model, specifically, a multilayer neural network, for natural sound (namely, everyday sounds and speech sounds) recognition and simulating psychophysical experiments in which the AM sensitivity of the model was assessed. Relatively higher layers in the model optimized to sounds with natural AM statistics exhibited AM sensitivity similar to that of humans, although the model was not designed to reproduce human-like AM sensitivity. Moreover, simulated neurophysiological experiments on the model revealed a correspondence between the model layers and the auditory brain regions. The layers in which human-like psychophysical AM sensitivity emerged exhibited substantial neurophysiological similarity with the auditory midbrain and higher regions. These results suggest that human behavioral AM sensitivity has emerged as a result of optimization for natural sound recognition in the course of our evolution and/or development and that it is based on a stimulus representation encoded in the neural firing rates in the auditory midbrain and higher regions.

SIGNIFICANCE STATEMENT: This study provides a computational paradigm to bridge the gap between the behavioral properties of human sensory systems as measured in psychophysics and neural representations as measured in nonhuman neurophysiology. This was accomplished by combining the knowledge and techniques in psychophysics, neurophysiology, and machine learning. As a specific target modality, we focused on the auditory sensitivity to sound AM. We built an artificial neural network model that performs natural sound recognition and simulated psychophysical and neurophysiological experiments in the model. Quantitative comparison of a machine learning model with human and nonhuman data made it possible to integrate the knowledge of behavioral AM sensitivity and neural AM tunings from the perspective of optimization to natural sound recognition.
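The simulated "single-unit" AM probe described above can be sketched in miniature. In this toy version an untrained random filter stands in for one model unit, and a simple synchronization index plays the role of a temporal modulation transfer function; the architecture, filter, and index are illustrative assumptions, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs                     # 1 s of signal

def am_noise(rate, depth=1.0):
    """Broadband noise carrier, sinusoidally amplitude modulated at `rate` Hz."""
    carrier = rng.standard_normal(t.size)
    return (1.0 + depth * np.sin(2 * np.pi * rate * t)) * carrier

# Toy stand-in for one network unit: rectified output of a random FIR filter.
w = rng.standard_normal(64)

def unit_response(x):
    return np.maximum(np.convolve(x, w, mode="same"), 0.0)

def sync_index(y, rate):
    """Magnitude of the response component at the modulation rate,
    normalized by the mean response (a crude synchronization measure)."""
    c = np.exp(-2j * np.pi * rate * t)
    return np.abs(np.mean(y * c)) / np.mean(y)

# Rate-based AM tuning curve of the unit: one point per modulation rate.
rates = [2, 4, 8, 16, 32, 64, 128]
mtf = [sync_index(unit_response(am_noise(r)), r) for r in rates]
```

In the actual study the same kind of probe would be applied to every unit of a trained recognition network and the resulting tunings compared with neurophysiological data.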


Subject(s)
Auditory Cortex; Sound; Humans; Auditory Perception/physiology; Brain/physiology; Hearing; Mesencephalon/physiology; Acoustic Stimulation; Auditory Cortex/physiology
3.
J Cogn Neurosci ; 35(2): 276-290, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36306257

ABSTRACT

Attention to the relevant object and space is the brain's strategy for effectively processing the information of interest in complex environments with limited neural resources. Numerous studies have documented how attention is allocated in the visual domain, whereas the nature of attention in the auditory domain has been much less explored. Here, we show that the pupillary light response can serve as a physiological index of auditory attentional shifts and can also be used to probe the relationship between space-based and object-based attention. Experiments demonstrated that the pupillary response corresponds to the luminance condition at the location of the attended auditory object (e.g., a spoken sentence), regardless of whether attention was directed by a spatial (left or right) or nonspatial (e.g., the gender of the talker) cue and regardless of whether the sound was presented via headphones or loudspeakers. These effects on the pupillary light response could not be accounted for by small (although observable) drifts in gaze position. The overall results imply a unified audiovisual representation of spatial attention: auditory object-based attention carries a spatial representation of the attended object, even when attention is oriented to it without explicit spatial guidance.


Subject(s)
Auditory Perception; Cues; Humans; Auditory Perception/physiology; Space Perception/physiology
4.
Sci Rep ; 12(1): 19142, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351979

ABSTRACT

Individuals with autism spectrum disorder (ASD) are reported to exhibit degraded performance in sound localization. This study investigated whether sensitivity to interaural level differences (ILDs) and interaural time differences (ITDs), the major cues for horizontal sound localization, is affected in ASD. Thresholds for discriminating ILDs and ITDs were measured in a lateralization experiment for adults with ASD and for age- and IQ-matched controls. Results show that the ASD group exhibited higher ILD and ITD thresholds than the control group. Moreover, ITD sensitivity was significantly more diverse in the ASD group, which contained a larger proportion of participants with poor ITD sensitivity than the control group. The current study suggests that deficits in relatively low-level processes in the auditory pathway are implicated in the degraded sound-localization performance of individuals with ASD. The results are consistent with the structural abnormalities and large morphological variability of the brainstem reported by neuroanatomical studies of ASD.


Subject(s)
Autism Spectrum Disorder; Sound Localization; Adult; Humans; Auditory Pathways; Cues; Acoustic Stimulation
5.
Psychophysiology ; 59(8): e14028, 2022 08.
Article in English | MEDLINE | ID: mdl-35226355

ABSTRACT

Dynamic neural network changes that accompany cognitive shifts, such as the internal perceptual alternation of bistable stimuli, are linked to the discharge of noradrenergic locus coeruleus neurons. Transient pupil dilation accompanying this neural reorganization in bistable perception has been reported to precede the reported perceptual alternation. Here, we found that baseline pupil size, an index of arousal fluctuation over longer timescales than those of the transient pupil changes, relates to the frequency of perceptual alternation in auditory bistability. Baseline pupil size was defined as the mean pupil diameter over the 1 s preceding the task requirement (i.e., before the observation period for counting perceptual alternations in Experiment 1 and for reporting whether participants experienced a perceptual alternation in Experiment 2). The results showed that baseline pupil size increased monotonically with the number of perceptual alternations and their occurrence probability. Furthermore, a cross-correlation analysis indicated that baseline pupil size predicted perceptual alternation at least 35 s before the behavioral response and that the overall correspondence between pupil size and perceptual alternation was maintained over a time window of at least 45 s. The overall results suggest that variability in baseline pupil size reflects the stochastic dynamics of arousal fluctuation in the brain related to bistable perception.
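The lagged-correlation logic behind such an analysis can be sketched as follows; the function names and sampling conventions are illustrative, not the study's analysis code:

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x and y with x shifted `lag` samples
    earlier, so a positive lag tests whether x (e.g., baseline pupil size)
    leads y (e.g., a perceptual-alternation count series)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

def cross_correlogram(x, y, max_lag):
    """Correlation at every integer lag in [-max_lag, max_lag]."""
    return {k: lagged_corr(x, y, k) for k in range(-max_lag, max_lag + 1)}
```

With time series sampled once per second, a peak at positive lags of tens of samples would correspond to the pupil leading behavior by tens of seconds.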


Subject(s)
Auditory Perception; Pupil; Arousal; Auditory Perception/physiology; Brain; Humans; Locus Coeruleus/physiology; Pupil/physiology
6.
Cereb Cortex ; 32(22): 5121-5131, 2022 11 09.
Article in English | MEDLINE | ID: mdl-35094068

ABSTRACT

Expectations concerning the timing of a stimulus enhance attention at the time at which the event occurs, which confers significant sensory and behavioral benefits. Herein, we show that temporal expectations modulate even sensory transduction in the auditory periphery via the descending pathway. We measured the medial olivocochlear reflex (MOCR), a sound-activated efferent feedback that controls outer hair cell motility and optimizes the dynamic range of the sensory system. The MOCR was noninvasively assessed using otoacoustic emissions. We found that the MOCR was enhanced by a visual cue presented at a fixed interval before a sound but was unaffected if the interval changed between trials. The MOCR was also stronger when the learned timing expectation matched the timing of the sound but remained unvaried when the two did not match. This implies that the MOCR can be voluntarily controlled in a stimulus- and goal-directed manner. Moreover, we found that the MOCR was enhanced by the expectation of a strong, but not a weak, sound intensity. This asymmetrical enhancement could facilitate antimasking and noise-protective effects without disrupting the detection of faint signals. Therefore, the descending pathway conveys temporal and intensity expectations to modulate auditory processing.


Subject(s)
Cochlea; Motivation; Cochlea/physiology; Acoustic Stimulation; Otoacoustic Emissions, Spontaneous/physiology; Reflex/physiology
7.
Hear Res ; 408: 108274, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34237495

ABSTRACT

When an amplitude modulated signal with a constant-frequency carrier is fed into a generic nonlinear amplifier, the phase of the carrier of the output signal is also modulated. This phenomenon is referred to as amplitude-modulation-to-phase-modulation (AM-to-PM) conversion and regarded as an unwanted signal distortion in the field of electro-communication engineering. Herein, we offer evidence that AM-to-PM conversion also occurs in the human cochlea and that listeners can use the PM information effectively to process the AM of sounds. We recorded otoacoustic emissions (OAEs) evoked by AM signals. The results showed that the OAE phase was modulated at the same rate as the stimulus modulation. The magnitude of the AM-induced PM of the OAE peaked generally around the stimulus level corresponding to the compression point of individual cochlear input-output functions, as estimated using a psychoacoustic method. A computational cochlear model incorporating a nonlinear active process replicates the abovementioned key features of the AM-induced PM observed in OAEs. These results indicate that AM-induced PM occurring at the cochlear partition can be estimated by measuring OAEs. Psychophysical experiments further revealed that, for individuals with higher sensitivity to PM, the PM magnitude is correlated with AM-detection performance. This result implies that the AM-induced PM information cannot be a dominant cue for AM detection, but listeners with higher sensitivity may partly rely on the AM-induced PM cue.
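A minimal simulation of AM-to-PM conversion in a generic nonlinear amplifier can illustrate the phenomenon. The amplitude-dependent phase shift, all constants, and the demodulation scheme below are toy assumptions, not the cochlear model used in the study:

```python
import numpy as np

fs = 48000
fc = 2000.0    # carrier frequency (Hz)
fm = 40.0      # modulation rate (Hz)
k = 0.2        # phase shift per unit amplitude (rad); arbitrary toy value
t = np.arange(int(fs * 0.5)) / fs

# AM input: envelope swings between 0.5 and 1.5
env = 1.0 + 0.5 * np.sin(2 * np.pi * fm * t)

# Toy nonlinear amplifier whose phase advance depends on the instantaneous
# amplitude: this dependence is what converts AM into PM of the carrier.
y = env * np.cos(2 * np.pi * fc * t + k * env)

# Complex demodulation at the carrier to recover the phase track.
z = y * np.exp(-2j * np.pi * fc * t)
win = int(fs / fm / 4)                         # moving-average low-pass
base = np.convolve(z, np.ones(win) / win, mode="same")
phase = np.unwrap(np.angle(base))

# Away from the edges the recovered phase follows k * env, i.e. the output
# phase is modulated at the same rate fm as the stimulus envelope.
core = phase[win:-win]
pm_depth = core.max() - core.min()             # roughly k * (1.5 - 0.5)
```

The same demodulation idea, applied to recorded OAEs instead of a synthetic output, is what allows the AM-induced PM to be read out at the modulation rate.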


Subject(s)
Cochlea; Acoustic Stimulation; Humans; Otoacoustic Emissions, Spontaneous; Psychoacoustics; Sound
8.
Front Hum Neurosci ; 14: 571893, 2020.
Article in English | MEDLINE | ID: mdl-33324183

ABSTRACT

It is often assumed that the reaction time of a saccade toward visual and/or auditory stimuli reflects the sensitivity of our oculomotor-orienting system to stimulus saliency. Endogenous factors, as well as stimulus-related factors, also affect the saccadic reaction time (SRT). However, it was unclear how these factors interact and to what extent visually and auditorily targeted saccades are accounted for by common mechanisms. The present study examined the effect of, and the interaction between, stimulus saliency and audiovisual spatial congruency on the SRT for visual- and auditory-target conditions. We also analyzed pre-target pupil size to examine the relationship between saccade preparation and pupil size, as pupil size is considered to reflect arousal states coupled with locus coeruleus (LC) activity during cognitive tasks. The main findings were that (1) the pattern of the examined effects on the SRT varied between the visual- and auditory-target conditions, (2) the effect of stimulus saliency was significant for the visual-target condition but not for the auditory-target condition, (3) pupil velocity, not absolute pupil size, was sensitive to task set (i.e., visual- vs. auditory-targeting saccade), and (4) there was a significant correlation between pre-saccade absolute pupil size and the SRT for the visual-target condition but not for the auditory-target condition. The discrepancy between target modalities in the effect of pupil velocity, and between absolute pupil size and pupil velocity in the correlation with the SRT, may imply that the pupil effect in the visual-target condition was caused by a modality-specific link between pupil size modulation and the superior colliculus (SC) rather than by the locus coeruleus-norepinephrine (LC-NE) system. These results support the idea that different threshold mechanisms in the SC may be involved in the initiation of saccades toward visual and auditory targets.

9.
Front Psychol ; 11: 316, 2020.
Article in English | MEDLINE | ID: mdl-32194479

ABSTRACT

Auditory frisson is the experience of cold or shivering sensations related to sound in the absence of a physical cold stimulus. Multiple examples of frisson-inducing sounds have been reported, but the mechanism of auditory frisson remains elusive. Typical frisson-inducing sounds may contain a looming effect, in which a sound appears to approach the listener's peripersonal space. Previous studies on sound in peripersonal space have provided objective measurements of sound-induced effects, but few have investigated the subjective experience of frisson-inducing sounds. Here we explored whether it is possible to produce subjective feelings of frisson by moving a noise stimulus (white noise, rolling-beads noise, or frictional noise produced by rubbing a plastic bag) around a listener's head. Our results demonstrated that sound-induced frisson is experienced more strongly when auditory stimuli are rotated around the head (binaural moving sounds) than without rotation (monaural static sounds), regardless of the source of the noise. Pearson's correlation analysis showed that several acoustic features of the stimuli, such as the variance of the interaural level difference (ILD), loudness, and sharpness, were correlated with the magnitude of subjective frisson. We also observed that subjective feelings of frisson were stronger for a moving musical sound than for a static one.

10.
J Neurophysiol ; 123(2): 484-495, 2020 02 01.
Article in English | MEDLINE | ID: mdl-31825707

ABSTRACT

Recent studies using video-based eye tracking have presented accumulating evidence that the postsaccadic oscillation defined with reference to the pupil center (PSOp) is larger than that defined with reference to the iris center (PSOi). This indicates that the relative motion of the pupil reflects the viscoelasticity of the iris tissue. It is known that pupil size, controlled by the sphincter/dilator pupillae muscles, reflects many aspects of cognition. A hypothesis derived from this fact is that cognitive tasks affect the properties of the PSOp through changes in the state of these muscles. To test this hypothesis, we conducted pro- and antisaccade tasks with human participants and adopted a recent physical model of the PSO to evaluate the dynamic properties of the PSOp and PSOi. The results showed a dependence of the elasticity coefficient of the PSOp on the antisaccade task, but this effect was not significant for the PSOi. This suggests that cognitive tasks such as antisaccade tasks affect the elasticity of the iris muscles. We found that the trial-by-trial fluctuation in presaccade absolute pupil size correlated with the elasticity coefficient of the PSOp. We also found a task dependence of the viscosity coefficient and overshoot amount of the PSOi, which probably reflects the dynamics of the entire eyeball movement. The difference in task dependence between the PSOp and PSOi indicates that separate measures of the two can serve as a means to distinguish factors related to the oculomotor neural system from those related to the physiological state of the iris tissue.

NEW & NOTEWORTHY: The state of the eyeball varies dynamically moment by moment depending on underlying neural/cognitive processing. Combining simultaneous measurements of pupil-centric and iris-centric movements with a recent physical model of postsaccadic oscillation (PSO), we show that the pupil-centric PSO is sensitive to the type of saccade task, suggesting that the physical state of the iris muscles reflects the underlying cognitive processes.


Subject(s)
Inhibition, Psychological; Iris/physiology; Saccades/physiology; Visual Perception/physiology; Adult; Auditory Perception/physiology; Cues; Eye Movement Measurements; Female; Humans; Male; Middle Aged; Pupil/physiology; Young Adult
11.
J Acoust Soc Am ; 146(3): EL265, 2019 09.
Article in English | MEDLINE | ID: mdl-31590549

ABSTRACT

Some normal-hearing listeners report difficulties in speech perception in noisy environments, and the cause is not well understood. The present study explores the correlation between speech-in-noise reception performance and cochlear mechanical characteristics, which were evaluated using a principal component analysis of the otoacoustic emission (OAE) spectra. A principal component, specifically a characteristic dip at around 2-2.5 kHz in OAE spectra, correlated with speech reception thresholds in noise but not in quiet. The results suggest that subclinical cochlear dysfunction specifically contributes to difficulties in speech perception in noisy environments, which is possibly a new form of "hidden hearing deficits."
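The analysis pipeline, a principal component analysis of OAE spectra followed by correlation with thresholds, can be sketched as below. This is an illustrative centered PCA via SVD; the matrix layout and the name `srt_in_noise` are assumptions, not the paper's exact pipeline:

```python
import numpy as np

def pca_scores(spectra, n_components=3):
    """Centered PCA via SVD on OAE spectra (rows: subjects, columns:
    frequency bins). Returns per-subject component scores and the
    spectral loadings of each component."""
    X = np.asarray(spectra, dtype=float)
    X = X - X.mean(axis=0)                      # center each frequency bin
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components]
    return scores, loadings

# A component whose loading shows a dip around 2-2.5 kHz could then be
# related to behavior, e.g.:
#   scores, loadings = pca_scores(oae_spectra)
#   r = np.corrcoef(scores[:, 0], srt_in_noise)[0, 1]
```

The loadings identify which spectral pattern a component captures, while the scores quantify how strongly each subject expresses it.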


Subject(s)
Cochlea/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Auditory Threshold; Female; Humans; Male; Noise; Otoacoustic Emissions, Spontaneous; Sound Spectrography; Speech Reception Threshold Test; Young Adult
12.
Nat Commun ; 10(1): 4030, 2019 09 06.
Article in English | MEDLINE | ID: mdl-31492881

ABSTRACT

The ability to track the statistics of our surroundings is a key computational challenge. A prominent theory proposes that the brain monitors for unexpected uncertainty - events which deviate substantially from model predictions, indicating model failure. Norepinephrine is thought to play a key role in this process by serving as an interrupt signal that initiates model resetting. However, existing evidence comes from paradigms in which participants actively monitored stimulus statistics. To determine whether norepinephrine routinely reports the statistical structure of our surroundings, even when it is not behaviourally relevant, we used rapid tone-pip sequences that contained salient pattern changes associated with abrupt structural violations vs. the emergence of regular structure. Phasic pupil dilation responses (PDRs) were monitored as a proxy for noradrenergic activity. We reveal a remarkable specificity: when not behaviourally relevant, only abrupt structural violations evoke a PDR. The results demonstrate that norepinephrine tracks unexpected uncertainty on the rapid timescales relevant to sensory signals.


Subject(s)
Arousal/physiology; Evoked Potentials, Auditory/physiology; Psychomotor Performance/physiology; Pupil/physiology; Reaction Time/physiology; Sound; Acoustic Stimulation; Adult; Brain/metabolism; Brain/physiology; Female; Humans; Male; Norepinephrine/metabolism; Uncertainty; Young Adult
13.
J Neurosci ; 39(39): 7703-7714, 2019 09 25.
Article in English | MEDLINE | ID: mdl-31391262

ABSTRACT

Despite the prevalent use of alerting sounds in alarms and human-machine interface systems and the long-hypothesized role of the auditory system as the brain's "early warning system," we have only a rudimentary understanding of what determines auditory salience-the automatic attraction of attention by sound-and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality.

SIGNIFICANCE STATEMENT: Microsaccades are small, rapid, fixational eye movements that are measurable with sensitive eye-tracking equipment. We reveal a novel, robust link between microsaccade dynamics and the subjective salience of brief sounds (salience rankings obtained from a large number of participants in an online experiment): Within 300 ms of sound onset, the eyes of naive, passively listening participants demonstrate different microsaccade patterns as a function of the sound's crowd-sourced salience. These results position the superior colliculus (hypothesized to underlie microsaccade generation) as an important brain area to investigate in the context of a putative multimodal salience hub. They also demonstrate an objective means for quantifying auditory salience.


Subject(s)
Attention/physiology; Auditory Perception/physiology; Saccades/physiology; Superior Colliculi/physiology; Acoustic Stimulation; Adolescent; Adult; Crowdsourcing; Female; Humans; Male; Young Adult
14.
J Neurosci ; 39(28): 5517-5533, 2019 07 10.
Article in English | MEDLINE | ID: mdl-31092586

ABSTRACT

The auditory system converts the physical properties of a sound waveform to neural activities and processes them for recognition. During the process, the tuning to amplitude modulation (AM) is successively transformed by a cascade of brain regions. To test the functional significance of the AM tuning, we conducted single-unit recording in a deep neural network (DNN) trained for natural sound recognition. We calculated the AM representation in the DNN and quantitatively compared it with those reported in previous neurophysiological studies. We found that an auditory-system-like AM tuning emerges in the optimized DNN. Better-recognizing models showed greater similarity to the auditory system. We isolated the factors forming the AM representation in the different brain regions. Because the model was not designed to reproduce any anatomical or physiological properties of the auditory system other than the cascading architecture, the observed similarity suggests that the AM tuning in the auditory system might also be an emergent property for natural sound recognition during evolution and development.

SIGNIFICANCE STATEMENT: This study suggests that neural tuning to amplitude modulation may be a consequence of the auditory system evolving for natural sound recognition. We modeled the function of the entire auditory system; that is, recognizing sounds from raw waveforms with as few anatomical or physiological assumptions as possible. We analyzed the model using single-unit recording, which enabled a fair comparison with neurophysiological data with as few methodological biases as possible. Interestingly, our results imply that frequency decomposition in the inner ear might not be necessary for processing amplitude modulation. This implication could not have been obtained if we had used a model that assumes frequency decomposition.


Subject(s)
Auditory Perception; Models, Neurological; Neural Networks, Computer; Brain/physiology; Humans; Sound
15.
J Eye Mov Res ; 11(2)2018 Dec 14.
Article in English | MEDLINE | ID: mdl-33828696

ABSTRACT

There are indications that the pupillary dilation response (PDR) reflects surprising moments in an auditory sequence, such as the appearance of a deviant noise against repetitively presented pure tones (4), or salient and loud sounds evaluated subjectively by human participants (12). In the current study, we further examined whether the PDR to auditory surprise accumulates and is revealed in complex yet structured auditory stimuli, i.e., music, when surprise is defined subjectively. Participants listened to 15 excerpts of music while their pupillary responses were recorded. In the surprise-rating session, participants rated how surprising an instance in the excerpt was, i.e., rich in variation versus monotonous, while they listened to it. In the passive-listening session, they listened to the same 15 excerpts again but were not involved in any task. The pupil diameter data obtained from both sessions were time-aligned to the rating data obtained from the surprise-rating session. Results showed that in both sessions, mean pupil diameter was larger at moments rated as more surprising than at those rated unsurprising. The results suggest that the PDR reflects surprise in music automatically.

16.
Sci Rep ; 7(1): 16455, 2017 11 28.
Article in English | MEDLINE | ID: mdl-29184117

ABSTRACT

Our hearing is usually robust against reverberation. This study asked how such robustness to everyday sound is realized and what kinds of acoustic cues contribute to it. We focused on the perception of materials based on impact sounds, which is a common daily experience and for which the responsible acoustic features have already been identified in the absence of reverberation. In our experiment, we instructed participants to identify materials from impact sounds with and without reverberation. The imposition of reverberation did not alter the average responses across participants for perceived materials. However, an analysis of each participant revealed a significant effect of reverberation, with response patterns varying among participants. The effect depended on the context of stimulus presentation; namely, it was smaller for a constant reverberation than when the reverberation varied from presentation to presentation. The context modified the relative contribution of the spectral features of the sounds to material identification, while no consistent change across participants was observed for the temporal features. Although the detailed results varied greatly among participants, these findings suggest that a mechanism exists in the auditory system that compensates for reverberation based on adaptation to the spectral features of reverberant sound.

17.
Neuroimage ; 159: 185-194, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28756239

ABSTRACT

Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Sound Localization/physiology; Acoustic Stimulation/methods; Adult; Electroencephalography; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Young Adult
18.
Front Neurosci ; 11: 387, 2017.
Article in English | MEDLINE | ID: mdl-28729820

ABSTRACT

Interaural time differences (ITD) and interaural level differences (ILD) both signal horizontal sound source location. To achieve a unified percept of our acoustic environment, these two cues require integration. In the present study, we tested this integration of ITD and ILD with electroencephalography (EEG) by measuring the mismatch negativity (MMN). The MMN can arise in response to spatial changes and is at least partly generated in auditory cortex. In our study, we aimed at testing for an MMN in response to stimuli with counter-balanced ITD/ILD cues. To this end, we employed a roving oddball paradigm with alternating sound sequences in two types of blocks: (a) lateralized stimuli with congruently combined ITD/ILD cues and (b) midline stimuli created by counter-balanced, incongruently combined ITD/ILD cues. We observed a significant MMN peaking at about 112-128 ms after change onset for the congruent ITD/ILD cues, for both lower (0.5 kHz) and higher carrier frequency (4 kHz). More importantly, we also observed significant MMN peaking at about 129 ms for incongruently combined ITD/ILD cues, but this effect was only detectable in the lower frequency range (0.5 kHz). There were no significant differences of the MMN responses for the two types of cue combinations (congruent/incongruent). These results suggest that-at least in the lower frequency ranges (0.5 kHz)-ITD and ILD are processed independently at the level of the MMN in auditory cortex.
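Congruent and counter-balanced ITD/ILD stimuli of the kind used in these two EEG studies can be sketched as follows; the sign conventions, the symmetric gain split, and the function name are illustrative assumptions:

```python
import numpy as np

fs = 48000

def lateralize(signal, itd_us=0.0, ild_db=0.0):
    """Stereo (left, right) pair with a given interaural time difference
    (microseconds; positive = right ear leads) and interaural level
    difference (dB; positive = right ear louder)."""
    n = int(round(abs(itd_us) * 1e-6 * fs))
    pad = np.zeros(n)
    if itd_us >= 0:                        # right leads: delay the left channel
        left = np.concatenate([pad, signal])
        right = np.concatenate([signal, pad])
    else:                                  # left leads: delay the right channel
        left = np.concatenate([signal, pad])
        right = np.concatenate([pad, signal])
    g = 10.0 ** (ild_db / 40.0)            # split the ILD across both ears
    return left / g, right * g

# Congruent cues point the same way (e.g., itd_us=+500, ild_db=+10);
# incongruent, counter-balanced cues (e.g., itd_us=+500, ild_db=-10) can
# trade against each other to yield a near-midline percept.
```

Individual cue-trading psychophysics would determine which ILD magnitude cancels a given ITD for each listener.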

19.
Hear Res ; 350: 244-250, 2017 07.
Article in English | MEDLINE | ID: mdl-28323019

ABSTRACT

The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔFTONE, TONE condition) but also in the amplitude modulation rate ("AM cue": ΔFAM, AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔFAM and ΔFTONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between or sensitive to the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest a possibility that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level.
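A minimal generator for such ABA_ sequences, with both a spectral cue (tone frequency) and an AM cue (modulation rate), can be sketched as below; the frequencies, durations, and modulation depth are illustrative placeholders, not the study's parameters:

```python
import numpy as np

fs = 44100

def tone(freq, dur, am_rate=None):
    """Pure tone, optionally sinusoidally amplitude modulated."""
    t = np.arange(int(fs * dur)) / fs
    x = np.sin(2 * np.pi * freq * t)
    if am_rate is not None:
        x = x * 0.5 * (1.0 + np.sin(2 * np.pi * am_rate * t))
    return x

def aba_sequence(fa, fb, tone_dur=0.1, n_repeats=400, am_a=None, am_b=None):
    """ABA_ triplets: A, B, A tones plus a silent gap of one tone duration.
    A difference between fa and fb provides the spectral cue; a difference
    between am_a and am_b (with fa == fb) provides the AM cue."""
    gap = np.zeros(int(fs * tone_dur))
    a = tone(fa, tone_dur, am_a)
    b = tone(fb, tone_dur, am_b)
    return np.tile(np.concatenate([a, b, a, gap]), n_repeats)
```

With `n_repeats=400` this yields the long sequences used to study the bistability of streaming over extended listening.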


Subject(s)
Auditory Pathways/physiology; Auditory Perception; Cues; Pattern Recognition, Physiological; Acoustic Stimulation; Adult; Audiometry, Pure-Tone; Female; Humans; Male; Time Factors; Young Adult
20.
Neuroscience ; 2017 Dec 30.
Article in English | MEDLINE | ID: mdl-29294342

ABSTRACT

This article has been withdrawn: please see the Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy). This article has been withdrawn at the request of the authors. The authors regret that the withdrawal is due to a disagreement over authorship and over the scope of data disclosure. The authors apologize to the readers.
