Results 1 - 19 of 19
1.
J Neurosci ; 43(39): 6667-6678, 2023 09 27.
Article in English | MEDLINE | ID: mdl-37604689

ABSTRACT

Rhythmic entrainment echoes, rhythmic brain responses that outlast rhythmic stimulation, can demonstrate endogenous neural oscillations entrained by the stimulus rhythm. Here, we tested for such echoes in auditory perception. Participants detected a pure tone target, presented at a variable delay after another pure tone that was rhythmically modulated in amplitude. In four experiments involving 154 human (female and male) participants, we tested (1) which stimulus rate produces the strongest entrainment echo and, inspired by the tonotopic organization of the auditory system and findings in nonhuman primates, (2) whether these echoes are organized according to sound frequency. We found the strongest entrainment echoes after 6 and 8 Hz stimulation, respectively. The best moments for target detection (in phase or in antiphase with the preceding rhythm) depended on whether the sound frequencies of the entraining and target stimuli matched, in line with a tonotopic organization. However, for the same experimental condition, best moments were not always consistent across experiments. We offer a speculative explanation for these differences, based on the notion that neural entrainment and repetition-related adaptation might exert opposing influences on perception. Together, we find rhythmic echoes in auditory perception that appear more complex than those predicted by initial theories of neural entrainment. SIGNIFICANCE STATEMENT Rhythmic entrainment echoes are rhythmic brain responses that are produced by a rhythmic stimulus and persist after its offset. These echoes play an important role in identifying endogenous brain oscillations entrained by rhythmic stimulation and give us insight into whether and how participants predict the timing of events. In four independent experiments involving >150 participants, we examined entrainment echoes in auditory perception. We found that entrainment echoes have a preferred rate (between 6 and 8 Hz) and seem to follow the tonotopic organization of the auditory system. We also found tentative evidence that several, potentially competing processes might interact to produce such echoes, a notion that might need to be considered in future experimental design.
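
The echo analysis here reduces to a spectral readout of a behavioral time course: detection accuracy as a function of target delay after stimulus offset. A minimal sketch of that readout, with hypothetical array names and parameters (not the authors' pipeline):

    import numpy as np

    def echo_spectrum(accuracy, delay_step):
        """Amplitude spectrum of the detrended accuracy-by-delay curve."""
        n = len(accuracy)
        idx = np.arange(n)
        x = accuracy - np.polyval(np.polyfit(idx, accuracy, 1), idx)  # detrend
        amp = np.abs(np.fft.rfft(x)) / n
        return np.fft.rfftfreq(n, d=delay_step), amp

    # e.g., 40 target delays spaced 25 ms apart; a spectral peak near the
    # 6-8 Hz stimulation rate would indicate an echo of the entrained rhythm
    rng = np.random.default_rng(0)
    delays = np.arange(40) * 0.025
    acc = 0.7 + 0.05 * np.sin(2 * np.pi * 6 * delays) + 0.02 * rng.standard_normal(40)
    freqs, amp = echo_spectrum(acc, delay_step=0.025)
    print(freqs[np.argmax(amp[1:]) + 1])  # dominant behavioral frequency (Hz)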


Subject(s)
Auditory Perception , Periodicity , Humans , Male , Female , Acoustic Stimulation , Auditory Perception/physiology , Brain , Sound , Electroencephalography
2.
J Neural Eng ; 20(4), 2023 07 13.
Article in English | MEDLINE | ID: mdl-37406631

ABSTRACT

Objective. Many recent studies investigating the processing of continuous natural speech have employed electroencephalography (EEG) because of its high temporal resolution. However, most of these studies have confined their analyses to the electrode space. Here, we explore the underlying neural processing in source space, in particular the dynamic functional interactions among regions during neural entrainment to speech. Approach. We collected 128-channel EEG data while 22 participants listened to story speech and time-reversed speech in a naturalistic paradigm. We compared three strategies for estimating neural tracking responses in order to determine the best way to move from the sensor space to the brain source space. We then used dynamic graph theory to investigate source-space connectivity dynamics among the regions involved in speech tracking. Main results. Comparing the correlations between the predicted neural response and the original common neural response under the two experimental conditions showed that estimating the common neural response in electrode space, followed by source localization of the neural responses, achieved the best performance. The distribution of brain sources entrained to the story speech envelope showed that not only auditory regions but also frontoparietal cognitive regions were recruited, indicating a hierarchical processing mechanism for speech. Further analysis of inter-region interactions based on dynamic graph theory showed that neural entrainment to speech operates across multiple brain regions along this hierarchy, among which the bilateral insula, temporal lobe, and inferior frontal gyrus are key regions controlling information transmission. These information flows produce dynamic fluctuations in functional connection strength and network topology over time, reflecting both bottom-up and top-down processing while orchestrating computations toward understanding. Significance. Our findings have important implications for understanding the neural mechanisms by which the brain processes natural speech.
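
The neural tracking measure this comparison builds on is typically a lagged linear (temporal response function) model mapping the speech envelope to the neural response. A generic ridge-regression sketch, not the authors' specific three-strategy pipeline; `envelope` and `eeg` are hypothetical, preprocessed and time-aligned arrays:

    import numpy as np

    def lagged_design(stim, lags):
        """Design matrix of time-shifted copies of the stimulus."""
        X = np.zeros((len(stim), len(lags)))
        for j, lag in enumerate(lags):
            if lag >= 0:
                X[lag:, j] = stim[:len(stim) - lag]
            else:
                X[:lag, j] = stim[-lag:]
        return X

    def fit_trf(stim, resp, lags, ridge=1.0):
        """Closed-form ridge solution w = (X'X + aI)^(-1) X'y."""
        X = lagged_design(stim, lags)
        return np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ resp)

    fs = 128                                     # assumed sampling rate (Hz)
    lags = np.arange(0, int(0.4 * fs))           # 0-400 ms stimulus-to-EEG lags
    # w = fit_trf(envelope, eeg, lags)           # one TRF column per EEG channel
    # pred = lagged_design(envelope, lags) @ w   # predicted neural response
    # tracking = per-channel correlation between pred and eeg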


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Speech Perception/physiology , Brain/physiology , Electroencephalography , Temporal Lobe/physiology , Acoustic Stimulation/methods
3.
eNeuro ; 10(8), 2023 08.
Article in English | MEDLINE | ID: mdl-37500493

ABSTRACT

When listening to speech, the low-frequency cortical response below 10 Hz can track the speech envelope. Previous studies have demonstrated that the phase lag between the speech envelope and the cortical response can reflect the mechanism by which the envelope-tracking response is generated. Here, we ask whether this mechanism is modulated by the level of consciousness, by studying how the stimulus-response phase lag is affected in patients with disorders of consciousness (DoC). DoC patients in general show less reliable neural tracking of speech. Nevertheless, for DoC patients who show reliable cortical tracking of speech, the stimulus-response phase lag changes linearly with frequency between 3.5 and 8 Hz, regardless of the consciousness state. The mean phase lag is also consistent across these DoC patients. These results suggest that the envelope-tracking response to speech can be generated by an automatic process that is barely modulated by the consciousness state.
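
A phase lag that changes linearly with frequency implies a roughly constant response latency, since latency = -dφ/df / 2π. A small illustration of that readout with fabricated numbers (the 90 ms latency is purely for demonstration):

    import numpy as np

    freqs = np.arange(3.5, 8.5, 0.5)                 # Hz, as in the abstract
    # phase_lag = np.unwrap(measured_phase_lag)      # radians, from cross-spectra
    phase_lag = -2 * np.pi * freqs * 0.09            # fabricated 90 ms latency
    slope, intercept = np.polyfit(freqs, phase_lag, 1)
    latency_s = -slope / (2 * np.pi)
    print(f"apparent response latency: {latency_s * 1000:.0f} ms")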


Subject(s)
Consciousness Disorders , Speech Perception , Humans , Consciousness , Acoustic Stimulation/methods , Speech Perception/physiology , Electroencephalography/methods
4.
Neuroimage ; 277: 120226, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37321359

ABSTRACT

Neural entrainment, defined as unidirectional synchronization of neural oscillations to an external rhythmic stimulus, is a topic of major interest in neuroscience. Despite broad scientific consensus on its existence, on its pivotal role in sensory and motor processes, and on its fundamental definition, empirical research struggles to quantify it with non-invasive electrophysiology. To date, broadly adopted state-of-the-art methods still fail to capture the dynamics underlying the phenomenon. Here, we present event-related frequency adjustment (ERFA) as a methodological framework for inducing and measuring neural entrainment in human participants, optimized for multivariate EEG datasets. By applying dynamic phase and tempo perturbations to isochronous auditory metronomes during a finger-tapping task, we analyzed adaptive changes in the instantaneous frequency of entrained oscillatory components during error correction. Spatial filter design allowed us to disentangle, within the multivariate EEG signal, perceptual and sensorimotor oscillatory components attuned to the stimulation frequency. Both components dynamically adjusted their frequency in response to perturbations, tracking the stimulus dynamics by slowing down and speeding up the oscillation over time. Source separation revealed that sensorimotor processing enhanced the entrained response, supporting the notion that active engagement of the motor system plays a critical role in processing rhythmic stimuli. For phase shifts, motor engagement was a necessary condition for observing any response, whereas sustained tempo changes induced frequency adjustment even in the perceptual oscillatory component. Although the magnitude of the perturbations was matched across positive and negative directions, we observed a general bias in the frequency adjustments toward positive changes, which points to intrinsic dynamics constraining neural entrainment. We conclude that our findings provide compelling evidence for neural entrainment as a mechanism underlying overt sensorimotor synchronization, and that our methodology offers a paradigm and a measure for quantifying its oscillatory dynamics with non-invasive electrophysiology, rigorously informed by the fundamental definition of entrainment.
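
The central quantity in this framework is the instantaneous frequency of an entrained oscillatory component, obtainable from the derivative of the Hilbert phase. A minimal sketch, assuming `comp` is one narrowband component (e.g., from a spatial filter) sampled at `fs` Hz; the names and the test signal are hypothetical:

    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_frequency(comp, fs):
        """Hz; derivative of the unwrapped Hilbert phase (length n-1)."""
        phase = np.unwrap(np.angle(hilbert(comp)))
        return np.diff(phase) * fs / (2 * np.pi)

    fs = 500
    t = np.arange(0, 10, 1 / fs)
    comp = np.sin(2 * np.pi * (2.0 + 0.05 * t) * t)   # slowly accelerating test tone
    ifreq = instantaneous_frequency(comp, fs)
    # around a tempo perturbation, ifreq shows the slowing down and
    # speeding up of the entrained component described above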


Subject(s)
Electroencephalography , Periodicity , Humans , Acoustic Stimulation/methods
5.
Biomed Phys Eng Express ; 8(4), 2022 06 28.
Article in English | MEDLINE | ID: mdl-35320793

ABSTRACT

Neural entrainment, the synchronization of brain oscillations to the frequency of an external stimulus, is a key mechanism that shapes perceptual and cognitive processes. Objective. Using simulations, we investigated the dynamics of neural entrainment, particularly during the period following the end of stimulation, since the persistence (reverberation) of neural entrainment may condition future sensory representations based on predictions about stimulus rhythmicity. Methods. Neural entrainment was assessed using a modified Jansen-Rit neural mass model (NMM) of coupled cortical columns, in which the spectral features of the output resembled those of the electroencephalogram (EEG). We evaluated spectro-temporal features of entrainment as a function of the stimulation frequency, the resonant frequency of the neural populations comprising the NMM, and the coupling strength between cortical columns. Furthermore, we tested whether the persistence of entrainment depended on the phase of the EEG-like oscillation at the time the stimulus ended. Main Results. Entrainment of the column that received the stimulation was maximal when the frequency of the entrainer fell within a narrow range around the resonant frequency of the column. When this occurred, entrainment persisted for several cycles after the stimulus terminated, and propagation of the entrainment to other columns was facilitated. Propagation also depended on the resonant frequency of the second column and the coupling strength between columns. The duration of the persistence of entrainment depended on the phase of the neural oscillation at the time the entrainer terminated, such that falling phases (from π/2 to 3π/2 in a sine function) led to longer persistence than rising phases (from 0 to π/2 and from 3π/2 to 2π). Significance. The study bridges models of neural oscillations and empirical electrophysiology, providing insight into the mechanisms underlying neural entrainment and into the use of rhythmic sensory stimulation for neuroenhancement.
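
As a point of reference, a single uncoupled Jansen-Rit column with standard textbook parameters can be simulated in a few lines; the sketch below drives it with a rhythmic input that stops mid-simulation so the persistence of the entrained oscillation can be inspected. This illustrates the model class only, not the authors' modified coupled-column implementation:

    import numpy as np

    A, B, a, b = 3.25, 22.0, 100.0, 50.0      # synaptic gains (mV) and rates (1/s)
    v0, e0, r = 6.0, 2.5, 0.56                # sigmoid parameters
    C = 135.0
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C

    def S(v):                                  # population firing-rate sigmoid
        return 2 * e0 / (1 + np.exp(r * (v0 - v)))

    dt, T = 1e-4, 4.0
    n = int(T / dt)
    y = np.zeros(6)                            # y0..y2 and their derivatives
    eeg = np.zeros(n)
    f_stim = 10.0                              # entrainer frequency (Hz), illustrative
    for i in range(n):
        t = i * dt
        # rhythmic component of the drive stops at t = 2 s
        p = 220.0 + 100.0 * np.sin(2 * np.pi * f_stim * t) * (t < 2.0)
        dy = np.array([
            y[3], y[4], y[5],
            A * a * S(y[1] - y[2]) - 2 * a * y[3] - a**2 * y[0],
            A * a * (p + C2 * S(C1 * y[0])) - 2 * a * y[4] - a**2 * y[1],
            B * b * C4 * S(C3 * y[0]) - 2 * b * y[5] - b**2 * y[2],
        ])
        y = y + dt * dy                        # Euler step
        eeg[i] = y[1] - y[2]                   # EEG-like output of the column
    # inspecting eeg after t = 2 s shows how long the entrainment persists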


Subject(s)
Electroencephalography , Periodicity , Acoustic Stimulation/methods , Brain/physiology
6.
J Neurosci ; 41(33): 7065-7075, 2021 08 18.
Article in English | MEDLINE | ID: mdl-34261698

ABSTRACT

At any given moment, our sensory systems receive multiple, often rhythmic, inputs from the environment. Whether the processing of temporally structured events in one sensory modality can guide both behavioral and neural processing of events in other sensory modalities remains unclear. Here, we used human electroencephalography (EEG) to test the cross-modal influence of a continuous auditory frequency-modulated (FM) sound on visual perception and visual cortical activity. We report systematic fluctuations in perceptual discrimination of brief visual stimuli in line with the phase of the FM sound. We further show that this rhythmic modulation of visual perception is related to an accompanying rhythmic modulation of neural activity recorded over visual areas. Importantly, in our task, the perceptual and neural visual modulations occurred without any abrupt, salient onsets in the energy of the auditory stimulation and without any rhythmic structure in the visual stimulus. As such, the results provide a critical validation of the existence and functional role of cross-modal entrainment and demonstrate its utility for organizing the perception of multisensory stimulation in the natural environment. SIGNIFICANCE STATEMENT Our sensory environment is filled with rhythmic structures that are often multisensory in nature. Here, we show that the alignment of neural activity to the phase of an auditory frequency-modulated (FM) sound has cross-modal consequences for vision: it yields systematic fluctuations in perceptual discrimination of brief visual stimuli that are mediated by an accompanying rhythmic modulation of neural activity recorded over visual areas. These cross-modal effects on visual neural activity and perception occurred without any abrupt, salient onsets in the energy of the auditory stimulation and without any rhythmic structure in the visual stimulus. The current work shows that continuous auditory fluctuations in the natural environment can provide a pacing signal for neural activity and perception across the senses.


Subject(s)
Acoustic Stimulation , Periodicity , Visual Cortex/physiology , Visual Perception/physiology , Adult , Association Learning/physiology , Electroencephalography , Female , Humans , Male , Young Adult
7.
Elife ; 10, 2021 06 04.
Article in English | MEDLINE | ID: mdl-34086558

ABSTRACT

Temporal regularity is ubiquitous and essential to guiding attention and coordinating behavior within a dynamic environment. Previous researchers have modeled attention as an internal rhythm that may entrain to first-order regularity in rhythmic events to prioritize information selection at specific time points. Using the attentional blink paradigm, here we show that higher-order regularity based on the rhythmic organization of contextual features (pitch, color, or motion) may serve as a temporal frame to recompose the dynamic profile of visual temporal attention. Critically, this attentional reframing effect is well predicted by cortical entrainment to the higher-order contextual structure in the delta band, as well as by its coupling with stimulus-driven alpha power. These results suggest that the human brain involuntarily exploits multiscale regularities in rhythmic contexts to recompose dynamic attending in visual perception, and they highlight neural entrainment as a central mechanism for optimizing our conscious experience of the world in the time dimension.


Subject(s)
Alpha Rhythm , Cerebral Cortex/physiology , Cortical Synchronization , Delta Rhythm , Visual Perception , Acoustic Stimulation , Adolescent , Adult , Attention , Auditory Perception , Electroencephalography , Female , Humans , Male , Photic Stimulation , Time Factors , Young Adult
8.
eNeuro ; 8(1), 2021.
Article in English | MEDLINE | ID: mdl-33272971

ABSTRACT

Speech signals have a long-term modulation spectrum with a unique shape that distinguishes them from environmental noise, music, and non-speech vocalizations. Does the human auditory system adapt to this long-term modulation spectrum and efficiently extract critical information from speech signals? To answer this question, we tested whether neural responses to speech signals can be captured by non-speech acoustic stimuli with specific modulation spectra. We generated amplitude-modulated (AM) noise with the speech modulation spectrum and with 1/f modulation spectra of different exponents, imitating the temporal dynamics of different natural sounds. We presented these AM stimuli and a 10-min piece of natural speech to 19 human participants undergoing electroencephalography (EEG) recording. We derived temporal response functions (TRFs) for the AM stimuli of different spectral shapes and found distinct neural dynamics for each type of TRF. We then used the TRFs of the AM stimuli to predict neural responses to the speech signals and found that (1) the TRFs of AM stimuli with modulation-spectrum exponents of 1, 1.5, and 2 preferentially captured EEG responses to speech in the δ band, and (2) the θ band of the neural responses to speech was best captured by the AM stimuli with an exponent of 0.75. Our results suggest that the human auditory system shows specificity for the long-term modulation spectrum and is equipped with characteristic neural algorithms tailored to extracting critical acoustic information from speech signals.
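
Stimuli of this kind, AM noise whose modulation spectrum follows 1/f^α, can be synthesized by shaping the amplitude spectrum of a random-phase envelope. A sketch under illustrative parameter choices (band limits, sampling rate, and normalization are assumptions, not the authors' specification):

    import numpy as np

    def am_noise(duration, fs, alpha, f_lo=0.5, f_hi=32.0, seed=0):
        rng = np.random.default_rng(seed)
        n = int(duration * fs)
        freqs = np.fft.rfftfreq(n, 1 / fs)
        spec = np.zeros(len(freqs), dtype=complex)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        # random-phase spectrum with 1/f^alpha amplitude in the modulation band
        spec[band] = freqs[band] ** (-alpha) * np.exp(2j * np.pi * rng.random(band.sum()))
        env = np.fft.irfft(spec, n)
        env = (env - env.min()) / (env.max() - env.min())  # envelope in [0, 1]
        carrier = rng.standard_normal(n)                   # broadband noise carrier
        return env * carrier

    stim = am_noise(duration=10.0, fs=16000, alpha=1.5)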


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Auditory Perception , Electroencephalography , Humans , Speech
9.
Brain Behav ; 10(11): e01836, 2020 11.
Article in English | MEDLINE | ID: mdl-32920995

ABSTRACT

INTRODUCTION: Music is ubiquitous and powerful in the world's cultures. Music listening involves abundant information processing (e.g., pitch, rhythm) in the central nervous system and can also induce physiological changes, such as in heart rate and perspiration. Yet previous studies have tended to examine music information processing in the brain separately from physiological changes. In the current study, we focused on the temporal structure of music (i.e., beat and meter) and examined physiology, neural processing, and, most importantly, the relation between the two. METHODS: Simultaneous MEG and ECG data were collected from a group of adults (N = 15) while they passively listened to duple and triple rhythmic patterns. To characterize physiology, we measured heart rate variability (HRV), indexing parasympathetic nervous system (PSNS) function. To characterize neural processing of beat and meter, we examined neural entrainment and calculated the beat-to-meter ratio to index the relation between beat-level and meter-level entrainment. Specifically, the current study investigated three related questions: (a) whether listening to musical rhythms affects HRV; (b) whether the neural beat-to-meter ratio differs between metrical conditions; and (c) whether the neural beat-to-meter ratio is related to HRV. RESULTS: Results suggest that while, at the group level, both HRV and neural processing are highly similar across metrical conditions, at the individual level the neural beat-to-meter ratio significantly predicts HRV, establishing a neural-physiological link. CONCLUSION: This observed link is discussed within the theoretical "neurovisceral integration model" and provides important new perspectives for music cognition and auditory neuroscience research.
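
HRV of this kind is commonly summarized by time-domain indices such as RMSSD, which emphasizes fast, parasympathetically mediated beat-to-beat changes. A minimal sketch; the R-R values are fabricated, and the authors' exact HRV metric may differ:

    import numpy as np

    def rmssd(rr_ms):
        """Root mean square of successive R-R interval differences (ms)."""
        return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

    rr_ms = np.array([812.0, 798.0, 845.0, 820.0, 803.0, 830.0])  # fabricated
    print(f"RMSSD = {rmssd(rr_ms):.1f} ms")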


Subject(s)
Music , Acoustic Stimulation , Auditory Perception , Brain , Cognition
10.
Eur J Neurosci ; 51(5): 1305-1314, 2020 03.
Article in English | MEDLINE | ID: mdl-29514397

ABSTRACT

The aim of this study was to investigate whether attentional influences on speech recognition are reflected in the neural phase entrained by an external modulator. Sentences were presented in 7 Hz sinusoidally modulated noise while the neural response to that modulation frequency was monitored by electroencephalogram (EEG) recordings in 21 participants. We implemented a selective attention paradigm including three different attention conditions while keeping physical stimulus parameters constant. The participants' task was either to repeat the sentence as accurately as possible (speech recognition task), to count the number of decrements implemented in modulated noise (decrement detection task), or to do both (dual task), while the EEG was recorded. Behavioural analysis revealed reduced performance in the dual task condition for decrement detection, possibly reflecting limited cognitive resources. EEG analysis revealed no significant differences in power for the 7 Hz modulation frequency, but an attention-dependent phase difference between tasks. Further phase analysis revealed a significant difference 500 ms after sentence onset between trials with correct and incorrect responses for speech recognition, indicating that speech recognition performance and the neural phase are linked via selective attention mechanisms, at least shortly after sentence onset. However, the neural phase effects identified were small and await further investigation.


Subject(s)
Speech Perception , Acoustic Stimulation , Electroencephalography , Humans , Language , Noise , Recognition, Psychology
11.
Exp Brain Res ; 237(8): 1981-1991, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31152188

ABSTRACT

Both movement and neural activity in humans can be entrained by the regularities of an external stimulus, such as the beat of musical rhythms. Neural entrainment to auditory rhythms supports temporal perception and is enhanced by selective attention and by hierarchical temporal structure imposed on rhythms. However, it is not known how neural entrainment to rhythms relates to the subjective experience of groove (the desire to move along with music or rhythm), the perception of a regular beat, the perception of complexity, and the experience of pleasure. In two experiments, we used musical rhythms (from Steve Reich's Clapping Music) to investigate whether rhythms performed by humans (with naturally variable timing) and mechanical rhythms (with precise timing) elicit differences in (1) neural entrainment, as measured by inter-trial phase coherence, and (2) subjective ratings of the complexity, preference, groove, and beat strength of the rhythms. We also combined results from the two experiments to investigate relationships between neural entrainment and the subjective perception of musical rhythms. We found that mechanical rhythms elicited a greater degree of neural entrainment than performed rhythms, likely due to the greater temporal precision of the stimulus, and the two types elicited different ratings only for some individual rhythms. Neural entrainment to performed rhythms, but not to mechanical ones, correlated with the subjective desire to move and with subjective complexity. These data therefore suggest multiple interacting influences on neural entrainment to rhythms, from low-level stimulus properties to high-level cognition and perception.
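
Inter-trial phase coherence, the entrainment measure used here, is the resultant length of unit phase vectors across trials. A minimal sketch, assuming `trials` holds narrowband single-channel epochs of shape (n_trials, n_times); the array name is hypothetical:

    import numpy as np
    from scipy.signal import hilbert

    def itpc(trials):
        """Inter-trial phase coherence per time point.
        1 = perfect phase locking across trials, 0 = uniformly scattered phases."""
        phases = np.angle(hilbert(trials, axis=-1))
        return np.abs(np.mean(np.exp(1j * phases), axis=0))

    # coherence = itpc(trials)   # one coherence value per time point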


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Music , Periodicity , Pleasure/physiology , Time Perception/physiology , Adult , Electroencephalography/methods , Female , Humans , Male , Music/psychology
12.
Cortex ; 115: 56-71, 2019 06.
Article in English | MEDLINE | ID: mdl-30771622

ABSTRACT

Statistical learning, the process of extracting regularities from the environment, plays an essential role in many aspects of cognition, including speech segmentation and language acquisition. A key component of statistical learning in a linguistic context is the perceptual binding of adjacent individual units (e.g., syllables) into integrated composites (e.g., multisyllabic words). A second, conceptually dissociable component of statistical learning is the memory storage of these integrated representations. Here we examine whether these two dissociable components of statistical learning are differentially impacted by top-down, voluntary attentional resources. Learners' attention was either focused on or diverted from a speech stream made up of repeating nonsense words. Building on our previous findings, we quantified the online perceptual binding of individual syllables into component words using an EEG-based neural entrainment measure. Following exposure, statistical learning was assessed using offline tests sensitive to both perceptual binding and memory storage. Neural measures verified that our manipulation of selective attention successfully reduced limited-capacity resources to the speech stream. Diverting attention away from the speech stream did not alter neural entrainment to the component words or post-exposure familiarity ratings, but it did impact performance on an indirect reaction-time-based memory test. We conclude that theoretically dissociable components of statistical learning are differentially impacted by attention and top-down processing resources. A reduction in attention to the speech stream may impede memory storage of the component words. In contrast, the moment-by-moment perceptual binding of speech regularities can occur even while learners' attention is focused on a demanding concurrent task, and we found no evidence that selective attention modulates this process. These results suggest that learners can acquire basic statistical properties of language without directly focusing on the speech input, potentially opening up previously overlooked opportunities for language learning, particularly in adult learners.


Subject(s)
Attention/physiology , Evoked Potentials/physiology , Learning/physiology , Memory/physiology , Speech/physiology , Acoustic Stimulation , Electroencephalography , Female , Humans , Male , Reaction Time/physiology , Young Adult
13.
Cereb Cortex ; 29(4): 1561-1571, 2019 04 01.
Article in English | MEDLINE | ID: mdl-29788144

ABSTRACT

Segregating concurrent sound streams is a computationally challenging task that requires integrating bottom-up acoustic cues (e.g., pitch) and top-down prior knowledge about the sound streams. In a multi-talker environment, the brain can segregate different speakers within about 100 ms in auditory cortex. Here, we used magnetoencephalographic (MEG) recordings to investigate the temporal and spatial signature of how the brain utilizes prior knowledge to segregate two speech streams from the same speaker, which can hardly be separated based on bottom-up acoustic cues. In a primed condition, the participants know the target speech stream in advance, while in an unprimed condition no such prior knowledge is available. Neural encoding of each speech stream is characterized by the MEG responses tracking the speech envelope. We demonstrate that envelope tracking in bilateral superior temporal gyrus and superior temporal sulcus is much stronger in the primed condition than in the unprimed condition. Priming effects are observed at about 100 ms latency and last more than 600 ms. Interestingly, prior knowledge about the target stream facilitates speech segregation mainly by suppressing the neural tracking of the non-target speech stream. In sum, prior knowledge leads to reliable speech segregation in auditory cortex, even in the absence of reliable bottom-up speech segregation cues.


Subject(s)
Auditory Cortex/physiology , Cues , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Attention , Female , Humans , Magnetoencephalography , Male , Speech Acoustics , Young Adult
14.
Neuroimage ; 167: 396-407, 2018 02 15.
Article in English | MEDLINE | ID: mdl-29170070

ABSTRACT

Neural oscillations can synchronize to external rhythmic stimuli, as for example in speech and music. While previous studies have mainly focused on elucidating the fundamental concept of neural entrainment, less is known about the time course of entrainment. In this human electroencephalography (EEG) study, we unravel the temporal evolution of neural entrainment by contrasting short and long periods of rhythmic stimulation. Listeners had to detect short silent gaps that were systematically distributed with respect to the phase of a 3 Hz frequency-modulated tone. We found that gap detection performance was modulated by the stimulus stream with a consistent stimulus phase across participants for short and long stimulation. Electrophysiological analysis confirmed neural entrainment effects at 3 Hz and the 6 Hz harmonic for both short and long stimulation lengths. 3 Hz source level analysis revealed that longer stimulation resulted in a phase shift of a participant's neural phase relative to the stimulus phase. Phase coupling increased over the first second of stimulation, but no effects for phase coupling strength were observed over time. The dynamic evolution of phase alignment suggests that the brain attunes to external rhythmic stimulation by adapting the brain's internal representation of incoming environmental stimuli.
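
The two quantities tracked over time in this study, coupling strength and the neural-to-stimulus phase lag, can both be read out from the same windowed phase-difference statistic. A sketch with hypothetical inputs (instantaneous phases at 3 Hz; window and step sizes are assumptions):

    import numpy as np

    def sliding_plv(stim_phase, neural_phase, win, step):
        """Phase-locking value and mean phase lag in sliding windows.
        A drift of the lag across windows is the phase shift described above;
        the resultant length is the coupling strength."""
        out = []
        for start in range(0, len(stim_phase) - win + 1, step):
            d = neural_phase[start:start + win] - stim_phase[start:start + win]
            z = np.mean(np.exp(1j * d))
            out.append((np.abs(z), np.angle(z)))   # (coupling strength, lag)
        return np.array(out)

    # e.g., with phases sampled at 250 Hz: sliding_plv(sp, np_, win=250, step=125)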


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Brain Waves/physiology , Electroencephalography Phase Synchronization/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Time Factors , Young Adult
15.
J Neurophysiol ; 116(6): 2497-2512, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27605528

ABSTRACT

During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, in the beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically reflected an individual's conscious speech percept.


Subject(s)
Brain Mapping , Brain Waves/physiology , Comprehension/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Analysis of Variance , Female , Humans , Linguistics , Magnetoencephalography , Male , Reaction Time/physiology , Spectrum Analysis , Vocabulary , Young Adult
16.
Neuroimage ; 124(Pt A): 487-497, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26386347

ABSTRACT

Alignment of neural oscillations with temporally regular input allows listeners to generate temporal expectations. However, it remains unclear how behavior is governed in the context of temporal variability: What role do temporal expectations play, and how do they interact with the strength of neural oscillatory activity? Here, human participants detected near-threshold targets in temporally variable acoustic sequences. Temporal expectation strength was estimated using an oscillator model, and pre-target neural amplitudes in auditory cortex were extracted from magnetoencephalography signals. Temporal expectations modulated target-detection performance, but only when neural delta-band amplitudes were large. Thus, slow neural oscillations act to gate the influence of temporal expectation on perception. Furthermore, slow amplitude fluctuations governed linear and quadratic influences of auditory alpha-band activity on performance. By fusing a model of temporal expectation with neural oscillatory dynamics, the current findings show that human perception in temporally variable contexts relies on complex interactions between multiple neural frequency bands.


Subject(s)
Alpha Rhythm , Auditory Cortex/physiology , Auditory Threshold/physiology , Delta Rhythm , Acoustic Stimulation , Adult , Evoked Potentials, Auditory , Female , Humans , Magnetoencephalography , Male , Young Adult
17.
J Neurol Sci ; 352(1-2): 41-7, 2015 May 15.
Article in English | MEDLINE | ID: mdl-25805454

ABSTRACT

BACKGROUND: Self-reports by musicians affected by Tourette's syndrome and other sources of anecdotal evidence suggest that tics stop when subjects are involved in musical activity. For the first time, we studied this effect systematically, using a questionnaire design to investigate the subjectively assessed impact of musical activity on tic frequency (study 1) and an experimental design to confirm these results (study 2). METHODS: A questionnaire was sent to 29 patients assessing whether listening to music and musical performance would lead to a reduction or an increase in tic frequency. Then, a within-subject repeated-measures design was conducted with eight patients. Five experimental conditions were tested: baseline, musical performance, a short period after musical performance, listening to music, and music imagery. Tics were counted from videotapes. RESULTS: Analysis of the self-reports (study 1) revealed a significant tic reduction for both listening to music and musical performance. In study 2, musical performance, listening to music, and mental imagery of musical performance all reduced tic frequency significantly. The largest reduction occurred during musical performance, when tics almost completely stopped. Furthermore, we found a short-term tic-reducing effect after musical performance. CONCLUSIONS: Self-report assessment revealed that active and passive participation in musical activity can significantly reduce tic frequency, and experimental testing confirmed patients' perception. The reduction briefly outlasts the musical activity itself. Fine motor control, focused attention, and goal-directed behavior are believed to be relevant factors underlying this observation.


Subject(s)
Imagery, Psychotherapy , Music/psychology , Self Report , Tics/therapy , Tourette Syndrome/therapy , Adult , Auditory Perception/physiology , Female , Humans , Male , Middle Aged , Surveys and Questionnaires , Tics/psychology , Time Factors , Tourette Syndrome/physiopathology , Tourette Syndrome/psychology , Treatment Outcome
18.
Philos Trans R Soc Lond B Biol Sci ; 369(1658): 20130393, 2014 Dec 19.
Article in English | MEDLINE | ID: mdl-25385771

ABSTRACT

The ability to perceive a regular beat in music and synchronize with it is a widespread human skill. Fundamental to musical behaviour, beat and meter refer to the perception of periodicities while listening to musical rhythms, and they often involve spontaneous entrainment to move along with these periodicities. Here, we present a novel experimental approach, inspired by the frequency-tagging approach, for understanding the perception and production of rhythmic inputs. This approach is illustrated here by recording human electroencephalogram responses at beat and meter frequencies elicited in various contexts: mental imagery of meter, spontaneous induction of a beat from rhythmic patterns, multisensory integration, and sensorimotor synchronization. Collectively, our observations support the view that entrainment and resonance phenomena subtend the processing of musical rhythms in the human brain. More generally, they highlight the potential of this approach to help us understand the link between the phenomenology of musical beat and meter and the bias towards periodicities that arises under certain circumstances in the nervous system. Entrainment to music provides a highly valuable framework for exploring general entrainment mechanisms as embodied in the human brain.
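
In the frequency-tagging approach, responses at beat and meter frequencies are read off the steady-state EEG spectrum, typically after correcting each target bin for the surrounding noise floor. A minimal sketch; the example frequencies and the bin-neighborhood correction are illustrative, not the authors' exact parameters:

    import numpy as np

    def tagged_amplitude(eeg, fs, f_target):
        """Noise-corrected spectral amplitude at one tagged frequency."""
        amp = np.abs(np.fft.rfft(eeg - eeg.mean())) / len(eeg)
        freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
        i = np.argmin(np.abs(freqs - f_target))
        # subtract the mean of nearby bins (skipping immediate neighbors)
        # to correct for the broadband noise floor
        noise = np.mean(np.concatenate([amp[i - 5:i - 1], amp[i + 2:i + 6]]))
        return amp[i] - noise

    # beat = tagged_amplitude(eeg, fs, 2.4)    # e.g., beat frequency (Hz)
    # meter = tagged_amplitude(eeg, fs, 0.8)   # e.g., ternary meter frequency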


Subject(s)
Auditory Perception/physiology , Brain/physiology , Music/psychology , Periodicity , Acoustic Stimulation , Electroencephalography , Evoked Potentials/physiology , Humans , Time Factors
19.
J Neurosci ; 34(3): 784-92, 2014 Jan 15.
Article in English | MEDLINE | ID: mdl-24431437

ABSTRACT

Resolution of perceptual ambiguity is one function of cross-modal interactions. Here we investigate whether auditory and tactile stimuli can influence binocular rivalry generated by interocular temporal conflict in human subjects. Using dichoptic visual stimuli modulating at different temporal frequencies, we added modulating sounds or vibrations congruent with one or the other visual temporal frequency. Auditory and tactile stimulation both interacted with binocular rivalry by promoting dominance of the congruent visual stimulus. This effect depended on the cross-modal modulation strength and was absent when modulation depth declined to 33%. However, when auditory and tactile stimuli that were too weak on their own to bias binocular rivalry were combined, their influence over vision was very strong, suggesting the auditory and tactile temporal signals combined to influence vision. Similarly, interleaving discrete pulses of auditory and tactile stimuli also promoted dominance of the visual stimulus congruent with the supramodal frequency. When auditory and tactile stimuli were presented at maximum strength, but in antiphase, they had no influence over vision for low temporal frequencies, a null effect again suggesting audio-tactile combination. We also found that the cross-modal interaction was frequency-sensitive at low temporal frequencies, when information about temporal phase alignment can be perceptually tracked. These results show that auditory and tactile temporal processing is functionally linked, suggesting a common neural substrate for the two sensory modalities and that at low temporal frequencies visual activity can be synchronized by a congruent cross-modal signal in a frequency-selective way, suggesting the existence of a supramodal temporal binding mechanism.


Subject(s)
Acoustic Stimulation/methods , Photic Stimulation/methods , Physical Stimulation/methods , Psychomotor Performance/physiology , Vision, Binocular/physiology , Adult , Auditory Perception/physiology , Female , Humans , Male , Touch/physiology , Young Adult