1.
Eur J Neurosci ; 59(3): 394-414, 2024 Feb.
Article En | MEDLINE | ID: mdl-38151889

Human speech is a particularly relevant acoustic stimulus for our species, due to its role in transmitting information during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity that follows the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints shared by neural and speech dynamics, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.


Speech Perception , Speech , Humans , Acoustic Stimulation , Acoustics
2.
J Neurosci ; 43(39): 6667-6678, 2023 09 27.
Article En | MEDLINE | ID: mdl-37604689

Rhythmic entrainment echoes (rhythmic brain responses that outlast rhythmic stimulation) can demonstrate endogenous neural oscillations entrained by the stimulus rhythm. Here, we tested for such echoes in auditory perception. Participants detected a pure tone target, presented at a variable delay after another pure tone that was rhythmically modulated in amplitude. In four experiments involving 154 human (female and male) participants, we tested (1) which stimulus rate produces the strongest entrainment echo and, inspired by the tonotopic organization of the auditory system and findings in nonhuman primates, (2) whether these echoes are organized according to sound frequency. We found the strongest entrainment echoes after 6 and 8 Hz stimulation. The best moments for target detection (in phase or antiphase with the preceding rhythm) depended on whether the sound frequencies of the entraining and target stimuli matched, in line with a tonotopic organization. However, for the same experimental condition, best moments were not always consistent across experiments. We offer a speculative explanation for these differences, based on the notion that neural entrainment and repetition-related adaptation might exert competing, opposite influences on perception. Together, we find rhythmic echoes in auditory perception that seem more complex than those predicted by initial theories of neural entrainment. SIGNIFICANCE STATEMENT: Rhythmic entrainment echoes are rhythmic brain responses that are produced by a rhythmic stimulus and persist after its offset. These echoes play an important role in the identification of endogenous brain oscillations entrained by rhythmic stimulation, and give us insight into whether and how participants predict the timing of events. In four independent experiments involving >150 participants, we examined entrainment echoes in auditory perception. We found that entrainment echoes have a preferred rate (between 6 and 8 Hz) and seem to follow the tonotopic organization of the auditory system. Although speculative, we also found evidence that several, potentially competing processes might interact to produce such echoes, a notion that might need to be considered in future experimental designs.


Auditory Perception , Periodicity , Humans , Male , Female , Acoustic Stimulation , Auditory Perception/physiology , Brain , Sound , Electroencephalography
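
The delay-dependent detection analysis described in this abstract can be illustrated with a minimal sketch: regress single-trial hits on the sine and cosine of the expected oscillatory cycle at the stimulation rate, and take the resulting amplitude as the size of the rhythmic "echo". The synthetic data, the 6 Hz rate and the least-squares fit below are illustrative assumptions; the study's actual analysis (e.g., spectral or permutation-based statistics) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
stim_rate = 6.0                       # Hz; rate of the preceding rhythm (assumed)
delays = rng.uniform(0.1, 1.0, 2000)  # target delays after rhythm offset (s)

# Simulate hits whose probability is weakly modulated at the stimulation rate.
p_hit = 0.7 + 0.1 * np.cos(2 * np.pi * stim_rate * delays)
hits = (rng.random(2000) < p_hit).astype(float)

# Regress single-trial hits on sine and cosine of the expected oscillation;
# a non-zero amplitude means performance still fluctuates at stim_rate.
X = np.column_stack([np.ones_like(delays),
                     np.sin(2 * np.pi * stim_rate * delays),
                     np.cos(2 * np.pi * stim_rate * delays)])
beta, *_ = np.linalg.lstsq(X, hits, rcond=None)
amplitude = np.hypot(beta[1], beta[2])     # size of the rhythmic modulation
best_phase = np.arctan2(beta[1], beta[2])  # phase at which detection peaks

print(f"echo amplitude: {amplitude:.3f}, best phase: {best_phase:.2f} rad")
```

In practice, the fitted amplitude would be compared against a surrogate distribution (e.g., from shuffled delays) before being interpreted as an entrainment echo.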
3.
PLoS One ; 18(1): e0279024, 2023.
Article En | MEDLINE | ID: mdl-36634109

Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or whether it is also influenced by the linguistic units (syllables, words, etc.) that listeners extract from intelligible speech. Here, we present three experiments in which participants were asked to detect an irregularity in rhythmically spoken speech sequences. In each experiment, we reduce the number of possible stimulus properties that differ between intelligible and unintelligible speech sounds and show that these acoustically matched intelligibility conditions nonetheless lead to differences in rhythm perception. In Experiment 1, we replicate a previous study showing that rhythm perception is improved for intelligible (16-channel vocoded) compared with unintelligible (1-channel vocoded) speech, despite near-identical broadband amplitude modulations. In Experiment 2, we use spectrally rotated 16-channel speech to show that the effect of intelligibility cannot be explained by differences in spectral complexity. In Experiment 3, we compare rhythm perception for sine-wave speech signals when they are heard as non-speech (by naïve listeners) and, after training, when identical sounds are perceived as speech. In all cases, detection of rhythmic regularity is enhanced when participants perceive the stimulus as speech compared to when they do not. Together, these findings demonstrate that intelligibility enhances the perception of timing changes in speech, which is hence linked to processes that extract abstract linguistic units from sound.


Speech Intelligibility , Speech Perception , Humans , Phonetics , Acoustics , Cognition , Acoustic Stimulation , Auditory Perception
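
Noise vocoding, the manipulation behind the 16-channel (intelligible) and 1-channel (unintelligible) conditions, replaces the fine structure within each frequency band with envelope-modulated noise. The sketch below is a minimal, assumed implementation (Butterworth filters, Hilbert envelopes, log-spaced bands, a toy chirp standing in for speech); it is not the vocoder used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, chirp

def noise_vocode(x, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    """Return an n-channel noise-vocoded version of signal x."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.default_rng(1).standard_normal(len(x))
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        band = filtfilt(b, a, x)             # band-limited input
        env = np.abs(hilbert(band))          # amplitude envelope of the band
        carrier = filtfilt(b, a, noise)      # band-limited noise carrier
        out += env * carrier                 # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
toy_speech = chirp(t, f0=200, f1=3000, t1=1.0)       # stand-in for speech
intelligible_like = noise_vocode(toy_speech, fs, n_channels=16)
unintelligible_like = noise_vocode(toy_speech, fs, n_channels=1)
```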
5.
Nat Commun ; 12(1): 4839, 2021 08 10.
Article En | MEDLINE | ID: mdl-34376673

The ability to maintain a sequence of items in memory is a fundamental cognitive function. In the rodent hippocampus, the representation of sequentially organized spatial locations is reflected by the phase of action potentials relative to the theta oscillation (phase precession). We investigated whether the timing of neuronal activity relative to the theta brain oscillation also reflects sequence order in the medial temporal lobe of humans. We used a task in which human participants learned a fixed sequence of pictures and recorded single neuron and local field potential activity with implanted electrodes. We report that spikes for three consecutive items in the sequence (the preferred stimulus for each cell, as well as the stimuli immediately preceding and following it) were phase-locked at distinct phases of the theta oscillation. Consistent with phase precession, spikes were fired at progressively earlier phases as the sequence advanced. These findings generalize previous findings in the rodent hippocampus to the human temporal lobe and suggest that encoding stimulus information at distinct oscillatory phases may play a role in maintaining sequential order in memory.


Action Potentials/physiology , Epilepsy/physiopathology , Learning/physiology , Neurons/physiology , Theta Rhythm/physiology , Adolescent , Adult , Epilepsy/diagnosis , Female , Hippocampus/cytology , Hippocampus/physiology , Humans , Male , Models, Neurological , Neurons/cytology , Photic Stimulation/methods , Temporal Lobe/cytology , Temporal Lobe/physiology , Young Adult
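
The core step behind the phase-precession analysis, assigning each spike a phase of the ongoing theta oscillation, can be sketched as follows. The synthetic LFP and spike train, the 4-8 Hz theta band and the Hilbert-based phase estimate are assumptions for illustration; the paper's preprocessing and circular statistics are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
lfp = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(len(t))  # toy LFP

# Band-pass the LFP in the theta range and extract its instantaneous phase.
b, a = butter(3, [4, 8], btype="band", fs=fs)
theta_phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Toy spike train biased toward one theta phase (stand-in for one item's cell).
spike_idx = np.flatnonzero((theta_phase > 0.8) & (theta_phase < 1.2))[::20]
spike_phases = theta_phase[spike_idx]

# Circular mean = preferred phase; resultant length = phase-locking strength.
mean_vec = np.mean(np.exp(1j * spike_phases))
print(f"preferred phase: {np.angle(mean_vec):.2f} rad, "
      f"locking strength: {np.abs(mean_vec):.2f}")
```

Repeating this per sequence position and comparing the preferred phases across positions would test for the progressive phase shift the authors report.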
6.
J Neurosci ; 41(31): 6714-6725, 2021 08 04.
Article En | MEDLINE | ID: mdl-34183446

An indispensable feature of episodic memory is our ability to temporally piece together different elements of an experience into a coherent memory. Hippocampal time cells (neurons that represent temporal information) may play a critical role in this process. Although these cells have been repeatedly found in rodents, it is still unclear to what extent similar temporal selectivity exists in the human hippocampus. Here, we show that temporal context modulates the firing activity of human hippocampal neurons during structured temporal experiences. We recorded neuronal activity in the human brain while patients of either sex learned predictable sequences of pictures. We report that human time cells fire at successive moments in this task. Furthermore, time cells also signaled inherently changing temporal contexts during empty 10 s gap periods between trials while participants waited for the task to resume. Finally, population activity allowed for decoding temporal epoch identity, both during sequence learning and during the gap periods. These findings suggest that human hippocampal neurons could play an essential role in temporally organizing distinct moments of an experience in episodic memory. SIGNIFICANCE STATEMENT: Episodic memory refers to our ability to remember the what, where, and when of a past experience. Representing time is an important component of this form of memory. Here, we show that neurons in the human hippocampus represent temporal information. This temporal signature was observed both when participants were actively engaged in a memory task and during 10-s-long gaps when they were asked to wait before performing the task. Furthermore, the activity of the population of hippocampal cells allowed for decoding one temporal epoch from another. These results suggest a robust representation of time in the human hippocampus.


Hippocampus/physiology , Memory, Episodic , Neurons/physiology , Time Perception/physiology , Adult , Electrocorticography , Female , Humans , Male , Middle Aged
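
The population decoding mentioned in this abstract can be sketched with a standard cross-validated linear classifier applied to firing-rate vectors. The synthetic firing rates, the numbers of neurons and epochs, and the use of scikit-learn's LogisticRegression are illustrative assumptions; the authors' decoder and validation scheme may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_neurons, n_epochs, n_trials = 40, 5, 60

# Each epoch has its own mean firing-rate pattern plus trial-by-trial noise.
epoch_patterns = rng.gamma(2.0, 2.0, size=(n_epochs, n_neurons))
X = np.vstack([pattern + rng.standard_normal((n_trials, n_neurons))
               for pattern in epoch_patterns])
y = np.repeat(np.arange(n_epochs), n_trials)

# Above-chance cross-validated accuracy indicates that temporal epochs are
# separable from population activity alone.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_epochs:.2f})")
```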
7.
PLoS Biol ; 19(2): e3001142, 2021 02.
Article En | MEDLINE | ID: mdl-33635855

Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or "entrained") to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses that continue after the end of the stimulus. Such sustained effects provide evidence for a true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible, speech produces oscillatory responses in magnetoencephalography (MEG) that outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to the most accurate speech perception. Together, we provide fundamental results for several lines of research (including neural entrainment and tACS) and reveal endogenous neural oscillations as a key underlying principle of speech perception.


Brain/physiology , Speech Perception/physiology , Adult , Biological Clocks , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Middle Aged , Transcranial Direct Current Stimulation
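
A minimal way to look for the "sustained" rhythmic response described here is to ask whether activity after stimulus offset still carries power at the entrained rate compared with neighbouring frequencies. The single synthetic channel, the 3 Hz rate and the simple FFT contrast below are assumptions for illustration; the MEG sensor selection and permutation statistics used in the study are not reproduced.

```python
import numpy as np

fs, stim_rate = 200, 3.0           # sampling rate (Hz), assumed speech/tACS rate
rng = np.random.default_rng(4)
t = np.arange(0, 2.0, 1 / fs)      # 2 s window *after* stimulus offset

# Toy post-offset signal: weak oscillation at the stimulation rate plus noise.
post = 0.3 * np.cos(2 * np.pi * stim_rate * t) + rng.standard_normal(len(t))

freqs = np.fft.rfftfreq(len(post), 1 / fs)
power = np.abs(np.fft.rfft(post * np.hanning(len(post)))) ** 2

# Compare power at the stimulation rate against nearby frequencies.
target = power[np.argmin(np.abs(freqs - stim_rate))]
neighbours = power[(freqs > 1) & (freqs < 8) & (np.abs(freqs - stim_rate) > 0.5)]
print(f"power at {stim_rate} Hz vs neighbour mean: "
      f"{target:.1f} vs {neighbours.mean():.1f}")
```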
8.
Curr Res Neurobiol ; 2: 100015, 2021.
Article En | MEDLINE | ID: mdl-36246513

In pandemic times, when visual speech cues are masked, it becomes particularly evident how much we rely on them to communicate. Recent research points to a key role of neural oscillations for cross-modal predictions during speech perception. This article bridges several fields of research (neural oscillations, cross-modal speech perception and brain stimulation) to propose ways forward for research on human communication. Future research can test: (1) whether "speech is special" for oscillatory processes underlying cross-modal predictions; (2) whether "visual control" of oscillatory processes in the auditory system is strongest in moments of reduced acoustic regularity; and (3) whether providing information to the brain via electric stimulation can overcome deficits associated with cross-modal information processing in certain pathological conditions.

9.
J Cogn Neurosci ; 32(2): 226-240, 2020 02.
Article En | MEDLINE | ID: mdl-31659922

Several recent studies have used transcranial alternating current stimulation (tACS) to demonstrate a causal role of neural oscillatory activity in speech processing. In particular, it has been shown that the ability to understand speech in a multi-speaker scenario or background noise depends on the timing of speech presentation relative to simultaneously applied tACS. However, it is possible that tACS did not change actual speech perception but rather auditory stream segregation. In this study, we tested whether the phase relation between tACS and the rhythm of degraded words, presented in silence, modulates word report accuracy. We found strong evidence for a tACS-induced modulation of speech perception, but only if the stimulation was applied bilaterally using ring electrodes (not for unilateral left hemisphere stimulation with square electrodes). These results were only obtained when data were analyzed using a statistical approach that was identified as optimal in a previous simulation study. The effect was driven by a phasic disruption of word report scores. Our results suggest a causal role of neural entrainment for speech perception and emphasize the importance of optimizing stimulation protocols and statistical approaches for brain stimulation research.


Cerebral Cortex/physiology , Speech Perception/physiology , Transcranial Direct Current Stimulation , Adult , Female , Humans , Male , Placebos , Psychomotor Performance/physiology , Time Factors , Young Adult
10.
Curr Biol ; 29(24): R1318-R1320, 2019 12 16.
Article En | MEDLINE | ID: mdl-31846682

Previous research has demonstrated that auditory perception fluctuates rhythmically after a cue. New research shows that these 'behavioural oscillations' critically depend on expectations from preceding stimulation.


Auditory Perception , Acoustic Stimulation
11.
Neuroimage ; 202: 116175, 2019 11 15.
Article En | MEDLINE | ID: mdl-31499178

Research on whether perception or other processes depend on the phase of neural oscillations is rapidly gaining popularity. However, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect such an effect on dichotomous (e.g., "hit" vs "miss") and continuous (e.g., scalp potentials) response variables. We manipulated parameters that characterise the phase effect or define the experimental approach to test for this effect. For each parameter combination and response variable, we identified an optimal method. We found that methods regressing single-trial responses on circular (sine and cosine) predictors perform best for all of the simulated parameters, regardless of the nature of the response variable (dichotomous or continuous). In sum, our study lays a foundation for optimized experimental designs and analyses in future studies investigating the role of phase for neural and behavioural responses. We provide MATLAB code for the statistical methods tested.


Brain/physiology , Models, Neurological , Neurons/physiology , Perception/physiology , Computer Simulation , Data Interpretation, Statistical , Electroencephalography , Humans , Magnetoencephalography , Transcranial Direct Current Stimulation
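
The winning approach identified in this simulation study, regressing single-trial responses on circular (sine and cosine) predictors, can be sketched for a dichotomous outcome as a logistic regression with a likelihood-ratio test on the two circular terms. The synthetic data and the use of statsmodels below are assumptions; the authors provide their own MATLAB implementation, which may differ in detail.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(5)
n = 1000
phase = rng.uniform(-np.pi, np.pi, n)                    # oscillatory phase per trial
p_hit = 1 / (1 + np.exp(-(0.2 + 0.5 * np.cos(phase))))   # phase-dependent hit rate
hit = (rng.random(n) < p_hit).astype(int)

# Full model: intercept + sine and cosine of phase (the circular predictors).
X_full = sm.add_constant(np.column_stack([np.sin(phase), np.cos(phase)]))
full = sm.Logit(hit, X_full).fit(disp=0)
null = sm.Logit(hit, np.ones((n, 1))).fit(disp=0)        # intercept-only model

# Likelihood-ratio test with 2 df (one each for the sine and cosine terms).
lr = 2 * (full.llf - null.llf)
print(f"LR = {lr:.1f}, p = {chi2.sf(lr, df=2):.4f}")
```

For a continuous response variable, the same sine/cosine design matrix can be used with ordinary least squares instead of logistic regression.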
12.
Curr Biol ; 28(18): R1102-R1104, 2018 09 24.
Article En | MEDLINE | ID: mdl-30253150

It has been hypothesized that stimulus-aligned brain rhythms reflect predictions about upcoming input. New research shows that these rhythms bias subsequent speech perception, in line with a mechanism of prediction.


Speech Perception , Speech , Acoustic Stimulation , Brain , Hearing
13.
Front Neurosci ; 12: 95, 2018.
Article En | MEDLINE | ID: mdl-29563860

It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input, while critically pointing out which questions still need to be addressed before this evidence can be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.

14.
Curr Biol ; 28(3): 401-408.e5, 2018 02 05.
Article En | MEDLINE | ID: mdl-29358073

Due to their periodic nature, neural oscillations might represent an optimal "tool" for the processing of rhythmic stimulus input [1-3]. Indeed, the alignment of neural oscillations to a rhythmic stimulus, often termed phase entrainment, has been repeatedly demonstrated [4-7]. Phase entrainment is central to current theories of speech processing [8-10] and has been associated with successful speech comprehension [11-17]. However, typical manipulations that reduce speech intelligibility (e.g., addition of noise and time reversal [11, 12, 14, 16, 17]) could destroy critical acoustic cues for entrainment (such as "acoustic edges" [7]). Hence, the association between phase entrainment and speech intelligibility might only be "epiphenomenal"; i.e., both decline due to the same manipulation, without any causal link between the two [18]. Here, we use transcranial alternating current stimulation (tACS [19]) to manipulate the phase lag between neural oscillations and speech rhythm while measuring neural responses to intelligible and unintelligible vocoded stimuli with sparse fMRI. We found that this manipulation significantly modulates the BOLD response to intelligible speech in the superior temporal gyrus, and the strength of BOLD modulation is correlated with a phasic modulation of performance in a behavioral task. Importantly, these findings are absent for unintelligible speech and during sham stimulation; we thus demonstrate that phase entrainment has a specific, causal influence on neural responses to intelligible speech. Our results not only provide an important step toward understanding the neural foundation of human abilities at speech comprehension but also suggest new methods for enhancing speech perception that can be explored in the future.


Brain/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Comprehension , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Transcranial Direct Current Stimulation , Young Adult
15.
Lang Cogn Neurosci ; 32(7): 910-923, 2017 Aug 09.
Article En | MEDLINE | ID: mdl-28670598

Transcranial electric stimulation (tES), comprising transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), involves applying weak electrical current to the scalp, which can be used to modulate membrane potentials and thereby modify neural activity. Critically, behavioural or perceptual consequences of this modulation provide evidence for a causal role of neural activity in the stimulated brain region for the observed outcome. We present tES as a tool for the investigation of which neural responses are necessary for successful speech perception and comprehension. We summarise existing studies, along with challenges that need to be overcome, potential solutions, and future directions. We conclude that, although standardised stimulation parameters still need to be established, tES is a promising tool for revealing the neural basis of speech processing. Future research can use this method to explore the causal role of brain regions and neural processes for the perception and comprehension of speech.

16.
Front Neurosci ; 11: 296, 2017.
Article En | MEDLINE | ID: mdl-28603483

All sensory systems need to continuously prioritize and select incoming stimuli in order to avoid overflow or interference, and provide a structure to the brain's input. However, the characteristics of this input differ across sensory systems; therefore, and as a direct consequence, each sensory system might have developed specialized strategies to cope with the continuous stream of incoming information. Neural oscillations are intimately connected with this selection process, as they can be used by the brain to rhythmically amplify or attenuate input and therefore represent an optimal tool for stimulus selection. In this paper, we focus on oscillatory processes for stimulus selection in the visual and auditory systems. We point out both commonalities and differences between the two systems and develop several hypotheses, inspired by recently published findings: (1) The rhythmic component in its input is crucial for the auditory, but not for the visual system. The alignment between oscillatory phase and rhythmic input (phase entrainment) is therefore an integral part of stimulus selection in the auditory system whereas the visual system merely adjusts its phase to upcoming events, without the need for any rhythmic component. (2) When input is unpredictable, the visual system can maintain its oscillatory sampling, whereas the auditory system switches to a different, potentially internally oriented, "mode" of processing that might be characterized by alpha oscillations. (3) Visual alpha can be divided into a faster occipital alpha (10 Hz) and a slower frontal alpha (7 Hz) that critically depends on attention.

17.
Neuroimage ; 150: 344-357, 2017 04 15.
Article En | MEDLINE | ID: mdl-28188912

Neural entrainment, the alignment between neural oscillations and rhythmic stimulation, is omnipresent in current theories of speech processing; nevertheless, the underlying neural mechanisms are still largely unknown. Here, we hypothesized that laminar recordings in non-human primates provide important insight into these mechanisms, in particular with respect to processing in cortical layers. We presented one monkey with human everyday speech sounds and recorded neural oscillations (as current-source density, CSD) in primary auditory cortex (A1). We observed that the high-excitability phase of neural oscillations was aligned only with those spectral components of speech to which the recording site was tuned; the opposite, low-excitability phase was aligned with other spectral components. As low- and high-frequency components in speech alternate, this finding might reflect a particularly efficient way of stimulus processing that includes preparing the relevant neuronal populations for the upcoming input. Moreover, when presenting speech/noise sounds without systematic fluctuations in amplitude and spectral content, as well as their time-reversed versions, we found significant entrainment in all conditions and cortical layers. Compared with everyday speech, entrainment in the speech/noise conditions was characterized by a change in the phase relation between neural signal and stimulus, and the low-frequency neural phase was dominantly coupled to activity in the lower gamma band. These results show that neural entrainment in response to speech without slow fluctuations in spectral energy includes a process with specific characteristics that is presumably preserved across species.


Auditory Cortex/physiology , Electroencephalography Phase Synchronization/physiology , Speech Perception/physiology , Acoustic Stimulation , Animals , Electroencephalography , Female , Macaca mulatta , Signal Processing, Computer-Assisted
18.
Neuroimage ; 124(Pt A): 16-23, 2016 Jan 01.
Article En | MEDLINE | ID: mdl-26341026

Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation, and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed "high-level" speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between EEG signal and sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment. Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech.


Brain Waves , Cerebral Cortex/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Male , Noise , Speech Acoustics , Young Adult
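
The phase-locking between EEG and speech sound reported here is commonly quantified as a phase-locking value between band-limited brain and stimulus signals; the sketch below also returns the mean phase lag, the quantity that distinguished original speech from the speech/noise stimuli in this study. The synthetic envelope and EEG, the 2-8 Hz band and the single-channel phase-locking value are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(6)

# Toy speech envelope with a ~4 Hz rhythm whose phase drifts slowly,
# and EEG that partly follows it at a fixed lag.
drift = 0.3 * np.cumsum(rng.standard_normal(len(t))) / np.sqrt(fs)
envelope = 1 + np.cos(2 * np.pi * 4 * t + drift)
eeg = 0.5 * np.cos(2 * np.pi * 4 * t + drift - 1.0) + rng.standard_normal(len(t))

# Band-limit both signals and extract instantaneous phases.
b, a = butter(3, [2, 8], btype="band", fs=fs)
phi_env = np.angle(hilbert(filtfilt(b, a, envelope)))
phi_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))

# Phase-locking value (0-1) and mean phase lag between brain and stimulus.
diff = np.exp(1j * (phi_env - phi_eeg))
print(f"PLV = {np.abs(diff.mean()):.2f}, "
      f"mean phase lag = {np.angle(diff.mean()):.2f} rad")
```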
19.
Front Hum Neurosci ; 9: 651, 2015.
Article En | MEDLINE | ID: mdl-26696863

Constantly bombarded with input, the brain needs to select relevant information while ignoring the irrelevant rest. Neural oscillations may represent a powerful tool for this selection: their high-excitability phase can be entrained to important input while their low-excitability phase attenuates irrelevant information. Indeed, the alignment between brain oscillations and speech improves intelligibility and helps dissociate speakers during a "cocktail party". Although phase entrainment to speech sound is well investigated, the contributions of low- and high-level processes have only recently begun to be understood. Here, we review those findings and concentrate on three main results: (1) Phase entrainment to speech sound is modulated by attention or predictions, likely supported by top-down signals and indicating that higher-level processes are involved in the brain's adjustment to speech. (2) As phase entrainment to speech can be observed without systematic fluctuations in sound amplitude or spectral content, it does not merely reflect a passive steady-state "ringing" of the cochlea, but entails a higher-level process. (3) The role of intelligibility for phase entrainment is debated. Recent results suggest that intelligibility modulates the behavioral consequences of entrainment, rather than directly affecting the strength of entrainment in auditory regions. We conclude that phase entrainment to speech reflects a sophisticated mechanism: several high-level processes interact to optimally align neural oscillations with predicted events of high relevance, even when they are hidden in a continuous stream of background noise.

20.
Neuroreport ; 26(13): 773-8, 2015 Sep 09.
Article En | MEDLINE | ID: mdl-26164609

Evidence for rhythmic or 'discrete' sensory processing is abundant for the visual system, but sparse and inconsistent for the auditory system. Fundamental differences in the nature of visual and auditory inputs might account for this discrepancy: whereas the visual system mainly relies on spatial information, time might be the most important factor for the auditory system. In contrast to vision, temporal subsampling (i.e., taking 'snapshots') of the auditory input stream might thus prove detrimental for the brain, as essential information would be lost. Rather than embracing the view of continuous auditory processing, we recently proposed that discrete 'perceptual cycles' might exist in the auditory system, but at a hierarchically higher level of processing, involving temporally more stable features. This proposal leads to the prediction that the auditory system should be more robust to temporal subsampling when it is applied to a 'high-level' decomposition of auditory signals. To test this prediction, we constructed speech stimuli that were subsampled at different frequencies, either at the input level (following a wavelet transform) or at the level of auditory features (on the basis of LPC vocoding), and presented them to human listeners. Auditory recognition was significantly more robust to subsampling in the latter case, that is, at a relatively high level of auditory processing. Although our results do not directly demonstrate perceptual cycles in the auditory domain, they (a) show that their existence is possible without disrupting temporal information to a critical extent and (b) confirm our proposal that, if they do exist, they should operate at a higher level of auditory processing.


Recognition, Psychology , Speech Perception , Acoustic Stimulation , Adult , Female , Humans , Male , Time Factors , Young Adult
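
The subsampling manipulation can be illustrated as a sample-and-hold operation applied either to the raw waveform or to a slower feature representation. In the sketch below, the amplitude envelope stands in for the LPC-derived features and a chirp stands in for speech; both are assumptions, and the wavelet- and LPC-based reconstructions actually used in the study are not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, chirp

def sample_and_hold(x, fs, snapshot_hz):
    """Take 'snapshots' of x at snapshot_hz and hold each until the next one."""
    step = max(1, int(round(fs / snapshot_hz)))
    return np.repeat(x[::step], step)[: len(x)]

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
toy_speech = chirp(t, f0=200, f1=3000, t1=1.0)        # stand-in for speech

# (a) Subsample the raw waveform: destroys fine temporal structure.
waveform_sub = sample_and_hold(toy_speech, fs, snapshot_hz=50)

# (b) Subsample a slow feature (amplitude envelope) and re-impose it on the
# original fine structure: the 'high-level' version of the manipulation.
b, a = butter(3, 20, btype="low", fs=fs)
envelope = filtfilt(b, a, np.abs(hilbert(toy_speech)))
envelope_sub = sample_and_hold(envelope, fs, snapshot_hz=50)
fine_structure = toy_speech / np.maximum(envelope, 1e-3)
feature_sub = fine_structure * envelope_sub
```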
...