Results 1 - 20 of 27
1.
Cereb Cortex ; 33(10): 6273-6281, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36627246

ABSTRACT

When we attentively listen to an individual's speech, our brain activity dynamically aligns to the incoming acoustic input at multiple timescales. Although this systematic alignment between ongoing brain activity and speech in auditory brain areas is well established, the acoustic events that drive this phase-locking are not fully understood. Here, we use magnetoencephalographic recordings of 24 human participants (12 females) while they were listening to a 1 h story. We show that whereas speech-brain coupling is associated with sustained acoustic fluctuations in the speech envelope in the theta-frequency range (4-7 Hz), speech tracking in the low-frequency delta range (below 1 Hz) was strongest around onsets of speech, such as the beginning of a sentence. Crucially, delta tracking in bilateral auditory areas was not sustained after onsets, suggesting that delta tracking during continuous speech perception is driven by speech onsets. We conclude that onset and sustained components of speech contribute differentially to speech tracking in the delta- and theta-frequency bands, orchestrating the sampling of continuous speech. Thus, our results suggest a temporal dissociation of acoustically driven oscillatory activity in auditory areas during speech tracking, with implications for the orchestration of speech tracking at multiple timescales.
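Speech-brain phase coupling of the kind analysed above is commonly quantified as the mean resultant length of band-limited phase differences between the speech envelope and the neural signal. A minimal sketch under that assumption (toy signals, not the study's pipeline; function names are illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_phase(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filter, then instantaneous Hilbert phase."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def phase_coupling(stim, brain, fs, lo, hi):
    """Mean resultant length of the stimulus-brain phase difference in a
    band: 0 = no phase locking, 1 = perfect locking."""
    dphi = band_phase(stim, fs, lo, hi) - band_phase(brain, fs, lo, hi)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy demo: a 5 Hz "envelope" rhythm and a noisy, phase-lagged "brain"
# signal locked to it.
fs = 200.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
env = np.sin(2 * np.pi * 5 * t)
brain = np.sin(2 * np.pi * 5 * t + 0.8) + 0.5 * rng.standard_normal(t.size)
theta = phase_coupling(env, brain, fs, 4, 7)    # band containing the rhythm
delta = phase_coupling(env, brain, fs, 0.5, 2)  # band without it
```

Coupling is high in the band containing the shared rhythm and low elsewhere, mirroring the band-specific tracking the abstract describes.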


Subject(s)
Auditory Cortex , Speech Perception , Female , Humans , Speech , Acoustic Stimulation/methods , Magnetoencephalography/methods , Auditory Perception
2.
Neuroscientist ; 29(1): 62-77, 2023 02.
Article in English | MEDLINE | ID: mdl-34873945

ABSTRACT

Bioelectromagnetism has contributed some of the most commonly used techniques to human neuroscience such as magnetoencephalography (MEG), electroencephalography (EEG), transcranial magnetic stimulation (TMS), and transcranial electric stimulation (TES). The considerable differences in their technical design and practical use give rise to the impression that these are quite different techniques altogether. Here, we review, discuss and illustrate the fundamental principle of Helmholtz reciprocity that provides a common ground for all four techniques. We show that, more than 150 years after its discovery by Helmholtz in 1853, reciprocity is important to appreciate the strengths and limitations of these four classical tools in neuroscience. We build this case by explaining the concept of Helmholtz reciprocity, presenting a methodological account of this principle for all four methods and, finally, by illustrating its application in practical clinical studies.


Subject(s)
Brain , Transcranial Magnetic Stimulation , Humans , Brain/physiology , Transcranial Magnetic Stimulation/methods , Electroencephalography/methods , Magnetoencephalography , Brain Mapping/methods
3.
Neuroimage ; 258: 119395, 2022 09.
Article in English | MEDLINE | ID: mdl-35718023

ABSTRACT

The systematic alignment of low-frequency brain oscillations with the acoustic speech envelope is well established and has been proposed to be crucial for actively perceiving speech. Previous studies investigating speech-brain coupling in source space are restricted to univariate pairwise approaches between brain and speech signals, and may therefore miss speech tracking information in frequency-specific communication channels. To address this, we propose a novel multivariate framework for estimating speech-brain coupling in which neural variability from source-derived activity is taken into account along with the envelope's rate of amplitude change (its derivative). We applied it to magnetoencephalographic (MEG) recordings while human participants (male and female) listened to one hour of continuous naturalistic speech, showing that the multivariate approach outperforms the corresponding univariate method in low and high frequencies across frontal, motor, and temporal areas. Systematic comparisons revealed that the gain in low frequencies (0.6-0.8 Hz) was related to the envelope's rate of change, whereas in higher frequencies (0.8-10 Hz) it was mostly related to the increased neural variability from source-derived cortical areas. Furthermore, using a non-negative matrix factorization approach, we found distinct speech-brain components across time and cortical space related to speech processing. We confirm that speech envelope tracking operates mainly at two timescales (δ and θ frequency bands), and we extend those findings by showing shorter coupling delays in auditory-related components and longer delays in higher-association frontal and motor components, indicating temporal differences in speech tracking and carrying implications for hierarchical stimulus-driven speech processing.


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Female , Humans , Magnetoencephalography , Male , Multivariate Analysis , Speech
4.
Biol Psychiatry ; 90(6): 419-429, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34116790

ABSTRACT

BACKGROUND: This study aimed to examine whether 40-Hz auditory steady-state responses (ASSRs) are impaired in participants at clinical high-risk for psychosis (CHR-P) and predict clinical outcomes. METHODS: Magnetoencephalography data were collected during a 40-Hz ASSR paradigm for a group of 116 CHR-P participants, 33 patients with first-episode psychosis (15 antipsychotic-naïve), a psychosis risk-negative group (n = 38), and 49 healthy control subjects. Analysis of group differences of 40-Hz intertrial phase coherence and 40-Hz amplitude focused on right Heschl's gyrus, superior temporal gyrus, hippocampus, and thalamus after establishing significant activations during 40-Hz ASSR stimulation. Linear regression and linear discriminant analyses were used to predict clinical outcomes in CHR-P participants, including transition to psychosis and persistence of attenuated psychotic symptoms (APSs). RESULTS: CHR-P participants and patients with first-episode psychosis were impaired in 40-Hz amplitude in the right thalamus and hippocampus. In addition, patients with first-episode psychosis were impaired in 40-Hz amplitude in the right Heschl's gyrus, and CHR-P participants in 40-Hz intertrial phase coherence in the right Heschl's gyrus. The 40-Hz ASSR deficits were pronounced in CHR-P participants who later transitioned to psychosis (n = 13) or showed persistent APSs (n = 34). Importantly, both APS persistence and transition to psychosis were predicted by 40-Hz ASSR impairments, with ASSR activity in the right hippocampus, superior temporal gyrus, and middle temporal gyrus correctly classifying 69.2% of individuals with nonpersistent APSs and 73.5% of individuals with persistent APSs (area under the curve = 0.842), and right thalamus 40-Hz activity correctly classifying 76.9% of transitioned and 53.6% of nontransitioned CHR-P participants (area under the curve = 0.695).
CONCLUSIONS: Our data indicate that deficits in gamma-band entrainment in the primary auditory cortex and subcortical areas constitute a potential biomarker for predicting clinical outcomes in CHR-P participants.
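The intertrial phase coherence measure used above has a compact definition: the length of the mean unit phase vector across trials. A minimal sketch (the toy phase data are illustrative, not the study's):

```python
import numpy as np

def itpc(phase):
    """Inter-trial phase coherence: length of the mean unit phase vector
    across trials (0 = random phase, 1 = perfect phase locking).
    `phase` has shape (n_trials, n_times), in radians."""
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

rng = np.random.default_rng(1)
n_trials, n_times = 100, 60
# Phase-locked trials: roughly constant phase plus small jitter.
locked = 0.3 + 0.2 * rng.standard_normal((n_trials, n_times))
# Non-locked trials: phase uniform across trials.
unlocked = rng.uniform(-np.pi, np.pi, size=(n_trials, n_times))
```

`itpc(locked)` stays near 1 while `itpc(unlocked)` hovers near the chance floor of roughly 1/sqrt(n_trials), which is the contrast the group comparisons above rely on.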


Subject(s)
Antipsychotic Agents , Auditory Cortex , Psychotic Disorders , Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Humans , Magnetoencephalography
5.
Elife ; 9, 2020 08 24.
Article in English | MEDLINE | ID: mdl-32831168

ABSTRACT

Visual speech carried by lip movements is an integral part of communication. Yet, it remains unclear to what extent visual and acoustic speech comprehension are mediated by the same brain regions. Using multivariate classification of full-brain MEG data, we first probed where the brain represents acoustically and visually conveyed word identities. We then tested where these sensory-driven representations are predictive of participants' trial-wise comprehension. The comprehension-relevant representations of auditory and visual speech converged only in anterior angular and inferior frontal regions and were spatially dissociated from those representations that best reflected the sensory-driven word identity. These results provide a neural explanation for the behavioural dissociation of acoustic and visual speech comprehension and suggest that cerebral representations encoding word identities may be more modality-specific than often assumed.


Subject(s)
Brain/anatomy & histology , Brain/physiology , Phonetics , Speech , Acoustic Stimulation/methods , Adolescent , Adult , Brain Mapping , Female , Humans , Photic Stimulation , Reading , Speech Perception , Young Adult
6.
Hum Brain Mapp ; 41(15): 4419-4430, 2020 10 15.
Article in English | MEDLINE | ID: mdl-32662585

ABSTRACT

Sensory attenuation refers to the decreased intensity of a sensory percept when a sensation is self-generated compared with when it is externally triggered. However, the underlying brain regions and network interactions that give rise to this phenomenon remain to be determined. To address this issue, we recorded magnetoencephalographic (MEG) data from 35 healthy controls during an auditory task in which pure tones were either elicited through a button press or passively presented. We analyzed the auditory M100 at sensor- and source-level and identified movement-related magnetic fields (MRMFs). Regression analyses were used to further identify brain regions that contributed significantly to sensory attenuation, followed by a dynamic causal modeling (DCM) approach to explore network interactions between generators. Attenuation of the M100 was pronounced in right Heschl's gyrus (HES), superior temporal cortex (ST), thalamus, rolandic operculum (ROL), precuneus and inferior parietal cortex (IPL). Regression analyses showed that right postcentral gyrus (PoCG) and left precentral gyrus (PreCG) predicted M100 sensory attenuation. In addition, DCM results indicated that auditory sensory attenuation involved bi-directional information flow between thalamus, IPL, and auditory cortex. In summary, our data show that sensory attenuation is mediated by bottom-up and top-down information flow in a thalamocortical network, providing support for the role of predictive processing in the sensory-motor system.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Magnetoencephalography , Models, Statistical , Motor Activity/physiology , Nerve Net/physiology , Thalamus/physiology , Adult , Humans , Young Adult
7.
Neuroimage ; 219: 116936, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32474080

ABSTRACT

Natural speech builds on contextual relations that can prompt predictions of upcoming utterances. To study the neural underpinnings of such predictive processing we asked 10 healthy adults to listen to a 1-h-long audiobook while their magnetoencephalographic (MEG) brain activity was recorded. We correlated the MEG signals with acoustic speech envelope, as well as with estimates of Bayesian word probability with and without the contextual word sequence (N-gram and Unigram, respectively), with a focus on time-lags. The MEG signals of auditory and sensorimotor cortices were strongly coupled to the speech envelope at the rates of syllables (4-8 Hz) and of prosody and intonation (0.5-2 Hz). The probability structure of word sequences, independently of the acoustical features, affected the ≤ 2-Hz signals extensively in auditory and rolandic regions, in precuneus, occipital cortices, and lateral and medial frontal regions. Fine-grained temporal progression patterns occurred across brain regions 100-1000 ms after word onsets. Although the acoustic effects were observed in both hemispheres, the contextual influences were statistically significantly lateralized to the left hemisphere. These results serve as a brain signature of the predictability of word sequences in listened continuous speech, confirming and extending previous results to demonstrate that deeply-learned knowledge and recent contextual information are employed dynamically and in a left-hemisphere-dominant manner in predicting the forthcoming words in natural speech.
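The N-gram versus Unigram contrast above rests on contextual word probabilities. The study used Bayesian models trained on large corpora; as a toy stand-in, an add-alpha smoothed bigram model estimated from the text itself shows the idea (all names and the smoothing choice are illustrative):

```python
from collections import Counter
import math

def ngram_surprisal(words, n=2, alpha=1.0):
    """Per-word surprisal, -log2 P(word | context), under an add-alpha
    smoothed n-gram model estimated from the word list itself."""
    vocab = sorted(set(words))
    ctx = Counter(tuple(words[i:i + n - 1]) for i in range(len(words) - n + 1))
    ngrams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    out = []
    for i in range(n - 1, len(words)):
        c = tuple(words[i - n + 1:i])
        p = (ngrams[c + (words[i],)] + alpha) / (ctx[c] + alpha * len(vocab))
        out.append(-math.log2(p))
    return out

# A frequent continuation ("b" after "a") is less surprising than a
# rare one ("c" after "a").
s = ngram_surprisal(list("abababac"))
```

In the study's terms, such per-word (im)probability values, with and without context, are what the MEG signals were correlated against.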


Subject(s)
Brain/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Attention/physiology , Auditory Cortex/physiology , Brain Mapping , Female , Humans , Magnetoencephalography , Male , Middle Aged , Speech/physiology , Young Adult
8.
Curr Biol ; 29(12): 1924-1937.e9, 2019 06 17.
Article in English | MEDLINE | ID: mdl-31130454

ABSTRACT

When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results, the improved performance of an encoding model in which annotated linguistic and acoustic features were combined and the decoding of phoneme subgroups from phoneme-locked responses, can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
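The acoustic-edge features credited above are not specified here in detail; one common operationalisation is the half-wave rectified derivative of the amplitude envelope, which peaks at rapid sound onsets. A sketch under that assumption (cutoff and signal are illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_and_edges(x, fs, cutoff=10.0):
    """Broadband amplitude envelope plus an 'acoustic edge' feature:
    the half-wave rectified envelope derivative, large at amplitude
    rises such as syllable onsets and zero during decays."""
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos, np.abs(hilbert(x)))
    edges = np.clip(np.gradient(env, 1.0 / fs), 0.0, None)  # keep rises only
    return env, edges

# Demo: an amplitude-modulated tone whose envelope rises 3 times per second.
fs = 8000.0
t = np.arange(0, 2, 1 / fs)
x = (1 + np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 500 * t)
env, edges = envelope_and_edges(x, fs)
```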


Subject(s)
Auditory Cortex/physiology , Language , Magnetoencephalography , Speech Perception/physiology , Acoustic Stimulation , Acoustics , Adult , Female , Humans , Male , Speech/physiology , Young Adult
9.
PLoS Biol ; 16(8): e2006558, 2018 08.
Article in English | MEDLINE | ID: mdl-30080855

ABSTRACT

Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3-7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior, i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.


Subject(s)
Motor Cortex/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Perception , Brain/physiology , Brain Mapping/methods , Comprehension/physiology , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Photic Stimulation , Speech , Visual Perception
10.
Brain ; 141(8): 2511-2526, 2018 08 01.
Article in English | MEDLINE | ID: mdl-30020423

ABSTRACT

Hypofunction of the N-methyl-d-aspartate receptor (NMDAR) has been implicated as a possible mechanism underlying cognitive deficits and aberrant neuronal dynamics in schizophrenia. To test this hypothesis, we first administered a sub-anaesthetic dose of S-ketamine (0.006 mg/kg/min) or saline in a single-blind crossover design in 14 participants while magnetoencephalographic data were recorded during a visual task. In addition, magnetoencephalographic data were obtained in a sample of unmedicated first-episode psychosis patients (n = 10) and in patients with chronic schizophrenia (n = 16) to allow for comparisons of neuronal dynamics in clinical populations versus NMDAR hypofunctioning. Magnetoencephalographic data were analysed at source-level in the 1-90 Hz frequency range in occipital and thalamic regions of interest. In addition, directed functional connectivity analysis was performed using Granger causality and feedback and feedforward activity was investigated using a directed asymmetry index. Psychopathology was assessed with the Positive and Negative Syndrome Scale. Acute ketamine administration in healthy volunteers led to similar effects on cognition and psychopathology as observed in first-episode and chronic schizophrenia patients. However, the effects of ketamine on high-frequency oscillations and their connectivity profile were not consistent with these observations. Ketamine increased amplitude and frequency of gamma-power (63-80 Hz) in occipital regions and upregulated low frequency (5-28 Hz) activity. Moreover, ketamine disrupted feedforward and feedback signalling at high and low frequencies leading to hypo- and hyper-connectivity in thalamo-cortical networks. In contrast, first-episode and chronic schizophrenia patients showed a different pattern of magnetoencephalographic activity, characterized by decreased task-induced high-gamma band oscillations and predominantly increased feedforward/feedback-mediated Granger causality connectivity. 
Accordingly, the current data have implications for theories of cognitive dysfunctions and circuit impairments in the disorder, suggesting that acute NMDAR hypofunction does not recreate alterations in neural oscillations during visual processing observed in schizophrenia.


Subject(s)
Ketamine/adverse effects , Ketamine/pharmacology , Schizophrenia/physiopathology , Adult , Brain/drug effects , Cerebral Cortex/drug effects , Cross-Over Studies , Electroencephalography , Excitatory Amino Acid Antagonists/pharmacology , Female , Gamma Rhythm , Humans , Magnetoencephalography/methods , Male , Receptors, N-Methyl-D-Aspartate/drug effects , Schizophrenia/metabolism , Single-Blind Method , Thalamus/drug effects
11.
PLoS Biol ; 16(3): e2004473, 2018 03.
Article in English | MEDLINE | ID: mdl-29529019

ABSTRACT

During online speech processing, our brain tracks the acoustic fluctuations in speech at different timescales. Previous research has focused on generic timescales (for example, delta or theta bands) that are assumed to map onto linguistic features such as prosody or syllables. However, given the high intersubject variability in speaking patterns, such a generic association between the timescales of brain activity and speech properties can be ambiguous. Here, we analyse speech tracking in source-localised magnetoencephalographic data by directly focusing on timescales extracted from statistical regularities in our speech material. This revealed widespread significant tracking at the timescales of phrases (0.6-1.3 Hz), words (1.8-3 Hz), syllables (2.8-4.8 Hz), and phonemes (8-12.4 Hz). Importantly, when examining its perceptual relevance, we found stronger tracking for correctly comprehended trials in the left premotor (PM) cortex at the phrasal scale as well as in left middle temporal cortex at the word scale. Control analyses using generic bands confirmed that these effects were specific to the speech regularities in our stimuli. Furthermore, we found that the phase at the phrasal timescale coupled to power at beta frequency (13-30 Hz) in motor areas. This cross-frequency coupling presumably reflects top-down temporal prediction in ongoing speech perception. Together, our results reveal specific functional and perceptually relevant roles of distinct tracking and cross-frequency processes along the auditory-motor pathway.
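The phrasal-phase-to-beta-power coupling reported above is typically quantified with a phase-amplitude coupling index; a mean-vector-length sketch in the style of Canolty's measure (toy signal, illustrative bands and parameters, not the paper's estimator):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band, amp_band):
    """Mean-vector-length phase-amplitude coupling: how strongly the
    amplitude in `amp_band` depends on the phase of the slower rhythm
    in `phase_band` (0 = none, up to 1 = maximal)."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Toy signal: a 1 Hz "phrasal" rhythm whose phase modulates 20 Hz (beta)
# amplitude, loosely mimicking the reported phrase-to-beta coupling.
fs = 250.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
slow = np.sin(2 * np.pi * 1.0 * t)
x = slow + 0.5 * (1 + slow) * np.sin(2 * np.pi * 20 * t) \
    + 0.2 * rng.standard_normal(t.size)
coupling = pac_mvl(x, fs, (0.5, 2.0), (13.0, 30.0))
```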


Subject(s)
Auditory Cortex/physiology , Motor Cortex/physiology , Speech Perception , Speech , Acoustic Stimulation , Adolescent , Adult , Brain Mapping , Female , Humans , Magnetoencephalography , Male
12.
Neuroimage ; 147: 32-42, 2017 02 15.
Article in English | MEDLINE | ID: mdl-27903440

ABSTRACT

The timing of slow auditory cortical activity aligns to the rhythmic fluctuations in speech. This entrainment is considered to be a marker of the prosodic and syllabic encoding of speech, and has been shown to correlate with intelligibility. Yet, whether and how auditory cortical entrainment is influenced by the activity in other speech-relevant areas remains unknown. Using source-localized MEG data, we quantified the dependency of auditory entrainment on the state of oscillatory activity in fronto-parietal regions. We found that delta band entrainment interacted with the oscillatory activity in three distinct networks. First, entrainment in the left anterior superior temporal gyrus (STG) was modulated by beta power in orbitofrontal areas, possibly reflecting predictive top-down modulations of auditory encoding. Second, entrainment in the left Heschl's Gyrus and anterior STG was dependent on alpha power in central areas, in line with the importance of motor structures for phonological analysis. And third, entrainment in the right posterior STG modulated theta power in parietal areas, consistent with the engagement of semantic memory. These results illustrate the topographical network interactions of auditory delta entrainment and reveal distinct cross-frequency mechanisms by which entrainment can interact with different cognitive processes underlying speech perception.


Subject(s)
Auditory Cortex/physiology , Delta Rhythm/physiology , Frontal Lobe/physiology , Magnetoencephalography , Parietal Lobe/physiology , Acoustic Stimulation , Adult , Alpha Rhythm/physiology , Beta Rhythm/physiology , Female , Humans , Male , Nerve Net/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Theta Rhythm/physiology , Young Adult
13.
PLoS Biol ; 14(6): e1002498, 2016 06.
Article in English | MEDLINE | ID: mdl-27355236

ABSTRACT

The human brain can be parcellated into diverse anatomical areas. We investigated whether rhythmic brain activity in these areas is characteristic and can be used for automatic classification. To this end, resting-state MEG data of 22 healthy adults were analysed. Power spectra of 1-s long data segments for atlas-defined brain areas were clustered into spectral profiles ("fingerprints"), using k-means and Gaussian mixture (GM) modelling. We demonstrate that individual areas can be identified from these spectral profiles with high accuracy. Our results suggest that each brain area engages in different spectral modes that are characteristic for individual areas. Clustering of brain areas according to similarity of spectral profiles reveals well-known brain networks. Furthermore, we demonstrate task-specific modulations of auditory spectral profiles during auditory processing. These findings have important implications for the classification of regional spectral activity and allow for novel approaches in neuroimaging and neurostimulation in health and disease.
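The fingerprinting pipeline described above (power spectra of short segments, clustered with k-means) can be sketched roughly as follows; the toy "areas" and the minimal k-means are illustrative, not the study's atlas or implementation:

```python
import numpy as np
from scipy.signal import welch

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal k-means: assign each spectral profile to the nearest
    centre, then move centres to cluster means, and repeat."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

# Toy "areas": segments dominated by an alpha-like (10 Hz) or a
# beta-like (20 Hz) rhythm plus noise.
fs = 200.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
segs = [np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
        for f in [10, 10, 10, 20, 20, 20]]
psds = np.array([welch(s, fs=fs, nperseg=256)[1] for s in segs])
psds /= psds.sum(axis=1, keepdims=True)   # normalised spectral profiles
labels = kmeans(psds, 2)
```

Clustering the normalised spectra separates the alpha-like from the beta-like segments, which is the sense in which spectral profiles act as area "fingerprints".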


Subject(s)
Brain Mapping/methods , Brain/physiology , Electroencephalography/methods , Magnetoencephalography/methods , Nerve Net/physiology , Acoustic Stimulation , Adult , Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Brain/anatomy & histology , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Models, Anatomic , Models, Neurological , Nerve Net/anatomy & histology , Young Adult
14.
Elife ; 5, 2016 05 05.
Article in English | MEDLINE | ID: mdl-27146891

ABSTRACT

During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and speaker's lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending visual speech enhances the coherence between activity in visual cortex and the speaker's lips. Further, we identified a significant partial coherence between left motor cortex and lip movements and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing.
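Partial coherence, as used above to remove the auditory signal's contribution from lip-brain coupling, has a standard spectral form: the coherence of x and y after regressing out the complex coherency with a third signal z. A sketch with Welch estimators (toy signals; variable names are illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, csd, welch

def coherency(x, y, fs, nperseg=256):
    """Complex coherency spectrum between x and y (Welch estimates)."""
    f, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, pxx = welch(x, fs=fs, nperseg=nperseg)
    _, pyy = welch(y, fs=fs, nperseg=nperseg)
    return f, pxy / np.sqrt(pxx * pyy)

def partial_coherence(x, y, z, fs, nperseg=256):
    """Coherence between x and y after removing the linear contribution
    of z, via the standard partial-coherency formula."""
    f, cxy = coherency(x, y, fs, nperseg)
    _, cxz = coherency(x, z, fs, nperseg)
    _, czy = coherency(z, y, fs, nperseg)
    num = np.abs(cxy - cxz * czy) ** 2
    den = (1 - np.abs(cxz) ** 2) * (1 - np.abs(czy) ** 2)
    return f, num / den

# Demo: x and y are both driven by a common signal z; removing z should
# abolish their apparent coupling.
fs = 100.0
rng = np.random.default_rng(4)
sos = butter(4, 8.0, btype="low", fs=fs, output="sos")
z = sosfiltfilt(sos, rng.standard_normal(6000))
x = z + 0.5 * rng.standard_normal(6000)
y = z + 0.5 * rng.standard_normal(6000)
f, cxy = coherency(x, y, fs)
_, pcoh = partial_coherence(x, y, z, fs)
band = (f > 1) & (f < 7)
```

Ordinary coherence |cxy|² is high in the driven band, while the partial coherence collapses toward its bias floor, which is the logic behind removing the auditory envelope from lip-brain coherence.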


Subject(s)
Brain Waves , Lip/physiology , Motor Cortex/physiology , Movement , Psychomotor Performance , Speech Intelligibility , Acoustic Stimulation , Adolescent , Adult , Female , Healthy Volunteers , Humans , Male , Visual Cortex , Young Adult
15.
J Neurosci ; 35(44): 14691-701, 2015 Nov 04.
Article in English | MEDLINE | ID: mdl-26538641

ABSTRACT

The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power, and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflects functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT: The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales.
Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms.


Subject(s)
Acoustic Stimulation/methods , Alpha Rhythm/physiology , Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Speech Production Measurement/methods , Speech/physiology , Adolescent , Adult , Female , Frontal Lobe/physiology , Humans , Male , Young Adult
16.
PLoS One ; 10(8): e0136585, 2015.
Article in English | MEDLINE | ID: mdl-26302246

ABSTRACT

'Sensory attenuation', i.e., reduced neural responses to self-induced compared with externally generated stimuli, is a well-established phenomenon. However, very few studies have directly compared sensory attenuation with the attention effect, which leads to increased neural responses. In this study, we brought sensory attenuation and attention together in a behavioural auditory detection task in which both effects were quantitatively measured and compared. The classic auditory attention effect of facilitated detection performance was replicated. When attention and sensory attenuation were both present, attentional facilitation decreased but remained significant. The results are discussed in the light of current theories of sensory attenuation.


Subject(s)
Acoustic Stimulation , Attention/physiology , Auditory Perception/physiology , Psychomotor Performance/physiology , Adult , Brain Mapping , Cognition/physiology , Electroencephalography , Electrophysiology , Evoked Potentials, Auditory/physiology , Female , Hearing/physiology , Humans , Male , Neuroimaging/methods , Photic Stimulation , Sound
17.
Curr Biol ; 25(12): 1649-53, 2015 Jun 15.
Article in English | MEDLINE | ID: mdl-26028433

ABSTRACT

Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception.


Subject(s)
Auditory Cortex/physiology , Speech , Acoustic Stimulation , Brain Mapping , Humans , Magnetoencephalography , Speech Perception
18.
Cereb Cortex ; 23(6): 1388-95, 2013 Jun.
Article in English | MEDLINE | ID: mdl-22610392

ABSTRACT

Functional magnetic resonance imaging studies have repeatedly provided evidence for temporal voice areas (TVAs) with particular sensitivity to human voices along bilateral mid/anterior superior temporal sulci and superior temporal gyri (STS/STG). In contrast, electrophysiological studies of the spatio-temporal correlates of cerebral voice processing have yielded contradictory results, finding the earliest correlates either at ∼300-400 ms, or earlier at ∼200 ms ("fronto-temporal positivity to voice", FTPV). These contradictory results are likely the consequence of different stimulus sets and attentional demands. Here, we recorded magnetoencephalography activity while participants listened to diverse types of vocal and non-vocal sounds and performed different tasks varying in attentional demands. Our results confirm the existence of an early voice-preferential magnetic response (FTPVm, the magnetic counterpart of the FTPV) peaking at about 220 ms and distinguishing between vocal and non-vocal sounds as early as 150 ms after stimulus onset. The sources underlying the FTPVm were localized along bilateral mid-STS/STG, largely overlapping with the TVAs. The FTPVm was consistently observed across different stimulus subcategories, including speech and non-speech vocal sounds, and across different tasks. These results demonstrate the early, largely automatic recruitment of focal, voice-selective cerebral mechanisms with a time-course comparable to that of face processing.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Evoked Potentials, Auditory/physiology , Magnetoencephalography , Temporal Lobe/physiology , Voice , Acoustic Stimulation , Acoustics , Adult , Analysis of Variance , Discrimination, Psychological , Electroencephalography , Eye Movements , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen , Temporal Lobe/blood supply
19.
Cereb Cortex ; 23(6): 1378-87, 2013 Jun.
Article in English | MEDLINE | ID: mdl-22610394

ABSTRACT

A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech depends not only on acoustic characteristics but also on listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.


Subject(s)
Auditory Cortex/physiology , Comprehension/physiology , Contingent Negative Variation/physiology , Speech/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Brain Mapping , Electroencephalography , Evoked Potentials, Auditory , Female , Fourier Analysis , Humans , Linguistics , Magnetic Resonance Imaging , Magnetoencephalography , Male , Sound Spectrography , Speech Perception , Time Factors , Vocabulary , Young Adult
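The cerebro-acoustic phase locking described in the abstract above can be illustrated with a phase-locking value (PLV) between a band-passed signal pair in the 4-7 Hz range. This is a minimal sketch assuming numpy/scipy; the simulated signals merely stand in for a speech envelope and an MEG time series, and the paper's actual analysis pipeline is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(4.0, 7.0), order=4):
    """Phase-locking value between two signals in a frequency band.

    Band-pass both signals, extract instantaneous phase with the Hilbert
    transform, and average the unit phasors of the phase difference.
    PLV ranges from 0 (no locking) to 1 (perfect locking).
    """
    b, a = butter(order, (band[0], band[1]), btype="band", fs=fs)
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

# Simulated data (illustrative): a shared 5 Hz rhythm with a fixed phase
# lag stands in for a speech envelope and a noisy neural recording.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 10, 1 / fs)
envelope = np.sin(2 * np.pi * 5 * t)
meg = np.sin(2 * np.pi * 5 * t + 0.8) + 0.5 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)  # unrelated control signal

print(plv(envelope, meg, fs) > plv(envelope, noise, fs))
```

Because the PLV is insensitive to a constant phase lag, it captures consistent alignment between envelope and neural phase even when the brain response is delayed relative to the stimulus.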
20.
Curr Biol ; 22(16): R658-63, 2012 Aug 21.
Article in English | MEDLINE | ID: mdl-22917517

ABSTRACT

Oscillations in brain activity have long been known, but many fundamental aspects of such brain rhythms, particularly their functional importance, have been unclear. As we review here, new insights into these issues are emerging from the application of intervention approaches. In these approaches, the timing of brain oscillations is manipulated by non-invasive brain stimulation, either through sensory input or transcranially, and the behavioural consequence then monitored. Notably, such manipulations have led to rapid, periodic fluctuations in behavioural performance, which co-cycle with underlying brain oscillations. Such findings establish a causal relationship between brain oscillations and behaviour, and are allowing novel tests of longstanding models about the functions of brain oscillations.


Subject(s)
Biological Clocks , Brain/physiology , Circadian Rhythm , Animals , Electric Stimulation , Electric Stimulation Therapy , Humans