Results 1 - 20 of 78
1.
PLoS Biol ; 21(3): e3002046, 2023 03.
Article in English | MEDLINE | ID: mdl-36947552

ABSTRACT

Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing, by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing via minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.
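The role of the nonlinguistic context level in entry 1, which supplies semantic predictions that disambiguate word meanings, can be caricatured as a one-line Bayesian update: the posterior over meanings combines an acoustic likelihood with a context-dependent prior. This is an illustrative toy, not the paper's model; all names and numbers are invented:

```python
# Toy disambiguation of an ambiguous word ("bank") via a context prior.
# The soundwave fits both meanings equally well (flat likelihood);
# the context level tips the posterior one way or the other.
likelihood = {"bank_river": 0.5, "bank_money": 0.5}
priors = {
    "finance_context": {"bank_river": 0.1, "bank_money": 0.9},
    "nature_context":  {"bank_river": 0.8, "bank_money": 0.2},
}

def disambiguate(context):
    # posterior ∝ likelihood × context prior, then normalize
    post = {m: likelihood[m] * priors[context][m] for m in likelihood}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

p = disambiguate("finance_context")
```

With a flat likelihood the posterior is driven entirely by the context prior, which is the disambiguation role the abstract assigns to the nonlinguistic context level.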


Subject(s)
Comprehension , Speech Perception , Humans , Comprehension/physiology , Speech , Speech Perception/physiology , Brain/physiology , Language
2.
J Neurosci ; 43(40): 6779-6795, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37607822

ABSTRACT

Communication difficulties are one of the core criteria in diagnosing autism spectrum disorder (ASD), and are often characterized by speech reception difficulties, whose biological underpinnings are not yet identified. This deficit could denote atypical neuronal ensemble activity, as reflected by neural oscillations. Atypical cross-frequency oscillation coupling, in particular, could disrupt the joint tracking and prediction of dynamic acoustic stimuli, a dual process that is essential for speech comprehension. Whether such oscillatory anomalies already exist in very young children with ASD, and with what specificity they relate to individual language reception capacity, is unknown. We collected neural activity data using electroencephalography (EEG) in 64 very young children with and without ASD (mean age 3; 17 females, 47 males) while they were exposed to naturalistic-continuous speech. EEG power of frequency bands typically associated with phrase-level chunking (δ, 1-3 Hz), phonemic encoding (low-γ, 25-35 Hz), and top-down control (ß, 12-20 Hz) was markedly reduced in ASD relative to typically developing (TD) children. Speech neural tracking by δ and θ (4-8 Hz) oscillations was also weaker in ASD compared with TD children. After controlling for gaze-pattern differences, we found that the classical θ/γ coupling was replaced by an atypical ß/γ coupling in children with ASD. This anomaly was the single most specific predictor of individual speech reception difficulties in ASD children. These findings suggest that early interventions (e.g., neurostimulation) targeting the disruption of ß/γ coupling and the upregulation of θ/γ coupling could improve speech processing coordination in young children with ASD and help them engage in oral interactions.
SIGNIFICANCE STATEMENT Very young children already present marked alterations of neural oscillatory activity in response to natural speech at the time of autism spectrum disorder (ASD) diagnosis.
Hierarchical processing of phonemic-range and syllabic-range information (θ/γ coupling) is disrupted in ASD children. Abnormal bottom-up (low-γ) and top-down (low-ß) coordination specifically predicts speech reception deficits in very young ASD children, but not other cognitive deficits.
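The cross-frequency coupling measures central to entry 2 (θ/γ, ß/γ) are commonly quantified with a phase-amplitude modulation index: the length of the amplitude-weighted mean phase vector. A generic sketch on synthetic coupled data, using a numpy-only FFT bandpass and Hilbert transform; this illustrates the measure, not the authors' pipeline:

```python
import numpy as np

fs, dur = 500, 20
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

# synthetic coupled signal: 40 Hz amplitude rides on the 6 Hz (theta) phase
theta = np.sin(2 * np.pi * 6 * t)
gamma = 0.5 * (1 + theta) * np.sin(2 * np.pi * 40 * t)
x = theta + gamma + 0.1 * rng.standard_normal(t.size)

def band_analytic(sig, lo, hi, fs):
    """FFT bandpass followed by the analytic signal (numpy-only Hilbert)."""
    n = sig.size
    spec = np.fft.fft(sig)
    f = np.fft.fftfreq(n, 1 / fs)
    spec[(np.abs(f) < lo) | (np.abs(f) > hi)] = 0.0
    h = np.zeros(n)                    # analytic-signal weights
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(spec * h)

phase = np.angle(band_analytic(x, 4, 8, fs))    # theta phase
amp = np.abs(band_analytic(x, 30, 50, fs))      # gamma amplitude

# modulation index: amplitude-weighted mean phase vector length (in [0, 1])
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
# surrogate: permuting the amplitude series destroys the coupling
mi_surr = np.abs(np.mean(rng.permutation(amp) * np.exp(1j * phase))) / np.mean(amp)
```

Comparing the observed index against permutation surrogates, as sketched here, is the usual way to establish that a θ/γ or ß/γ coupling value exceeds chance.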


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Male , Female , Humans , Child , Child, Preschool , Speech/physiology , Autism Spectrum Disorder/diagnosis , Electroencephalography , Acoustic Stimulation
3.
PLoS Comput Biol ; 19(11): e1011595, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37934766

ABSTRACT

Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-ß) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.


Subject(s)
Speech Perception , Speech , Beta Rhythm , Language , Recognition, Psychology
4.
PLoS Biol ; 18(9): e3000833, 2020 09.
Article in English | MEDLINE | ID: mdl-32898188

ABSTRACT

The phonological deficit in dyslexia is associated with altered low-gamma oscillatory function in left auditory cortex, but a causal relationship between oscillatory function and phonemic processing has never been established. After confirming a deficit at 30 Hz with electroencephalography (EEG), we applied 20 minutes of transcranial alternating current stimulation (tACS) to transiently restore this activity in adults with dyslexia. The intervention significantly improved phonological processing and reading accuracy as measured immediately after tACS. The effect occurred selectively for a 30-Hz stimulation in the dyslexia group. Importantly, we observed that the focal intervention over the left auditory cortex also decreased 30-Hz activity in the right superior temporal cortex, resulting in reinstating a left dominance for the oscillatory response. These findings establish a causal role of neural oscillations in phonological processing and offer solid neurophysiological grounds for a potential correction of low-gamma anomalies and for alleviating the phonological deficit in dyslexia.


Subject(s)
Dyslexia/therapy , Reading , Speech Perception , Adolescent , Adult , Auditory Cortex/physiopathology , Auditory Cortex/radiation effects , Dyslexia/physiopathology , Electroencephalography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Auditory/radiation effects , Female , Humans , Male , Middle Aged , Phonetics , Speech Perception/physiology , Speech Perception/radiation effects , Transcranial Direct Current Stimulation/methods , Verbal Behavior/physiology , Verbal Behavior/radiation effects , Young Adult
5.
Neuroimage ; 231: 117864, 2021 05 01.
Article in English | MEDLINE | ID: mdl-33592241

ABSTRACT

Both electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI) are non-invasive methods that show complementary aspects of human brain activity. Although they measure different proxies of brain activity, the blood-oxygenation (fMRI) and neurophysiological (EEG) signals are indirectly coupled. Both the electrophysiological and the BOLD signal can map the underlying functional connectivity structure at the whole-brain scale at different timescales. Previous work demonstrated a moderate but significant correlation between resting-state functional connectivity of both modalities; however, there is a wide range of technical setups to measure simultaneous EEG-fMRI, and the reliability of those measures across different setups remains unknown. This is true notably with respect to different magnetic field strengths (low and high field) and different spatial sampling of EEG (medium to high-density electrode coverage). Here, we investigated the reproducibility of the bimodal EEG-fMRI functional connectome in the most comprehensive resting-state simultaneous EEG-fMRI dataset compiled to date, including a total of 72 subjects from four different imaging centers. Data were acquired from 1.5T, 3T and 7T scanners with simultaneously recorded EEG using 64 or 256 electrodes. We demonstrate that whole-brain monomodal connectivity reproducibly correlates across different datasets and that a moderate crossmodal correlation between EEG and fMRI connectivity of r ≈ 0.3 can be reproducibly extracted in low- and high-field scanners. The crossmodal correlation was strongest in the EEG-ß frequency band but exists across all frequency bands. Both homotopic and within intrinsic connectivity network (ICN) connections contributed the most to the crossmodal relationship.
This study confirms, using a considerably diverse range of recording setups, that simultaneous EEG-fMRI offers a consistent estimate of multimodal functional connectomes in healthy subjects, which are dominantly linked through a functional core of ICNs spanning the different timescales measured by EEG and fMRI. This opens new avenues for estimating the dynamics of brain function and provides a better understanding of interactions between EEG and fMRI measures. This observed level of reproducibility also defines a baseline for the study of alterations of this coupling in pathological conditions and their role as potential clinical markers.
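The crossmodal correlation reported in entry 5 amounts to correlating the unique region pairs (upper triangle) of an EEG-band connectivity matrix with those of an fMRI connectivity matrix. A toy sketch on synthetic matrices, with the noise level chosen so the correlation lands near the reported r ≈ 0.3; parcel count and noise are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 68  # e.g., cortical parcels (hypothetical atlas size)

# toy shared connectome structure underlying both modalities
shared = rng.standard_normal((n, n))
shared = (shared + shared.T) / 2

def noisy_connectome(base, noise):
    """Symmetric connectivity matrix = shared structure + modality noise."""
    m = base + noise * rng.standard_normal(base.shape)
    m = (m + m.T) / 2
    np.fill_diagonal(m, 1.0)
    return m

fmri_fc = noisy_connectome(shared, 1.5)
eeg_beta_fc = noisy_connectome(shared, 1.5)

# correlate the unique region pairs across modalities
iu = np.triu_indices(n, k=1)
r = np.corrcoef(fmri_fc[iu], eeg_beta_fc[iu])[0, 1]
```

Restricting to the upper triangle avoids double-counting symmetric pairs and excludes the trivial diagonal.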


Subject(s)
Brain/diagnostic imaging , Connectome/standards , Databases, Factual/standards , Electroencephalography/standards , Magnetic Resonance Imaging/standards , Nerve Net/diagnostic imaging , Adolescent , Adult , Brain/physiology , Connectome/methods , Electroencephalography/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , Middle Aged , Nerve Net/physiology , Reproducibility of Results , Young Adult
6.
Proc Natl Acad Sci U S A ; 115(6): E1299-E1308, 2018 02 06.
Article in English | MEDLINE | ID: mdl-29363598

ABSTRACT

Percepts and words can be decoded from distributed neural activity measures. However, the existence of widespread representations might conflict with the more classical notions of hierarchical processing and efficient coding, which are especially relevant in speech processing. Using fMRI and magnetoencephalography during syllable identification, we show that sensory and decisional activity colocalize to a restricted part of the posterior superior temporal gyrus (pSTG). Next, using intracortical recordings, we demonstrate that early and focal neural activity in this region distinguishes correct from incorrect decisions and can be machine-decoded to classify syllables. Crucially, significant machine decoding was possible from neuronal activity sampled across different regions of the temporal and frontal lobes, despite weak or absent sensory or decision-related responses. These findings show that speech-sound categorization relies on an efficient readout of focal pSTG neural activity, while more distributed activity patterns, although classifiable by machine learning, instead reflect collateral processes of sensory perception and decision.


Subject(s)
Epilepsy/physiopathology , Phonetics , Speech Perception/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Adult , Aged , Brain Mapping , Case-Control Studies , Female , Humans , Magnetic Resonance Imaging , Magnetoencephalography , Male , Young Adult
7.
Neuroimage ; 219: 116998, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32480035

ABSTRACT

Long-range connectivity has become the most studied feature of human functional Magnetic Resonance Imaging (fMRI), yet the spatial and temporal relationship between its whole-brain dynamics and electrophysiological connectivity remains largely unknown. FMRI-derived functional connectivity exhibits spatial reconfigurations or time-varying dynamics at infraslow (<0.1Hz) speeds. Conversely, electrophysiological connectivity is based on cross-region coupling of fast oscillations (~1-100Hz). It is unclear whether such fast oscillation-based coupling varies at infraslow speeds, temporally coinciding with infraslow dynamics across the fMRI-based connectome. If so, does the association of fMRI-derived and electrophysiological dynamics spatially vary over the connectome across the functionally distinct electrophysiological oscillation bands? In two concurrent electroencephalography (EEG)-fMRI resting-state datasets, oscillation-based coherence in all canonical bands (delta through gamma) indeed reconfigured at infraslow speeds in tandem with fMRI-derived connectivity changes in corresponding region-pairs. Interestingly, irrespective of EEG frequency-band, the cross-modal tie of connectivity dynamics comprised a large proportion of connections distributed across the entire connectome. However, there were frequency-specific differences in the relative strength of the cross-modal association. This association was strongest in visual to somatomotor connections for slower EEG-bands, and in connections involving the Default Mode Network for faster EEG-bands. Methodologically, the findings imply that neural connectivity dynamics can be reliably measured by fMRI despite heavy susceptibility to noise, and by EEG despite shortcomings of source reconstruction. Biologically, the findings provide evidence that, in contrast with the known territories of oscillation power, oscillation coupling in all bands slowly reconfigures in a highly distributed manner across the whole-brain connectome.


Subject(s)
Brain/physiology , Connectome/methods , Electroencephalography/methods , Magnetic Resonance Imaging/methods , Nerve Net/physiology , Adolescent , Adult , Brain/diagnostic imaging , Female , Humans , Male , Nerve Net/diagnostic imaging , Young Adult
8.
Neuroimage ; 218: 116882, 2020 09.
Article in English | MEDLINE | ID: mdl-32439539

ABSTRACT

Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding, remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they were listening to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range, and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better phase than power information decoding, and a bimodal spectral profile of information content with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content. 
Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.
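The phase-versus-power decoding comparison in entry 8 can be illustrated with a synthetic dataset in which the stimulus class sets the phase, but not the power, of a 6 Hz response; phase features then decode well above chance while power features do not. The threshold decoder and all parameters are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, fs, dur = 200, 250, 1.0
t = np.arange(0, dur, 1 / fs)
labels = rng.integers(0, 2, n_trials)

# synthetic 6 Hz response: class sets the phase, amplitude is random
trials = np.array([
    (1 + 0.5 * rng.standard_normal()) *
    np.cos(2 * np.pi * 6 * t + (np.pi if lab else 0.0)) +
    0.5 * rng.standard_normal(t.size)
    for lab in labels
])

freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - 6))            # 6 Hz frequency bin
coef = np.fft.rfft(trials, axis=1)[:, k]

phase_feat = np.cos(np.angle(coef))          # phase-based feature
power_feat = np.abs(coef) ** 2               # power-based feature

def decode(feat, labels):
    """Toy threshold decoder: fit on the first half, test on the second."""
    half = len(labels) // 2
    thr = feat[:half].mean()
    sign = 1 if feat[:half][labels[:half] == 1].mean() > thr else -1
    pred = (sign * (feat[half:] - thr) > 0).astype(int)
    return (pred == labels[half:]).mean()

acc_phase = decode(phase_feat, labels)
acc_power = decode(power_feat, labels)
```

Because power is drawn identically for both classes, only the phase feature carries class information, mirroring the better phase than power decoding the abstract reports.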


Subject(s)
Auditory Cortex/physiology , Computer Simulation , Speech Perception/physiology , Adult , Electrocorticography , Female , Humans , Male , Signal Processing, Computer-Assisted
9.
Neuroimage ; 216: 116571, 2020 08 01.
Article in English | MEDLINE | ID: mdl-31987996

ABSTRACT

Naturalistic movie paradigms are exquisitely dynamic by nature, yet dedicated analytical methods typically remain static. Here, we deployed a dynamic inter-subject functional correlation (ISFC) analysis to study movie-driven functional brain changes in a population of male young adults diagnosed with autism spectrum disorder (ASD). We took inspiration from the resting-state research field in generating a set of whole-brain ISFC states expressed by the analysed ASD and typically developing (TD) subjects along time. Change points of state expression often involved transitions between different scenes of the movie, resulting in the reorganisation of whole-brain ISFC patterns to recruit different functional networks. Both subject populations showed idiosyncratic state expression at dedicated time points, but only TD subjects were also characterised by episodes of homogeneous recruitment. The temporal fluctuations in both quantities, as well as in cross-population dissimilarity, were tied to contextual movie cues. The prominent idiosyncrasy seen in ASD subjects was linked to individual symptomatology by partial least squares analysis, as different temporal sequences of ISFC states were expressed by subjects suffering from social and verbal communication impairments, as opposed to nonverbal communication deficits and stereotypic behaviours. Furthermore, the temporal expression of several of these states was correlated with the movie context, the presence of faces on screen, or overall luminosity. Overall, our results support the use of dynamic analytical frameworks to fully exploit the information obtained by naturalistic stimulation paradigms. They also show that autism should be understood as a multi-faceted disorder, in which the functional brain alterations seen in a given subject will vary as a function of the extent and balance of expressed symptoms.
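Inter-subject functional correlation (ISFC), the core measure of entry 9 (and entry 13), correlates one subject's regional time courses with those of the remaining subjects, so that only stimulus-driven covariation survives. A minimal static-ISFC sketch on synthetic data; the dynamic, state-based extension used in the study is not shown, and all sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_reg, n_tp = 5, 10, 300

# shared movie-driven signal per region + subject-specific noise
movie = rng.standard_normal((n_reg, n_tp))
subjects = np.array([0.7 * movie + rng.standard_normal((n_reg, n_tp))
                     for _ in range(n_sub)])

def isfc(subjects, s):
    """Correlate subject s's regions with the average of all other subjects."""
    others = np.delete(subjects, s, axis=0).mean(axis=0)
    a = subjects[s] - subjects[s].mean(axis=1, keepdims=True)
    b = others - others.mean(axis=1, keepdims=True)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T                     # (n_reg x n_reg) ISFC matrix

mat = isfc(subjects, 0)
# same-region inter-subject correlation (diagonal) should dominate
diag = np.diag(mat).mean()
offdiag = mat[~np.eye(n_reg, dtype=bool)].mean()
```

Subject-specific (idiosyncratic) noise cancels in the leave-one-out average, which is what lets ISFC disentangle movie-induced changes from intrinsic activity.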


Subject(s)
Auditory Perception/physiology , Autism Spectrum Disorder/physiopathology , Cerebral Cortex/physiopathology , Connectome/methods , Magnetic Resonance Imaging/methods , Motion Pictures , Social Perception , Visual Perception/physiology , Adolescent , Adult , Cerebral Cortex/diagnostic imaging , Humans , Male , Young Adult
10.
Neuroimage ; 212: 116635, 2020 05 15.
Article in English | MEDLINE | ID: mdl-32105884

ABSTRACT

Investigating context-dependent modulations of Functional Connectivity (FC) with functional magnetic resonance imaging is crucial to reveal the neurological underpinnings of cognitive processing. Most current analysis methods hypothesise sustained FC within the duration of a task, but this assumption has been shown to be too limiting by recent imaging studies. While several methods have been proposed to study functional dynamics during rest, task-based studies have yet to fully disentangle network modulations. Here, we propose a seed-based method to probe task-dependent modulations of brain activity by revealing Psychophysiological Interactions of Co-activation Patterns (PPI-CAPs). This point process-based approach temporally decomposes task-modulated connectivity into dynamic building blocks which cannot be captured by current methods, such as PPI or Dynamic Causal Modelling. Additionally, it identifies the occurrence of co-activation patterns at single frame resolution as opposed to window-based methods. In a naturalistic setting where participants watched a TV program, we retrieved several patterns of co-activation with a posterior cingulate cortex seed whose occurrence rates and polarity varied depending on the context; on the seed activity; or on an interaction between the two. Moreover, our method exposed the consistency in effective connectivity patterns across subjects and time, allowing us to uncover links between PPI-CAPs and specific stimuli contained in the video. Our study reveals that explicitly tracking connectivity pattern transients is paramount to advance our understanding of how different brain areas dynamically communicate when presented with a set of cues.
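The point-process step underlying co-activation patterns (CAPs) in entry 10 keeps only the frames where the seed is strongly active and averages them. A bare-bones sketch on toy data; the psychophysiological-interaction part of PPI-CAPs (context-dependent occurrence rates) is omitted, and all sizes and thresholds are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
n_tp, n_reg = 1000, 20

seed = rng.standard_normal(n_tp)        # seed (e.g., PCC) time course
# toy data: regions 0-9 co-activate with the seed, regions 10-19 do not
data = rng.standard_normal((n_tp, n_reg))
data[:, :10] += 0.8 * seed[:, None]

# point-process step: keep only frames where the seed is strongly active,
# at single-frame resolution (no sliding window)
active = seed > np.quantile(seed, 0.85)
cap = data[active].mean(axis=0)         # one co-activation pattern
```

In the full method, such frames are further clustered into multiple patterns and their occurrences are modelled as a function of task context and seed activity.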


Subject(s)
Brain Mapping/methods , Brain/physiology , Cognition/physiology , Image Processing, Computer-Assisted/methods , Neural Pathways/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Models, Neurological , Psychophysiology , Young Adult
11.
J Neurosci ; 38(3): 710-722, 2018 01 17.
Article in English | MEDLINE | ID: mdl-29217685

ABSTRACT

Speech comprehension is preserved up to a threefold acceleration, but deteriorates rapidly at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech into syllabic units using δ/θ oscillations. Here, we investigated whether the involvement of neuronal oscillations in processing accelerated speech also relates to their scale-free amplitude modulation as indexed by the strength of long-range temporal correlations (LRTC). We recorded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensible rates, at a mostly unintelligible rate and at this same speed interleaved with silence gaps. δ, θ, and low-γ oscillations followed the nonlinear variation of comprehension, with LRTC rising only at the highest speed. In contrast, increasing the rate was associated with a monotonic increase in LRTC in high-γ activity. When intelligibility was restored with the insertion of silence gaps, LRTC in the δ, θ, and low-γ oscillations resumed the low levels observed for intelligible speech. Remarkably, the lower the individual subject scaling exponents of δ/θ oscillations, the greater the comprehension of the fastest speech rate. Moreover, the strength of LRTC of the speech envelope decreased at the maximal rate, suggesting an inverse relationship with the LRTC of brain dynamics when comprehension halts. Our findings show that the scale-free amplitude modulations of cortical oscillations and speech signals are tightly coupled to speech uptake capacity.
SIGNIFICANCE STATEMENT One may read this statement in 20-30 s, but reading it in less than five leaves us clueless. Our minds limit how much information we grasp in an instant. Understanding the neural constraints on our capacity for sensory uptake is a fundamental question in neuroscience.
Here, MEG was used to investigate neuronal activity while subjects listened to radio news played faster and faster until becoming unintelligible. We found that speech comprehension is related to the scale-free dynamics of δ and θ bands, whereas this property in high-γ fluctuations mirrors speech rate. We propose that successful speech processing imposes constraints on the self-organization of synchronous cell assemblies and their scale-free dynamics adjusts to the temporal properties of spoken language.
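The long-range temporal correlations (LRTC) in entry 11 are typically quantified with detrended fluctuation analysis (DFA), whose scaling exponent is ~0.5 for uncorrelated noise and larger for temporally dependent signals. A compact first-order DFA sketch (standard method, not the authors' exact implementation):

```python
import numpy as np

def dfa(signal, scales):
    """First-order DFA: return the scaling exponent of the fluctuation function."""
    profile = np.cumsum(signal - signal.mean())
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        windows = profile[:n_win * s].reshape(n_win, s)
        x = np.arange(s)
        # linear detrend in each window, then RMS of the residuals
        resid = []
        for w in windows:
            coef = np.polyfit(x, w, 1)
            resid.append(w - np.polyval(coef, x))
        flucts.append(np.sqrt(np.mean(np.square(resid))))
    # slope of log F(s) vs log s is the DFA exponent
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(4)
scales = [16, 32, 64, 128, 256]
white = rng.standard_normal(2 ** 14)
alpha_white = dfa(white, scales)           # ~0.5: no temporal correlations
alpha_rw = dfa(np.cumsum(white), scales)   # ~1.5: strongly correlated walk
```

In the study, such exponents are computed on band-limited amplitude envelopes (e.g., δ/θ or high-γ) rather than on raw signals.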


Subject(s)
Brain/physiology , Comprehension/physiology , Neurons/physiology , Speech Perception/physiology , Female , Humans , Magnetoencephalography , Male
12.
J Neurosci ; 37(33): 7930-7938, 2017 08 16.
Article in English | MEDLINE | ID: mdl-28729443

ABSTRACT

Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving ß activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of ß (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and ß activity, with ß activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and ß activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous ß activity, but not primarily by the capacity of θ activity to track the syllabic rhythm.
SIGNIFICANCE STATEMENT Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. The reason comprehension drops when speech is accelerated could hence be because θ oscillations can no longer follow the syllabic rate.
Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-ß oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to syllabic rate, but by an endogenous decoding process.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Beta Rhythm/physiology , Comprehension/physiology , Speech Perception/physiology , Theta Rhythm/physiology , Adult , Electroencephalography/methods , Female , Humans , Male , Random Allocation , Speech/physiology , Time Factors , Young Adult
13.
Hum Brain Mapp ; 39(6): 2391-2404, 2018 06.
Article in English | MEDLINE | ID: mdl-29504186

ABSTRACT

To refine our understanding of autism spectrum disorders (ASD), studies of the brain in dynamic, multimodal and ecological experimental settings are required. One way to achieve this is to compare the neural responses of ASD and typically developing (TD) individuals when viewing a naturalistic movie, but the temporal complexity of the stimulus hampers this task, and the presence of intrinsic functional connectivity (FC) may overshadow movie-driven fluctuations. Here, we detected inter-subject functional correlation (ISFC) transients to disentangle movie-induced functional changes from underlying resting-state activity while probing FC dynamically. When considering the number of significant ISFC excursions triggered by the movie across the brain, connections between remote functional modules were more heterogeneously engaged in the ASD population. Dynamically tracking the temporal profiles of those ISFC changes and tying them to specific movie subparts, this idiosyncrasy in ASD responses was then shown to involve functional integration and segregation mechanisms such as response inhibition, background suppression, or multisensory integration, while low-level visual processing was spared. Through the application of a new framework for the study of dynamic experimental paradigms, our results reveal a temporally localized idiosyncrasy in ASD responses, specific to short-lived episodes of long-range functional interplays.


Subject(s)
Autism Spectrum Disorder/pathology , Brain Mapping , Brain/diagnostic imaging , Comprehension/physiology , Motion Pictures , Adolescent , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Neural Pathways/diagnostic imaging , Oxygen/blood , Time Factors , Young Adult
14.
Brain Sci ; 14(3)2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38539585

ABSTRACT

Brain-Computer Interfaces (BCIs) aim to establish a pathway between the brain and an external device without the involvement of the motor system, relying exclusively on neural signals. Such systems have the potential to provide a means of communication for patients who have lost the ability to speak due to a neurological disorder. Traditional methodologies for decoding imagined speech directly from brain signals often deploy static classifiers, that is, decoders that are computed once at the beginning of the experiment and remain unchanged throughout BCI use. However, this approach might be inadequate to effectively handle the non-stationary nature of electroencephalography (EEG) signals and the learning that accompanies BCI use, as parameters are expected to change, all the more so in a real-time setting. To address this limitation, we developed an adaptive classifier that updates its parameters based on the incoming data in real time. We first identified optimal parameters (the update coefficient, UC) to be used in an adaptive Linear Discriminant Analysis (LDA) classifier, using a previously recorded EEG dataset, acquired while healthy participants controlled a binary BCI based on imagined syllable decoding. We subsequently tested the effectiveness of this optimization in a real-time BCI control setting. Twenty healthy participants performed two BCI control sessions based on the imagery of two syllables, using a static LDA and an adaptive LDA classifier, in randomized order. As hypothesized, the adaptive classifier led to better performance than the static one in this real-time BCI control task. Furthermore, the optimal parameters for the adaptive classifier were closely aligned in both datasets, acquired using the same syllable imagery task. These findings highlight the effectiveness and reliability of adaptive LDA classifiers for real-time imagined speech decoding.
Such an improvement can shorten the training time and favor the development of multi-class BCIs, which is of clear interest for non-invasive systems, notably characterized by low decoding accuracies.
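The adaptive classifier of entry 14 can be approximated by an LDA whose class means are updated after each trial via an exponential moving average with update coefficient UC. This generic sketch (fixed shared covariance, invented drift parameters) is not the authors' exact algorithm, but it shows why adaptation helps under non-stationary EEG features:

```python
import numpy as np

class AdaptiveLDA:
    """Binary LDA whose class means are updated with coefficient uc
    after every labeled trial (shared covariance kept fixed here)."""
    def __init__(self, mean0, mean1, cov, uc=0.05):
        self.means = [np.asarray(mean0, float), np.asarray(mean1, float)]
        self.icov = np.linalg.inv(cov)
        self.uc = uc

    def predict(self, x):
        w = self.icov @ (self.means[1] - self.means[0])
        b = -0.5 * w @ (self.means[0] + self.means[1])
        return int(w @ x + b > 0)

    def update(self, x, label):
        # exponential moving average of the class mean
        m = self.means[label]
        self.means[label] = (1 - self.uc) * m + self.uc * np.asarray(x, float)

rng = np.random.default_rng(5)
cov = np.eye(2)
clf = AdaptiveLDA([0, 0], [1, 1], cov, uc=0.1)

# simulate a session whose feature distribution slowly drifts
correct, n = 0, 400
for i in range(n):
    label = int(rng.integers(0, 2))
    drift = np.array([2.0, 0.0]) * (i / n)      # non-stationarity
    x = rng.multivariate_normal(np.array([label, label], float) + drift,
                                0.1 * cov)
    correct += clf.predict(x) == label          # classify, then adapt
    clf.update(x, label)
acc = correct / n
```

A static classifier fitted on the initial means would degrade as the drift accumulates; the EMA update keeps the decision boundary tracking the current feature distribution, and uc plays the role of the UC parameter tuned in the study.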

15.
bioRxiv ; 2024 Jan 21.
Article in English | MEDLINE | ID: mdl-37961305

ABSTRACT

Traditional models of speech perception posit that neural activity encodes speech through a hierarchy of cognitive processes, from low-level representations of acoustic and phonetic features to high-level semantic encoding. Yet it remains unknown how neural representations are transformed across levels of the speech hierarchy. Here, we analyzed unique microelectrode array recordings of neuronal spiking activity from the human left anterior superior temporal gyrus, a brain region at the interface between phonetic and semantic speech processing, during a semantic categorization task and natural speech perception. We identified distinct neural manifolds for semantic and phonetic features, with a functional separation of the corresponding low-dimensional trajectories. Moreover, phonetic and semantic representations were encoded concurrently and reflected in power increases in the beta and low-gamma local field potentials, suggesting top-down predictive and bottom-up cumulative processes. Our results are the first to demonstrate mechanisms for hierarchical speech transformations that are specific to neuronal population dynamics.

16.
J Neurosci ; 32(41): 14433-41, 2012 Oct 10.
Article in English | MEDLINE | ID: mdl-23055513

ABSTRACT

Both our environment and our behavior contain many spatiotemporal regularities. Preferential and differential tuning of neural populations to these regularities can be demonstrated by assessing rate dependence of neural responses evoked during continuous periodic stimulation. Here, we used functional magnetic resonance imaging to measure regional variations of temporal sensitivity along the human ventral visual stream. By alternating one face and one house stimulus, we combined sufficient low-level signal modulation with changes in semantic meaning and could therefore drive all tiers of visual cortex strongly enough to assess rate dependence. We found several dissociations between early visual cortex and middle- and higher-tier regions. First, there was a progressive slowing of the stimulation rates yielding peak responses along the ventral visual stream. This finding indicates that the width of temporal integration windows increases at higher hierarchical levels. Next, for fixed rates, early but not higher visual cortex responses additionally depended on the length of stimulus exposure, which may indicate increased persistence of responses to short stimuli at higher hierarchical levels. Finally, attention, which was recruited by an incidental task, interacted with stimulation rate and shifted tuning peaks toward lower frequencies. Together, these findings quantify neural response properties that are likely to be operational during natural vision and that provide putative neurofunctional substrates of mechanisms relevant to several psychophysical phenomena such as masking and the attentional blink. Moreover, they illustrate temporal constraints for translating the deployment of attention into enhanced neural responses and thereby account for lower limits of attentional dwell time.


Subject(s)
Photic Stimulation/methods , Psychomotor Performance/physiology , Reaction Time/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
17.
J Neurosci ; 32(1): 275-81, 2012 Jan 04.
Article in English | MEDLINE | ID: mdl-22219289

ABSTRACT

Asymmetry in auditory cortical oscillations could play a role in speech perception by fostering the triage of information across the two hemispheres. Due to this asymmetry, fast speech temporal modulations relevant for phonemic analysis could be best perceived by the left auditory cortex, while slower modulations conveying vocal and paralinguistic information would be better captured by the right one. It is unclear, however, whether and how early oscillation-based selection influences speech perception. Using a dichotic listening paradigm in human participants, where we provided different parts of the speech envelope to each ear, we show that word recognition is facilitated when the temporal properties of speech match the rhythmic properties of auditory cortices. We further show that the interaction between the speech envelope and auditory cortical rhythms translates into their level of neural activity (as measured with fMRI). In the left auditory cortex, the neural activity level related to stimulus-brain rhythm interaction predicts speech perception facilitation. These data demonstrate that speech interacts with auditory cortical rhythms differently in the right and left auditory cortex, and that in the latter, the interaction directly impacts speech perception performance.


Subject(s)
Auditory Cortex/physiology , Dominance, Cerebral/physiology , Evoked Potentials, Auditory/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/anatomy & histology , Female , Functional Laterality/physiology , Humans , Language Tests , Male , Speech Discrimination Tests/methods , Young Adult
18.
J Neurosci ; 32(41): 14305-10, 2012 Oct 10.
Article in English | MEDLINE | ID: mdl-23055501

ABSTRACT

Neural oscillations in the alpha band (8-12 Hz) are increasingly viewed as an active inhibitory mechanism that gates and controls sensory information processing as a function of cognitive relevance. Extending this view, phase synchronization of alpha oscillations across distant cortical regions could regulate integration of information. Here, we investigated whether such long-range cross-region coupling in the alpha band is intrinsically and selectively linked to activity in a distinct functionally specialized brain network. If so, this would provide new insight into the functional role of alpha band phase synchrony. We adapted the phase-locking value to assess fluctuations in synchrony that occur over time in ongoing activity. Concurrent EEG and functional magnetic resonance imaging (fMRI) were recorded during resting wakefulness in 26 human subjects. Fluctuations in global synchrony in the upper alpha band correlated positively with activity in several prefrontal and parietal regions (as measured by fMRI). fMRI intrinsic connectivity analysis confirmed that these regions correspond to the well known fronto-parietal (FP) network. Spectral correlations with this network's activity confirmed that no other frequency band showed equivalent results. This selective association supports an intrinsic relation between large-scale alpha phase synchrony and cognitive functions associated with the FP network. This network has been suggested to implement phasic aspects of top-down modulation such as initiation and change in moment-to-moment control. Mechanistically, long-range upper alpha band synchrony is well suited to support these functions. Complementing our previous findings that related alpha oscillation power to neural structures serving tonic control, the current findings link alpha phase synchrony to neural structures underpinning phasic control of alertness and task requirements.
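The core measure above, a phase-locking value (PLV) adapted to track fluctuations in synchrony over time, can be sketched as a sliding-window computation on band-limited signals: PLV is the modulus of the mean unit phasor of the instantaneous phase difference within each window. This is a generic illustration, not the study's exact pipeline (which worked on upper-alpha-band EEG with concurrent fMRI); the window parameters and helper name are assumptions.

```python
import numpy as np

def _analytic(x):
    """Analytic signal via the FFT (same construction as a Hilbert transform):
    zero negative frequencies, double positive ones."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[0] = 1
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(spectrum * h)

def sliding_plv(x1, x2, fs, win_s=2.0, step_s=0.5):
    """Phase-locking value between two band-pass filtered signals, computed
    in sliding windows so that synchrony can fluctuate over time.
    PLV = |mean(exp(1j * phase difference))| within each window."""
    dphi = np.angle(_analytic(x1)) - np.angle(_analytic(x2))
    win, step = int(win_s * fs), int(step_s * fs)
    return np.array([
        np.abs(np.mean(np.exp(1j * dphi[s:s + win])))
        for s in range(0, len(dphi) - win + 1, step)
    ])
```

Two signals with a stable phase lag yield PLV near 1 in every window, while unrelated signals yield values near zero; the resulting PLV time course is what can then be correlated with the fMRI signal.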


Subject(s)
Adaptation, Physiological/physiology , Alpha Rhythm/physiology , Frontal Lobe/physiology , Nerve Net/physiology , Parietal Lobe/physiology , Adult , Female , Humans , Male , Young Adult
19.
Hum Brain Mapp ; 34(5): 1208-19, 2013 May.
Article in English | MEDLINE | ID: mdl-22287085

ABSTRACT

Post-lingual deafness induces a decline in the ability to process phonological sounds or evoke phonological representations. This decline is paralleled by abnormally high neural activity in the right posterior superior temporal gyrus/supramarginal gyrus (PSTG/SMG). As this neural plasticity negatively relates to cochlear implantation (CI) success, it appears important to understand its determinants. We addressed the neuro-functional mechanisms underlying this maladaptive phenomenon using behavioral and functional magnetic resonance imaging (fMRI) data acquired in 10 normal-hearing subjects and 10 post-lingual deaf candidates for CI. We compared two memory tasks in which subjects had to evoke phonological (speech) and environmental sound representations from visually presented items. We observed dissociations in the dynamics of right versus left PSTG/SMG neural responses as a function of duration of deafness. Responses in the left PSTG/SMG to phonological processing and responses in the right PSTG/SMG to environmental sound imagery both declined. However, abnormally high neural activity was observed in response to phonological visual items in the right PSTG/SMG, i.e., contralateral to the zone where phonological activity decreased. In contrast, no such overactivation was observed in the left PSTG/SMG in response to environmental sounds. This asymmetry in functional adaptation to deafness suggests that maladaptive reorganization of the right PSTG/SMG region is not due to balanced hemispheric interaction, but to a specific take-over of the right PSTG/SMG region by phonological processing, presumably because speech remains behaviorally more relevant to communication than the processing of environmental sounds. These results demonstrate that long-term cognitive alteration of auditory processing shapes functional cerebral reorganization.


Subject(s)
Cochlear Implantation/methods , Deafness/pathology , Deafness/therapy , Functional Laterality/physiology , Temporal Lobe/physiopathology , Acoustic Stimulation , Adult , Aged , Female , Humans , Image Processing, Computer-Assisted , Imagery, Psychotherapy/methods , Magnetic Resonance Imaging , Male , Middle Aged , Oxygen , Phonetics , Reaction Time , Recognition, Psychology , Sensory Deprivation , Statistics, Nonparametric , Temporal Lobe/blood supply , Treatment Outcome , Vocabulary
20.
Proc Natl Acad Sci U S A ; 107(43): 18688-93, 2010 Oct 26.
Article in English | MEDLINE | ID: mdl-20956297

ABSTRACT

The physiological basis of human cerebral asymmetry for language remains mysterious. We have used simultaneous physiological and anatomical measurements to investigate the issue. Concentrating on neural oscillatory activity in speech-specific frequency bands and exploring interactions between gestural (motor) and auditory-evoked activity, we find, in the absence of language-related processing, that left auditory, somatosensory, articulatory motor, and inferior parietal cortices show specific, lateralized, speech-related physiological properties. With the addition of ecologically valid audiovisual stimulation, activity in auditory cortex synchronizes with left-dominant input from the motor cortex at frequencies corresponding to syllabic, but not phonemic, speech rhythms. Our results support theories of language lateralization that posit a major role for intrinsic, hardwired perceptuomotor processing in syllabic parsing and are compatible both with the evolutionary view that speech arose from a combination of syllable-sized vocalizations and meaningful hand gestures and with developmental observations suggesting phonemic analysis is a developmentally acquired process.


Subject(s)
Brain/physiology , Dominance, Cerebral/physiology , Language , Speech/physiology , Adult , Auditory Cortex/physiology , Brain/anatomy & histology , Electroencephalography , Humans , Magnetic Resonance Imaging , Male , Motor Cortex/physiology , Young Adult