1.
Cereb Cortex ; 34(2), 2024 01 31.
Article in English | MEDLINE | ID: mdl-38367613

ABSTRACT

Does neural activity reveal how balanced bilinguals choose languages? Despite using diverse neuroimaging techniques, prior studies have not provided a definitive answer to this question. Nonetheless, studies involving direct brain stimulation in bilinguals have identified distinct brain regions associated with language production in different languages. In this magnetoencephalography study with 45 proficient Spanish-Basque bilinguals, we investigated language selection during covert picture naming and word reading tasks. Participants were prompted to name line drawings or to read words only if the color of the stimulus changed to green, which occurred in 10% of trials. The task was performed either in Spanish or in Basque. Despite similar sensor-level evoked activity for both languages in both tasks, decoding analyses revealed language-specific classification ~100 ms post-stimulus onset. During picture naming, right occipital-temporal sensors contributed most to language decoding, while left occipital-temporal sensors were crucial for decoding during word reading. Cross-task decoding analysis revealed robust generalization effects from picture naming to word reading. Our methodology involved a fine-grained examination of neural responses using magnetoencephalography, offering insights into the dynamics of language processing in bilinguals. This study refines our understanding of the neural underpinnings of language selection and bridges the gap between non-invasive and invasive experimental evidence on bilingual language production.
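The time-resolved decoding reported here (language-specific classification emerging ~100 ms after stimulus onset) can be sketched on simulated sensor data. Everything below is illustrative: the array shapes, the injected class pattern, and the logistic-regression classifier are our own assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 120, 20, 50
labels = np.repeat([0, 1], n_trials // 2)          # two "languages"
X = rng.standard_normal((n_trials, n_sensors, n_times))
pattern = rng.standard_normal(n_sensors)
X[labels == 1, :, 25:] += 0.8 * pattern[:, None]   # class signal from sample 25 on

def decode_timecourse(X, y):
    """Cross-validated classification accuracy at each time sample."""
    clf = LogisticRegression(max_iter=1000)
    return np.array([cross_val_score(clf, X[:, :, ti], y, cv=5).mean()
                     for ti in range(X.shape[-1])])

acc = decode_timecourse(X, labels)
```

Accuracy hovers at chance before the injected effect and rises above it afterward, which is the logic behind dating "when" condition information becomes available in the sensors.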


Subject(s)
Magnetoencephalography , Multilingualism , Humans , Language , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods
2.
Neuroimage ; 297: 120675, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38885886

ABSTRACT

The synchronization between the speech envelope and neural activity in auditory regions, referred to as cortical tracking of speech (CTS), plays a key role in speech processing. The method selected for extracting the envelope is a crucial step in CTS measurement, and the absence of a consensus on best practices among the various methods can influence analysis outcomes and interpretation. Here, we systematically compare five standard envelope extraction methods: the absolute value of the Hilbert transform (absHilbert), gammatone filterbanks, a heuristic approach, the Bark scale, and vocalic energy, analyzing their impact on CTS. We present performance metrics for each method based on recordings of brain activity from participants listening to speech in clear and noisy conditions, using intracranial EEG, MEG and EEG data. As expected, we observed significant CTS in temporal brain regions below 10 Hz across all datasets, regardless of the extraction method. In general, the gammatone filterbank approach consistently outperformed the other methods. Results from our study can guide scientists in the field to make informed decisions about the optimal analysis for extracting CTS, contributing to advancing our understanding of the neuronal mechanisms implicated in CTS.
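The simplest of the compared methods, absHilbert, can be illustrated with a minimal sketch: take the magnitude of the analytic signal and low-pass it to keep the slow fluctuations relevant for CTS. The function name and the 10 Hz cutoff are our own choices, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def abs_hilbert_envelope(audio, fs, lp_cutoff=10.0):
    """Broadband envelope: magnitude of the analytic signal, then low-passed."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, lp_cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

# Demo: a 100 Hz carrier amplitude-modulated at 4 Hz (a syllable-like rate);
# the extracted envelope should track the 4 Hz modulation.
fs = 1000
t = np.arange(fs) / fs
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)
audio = modulation * np.sin(2 * np.pi * 100 * t)
env = abs_hilbert_envelope(audio, fs)
```

A gammatone-filterbank version would instead sum subband envelopes from cochlea-like filters before the low-pass step, which is presumably why it behaves better on natural speech.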

3.
Neuroimage ; 273: 120072, 2023 06.
Article in English | MEDLINE | ID: mdl-37004829

ABSTRACT

Early research proposed that individuals with developmental dyslexia use contextual information to facilitate lexical access and compensate for phonological deficits. Yet at present there is no corroborating neuro-cognitive evidence. We explored this with a novel combination of magnetoencephalography (MEG), neural encoding and grey matter volume analyses. We analysed MEG data from 41 adult native Spanish speakers (14 with dyslexic symptoms) who passively listened to naturalistic sentences. We used multivariate Temporal Response Function analysis to capture online cortical tracking of both auditory (speech envelope) and contextual information. To compute contextual information tracking we used word-level Semantic Surprisal derived using a Transformer Neural Network language model. We related online information tracking to participants' reading scores and grey matter volumes within the reading-linked cortical network. We found that right hemisphere envelope tracking was related to better phonological decoding (pseudoword reading) for both groups, with dyslexic readers performing worse overall at this task. Consistently, grey matter volume in the superior temporal and bilateral inferior frontal areas increased with better envelope tracking abilities. Critically, for dyslexic readers only, stronger Semantic Surprisal tracking in the right hemisphere was related to better word reading. These findings further support the notion of a speech envelope tracking deficit in dyslexia and provide novel evidence for top-down semantic compensatory mechanisms.
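The Temporal Response Function analysis used here boils down to regularized regression of the neural response on time-lagged copies of a stimulus feature (envelope or surprisal). Below is a minimal ridge-regression sketch under our own assumptions (white-noise stimulus, single response channel, 20 lags); the actual study used the multivariate mTRF framework.

```python
import numpy as np

def lagged_design(stim, n_lags):
    """Design matrix whose column j is the stimulus delayed by j samples."""
    X = np.zeros((stim.size, n_lags))
    for j in range(n_lags):
        X[j:, j] = stim[:stim.size - j]
    return X

def fit_trf(stim, resp, n_lags, lam=1.0):
    """Ridge-regularized temporal response function (stimulus -> response)."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)

rng = np.random.default_rng(2)
fs = 100
stim = rng.standard_normal(60 * fs)      # stand-in stimulus feature
kernel = np.zeros(20)
kernel[8] = 1.0                          # true TRF peaks at 80 ms lag
resp = np.convolve(stim, kernel)[:stim.size] + 0.5 * rng.standard_normal(stim.size)
w = fit_trf(stim, resp, n_lags=20)
```

Recovering the injected 80 ms peak from noisy data is the same operation, at toy scale, as estimating envelope or surprisal tracking from MEG.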


Subject(s)
Dyslexia , Speech Perception , Adult , Humans , Reading , Speech , Semantics , Magnetoencephalography , Speech Perception/physiology
4.
Hum Brain Mapp ; 44(7): 2862-2872, 2023 05.
Article in English | MEDLINE | ID: mdl-36852454

ABSTRACT

The coordination between theta-phase (3-7 Hz) and gamma-power (25-35 Hz) oscillations (theta-gamma phase-amplitude coupling, PAC) in the auditory cortex has been proposed as an essential neural mechanism involved in speech processing. However, it has not been established how this mechanism relates to the efficiency with which a listener processes speech. Speech processing in a non-native language offers a useful opportunity to evaluate whether theta-gamma PAC is modulated by the challenges imposed by the reception of speech input in a non-native language. The present study investigates how auditory theta-gamma PAC (recorded with magnetoencephalography) is modulated in both native and non-native speech reception. Participants were Spanish native (L1) speakers studying Basque (L2) at three different levels: beginner (Grade 1), intermediate (Grade 2), and advanced (Grade 3). We found that during L2 speech processing (i) theta-gamma PAC was more highly coordinated for intelligible compared to unintelligible speech; (ii) this coupling was modulated by proficiency in Basque, being lowest for beginners, higher for intermediate, and highest for advanced speakers (no such difference was observed in Spanish); and (iii) gamma power did not differ between languages and groups. These findings highlight how coordinated theta-gamma oscillatory activity is tightly related to speech comprehension: the stronger this coordination, the more proficiently the comprehension system parses the incoming speech input.
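One common PAC estimator is the mean vector length: the magnitude of the average of gamma-amplitude-weighted theta-phase vectors. This is a sketch under our assumptions (the study may use a different estimator, and real data would need surrogate-based normalization); the band limits match those quoted in the abstract.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(3, 7), amp_band=(25, 35)):
    """Mean-vector-length PAC: |mean of gamma amplitude times theta-phase vector|."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(10 * fs) / fs
theta = np.sin(2 * np.pi * 5 * t)
# Coupled: gamma amplitude rises on the theta peak; uncoupled: constant gamma.
coupled = theta + 0.5 * (1 + theta) / 2 * np.sin(2 * np.pi * 30 * t)
uncoupled = theta + 0.25 * np.sin(2 * np.pi * 30 * t)
mvl_coupled, mvl_uncoupled = pac_mvl(coupled, fs), pac_mvl(uncoupled, fs)
```

The coupled signal yields a clearly larger MVL than the uncoupled one, which is the contrast (intelligible vs. unintelligible, proficient vs. beginner) the study builds on.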


Subject(s)
Auditory Cortex , Speech Perception , Humans , Magnetoencephalography , Comprehension , Language
5.
J Neurosci ; 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34088796

ABSTRACT

The ability to establish associations between visual objects and speech sounds is essential for human reading. Understanding the neural adjustments required for acquisition of these arbitrary audiovisual associations can shed light on fundamental reading mechanisms and help reveal how literacy builds on pre-existing brain circuits. To address these questions, the present longitudinal and cross-sectional MEG studies characterize the temporal and spatial neural correlates of audiovisual syllable congruency in children (4-9 years old, 22 males and 20 females) learning to read. Both studies showed that during the first years of reading instruction children gradually set up audiovisual correspondences between letters and speech sounds, which can be detected within the first 400 ms of a bimodal presentation and recruit the superior portions of the left temporal cortex. These findings suggest that children progressively change the way they treat audiovisual syllables as a function of their reading experience. This reading-specific brain plasticity implies (partial) recruitment of pre-existing brain circuits for audiovisual analysis.

SIGNIFICANCE STATEMENT: Linking visual and auditory linguistic representations is the basis for the development of efficient reading, while dysfunctional audiovisual letter processing predicts future reading disorders. Our developmental MEG project included a longitudinal and a cross-sectional study; both studies showed that children's audiovisual brain circuits progressively change as a function of reading experience. They also revealed an exceptional degree of neuroplasticity in audiovisual neural networks, showing that as children develop literacy, the brain progressively adapts so as to better detect new correspondences between letters and speech sounds.

6.
Cereb Cortex ; 31(8): 3820-3831, 2021 07 05.
Article in English | MEDLINE | ID: mdl-33791775

ABSTRACT

Cortical tracking of linguistic structures in speech, such as phrases (<3 Hz, delta band) and syllables (3-8 Hz, theta band), is known to be crucial for speech comprehension. However, it has not been established whether this effect is related to language proficiency. Here, we investigate how auditory cortical activity in second language (L2) learners tracked L2 speech. Using magnetoencephalography, we recorded brain activity from participants listening to Spanish and Basque. Participants were Spanish native (L1) language speakers studying Basque (L2) at the same language center at three different levels: beginner (Grade 1), intermediate (Grade 2), and advanced (Grade 3). We found that 1) both delta and theta tracking of L2 speech in the auditory cortex were related to L2 learning proficiency and that 2) top-down modulations of activity in the left auditory regions during L2 speech listening (by the left inferior frontal and motor regions in the delta band, and by the left middle temporal regions in the theta band) were also related to L2 proficiency. Altogether, these results indicate that the ability to learn an L2 is related to successful cortical tracking of L2 speech and its modulation by neuronal oscillations in higher-order cortical regions.
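Delta- and theta-band tracking of this kind is often quantified as magnitude-squared coherence between the speech envelope and the recorded signal. A minimal sketch on synthetic data, with our own choices of sampling rate, window length, and band edges (the band edges follow the abstract):

```python
import numpy as np
from scipy.signal import coherence

fs = 200
rng = np.random.default_rng(0)
t = np.arange(60 * fs) / fs
envelope = np.sin(2 * np.pi * 4 * t)                   # syllable-rate rhythm
brain = 0.5 * envelope + rng.standard_normal(t.size)   # tracking + sensor noise

# Magnitude-squared coherence between "speech" envelope and "brain" signal.
f, coh = coherence(envelope, brain, fs=fs, nperseg=2 * fs)
theta_coh = coh[(f >= 3) & (f <= 8)].mean()
delta_coh = coh[(f > 0) & (f < 3)].mean()
```

Because the simulated tracking lives at 4 Hz, theta-band coherence exceeds the delta-band value; comparing such band averages across proficiency groups is the core of the analysis.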


Subject(s)
Cerebral Cortex/physiology , Language , Multilingualism , Speech/physiology , Adult , Auditory Cortex/physiology , Brain Mapping , Delta Rhythm , Female , Humans , Language Development , Learning , Magnetoencephalography , Male , Middle Aged , Psychomotor Performance/physiology , Theta Rhythm
7.
Cereb Cortex ; 31(9): 4092-4103, 2021 07 29.
Article in English | MEDLINE | ID: mdl-33825884

ABSTRACT

Cortical circuits rely on the temporal regularities of speech to optimize signal parsing for sound-to-meaning mapping. Bottom-up speech analysis is accelerated by top-down predictions about upcoming words. In everyday communication, however, listeners are regularly presented with challenging input: fluctuations of speech rate or semantic content. In this study, we asked how reducing the temporal regularity of speech affects its processing: parsing, phonological analysis, and the ability to generate context-based predictions. To ensure that spoken sentences were natural and approximated the semantic constraints of spontaneous speech, we built a neural network to select stimuli from large corpora. We analyzed brain activity recorded with magnetoencephalography during sentence listening using evoked responses, speech-to-brain synchronization, and representational similarity analysis. For normal speech, theta-band (6.5-8 Hz) speech-to-brain synchronization was increased and the left fronto-temporal areas generated stronger contextual predictions. The reverse was true for temporally irregular speech: weaker theta synchronization and reduced top-down effects. Interestingly, delta-band (0.5 Hz) speech tracking was greater when contextual/semantic predictions were lower or when speech was temporally jittered. We conclude that speech temporal regularity is relevant for (theta) syllabic tracking and robust semantic predictions, while the joint support of temporal and contextual predictability reduces word- and phrase-level cortical tracking (delta).


Subject(s)
Cerebral Cortex/physiology , Language , Speech Perception/physiology , Adaptation, Psychological/physiology , Adolescent , Adult , Anticipation, Psychological , Electroencephalography Phase Synchronization , Evoked Potentials , Female , Humans , Magnetoencephalography , Male , Middle Aged , Nerve Net/physiology , Speech/physiology , Theta Rhythm/physiology , Young Adult
8.
J Neurosci ; 40(5): 1053-1065, 2020 01 29.
Article in English | MEDLINE | ID: mdl-31889007

ABSTRACT

Lip-reading is crucial for understanding speech in challenging conditions, but how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ∼70 ms in the left hemisphere, compared with ∼20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1-8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing.

SIGNIFICANCE STATEMENT: Lip-reading consists of decoding speech based on visual information derived from observation of a speaker's articulatory facial gestures. Lip-reading is known to improve auditory speech understanding, especially when speech is degraded. Interestingly, lip-reading in silence still activates the auditory cortices, even when participants do not know what the absent auditory signal should be. However, it was uncertain what such activation reflected. Here, using magnetoencephalographic recordings, we demonstrate that it reflects fast synthesis of the auditory stimulus rather than mental imagery of unrelated (speech or non-speech) sounds. Our results also shed light on the oscillatory dynamics underlying lip-reading.
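A speech-to-brain delay like the ∼70 ms reported here is the kind of quantity a lagged cross-correlation can estimate (the study itself used coherence-based methods; this toy version, with a white-noise stand-in envelope and an injected 70 ms delay, is only meant to show the logic).

```python
import numpy as np

def estimate_delay(speech_env, brain, fs):
    """Lag (in seconds) at which brain activity best matches the speech
    envelope; positive values mean the brain response lags the speech."""
    s = speech_env - speech_env.mean()
    b = brain - brain.mean()
    xcorr = np.correlate(b, s, mode="full")
    lag_samples = np.argmax(xcorr) - (s.size - 1)
    return lag_samples / fs

fs = 1000
rng = np.random.default_rng(3)
env = rng.standard_normal(5 * fs)                      # stand-in envelope
brain = np.concatenate([np.zeros(70), env[:-70]])      # delayed copy (70 ms)
brain = brain + 0.3 * rng.standard_normal(brain.size)  # plus sensor noise
delay = estimate_delay(env, brain, fs)
```

The estimator recovers the injected 70 ms lag despite the added noise.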


Subject(s)
Auditory Cortex/physiology , Lipreading , Speech Perception/physiology , Acoustic Stimulation , Female , Humans , Magnetoencephalography , Male , Pattern Recognition, Visual/physiology , Sound Spectrography , Young Adult
9.
Neuroimage ; 239: 118314, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34175428

ABSTRACT

Contextual information triggers predictions about the content ("what") of environmental stimuli to update an internal generative model of the surrounding world. However, visual information changes dynamically across time, and temporal predictability ("when") may influence the impact of internal predictions on visual processing. In this magnetoencephalography (MEG) study, we investigated how the processing of feature-specific information ("what") is affected by temporal predictability ("when"). Participants (N = 16) were presented with four consecutive Gabor patches (entrainers) with constant spatial frequency but variable orientation and temporal onset. A fifth, target Gabor was presented after a longer delay, with a higher or lower spatial frequency that participants had to judge. We compared the neural responses to entrainers whose orientation could or could not be predicted along the entrainer sequence, and whose inter-entrainer timing was either constant (predictable) or variable (unpredictable). We observed suppression of evoked neural responses in the visual cortex for predictable stimuli. Interestingly, we found that temporal uncertainty increased expectation suppression. This suggests that in temporally uncertain scenarios the neurocognitive system invests fewer resources in integrating bottom-up information. Multivariate pattern analysis showed that predictable visual features could be decoded from neural responses. Temporal uncertainty did not affect decoding accuracy for early visual responses, with the feature specificity of early visual neural activity preserved across conditions. However, decoding accuracy was less sustained over time for temporally jittered than for isochronous predictable visual stimuli. These findings converge to suggest that the cognitive system processes the visual features of temporally predictable stimuli in greater detail, while the processing of temporally uncertain stimuli may rely more heavily on abstract internal expectations.


Subject(s)
Anticipation, Psychological/physiology , Magnetoencephalography , Photic Stimulation , Time , Uncertainty , Visual Cortex/physiology , Visual Perception/physiology , Adult , Evoked Potentials/physiology , Female , Humans , Male , Multivariate Analysis , Reaction Time , Young Adult
10.
Hum Brain Mapp ; 42(6): 1777-1793, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33368838

ABSTRACT

Recent evidence suggests that damage to the language network triggers its functional reorganization. Yet, the spectro-temporal fingerprints of this plastic rearrangement and their relation to anatomical changes are less well understood. Here, we combined magnetoencephalographic recordings with a proxy measure of white matter to investigate oscillatory activity supporting language plasticity and its relation to structural reshaping. First, cortical dynamics were acquired in a group of healthy controls during object and action naming. Results showed segregated beta (13-28 Hz) power decreases in left ventral and dorsal pathways, in a time window associated with lexico-semantic processing (~250-500 ms). Six patients with left tumors invading either ventral or dorsal regions performed the same naming task before and 3 months after surgery for tumor resection. When longitudinally comparing patients' responses, we found beta compensation mimicking the category-based segregation shown by controls, with ventral and dorsal damage leading to selective compensation for object and action naming, respectively. At the structural level, all patients showed preoperative changes in white matter tracts possibly linked to plasticity triggered by tumor growth. Furthermore, in some patients, structural changes were also evident after surgery and showed associations with longitudinal changes in beta power lateralization toward the contralesional hemisphere. Overall, our findings support the existence of anatomo-functional dependencies in language reorganization and highlight the potential role of oscillatory markers in tracking longitudinal plasticity in brain tumor patients. By doing so, they provide valuable information for mapping preoperative and postoperative neural reshaping and planning surgical strategies to preserve language function and patients' quality of life.
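A beta-power lateralization measure of the kind tracked here can be sketched as a simple (R - L)/(R + L) index over Welch power spectra. The band limits follow the abstract (13-28 Hz); the index definition, signals, and parameters are our own illustrative assumptions, not the study's method.

```python
import numpy as np
from scipy.signal import welch

def beta_power(x, fs, band=(13.0, 28.0)):
    """Total spectral power within the beta band, via Welch's PSD."""
    f, psd = welch(x, fs=fs, nperseg=fs)
    return psd[(f >= band[0]) & (f <= band[1])].sum()

def lateralization_index(left, right, fs):
    """(R - L) / (R + L): positive values indicate rightward dominance,
    i.e., contralesional dominance for a left-hemisphere lesion."""
    L, R = beta_power(left, fs), beta_power(right, fs)
    return (R - L) / (R + L)

fs = 250
rng = np.random.default_rng(4)
t = np.arange(20 * fs) / fs
# Simulated hemispheric signals: stronger 20 Hz (beta) rhythm on the right.
left = 0.5 * np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)
right = 1.5 * np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)
li = lateralization_index(left, right, fs)
```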


Subject(s)
Beta Rhythm/physiology , Brain Neoplasms/pathology , Brain Neoplasms/physiopathology , Cerebral Cortex/pathology , Cerebral Cortex/physiopathology , Neuronal Plasticity/physiology , Psycholinguistics , White Matter/pathology , Adult , Female , Humans , Longitudinal Studies , Magnetoencephalography , Male , Middle Aged , Young Adult
11.
Neuroimage ; 216: 116788, 2020 08 01.
Article in English | MEDLINE | ID: mdl-32348908

ABSTRACT

How the human brain uses self-generated auditory information during speech production remains unsettled. Current theories of language production consider a feedback monitoring system that monitors the auditory consequences of speech output and an internal monitoring system, which makes predictions about the auditory consequences of speech before its production. To gain novel insights into the underlying neural processes, we investigated the coupling between neuromagnetic activity and the temporal envelope of the heard speech sounds (i.e., cortical tracking of speech) in a group of adults who 1) read a text aloud, 2) listened to a recording of their own speech (i.e., playback), and 3) listened to another speech recording. Reading aloud was here used as a particular form of speech production that shares various processes with natural speech. During reading aloud, the reader's brain tracked the slow temporal fluctuations of the speech output. Specifically, auditory cortices tracked phrases (<1 Hz) but to a lesser extent than during the two speech listening conditions. Also, the tracking of words (2-4 Hz) and syllables (4-8 Hz) occurred at parietal opercula during reading aloud and at auditory cortices during listening. Directionality analyses were then used to gain insights into the monitoring systems involved in the processing of self-generated auditory information. Analyses revealed that the cortical tracking of speech at <1 Hz, 2-4 Hz and 4-8 Hz is dominated by speech-to-brain directional coupling during both reading aloud and listening, i.e., the cortical tracking of speech during reading aloud mainly entails auditory feedback processing. Nevertheless, brain-to-speech directional coupling at 4-8 Hz was enhanced during reading aloud compared with listening, likely reflecting the establishment of predictions about the auditory consequences of speech before production. These data bring novel insights into how auditory verbal information is tracked by the human brain during perception and self-generation of connected speech.


Subject(s)
Brain Mapping/methods , Magnetoencephalography/methods , Neocortex/physiology , Psycholinguistics , Reading , Speech Perception/physiology , Speech/physiology , Adult , Auditory Cortex/physiology , Female , Humans , Male , Parietal Lobe/physiology , Young Adult
12.
Eur J Neurosci ; 51(9): 2008-2022, 2020 05.
Article in English | MEDLINE | ID: mdl-31872926

ABSTRACT

A continuous stream of syllables is segmented into discrete constituents based on the transitional probabilities (TPs) between adjacent syllables by means of statistical learning. However, we still do not know whether people attend to high TPs between frequently co-occurring syllables and cluster them together as parts of the discrete constituents or attend to low TPs aligned with the edges between the constituents and extract them as whole units. Earlier studies on TP-based segmentation also have not distinguished between the segmentation process (how people segment continuous speech) and the learning product (what is learnt by means of statistical learning mechanisms). In the current study, we explored the learning outcome separately from the learning process, focusing on three possible learning products: holistic constituents that are retrieved from memory during the recognition test, clusters of frequently co-occurring syllables, or a set of statistical regularities which can be used to reconstruct legitimate candidates for discrete constituents during the recognition test. Our data suggest that people employ boundary-finding mechanisms during online segmentation by attending to low inter-syllabic TPs during familiarization and also identify potential candidates for discrete constituents based on their statistical congruency with rules extracted during the learning process. Memory representations of recurrent constituents embedded in the continuous speech stream during familiarization facilitate subsequent recognition of these discrete constituents.
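The transitional probabilities at the heart of this paradigm are simple conditional frequencies over adjacent syllables. A minimal sketch (the syllable inventory and "words" are invented for illustration):

```python
import random
from collections import Counter

def transitional_probs(syllables):
    """TP(B|A) = count(A followed by B) / count(A in non-final position)."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Two trisyllabic "words" concatenated in random order: within-word TPs
# are 1.0, while TPs spanning a word boundary hover around 0.5.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"]]
stream = [syl for w in random.choices(words, k=200) for syl in w]
tp = transitional_probs(stream)
```

The dip in TP at word edges is exactly the low-probability cue that, on the abstract's account, boundary-finding mechanisms exploit during online segmentation.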


Subject(s)
Education, Distance , Speech Perception , Humans , Learning , Recognition, Psychology , Speech
13.
Dev Sci ; 23(6): e12947, 2020 11.
Article in English | MEDLINE | ID: mdl-32043677

ABSTRACT

Recent neurophysiological theories propose that the cerebral hemispheres collaborate to resolve the complex temporal nature of speech, such that left-hemisphere (or bilateral) gamma-band oscillatory activity would specialize in coding information at fast rates (phonemic information), whereas right-hemisphere delta- and theta-band activity would code for speech's slow temporal components (syllabic and prosodic information). Despite the relevance that neural entrainment to speech might have for reading acquisition and for core speech perception operations such as the perception of intelligible speech, no study had yet explored its development in young children. In the current study, speech-brain entrainment was recorded via EEG in a cohort of children at three time points, between the ages of 4-5 and 6-7 years. Our results showed that speech-brain entrainment occurred only at delta frequencies (0.5 Hz) at all testing times. The fact that, from the longitudinal perspective, coherence increased in bilateral temporal electrodes suggests that, contrary to previous hypotheses claiming an innate right-hemispheric bias for processing prosodic information, at 7 years of age the low-frequency components of speech are processed bilaterally. Lastly, delta speech-brain entrainment in the right hemisphere was related to an indirect measure of intelligibility, providing preliminary evidence that the entrainment phenomenon might support core linguistic operations from early childhood.


Subject(s)
Speech Perception , Speech , Brain , Child , Child, Preschool , Humans , Reading
14.
Neuroimage ; 169: 200-211, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29247806

ABSTRACT

In the field of neuroimaging, researchers often resort to contrasting parametric maps to identify differences between conditions or populations. Unfortunately, contrast patterns mix effects related to amplitude and location differences and tend to peak away from sources of genuine brain activity, to an extent that scales with the smoothness of the maps. Here, we illustrate this mislocation problem on source maps reconstructed from magnetoencephalographic recordings and propose a novel, dedicated location-comparison method. In realistic simulations, contrast mislocation was on average ∼10 mm when genuine sources were placed at the same location, and was still above 5 mm when sources were 20 mm apart. The dedicated location-comparison method achieved a sensitivity of ∼90% when inter-source distance was 12 mm. Its benefit is also illustrated on real brain-speech entrainment data. In conclusion, contrasts of parametric maps provide precarious information about source location. To specifically address the question of location difference, one should turn to dedicated methods such as the one proposed here.
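The mislocation effect can be reproduced in one dimension: two smooth Gaussian "source maps" 20 mm apart produce a difference map whose peak sits several millimetres away from either genuine source. The 15 mm smoothness and the 1D grid are our own illustrative choices, not the paper's simulation setup.

```python
import numpy as np

x = np.arange(-100, 101)          # 1D "source space" positions, in mm

def smooth_map(center, sigma=15.0, amp=1.0):
    """A smooth activity map: one Gaussian source blurred by reconstruction."""
    return amp * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

cond_a = smooth_map(0)            # genuine source of condition A at 0 mm
cond_b = smooth_map(20)           # genuine source of condition B at 20 mm
contrast = cond_a - cond_b        # A-minus-B contrast map
peak_mm = x[np.argmax(contrast)]  # where the contrast actually peaks
mislocation_mm = abs(peak_mm - 0) # distance from the genuine A source
```

The contrast peak lands on the far side of the A source, several millimetres from it, consistent with the above-5 mm mislocation the abstract reports for 20 mm source separations.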


Subject(s)
Brain Mapping/standards , Brain/physiology , Data Interpretation, Statistical , Magnetoencephalography/standards , Signal Processing, Computer-Assisted , Adult , Brain/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Sensitivity and Specificity , Young Adult
15.
Neuroimage ; 175: 259-271, 2018 07 15.
Article in English | MEDLINE | ID: mdl-29605578

ABSTRACT

The current fMRI study was designed to investigate whether the processing of different gender-related cues embedded in nouns affects the computation of agreement dependencies and, if so, where this possible interaction is mapped in the brain. We used the Spanish gender agreement system, which makes it possible to manipulate two different factors: the agreement between different sentence constituents (i.e., by contrasting congruent versus incongruent determiner-noun pairs) and the formal (i.e., orthographical/morphological) and/or lexical information embedded in the noun, i.e., by contrasting transparent nouns (e.g., libro (masc.) [book]; luna (fem.) [moon]) and opaque nouns (e.g., lápiz (masc.) [pencil]; vejez (fem.) [old age]). Crucially, these data illustrated, for the first time, how the network underlying agreement is sensitive to different gender-to-ending cues: different sources of gender information associated with nouns affect the neural circuits involved in the computation of local agreement dependencies. When the gender marking is informative (as in the case of transparent nouns), both formal and lexical information is used to establish grammatical relations. In contrast, when no formal cues are available (as in the case of opaque nouns), gender information is retrieved from the lexicon. We demonstrated the involvement of the posterior MTG/STG, pars triangularis within the IFG, and parietal regions during gender agreement computation. Critically, in order to integrate the different available information sources, the dynamics of this fronto-temporal loop change and additional regions, such as the hippocampus, the angular and the supramarginal gyri, are recruited. These results underpin previous neuroanatomical models proposed in the context of both gender processing and sentence comprehension. More importantly, they provide valuable information regarding how and where the brain's language system dynamically integrates all the available form-based and lexical cues during comprehension.


Subject(s)
Brain Mapping/methods , Comprehension/physiology , Language , Prefrontal Cortex/physiology , Psycholinguistics , Temporal Lobe/physiology , Adult , Broca Area/diagnostic imaging , Broca Area/physiology , Humans , Magnetic Resonance Imaging/methods , Prefrontal Cortex/diagnostic imaging , Temporal Lobe/diagnostic imaging
16.
Eur J Neurosci ; 48(7): 2642-2650, 2018 10.
Article in English | MEDLINE | ID: mdl-29283465

ABSTRACT

Cortical oscillations phase-align to the quasi-rhythmic structure of the speech envelope. This speech-brain entrainment has been reported in two frequency bands, that is, both in the theta band (4-8 Hz) and in the delta band (<4 Hz). However, it is not clear whether these two phenomena reflect passive synchronization of the auditory cortex to the acoustics of the speech input, or higher-level processes involved in actively parsing speech information. Here, we report two magnetoencephalography experiments in which we contrasted cortical entrainment to natural speech with qualitatively different control conditions (Experiment 1: amplitude-modulated white noise; Experiment 2: spectrally rotated speech). We computed the coherence between the oscillatory brain activity and the envelope of the auditory stimuli. At the sensor level, we observed increased coherence in the delta and theta bands for all conditions in bilateral brain regions. However, only in the delta band (but not theta) was speech entrainment stronger than for either of the control auditory inputs. Source reconstruction in the delta band showed that speech, compared to the control conditions, elicited larger coherence in the right superior temporal and left inferior frontal regions. In the theta band, no differential effects were observed for speech compared to the control conditions. These results suggest that whereas theta entrainment mainly reflects perceptual processing of the auditory signal, delta entrainment involves additional higher-order computations in the service of language processing.


Subject(s)
Auditory Cortex/physiology , Frontal Lobe/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Brain/physiology , Female , Humans , Magnetoencephalography/methods , Male , Middle Aged , Young Adult
17.
Hum Brain Mapp ; 37(8): 2767-83, 2016 08.
Article in English | MEDLINE | ID: mdl-27061643

ABSTRACT

Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or from damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions, which hinders higher-order speech processing steps. The present findings thus strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization may have severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could serve as a diagnostic tool for the early detection of children at risk of dyslexia.


Subject(s)
Brain/physiopathology , Dyslexia/physiopathology , Speech Perception/physiology , Adult , Brain Mapping , Female , Humans , Magnetoencephalography , Male , Young Adult
18.
Neuroimage ; 118: 79-89, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26067344

ABSTRACT

Literacy and numeracy are two fundamental cognitive skills that require mastering culturally-invented symbolic systems for representing spoken language and quantities. How numbers and words are processed in the human brain, and their temporal dynamics, remain unclear. Using MEG (magnetoencephalography), we find brain activation differences for literacy and numeracy from early stages of processing in the temporal-occipital and temporal-parietal regions. Native speakers of Spanish were exposed to visually presented words, pseudowords, strings of numbers, strings of letters and strings of symbols while engaged in a go/no-go task. Results showed more evoked neuromagnetic activity for words and pseudowords compared to symbols at ~120-130 ms in the left occipito-temporal and temporal-parietal cortices (angular gyrus and intra-parietal sulcus) and at ~200 ms in the left inferior frontal gyrus and left temporal areas. In contrast, numbers showed more activation than symbols at similar time windows in homologous regions of the right hemisphere: occipito-temporal and superior and middle temporal cortices at ~100-130 ms. A direct comparison between the responses to words and numbers confirmed this distinct lateralization for the two stimulus types. These results suggest that literacy and numeracy follow distinct processing streams through the left and right hemispheres, respectively, and that the temporal-parietal and occipito-temporal regions may interact during the processing of alphanumeric stimuli.


Subject(s)
Brain/physiology , Mathematical Concepts , Reading , Adult , Female , Humans , Magnetoencephalography , Male , Occipital Lobe/physiology , Parietal Lobe/physiology , Temporal Lobe/physiology , Young Adult
19.
Hum Brain Mapp ; 36(12): 4986-5002, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26356682

ABSTRACT

Whether phonological deficits in developmental dyslexia are associated with impaired neural sampling of auditory information at either syllabic or phonemic rates is still under debate. In addition, whereas neuroanatomical alterations in auditory regions have been documented in dyslexic readers, whether and how these structural anomalies are linked to auditory sampling and reading deficits remains poorly understood. In this study, we measured auditory neural synchronization at different frequencies corresponding to relevant phonological spectral components of speech in children and adults with and without dyslexia, using magnetoencephalography. Furthermore, structural MRI was used to estimate cortical thickness of the auditory cortex of participants. Dyslexics showed atypical brain synchronization at both syllabic (slow) and phonemic (fast) rates. Interestingly, while a left hemispheric asymmetry in cortical thickness was functionally related to a stronger left hemispheric lateralization of neural synchronization to stimuli presented at the phonemic rate in skilled readers, the same anatomical index in dyslexics was related to a stronger right hemispheric dominance for neural synchronization to syllabic-rate auditory stimuli. These data suggest that the acoustic sampling deficit in developmental dyslexia might be linked to an atypical specialization of the auditory cortex to both low and high frequency amplitude modulations.


Subject(s)
Auditory Cortex/growth & development , Auditory Cortex/pathology , Dyslexia/pathology , Dyslexia/physiopathology , Acoustic Stimulation , Adolescent , Adult , Age Factors , Analysis of Variance , Child , Female , Functional Laterality , Humans , Intelligence , Magnetic Resonance Imaging , Magnetoencephalography , Male , Memory, Short-Term/physiology , Middle Aged , Phonetics , Psychoacoustics , Reading , Statistics as Topic , Young Adult
20.
Neuroimage ; 88: 188-201, 2014 03.
Article in English | MEDLINE | ID: mdl-24291502

ABSTRACT

Language comprehension is incremental, involving the integration of information from different words together with the need to resolve conflicting cues when unexpected information occurs. The present fMRI design seeks to segregate the neuro-anatomical substrates of these two processes by comparing well-formed and ill-formed sentences during subject-verb agreement computation. Our experiment takes advantage of a particular Spanish feature, the Unagreement phenomenon: a subject-verb agreement mismatch that results in a grammatical sentence ("Los pintores trajimos…" [The painters3.pl (we)brought1.pl…]). Comprehension of this construction implies a shift in the semantic interpretation of the subject from 3rd-person to 1st-person, enabling the phrase "The painters" to be re-interpreted as "We painters". Our results show, firstly, a functional dissociation between well-formed and ill-formed sentences with Person Mismatches: while Person Mismatches recruited a fronto-parietal network associated with monitoring operations, grammatical sentences (both Unagreement and Default Agreement) recruited a fronto-temporal network related to syntactic-semantic integration. Secondly, there was activation in the posterior part of the left middle frontal gyrus for both Person Mismatches and Unagreement, reflecting the evaluation of the morpho-syntactic match between agreeing constituents. Thirdly, the left angular gyrus showed increased activation only for Unagreement, highlighting its crucial role in the comprehension of semantically complex but non-anomalous constructions. These findings point to a central role of the classic fronto-temporal network, plus two additional nodes: the posterior part of the left middle frontal gyrus and the left angular gyrus, opening new windows onto the study of agreement computation and language comprehension.


Subject(s)
Brain Mapping/methods , Comprehension/physiology , Frontal Lobe/physiology , Language , Parietal Lobe/physiology , Adolescent , Adult , Female , Frontal Lobe/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Parietal Lobe/diagnostic imaging , Psycholinguistics , Young Adult