Results 1 - 7 of 7
1.
J Neurosci ; 43(29): 5350-5364, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37217308

ABSTRACT

A sentence is more than the sum of its words: its meaning depends on how they combine with one another. The brain mechanisms underlying such semantic composition remain poorly understood. To shed light on the neural vector code underlying semantic composition, we introduce two hypotheses: (1) the intrinsic dimensionality of the space of neural representations should increase as a sentence unfolds, paralleling the growing complexity of its semantic representation; and (2) this progressive integration should be reflected in ramping and sentence-final signals. To test these predictions, we designed a dataset of closely matched normal and jabberwocky sentences (composed of meaningless pseudowords) and presented them to deep language models and to 11 human participants (5 men and 6 women) monitored with simultaneous MEG and intracranial EEG. In both deep language models and electrophysiological data, we found that representational dimensionality was higher for meaningful sentences than for jabberwocky. Furthermore, multivariate decoding of normal versus jabberwocky confirmed three dynamic patterns: (1) a phasic pattern following each word, peaking in temporal and parietal areas; (2) a ramping pattern, characteristic of bilateral inferior and middle frontal gyri; and (3) a sentence-final pattern in left superior frontal gyrus and right orbitofrontal cortex. These results provide a first glimpse into the neural geometry of semantic integration and constrain the search for a neural code of linguistic composition.

SIGNIFICANCE STATEMENT: Starting from general linguistic concepts, we make two sets of predictions about neural signals evoked by reading multiword sentences. First, the intrinsic dimensionality of the representation should grow with additional meaningful words. Second, the neural dynamics should exhibit signatures of encoding, maintaining, and resolving semantic composition. We first validated these hypotheses in deep neural language models, artificial neural networks trained on text that perform very well on many natural language processing tasks. Then, using a unique combination of MEG and intracranial electrodes, we recorded high-resolution brain data from human participants while they read a controlled set of sentences. Time-resolved dimensionality analysis showed increasing dimensionality with meaning, and multivariate decoding allowed us to isolate the three dynamical patterns we had hypothesized.
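Below is a minimal sketch, not taken from the paper, of how hypothesis (1) could be probed in practice: the intrinsic dimensionality of a set of representations (hidden states of a language model, or sensor/electrode patterns) is estimated with the participation ratio of the covariance eigenvalue spectrum. The participation-ratio estimator and the toy data are assumptions for illustration; the study's own estimator and preprocessing may differ.

```python
import numpy as np

def participation_ratio(states: np.ndarray) -> float:
    """Intrinsic-dimensionality estimate for states of shape (n_samples, n_features)."""
    centered = states - states.mean(axis=0, keepdims=True)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)
    return float(eigvals.sum() ** 2 / (eigvals ** 2).sum())

# Toy comparison: a richer (higher-rank) representation vs. a low-rank one.
rng = np.random.default_rng(0)
normal_like = rng.normal(size=(200, 50))                                 # full-rank toy states
jabberwocky_like = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))  # rank-3 toy states
print(participation_ratio(normal_like), participation_ratio(jabberwocky_like))
```

On these toy data the full-rank set yields a markedly higher participation ratio, mirroring the predicted contrast between meaningful and jabberwocky sentences.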


Subject(s)
Brain, Language, Male, Humans, Female, Brain/physiology, Semantics, Linguistics, Brain Mapping/methods, Reading, Magnetic Resonance Imaging/methods
2.
Neuroimage ; 226: 117499, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33186717

ABSTRACT

One of the central tasks of the human auditory system is to extract sound features from incoming acoustic signals that are most critical for speech perception. Specifically, phonological features and phonemes are the building blocks for more complex linguistic entities, such as syllables, words and sentences. Previous ECoG and EEG studies showed that various regions in the superior temporal gyrus (STG) exhibit selective responses to specific phonological features. However, electrical activity recorded by ECoG or EEG grids reflects average responses of large neuronal populations and is therefore limited in providing insights into activity patterns of single neurons. Here, we recorded spiking activity from 45 units in the STG from six neurosurgical patients who performed a listening task with phoneme stimuli. Fourteen units showed significant responsiveness to the stimuli. Using a Naïve-Bayes model, we find that single-cell responses to phonemes are governed by manner-of-articulation features and are organized according to sonority with two main clusters for sonorants and obstruents. We further find that 'neural similarity' (i.e. the similarity of evoked spiking activity between pairs of phonemes) is comparable to the 'perceptual similarity' (i.e. to what extent two phonemes are judged as sounding similar) based on perceptual confusion, assessed behaviorally in healthy subjects. Thus, phonemes that were perceptually similar also had similar neural responses. Taken together, our findings indicate that manner-of-articulation is the dominant organization dimension of phoneme representations at the single-cell level, suggesting a remarkable consistency across levels of analyses, from the single neuron level to that of large neuronal populations and behavior.
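As a rough illustration of the decoding approach, here is a minimal sketch with entirely synthetic spike counts and labels: a Gaussian naive Bayes classifier is cross-validated on single-unit responses labeled by manner of articulation (sonorant vs. obstruent). The data shapes, labels, and classifier settings are placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_units = 300, 14                                      # e.g., 14 responsive units
spike_counts = rng.poisson(lam=5.0, size=(n_trials, n_units)).astype(float)
manner = rng.choice(["sonorant", "obstruent"], size=n_trials)    # toy manner-of-articulation labels

# 5-fold cross-validated decoding of manner class from the population spike counts.
scores = cross_val_score(GaussianNB(), spike_counts, manner, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With real recordings, above-chance accuracy for manner-based labels (but not for other feature groupings) would be the kind of evidence supporting manner of articulation as the dominant organizing dimension.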


Subject(s)
Neurological Models, Neurons/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Bayes Theorem, Brain Mapping/methods, Electrocorticography/methods, Female, Humans, Male, Middle Aged, Phonetics, Young Adult
3.
Entropy (Basel) ; 22(4)2020 Apr 16.
Article in English | MEDLINE | ID: mdl-33286220

ABSTRACT

Sentence comprehension requires inferring, from a sequence of words, the structure of syntactic relationships that bind these words into a semantic representation. Our limited ability to build some specific syntactic structures, such as nested center-embedded clauses (e.g., "The dog that the cat that the mouse bit chased ran away"), suggests a striking capacity limitation of sentence processing, and thus offers a window to understand how the human brain processes sentences. Here, we review the main hypotheses proposed in psycholinguistics to explain such capacity limitation. We then introduce an alternative approach, derived from our recent work on artificial neural networks optimized for language modeling, and predict that capacity limitation derives from the emergence of sparse and feature-specific syntactic units. Unlike psycholinguistic theories, our neural network-based framework provides precise capacity-limit predictions without making any a priori assumptions about the form of the grammar or parser. Finally, we discuss how our framework may clarify the mechanistic underpinning of language processing and its limitations in the human brain.
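For concreteness, here is a small, purely illustrative sketch that generates nested center-embedded sentences of increasing depth, the sentence type whose processing difficulty is at issue above. The word lists are placeholders; at depth 2 the output reproduces the structure of the example in the abstract.

```python
# Toy generator of center-embedded sentences, for illustration only.
nouns = ["the dog", "the cat", "the mouse", "the rat"]
verbs = ["ran away", "chased", "bit", "saw"]

def center_embedded(depth: int) -> str:
    """Nest `depth` object-relative clauses inside the main clause."""
    subjects = nouns[: depth + 1]
    sentence = subjects[0]
    for s in subjects[1:]:
        sentence += " that " + s        # open one relative clause per extra subject
    for v in reversed(verbs[1 : depth + 1]):
        sentence += " " + v             # verbs resolve from the innermost clause outward
    return sentence + " " + verbs[0]    # main-clause verb comes last

for d in range(3):
    print(f"depth {d}: {center_embedded(d)}")
```

Stimuli of exactly this form are what make capacity limits visible: comprehension degrades sharply once more than one clause must be held open at the same time.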

4.
Neurobiol Lang (Camb) ; 4(4): 611-636, 2023.
Article in English | MEDLINE | ID: mdl-38144237

ABSTRACT

A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical (word processing) and supra-lexical levels (sentence and discourse processing). To what extent are these regions separated or intertwined? To address this question, we introduce a novel approach exploiting neural language models to generate high-dimensional feature sets that separately encode semantic and syntactic information. More precisely, we train a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus from which we selectively removed either syntactic or semantic information. We then assess to what extent the features derived from these information-restricted models are still able to predict the fMRI time courses of humans listening to naturalistic text. Furthermore, to determine the windows of integration of brain regions involved in supra-lexical processing, we manipulate the size of contextual information provided to GPT-2. The analyses show that, while most brain regions involved in language comprehension are sensitive to both syntactic and semantic features, the relative magnitudes of these effects vary across these regions. Moreover, regions that are best fitted by semantic or syntactic features are more spatially dissociated in the left hemisphere than in the right one, and the right hemisphere shows sensitivity to longer contexts than the left. The novelty of our approach lies in the ability to control for the information encoded in the models' embeddings by manipulating the training set. These "information-restricted" models complement previous studies that used language models to probe the neural bases of language, and shed new light on its spatial organization.
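A minimal sketch of the kind of encoding-model analysis described above, run on synthetic data: ridge regression maps feature vectors (standing in for GloVe or GPT-2 embeddings) onto voxel time courses, and held-out prediction accuracy is scored per voxel. Real analyses would also align features to the fMRI sampling grid and convolve them with a hemodynamic response, both omitted here; the shapes and regularization grid are assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_trs, n_features, n_voxels = 500, 64, 10
features = rng.normal(size=(n_trs, n_features))        # stand-in for language-model embeddings
true_weights = rng.normal(size=(n_features, n_voxels))
bold = features @ true_weights + rng.normal(scale=5.0, size=(n_trs, n_voxels))  # synthetic BOLD

# Chronological split (no shuffling) to mimic held-out runs.
X_train, X_test, y_train, y_test = train_test_split(features, bold, test_size=0.2, shuffle=False)
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
pred = model.predict(X_test)

# Per-voxel Pearson correlation between predicted and held-out signal.
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(np.round(r, 2))
```

Comparing these per-voxel scores across feature sets derived from syntax-restricted versus semantics-restricted models is the logic by which the two kinds of information are dissociated.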

5.
Trends Cogn Sci ; 26(9): 751-766, 2022 09.
Article in English | MEDLINE | ID: mdl-35933289

ABSTRACT

Natural language is often seen as the single factor that explains the cognitive singularity of the human species. Instead, we propose that humans possess multiple internal languages of thought, akin to computer languages, which encode and compress structures in various domains (mathematics, music, shape…). These languages rely on cortical circuits distinct from classical language areas. Each is characterized by: (i) the discretization of a domain using a small set of symbols, and (ii) their recursive composition into mental programs that encode nested repetitions with variations. In various tasks of elementary shape or sequence perception, minimum description length in the proposed languages captures human behavior and brain activity, whereas non-human primate data are captured by simpler nonsymbolic models. Our research argues in favor of discrete symbolic models of human thought.
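To make the minimum-description-length idea concrete, here is a toy sketch under loose assumptions: a sequence built from nested repetitions can be expressed as a short symbolic "program", whereas an irregular sequence must be spelled out item by item, so its description is longer. The symbol-counting scheme is illustrative only, not the formal languages proposed in the article.

```python
# Toy description-length measure over nested (operator, *arguments) expressions.
def description_length(expr) -> int:
    """Count the symbols needed to write out a nested expression."""
    if isinstance(expr, tuple):
        return 1 + sum(description_length(arg) for arg in expr[1:])
    return 1

# "ABABABAB" as repeat(4, seq(A, B)) vs. a literal listing of its 8 symbols.
compressed = ("repeat", 4, ("seq", "A", "B"))
literal = ("seq", *"ABABABAB")
print(description_length(compressed), description_length(literal))  # 5 vs. 9
```

The prediction tested behaviorally is that sequences admitting shorter programs in such a language are perceived, remembered, and reproduced more easily.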


Subject(s)
Language, Perception, Humans, Mathematics
6.
Cognition ; 213: 104699, 2021 08.
Article in English | MEDLINE | ID: mdl-33941375

ABSTRACT

Recursive processing in sentence comprehension is considered a hallmark of human linguistic abilities. However, its underlying neural mechanisms remain largely unknown. We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing, namely the storing of grammatical number and gender information in working memory and its use in long-distance agreement (e.g., capturing the correct number agreement between subject and verb when they are separated by other phrases). Although the network, a recurrent architecture with Long Short-Term Memory units, was solely trained to predict the next word in a large corpus, analysis showed the emergence of a very sparse set of specialized units that successfully handled local and long-distance syntactic agreement for grammatical number. However, the simulations also showed that this mechanism does not support full recursion and fails with some long-range embedded dependencies. We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns, with or without embedding. Human and model error patterns were remarkably similar, showing that the model echoes various effects observed in human data. However, a key difference was that, with embedded long-range dependencies, humans remained above chance level, while the model's systematic errors brought it below chance. Overall, our study shows that exploring the ways in which modern artificial neural networks process sentences leads to precise and testable hypotheses about human linguistic performance.
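The agreement probe can be sketched as follows, with a hypothetical word-level LSTM language model and a tiny placeholder vocabulary (the paper's network was trained on a large corpus; the untrained model here only illustrates the evaluation logic): given a sentence prefix, the model's next-word distribution is compared at the grammatical versus ungrammatical verb form.

```python
import torch
import torch.nn as nn

# Tiny placeholder vocabulary for illustration only.
vocab = {w: i for i, w in enumerate(
    ["<unk>", "the", "boy", "boys", "near", "car", "cars", "greets", "greet"])}

class WordLSTM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        hidden, _ = self.lstm(self.embed(ids))
        return self.out(hidden)          # next-word logits at every position

def prefers_correct_form(model, prefix: str, correct: str, incorrect: str) -> bool:
    """True if the model assigns more probability to the grammatical verb form."""
    ids = torch.tensor([[vocab.get(w, 0) for w in prefix.split()]])
    with torch.no_grad():
        logits = model(ids)[0, -1]       # distribution over the word after the prefix
    return bool(logits[vocab[correct]] > logits[vocab[incorrect]])

model = WordLSTM(len(vocab))             # untrained stand-in; a trained model is assumed in practice
# Long-distance agreement: the attractor "cars" intervenes between subject and verb.
print(prefers_correct_form(model, "the boy near the cars", "greets", "greet"))
```

Scoring many such minimal pairs, with and without embedded clauses and with singular/plural attractors, yields the error profiles that were compared between the network and human participants.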


Subject(s)
Comprehension, Language, Humans, Linguistics, Short-Term Memory, Neural Networks (Computer)
7.
Nat Hum Behav ; 5(3): 389-398, 2021 03.
Article in English | MEDLINE | ID: mdl-33257877

ABSTRACT

Reading is a rapid, distributed process that engages multiple components of the ventral visual stream. To understand the neural constituents and their interactions that allow us to identify written words, we performed direct intracranial recordings in a large cohort of humans. This allowed us to isolate the spatiotemporal dynamics of visual word recognition across the entire left ventral occipitotemporal cortex. We found that mid-fusiform cortex is the first brain region sensitive to lexicality, preceding the traditional visual word form area. The magnitude and duration of its activation are driven by the statistics of natural language. Information regarding lexicality and word frequency propagates posteriorly from this region to visual word form regions and to earlier visual cortex, which, while active earlier, show sensitivity to words later. Moreover, direct electrical stimulation of this region results in reading arrest, further illustrating its crucial role in reading. This unique sensitivity of mid-fusiform cortex to sub-lexical and lexical characteristics points to its central role as the orthographic lexicon: the long-term memory representations of visual word forms.


Subject(s)
Long-Term Memory/physiology, Occipital Lobe/physiology, Visual Pattern Recognition/physiology, Psycholinguistics, Reading, Temporal Lobe/physiology, Visual Pathways/physiology, Adult, Electric Stimulation, Electrocorticography, Humans, Time Factors, Visual Cortex/physiology, Young Adult