Results 1 - 6 of 6
1.
Proc Natl Acad Sci U S A ; 114(18): E3669-E3678, 2017 05 02.
Article in English | MEDLINE | ID: mdl-28416691

ABSTRACT

Although sentences unfold sequentially, one word at a time, most linguistic theories propose that their underlying syntactic structure involves a tree of nested phrases rather than a linear sequence of words. Whether and how the brain builds such structures, however, remains largely unknown. Here, we used human intracranial recordings and visual word-by-word presentation of sentences and word lists to investigate how left-hemispheric brain activity varies during the formation of phrase structures. In a broad set of language-related areas, comprising multiple superior temporal and inferior frontal sites, high-gamma power increased with each successive word in a sentence but decreased suddenly whenever words could be merged into a phrase. Regression analyses showed that each additional word or multiword phrase contributed a similar amount of additional brain activity, providing evidence for a merge operation that applies equally to linguistic objects of arbitrary complexity. More superficial models of language, based solely on sequential transition probability over lexical and syntactic categories, only captured activity in the posterior middle temporal gyrus. Formal model comparison indicated that the model of multiword phrase construction provided a better fit than probability-based models at most sites in superior temporal and inferior frontal cortices. Activity in those regions was consistent with a neural implementation of a bottom-up or left-corner parser of the incoming language stream. Our results provide initial intracranial evidence for the neurophysiological reality of the merge operation postulated by linguists and suggest that the brain compresses syntactically well-formed sequences of words into a hierarchy of nested phrases.
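The dynamic this abstract describes, activity climbing with each word and dropping when words merge into a phrase, can be sketched with a toy bottom-up parser that tracks how many unreduced items sit on its stack. The bracketed input format and function name here are illustrative assumptions, not the authors' actual model.

```python
def open_node_counts(tokens):
    """For a bracketed sentence, record the number of unreduced
    constituents on a bottom-up parser's stack at each word: the
    count rises word by word and drops whenever completed words
    can be merged into a phrase."""
    stack, counts = [], []
    for tok in tokens:
        if tok == "(":
            stack.append(tok)
        elif tok == ")":
            # merge: pop constituents back to the matching "(",
            # replace them with a single phrase node
            phrase = []
            while stack[-1] != "(":
                phrase.append(stack.pop())
            stack.pop()                      # discard the "(" marker
            stack.append(tuple(reversed(phrase)))
            if counts:                       # the drop registers at the word just read
                counts[-1] = sum(1 for x in stack if x != "(")
        else:                                # a word
            stack.append(tok)
            counts.append(sum(1 for x in stack if x != "("))
    return counts

# "( ( the dog ) ( ate ( the bone ) ) )" yields [1, 1, 2, 3, 1]:
# counts climb until "bone", then collapse as the phrases merge.
```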


Subject(s)
Brain/physiology; Frontal Lobe/physiology; Models, Neurological; Speech/physiology; Temporal Lobe/physiology; Female; Humans; Male
2.
Cogn Sci ; 47(7): e13312, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37417470

ABSTRACT

To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad-coverage tools from natural-language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context-free grammars (CFGs), yet such formalisms are not sufficiently expressive for human languages. Combinatory categorial grammars (CCGs) are sufficiently expressive, directly compositional models of grammar whose flexible constituency affords incremental interpretation. In this work, we evaluate whether a more expressive CCG provides a better model than a CFG for human neural signals collected with functional magnetic resonance imaging (fMRI) while participants listen to an audiobook story. We further test between variants of CCG that differ in how they handle optional adjuncts. These evaluations are carried out against a baseline that includes estimates of next-word predictability from a transformer neural network language model. Such a comparison reveals unique contributions of CCG structure-building predominantly in the left posterior temporal lobe: CCG-derived measures offer a superior fit to neural signals compared to those derived from a CFG. These effects are spatially distinct from bilateral superior temporal effects that are unique to predictability. Neural effects for structure-building are thus separable from predictability during naturalistic listening, and those effects are best characterized by a grammar whose expressive power is motivated on independent linguistic grounds.
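The flexible constituency that distinguishes CCG from a CFG comes from its combinators. A minimal sketch, with categories encoded as nested tuples (a representation chosen here for illustration, not taken from the study):

```python
# Categories: atomic strings ("S", "NP") or functor tuples
# ("/", result, argument) for forward, ("\\", result, argument) for backward.

def apply_forward(fn, arg):
    """Forward application: X/Y combined with Y yields X."""
    if isinstance(fn, tuple) and fn[0] == "/" and fn[2] == arg:
        return fn[1]
    return None

def compose_forward(f, g):
    """Forward composition: X/Y combined with Y/Z yields X/Z.
    This rule lets a subject and verb form a constituent before the
    object arrives, which is what makes CCG derivations incremental."""
    if (isinstance(f, tuple) and f[0] == "/" and
            isinstance(g, tuple) and g[0] == "/" and f[2] == g[1]):
        return ("/", f[1], g[2])
    return None

# A type-raised subject S/(S\NP) composes with a transitive verb
# (S\NP)/NP to give S/NP, so "the dog ate" is a constituent awaiting
# its object NP.
SUBJ = ("/", "S", ("\\", "S", "NP"))
TV = ("/", ("\\", "S", "NP"), "NP")
```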


Subject(s)
Brain; Language; Humans; Brain/diagnostic imaging; Linguistics; Brain Mapping; Auditory Perception; Comprehension
3.
Cogn Sci ; 45(1): e12927, 2021 01.
Article in English | MEDLINE | ID: mdl-33415796

ABSTRACT

Information-theoretic complexity metrics, such as Surprisal (Hale, 2001; Levy, 2008) and Entropy Reduction (Hale, 2003), are linking hypotheses that bridge theorized expectations about sentences and observed processing difficulty in comprehension. These expectations can be viewed as syntactic derivations constrained by a grammar. However, this expectation-based view is not limited to syntactic information alone. The present study combines structural and non-structural information in unified models of word-by-word sentence processing difficulty. Using probabilistic minimalist grammars (Stabler, 1997), we extend expectation-based models to include frequency information about noun phrase animacy. Entropy reductions derived from these grammars faithfully reflect the asymmetry between subject and object relatives (Staub, 2010; Staub, Dillon, & Clifton, 2017), as well as the effect of animacy on the measured difficulty profile (Lowder & Gordon, 2012; Traxler, Morris, & Seely, 2002). Visualizing probability distributions on the remaining alternatives at particular parser states allows us to explore new, linguistically plausible interpretations for the observed processing asymmetries, including the way that expectations about the relativized argument influence the processing of particular types of relative clauses (Wagers & Pendleton, 2016).
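Entropy Reduction, one of the two complexity metrics named above, is the nonnegative drop in uncertainty over remaining derivations caused by a word. A minimal sketch over explicit probability distributions (the dictionary representation is an assumption for illustration):

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a dict mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def entropy_reduction(before, after):
    """Hale's (2003) Entropy Reduction: how much a word shrinks the
    uncertainty over the remaining derivations, floored at zero."""
    return max(0.0, entropy(before) - entropy(after))

# If a parser state is split evenly between a subject-relative and an
# object-relative analysis, a disambiguating word that eliminates one
# alternative reduces entropy by exactly 1 bit.
```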


Subject(s)
Motivation; Comprehension; Humans; Language
4.
Neuropsychologia ; 146: 107479, 2020 09.
Article in English | MEDLINE | ID: mdl-32428530

ABSTRACT

Brain activity in numerous perisylvian brain regions is modulated by the expectedness of linguistic stimuli. We leverage recent advances in computational parsing models to test what representations guide the processes reflected by this activity. Recurrent Neural Network Grammars (RNNGs) are generative models of (tree, string) pairs that use neural networks to drive derivational choices. Parsing with them yields a variety of incremental complexity metrics that we evaluate against a publicly available fMRI dataset recorded while participants simply listen to an audiobook story. Surprisal, which captures a word's unexpectedness, correlates with a wide range of temporal and frontal regions when it is calculated based on word-sequence information using a top-performing LSTM neural network language model. The explicit encoding of hierarchy afforded by the RNNG additionally captures activity in left posterior temporal areas. A separate metric tracking the number of derivational steps taken between words correlates with activity in the left temporal lobe and inferior frontal gyrus. This pattern of results narrows down the kinds of linguistic representations at play during predictive processing across the brain's language network.
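Surprisal is simply the negative log probability of a word given its context, -log2 P(w_i | context). A toy sketch using add-one-smoothed bigram counts as a stand-in for the LSTM language model the study actually used (the corpus format and function name are assumptions for illustration):

```python
import math
from collections import Counter

def bigram_surprisals(corpus_sents, test_sent):
    """Per-word surprisal, -log2 P(w_i | w_{i-1}), estimated from
    bigram counts with add-one smoothing."""
    vocab = set()
    bigrams, unigrams = Counter(), Counter()
    for sent in corpus_sents:
        toks = ["<s>"] + sent                # sentence-start marker
        vocab.update(toks)
        unigrams.update(toks[:-1])           # contexts
        bigrams.update(zip(toks[:-1], toks[1:]))
    V = len(vocab)
    out, prev = [], "<s>"
    for w in test_sent:
        p = (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)
        out.append(-math.log2(p))            # high surprisal = unexpected word
        prev = w
    return out
```

In the study, the surprisal time series is convolved with a hemodynamic response and regressed against the fMRI signal; the sketch above only covers the metric itself.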


Subject(s)
Brain Mapping; Linguistics; Humans; Language; Magnetic Resonance Imaging; Neural Networks, Computer; Temporal Lobe
5.
PLoS One ; 14(1): e0207741, 2019.
Article in English | MEDLINE | ID: mdl-30650078

ABSTRACT

The grammar, or syntax, of human language is typically understood in terms of abstract hierarchical structures. However, theories of language processing that emphasize sequential information, not hierarchy, successfully model diverse phenomena. Recent work probing brain signals has shown mixed evidence for hierarchical information in some tasks. We ask whether sequential or hierarchical information guides the expectations that a human listener forms about a word's part-of-speech when simply listening to everyday language. We compare the predictions of three computational models against electroencephalography signals recorded from human participants who listen passively to an audiobook story. We find that predictions based on hierarchical structure correlate with the human brain response above and beyond predictions based only on sequential information. This establishes a link between hierarchical linguistic structure and neural signals that generalizes across the range of syntactic structures found in everyday language.


Subject(s)
Auditory Perception/physiology; Linguistics; Adolescent; Adult; Electroencephalography; Female; Head; Humans; Language; Male; Models, Theoretical; Regression Analysis; Young Adult
6.
Brain Lang ; 157-158: 81-94, 2016.
Article in English | MEDLINE | ID: mdl-27208858

ABSTRACT

Neurolinguistic accounts of sentence comprehension identify a network of relevant brain regions, but do not detail the information flowing through them. We investigate syntactic information. Does brain activity implicate a computation over hierarchical grammars or does it simply reflect linear order, as in a Markov chain? To address this question, we quantify the cognitive states implied by alternative parsing models. We compare processing-complexity predictions from these states against fMRI timecourses from regions that have been implicated in sentence comprehension. We find that hierarchical grammars independently predict timecourses from left anterior and posterior temporal lobe. Markov models are predictive in these regions and across a broader network that includes the inferior frontal gyrus. These results suggest that while linear effects are widespread across the language network, certain areas in the left temporal lobe deal with abstract, hierarchical syntactic representations.
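The linear-order alternative is a Markov chain over word or category sequences: the next item depends only on the current one. A minimal maximum-likelihood sketch over part-of-speech tag sequences (the input format is an assumption for illustration):

```python
from collections import Counter

def markov_chain(sequences):
    """Estimate transition probabilities P(next | current) from tag
    sequences: the linear-order baseline, with no hierarchy at all."""
    counts = {}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts.setdefault(a, Counter())[b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

# Two toy tag sequences: DET is always followed by N, while N is
# followed by V or another N half the time each.
MODEL = markov_chain([["DET", "N", "V"], ["DET", "N", "N"]])
```

Processing-complexity predictions then follow from these transition probabilities alone, with no reference to phrase structure, which is exactly the contrast the study tests.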


Subject(s)
Comprehension/physiology; Linguistics; Temporal Lobe/physiology; Adolescent; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Markov Chains; Models, Neurological; Prefrontal Cortex/physiology; Time Factors; Young Adult