Results 1 - 6 of 6
1.
Open Mind (Camb) ; 6: 147-168, 2022.
Article in English | MEDLINE | ID: mdl-36439069

ABSTRACT

Dependency length minimization is widely regarded as a cross-linguistic universal reflecting syntactic complexity in natural languages. A typical way to operationalize dependency length in corpus-based studies has been to count the number of words between syntactically related words. However, such a formulation ignores the syntactic nature of the linguistic material that intervenes in a dependency. In this work, we investigate whether the number of syntactic heads (rather than the number of words) that intervene in a dependency better captures syntactic complexity across languages. We demonstrate that the dependency length minimization constraint stated in terms of the number of words could arise as a consequence of constraints on the intervening heads and on tree properties such as node arity. The current study highlights the importance of syntactic heads as central regions of structure building during processing. The results show that when syntactically related words are nonadjacent, increased structure building in the intervening region is avoided.
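The two operationalizations contrasted in this abstract can be sketched in a few lines of Python. This is an illustrative toy, not the paper's code; the tree encoding and function names are my own, following the CoNLL-style convention of (word index, head index) pairs with head 0 marking the root.

```python
# Toy sketch of the two dependency-length measures contrasted above.
# A sentence is encoded as (word_index, head_index) pairs, 1-based,
# with head 0 marking the root.

def intervening_words(dep, head):
    """Count words strictly between a dependent and its head."""
    lo, hi = sorted((dep, head))
    return hi - lo - 1

def intervening_heads(dep, head, tree):
    """Count intervening words that are themselves syntactic heads,
    i.e., that head at least one dependent of their own."""
    lo, hi = sorted((dep, head))
    heads = {h for _, h in tree if h != 0}
    return sum(1 for w in range(lo + 1, hi) if w in heads)

# "the dog near the gate barked": barked(6) is root, dog(2) its subject.
tree = [(1, 2), (2, 6), (3, 2), (4, 5), (5, 3), (6, 0)]
print(intervening_words(2, 6))        # 3 words separate "dog" from "barked"
print(intervening_heads(2, 6, tree))  # only 2 of them ("near", "gate") are heads
```

On the word-count measure the "dog"-"barked" dependency has length 3, but only two of the intervening words head structure of their own, which is the distinction the abstract draws.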

2.
Front Psychol ; 11: 454, 2020.
Article in English | MEDLINE | ID: mdl-32256432

ABSTRACT

Syntactic priming is known to facilitate comprehension of the target sentence if the syntactic structure of the target sentence aligns with the structure of the prime (Branigan et al., 2005; Tooley and Traxler, 2010). Such processing facilitation is understood to be constrained by factors such as lexical overlap between the prime and the target, frequency of the prime structure, etc. Syntactic priming in SOV languages is also understood to be influenced by similar constraints (Arai, 2012). Sentence comprehension in SOV languages is known to be incremental and predictive. Such a top-down parsing process involves establishing various syntactic relations based on the linguistic cues of a sentence, and the role of preverbal case-markers in achieving this is known to be critical. Given the evidence of syntactic priming during comprehension in these languages, this aspect of the comprehension process and its effect on syntactic priming becomes important. In this work, we show that syntactic priming during comprehension is affected by the probability of using the prime structure while parsing the target sentence. If the prime structure has a low probability given the sentential cues (e.g., nominal case-markers) in the target sentence, then the likelihood of persisting with the prime structure in the target is reduced. Our work demonstrates the role of structural complexity of the target with regard to syntactic priming during comprehension and highlights that syntactic priming is modulated by an overarching preference of the parser to avoid rare structures.

3.
Cogn Sci ; 44(4): e12822, 2020 04.
Article in English | MEDLINE | ID: mdl-32223024

ABSTRACT

Much previous work has suggested that word order preferences across languages can be explained by the dependency distance minimization constraint (Ferrer-i Cancho, 2008, 2015; Hawkins, 1994). Consistent with this claim, corpus studies have shown that the average distance between a head (e.g., verb) and its dependent (e.g., noun) tends to be short cross-linguistically (Ferrer-i Cancho, 2014; Futrell, Mahowald, & Gibson, 2015; Liu, Xu, & Liang, 2017). This implies that, on average, languages avoid inefficient or complex structures in favor of simpler ones. But a number of studies in psycholinguistics (Konieczny, 2000; Levy & Keller, 2013; Vasishth, Suckow, Lewis, & Kern, 2010) show that the comprehension system can adapt to the typological properties of a language, for example, verb-final order, leading to more complex structures, for example, having longer linear distance between a head and its dependent. In this paper, we conduct a corpus study for a group of 38 languages, which were either Subject-Verb-Object (SVO) or Subject-Object-Verb (SOV), in order to investigate the role of word order typology in determining syntactic complexity. We present results aggregated across all dependency types, as well as for specific verbal (objects, indirect objects, and adjuncts) and nonverbal (nominal, adjectival, and adverbial) dependencies. The results suggest that dependency distance in a language is determined by the default word order of a language and, crucially, the direction of a dependency (whether the head precedes the dependent or follows it; e.g., whether the noun precedes the verb or follows it). In particular, we show that in SOV languages (e.g., Hindi, Korean) as well as SVO languages (e.g., English, Spanish), longer linear distance (measured as number of words) between head and dependent arises in structures that mirror the default word order of the language.
In addition to showing results on linear distance, we also investigate the influence of word order typology on hierarchical distance (HD; measured as number of heads between head and dependent). The results for HD are similar to those for linear distance. At the same time, in comparison to linear distance, the influence of adaptability on HD seems less strong. In particular, the results show that most languages tend to avoid greater structural depth. Together, these results show evidence for "limited adaptability" to the default word order preferences in a language. Our results support a large body of work in the processing literature that highlights the importance of linguistic exposure and its interaction with working memory constraints in determining sentence complexity. Our results also point to the possible role of other factors such as the morphological richness of a language; a multifactor account of sentence complexity remains a promising area for future investigation.
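The corpus comparison this abstract describes (linear distance split by dependency direction) can be sketched as follows. The dependency list and field names are invented for illustration; a real study would read these from a treebank.

```python
# Illustrative sketch: mean linear dependency distance, split by dependency
# direction (head-final vs. head-initial), over a toy set of dependencies.
from statistics import mean

# Each dependency is a (head_position, dependent_position) pair, 1-based.
deps = [(6, 2), (2, 1), (2, 3), (3, 5), (5, 4)]

def direction(head, dep):
    """A dependency is head-final if the head follows its dependent."""
    return "head-final" if head > dep else "head-initial"

by_dir = {}
for head, dep in deps:
    by_dir.setdefault(direction(head, dep), []).append(abs(head - dep))

for d, lengths in sorted(by_dir.items()):
    print(d, round(mean(lengths), 2))
```

Aggregating per direction (and per language) in this way is what lets the paper ask whether longer distances cluster in the structures that mirror a language's default order.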


Subject(s)
Linguistics , Comprehension , Humans , Memory, Short-Term , Psycholinguistics
4.
J Eye Mov Res ; 10(2)2017 Apr 04.
Article in English | MEDLINE | ID: mdl-33828649

ABSTRACT

We used the Potsdam-Allahabad Hindi eye-tracking corpus to investigate the role of word-level and sentence-level factors during sentence comprehension in Hindi. Extending previous work that used this eye-tracking data, we investigate the role of surprisal and retrieval cost metrics during sentence processing. While controlling for word-level predictors (word complexity, syllable length, unigram and bigram frequencies) as well as sentence-level predictors such as integration and storage costs, we find a significant effect of surprisal on first-pass reading times (higher surprisal leads to increased FPRT). An effect of retrieval cost was found only for a higher degree of parser parallelism. Interestingly, while surprisal has a significant effect on FPRT, storage cost (another prediction-based metric) does not. A significant effect of storage cost shows up only in total fixation time (TFT), thus indicating that these two measures perhaps capture different aspects of prediction. The study replicates previous findings that both prediction-based and memory-based metrics are required to account for processing patterns during sentence comprehension. The results also show that parser model assumptions are critical in order to draw generalizations about the utility of a metric (e.g., surprisal) across various phenomena in a language.
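The surprisal metric in this study can be illustrated with a minimal bigram model. This is a toy sketch, not the paper's actual language model (which would be estimated from a large corpus or a probabilistic parser); the corpus here is invented.

```python
# Illustrative bigram surprisal: surprisal(w_i) = -log2 P(w_i | w_{i-1}),
# with the conditional probability estimated by MLE from a toy corpus.
import math
from collections import Counter

corpus = "the dog ran . the cat ran . the dog slept .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev, word):
    # Real studies use smoothed or parser-derived estimates; MLE
    # suffices to show the shape of the metric.
    return -math.log2(bigrams[(prev, word)] / unigrams[prev])

# "dog" follows "the" in 2 of 3 occurrences, so its surprisal is low.
print(round(surprisal("the", "dog"), 3))  # prints 0.585
```

The reading-time prediction is that words with higher surprisal values attract longer first-pass reading times, which is the effect the study reports.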

5.
Front Psychol ; 7: 403, 2016.
Article in English | MEDLINE | ID: mdl-27064660

ABSTRACT

Delaying the appearance of a verb in a noun-verb dependency tends to increase processing difficulty at the verb; one explanation for this locality effect is decay and/or interference of the noun in working memory. Surprisal, an expectation-based account, predicts that delaying the appearance of a verb either renders it no more predictable or more predictable, leading respectively to a prediction of no effect of distance or a facilitation. Recently, Husain et al. (2014) suggested that when the exact identity of the upcoming verb is predictable (strong predictability), increasing argument-verb distance leads to facilitation effects, which is consistent with surprisal; but when the exact identity of the upcoming verb is not predictable (weak predictability), locality effects are seen. We investigated Husain et al.'s proposal using Persian complex predicates (CPs), which consist of a non-verbal element (a noun in the current study) and a verb. In CPs, once the noun has been read, the exact identity of the verb is highly predictable (strong predictability); this was confirmed using a sentence completion study. In two self-paced reading (SPR) and two eye-tracking (ET) experiments, we delayed the appearance of the verb by interposing a relative clause (Experiments 1 and 3) or a long PP (Experiments 2 and 4). We also included a simple Noun-Verb predicate configuration with the same distance manipulation; here, the exact identity of the verb was not predictable (weak predictability). Thus, the design crossed Predictability Strength and Distance. We found that, consistent with surprisal, the verb in the strong predictability conditions was read faster than in the weak predictability conditions. Furthermore, greater verb-argument distance led to slower reading times; strong predictability did not neutralize or attenuate the locality effects.
As regards the effect of distance on dependency resolution difficulty, these four experiments present evidence in favor of working memory accounts of argument-verb dependency resolution, and against the surprisal-based expectation account of Levy (2008). However, another expectation-based measure, entropy, which was computed using the offline sentence completion data, predicts reading times in Experiment 1 but not in the other experiments. Because participants tend to produce more ungrammatical continuations in the long-distance condition in Experiment 1, we suggest that forgetting due to memory overload leads to greater entropy at the verb.
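The entropy measure this abstract computes from the offline completion data can be sketched as follows. The completion responses here are invented placeholders, not the study's Persian data.

```python
# Hypothetical sketch: entropy of the verb-completion distribution from an
# offline sentence-completion study, H = -sum(p * log2 p) over response types.
import math
from collections import Counter

completions = ["buy", "buy", "buy", "sell", "give"]  # invented responses
n = len(completions)
probs = [c / n for c in Counter(completions).values()]
entropy = -sum(p * math.log2(p) for p in probs)

# Higher entropy = less certainty about the upcoming verb; the paper links
# greater entropy at the verb to forgetting in the long-distance condition.
print(round(entropy, 3))  # prints 1.371
```

A fully predictable verb (all completions identical) gives H = 0; the more the completion probability mass is spread across competitors, the larger H becomes.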

6.
PLoS One ; 9(7): e100986, 2014.
Article in English | MEDLINE | ID: mdl-25010700

ABSTRACT

Expectation-driven facilitation (Hale, 2001; Levy, 2008) and locality-driven retrieval difficulty (Gibson, 1998, 2000; Lewis & Vasishth, 2005) are widely recognized to be two critical factors in incremental sentence processing; there is accumulating evidence that both can influence processing difficulty. However, it is unclear whether and how expectations and memory interact. We first confirm a key prediction of the expectation account: a Hindi self-paced reading study shows that when an expectation for an upcoming part of speech is dashed, building a rarer structure consumes more processing time than building a less rare structure. This is a strong validation of the expectation-based account. In a second study, we show that when expectation is strong, i.e., when a particular verb is predicted, strong facilitation effects are seen when the appearance of the verb is delayed; however, when expectation is weak, i.e., when only the part of speech "verb" is predicted but a particular verb is not predicted, the facilitation disappears and a tendency towards a locality effect is seen. The interaction seen between expectation strength and distance shows that strong expectations cancel locality effects, and that weak expectations allow locality effects to emerge.


Subject(s)
Language , Humans , India , Linguistics , Reading , Statistics as Topic , Time Factors , Young Adult