1.
J Speech Lang Hear Res ; : 1-16, 2024 May 15.
Article En | MEDLINE | ID: mdl-38749013

PURPOSE: Traumatic brain injury (TBI) is associated with a range of cognitive-communicative deficits that interfere with everyday communication and social interaction. Considerable effort has been directed at characterizing the nature and scope of cognitive-communication disorders in TBI, yet the underlying mechanisms of impairment are largely unspecified. The present research examines sensitivity to a common communicative cue, disfluency, and its impact on memory for spoken language in TBI. METHOD: Fifty-three participants with moderate-severe TBI and 53 noninjured comparison participants listened to a series of sentences, some of which contained disfluencies. A subsequent memory test probed memory for critical words in the sentences. RESULTS: Participants with TBI successfully remembered the spoken words (b = 1.57, p < .0001) at a similar level to noninjured comparison participants. Critically, participants with TBI also exhibited better recognition memory for words preceded by disfluency compared to words from fluent sentences (b = 0.57, p = .02). CONCLUSIONS: These findings advance mechanistic accounts of cognitive-communication disorder by revealing that, when isolated for experimental study, individuals with moderate-severe TBI are sensitive to attentional orienting cues in speech and exhibit enhanced recognition of individual words preceded by disfluency. These results suggest that some aspects of cognitive-communication disorders may not emerge from an inability to perceive and use individual communication cues, but rather from disruptions in managing (i.e., attending, weighting, integrating) multiple cognitive, communicative, and social cues in complex and dynamic interactions. This hypothesis warrants further investigation.

2.
Psychon Bull Rev ; 2024 Jan 12.
Article En | MEDLINE | ID: mdl-38216841

Empirical studies of conversational recall show that the amount of conversation that can be recalled after a delay is limited and biased in favor of one's own contributions. What aspects of a conversational interaction shape what will and will not be recalled? This study aims to predict the contents of conversation that will be recalled based on linguistic features of what was said. Across 59 conversational dyads, we observed that two linguistic features that are hallmarks of interactive language use, disfluency (um/uh) and backchannelling (ok, yeah), promoted recall. Two other features, disagreements between the interlocutors and use of "like", were not predictive of recall. While self-generated material was better remembered overall, both hearing and producing disfluencies and backchannels improved memory for the associated utterances. Finally, the disfluency-related memory boost was similar regardless of the number of disfluencies in the utterance. Overall, we conclude that interactional linguistic features of conversation are predictive of what is and is not recalled following conversation.
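The feature coding described in this abstract can be sketched as a simple utterance tagger. This is a minimal illustration, assuming small token inventories for fillers and backchannels; it is not the study's actual coding scheme.

```python
# Toy coding of the two interactional features reported to predict recall:
# filled-pause disfluencies ("um"/"uh") and backchannels ("ok"/"yeah").
# The token inventories are illustrative assumptions, not the study's manual.

DISFLUENCIES = {"um", "uh"}
BACKCHANNELS = {"ok", "okay", "yeah"}

def code_utterance(utterance: str) -> dict:
    """Return binary indicators for the two recall-predictive features."""
    tokens = [t.strip(".,!?").lower() for t in utterance.split()]
    return {
        "disfluency": any(t in DISFLUENCIES for t in tokens),
        "backchannel": any(t in BACKCHANNELS for t in tokens),
    }

print(code_utterance("Um, I think we met at the, uh, conference?"))
# {'disfluency': True, 'backchannel': False}
print(code_utterance("Yeah, ok, that sounds right."))
# {'disfluency': False, 'backchannel': True}
```

Indicators like these could then serve as predictors of utterance recall in a regression model.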

3.
Cognition ; 241: 105543, 2023 Dec.
Article En | MEDLINE | ID: mdl-37713956

Grammatical cues are sometimes redundant with word meanings in natural language. For instance, English word order rules constrain the word order of a sentence like "The dog chewed the bone" even though the status of "dog" as subject and "bone" as object can be inferred from world knowledge and plausibility. Quantifying how often this redundancy occurs, and how the level of redundancy varies across typologically diverse languages, can shed light on the function and evolution of grammar. To that end, we performed a behavioral experiment in English and Russian and a cross-linguistic computational analysis measuring the redundancy of grammatical cues in transitive clauses extracted from corpus text. English and Russian speakers (n = 484) were presented with subjects, verbs, and objects (in random order and with morphological markings removed) extracted from naturally occurring sentences and were asked to identify which noun is the subject of the action. Accuracy was high in both languages (∼89% in English, ∼87% in Russian). Next, we trained a neural network machine classifier on a similar task: predicting which nominal in a subject-verb-object triad is the subject. Across 30 languages from eight language families, performance was consistently high: a median accuracy of 87%, comparable to the accuracy observed in the human experiments. The conclusion is that grammatical cues such as word order are necessary to convey subjecthood and objecthood in only a minority of naturally occurring transitive clauses; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meaning that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu/Lu helped Ray), and expressions of non-prototypical meanings (e.g., "The bone chewed the dog.").
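The subject-identification task can be sketched as follows: given an unordered noun-verb-noun triad with no morphology, pick the subject from plausibility alone. Here "world knowledge" is approximated by a tiny animacy lexicon; this is an illustrative assumption, not the paper's trained neural classifier.

```python
# Minimal stand-in for the subject-identification task: given a noun-verb-noun
# triad with word order uninformative and morphology removed, guess the subject
# from world knowledge, here approximated by a small animacy lexicon.

ANIMATE = {"dog", "cat", "girl", "chef"}

def guess_subject(noun_a: str, verb: str, noun_b: str) -> str:
    """Prefer the animate noun as subject; fall back to the first noun."""
    a_anim, b_anim = noun_a in ANIMATE, noun_b in ANIMATE
    if a_anim and not b_anim:
        return noun_a
    if b_anim and not a_anim:
        return noun_b
    # Reversible case (both animate or both inanimate): plausibility alone
    # cannot decide -- exactly where word-order cues become crucial.
    return noun_a

print(guess_subject("bone", "chewed", "dog"))   # dog
```

The fallback branch mirrors the paper's point: for reversible events like "Ray helped Lu," lexical plausibility is uninformative and grammatical cues must carry the meaning.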

4.
Cogn Sci ; 47(9): e13336, 2023 09.
Article En | MEDLINE | ID: mdl-37695844

Semantic memory encompasses one's knowledge about the world. Distributional semantic models, which construct vector spaces with embedded words, are a proposed framework for understanding the representational structure of human semantic knowledge. Unlike some classic semantic models, distributional semantic models lack a mechanism for specifying the properties of concepts, which raises questions regarding their utility for a general theory of semantic knowledge. Here, we develop a computational model of a binary semantic classification task, in which participants judged target words for the referent's size or animacy. We created a family of models, evaluating multiple distributional semantic models and mechanisms for performing the classification. The most successful model constructed two composite representations, one for each extreme of the decision axis (e.g., one averaging together representations of characteristically big things and another of characteristically small things). Next, the target item was compared to each composite representation, allowing the model to classify more than 1,500 words with human-range performance and to predict response times. We propose that when making a decision on a binary semantic classification task, humans use task prompts to retrieve instances representative of the extremes on that semantic dimension and compare the probe to those instances. This proposal is consistent with the principles of the instance theory of semantic memory.
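The winning mechanism described here, averaging exemplar embeddings into one composite per pole and classifying a probe by similarity to each composite, can be sketched directly. The 2-d vectors below are made up for illustration; the study used high-dimensional distributional embeddings.

```python
from math import sqrt

# Sketch of the most successful mechanism: build one composite vector per pole
# of the decision axis (e.g., BIG vs. SMALL) by averaging exemplar embeddings,
# then classify a probe word by cosine similarity to each composite.

def mean_vec(vecs):
    """Element-wise average of a list of equal-length vectors."""
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

BIG_EXEMPLARS = [[0.9, 0.1], [0.8, 0.3]]      # e.g., "whale", "mountain"
SMALL_EXEMPLARS = [[0.1, 0.9], [0.2, 0.8]]    # e.g., "ant", "pebble"

def classify_size(probe):
    big, small = mean_vec(BIG_EXEMPLARS), mean_vec(SMALL_EXEMPLARS)
    return "big" if cosine(probe, big) > cosine(probe, small) else "small"

print(classify_size([0.85, 0.2]))   # big
print(classify_size([0.15, 0.9]))   # small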


Knowledge , Semantics , Humans , Memory , Reaction Time , Computer Simulation
5.
J Exp Psychol Learn Mem Cogn ; 49(8): 1306-1324, 2023 Aug.
Article En | MEDLINE | ID: mdl-35862078

Disfluencies such as pauses, "um"s, and "uh"s are common interruptions in the speech stream. Previous work probing memory for disfluent speech shows memory benefits for disfluent compared to fluent materials. Complementary evidence from studies of language production and comprehension have been argued to show that different disfluency types appear in distinct contexts and, as a result, serve as a meaningful cue. If the disfluency-memory boost is a result of sensitivity to these form-meaning mappings, forms of disfluency that cue new upcoming information (fillers and pauses) may produce a stronger memory boost compared to forms that reflect speaker difficulty (repetitions). If the disfluency-memory boost is simply due to the attentional-orienting properties of a disruption to fluent speech, different disfluency forms may produce similar memory benefit. Experiments 1 and 2 compared the relative mnemonic benefit of three types of disfluent interruptions. Experiments 3 and 4 examined the scope of the disfluency-memory boost to probe its cognitive underpinnings. Across the four experiments, we observed a disfluency-memory boost for three types of disfluency that were tested. This boost was local and position dependent, only manifesting when the disfluency immediately preceded a critical memory probe word at the end of the sentence. Our findings reveal a short-lived disfluency-memory boost that manifests at the end of the sentence but is evoked by multiple types of disfluent forms, consistent with the idea that disfluencies bring attentional focus to immediately upcoming material. The downstream consequence of this localized memory benefit is better understanding and encoding of the speaker's message. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Speech Perception , Speech , Humans , Speech/physiology , Language , Memory , Speech Perception/physiology
6.
J Neurosci ; 40(23): 4536-4550, 2020 06 03.
Article En | MEDLINE | ID: mdl-32317387

Aside from the language-selective left-lateralized frontotemporal network, language comprehension sometimes recruits a domain-general bilateral frontoparietal network implicated in executive functions: the multiple demand (MD) network. However, the nature of the MD network's contributions to language comprehension remains debated. To illuminate the role of this network in language processing in humans, we conducted a large-scale fMRI investigation using data from 30 diverse word and sentence comprehension experiments (481 unique participants [female and male], 678 scanning sessions). In line with prior findings, the MD network was active during many language tasks. Moreover, similar to the language-selective network, which is robustly lateralized to the left hemisphere, these responses were stronger in the left-hemisphere MD regions. However, in contrast with the language-selective network, the MD network responded more strongly (1) to lists of unconnected words than to sentences, and (2) in paradigms with an explicit task compared with passive comprehension paradigms. Indeed, many passive comprehension tasks failed to elicit a response above the fixation baseline in the MD network, in contrast to strong responses in the language-selective network. Together, these results argue against a role for the MD network in core aspects of sentence comprehension, such as inhibiting irrelevant meanings or parses, keeping intermediate representations active in working memory, or predicting upcoming words or structures. 
These results align with recent evidence of relatively poor tracking of the linguistic signal by the MD regions during naturalistic comprehension, and instead suggest that the MD network's engagement during language processing reflects effort associated with extraneous task demands.SIGNIFICANCE STATEMENT Domain-general executive processes, such as working memory and cognitive control, have long been implicated in language comprehension, including in neuroimaging studies that have reported activation in domain-general multiple demand (MD) regions for linguistic manipulations. However, much prior evidence has come from paradigms where language interpretation is accompanied by extraneous tasks. Using a large fMRI dataset (30 experiments/481 participants/678 sessions), we demonstrate that MD regions are engaged during language comprehension in the presence of task demands, but not during passive reading/listening, conditions that strongly activate the frontotemporal language network. These results present a fundamental challenge to proposals whereby linguistic computations, such as inhibiting irrelevant meanings, keeping representations active in working memory, or predicting upcoming elements, draw on domain-general executive resources.


Brain Mapping/methods , Comprehension/physiology , Language , Magnetic Resonance Imaging/methods , Nerve Net/diagnostic imaging , Nerve Net/physiology , Adolescent , Adult , Aged , Brain/diagnostic imaging , Brain/physiology , Executive Function/physiology , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Young Adult
7.
Neurobiol Lang (Camb) ; 1(1): 104-134, 2020.
Article En | MEDLINE | ID: mdl-36794007

The frontotemporal language network responds robustly and selectively to sentences. But the features of linguistic input that drive this response and the computations that these language areas support remain debated. Two key features of sentences are typically confounded in natural linguistic input: words in sentences (a) are semantically and syntactically combinable into phrase- and clause-level meanings, and (b) occur in an order licensed by the language's grammar. Inspired by recent psycholinguistic work establishing that language processing is robust to word order violations, we hypothesized that the core linguistic computation is composition, and, thus, can take place even when the word order violates the grammatical constraints of the language. This hypothesis predicts that a linguistic string should elicit a sentence-level response in the language network provided that the words in that string can enter into dependency relationships as in typical sentences. We tested this prediction across two fMRI experiments (total N = 47) by introducing a varying number of local word swaps into naturalistic sentences, leading to progressively less syntactically well-formed strings. Critically, local dependency relationships were preserved because combinable words remained close to each other. As predicted, word order degradation did not decrease the magnitude of the blood oxygen level-dependent response in the language network, except when combinable words were so far apart that composition among nearby words was highly unlikely. This finding demonstrates that composition is robust to word order violations, and that the language regions respond as strongly as they do to naturalistic linguistic input, providing that composition can take place.

...