Results 1 - 20 of 105
1.
J Mem Lang ; 134, 2024 Feb.
Article in English | MEDLINE | ID: mdl-39301181

ABSTRACT

In two structural priming experiments, we investigated the representations of lexically-specific syntactic restrictions of English verbs for highly proficient and immersed second language (L2) speakers of English. We considered the interplay of two possible mechanisms: generalization from the first language (L1) and statistical learning within the L2 (both of abstract structure and of lexically-specific information). In both experiments, L2 speakers with either Germanic or Romance languages as L1 were primed to produce dispreferred double-object structures involving non-alternating dative verbs. Priming occurred from ungrammatical double-object primes involving different non-alternating verbs (Experiment 1) and from grammatical primes involving alternating verbs (Experiment 2), supporting abstract statistical learning within the L2. However, we found no differences between L1-Germanic speakers (who have the double object structure in their L1) and L1-Romance speakers (who do not), inconsistent with the prediction for between-group differences of the L1-generalization account. Additionally, L2 speakers in Experiment 2 showed a lexical boost: There was stronger priming after (dispreferred) non-alternating same-verb double object primes than after (grammatical) alternating different-verb primes. Such lexically-driven persistence was also shown by L1 English speakers (Ivanova et al., 2012a) and may underlie statistical learning of lexically-dependent structural regularities. We conclude that lexically-specific syntactic restrictions in highly proficient and immersed L2 speakers are shaped by statistical learning (both abstract and lexically-specific) within the L2, but not by generalization from the L1.
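Structural priming outcomes of this kind are binary (did the speaker produce a double-object structure or not), so a logistic regression illustrates the analysis logic. The sketch below is hypothetical: it uses simulated data and illustrative variable names, not the study's actual design, model, or results.

```python
# Hypothetical sketch of the analysis logic behind a structural priming
# study: logistic regression predicting whether a double-object (DO)
# target was produced, from prime type and L1 group. Simulated data;
# column names and effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
prime = rng.choice(["DO", "other"], size=n)
l1_group = rng.choice(["Germanic", "Romance"], size=n)
# Simulated priming effect: DO primes raise the odds of a DO response
p = np.where(prime == "DO", 0.35, 0.15)
do_response = rng.binomial(1, p)

data = pd.DataFrame({"do_response": do_response,
                     "prime": prime, "l1_group": l1_group})
model = smf.logit("do_response ~ C(prime) * C(l1_group)", data=data).fit()
print(model.summary())  # the interaction term would test for group differences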

2.
Sci Rep ; 14(1): 17110, 2024 07 24.
Article in English | MEDLINE | ID: mdl-39048617

ABSTRACT

Research suggests that interlocutors manage the timing demands of conversation by preparing what they want to say early. In three experiments, we used a verbal question-answering task to investigate what aspects of their response speakers prepare early. In all three experiments, participants answered more quickly when the critical content (here, barks) necessary for answer preparation occurred early (e.g., Which animal barks and is also a common household pet?) rather than late (e.g., Which animal is a common household pet and also barks?). In the individual experiments, we found no convincing evidence that participants were slower to produce longer answers, consisting of multiple words, than shorter answers, consisting of a single word. There was also no interaction between these two factors. A combined analysis of the first two experiments confirmed this lack of interaction, and demonstrated that participants were faster to answer questions when the critical content was available early rather than late and when the answer was short rather than long. These findings provide tentative evidence for an account in which interlocutors prepare the content of their answer as soon as they can, but sometimes do not prepare its length (and thus form) until they are ready to speak.


Subject(s)
Communication , Humans , Female , Male , Adult , Young Adult , Reaction Time/physiology , Speech
3.
Ear Hear ; 45(5): 1107-1114, 2024.
Article in English | MEDLINE | ID: mdl-38880953

ABSTRACT

Research investigating the complex interplay of cognitive mechanisms involved in speech listening for people with hearing loss has been gaining prominence. In particular, linguistic context allows the use of several cognitive mechanisms that are not well distinguished in hearing science, namely those relating to "postdiction", "integration", and "prediction". We offer the perspective that an unacknowledged impact of hearing loss is the differential use of predictive mechanisms relative to age-matched individuals with normal hearing. As evidence, we first review how degraded auditory input leads to reduced prediction in people with normal hearing, then consider the literature exploring context use in people with acquired postlingual hearing loss. We argue that no research on hearing loss has directly assessed prediction. Because current interventions for hearing do not fully alleviate difficulty in conversation, and avoidance of spoken social interaction may be a mediator between hearing loss and cognitive decline, this perspective could lead to greater understanding of cognitive effects of hearing loss and provide insight regarding new targets for intervention.


Subject(s)
Speech Perception , Humans , Hearing Loss/psychology , Hearing Loss/rehabilitation , Linguistics
4.
R Soc Open Sci ; 10(12): 231252, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38094271

ABSTRACT

Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to.
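As a concrete illustration of how visual-world gaze data like these are typically summarised, the sketch below bins gaze samples into time windows and computes the proportion of looks to each object type per bin. All data values and column names are placeholders, not the study's data.

```python
# Minimal sketch of a standard visual-world summary: bin gaze samples
# and compute, per time bin, the proportion of looks to each object
# type (target, gender-stereotyped competitor, distractor).
# Placeholder data; column names are hypothetical.
import pandas as pd

samples = pd.DataFrame({
    "time_ms": [0, 50, 100, 150, 0, 50, 100, 150],
    "object": ["target", "target", "stereotype", "target",
               "distractor", "target", "target", "stereotype"],
})

bin_ms = 100
samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
counts = samples.groupby(["bin", "object"]).size().unstack(fill_value=0)
props = counts.div(counts.sum(axis=1), axis=0)  # row-normalise within each bin
print(props)
```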

5.
Behav Brain Sci ; 46: e243, 2023 10 02.
Article in English | MEDLINE | ID: mdl-37779289

ABSTRACT

The standardization account predicts that short message service (SMS) interactions, allowed by current technology, will support the use and conventionalization of ideographs. Relying on psycholinguistic theories of dialogue, we argue that ideographs (such as emoji) can be used by interlocutors in SMS interactions, so that the main contributor can use them to accompany language and the addressee can use them as stand-alone feedback.


Subject(s)
Language , Text Messaging , Humans , Psycholinguistics
6.
Psychon Bull Rev ; 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37740119

ABSTRACT

To answer a question, speakers must determine their response and formulate it in words. But do they decide on a response before formulation, or do they formulate different potential answers before selecting one? We addressed this issue in a verbal question-answering experiment. Participants answered questions more quickly when they had one potential answer (e.g., Which tourist attraction in Paris is very tall?) than when they had multiple potential answers (e.g., What is the name of a Shakespeare play?). Participants also answered more quickly when the set of potential answers was on average short rather than long, regardless of whether there was one potential answer or several. Thus, participants were not affected by the linguistic complexity of unselected but plausible answers. These findings suggest that participants select a single answer before formulation.

7.
Neuroimage ; 279: 120295, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37536526

ABSTRACT

How does the brain code the meanings conveyed by language? Neuroimaging studies have investigated this by linking neural activity patterns during discourse comprehension to semantic models of language content. Here, we applied this approach to the production of discourse for the first time. Participants underwent fMRI while producing and listening to discourse on a range of topics. We used a distributional semantic model to quantify the similarity between different speech passages and identified where similarity in neural activity was predicted by semantic similarity. When people produced discourse, speech on similar topics elicited similar activation patterns in a widely distributed and bilateral brain network. This network was overlapping with, but more extensive than, the regions that showed similarity effects during comprehension. Critically, cross-task neural similarities between comprehension and production were also predicted by similarities in semantic content. This result suggests that discourse semantics engages a common neural code that is shared between comprehension and production. Effects of semantic similarity were bilateral in all three RSA analyses, even while univariate activation contrasts in the same data indicated left-lateralised BOLD responses. This indicates that right-hemisphere regions encode semantic properties even when they are not activated above baseline. We suggest that right-hemisphere regions play a supporting role in processing the meaning of discourse during both comprehension and production.
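The cross-task analysis described here follows the general logic of representational similarity analysis (RSA). The sketch below shows that logic on placeholder arrays: pairwise semantic dissimilarities from a distributional model are correlated with pairwise dissimilarities between neural activity patterns. It is a generic illustration, not the paper's pipeline.

```python
# Minimal RSA-style sketch on placeholder data: correlate the pairwise
# semantic dissimilarity of speech passages with the pairwise
# dissimilarity of their neural activity patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
semantic_vecs = rng.normal(size=(20, 300))    # 20 passages x embedding dims
neural_patterns = rng.normal(size=(20, 500))  # 20 passages x voxels

semantic_rdm = pdist(semantic_vecs, metric="cosine")       # one value per pair
neural_rdm = pdist(neural_patterns, metric="correlation")

rho, p = spearmanr(semantic_rdm, neural_rdm)
print(f"semantic-neural similarity: rho={rho:.3f}, p={p:.3f}")
```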


Subject(s)
Comprehension , Language , Humans , Comprehension/physiology , Brain/physiology , Semantics , Brain Mapping/methods , Magnetic Resonance Imaging
8.
PLoS One ; 18(7): e0288960, 2023.
Article in English | MEDLINE | ID: mdl-37471379

ABSTRACT

Prediction is often used during language comprehension. However, studies of prediction have tended to focus on L1 listeners in quiet conditions. Thus, it is unclear how listeners predict outside the laboratory and in specific communicative settings. Here, we report two eye-tracking studies which used a visual-world paradigm to investigate whether prediction during a consecutive interpreting task differs from prediction during a listening task in L2 listeners, and whether L2 listeners are able to predict in the noisy conditions that might be associated with this communicative setting. In a first study, thirty-six Dutch-English bilinguals either just listened to, or else listened to and then consecutively interpreted, predictable sentences presented on speech-shaped sound. In a second study, another thirty-six Dutch-English bilinguals carried out the same tasks in clear speech. Our results suggest that L2 listeners predict the meaning of upcoming words in noisy conditions. However, we did not find that predictive eye movements depended on task, nor that L2 listeners predicted upcoming word form. We also did not find a difference in predictive patterns when we compared our two studies. Thus, L2 listeners predict in noisy circumstances, supporting theories which posit that prediction regularly takes place in comprehension, but we did not find evidence that a subsequent production task or noise affects semantic prediction.
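"Speech-shaped sound" is commonly generated as noise with the long-term average spectrum of speech. Assuming that standard technique (the abstract does not give the study's stimulus-generation procedure), a minimal sketch:

```python
# Minimal sketch of one common way to make speech-shaped noise:
# keep the magnitude spectrum of a speech recording, randomise the
# phase, invert the FFT, and match the RMS level. This assumes the
# standard technique; the study's actual procedure is not specified here.
import numpy as np

def speech_shaped_noise(speech: np.ndarray, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(speech)
    random_phase = np.exp(1j * rng.uniform(0.0, 2 * np.pi, size=spectrum.shape))
    noise = np.fft.irfft(np.abs(spectrum) * random_phase, n=len(speech))
    return noise * (np.std(speech) / np.std(noise))  # match RMS level

speech = np.random.default_rng(1).normal(size=16000)  # stand-in for a recording
noise = speech_shaped_noise(speech)
```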


Subject(s)
Multilingualism , Speech Perception , Humans , Language , Noise , Speech
9.
Q J Exp Psychol (Hove) ; 76(11): 2579-2595, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36655936

ABSTRACT

Previous research has found apparently contradictory effects of a semantically similar competitor on how people refer to previously mentioned entities. To address this issue, we conducted two picture-description experiments in spoken Mandarin. In Experiment 1, participants saw pictures and heard sentences referring to both the target referent and a competitor, and then described actions involving only the target referent. They produced fewer omissions and more repeated noun phrases when the competitor was semantically similar to the target referent than otherwise. In Experiment 2, participants saw introductory pictures and heard sentences referring to only the target referent, and then described actions involving both the target referent and a competitor. They produced more omissions and fewer pronouns when the competitor was semantically similar to the target referent than otherwise. We interpret the results in terms of the representation of discourse entities and the stages of language production.


Subject(s)
Language , Semantics , Humans , Cognition
10.
Q J Exp Psychol (Hove) ; 76(1): 180-195, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35102784

ABSTRACT

In dialogue, people represent each other's utterances to take turns and communicate successfully. In previous work, speakers who were naming single pictures or picture pairs represented whether another speaker was engaged in the same task (vs a different or no task) concurrently but did not represent in detail the content of the other speaker's utterance. Here, we investigate the co-representation of whole sentences. In three experiments, pairs of speakers imagined each other producing active or passive descriptions of transitive events. Speakers took longer to begin speaking when they believed their partner was also preparing to speak, compared to when they did not. Interference occurred when speakers believed their partners were preparing to speak at the same time as them (synchronous production and co-representation; Experiment 1), and also when speakers believed that their partner would speak only after them (asynchronous production and co-representation; Experiments 2a and 2b). However, interference was generally no greater when speakers believed their partner was preparing a different compared to a similar utterance, providing no consistent evidence that speakers represented what their partners were preparing to say. Taken together, these findings indicate that speakers can represent another's intention to speak even as they are themselves preparing to speak, but that such representation tends to lack detail.


Subject(s)
Language , Speech , Humans , Intention
11.
Philos Trans R Soc Lond B Biol Sci ; 378(1870): 20210362, 2023 02 13.
Article in English | MEDLINE | ID: mdl-36571124

ABSTRACT

In dialogue, speakers process a great deal of information, take and give the floor to each other, and plan and adjust their contributions on the fly. Despite the level of coordination and control that it requires, dialogue is the easiest way speakers possess to come to similar conceptualizations of the world. In this paper, we show how speakers align with each other by mutually controlling the flow of the dialogue and constantly monitoring their own and their interlocutors' way of representing information. Through examples of conversation, we introduce the notions of shared control, meta-representations of alignment and commentaries on alignment, and show how they support mutual understanding and the collaborative creation of abstract concepts. Indeed, whereas speakers can share similar representations of concrete concepts just by mutually attending to a tangible referent or by recalling it, they are likely to need more negotiation and mutual monitoring to build similar representations of abstract concepts. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.


Subject(s)
Metacognition , Social Cognition , Concept Formation , Communication , Mental Recall , Cognition
12.
Cortex ; 155: 287-306, 2022 10.
Article in English | MEDLINE | ID: mdl-36075141

ABSTRACT

Language processing requires the integration of diverse sources of information across multiple levels of processing. A range of psycholinguistic properties have been documented in previous studies as having influence on brain activation during language processing. However, most of those studies have used factorial designs to probe the effect of one or two individual properties using highly controlled stimuli and experimental paradigms. Little is known about the neural correlates of psycholinguistic properties in more naturalistic discourse, especially during language production. The aim of our study is to explore the above issues in a rich fMRI dataset in which participants both listened to recorded passages of discourse and produced their own narrative discourse in response to prompts. Specifically, we measured 13 psycholinguistic properties of the discourse comprehended or produced by the participants, and we used principal components analysis (PCA) to address covariation in these properties and extract a smaller set of latent language characteristics. These latent components indexed vocabulary complexity, sensory-motor and emotional language content, discourse coherence and speech quantity. A parametric approach was adopted to study the effects of these psycholinguistic variables on brain activation during comprehension and production. We found that the pattern of effects across the cortex was somewhat convergent across comprehension and production. However, the degree of convergence varied across language properties, being strongest for the component indexing sensory-motor language content. We report the full, unthresholded effect maps for each psycholinguistic variable, as well as mapping how these effects change along a large-scale cortical gradient of brain function. We believe that our findings provide a valuable starting point for future, confirmatory studies of discourse processing.
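The PCA step described here is standard dimensionality reduction: standardise the 13 property measures across discourse samples, then extract a small number of latent components. A minimal sketch on random placeholder data (the property values and component count are illustrative only):

```python
# Minimal sketch of the PCA step: z-score 13 psycholinguistic property
# measures across discourse samples and extract latent components.
# Random placeholder data; the component count is illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 13))   # 120 discourse samples x 13 properties

X_z = StandardScaler().fit_transform(X)
pca = PCA(n_components=4)        # e.g. vocabulary complexity, sensory-motor
scores = pca.fit_transform(X_z)  #      content, coherence, speech quantity
print(pca.explained_variance_ratio_)
```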


Subject(s)
Comprehension , Speech Perception , Brain/physiology , Brain Mapping , Comprehension/physiology , Humans , Magnetic Resonance Imaging , Psycholinguistics , Speech , Speech Perception/physiology
13.
R Soc Open Sci ; 9(4): 220107, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35601453

ABSTRACT

Co-actors represent and integrate each other's actions, even when they need not monitor one another. However, monitoring is important for successful interactions, particularly those involving language, and monitoring others' utterances probably relies on similar mechanisms as monitoring one's own. We investigated the effect of monitoring on the integration of self- and other-generated utterances in the shared-Stroop task. In a solo version of the Stroop task (with a single participant responding to all stimuli; Experiment 1), participants named the ink colour of mismatching colour words (incongruent stimuli) more slowly than matching colour words (congruent). In the shared-Stroop task, one participant named the ink colour of words in one colour (e.g. red), while ignoring stimuli in the other colour (e.g. green); the other participant either named the other ink colour or did not respond. Crucially, participants either provided feedback about the correctness of their partner's response (Experiment 3) or did not (Experiment 2). Interference was greater when both participants responded than when they did not, but only when their partners provided feedback. We argue that feedback increased interference because monitoring one's partner enhanced representations of the partner's target utterance, which in turn interfered with self-monitoring of the participant's own utterance.

14.
Cognition ; 225: 105101, 2022 08.
Article in English | MEDLINE | ID: mdl-35339795

ABSTRACT

People sometimes interpret implausible sentences nonliterally, for example treating The mother gave the candle the daughter as meaning the daughter receiving the candle. But how do they do so? We contrasted a nonliteral syntactic analysis account, according to which people compute a syntactic analysis appropriate for this nonliteral meaning, with a nonliteral semantic interpretation account, according to which they arrive at this meaning via purely semantic processing. The former but not the latter account postulates that people consider not only a literal-but-implausible double-object (DO) analysis in comprehending The mother gave the candle the daughter, but also a nonliteral-but-plausible prepositional-object (PO) analysis (i.e., including to before the daughter). In three structural priming experiments, participants heard a plausible or implausible DO or PO prime sentence. They then either answered a comprehension question first or described a picture of a dative event first. In accord with the nonliteral syntactic analysis account, priming was weaker following implausible sentences than following plausible sentences, and weaker following nonliterally interpreted implausible sentences than following literally interpreted ones. The results suggest that comprehenders constructed a nonliteral syntactic analysis, which we argue was predicted early in the sentence.


Subject(s)
Comprehension , Language , Female , Hearing , Humans , Mothers , Semantics
15.
Cereb Cortex ; 32(19): 4317-4330, 2022 09 19.
Article in English | MEDLINE | ID: mdl-35059718

ABSTRACT

When comprehending discourse, listeners engage default-mode regions associated with integrative semantic processing to construct a situation model of its content. We investigated how similar networks are engaged when we produce, as well as comprehend, discourse. During functional magnetic resonance imaging, participants spoke about a series of specific topics and listened to discourse on other topics. We tested how activation was predicted by natural fluctuations in the global coherence of the discourse, that is, the degree to which utterances conformed to the expected topic. The neural correlates of coherence were similar across speaking and listening, particularly in default-mode regions. This network showed greater activation when less coherent speech was heard or produced, reflecting updating of mental representations when discourse did not conform to the expected topic. In contrast, regions that exert control over semantic activation showed task-specific effects, correlating negatively with coherence during listening but not during production. Participants who showed greater activation in left inferior prefrontal cortex also produced more coherent discourse, suggesting a specific role for this region in goal-directed regulation of speech content. Results suggest strong correspondence of discourse representations during speaking and listening. However, they indicate that the semantic control network plays different roles in comprehension and production.


Subject(s)
Comprehension , Speech Perception , Brain/diagnostic imaging , Brain/physiology , Brain Mapping , Comprehension/physiology , Humans , Neural Networks, Computer , Speech Perception/physiology
16.
Cognition ; 220: 104987, 2022 03.
Article in English | MEDLINE | ID: mdl-34922159

ABSTRACT

We report the results of an eye-tracking study which used the Visual World Paradigm (VWP) to investigate the time-course of prediction during a simultaneous interpreting task. Twenty-four L1 French professional conference interpreters and twenty-four L1 French professional translators untrained in simultaneous interpretation listened to sentences in English and interpreted them simultaneously into French while looking at a visual scene. Sentences contained a highly predictable word (e.g., The dentist asked the man to open his mouth a little wider). The visual scene comprised four objects, one of which depicted either the target object (mouth; bouche), an English phonological competitor (mouse; souris), a French phonological competitor (cork; bouchon), or an unrelated word (bone; os). We considered 1) whether interpreters and translators predict upcoming nouns during a simultaneous interpreting task, 2) whether interpreters and translators predict the form of these nouns in English and in French and 3) whether interpreters and translators manifest different predictive behaviour. Our results suggest that both interpreters and translators predict upcoming nouns, but neither group predicts the word-form of these nouns. In addition, we did not find significant differences between patterns of prediction in interpreters and translators. Thus, evidence from the visual-world paradigm shows that prediction takes place in simultaneous interpreting, regardless of training and experience. However, we were unable to establish whether word-form was predicted.


Subject(s)
Language , Semantics , Auditory Perception , Eye-Tracking Technology , Humans , Mouth
17.
R Soc Open Sci ; 8(11): 211107, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34849245

ABSTRACT

According to an influential hypothesis, people imitate motor movements to foster social interactions. Could imitation of language serve a similar function? We investigated this question in two pre-registered experiments. In Experiment 1, participants were asked to alternate naming pictures and matching pictures to a name provided by a partner. Crucially, and unknown to participants, the partner was in fact a computer program which in one group produced the same names as previously used by the participant, and in the other group consistently produced different names. We found no difference in how the two groups evaluated the partner or the interaction and no difference in their willingness to cooperate with the partner. In Experiment 2, we made the task more similar to natural interactions by adding a stage in which a participant and the partner introduced themselves to each other and included a measure of the participant's autistic traits. Once again, we found no effects of being imitated. We discuss how these null results may inform imitation research.

18.
Brain Res ; 1768: 147571, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34216579

ABSTRACT

Determining when a partner's spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner's action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.


Subject(s)
Auditory Perception/physiology , Motor Activity/physiology , Speech Perception/physiology , Adult , Aged , Cooperative Behavior , Female , Humans , Male , Middle Aged , Movement , Music , Speech/physiology
19.
Q J Exp Psychol (Hove) ; 74(12): 2193-2209, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34120522

ABSTRACT

Language comprehension depends heavily upon prediction, but how predictions are generated remains poorly understood. Several recent theories propose that these predictions are in fact generated by the language production system. Here, we directly test this claim. Participants read sentence contexts that either were or were not highly predictive of a final word, and we measured how quickly participants recognised that final word (Experiment 1), named that final word (Experiment 2), or used that word to name a picture (Experiment 3). We manipulated engagement of the production system by asking participants to read the sentence contexts either aloud or silently. Across the experiments, participants responded more quickly following highly predictive contexts. Importantly, the effect of contextual predictability was greater when participants had read the sentence contexts aloud rather than silently, a finding that was significant in Experiment 3, marginally significant in Experiment 2, and again significant in combined analyses of Experiments 1-3. These results indicate that language production (as used in reading aloud) can be used to facilitate prediction. We consider whether prediction benefits from production only in particular contexts and discuss the theoretical implications of our evidence.


Subject(s)
Language , Names , Comprehension , Humans , Reading
20.
Cognition ; 211: 104650, 2021 06.
Article in English | MEDLINE | ID: mdl-33721717

ABSTRACT

How do we update our linguistic knowledge? In seven experiments, we asked whether error-driven learning can explain under what circumstances adults and children are more likely to store and retain a new word meaning. Participants were exposed to novel object labels in the context of more or less constraining sentences or visual contexts. Both two-to-four-year-olds (mean age = 38 months) and adults were strongly affected by expectations based on sentence constraint when choosing the referent of a new label. In addition, adults formed stronger memory traces for novel words that violated a stronger prior expectation. However, preschoolers' memory was unaffected by the strength of their prior expectations. We conclude that the encoding of new word-object associations in memory is affected by prediction error in adults, but not in preschoolers.


Subject(s)
Learning , Verbal Learning , Adult , Child , Child, Preschool , Humans , Knowledge , Language , Linguistics