Results 1 - 20 of 121
1.
J Cogn ; 7(1): 10, 2024.
Article in English | MEDLINE | ID: mdl-38223231

ABSTRACT

Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.
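The core computation behind a psychometric network, the partial correlation between each pair of scores after controlling for all other scores, can be sketched in a few lines. This is a generic illustration on simulated data, not the authors' analysis pipeline (which typically adds regularization such as the graphical lasso); the variable names and effect sizes are invented.

```python
import numpy as np

def partial_correlation_network(scores):
    """Estimate a partial-correlation network from an
    observations-by-variables score matrix.

    Each off-diagonal entry is the correlation between two variables
    after controlling for all remaining variables, obtained from the
    inverse of the correlation matrix (the precision matrix).
    """
    corr = np.corrcoef(scores, rowvar=False)
    prec = np.linalg.inv(corr)
    d = np.sqrt(np.diag(prec))
    # Standardize the precision matrix and flip its sign to get
    # partial correlations.
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy data: 281 simulated "participants" and four test scores, where the
# first two scores share unique variance (a common speed factor).
rng = np.random.default_rng(0)
n = 281
speed = rng.normal(size=n)
scores = np.column_stack([
    speed + rng.normal(scale=0.5, size=n),  # word recognition
    speed + rng.normal(scale=0.5, size=n),  # processing speed
    rng.normal(size=n),                     # working memory
    rng.normal(size=n),                     # non-verbal reasoning
])
net = partial_correlation_network(scores)
print(np.round(net, 2))
```

In the printed matrix, the edge between the two speed-loaded scores survives controlling for the other variables, while unrelated scores get near-zero edges, the kind of direct-link pattern the abstract describes.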

2.
Behav Res Methods ; 56(3): 2422-2436, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37749421

ABSTRACT

We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the Internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions and to determine their order. Moreover, for standardized administration the platform provides an application (an emulated browser) wherein the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV file output via e-mail. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.


Subject(s)
Individuality , Internet , Humans , Adolescent , Young Adult , Adult , Language , Cognition , Electronic Mail
3.
Acta Psychol (Amst) ; 241: 104073, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37948879

ABSTRACT

Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.


Subject(s)
Semantics , Humans , Reaction Time
4.
J Exp Psychol Learn Mem Cogn ; 49(12): 1971-1988, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38032679

ABSTRACT

Language is used in communicative contexts to identify and successfully transmit new information that should later be remembered. In three studies, we used question-answer pairs, a naturalistic device for focusing information, to examine how properties of conversations inform later item memory. In Experiment 1, participants viewed three pictures while listening to a recorded question-answer exchange between two people about the locations of two of the displayed pictures. In a memory recognition test conducted online a day later, participants recognized the names of pictures that served as answers more accurately than the names of pictures that appeared as questions. This suggests that this type of focus indeed boosts memory. In Experiment 2, participants listened to the same items embedded in declarative sentences. There was a reduced memory benefit for the second item, confirming the role of linguistic focus on later memory beyond a simple serial-position effect. In Experiment 3, two participants asked and answered the same questions about objects in a dialogue. Here, answers continued to receive a memory benefit, and this focus effect was accentuated by language production such that information-seekers remembered the answers to their questions better than information-givers remembered the questions they had been asked. Combined, these studies show how people's memory for conversation is modulated by the referential status of the items mentioned and by the speaker roles of the conversation participants. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Mental Recall , Names , Humans , Communication , Language , Linguistics
5.
J Neurosci ; 43(26): 4867-4883, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37221093

ABSTRACT

To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step toward understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency) from responses to sensory and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition at ∼100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study show how the neural representation of words is affected by structural context and as such provide insight into how the brain instantiates compositionality in language.

SIGNIFICANCE STATEMENT: Human language is unprecedented in its combinatorial capacity: we are capable of producing and understanding sentences we have never heard before. 
Although the mechanisms underlying this capacity have been described in formal linguistics and cognitive science, how they are implemented in the brain remains to a large extent unknown. A large body of earlier work from the cognitive neuroscientific literature implies a role for delta-band neural activity in the representation of linguistic structure and meaning. In this work, we combine these insights and techniques with findings from psycholinguistics to show that meaning is more than the sum of its parts; the delta-band MEG signal differentially reflects lexical information inside and outside sentence structures.


Subject(s)
Brain , Language , Humans , Female , Brain/physiology , Linguistics , Psycholinguistics , Brain Mapping , Semantics
6.
J Cogn ; 6(1): 20, 2023.
Article in English | MEDLINE | ID: mdl-37033404

ABSTRACT

Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.

7.
N Engl J Med ; 387(11): 967-977, 2022 09 15.
Article in English | MEDLINE | ID: mdl-36018037

ABSTRACT

BACKGROUND: A polypill that includes key medications associated with improved outcomes (aspirin, angiotensin-converting-enzyme [ACE] inhibitor, and statin) has been proposed as a simple approach to the secondary prevention of cardiovascular death and complications after myocardial infarction. METHODS: In this phase 3, randomized, controlled clinical trial, we assigned patients with myocardial infarction within the previous 6 months to a polypill-based strategy or usual care. The polypill treatment consisted of aspirin (100 mg), ramipril (2.5, 5, or 10 mg), and atorvastatin (20 or 40 mg). The primary composite outcome was cardiovascular death, nonfatal type 1 myocardial infarction, nonfatal ischemic stroke, or urgent revascularization. The key secondary end point was a composite of cardiovascular death, nonfatal type 1 myocardial infarction, or nonfatal ischemic stroke. RESULTS: A total of 2499 patients underwent randomization and were followed for a median of 36 months. A primary-outcome event occurred in 118 of 1237 patients (9.5%) in the polypill group and in 156 of 1229 (12.7%) in the usual-care group (hazard ratio, 0.76; 95% confidence interval [CI], 0.60 to 0.96; P = 0.02). A key secondary-outcome event occurred in 101 patients (8.2%) in the polypill group and in 144 (11.7%) in the usual-care group (hazard ratio, 0.70; 95% CI, 0.54 to 0.90; P = 0.005). The results were consistent across prespecified subgroups. Medication adherence as reported by the patients was higher in the polypill group than in the usual-care group. Adverse events were similar between groups. CONCLUSIONS: Treatment with a polypill containing aspirin, ramipril, and atorvastatin within 6 months after myocardial infarction resulted in a significantly lower risk of major adverse cardiovascular events than usual care. (Funded by the European Union Horizon 2020; SECURE ClinicalTrials.gov number, NCT02596126; EudraCT number, 2015-002868-17.).
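The event proportions reported in the abstract can be checked with simple arithmetic. Note that the crude risk ratio computed below is not the hazard ratio (which requires time-to-event modeling), though with similar follow-up in both arms it lands close to the reported 0.76:

```python
# Event counts for the primary composite outcome, as reported.
polypill_events, polypill_n = 118, 1237
usual_events, usual_n = 156, 1229

risk_polypill = polypill_events / polypill_n   # crude risk, polypill arm
risk_usual = usual_events / usual_n            # crude risk, usual-care arm
crude_rr = risk_polypill / risk_usual          # crude risk ratio (not the HR)

print(f"polypill: {risk_polypill:.1%}, usual care: {risk_usual:.1%}, "
      f"crude risk ratio: {crude_rr:.2f}")
```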


Subject(s)
Angiotensin-Converting Enzyme Inhibitors , Cardiovascular Diseases , Hydroxymethylglutaryl-CoA Reductase Inhibitors , Platelet Aggregation Inhibitors , Angiotensin-Converting Enzyme Inhibitors/adverse effects , Angiotensin-Converting Enzyme Inhibitors/therapeutic use , Aspirin/adverse effects , Aspirin/therapeutic use , Atorvastatin/adverse effects , Atorvastatin/therapeutic use , Cardiovascular Diseases/etiology , Cardiovascular Diseases/mortality , Cardiovascular Diseases/prevention & control , Humans , Hydroxymethylglutaryl-CoA Reductase Inhibitors/adverse effects , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Ischemic Stroke/prevention & control , Myocardial Infarction/complications , Myocardial Infarction/prevention & control , Myocardial Infarction/therapy , Platelet Aggregation Inhibitors/adverse effects , Platelet Aggregation Inhibitors/therapeutic use , Ramipril/adverse effects , Ramipril/therapeutic use , Secondary Prevention/methods
8.
PLoS Biol ; 20(7): e3001713, 2022 07.
Article in English | MEDLINE | ID: mdl-35834569

ABSTRACT

Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in delta (approximately <2 Hz) and theta bands (approximately 2 to 7 Hz), and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta-gamma phase-amplitude coupling occurred, but did not differ between the syntactic structures. Spectral-temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language. 
The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain, and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics.
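Phase synchronization between two signals is commonly quantified with the phase-locking value (PLV): the magnitude of the average phase difference on the unit circle. The sketch below is a generic PLV computation on toy sinusoids, not the paper's EEG pipeline; it builds the analytic signal with a numpy-only FFT construction (the same one scipy.signal.hilbert uses) and assumes an even-length signal.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal for an even-length real signal:
    keep DC and Nyquist, double positive frequencies, zero the rest."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value: 1 = perfectly constant phase lag, 0 = none."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

fs = 500
t = np.arange(0, 2, 1 / fs)                 # 1000 samples (even)
rng = np.random.default_rng(1)
carrier = np.sin(2 * np.pi * 3 * t)         # 3 Hz "delta-band" component
locked = np.sin(2 * np.pi * 3 * t + 0.4)    # same rhythm, constant lag
noise = rng.normal(size=t.size)

print(plv(carrier, locked))   # near 1: phases locked
print(plv(carrier, noise))    # small: no consistent phase relation
```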


Subject(s)
Language , Speech Perception , Comprehension/physiology , Electroencephalography/methods , Humans , Linguistics , Speech Perception/physiology
9.
Cognition ; 223: 105037, 2022 06.
Article in English | MEDLINE | ID: mdl-35123218

ABSTRACT

Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: They begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, large proportions of the segments comprised only one or two words, which on our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore only provide limited information about the planning and timing of turns.
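The corpora themselves are not reproduced here, but the kind of count the abstract reports, the proportion of speaker segments that are only one or two words long, is straightforward to compute from any transcript. A toy illustration with invented segments:

```python
# Toy transcript: (speaker, segment) pairs. The actual analyses used
# transcribed Dutch, German, and English conversational corpora.
segments = [
    ("A", "yes"),
    ("B", "did you see the game last night"),
    ("A", "no way"),
    ("B", "it went to penalties"),
    ("A", "oh"),
    ("B", "hm"),
]

lengths = [len(text.split()) for _, text in segments]
short = sum(1 for n in lengths if n <= 2)      # one- or two-word segments
proportion_short = short / len(lengths)
print(f"{proportion_short:.0%} of segments are one or two words long")
```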


Subject(s)
Communication , Speech , Humans , Language , Phonetics , Psycholinguistics , Speech/physiology
10.
Front Psychol ; 12: 693124, 2021.
Article in English | MEDLINE | ID: mdl-34603124

ABSTRACT

Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors that facilitate speech production in conversational compared to laboratory settings. Here we highlight one of them, the impact of competition for turns. In multi-party conversations, speakers often compete for turns. In quantitative corpus analyses of multi-party conversation, the fastest response determines the recorded turn transition time. In contrast, in dyadic conversations such competition for turns is much less likely to arise, and in laboratory experiments with individual participants it does not arise at all. Therefore, all responses tend to be recorded. Thus, competition for turns may reduce the recorded mean turn transition times in multi-party conversations for a simple statistical reason: slow responses are not included in the means. We report two studies illustrating this point. We first report the results of simulations showing how much the response times in a laboratory experiment would be reduced if, for each trial, instead of recording all responses, only the fastest responses of several participants responding independently on the trial were recorded. We then present results from a quantitative corpus analysis comparing turn transition times in dyadic and triadic conversations. There was no significant group size effect in question-response transition times, where the present speaker often selects the next one, thus reducing competition between speakers. But, as predicted, triads showed shorter turn transition times than dyads for the remaining turn transitions, where competition for the floor was more likely to arise. 
Together, these data show that turn transition times in conversation should be interpreted in the context of group size, turn transition type, and social setting.
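The statistical point here, that recording only the fastest of several independent responders shrinks the mean, can be illustrated with a small simulation. The distribution and its parameters below are toy choices, not those of the reported study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

def sample_rts(n_speakers):
    """Independent turn-transition times in ms for each speaker on each
    trial (toy lognormal distribution)."""
    return rng.lognormal(mean=6.0, sigma=0.5, size=(n_trials, n_speakers))

# One responder: every response is recorded.
solo = sample_rts(1)
# Three competitors for the floor: only the fastest response is recorded.
triad = sample_rts(3).min(axis=1)

print(f"mean recorded RT, single responder: {solo.mean():6.0f} ms")
print(f"mean recorded RT, fastest of three: {triad.mean():6.0f} ms")
```

The fastest-of-three mean is substantially lower even though every individual speaker draws from the same distribution, which is exactly the selection effect the abstract attributes to competition for turns.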

11.
Z Evid Fortbild Qual Gesundhwes ; 165: 83-91, 2021 Oct.
Article in German | MEDLINE | ID: mdl-34474992

ABSTRACT

BACKGROUND: In the joint project "Mobile Care Backup" funded by the German Federal Ministry of Education and Research, the smartphone-based app "MoCaB" was developed in close cooperation with informal caregivers. It provides individualized, algorithm-based information and can accompany and support caring relatives in everyday life. After a multi-step development, informal caregivers tested the MoCaB app in a home setting at the end of the research project. The goal was to find out how participants evaluated MoCaB and in which form the app can provide support to informal caregivers. METHODS: Eighteen participants caring for relatives took part in a four-week test of MoCaB. Guideline-based qualitative interviews to record usage behavior and experiences with the app were conducted after two and four weeks of testing, transcribed and analyzed using qualitative content analysis. RESULTS: The participants described the care-related information as helpful. The individualized, algorithm-based mode of information delivery and the exercises provided for family caregivers were generally rated as helpful, but their use depends on the individual usage style. Three dimensions can describe the effects of MoCaB: 1) expansion of care-relevant knowledge, 2) stimulation of self-reflection, and 3) behavior towards the care recipients. DISCUSSION: With few exceptions, the testing caregivers felt that the MoCaB app was enriching. The support dimensions have an effect at different points in everyday life and vary in intensity, depending on the duration of the existing care activity and the individual preferences of the users. CONCLUSION: The way in which caregivers used the app was not always consistent with the expected behaviors. This demonstrates the relevance of open-ended, qualitative research methods in the evaluation of health apps.


Subject(s)
Caregivers , Mobile Applications , Exercise , Germany , Humans
12.
J Exp Psychol Gen ; 150(10): 2167-2174, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34138601

ABSTRACT

Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here, we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that (a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users and (b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic). (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Language , Literacy , Child , Humans
13.
Brain Lang ; 218: 104941, 2021 07.
Article in English | MEDLINE | ID: mdl-34015683

ABSTRACT

Lexical-processing declines are a hallmark of aging. However, the extent of these declines may vary as a function of different factors. Motivated by findings from neurodegenerative diseases and healthy aging, we tested whether 'motor-relatedness' (the degree to which words are associated with particular human body movements) might moderate such declines. We investigated this question by examining data from three experiments. The experiments were carried out in different languages (Dutch, German, English) using different tasks (lexical decision, picture naming), and probed verbs and nouns, in all cases controlling for potentially confounding variables (e.g., frequency, age-of-acquisition, imageability). Whereas 'non-motor words' (e.g., steak) showed age-related performance decreases in all three experiments, 'motor words' (e.g., knife) yielded either smaller decreases (in one experiment) or no decreases (in two experiments). The findings suggest that motor-relatedness can attenuate or even prevent age-related lexical declines, perhaps due to the relative sparing of neural circuitry underlying such words.


Subject(s)
Motor Skills , Vocabulary , Humans , Language
14.
J Exp Psychol Gen ; 150(9): 1772-1799, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33734778

ABSTRACT

In conversation, turns follow each other with minimal gaps. To achieve this, speakers must launch their utterances shortly before the predicted end of the partner's turn. We examined the relative importance of cues to partner utterance content and partner utterance length for launching coordinated speech. In three experiments, Dutch adult participants had to produce prepared utterances (e.g., vier, "four") immediately after a recording of a confederate's utterance (zeven, "seven"). To assess the role of corepresenting content versus attending to speech cues in launching coordinated utterances, we varied whether the participant could see the stimulus being named by the confederate, the confederate prompt's length, and whether within a block of trials, the confederate prompt's length was predictable. We measured how these factors affected the gap between turns and the participants' allocation of visual attention while preparing to speak. Using a machine-learning technique, model selection by k-fold cross-validation, we found that gaps were most strongly predicted by cues from the confederate speech signal, though some benefit was also conferred by seeing the confederate's stimulus. This shows that, at least in a simple laboratory task, speakers rely more on cues in the partner's speech than corepresentation of their utterance content. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
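Model selection by k-fold cross-validation, the technique named in the abstract, compares candidate models by their prediction error on held-out folds. The sketch below is a minimal generic version with simple linear predictors and simulated data; the predictor names and effect sizes are invented and are not the study's actual models:

```python
import numpy as np

def kfold_mse(X, y, k=5):
    """Mean squared prediction error of ordinary least squares,
    averaged over k held-out folds."""
    idx = np.arange(len(y))
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[fold] @ beta
        errors.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errors))

# Simulated data: the turn gap depends strongly on a speech-signal cue
# and only weakly on a visual cue (toy effect sizes).
rng = np.random.default_rng(3)
n = 200
speech_cue = rng.normal(size=n)    # e.g., cues in the prompt's speech
visual_cue = rng.normal(size=n)    # e.g., seeing the named stimulus
gap = 300 + 80 * speech_cue + 10 * visual_cue + rng.normal(scale=20, size=n)

intercept = np.ones((n, 1))
model_speech = np.column_stack([intercept, speech_cue])
model_visual = np.column_stack([intercept, visual_cue])

# Prefer the model with the lower cross-validated error.
best_is_speech = kfold_mse(model_speech, gap) < kfold_mse(model_visual, gap)
print("speech-cue model preferred:", best_is_speech)
```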


Subject(s)
Cues , Speech Perception , Adult , Cognition , Humans , Language , Speech
15.
Cognition ; 211: 104636, 2021 06.
Article in English | MEDLINE | ID: mdl-33647750

ABSTRACT

In recent years, it has become clear that attention plays an important role in spoken word production. Some of this evidence comes from distributional analyses of reaction time (RT) in regular picture naming and picture-word interference. Yet we lack a mechanistic account of how the properties of RT distributions come to reflect attentional processes and how these processes may in turn modulate the amount of conflict between lexical representations. Here, we present a computational account according to which attentional lapses allow for existing conflict to build up unsupervised on a subset of trials, thus modulating the shape of the resulting RT distribution. Our process model resolves discrepancies between outcomes of previous studies on semantic interference. Moreover, the model's predictions were confirmed in a new experiment where participants' motivation to remain attentive determined the size and distributional locus of semantic interference in picture naming. We conclude that process modeling of RT distributions importantly improves our understanding of the interplay between attention and conflict in word production. Our model thus provides a framework for interpreting distributional analyses of RT data in picture naming tasks.


Subject(s)
Pattern Recognition, Visual , Semantics , Attention , Humans , Reaction Time
16.
Cognition ; 210: 104620, 2021 05.
Article in English | MEDLINE | ID: mdl-33571814

ABSTRACT

Cross-linguistic differences in morphological complexity could have important consequences for language learning. Specifically, it is often assumed that languages with more regular, compositional, and transparent grammars are easier to learn by both children and adults. Moreover, it has been shown that such grammars are more likely to evolve in bigger communities. Together, this suggests that some languages are acquired faster than others, and that this advantage can be traced back to community size and to the degree of systematicity in the language. However, the causal relationship between systematic linguistic structure and language learnability has not been formally tested, despite its potential importance for theories on language evolution, second language learning, and the origin of linguistic diversity. In this pre-registered study, we experimentally tested the effects of community size and systematic structure on adult language learning. We compared the acquisition of different yet comparable artificial languages that were created by big or small groups in a previous communication experiment, which varied in their degree of systematic linguistic structure. We asked (a) whether more structured languages were easier to learn; and (b) whether languages created by the bigger groups were easier to learn. We found that highly systematic languages were learned faster and more accurately by adults, but that the relationship between language learnability and linguistic structure was typically non-linear: high systematicity was advantageous for learning, but learners did not benefit from partly or semi-structured languages. Community size did not affect learnability: languages that evolved in big and small groups were equally learnable, and there was no additional advantage for languages created by bigger groups beyond their degree of systematic structure. 
Furthermore, our results suggested that predictability is an important advantage of systematic structure: participants who learned more structured languages were better at generalizing these languages to new, unfamiliar meanings, and different participants who learned the same more structured languages were more likely to produce similar labels. That is, systematic structure may allow speakers to converge effortlessly, such that strangers can immediately understand each other.


Subject(s)
Language , Learning , Adult , Child , Communication , Humans , Language Development , Linguistics
17.
J Exp Psychol Learn Mem Cogn ; 47(3): 466-480, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33030939

ABSTRACT

In conversation, production and comprehension processes may overlap, causing interference. In 3 experiments, we investigated whether repetition priming can work as a supporting device, reducing costs associated with linguistic dual-tasking. Experiment 1 established the rate of decay of repetition priming from spoken words to picture naming for primes embedded in sentences. Experiments 2 and 3 investigated whether the rate of decay was faster when participants comprehended the prime while planning to name unrelated pictures. In all experiments, the primed picture followed the sentences featuring the prime on the same trial, or 10 or 50 trials later. The results of the 3 experiments were strikingly similar: robust repetition priming was observed when the primed picture followed the prime sentence. Thus, repetition priming was observed even when the primes were processed while the participants prepared an unrelated spoken utterance. Priming might, therefore, support utterance planning in conversation, where speakers routinely listen while planning their utterances. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Linguistics , Repetition Priming , Speech , Adolescent , Adult , Auditory Perception , Female , Humans , Male , Young Adult
18.
Sci Data ; 7(1): 429, 2020 12 08.
Article in English | MEDLINE | ID: mdl-33293542

ABSTRACT

This resource contains data from 112 Dutch adults (18-29 years of age) who completed the Individual Differences in Language Skills test battery that included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting resulting in high quality data due to tight monitoring of the experimental protocol and to the use of software and hardware that were optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.


Subject(s)
Individuality , Language , Adult , Cognition , Comprehension , Humans , Memory, Short-Term , Netherlands , Vocabulary , Young Adult
19.
Front Psychol ; 11: 593671, 2020.
Article in English | MEDLINE | ID: mdl-33240183

ABSTRACT

In everyday conversation, turns often follow each other immediately or overlap in time. It has been proposed that speakers achieve this tight temporal coordination between their turns by engaging in linguistic dual-tasking, i.e., by beginning to plan their utterance during the preceding turn. This raises the question of how speakers manage to co-ordinate speech planning and listening with each other. Experimental work addressing this issue has mostly concerned the capacity demands and interference arising when speakers retrieve some content words while listening to others. However, many contributions to conversations are not content words, but backchannels, such as "hm". Backchannels do not provide much conceptual content and are therefore easy to plan and respond to. To estimate how much they might facilitate speech planning in conversation, we determined their frequency in a Dutch and a German corpus of conversational speech. We found that 19% of the contributions in the Dutch corpus, and 16% of contributions in the German corpus were backchannels. In addition, many turns began with fillers or particles, most often translation equivalents of "yes" or "no," which are likewise easy to plan. We proposed that to generate comprehensive models of using language in conversation psycholinguists should study not only the generation and processing of content words, as is commonly done, but also consider backchannels, fillers, and particles.
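The frequency estimates in this abstract (19% and 16% backchannels, plus filler-initial turns) come from counting turn types in transcripts. A toy version of that count, with an invented token inventory and turn list rather than the Dutch and German corpora actually used:

```python
# Illustrative token sets; the real analyses used corpus-specific
# inventories of backchannels and turn-initial fillers/particles.
BACKCHANNELS = {"hm", "mm", "uh-huh"}
FILLERS = {"yes", "no", "well", "ja", "nee"}

turns = [
    "hm",
    "yes I saw it yesterday",
    "mm",
    "no not really",
    "we should leave early",
    "uh-huh",
]

n_backchannel = sum(1 for t in turns if t in BACKCHANNELS)
n_filler_initial = sum(
    1 for t in turns
    if t not in BACKCHANNELS and t.split()[0] in FILLERS
)
print(f"backchannels: {n_backchannel / len(turns):.0%}")
print(f"filler-initial turns: {n_filler_initial / len(turns):.0%}")
```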

20.
J Neurosci ; 40(49): 9467-9475, 2020 12 02.
Article in English | MEDLINE | ID: mdl-33097640

ABSTRACT

Neural oscillations track linguistic information during speech comprehension (Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (Doelling et al., 2014; Zoefel and VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low-frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backward acoustically matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical (1.9-2.8 Hz) timescales, suggesting that the delta-band is modulated by lexically driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. 
This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin and Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.

SIGNIFICANCE STATEMENT Biological systems like the brain encode their environment not only by reacting in a series of stimulus-driven responses, but by combining stimulus-driven information with endogenous, internally generated, inferential knowledge and meaning. Understanding language from speech is the human benchmark for this. Much research focuses on the purely stimulus-driven response, but here, we focus on the goal of language behavior: conveying structure and meaning. To that end, we use naturalistic stimuli that contrast acoustic-prosodic and lexical-semantic information to show that, during spoken language comprehension, oscillatory modulations reflect computations related to inferring structure and meaning from the acoustic signal. Our experiment provides the first evidence to date that compositional structure and meaning organize the oscillatory response, above and beyond prosodic and lexical controls.


Subject(s)
Psycholinguistics , Acoustic Stimulation , Adult , Comprehension/physiology , Delta Rhythm/physiology , Electroencephalography , Female , Humans , Male , Mental Processes/physiology , Semantics , Speech Perception , Young Adult
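The mutual information (MI) statistic used in the study above quantifies how much knowing one signal reduces uncertainty about another. A minimal sketch of MI between two discretized sequences (the data and binning are illustrative; the study's actual pipeline involves band-pass filtering and phase extraction, which is not shown here):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    # MI in bits between two equal-length discrete sequences
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly dependent binary signals share maximal information
a = [0, 1, 0, 1, 0, 1, 0, 1]
b = [1, 0, 1, 0, 1, 0, 1, 0]
mi = mutual_information(a, b)  # 1 bit: b is fully determined by a
```

Higher MI between an EEG band and a linguistic timescale indicates stronger tracking of that timescale.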