Results 1 - 16 of 16
1.
Cortex ; 177: 346-362, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38917725

ABSTRACT

Prediction has a fundamental role in language processing. However, predictions can be made at different levels, and it is not always clear whether speech sounds, morphemes, words, meanings, or communicative functions are anticipated during dialogues. Previous studies reported specific brain signatures of communicative pragmatic function, in particular enhanced brain responses immediately after encountering an utterance used to request an object from a partner, but relatively smaller ones when the same utterance was used for naming the object. The present experiment investigates whether similar neuropragmatic signatures emerge in recipients before the onset of upcoming utterances carrying different predictable communicative functions. Trials started with a context question and object pictures displayed on the screen, raising the participant's expectation that words from a specific semantic category (food or tool) would subsequently be used to either name or request one of the objects. Already 600 msec before utterance onset, a larger prediction potential was observed when a request was anticipated than when naming was expected. As this result is congruent with the neurophysiological difference previously observed right after the critical utterance, the anticipatory brain activity may index predictions about the social-communicative function of upcoming utterances. In addition, we found that the predictable semantic category of the upcoming word was likewise reflected in the anticipatory brain potential. Thus, the neurophysiological characteristics of the prediction potential can capture different types of upcoming linguistic information, including semantic and pragmatic aspects of an upcoming utterance and communicative action.
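The pre-onset contrast described in this abstract can be sketched as a minimal single-electrode computation: average each trial's amplitude over a window before utterance onset and compare conditions. The function name, the synthetic data, and the single-channel layout are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def pre_onset_amplitude(epochs, srate, onset_idx, window_ms=600):
    """Mean amplitude per trial over the window preceding utterance onset.

    epochs: (n_trials, n_samples) array of single-electrode EEG.
    """
    n_pre = int(window_ms * srate / 1000)
    return epochs[:, onset_idx - n_pre:onset_idx].mean(axis=1)

# Synthetic demo: 'request' trials carry an extra negative-going drift
# in the 600 ms before onset, mimicking a larger prediction potential.
rng = np.random.default_rng(0)
srate, onset = 500, 500                      # 2 s epochs, onset at 1 s
naming = rng.normal(0, 1, (40, 1000))
request = rng.normal(0, 1, (40, 1000))
request[:, 200:500] -= 2.0                   # drift over the last 600 ms pre-onset

spp_request = pre_onset_amplitude(request, srate, onset)
spp_naming = pre_onset_amplitude(naming, srate, onset)
```

Averaging these per-trial values and comparing conditions mirrors the condition contrast reported above, though the real analysis involves many electrodes, baseline correction and statistical testing.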

2.
Neuropsychologia ; 196: 108816, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38331022

ABSTRACT

Neural circuits related to language exhibit a remarkable ability to reorganize and adapt in response to visual deprivation. Particularly, early and late blindness induce distinct neuroplastic changes in the visual cortex, repurposing it for language and semantic processing. Interestingly, these functional changes confer a unique cognitive advantage - enhanced verbal working memory, particularly in early blindness. Yet, the underlying neuromechanisms and the impact on language and memory-related circuits remain not fully understood. Here, we applied a brain-constrained neural network mimicking the structural and functional features of the frontotemporal-occipital cortices, to model conceptual acquisition in early and late blindness. The results revealed differential expansion of conceptual-related neural circuits into deprived visual areas depending on the timing of visual loss, with the most prominent expansion in early blindness. This neural recruitment is fundamentally governed by the biological principles of neural circuit expansion and the absence of uncorrelated sensory input. Critically, the degree of these changes is constrained by the availability of neural matter previously allocated to visual experiences, as in the case of late blindness. Moreover, we shed light on the implications of visual deprivation for the neural underpinnings of verbal working memory, revealing longer reverberatory neural activity in 'blind models' as compared to the sighted ones. These findings provide a better understanding of the interplay between visual deprivation, neuroplasticity, language processing and verbal working memory.


Subject(s)
Language , Memory, Short-Term , Humans , Memory, Short-Term/physiology , Blindness , Brain , Occipital Lobe
3.
Cognition ; 242: 105635, 2024 01.
Article in English | MEDLINE | ID: mdl-37883821

ABSTRACT

Comprehenders are known to generate expectations about upcoming linguistic input at the sentence and discourse level. However, most previous studies on prediction focused mainly on word-induced brain activity rather than examining neural activity preceding a critical stimulus in discourse processing, where prediction actually takes place. In this EEG study, participants were presented with multiple sentences resembling a discourse, including conditional sentences with either 'only if' or bare 'if', which are characterized by different semantics, triggering stronger or weaker predictions about the possible continuation of the presented discourses, respectively. Results revealed that discourses including 'only if', as compared to discourses with bare 'if', triggered increased predictive neural activity before the expected critical word, resembling the readiness potential. Moreover, word-induced P300 brain responses were found to be enhanced by unpredictable discourse continuations and reduced in predictable discourse continuations. Intriguingly, brain responses preceding and following the critical word were found to be correlated, which yields evidence for predictive activity modulating word-induced processing on the discourse level. These findings shed light on the predictive nature of neural processes at the discourse level, critically advancing our understanding of the functional interconnection between discourse understanding and prediction processes in brain and mind.


Subject(s)
Brain , Semantics , Humans , Brain/physiology , Language , Linguistics , Comprehension/physiology
4.
Cereb Cortex ; 33(11): 6872-6890, 2023 05 24.
Article in English | MEDLINE | ID: mdl-36807501

ABSTRACT

Although teaching animals a few meaningful signs is usually time-consuming, children acquire words easily after only a few exposures, a phenomenon termed "fast-mapping." In contrast, most neural network learning algorithms fail to achieve reliable information storage quickly, raising the question of whether a mechanistic explanation of fast-mapping is possible. Here, we applied brain-constrained neural models mimicking fronto-temporal-occipital regions to simulate key features of semantic associative learning. We compared networks (i) with prior encounters with phonological and conceptual knowledge, as claimed by fast-mapping theory, and (ii) without such prior knowledge. Fast-mapping simulations showed word-specific representations to emerge quickly after 1-10 learning events, whereas direct word learning yielded word-meaning mappings only after 40-100 events. Furthermore, hub regions appeared to be essential for fast-mapping, and attention facilitated it, but was not strictly necessary. These findings provide a better understanding of the critical mechanisms underlying the human brain's unique ability to acquire new words rapidly.
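The contrast between fast-mapping with consolidated prior representations and slow learning from scratch can be caricatured with a Hebbian outer-product associator. This is purely a toy (the one-hot versus noisy coding, dimensions and parameters are my assumptions, not the authors' brain-constrained network): clean, pre-learned patterns permit one-shot mapping, while noisy, unconsolidated patterns must be averaged over many repetitions.

```python
import numpy as np

def mapping_accuracy(n_events, noisy, dim=50, n_words=5, sigma=1.0, seed=0):
    """Word->concept retrieval accuracy after Hebbian co-activation learning.

    noisy=False models consolidated prior representations (clean one-hot
    patterns, the fast-mapping case); noisy=True models unconsolidated,
    noise-corrupted patterns whose correlation must be averaged out.
    """
    rng = np.random.default_rng(seed)
    eye = np.eye(dim)
    W = np.zeros((dim, dim))
    for i in range(n_words):
        for _ in range(n_events):
            word = eye[i] + (sigma * rng.normal(size=dim) if noisy else 0.0)
            concept = eye[i] + (sigma * rng.normal(size=dim) if noisy else 0.0)
            W += np.outer(word, concept)   # Hebbian: co-active -> associate
    # Retrieval with clean probes: does argmax recover the paired concept?
    hits = sum(int(np.argmax(eye[i] @ W)) == i for i in range(n_words))
    return hits / n_words

acc_fast = mapping_accuracy(1, noisy=False)    # one exposure suffices
acc_slow = mapping_accuracy(100, noisy=True)   # needs many repetitions
```

With these toy settings, a single clean pairing already gives perfect retrieval, whereas noisy patterns need on the order of tens of repetitions, loosely echoing the 1-10 versus 40-100 learning events reported above.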


Subject(s)
Brain , Semantics , Child , Humans , Linguistics , Brain Mapping , Occipital Lobe
5.
Brain Lang ; 236: 105203, 2023 01.
Article in English | MEDLINE | ID: mdl-36470125

ABSTRACT

What makes human communication exceptional is the ability to grasp a speaker's intentions beyond what is said verbally. How the brain processes communicative functions is one of the central concerns of the neurobiology of language and pragmatics. Linguistic-pragmatic theories define these functions as speech acts, and various pragmatic traits characterise them at the levels of propositional content, action sequence structure, related commitments and social aspects. Here I discuss recent neurocognitive studies, which have shown that the use of identical linguistic signs in conveying different communicative functions elicits distinct and ultra-rapid neural responses. Interestingly, cortical areas show differential involvement underlying various pragmatic features related to theory-of-mind, emotion and action for specific speech acts expressed with the same utterances. Drawing on a neurocognitive model, I posit that understanding speech acts involves the expectation of typical partner follow-up actions and that this predictive knowledge is immediately reflected in mind and brain.


Subject(s)
Linguistics , Speech , Humans , Speech/physiology , Language , Communication , Brain/diagnostic imaging , Brain/physiology , Comprehension/physiology
6.
Sci Rep ; 12(1): 16053, 2022 09 26.
Article in English | MEDLINE | ID: mdl-36163225

ABSTRACT

Understanding language semantically related to actions activates the motor cortex. This activation is sensitive to semantic information such as the body part used to perform the action (e.g. arm-/leg-related action words). Additionally, motor movements of the hands/feet can have a causal effect on memory maintenance of action words, suggesting that the involvement of motor systems extends to working memory. This study examined brain correlates of verbal memory load for action-related words using event-related fMRI. Seventeen participants saw either four identical or four different words from the same category (arm-/leg-related action words) and then performed a nonmatching-to-sample task. Results show that verbal memory maintenance in the high-load condition produced greater activation in left premotor and supplementary motor cortex, along with posterior-parietal areas, indicating that verbal memory circuits for action-related words include the cortical action system. Somatotopic memory load effects of arm- and leg-related words were observed, but only at more anterior cortical regions than was found in earlier studies employing passive reading tasks. These findings support a neurocomputational model of distributed action-perception circuits (APCs), according to which language understanding is manifest as full ignition of APCs, whereas working memory is realized as reverberant activity receding to multimodal prefrontal and lateral temporal areas.


Subject(s)
Magnetic Resonance Imaging , Motor Cortex , Brain/diagnostic imaging , Brain/physiology , Brain Mapping , Humans , Language , Magnetic Resonance Imaging/methods , Memory, Short-Term , Motor Cortex/diagnostic imaging , Motor Cortex/physiology
7.
Cereb Cortex ; 32(21): 4885-4901, 2022 10 20.
Article in English | MEDLINE | ID: mdl-35136980

ABSTRACT

During conversations, speech prosody provides important clues about the speaker's communicative intentions. In many languages, a rising vocal pitch at the end of a sentence typically expresses a question function, whereas a falling pitch suggests a statement. Here, the neurophysiological basis of intonation and speech act understanding was investigated with high-density electroencephalography (EEG) to determine whether prosodic features are reflected at the neurophysiological level. Already approximately 100 ms after the sentence-final word differing in prosody, questions and statements expressed with the same sentences led to different neurophysiological activity recorded in the event-related potential. Interestingly, low-pass filtered sentences and acoustically matched nonvocal musical signals failed to show any neurophysiological dissociations, thus suggesting that the physical intonation alone cannot explain this modulation. Our results show rapid neurophysiological indexes of prosodic communicative information processing that emerge only when pragmatic and lexico-semantic information are fully expressed. The early enhancement of question-related activity compared with statements was due to sources in the articulatory-motor region, which may reflect the richer action knowledge immanent to questions, namely the expectation of the partner action of answering the question. The present findings demonstrate a neurophysiological correlate of prosodic communicative information processing, which enables humans to rapidly detect and understand speaker intentions in linguistic interactions.


Subject(s)
Speech Perception , Speech , Humans , Speech Perception/physiology , Evoked Potentials/physiology , Electroencephalography/methods , Linguistics
9.
Nat Rev Neurosci ; 22(8): 488-502, 2021 08.
Article in English | MEDLINE | ID: mdl-34183826

ABSTRACT

Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative, hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning to implementation of inhibition and control, along with neuroanatomical properties including areal structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.


Subject(s)
Brain/physiology , Cognition/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Humans , Neuronal Plasticity/physiology
10.
Cortex ; 135: 127-145, 2021 02.
Article in English | MEDLINE | ID: mdl-33360757

ABSTRACT

People normally know what they want to communicate before they start speaking. However, brain indicators of communication are typically observed only after speech act onset, and it is unclear when any anticipatory brain activity prior to speaking might first emerge, along with the communicative intentions it possibly reflects. Here, we investigated brain activity prior to the production of different speech act types, request and naming actions performed by uttering single words embedded into language games with a partner, similar to natural communication. Starting ca. 600 msec before speech onset, an event-related potential maximal at fronto-central electrodes, which resembled the Readiness Potential, was larger when preparing requests compared to naming actions. Analysis of the cortical sources of this anticipatory brain potential suggests a relatively stronger involvement of fronto-central motor regions for requests, which may reflect the speaker's expectation of the partner actions typically following requests, e.g., the handing over of a requested object. Our results indicate that different neuronal circuits underlying the processing of different speech act types become active even before speaking. Results are discussed in light of previous work addressing the neural basis of speech act understanding and predictive brain indexes of language comprehension.


Subject(s)
Speech Perception , Speech , Brain , Comprehension , Humans , Language
11.
Cereb Cortex ; 31(3): 1553-1568, 2021 02 05.
Article in English | MEDLINE | ID: mdl-33108460

ABSTRACT

With strong and valid predictions, grasping a message is easy, whereas more demanding processing is required in the absence of robust expectations. We here demonstrate brain correlates of the interplay between prediction and perception mechanisms in the understanding of meaningful sentences. Sentence fragments that strongly predict subsequent words induced anticipatory brain activity preceding the expected words; this potential was absent if context did not strongly predict subsequent words. Subjective reports of certainty about upcoming words and objective corpus-based measures correlated with the size of the anticipatory signal, thus establishing its status as a semantic prediction potential (SPP). Crucially, there was an inverse correlation between the SPP and the N400 brain response. The main cortical generators of SPP and N400 were found in inferior prefrontal cortex and posterior temporal cortex, respectively. Interestingly, sentence meaning was reflected by both measures, with additional category-specific sources of SPPs and N400s falling into parieto-temporo-occipital (visual) and frontocentral (sensorimotor) areas for animal- and tool-related words, respectively. These results show that the well-known brain index of semantic comprehension, N400, has an antecedent with different brain localization but similar semantic discriminatory function. We discuss whether N400 dynamics may causally depend on mechanisms underlying SPP size and sources.


Subject(s)
Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Adult , Electroencephalography , Female , Humans , Male , Semantics
12.
Sci Rep ; 9(1): 16285, 2019 11 08.
Article in English | MEDLINE | ID: mdl-31705052

ABSTRACT

During everyday social interaction, gestures are a fundamental part of human communication. The communicative pragmatic role of hand gestures and their interaction with spoken language has been documented at the earliest stage of language development, in which two types of indexical gestures are most prominent: the pointing gesture for directing attention to objects and the give-me gesture for making requests. Here we study, in adult human participants, the neurophysiological signatures of gestural-linguistic acts of communicating the pragmatic intentions of naming and requesting by simultaneously presenting written words and gestures. Already at ~150 ms, brain responses diverged between naming and request actions expressed by word-gesture combinations, whereas the same gestures presented in isolation elicited their earliest neurophysiological dissociations significantly later (at ~210 ms). There was an early enhancement of request-evoked brain activity as compared with naming, which was due to sources in the frontocentral cortex, consistent with access to action knowledge in request understanding. In addition, an enhanced N400-like response indicated late semantic integration of gesture-language interaction. The present study demonstrates that word-gesture combinations used to express communicative pragmatic intentions speed up the brain correlates of comprehension processes - compared with gesture-only understanding - thereby calling into question current serial linguistic models, which place pragmatic function decoding at the end of the language comprehension cascade. Instead, information about the social-interactive role of communicative acts is processed instantaneously.


Subject(s)
Brain/physiology , Communication , Comprehension , Gestures , Verbal Behavior , Brain Mapping , Cerebral Cortex/physiology , Event-Related Potentials, P300 , Female , Humans , Male , Neurophysiology
13.
Sci Rep ; 9(1): 3579, 2019 03 05.
Article in English | MEDLINE | ID: mdl-30837569

ABSTRACT

In blind people, the visual cortex takes on higher cognitive functions, including language. Why this functional reorganisation mechanistically emerges at the neuronal circuit level is still unclear. Here, we use a biologically constrained network model implementing features of anatomical structure, neurophysiological function and connectivity of fronto-temporal-occipital areas to simulate word-meaning acquisition in visually deprived and undeprived brains. We observed that, only under visual deprivation, distributed word-related neural circuits 'grew into' the deprived visual areas, which therefore adopted a linguistic-semantic role. Three factors are crucial for explaining this deprivation-related growth: changes in the network's activity balance brought about by the absence of uncorrelated sensory input, the connectivity structure of the network, and Hebbian correlation learning. In addition, the blind model revealed long-lasting spiking neural activity compared to the sighted model during word recognition, which is a neural correlate of enhanced verbal working memory. The present neurocomputational model offers a neurobiological account for neural changes following sensory deprivation, thus closing the gap between cellular-level mechanisms and system-level linguistic and semantic function.
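The stated mechanism (Hebbian correlation learning plus the absence of uncorrelated sensory input) can be illustrated with a single 'visual' unit. This is a deliberately reduced sketch, not the paper's spiking network; the unit model, learning rate, noise level and saturation bound are invented for illustration.

```python
import numpy as np

def final_weight(deprived, steps=100, lr=0.1, seed=1):
    """Final word->'visual unit' weight after Hebbian learning.

    deprived=True: no sensory drive (blindness); the unit's activity is
    driven by the word projection alone, so it correlates with the word
    input and the weight grows to saturation.
    deprived=False: strong uncorrelated sensory noise decorrelates the
    unit from the word input, slowing Hebbian growth.
    """
    rng = np.random.default_rng(seed)
    w = 0.1                                  # weak initial word projection
    for _ in range(steps):
        drive = 0.0 if deprived else rng.normal(0, 3.0)  # uncorrelated input
        v = np.tanh(w + drive)               # unit activity during a word
        w = min(w + lr * v, 5.0)             # Hebbian update, saturated
    return w
```

Run alone, the deprived unit's weight saturates while the sighted unit's weight stays far lower, a one-unit caricature of circuits 'growing into' deprived visual areas only when uncorrelated sensory input is absent.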


Subject(s)
Blindness/physiopathology , Language , Models, Neurological , Visual Cortex/physiopathology , Humans , Learning
14.
Front Comput Neurosci ; 12: 88, 2018.
Article in English | MEDLINE | ID: mdl-30459584

ABSTRACT

One of the most controversial debates in cognitive neuroscience concerns the cortical locus of semantic knowledge and processing in the human brain. Experimental data revealed the existence of various cortical regions relevant for meaning processing, ranging from semantic hubs generally involved in semantic processing to modality-preferential sensorimotor areas involved in the processing of specific conceptual categories. Why and how the brain uses such complex organization for conceptualization can be investigated using biologically constrained neurocomputational models. Here, we improve pre-existing neurocomputational models of semantics by incorporating spiking neurons and a rich connectivity structure between the model 'areas' to mimic important features of the underlying neural substrate. Semantic learning and symbol grounding in action and perception were simulated by associative learning between co-activated neuron populations in frontal, temporal and occipital areas. As a result of Hebbian learning of the correlation structure of symbol, perception and action information, distributed cell assembly circuits emerged across various cortices of the network. These semantic circuits showed category-specific topographical distributions, reaching into motor and visual areas for action- and visually-related words, respectively. All types of semantic circuits included large numbers of neurons in multimodal connector hub areas, which is explained by cortical connectivity structure and the resultant convergence of phonological and semantic information on these zones. Importantly, these semantic hub areas exhibited some category-specificity, which was less pronounced than that observed in primary and secondary modality-preferential cortices. The present neurocomputational model integrates seemingly divergent experimental results about conceptualization and explains both semantic hubs and category-specific areas as an emergent process causally determined by two major factors: neuroanatomical connectivity structure and correlated neuronal activation during language learning.

15.
Neuropsychologia ; 98: 111-129, 2017 04.
Article in English | MEDLINE | ID: mdl-27394150

ABSTRACT

Neuroimaging and patient studies show that different areas of cortex respectively specialize for general and selective, or category-specific, semantic processing. Why are there both semantic hubs and category-specificity, and why do they emerge in different cortical regions? Can the activation time-course of these areas be predicted and explained by brain-like network models? In the present work, we extend a neurocomputational model of human cortical function to simulate the time-course of cortical processes of understanding meaningful concrete words. The model implements frontal and temporal cortical areas for language, perception, and action along with their connectivity. It uses Hebbian learning to semantically ground words in aspects of their referential object- and action-related meaning. Compared with earlier proposals, the present model incorporates additional neuroanatomical links supported by connectivity studies and downscaled synaptic weights in order to control for functional between-area differences purely due to the number of in- or output links of an area. We show that learning of semantic relationships between words and the objects and actions these symbols are used to speak about leads to the formation of distributed circuits, which all include neuronal material in connector hub areas bridging between sensory and motor cortical systems. Therefore, these connector hub areas acquire a role as semantic hubs. By differentially reaching into motor or visual areas, the cortical distributions of the emergent 'semantic circuits' reflect aspects of the represented symbols' meaning, thus explaining category-specificity. The improved connectivity structure of our model entails a degree of category-specificity even in the 'semantic hubs' of the model. The relative time-course of activation of these areas is typically fast and near-simultaneous, with semantic hubs central to the network structure activating before modality-preferential areas carrying semantic information.


Subject(s)
Brain Mapping , Cerebral Cortex/anatomy & histology , Cerebral Cortex/physiology , Learning/physiology , Models, Neurological , Neural Pathways/physiology , Semantics , Analysis of Variance , Computer Simulation , Female , Humans , Male
16.
Front Comput Neurosci ; 10: 145, 2016.
Article in English | MEDLINE | ID: mdl-28149276

ABSTRACT

Experimental evidence indicates that neurophysiological responses to well-known meaningful sensory items and symbols (such as familiar objects, faces, or words) differ from those to matched but novel and senseless materials (unknown objects, scrambled faces, and pseudowords). Spectral responses in the high beta- and gamma-band have been observed to be generally stronger to familiar stimuli than to unfamiliar ones. These differences have been hypothesized to be caused by the activation of distributed neuronal circuits or cell assemblies, which act as long-term memory traces for learned familiar items only. Here, we simulated word learning using a biologically constrained neurocomputational model of the left-hemispheric cortical areas known to be relevant for language and conceptual processing. The 12-area spiking neural-network architecture implemented replicates physiological and connectivity features of primary, secondary, and higher-association cortices in the frontal, temporal, and occipital lobes of the human brain. We simulated elementary aspects of word learning in it, focussing specifically on semantic grounding in action and perception. As a result of spike-driven Hebbian synaptic plasticity mechanisms, distributed, stimulus-specific cell-assembly (CA) circuits spontaneously emerged in the network. After training, presentation of one of the learned "word" forms to the model correlate of primary auditory cortex induced periodic bursts of activity within the corresponding CA, leading to oscillatory phenomena in the entire network and spontaneous across-area neural synchronization. Crucially, Morlet wavelet analysis of the network's responses recorded during presentation of learned meaningful "word" and novel, senseless "pseudoword" patterns revealed stronger induced spectral power in the gamma-band for the former than the latter, closely mirroring differences found in neurophysiological data. Furthermore, coherence analysis of the simulated responses uncovered dissociated category specific patterns of synchronous oscillations in distant cortical areas, including indirectly connected primary sensorimotor areas. Bridging the gap between cellular-level mechanisms, neuronal-population behavior, and cognitive function, the present model constitutes the first spiking, neurobiologically, and anatomically realistic model able to explain high-frequency oscillatory phenomena indexing language processing on the basis of dynamics and competitive interactions of distributed cell-assembly circuits which emerge in the brain as a result of Hebbian learning and sensorimotor experience.
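The induced gamma-band contrast between learned "words" and novel "pseudowords" can be probed with a hand-rolled complex Morlet wavelet transform. This is a minimal stand-in for the Morlet wavelet analysis mentioned in the abstract; the synthetic signals and parameter choices are illustrative assumptions.

```python
import numpy as np

def morlet_power(signal, srate, freq, n_cycles=7):
    """Instantaneous power at `freq` via convolution with a complex
    Morlet wavelet (Gaussian-windowed complex exponential)."""
    sigma_t = n_cycles / (2 * np.pi * freq)            # temporal width
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / srate)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt((np.abs(wavelet) ** 2).sum())   # unit energy
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

# Synthetic check: a response containing a 40 Hz ('gamma') oscillation
# carries more 40 Hz power than variance-matched broadband noise.
srate = 500
t = np.arange(0, 1, 1 / srate)
rng = np.random.default_rng(2)
word_resp = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.normal(size=t.size)
pseudo_resp = rng.normal(size=t.size) * word_resp.std()
```

Comparing the mean 40 Hz power of the two signals reproduces, in miniature, the word-versus-pseudoword spectral contrast described above.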
