ABSTRACT
Motor behaviors are often planned long before execution but only released after specific sensory events. Planning and execution are each associated with distinct patterns of motor cortex activity. Key questions are how these dynamic activity patterns are generated and how they relate to behavior. Here, we investigate the multi-regional neural circuits that link an auditory "Go cue" and the transition from planning to execution of directional licking. Ascending glutamatergic neurons in the midbrain reticular and pedunculopontine nuclei show short-latency, phasic changes in spike rate that are selective for the Go cue. This signal is transmitted via the thalamus to the motor cortex, where it triggers a rapid reorganization of motor cortex state from planning-related activity to a motor command, which in turn drives the appropriate movement. Our studies show how the midbrain can control cortical dynamics via the thalamus for rapid and precise motor behavior.
Subject(s)
Motor Cortex , Movement , Thalamus , Animals , Mesencephalon , Mice , Motor Cortex/physiology , Neurons/physiology , Thalamus/physiology

ABSTRACT
Neural activity underlying short-term memory is maintained by interconnected networks of brain regions. It remains unknown how brain regions interact to maintain persistent activity while exhibiting robustness to corrupt information in parts of the network. We simultaneously measured activity in large neuronal populations across mouse frontal hemispheres to probe interactions between brain regions. Activity across hemispheres was coordinated to maintain coherent short-term memory. Across mice, we uncovered individual variability in the organization of frontal cortical networks. A modular organization was required for the robustness of persistent activity to perturbations: each hemisphere retained persistent activity during perturbations of the other hemisphere, thus preventing local perturbations from spreading. A dynamic gating mechanism allowed hemispheres to coordinate coherent information while gating out corrupt information. Our results show that robust short-term memory is mediated by redundant modular representations across brain regions. Redundant modular representations naturally emerge in neural network models that learned robust dynamics.
Subject(s)
Frontal Lobe/physiology , Nerve Net/physiology , Aging/physiology , Animals , Behavior, Animal , Cerebrum/physiology , Choice Behavior , Female , Light , Male , Mice , Models, Neurological , Motor Cortex/physiology , Neurons/physiology

ABSTRACT
To execute accurate movements, animals must continuously adapt their behavior to changes in their bodies and environments. Animals can learn changes in the relationship between their locomotor commands and the resulting distance moved, then adjust command strength to achieve a desired travel distance. It is largely unknown which circuits implement this form of motor learning, or how. Using whole-brain neuronal imaging and circuit manipulations in larval zebrafish, we discovered that the serotonergic dorsal raphe nucleus (DRN) mediates short-term locomotor learning. Serotonergic DRN neurons respond phasically to swim-induced visual motion, but little to motion that is not self-generated. During prolonged exposure to a given motosensory gain, persistent DRN activity emerges that stores the learned efficacy of motor commands and adapts future locomotor drive for tens of seconds. The DRN's ability to track the effectiveness of motor intent may constitute a computational building block for the broader functions of the serotonergic system. VIDEO ABSTRACT.
Subject(s)
Learning , Models, Neurological , Swimming , Zebrafish/physiology , Animals , Brain Mapping , Larva , Optogenetics , Raphe Nuclei/physiology , Serotonergic Neurons/cytology , Serotonergic Neurons/physiology , Spatial Processing

ABSTRACT
Nucleic acid-binding proteins (NABPs), including DNA-binding proteins (DBPs) and RNA-binding proteins (RBPs), play important roles in essential biological processes. To facilitate functional annotation and accurate prediction of different types of NABPs, many machine learning-based computational approaches have been developed. However, the datasets used for training and testing, as well as the prediction scopes in these studies, have limited their applications. In this paper, we developed new strategies to overcome these limitations by generating more accurate and robust datasets and developing deep learning-based methods, including both hierarchical and multi-class approaches, to predict the types of NABPs for any given protein. The deep learning models employ two convolutional neural network layers and one long short-term memory layer. Our approaches outperform existing DBP and RBP predictors, with balanced prediction between DBPs and RBPs, and are more practically useful for identifying novel NABPs. The multi-class approach greatly improves the prediction accuracy for DBPs and RBPs, especially for DBPs, with an improvement of ~12%. Moreover, we explored the prediction accuracy for single-stranded DNA-binding proteins and their effect on the overall prediction accuracy of NABP predictions.
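A predictor of this kind needs its protein sequences in numeric form before the convolutional layers can process them; one-hot encoding over the 20 standard amino acids is a common choice for such CNN/LSTM models. The sketch below shows only this input-encoding step, and the alphabet ordering, fixed length, and zero-padding scheme are our assumptions rather than details taken from the paper:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(seq, max_len=1000):
    """Encode a protein sequence as a (max_len, 20) one-hot matrix,
    truncated or zero-padded to max_len; unknown residues stay all-zero."""
    mat = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for i, aa in enumerate(seq[:max_len]):
        j = AA_INDEX.get(aa)
        if j is not None:
            mat[i, j] = 1.0
    return mat

x = one_hot_encode("MKV", max_len=5)  # (5, 20) matrix with three 1s
```

The resulting (max_len, 20) matrix is the typical input shape for a 1D convolution applied along the sequence axis.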
Subject(s)
Computational Biology , DNA-Binding Proteins , Deep Learning , RNA-Binding Proteins , RNA-Binding Proteins/metabolism , DNA-Binding Proteins/metabolism , Computational Biology/methods , Neural Networks, Computer , Humans

ABSTRACT
A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.
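As a concrete illustration of one circuit mechanism from this palette, recurrent excitation that exactly balances a neuron's intrinsic leak can sustain firing after the triggering input ends. The minimal rate model below is our own didactic sketch, not a model taken from any particular study:

```python
import numpy as np

def simulate_rate(w_rec, stim, dt=0.001, tau=0.02):
    """Leaky rate unit with recurrent self-excitation:
    tau * dr/dt = -r + w_rec * r + stim(t), with r kept non-negative."""
    r, trace = 0.0, []
    for s in stim:
        dr = (-r + w_rec * r + s) / tau
        r = max(0.0, r + dt * dr)
        trace.append(r)
    return np.array(trace)

# 100 ms of input at strength 5, then a 400 ms delay with no input.
stim = np.concatenate([np.full(100, 5.0), np.zeros(400)])

# w_rec = 1: recurrent drive exactly cancels the leak, so the rate
# reached at stimulus offset is held throughout the delay.
persistent = simulate_rate(w_rec=1.0, stim=stim)
# w_rec = 0: no recurrence, so activity decays after stimulus offset.
decaying = simulate_rate(w_rec=0.0, stim=stim)
```

With `w_rec = 1.0` the unit behaves as a line attractor, holding whatever rate it reached at stimulus offset; with `w_rec = 0.0` the same input decays away within a few time constants, capturing the basic distinction between circuit-maintained and purely stimulus-driven activity.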
Subject(s)
Action Potentials/physiology , Cerebral Cortex/physiology , Memory, Short-Term/physiology , Models, Neurological , Nerve Net/physiology , Animals , Neurons/physiology , Synaptic Transmission/physiology

ABSTRACT
MOTIVATION: Exploring potential long noncoding RNA (lncRNA)-disease associations (LDAs) plays a critical role in understanding disease etiology and pathogenesis. Given the high cost of biological experiments, developing computational methods is a practical necessity to effectively accelerate the experimental screening of candidate LDAs. However, given the high sparsity of LDA datasets, many computational models can hardly exploit enough knowledge to learn comprehensive patterns of node representations. Moreover, although metapath-based GNNs have recently been introduced into LDA prediction, they discard intermediate nodes along the meta-path, resulting in information loss. RESULTS: This paper presents a new multi-view contrastive heterogeneous graph attention network (GAT) for lncRNA-disease association prediction, MCHNLDA for brevity. Specifically, MCHNLDA first leverages rich biological data sources on lncRNAs, genes and diseases to construct two graph views: a feature structural graph (feature schema view) and an lncRNA-gene-disease heterogeneous graph (network topology view). Then, we design a cross-contrastive learning task to collaboratively guide the graph embeddings of the two views without relying on any labels. In this way, nodes with similar features and network topology are pulled closer together while other nodes are pushed away. Furthermore, we propose a heterogeneous contextual GAT in which a long short-term memory network is incorporated into the attention mechanism to effectively capture sequential structure information along the meta-path. Extensive experimental comparisons against several state-of-the-art methods show the effectiveness of the proposed framework. The code and data of the proposed framework are freely available at https://github.com/zhaoxs686/MCHNLDA.
Subject(s)
RNA, Long Noncoding , RNA, Long Noncoding/genetics , Learning

ABSTRACT
Major histocompatibility complex (MHC) class II molecules play a pivotal role in antigen presentation and the CD4+ T cell response. Accurate prediction of the immunogenicity of MHC class II-associated antigens is critical for vaccine design and cancer immunotherapies. However, current computational methods are limited by insufficient training data and algorithmic constraints, and the rules that govern which peptides are truly recognized by existing T cell receptors remain poorly understood. Here, we build a transfer learning-based, long short-term memory model named 'TLimmuno2' to predict whether an epitope-MHC class II complex can elicit a T cell response. By leveraging binding affinity data, TLimmuno2 shows superior performance compared with existing models on independent validation datasets. TLimmuno2 can identify genuinely immunogenic neoantigens in real-world cancer immunotherapy data. The identification of a significant MHC class II neoantigen-mediated immunoediting signal in The Cancer Genome Atlas pan-cancer dataset further suggests the robustness of TLimmuno2 in identifying truly immunogenic neoantigens undergoing negative selection during cancer evolution. Overall, TLimmuno2 is a powerful tool for predicting the immunogenicity of MHC class II-presented epitopes and could promote the development of personalized immunotherapies.
Subject(s)
Histocompatibility Antigens Class II , Neoplasms , Humans , HLA Antigens , Antigen Presentation , Machine Learning

ABSTRACT
With the development of genome sequencing technology, computational prediction of grain protein function has become an important task in bioinformatics. The experimental dataset comprises protein data from four grains: soybean, maize, indica rice and japonica rice. In this paper, a novel neural network algorithm, Chemical-SA-BiLSTM, is proposed for grain protein function prediction. The Chemical-SA-BiLSTM algorithm fuses the chemical properties of proteins with their amino acid sequences, and combines a self-attention mechanism with a bidirectional Long Short-Term Memory network. The experimental results show that the Chemical-SA-BiLSTM algorithm outperforms other classical neural network algorithms and predicts protein function more accurately, demonstrating its effectiveness for grain protein function prediction. The source code of our method is available at https://github.com/HwaTong/Chemical-SA-BiLSTM.
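The self-attention component paired with the BiLSTM can be illustrated with plain scaled dot-product attention over a sequence of hidden states. This is a generic NumPy sketch of the mechanism, with random weights standing in for the learned projections; it is not the paper's exact layer:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention over hidden states H (seq_len, d):
    each output position is a weighted mix of all the value vectors."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))  # e.g. four BiLSTM hidden states of width 8
Wq, Wk, Wv = [rng.normal(size=(8, 8)) for _ in range(3)]
out, w = self_attention(H, Wq, Wk, Wv)
```

Applied on top of BiLSTM outputs, this lets every sequence position attend to every other position, which is the property the combined architecture exploits.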
Subject(s)
Grain Proteins , Neural Networks, Computer , Algorithms , Proteins/chemistry , Software

ABSTRACT
Type 1 diabetes (T1D) outcome prediction plays a vital role in identifying novel risk factors, ensuring early patient care and designing cohort studies. TEDDY is a longitudinal cohort study that collects a vast amount of multi-omics and clinical data from its participants to explore the progression and markers of T1D. However, missing data in the omics profiles make outcome prediction a difficult task. TEDDY collected time series gene expression for less than 6% of enrolled participants. Additionally, for the participants whose gene expression was collected, 79% of time steps are missing. This study introduces an advanced bioinformatics framework for gene expression imputation and islet autoimmunity (IA) prediction. The imputation model generates synthetic data for participants with partially or entirely missing gene expression. The prediction model integrates the synthetic gene expression with other risk factors to achieve better predictive performance. Comprehensive experiments on TEDDY datasets show that: (1) Our pipeline can effectively integrate synthetic gene expression with family history, HLA genotype and SNPs to better predict IA status at 2 years (sensitivity 0.622, AUC 0.715) compared with the individual datasets and state-of-the-art results in the literature (AUC 0.682). (2) The synthetic gene expression contains predictive signals as strong as the true gene expression, reducing reliance on expensive and long-term longitudinal data collection. (3) Time series gene expression is crucial to the proposed improvement and shows significantly better predictive ability than cross-sectional gene expression. (4) Our pipeline is robust to limited data availability. Availability: Code is available at https://github.com/compbiolabucf/TEDDY.
Subject(s)
Diabetes Mellitus, Type 1 , Islets of Langerhans , Humans , Diabetes Mellitus, Type 1/genetics , Autoimmunity/genetics , Longitudinal Studies , Time Factors , Cross-Sectional Studies , Genetic Predisposition to Disease , Gene Expression

ABSTRACT
Hi-C experiments have been extensively used for studies of genomic structure. In the last few years, spatiotemporal Hi-C has contributed greatly to the investigation of genome dynamic reorganization. However, computational modeling and forecasting of spatiotemporal Hi-C data have not yet been reported in the literature. We present HiC4D to address the problem of forecasting spatiotemporal Hi-C data. We designed and benchmarked a novel network, residual ConvLSTM (ResConvLSTM), which combines a residual network with convolutional long short-term memory (ConvLSTM). We evaluated our new ResConvLSTM networks against five other methods, including a naïve network (NaiveNet) that we designed as a baseline and four outstanding video-prediction methods from the literature: ConvLSTM, spatiotemporal LSTM (ST-LSTM), self-attention LSTM (SA-LSTM) and simple video prediction (SimVP). We used eight different spatiotemporal Hi-C datasets for the blind test, including two from mouse embryogenesis, one from somatic cell nuclear transfer (SCNT) embryos, three embryogenesis datasets from different species and two non-embryogenesis datasets. Our evaluation results indicate that our ResConvLSTM networks almost always outperform the other methods on the eight blind-test datasets in terms of accurately predicting the Hi-C contact matrices at future time-steps. Our benchmarks also indicate that all of the benchmarked methods can successfully recover the boundaries of topologically associating domains called on the experimental Hi-C contact matrices. Taken together, our benchmarks suggest that HiC4D is an effective tool for predicting spatiotemporal Hi-C data. HiC4D is publicly available at both http://dna.cs.miami.edu/HiC4D/ and https://github.com/zwang-bioinformatics/HiC4D/.
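The residual design underlying ResConvLSTM follows the standard pattern y = x + F(x): the stacked block learns a correction to the identity mapping, which eases the training of deep networks. Below is a minimal sketch of the idea, with a simple function standing in for the ConvLSTM branch:

```python
import numpy as np

def residual_block(x, branch):
    """Residual connection: output = input + branch(input). If the branch
    outputs zeros (e.g. before training), the block is the identity,
    which is what makes deep residual stacks easier to train."""
    return x + branch(x)

x = np.array([1.0, 2.0, 3.0])
identity_out = residual_block(x, lambda v: np.zeros_like(v))
corrected = residual_block(x, lambda v: 0.1 * v)  # small learned correction
```

For contact-matrix forecasting, this means each stage only needs to predict the change from one time step to the next rather than the full matrix.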
Subject(s)
Genome , Genomics , Animals , Mice , Forecasting

ABSTRACT
Precise identification of transcription factor binding sites (TFBSs) is essential for understanding transcriptional regulatory processes and investigating cellular function. Although several deep learning algorithms have been created to predict TFBSs, the models' intrinsic mechanisms and prediction results are difficult to explain, and there is still room for improvement in prediction performance. We present DeepSTF, a unique deep learning architecture for predicting TFBSs by integrating DNA sequence and shape profiles. We are the first to use an improved transformer encoder structure in a TFBS prediction approach. DeepSTF extracts higher-order DNA sequence features using stacked convolutional neural networks (CNNs), while rich DNA shape profiles are extracted by combining the improved transformer encoder structure with bidirectional long short-term memory (Bi-LSTM); finally, the derived higher-order sequence features and representative shape profiles are integrated along the channel dimension to achieve accurate TFBS prediction. Experiments on 165 ENCODE chromatin immunoprecipitation sequencing (ChIP-seq) datasets show that DeepSTF considerably outperforms several state-of-the-art algorithms in predicting TFBSs, and we explain the usefulness of the transformer encoder structure and of the combined strategy of sequence features and shape profiles in capturing multiple dependencies and learning essential features. In addition, this paper examines the significance of DNA shape features in predicting TFBSs. The source code of DeepSTF is available at https://github.com/YuBinLab-QUST/DeepSTF/.
Subject(s)
DNA , Neural Networks, Computer , Binding Sites , Protein Binding , DNA/genetics , DNA/chemistry , Transcription Factors/genetics , Transcription Factors/chemistry

ABSTRACT
Promoters, short DNA regions of 50-1500 base pairs, play a critical role in the regulation of gene transcription. Genetic variations in promoters cause numerous serious diseases, such as cancer, cardiovascular disease, and inflammatory bowel disease. Consequently, the correct identification and characterization of promoters are important for drug discovery. However, experimental approaches to recognizing promoters and their strengths are challenging in terms of cost, time, and resources. Computational techniques are therefore highly desirable for the correct characterization of promoters from unannotated genomic data. Here, we designed a powerful two-layer deep learning-based predictor named "PROCABLES", which discriminates DNA samples as promoters in the first phase and as strong or weak promoters in the second phase. The proposed method utilizes five distinct features, namely word2vec, k-spaced nucleotide pairs, trinucleotide propensity-based features, trinucleotide composition, and electron-ion interaction pseudopotentials, to extract hidden patterns from the DNA sequence. A stacked framework is then formed by integrating a convolutional neural network (CNN) with bidirectional long short-term memory (LSTM) trained on these multi-view attributes. The PROCABLES model achieved accuracies of 0.971 and 0.920 and MCCs of 0.940 and 0.840 for the first and second layers, respectively, in ten-fold cross-validation. These results show that the proposed PROCABLES protocol outperforms advanced computational predictors targeting promoters and their types. In summary, this research will provide useful hints for the recognition of large-scale promoters in particular and other DNA problems in general.
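Of the five feature types, the k-spaced nucleotide pairs can be computed directly from the sequence: count how often each ordered base pair occurs with exactly k bases in between, then normalize. The sketch below is our own illustration of this general (CKSNAP-style) feature; the exact normalization used by PROCABLES is an assumption:

```python
from itertools import product

BASES = "ACGT"

def kspaced_pairs(seq, k):
    """Frequency of each ordered base pair (a, b) occurring as a..b with
    exactly k bases in between, normalized by the number of windows."""
    counts = dict.fromkeys(("".join(p) for p in product(BASES, repeat=2)), 0)
    n_windows = len(seq) - k - 1
    for i in range(n_windows):
        pair = seq[i] + seq[i + k + 1]
        if pair in counts:
            counts[pair] += 1
    return {p: c / n_windows for p, c in counts.items()}

feats = kspaced_pairs("ACGTACGT", k=0)  # 16-dimensional feature vector
```

Concatenating these vectors for several values of k gives a fixed-length numeric representation regardless of sequence length, which is what makes the feature convenient for the stacked CNN/LSTM model.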
Subject(s)
Deep Learning , Promoter Regions, Genetic , Humans , Neural Networks, Computer , Computational Biology/methods , DNA/genetics , DNA/chemistry

ABSTRACT
Virtual screening relies heavily on compound databases, but commercial databases are disadvantageous for this purpose: many compounds are patent-protected, and others have high toxicity or side effects. Therefore, this paper utilizes generative recurrent neural networks (RNNs) containing long short-term memory (LSTM) cells to learn the properties of drug compounds in DrugBank, aiming to obtain a new virtual-screening compound database with drug-like properties. A database of 26,316 compounds is obtained by this method. To evaluate its potential, a series of tests are performed, including analyses of chemical space, ADME properties, compound fragments, and synthesizability. These tests show that the database has good drug-like properties and relatively novel backbones, so its potential in virtual screening is tested further. Finally, a series of candidate compounds with completely new backbones are obtained through docking and binding free energy calculations.
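Generating new compounds with such a character-level LSTM amounts to repeatedly sampling the next SMILES token from the network's output distribution. The sampling step can be sketched as below; temperature sampling is a common choice for this kind of generator, though not necessarily the exact procedure used here, and the toy vocabulary is invented for illustration:

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from RNN output logits; lower temperature
    concentrates probability mass on the most likely tokens."""
    if rng is None:
        rng = np.random.default_rng()
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

rng = np.random.default_rng(0)
# Toy logits over a 4-token SMILES vocabulary, e.g. ["C", "O", "(", ")"].
logits = np.array([4.0, 1.0, 0.5, 0.5])
draws = [sample_token(logits, temperature=0.5, rng=rng) for _ in range(200)]
```

A full generator would feed each sampled token back into the LSTM and stop at an end-of-sequence token; invalid SMILES strings are then typically filtered out with a chemistry toolkit.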
Subject(s)
Deep Learning , Molecular Docking Simulation , Molecular Docking Simulation/methods , Enzyme Inhibitors/chemistry , Enzyme Inhibitors/pharmacology , Drug Evaluation, Preclinical/methods , Humans , Databases, Pharmaceutical , Neural Networks, Computer , Databases, Chemical

ABSTRACT
Playing a musical instrument engages numerous cognitive abilities, including sensory perception, selective attention, and short-term memory. Mounting evidence indicates that engaging these cognitive functions during musical training improves performance of these same functions. Yet it remains unclear to what extent these benefits may extend to nonmusical tasks, and what neural mechanisms may enable such transfer. Here, we conducted a preregistered randomized clinical trial in which nonmusicians underwent 8 wk of either digital musical rhythm training or word search as a control. Only the musical rhythm training placed demands on short-term memory, as well as on visual perception and selective attention, which are known to facilitate short-term memory. As hypothesized, only the rhythm training group exhibited improved short-term memory on a face recognition task, thereby providing important evidence that musical rhythm training can benefit performance on a nonmusical task. Analysis of electroencephalography data showed that neural activity associated with sensory processing and selective attention was unchanged by training. Rather, rhythm training facilitated neural activity associated with short-term memory encoding, as indexed by an increased P3 of the event-related potential to face stimuli. Moreover, short-term memory maintenance was enhanced, as evidenced by increased two-class (face/scene) decoding accuracy. Activity from both the encoding and maintenance periods highlights the right superior parietal lobule (SPL) as a source of training-related changes. Together, these results suggest that musical rhythm training may improve memory for faces by facilitating activity within the SPL that promotes how memories are encoded and maintained, which can be used in a domain-general manner to enhance performance on a nonmusical task.
Subject(s)
Attention , Facial Recognition , Memory, Short-Term , Music , Cognition , Music/psychology , Visual Perception

ABSTRACT
The foveal visual image region provides the human visual system with its highest acuity. However, it is unclear whether such a high-fidelity representational advantage is maintained when foveal image locations are committed to short-term memory. Here, we describe a paradoxically large distortion in humans' recall of foveal target locations. We briefly presented small but high-contrast points of light at eccentricities ranging from 0.1 to 12°, while subjects maintained their line of sight on a stable target. After a brief memory period, the subjects indicated the remembered target locations via computer-controlled cursors. The biggest localization errors, in terms of both directional deviations and amplitude percentage overshoots or undershoots, occurred for the most foveal targets, and such distortions were still present, albeit with qualitatively different patterns, when subjects shifted their gaze to indicate the remembered target locations. Foveal visual images are thus severely distorted in short-term memory.
Subject(s)
Fovea Centralis , Memory, Short-Term , Mental Recall , Fovea Centralis/physiology , Humans , Visual Perception

ABSTRACT
Understanding the neural mechanisms of conscious and unconscious experience is a major goal of fundamental and translational neuroscience. Here, we target the early visual cortex with a protocol of noninvasive, high-resolution alternating current stimulation while participants performed a delayed target-probe discrimination task and reveal dissociable mechanisms of mnemonic processing for conscious and unconscious perceptual contents. Entraining β-rhythms in bilateral visual areas preferentially enhanced short-term memory for seen information, whereas α-entrainment in the same region preferentially enhanced short-term memory for unseen information. The short-term memory improvements were frequency-specific and long-lasting. The results add a mechanistic foundation to existing theories of consciousness, call for revisions to these theories, and contribute to the development of nonpharmacological therapeutics for improving visual cortical processing.
Subject(s)
Consciousness , Visual Perception , Humans , Consciousness/physiology , Visual Perception/physiology , Unconsciousness , Memory, Short-Term

ABSTRACT
Neurons within dorsolateral prefrontal cortex (PFC) of primates are characterized by robust persistent spiking activity exhibited during the delay period of working memory tasks. This includes the frontal eye field (FEF) where nearly half of the neurons are active when spatial locations are held in working memory. Past evidence has established the FEF's contribution to the planning and triggering of saccadic eye movements as well as to the control of visual spatial attention. However, it remains unclear whether persistent delay activity reflects a similar dual role in movement planning and visuospatial working memory. We trained male monkeys to alternate between different forms of a spatial working memory task which could dissociate remembered stimulus locations from planned eye movements. We tested the effects of inactivation of FEF sites on behavioral performance in the different tasks. Consistent with previous studies, FEF inactivation impaired the execution of memory-guided saccades (MGSs), and impaired performance when remembered locations matched the planned eye movement. In contrast, memory performance was largely unaffected when the remembered location was dissociated from the correct eye movement response. Overall, the inactivation effects demonstrated clear deficits in eye movements, regardless of task type, but little or no evidence of a deficit in spatial working memory. Thus, our results indicate that persistent delay activity in the FEF contributes primarily to the preparation of eye movements and not to spatial working memory.

SIGNIFICANCE STATEMENT: Many frontal eye field (FEF) neurons exhibit spatially tuned persistent spiking activity during the delay period of working memory tasks. However, the role of the FEF in spatial working memory remains unresolved.
We tested the effects of inactivation of FEF sites on behavioral performance in different forms of a spatial working memory task, one of which dissociated the remembered stimulus locations from planned eye movements. We found that FEF inactivation produced clear deficits in eye movements, regardless of task type, but no deficit in spatial working memory when dissociated from those movements.
Subject(s)
Frontal Lobe , Memory, Short-Term , Animals , Male , Frontal Lobe/physiology , Eye Movements , Saccades , Neurons/physiology

ABSTRACT
Quantitative measurement of RNA expression levels through RNA-Seq is an ideal replacement for conventional cancer diagnosis via microscope examination. Currently, cancer-related RNA-Seq studies focus on two aspects: classifying the status and tissue of origin of a sample and discovering marker genes. Existing studies typically identify marker genes by statistically comparing healthy and cancer samples. However, this approach overlooks marker genes with low expression-level differences and may be influenced by experimental results. This paper introduces "GENESO," a novel framework for pan-cancer classification and marker gene discovery using the occlusion method in conjunction with deep learning. We first trained a baseline deep LSTM neural network capable of distinguishing the origins and statuses of samples using RNA-Seq data. We then propose a novel marker gene discovery method called "Symmetrical Occlusion (SO)". It collaborates with the baseline LSTM network, mimicking the "gain of function" and "loss of function" of genes to quantitatively evaluate their importance in pan-cancer classification. By identifying the most important genes and isolating them to train new neural networks, we obtain higher-performance LSTM models that use only a reduced set of highly relevant genes. The baseline neural network achieves an impressive validation accuracy of 96.59% in pan-cancer classification. With the help of SO, the accuracy of the second network reaches 98.30% while using 67% fewer genes. Notably, our method excels at identifying marker genes that are not differentially expressed. Moreover, we assessed the feasibility of our method using single-cell RNA-Seq data, employing known marker genes as a validation test.
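The core of Symmetrical Occlusion, perturbing each gene's expression in both directions and scoring the change in the model's output, can be sketched model-agnostically. The perturbation magnitudes and the averaging rule below are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def symmetrical_occlusion(predict, x, scale=2.0):
    """Score each gene by perturbing its expression up ('gain of function')
    and down to zero ('loss of function'), averaging the absolute change
    in the model's output across the two perturbations."""
    base = predict(x)
    scores = np.zeros(len(x))
    for g in range(len(x)):
        up, down = x.copy(), x.copy()
        up[g] *= scale   # mimic gain of function
        down[g] = 0.0    # mimic loss of function
        scores[g] = 0.5 * (abs(predict(up) - base) + abs(predict(down) - base))
    return scores

# Toy 'model': only gene 0 influences the output, so it should rank first.
weights = np.array([1.0, 0.0, 0.0, 0.0])
predict = lambda v: float(weights @ v)
scores = symmetrical_occlusion(predict, np.array([1.0, 1.0, 1.0, 1.0]))
```

Because the score depends only on the trained model's output changes, genes can rank highly even when their raw expression differs little between classes, which is how an occlusion approach can surface non-differentially-expressed markers.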
Subject(s)
Deep Learning , Neoplasms , Humans , Neoplasms/genetics , Neoplasms/classification , Neural Networks, Computer , Biomarkers, Tumor/genetics , RNA-Seq/methods

ABSTRACT
O-linked β-N-acetylglucosamine (O-GlcNAc) is a post-translational modification (i.e., O-GlcNAcylation) on serine/threonine residues of proteins, regulating a plethora of physiological and pathological events. As a dynamic process, O-GlcNAc functions in a site-specific manner. However, experimental identification of O-GlcNAc sites remains challenging in many scenarios. Herein, by leveraging recent progress in cataloguing experimentally identified O-GlcNAc sites and advanced deep learning approaches, we establish an ensemble deep learning-based tool, O-GlcNAcPRED-DL, for the prediction of O-GlcNAc sites. To build a benchmark O-GlcNAc data set, we extracted information from the recently constructed database O-GlcNAcAtlas, which contains thousands of experimentally identified and curated O-GlcNAc sites on proteins from multiple species. To overcome the imbalance between positive and negative data sets, we selected five groups of negative data sets in humans and mice to construct an ensemble predictor based on a combination of a convolutional neural network and bidirectional long short-term memory. Taking into account three types of sequence information, we constructed four network frameworks with systematically optimized parameters. A thorough comparison on two independent data sets of humans and mice and six independent data sets from other species demonstrated the remarkably increased sensitivity and accuracy of the O-GlcNAcPRED-DL models, outperforming other existing tools. Moreover, a user-friendly web server for O-GlcNAcPRED-DL has been constructed, which is freely available at http://oglcnac.org/pred_dl.
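Site-specific predictors of this kind are typically trained on fixed-length sequence windows centered on each candidate serine/threonine residue. The extraction step can be sketched as follows; the window size and padding character are our assumptions, not details from the paper:

```python
def st_windows(seq, flank=10, pad="X"):
    """Yield (position, window) for every Ser/Thr residue in seq, with the
    candidate site centered and the ends padded with a dummy character."""
    padded = pad * flank + seq + pad * flank
    for i, aa in enumerate(seq):
        if aa in "ST":
            yield i, padded[i : i + 2 * flank + 1]

windows = list(st_windows("MKSTA", flank=2))  # S at position 2, T at 3
```

Each window is then encoded (e.g. one-hot or with physicochemical features) and labeled positive if the central residue is a known O-GlcNAc site, giving the paired inputs and targets for the CNN-BiLSTM ensemble.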
Subject(s)
Deep Learning , Humans , Animals , Mice , Proteins/metabolism , Protein Processing, Post-Translational , Acetylglucosamine/chemistry , N-Acetylglucosaminyltransferases/metabolism

ABSTRACT
Sentence comprehension requires the integration of linguistic units presented in a temporal sequence based on a non-linear underlying syntactic structure. While it is uncontroversial that storage is mandatory for this process, there are opposing views regarding the relevance of general short-term/working-memory capacities (STM/WM) versus language-specific resources. Here we report results from 43 participants with an acquired brain lesion in the extended left-hemispheric language network and resulting language deficits, who performed a sentence-to-picture matching task and an experimental task assessing phonological short-term memory. The sentence task systematically varied syntactic complexity (embedding depth and argument order) while length, number of propositions and plausibility were kept constant. Clinical data including digit-/block-spans and lesion size and site were additionally used in the analyses. Correlational analyses confirm that performance on the STM/WM tasks (experimental task and digit span) provides the only two relevant predictors of correct sentence-picture matching, while reaction times depended only on age and lesion size. Notably, increasing syntactic complexity reduced the correlational strength, arguing for the additional recruitment of language-specific resources, independent of more general verbal STM/WM capacities, when resolving complex syntactic structure. The complementary lesion-behaviour analysis yielded different lesion volumes correlating with either the sentence task or the STM task. When STM measures were factored out, lesions in the anterior temporal lobe correlated with a larger decrease in accuracy with increasing syntactic complexity. We conclude that overall sentence comprehension depends on STM/WM capacity, while increases in syntactic complexity tax another, independent cognitive resource.