Results 1 - 20 of 5,077
1.
Proc Natl Acad Sci U S A ; 121(23): e2320489121, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38805278

ABSTRACT

Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
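
The computational model referred to in the abstract is not spelled out here, so the following is only a minimal sketch of the core idea under stated assumptions: units representing likely (e.g., frequent) stimuli are given lower activation thresholds and therefore cross threshold at earlier phases of an ongoing excitability oscillation. All values are illustrative, not the authors' parameters.

    import numpy as np

    # Ongoing excitability oscillation (10 Hz), sampled over one cycle.
    t = np.linspace(0, 0.1, 1000)                # seconds
    excitability = np.sin(2 * np.pi * 10 * t)    # -1 (least) to +1 (most excitable)

    # Two word-representing populations: frequent words are assumed to have a
    # lower activation threshold than rare words (illustrative values).
    thresholds = {"frequent_word": 0.2, "rare_word": 0.8}

    for name, theta in thresholds.items():
        # First time point at which excitability exceeds the unit's threshold.
        crossing = np.argmax(excitability >= theta)
        phase = 2 * np.pi * 10 * t[crossing]
        print(f"{name}: crosses threshold at phase {phase:.2f} rad")
    # The lower-threshold (frequent) population becomes active at an earlier
    # oscillatory phase, i.e. while the ensemble is still less excitable.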


Subject(s)
Language , Magnetoencephalography , Speech Perception , Humans , Speech Perception/physiology , Male , Female , Adult , Temporal Lobe/physiology , Young Adult , Models, Neurological
2.
Proc Natl Acad Sci U S A ; 121(23): e2311425121, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38814865

ABSTRACT

Theories of language development, informed largely by studies of Western, middle-class infants, have highlighted the language that caregivers direct to children as a key driver of language learning. However, some have argued that language development unfolds similarly across environmental contexts, including those in which child-directed language is scarce. This raises the possibility that children are able to learn from other sources of language in their environments, particularly the language directed to others around them. We explore this hypothesis with infants in an indigenous Tseltal-speaking community in Southern Mexico who are rarely spoken to, yet have the opportunity to overhear a great deal of other-directed language by virtue of being carried on their mothers' backs. Adapting a previously established gaze-tracking method for detecting early word knowledge to our field setting, we find that Tseltal infants exhibit implicit knowledge of common nouns (Exp. 1), analogous to their US peers who are frequently spoken to. Moreover, they exhibit comprehension of Tseltal honorific terms that are exclusively used to greet adults in the community (Exp. 2), representing language that could only have been learned through overhearing. In so doing, Tseltal infants demonstrate an ability to discriminate words with similar meanings and perceptually similar referents at an earlier age than has been shown among Western children. Together, these results suggest that for some infants, learning from overhearing may be an important path toward developing language.


Subject(s)
Comprehension , Language Development , Humans , Infant , Female , Male , Comprehension/physiology , Mexico , Language , Vocabulary
3.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38695119

ABSTRACT

Sequence similarity is of paramount importance in biology, as similar sequences tend to have similar function and share common ancestry. Scoring matrices, such as PAM or BLOSUM, play a crucial role in all bioinformatics algorithms for identifying similarities, but have the drawback that they are fixed, independent of context. We propose a new scoring method for amino acid similarity that remedies this weakness by being contextually dependent. It relies on recent advances in deep learning architectures that employ self-supervised learning to leverage enormous amounts of unlabelled data and generate contextual embeddings, which are vector representations for words. These ideas have been applied to protein sequences, producing embedding vectors for protein residues. We propose the E-score between two residues as the cosine similarity between their embedding vector representations. Thorough testing on a wide variety of reference multiple sequence alignments indicates that the alignments produced using the new E-score method, especially ProtT5-score, are significantly better than those obtained using BLOSUM matrices. The new method changes the way alignments are computed, with far-reaching implications in all areas of textual data that use sequence similarity. The program to compute alignments based on various E-scores is available as a web server at e-score.csd.uwo.ca. The source code is freely available for download from github.com/lucian-ilie/E-score.
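
The E-score itself is fully specified by the abstract as the cosine similarity between residue embedding vectors. Below is a minimal sketch, assuming per-residue embeddings are already available (the random placeholders stand in for, e.g., ProtT5 output); the alignment algorithm that consumes these scores is not shown.

    import numpy as np

    def e_score(u, v):
        """Cosine similarity between two residue embedding vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Placeholder embeddings: two proteins of lengths 5 and 7 with 1024-dim
    # vectors per residue; real values would come from a protein language model.
    rng = np.random.default_rng(0)
    emb_a = rng.normal(size=(5, 1024))
    emb_b = rng.normal(size=(7, 1024))

    # Pairwise E-score matrix used in place of a fixed BLOSUM substitution score.
    scores = np.array([[e_score(a, b) for b in emb_b] for a in emb_a])
    print(scores.shape)   # (5, 7)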


Subject(s)
Algorithms , Computational Biology , Sequence Alignment , Sequence Alignment/methods , Computational Biology/methods , Software , Sequence Analysis, Protein/methods , Amino Acid Sequence , Proteins/chemistry , Proteins/genetics , Deep Learning , Databases, Protein
4.
Proc Natl Acad Sci U S A ; 120(52): e2305414120, 2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38134198

ABSTRACT

Human migration and mobility drive major societal phenomena including epidemics, economies, innovation, and the diffusion of ideas. Although human mobility and migration have been heavily constrained by geographic distance throughout history, advances and globalization are making other factors, such as language and culture, increasingly important. Advances in neural embedding models, originally designed for natural language, provide an opportunity to tame this complexity and open new avenues for the study of migration. Here, we demonstrate the ability of the model word2vec to encode nuanced relationships between discrete locations from migration trajectories, producing an accurate, dense, continuous, and meaningful vector-space representation. The resulting representation provides a functional distance between locations, as well as a "digital double" that can be distributed, re-used, and itself interrogated to understand the many dimensions of migration. We show that the unique power of word2vec to encode migration patterns stems from its mathematical equivalence with the gravity model of mobility. Focusing on the case of scientific migration, we apply word2vec to a database of three million migration trajectories of scientists derived from the affiliations listed on their publication records. Using techniques that leverage its semantic structure, we demonstrate that embeddings can learn the rich structure that underpins scientific migration, such as cultural, linguistic, and prestige relationships at multiple levels of granularity. Our results provide a theoretical foundation and methodological framework for using neural embeddings to represent and understand migration both within and beyond science.
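
A minimal sketch of the embedding step described above, assuming gensim and a toy set of trajectories (ordered affiliation sequences); the hyperparameters and data handling are illustrative assumptions, not those of the study.

    from gensim.models import Word2Vec

    # Each "sentence" is one scientist's ordered sequence of affiliations
    # (toy data; the study used ~3 million trajectories from publication records).
    trajectories = [
        ["Boston", "New York", "London"],
        ["Boston", "London", "Zurich"],
        ["Tokyo", "Kyoto", "Boston"],
    ]

    model = Word2Vec(
        sentences=trajectories,
        vector_size=32,   # embedding dimensionality
        window=2,         # context window over the trajectory
        min_count=1,
        sg=1,             # skip-gram variant
        epochs=50,
    )

    # Cosine similarity as a "functional distance" between locations.
    print(model.wv.similarity("Boston", "London"))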


Subject(s)
Language , Semantics , Humans , Machine Learning , Learning , Natural Language Processing
5.
Proc Natl Acad Sci U S A ; 120(1): e2209153119, 2023 01 03.
Article in English | MEDLINE | ID: mdl-36574655

ABSTRACT

In the second year of life, infants begin to rapidly acquire the lexicon of their native language. A key learning mechanism underlying this acceleration is syntactic bootstrapping: the use of hidden cues in grammar to facilitate vocabulary learning. How infants forge the syntactic-semantic links that underlie this mechanism, however, remains speculative. A hurdle for theories is identifying computationally light strategies that have high precision within the complexity of the linguistic signal. Here, we presented 20-mo-old infants with novel grammatical elements in a complex natural language environment and measured their resultant vocabulary expansion. We found that infants can learn and exploit a natural language syntactic-semantic link in less than 30 min. The rapid speed of acquisition of a new syntactic bootstrap indicates that even emergent syntactic-semantic links can accelerate language learning. The results suggest that infants employ a cognitive network of efficient learning strategies to self-supervise language development.


Subject(s)
Learning , Semantics , Humans , Infant , Language , Vocabulary , Linguistics , Language Development
6.
Proc Natl Acad Sci U S A ; 120(25): e2220726120, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37307492

ABSTRACT

Large-scale language datasets and advances in natural language processing offer opportunities for studying people's cognitions and behaviors. We show how representations derived from language can be combined with laboratory-based word norms to predict implicit attitudes for diverse concepts. Our approach achieves substantially higher correlations than existing methods. We also show that our approach is more predictive of implicit attitudes than are explicit attitudes, and that it captures variance in implicit attitudes that is largely unexplained by explicit attitudes. Overall, our results shed light on how implicit attitudes can be measured by combining standard psychological data with large-scale language data. In doing so, we pave the way for highly accurate computational modeling of what people think and feel about the world around them.
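
The abstract does not specify the mapping used, so the sketch below shows one common way to combine embeddings with laboratory word norms: fit a regression from embedding vectors to normed ratings, then extrapolate to concepts without norms. The ridge model, data, and variable names are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)

    # Placeholder 300-d embeddings for 200 normed words and 3 target concepts.
    norm_embeddings = rng.normal(size=(200, 300))
    norm_ratings = rng.uniform(-1, 1, size=200)      # e.g. valence norms
    target_embeddings = rng.normal(size=(3, 300))    # concepts without norms

    # Learn a mapping from embedding space to the norm dimension ...
    reg = Ridge(alpha=1.0).fit(norm_embeddings, norm_ratings)

    # ... and extrapolate it to unnormed concepts as a model-based estimate
    # of the attitude associated with each concept.
    predicted_attitudes = reg.predict(target_embeddings)
    print(predicted_attitudes)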


Subject(s)
Cognition , Emotions , Humans , Computer Simulation , Laboratories , Attitude
7.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39011935

ABSTRACT

Companionship refers to being in the presence of another individual. For adults, acquiring a new language is a highly social activity that often involves learning in the context of companionship. However, the effects of companionship on new language learning have gone relatively underexplored, particularly with respect to word learning. Using a within-subject design, the current study employs electroencephalography to examine how two types of companionship (monitored and co-learning) affect word learning (semantic and lexical) in a new language. Dyads of Chinese speakers of English as a second language participated in a pseudo-word-learning task during which they were placed in monitored and co-learning companionship contexts. The results showed that exposure to co-learning companionship affected the early attention stage of word learning. Moreover, in this early stage, higher representational similarity between co-learners provided additional evidence that co-learning companionship influenced attention. Observed increases in delta and theta interbrain synchronization further revealed that co-learning companionship facilitated semantic access. In all, the similar neural representations and interbrain synchronization between co-learners suggest that co-learning companionship offers important benefits for learning words in a new language.
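
Interbrain synchronization in the delta and theta bands is often quantified with a phase-locking value (PLV) between band-limited signals from the two participants; a minimal sketch under that assumption follows (the study's exact metric and preprocessing may differ).

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv(x, y, fs, band):
        """Phase-locking value between two signals within a frequency band."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phase_x = np.angle(hilbert(filtfilt(b, a, x)))
        phase_y = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    fs = 250                                    # sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)
    # Toy signals standing in for one EEG channel from each co-learner.
    eeg_learner_a = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
    eeg_learner_b = np.sin(2 * np.pi * 5 * t + 0.3) + 0.5 * np.random.randn(t.size)

    print(plv(eeg_learner_a, eeg_learner_b, fs, band=(4, 7)))   # theta band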


Subject(s)
Brain , Electroencephalography , Humans , Male , Female , Young Adult , Adult , Brain/physiology , Learning/physiology , Semantics , Multilingualism , Language , Attention/physiology , Verbal Learning/physiology
8.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38367613

ABSTRACT

Does neural activity reveal how balanced bilinguals choose languages? Despite using diverse neuroimaging techniques, prior studies have not provided a definitive answer to this question. Nonetheless, studies involving direct brain stimulation in bilinguals have identified distinct brain regions associated with language production in different languages. In this magnetoencephalography study with 45 proficient Spanish-Basque bilinguals, we investigated language selection during covert picture naming and word reading tasks. Participants were prompted to name line drawings or read words whenever the color of the stimulus changed to green, which occurred in 10% of trials. The task was performed either in Spanish or Basque. Despite similar sensor-level evoked activity for both languages in both tasks, decoding analyses revealed language-specific classification ~100 ms post-stimulus onset. During picture naming, right occipital-temporal sensors predominantly contributed to language decoding, while left occipital-temporal sensors were crucial for decoding during word reading. Cross-task decoding analysis unveiled robust generalization effects from picture naming to word reading. Our methodology involved a fine-grained examination of neural responses using magnetoencephalography, offering insights into the dynamics of language processing in bilinguals. This study refines our understanding of the neural underpinnings of language selection and bridges the gap between non-invasive and invasive experimental evidence in bilingual language production.
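
A minimal sketch of time-resolved decoding of the response language from sensor data, assuming a trials x sensors x time array; a plain logistic-regression classifier with cross-validation at each time point stands in for whatever decoding pipeline the study actually used.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    n_trials, n_sensors, n_times = 200, 102, 120
    X = rng.normal(size=(n_trials, n_sensors, n_times))   # placeholder MEG epochs
    y = rng.integers(0, 2, size=n_trials)                 # 0 = Spanish, 1 = Basque

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # Decode language separately at every time sample; above-chance accuracy
    # ~100 ms post-stimulus would indicate language-specific information.
    accuracy = np.array([
        cross_val_score(clf, X[:, :, t], y, cv=5).mean() for t in range(n_times)
    ])
    print(accuracy.shape)   # one accuracy value per time point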


Subject(s)
Magnetoencephalography , Multilingualism , Humans , Language , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods
9.
Proc Natl Acad Sci U S A ; 119(44): e2212936119, 2022 11.
Article in English | MEDLINE | ID: mdl-36282918

ABSTRACT

The right and left cerebral hemispheres are important for face and word recognition, respectively-a specialization that emerges over human development. The question is whether this bilateral distribution is necessary or whether a single hemisphere, be it left or right, can support both face and word recognition. Here, face and word recognition accuracy in patients (median age 16.7 y) with a single hemisphere following childhood hemispherectomy was compared against matched typical controls. In experiment 1, participants viewed stimuli in central vision. Across both face and word tasks, accuracy of both left and right hemispherectomy patients, while significantly lower than controls' accuracy, averaged above 80% and did not differ from each other. To compare patients' single hemisphere more directly to one hemisphere of controls, in experiment 2, participants viewed stimuli in one visual field to constrain initial processing chiefly to a single (contralateral) hemisphere. Whereas controls had higher word accuracy when words were presented to the right than to the left visual field, there was no field/hemispheric difference for faces. In contrast, left and right hemispherectomy patients, again, showed comparable performance to one another on both face and word recognition, albeit significantly lower than controls. Altogether, the findings indicate that a single developing hemisphere, either left or right, may be sufficiently plastic for comparable representation of faces and words. However, perhaps due to increased competition or "neural crowding," constraining cortical representations to one hemisphere may collectively hamper face and word recognition, relative to that observed in typical development with two hemispheres.


Subject(s)
Facial Recognition , Hemispherectomy , Humans , Child , Adolescent , Visual Fields , Plastics , Pattern Recognition, Visual , Functional Laterality
10.
Proc Natl Acad Sci U S A ; 119(28): e2121798119, 2022 07 12.
Article in English | MEDLINE | ID: mdl-35787033

ABSTRACT

Using word embeddings from 850 billion words in English-language Google Books, we provide an extensive analysis of historical change and stability in social group representations (stereotypes) across a long timeframe (from 1800 to 1999), for a large number of social group targets (Black, White, Asian, Irish, Hispanic, Native American, Man, Woman, Old, Young, Fat, Thin, Rich, Poor), and their emergent, bottom-up associations with 14,000 words and a subset of 600 traits. The results provide a nuanced picture of change and persistence in stereotypes across 200 y. Change was observed in the top-associated words and traits: Whether analyzing the top 10 or 50 associates, at least 50% of top associates changed across successive decades. Despite this changing content of top-associated words, the average valence (positivity/negativity) of these top stereotypes was generally persistent. Ultimately, through advances in the availability of historical word embeddings, this study offers a comprehensive characterization of both change and persistence in social group representations as revealed through books of the English-speaking world from 1800 to 1999.
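
One reported analysis is the proportion of a group word's top associates that change between successive decades. Here is a sketch under the assumption of per-decade embedding matrices over a shared vocabulary; the random matrices stand in for the historical Google Books embeddings.

    import numpy as np

    rng = np.random.default_rng(3)
    vocab = [f"word{i}" for i in range(5000)] + ["woman"]

    def top_k_associates(emb, vocab, target, k=10):
        """The k nearest neighbours of `target` by cosine similarity."""
        unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = unit @ unit[vocab.index(target)]
        sims[vocab.index(target)] = -np.inf        # exclude the word itself
        return {vocab[i] for i in np.argsort(sims)[-k:]}

    # Placeholder embeddings for two successive decades (e.g. 1950s and 1960s).
    emb_1950, emb_1960 = (rng.normal(size=(len(vocab), 100)) for _ in range(2))

    top_1950 = top_k_associates(emb_1950, vocab, "woman")
    top_1960 = top_k_associates(emb_1960, vocab, "woman")
    changed = 1 - len(top_1950 & top_1960) / 10
    print(f"{changed:.0%} of the top-10 associates changed across decades")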


Subject(s)
Books , Search Engine , Female , History, 19th Century , History, 20th Century , Humans , Language , Male , Population Groups/history , Stereotyping
11.
Proc Natl Acad Sci U S A ; 119(24): e2122604119, 2022 06 14.
Article in English | MEDLINE | ID: mdl-35675428

ABSTRACT

Languages vary considerably in syntactic structure. About 40% of the world's languages have subject-verb-object order, and about 40% have subject-object-verb order. Extensive work has sought to explain this word order variation across languages. However, the existing approaches are not able to explain coherently the frequency distribution and evolution of word order in individual languages. We propose that variation in word order reflects different ways of balancing competing pressures of dependency locality and information locality, whereby languages favor placing elements together when they are syntactically related or contextually informative about each other. Using data from 80 languages in 17 language families and phylogenetic modeling, we demonstrate that languages evolve to balance these pressures, such that word order change is accompanied by change in the frequency distribution of the syntactic structures that speakers communicate to maintain overall efficiency. Variability in word order thus reflects different ways in which languages resolve these evolutionary pressures. We identify relevant characteristics that result from this joint optimization, particularly the frequency with which subjects and objects are expressed together for the same verb. Our findings suggest that syntactic structure and usage across languages coadapt to support efficient communication under limited cognitive resources.
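
Dependency locality is commonly operationalized as the summed linear distance between syntactically dependent words. Below is a minimal sketch of that measure on a hand-annotated toy sentence (head indices are illustrative, not drawn from the study's 80-language corpora).

    # Each token is (word, index_of_its_head); 0 marks the root, indices are 1-based.
    # Toy SVO sentence: "the cat chased a mouse"
    sentence = [
        ("the", 2),     # det  -> cat
        ("cat", 3),     # subj -> chased
        ("chased", 0),  # root
        ("a", 5),       # det  -> mouse
        ("mouse", 3),   # obj  -> chased
    ]

    def total_dependency_length(tokens):
        """Sum of |dependent position - head position| over all non-root tokens."""
        return sum(
            abs((i + 1) - head) for i, (_, head) in enumerate(tokens) if head != 0
        )

    print(total_dependency_length(sentence))   # 1 + 1 + 1 + 2 = 5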


Subject(s)
Language , Humans , Phylogeny
12.
Proc Natl Acad Sci U S A ; 119(25): e2120203119, 2022 06 21.
Article in English | MEDLINE | ID: mdl-35709321

ABSTRACT

Spoken language production involves selecting and assembling words and syntactic structures to convey one's message. Here we probe this process by analyzing natural language productions of individuals with primary progressive aphasia (PPA) and healthy individuals. Based on prior neuropsychological observations, we hypothesize that patients who have difficulty producing complex syntax might choose semantically richer words to make their meaning clear, whereas patients with lexicosemantic deficits may choose more complex syntax. To evaluate this hypothesis, we first introduce a frequency-based method for characterizing the syntactic complexity of naturally produced utterances. We then show that lexical and syntactic complexity, as measured by their frequencies, are negatively correlated in a large (n = 79) PPA population. We then show that this syntax-lexicon trade-off is also present in the utterances of healthy speakers (n = 99) taking part in a picture description task, suggesting that it may be a general property of the process by which humans turn thoughts into speech.


Subject(s)
Language , Speech , Aphasia, Primary Progressive/physiopathology , Humans , Speech/physiology
13.
Proc Natl Acad Sci U S A ; 119(10): e2108801119, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35239440

ABSTRACT

Significance: We introduce an approach to identify latent topics in large-scale text data. Our approach integrates two prominent methods of computational text analysis: topic modeling and word embedding. We apply our approach to written narratives of violent death (e.g., suicides and homicides) in the National Violent Death Reporting System (NVDRS). Many of our topics reveal aspects of violent death not captured in existing classification schemes. We also extract gender bias in the topics themselves (e.g., a topic about long guns is particularly masculine). Our findings suggest new lines of research that could contribute to reducing suicides or homicides. Our methods are broadly applicable to text data and can unlock similar information in other administrative databases.
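
The paper's integration of topic modeling and word embedding is not detailed here; the sketch below only conveys the general flavor by clustering word vectors and reading off the words nearest each centroid as a "topic". Data and parameters are placeholders.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    vocab = [f"term{i}" for i in range(1000)]
    embeddings = rng.normal(size=(1000, 100))       # placeholder word vectors

    # Cluster word vectors; each cluster is treated as a latent topic.
    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(embeddings)

    # Characterize each topic by the words nearest its centroid.
    for k in range(3):                               # show the first three topics
        dists = np.linalg.norm(embeddings - km.cluster_centers_[k], axis=1)
        top_words = [vocab[i] for i in np.argsort(dists)[:5]]
        print(f"topic {k}: {top_words}")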


Subject(s)
Databases, Factual , Homicide , Models, Theoretical , Violence , Humans , United States
14.
J Neurosci ; 43(26): 4867-4883, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37221093

ABSTRACT

To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step toward understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency) from responses to sensory and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition at ∼100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study show how the neural representation of words is affected by structural context and as such provide insight into how the brain instantiates compositionality in language. SIGNIFICANCE STATEMENT: Human language is unprecedented in its combinatorial capacity: we are capable of producing and understanding sentences we have never heard before. Although the mechanisms underlying this capacity have been described in formal linguistics and cognitive science, how they are implemented in the brain remains to a large extent unknown. A large body of earlier work from the cognitive neuroscientific literature implies a role for delta-band neural activity in the representation of linguistic structure and meaning. In this work, we combine these insights and techniques with findings from psycholinguistics to show that meaning is more than the sum of its parts; the delta-band MEG signal differentially reflects lexical information inside and outside sentence structures.
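
Temporal response functions are typically estimated by regressing the neural signal onto time-lagged copies of stimulus features. Below is a minimal single-channel sketch with synthetic data and ridge regression; the study's cumulative model-fitting procedure and full regressor set are not reproduced.

    import numpy as np
    from sklearn.linear_model import Ridge

    fs = 100                                   # Hz
    rng = np.random.default_rng(5)

    # Stimulus feature: impulse train scaled by (log) word frequency at word onsets.
    feature = np.zeros(fs * 60)
    onsets = rng.choice(feature.size - fs, size=150, replace=False)
    feature[onsets] = rng.uniform(2, 8, size=150)

    # Synthetic MEG channel = convolved feature + noise (the true response
    # function is of course unknown in real data).
    true_trf = np.hanning(40)
    meg = np.convolve(feature, true_trf)[: feature.size] + rng.normal(size=feature.size)

    # Build the lagged design matrix (lags 0 .. 400 ms) and fit the TRF.
    n_lags = 40
    X = np.stack([np.roll(feature, lag) for lag in range(n_lags)], axis=1)
    trf = Ridge(alpha=10.0).fit(X, meg).coef_
    print(trf.shape)                           # one weight per lag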


Subject(s)
Brain , Language , Humans , Female , Brain/physiology , Linguistics , Psycholinguistics , Brain Mapping , Semantics
15.
BMC Bioinformatics ; 25(1): 102, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38454333

ABSTRACT

BACKGROUND: Viral infections have been a major health issue in the last decade. Antiviral peptides (AVPs) are a subclass of antimicrobial peptides (AMPs) with substantial potential to protect the human body against various viral diseases. Although antiviral vaccines and medications are produced at scale, the development of AVPs as antiviral agents suggests an effective way to treat virus-affected cells, and the use of intelligent machine learning techniques for developing peptide-based therapeutic agents has attracted increasing interest owing to its promising outcomes. Existing wet-laboratory approaches are expensive, time-consuming, and cannot effectively screen and predict the targeted motifs of antiviral peptides. METHODS: In this paper, we propose a novel computational model called Deepstacked-AVPs to discriminate AVPs accurately. The training sequences are numerically encoded using a novel Tri-segmentation-based position-specific scoring matrix (PSSM-TS) and word2vec-based semantic features. Composition/Transition/Distribution-Transition (CTDT) is also employed to represent physiochemical properties based on structural features. A fused vector is then formed from the PSSM-TS features, semantic information, and CTDT descriptors to compensate for the limitations of single encoding methods. Information gain (IG) is applied to choose the optimal feature set, and the selected features are trained using a stacked-ensemble classifier. RESULTS: The proposed Deepstacked-AVPs model achieved a predictive accuracy of 96.60%, an area under the curve (AUC) of 0.98, and a precision-recall (PR) value of 0.97 on training samples. On independent samples, our model obtained an accuracy of 95.15%, an AUC of 0.97, and a PR value of 0.97. CONCLUSION: Our Deepstacked-AVPs model outperformed existing models, with ~4% and ~2% higher accuracy on training and independent samples, respectively. The reliability and efficacy of the proposed Deepstacked-AVPs model make it a valuable tool for scientists and may allow it to play a beneficial role in pharmaceutical design and academic research.
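
The concrete feature encoders (PSSM-TS, word2vec, CTDT) require external tools, so the sketch below only mirrors the downstream pipeline shape described in the abstract: fuse feature blocks, select informative features (mutual information as a stand-in for information gain), and train a stacked ensemble. All data, block sizes, and learners are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)

    # Fused feature vector per peptide: PSSM-TS + word2vec + CTDT blocks
    # (placeholder random values with assumed block sizes).
    X = np.hstack([rng.normal(size=(500, 400)),    # PSSM-TS block
                   rng.normal(size=(500, 100)),    # word2vec block
                   rng.normal(size=(500, 147))])   # CTDT block
    y = rng.integers(0, 2, size=500)               # 1 = AVP, 0 = non-AVP

    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=200),   # information-gain-like selection
        StackingClassifier(
            estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                        ("svm", SVC(probability=True))],
            final_estimator=LogisticRegression(max_iter=1000),
        ),
    )
    model.fit(X, y)
    print(model.predict(X[:5]))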


Subject(s)
Biological Evolution , Peptides , Humans , Reproducibility of Results , Peptides/chemistry , Antiviral Agents/pharmacology
16.
Neuroimage ; 294: 120649, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38759354

ABSTRACT

Neurobehavioral studies have provided evidence for the effectiveness of anodal tDCS on language production through stimulation of the left Inferior Frontal Gyrus (IFG) or the left Temporo-Parietal Junction (TPJ). However, tDCS is currently not used in clinical practice outside of trials, because behavioral effects have been inconsistent and the underlying neural effects unclear. Here, we propose to elucidate the neural correlates of verb and noun learning and to determine whether they can be modulated with anodal high-definition (HD) tDCS stimulation. Thirty-six neurotypical participants were randomly allocated to anodal HD-tDCS over either the left IFG, the left TPJ, or sham stimulation. On day one, participants performed a naming task (pre-test). On day two, participants underwent a new-word learning task with rare nouns and verbs concurrently with HD-tDCS for 20 min. The third day consisted of a post-test of naming performance. EEG was recorded at rest and during naming on each day. Verb learning was significantly facilitated by left IFG stimulation. HD-tDCS over the left IFG enhanced functional connectivity between the left IFG and TPJ, and this correlated with improved learning. HD-tDCS over the left TPJ enabled stronger local activation of the stimulated area (as indexed by greater alpha- and beta-band power decrease) during naming, but this did not translate into better learning. Thus, tDCS can induce local activation or modulation of network interactions. Only the enhancement of network interactions, not the increase in local activation, leads to robust improvement of word learning. This emphasizes the need to develop new neuromodulation methods that influence network interactions. Our study suggests that this may be achieved through behavioral activation of one area and concomitant activation of another area with HD-tDCS.


Subject(s)
Transcranial Direct Current Stimulation , Humans , Transcranial Direct Current Stimulation/methods , Female , Male , Adult , Young Adult , Electroencephalography/methods , Prefrontal Cortex/physiology , Parietal Lobe/physiology , Verbal Learning/physiology , Temporal Lobe/physiology , Learning/physiology
17.
Hum Brain Mapp ; 45(2): e26607, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339897

ABSTRACT

Language comprehension involves multiple hierarchical processing stages across time, space, and levels of representation. When processing a word, the sensory input is transformed into increasingly abstract representations that need to be integrated with the linguistic context. Thus, language comprehension involves both input-driven as well as context-dependent processes. While neuroimaging research has traditionally focused on mapping individual brain regions to the distinct underlying processes, recent studies indicate that whole-brain distributed patterns of cortical activation might be highly relevant for cognitive functions, including language. One such pattern, based on resting-state connectivity, is the 'principal cortical gradient', which dissociates sensory from heteromodal brain regions. The present study investigated the extent to which this gradient provides an organizational principle underlying language function, using a multimodal neuroimaging dataset of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings from 102 participants during sentence reading. We found that the brain response to individual representations of a word (word length, orthographic distance, and word frequency), which reflect visual, orthographic, and lexical properties, gradually increases towards the sensory end of the gradient. Although these properties showed opposite effect directions in fMRI and MEG, their association with the sensory end of the gradient was consistent across both neuroimaging modalities. In contrast, MEG revealed that properties reflecting a word's relation to its linguistic context (semantic similarity and position within the sentence) involve the heteromodal end of the gradient to a stronger extent. This dissociation between individual word and contextual properties was stable across earlier and later time windows during word presentation, indicating interactive processing of word representations and linguistic context at opposing ends of the principal gradient. To conclude, our findings indicate that the principal gradient underlies the organization of a range of linguistic representations while supporting a gradual distinction between context-independent and context-dependent representations. Furthermore, the gradient reveals convergent patterns across neuroimaging modalities (similar location along the gradient) in the presence of divergent responses (opposite effect directions).


Subject(s)
Brain , Comprehension , Humans , Comprehension/physiology , Brain/diagnostic imaging , Brain/physiology , Linguistics , Language , Semantics , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Reading
18.
Hum Brain Mapp ; 45(1): e26546, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38014759

ABSTRACT

To explain how the human brain represents and organizes meaning, many theoretical and computational language models have been proposed over the years, varying in their underlying computational principles and in the language samples based on which they are built. However, how well they capture the neural encoding of lexical semantics remains elusive. We used representational similarity analysis (RSA) to evaluate to what extent three models of different types explained neural responses elicited by word stimuli: an External corpus-based word2vec model, an Internal free word association model, and a Hybrid ConceptNet model. Semantic networks were constructed using word relations computed in the three models and experimental stimuli were selected through a community detection procedure. The similarity patterns between language models and neural responses were compared at the community, exemplar, and word node levels to probe the potential hierarchical semantic structure. We found that semantic relations computed with the Internal model provided the closest approximation to the patterns of neural activation, whereas the External model did not capture neural responses as well. Compared with the exemplar and the node levels, community-level RSA demonstrated the broadest involvement of brain regions, engaging areas critical for semantic processing, including the angular gyrus, superior frontal gyrus and a large portion of the anterior temporal lobe. The findings highlight the multidimensional semantic organization in the brain which is better captured by Internal models sensitive to multiple modalities such as word association compared with External models trained on text corpora.
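
Representational similarity analysis compares the pattern of pairwise dissimilarities among stimuli in a model with that in neural responses. Below is a minimal sketch with placeholder data; a real analysis would use the language-model word relations and region-wise activation patterns.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    n_words = 60

    # Placeholder representations of the same words in a language model and
    # in a brain region (voxels/sensors).
    model_vectors = rng.normal(size=(n_words, 300))
    neural_patterns = rng.normal(size=(n_words, 500))

    # Representational dissimilarity matrices, kept as condensed upper triangles.
    model_rdm = pdist(model_vectors, metric="cosine")
    neural_rdm = pdist(neural_patterns, metric="correlation")

    # Second-order (Spearman) correlation between the two RDMs = RSA score.
    rho, p = spearmanr(model_rdm, neural_rdm)
    print(f"RSA correlation: {rho:.3f} (p = {p:.3f})")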


Subject(s)
Brain Mapping , Semantics , Humans , Language , Brain/diagnostic imaging , Brain/physiology , Temporal Lobe/physiology , Magnetic Resonance Imaging
19.
Hum Brain Mapp ; 45(4): e26655, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38488471

ABSTRACT

Reading entails transforming visual symbols to sound and meaning. This process depends on specialized circuitry in the visual cortex, the visual word form area (VWFA). Recent findings suggest that this text-selective cortex comprises at least two distinct subregions: the more posterior VWFA-1 is sensitive to visual features, while the more anterior VWFA-2 processes higher level language information. Here, we explore whether these two subregions also exhibit different patterns of functional connectivity. To this end, we capitalize on two complementary datasets: Using the Natural Scenes Dataset (NSD), we identify text-selective responses in high-quality 7T adult data (N = 8), and investigate functional connectivity patterns of VWFA-1 and VWFA-2 at the individual level. We then turn to the Healthy Brain Network (HBN) database to assess whether these patterns replicate in a large developmental sample (N = 224; age 6-20 years), and whether they relate to reading development. In both datasets, we find that VWFA-1 is primarily correlated with bilateral visual regions. In contrast, VWFA-2 is more strongly correlated with language regions in the frontal and lateral parietal lobes, particularly the bilateral inferior frontal gyrus. Critically, these patterns do not generalize to adjacent face-selective regions, suggesting a specific relationship between VWFA-2 and the frontal language network. No correlations were observed between functional connectivity and reading ability. Together, our findings support the distinction between subregions of the VWFA, and suggest that functional connectivity patterns in the ventral temporal cortex are consistent over a wide range of reading skills.


Subject(s)
Brain Mapping , Magnetic Resonance Imaging , Adult , Humans , Child , Adolescent , Young Adult , Language , Temporal Lobe/physiology , Cerebral Cortex , Reading
20.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34962264

ABSTRACT

Transcription factors (TFs) are proteins specifically involved in gene expression regulation. It is generally accepted in epigenetics that methylated nucleotides can prevent TFs from binding to DNA fragments. However, recent studies have confirmed that some TFs have the capability to interact with methylated DNA fragments to further regulate gene expression. Although biochemical experiments can recognize TFs that bind to methylated DNA sequences, these wet experimental methods are time-consuming and expensive. Machine learning methods provide a good choice for quickly identifying these TFs without experimental materials. Thus, this study aims to design a robust predictor to detect methylated DNA-bound TFs. We first proposed using tripeptide word vector features to represent protein samples. Subsequently, based on a recurrent neural network with long short-term memory, a two-step computational model was designed. The first-step predictor was utilized to discriminate transcription factors from non-transcription factors. Once proteins were predicted as TFs, the second-step predictor was employed to judge whether the TFs can bind to methylated DNA. Through the independent dataset test, the accuracies of the first step and the second step are 86.63% and 73.59%, respectively. In addition, a statistical analysis of the distribution of tripeptides in the training samples showed that the position and number of some tripeptides in a sequence could affect the binding of TFs to methylated DNA. Finally, a free web server based on the proposed model was established, which is available at https://bioinfor.nefu.edu.cn/TFPM/.
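
Below is a hedged sketch of the first-stage idea only: tokenize a protein into overlapping tripeptides and classify the sequence with an LSTM (tf.keras). Layer sizes, padding, and the toy data are illustrative assumptions, not the published architecture; a second model of the same shape would handle the methylated-DNA-binding step.

    from itertools import product
    import numpy as np
    import tensorflow as tf

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    TRIPEPTIDES = {"".join(t): i + 1                 # 0 is reserved for padding
                   for i, t in enumerate(product(AMINO_ACIDS, repeat=3))}

    def encode(seq, max_len=200):
        """Map a protein to a padded sequence of overlapping-tripeptide ids."""
        ids = [TRIPEPTIDES[seq[i:i + 3]] for i in range(len(seq) - 2)]
        padded = np.zeros(max_len, dtype=np.int64)
        padded[:len(ids)] = ids[:max_len]
        return padded

    # Toy data: random protein sequences with random TF / non-TF labels.
    rng = np.random.default_rng(8)
    seqs = ["".join(rng.choice(list(AMINO_ACIDS), size=60)) for _ in range(64)]
    X = np.stack([encode(s) for s in seqs])
    y = rng.integers(0, 2, size=64)

    # Step-1 model: is the protein a transcription factor?
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=len(TRIPEPTIDES) + 1, output_dim=32,
                                  mask_zero=True),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=1, batch_size=16, verbose=0)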


Subject(s)
DNA Methylation , Neural Networks, Computer , Transcription Factors/metabolism , Algorithms , Binding Sites , DNA/genetics , DNA-Binding Proteins , Deep Learning , Gene Expression Regulation , Humans , Protein Binding