Results 1 - 20 of 34
1.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38388681

ABSTRACT

MOTIVATION: Cell-type annotation of single-cell RNA-sequencing (scRNA-seq) data is a cornerstone of biomedical research and clinical application. Current annotation tools usually assume that all well-annotated data are acquired at once, without the ability to expand their knowledge from new data. Such tools are therefore ill-suited to the continuous emergence of scRNA-seq data, calling for a continual cell-type annotation model. In addition, owing to their powerful capacity for information integration and their interpretability, transformer-based pre-trained language models have led to breakthroughs in single-cell biology research. Therefore, systematically combining continual learning with pre-trained language models for cell-type annotation is a natural next step. RESULTS: We herein propose a universal cell-type annotation tool, called CANAL, that continuously fine-tunes a pre-trained language model trained on a large amount of unlabeled scRNA-seq data as new well-labeled data emerge. CANAL essentially alleviates catastrophic forgetting with respect to both model inputs and outputs. For model inputs, we introduce an experience replay schema that repeatedly reviews previous vital examples during current training stages. This is achieved through a dynamic example bank with a fixed buffer size. The example bank is class-balanced and proficient in retaining cell-type-specific information, particularly facilitating the consolidation of patterns associated with rare cell types. For model outputs, we utilize representation knowledge distillation to regularize the divergence between previous and current models, preserving the knowledge learned in past training stages. Moreover, our universal annotation framework considers the inclusion of new cell types throughout the fine-tuning and testing stages. We can continuously expand the cell-type annotation library by absorbing new cell types from newly arrived, well-annotated training datasets, as well as automatically identify novel cells in unlabeled datasets. Comprehensive experiments with data streams under various biological scenarios demonstrate the versatility and high model interpretability of CANAL. AVAILABILITY: An implementation of CANAL is available from https://github.com/aster-ww/CANAL-torch. CONTACT: dengmh@pku.edu.cn. SUPPLEMENTARY INFORMATION: Supplementary data are available at Briefings in Bioinformatics online.
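
As an illustration of the class-balanced, fixed-capacity example bank described above, the following Python sketch implements a simple replay buffer; the selection rule (reservoir sampling per cell type) and all names are assumptions for illustration, not CANAL's actual criteria for choosing vital examples.

```python
import random
from collections import defaultdict

class ClassBalancedExampleBank:
    """Fixed-capacity replay buffer holding roughly equal numbers of cells per type."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.rng = random.Random(seed)
        self.bank = defaultdict(list)   # cell type -> stored examples
        self.seen = defaultdict(int)    # cell type -> total examples observed

    def _quota(self) -> int:
        # Equal share of the buffer for every cell type seen so far.
        return max(1, self.capacity // max(1, len(self.bank)))

    def add(self, example, cell_type) -> None:
        self.seen[cell_type] += 1
        slot = self.bank[cell_type]
        quota = self._quota()
        if len(slot) < quota:
            slot.append(example)
        else:
            # Reservoir sampling keeps a uniform subsample of this cell type.
            j = self.rng.randrange(self.seen[cell_type])
            if j < quota:
                slot[j] = example
        self._rebalance()

    def _rebalance(self) -> None:
        # When a new cell type arrives, shrink over-represented types first.
        quota = self._quota()
        while sum(len(v) for v in self.bank.values()) > self.capacity:
            largest = max(self.bank, key=lambda t: len(self.bank[t]))
            if len(self.bank[largest]) <= quota:
                break
            self.bank[largest].pop(self.rng.randrange(len(self.bank[largest])))

    def replay_batch(self, k: int):
        pool = [ex for v in self.bank.values() for ex in v]
        return self.rng.sample(pool, min(k, len(pool)))


if __name__ == "__main__":
    bank = ClassBalancedExampleBank(capacity=6)
    for i in range(100):
        bank.add(f"cell_{i}", "T cell" if i % 10 else "rare type")
    print({t: len(v) for t, v in bank.bank.items()})  # rare type keeps its share
```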


Subjects
Gene Expression Profiling, Software, Gene Expression Profiling/methods, Single-Cell Gene Expression Analysis, Single-Cell Analysis/methods, Language, Sequence Analysis, RNA/methods
2.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36631407

ABSTRACT

Recently, peptide-based drugs have gained unprecedented interest in discovering and developing antifungal drugs due to their high efficacy, broad-spectrum activity, low toxicity and few side effects. However, it is time-consuming and expensive to identify antifungal peptides (AFPs) experimentally. Therefore, computational methods for accurately predicting AFPs are highly required. In this work, we develop AFP-MFL, a novel deep learning model that predicts AFPs only relying on peptide sequences without using any structural information. AFP-MFL first constructs comprehensive feature profiles of AFPs, including contextual semantic information derived from a pre-trained protein language model, evolutionary information, and physicochemical properties. Subsequently, the co-attention mechanism is utilized to integrate contextual semantic information with evolutionary information and physicochemical properties separately. Extensive experiments show that AFP-MFL outperforms state-of-the-art models on four independent test datasets. Furthermore, the SHAP method is employed to explore each feature contribution to the AFPs prediction. Finally, a user-friendly web server of the proposed AFP-MFL is developed and freely accessible at http://inner.wei-group.net/AFPMFL/, which can be considered as a powerful tool for the rapid screening and identification of novel AFPs.
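
The co-attention step, which integrates contextual PLM embeddings with evolutionary and physicochemical features, can be sketched as bidirectional cross-attention; the module below is a hedged approximation with illustrative dimensions, not AFP-MFL's exact architecture.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Toy co-attention block: PLM token embeddings attend over an auxiliary
    feature sequence (e.g. evolutionary or physicochemical profiles) and vice
    versa, and both attended views are concatenated."""

    def __init__(self, dim_plm: int, dim_aux: int, dim_hidden: int = 128):
        super().__init__()
        self.q_plm, self.k_aux, self.v_aux = (nn.Linear(dim_plm, dim_hidden),
                                              nn.Linear(dim_aux, dim_hidden),
                                              nn.Linear(dim_aux, dim_hidden))
        self.q_aux, self.k_plm, self.v_plm = (nn.Linear(dim_aux, dim_hidden),
                                              nn.Linear(dim_plm, dim_hidden),
                                              nn.Linear(dim_plm, dim_hidden))

    def forward(self, plm, aux):
        # plm: (batch, L, dim_plm), aux: (batch, L, dim_aux)
        scale = self.q_plm.out_features ** 0.5
        attn_p2a = torch.softmax(self.q_plm(plm) @ self.k_aux(aux).transpose(1, 2) / scale, dim=-1)
        attn_a2p = torch.softmax(self.q_aux(aux) @ self.k_plm(plm).transpose(1, 2) / scale, dim=-1)
        plm_enriched = attn_p2a @ self.v_aux(aux)   # PLM tokens enriched with auxiliary info
        aux_enriched = attn_a2p @ self.v_plm(plm)   # auxiliary positions enriched with context
        return torch.cat([plm_enriched, aux_enriched], dim=-1)


if __name__ == "__main__":
    fusion = CoAttentionFusion(dim_plm=1024, dim_aux=25)
    plm = torch.randn(2, 50, 1024)   # stand-in for per-residue PLM embeddings
    aux = torch.randn(2, 50, 25)     # stand-in for evolutionary/physicochemical features
    print(fusion(plm, aux).shape)    # torch.Size([2, 50, 256])
```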


Subjects
Antifungal Agents, alpha-Fetoproteins, Antifungal Agents/pharmacology, Algorithms, Peptides/chemistry, Computational Biology/methods
3.
Brief Bioinform ; 24(6)2023 09 22.
Article in English | MEDLINE | ID: mdl-37824738

ABSTRACT

The interactions between nucleic acids and proteins are important in diverse biological processes. The high-quality prediction of nucleic-acid-binding sites continues to pose a significant challenge. Presently, the predictive efficacy of sequence-based methods is constrained by their exclusive consideration of sequence context information, whereas structure-based methods are unsuitable for proteins lacking known tertiary structures. Though protein structures predicted by AlphaFold2 could be used, the extensive computing requirement of AlphaFold2 hinders its use for genome-wide applications. Based on the recent breakthrough of ESMFold for fast prediction of protein structures, we have developed GLMSite, which accurately identifies DNA- and RNA-binding sites using geometric graph learning on ESMFold predicted structures. Here, the predicted protein structures are employed to construct protein structural graph with residues as nodes and spatially neighboring residue pairs for edges. The node representations are further enhanced through the pre-trained language model ProtTrans. The network was trained using a geometric vector perceptron, and the geometric embeddings were subsequently fed into a common network to acquire common binding characteristics. Finally, these characteristics were input into two fully connected layers to predict binding sites with DNA and RNA, respectively. Through comprehensive tests on DNA/RNA benchmark datasets, GLMSite was shown to surpass the latest sequence-based methods and be comparable with structure-based methods. Moreover, the prediction was shown useful for inferring nucleic-acid-binding proteins, demonstrating its potential for protein function discovery. The datasets, codes, and trained models are available at https://github.com/biomed-AI/nucleic-acid-binding.
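
A minimal sketch of the graph construction step: residues become nodes and spatially neighboring residue pairs become edges, here using a distance cutoff over predicted C-alpha coordinates. The cutoff value and function names are assumptions for illustration.

```python
import numpy as np

def residue_graph(ca_coords: np.ndarray, cutoff: float = 10.0):
    """Build an undirected residue graph from predicted C-alpha coordinates.

    Nodes are residues; an edge connects every pair of residues whose C-alpha
    atoms lie within `cutoff` angstroms, mirroring the "spatially neighboring
    residue pairs as edges" construction described above.
    """
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]    # (N, N, 3) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)                    # (N, N) pairwise distances
    adjacency = (dist < cutoff) & ~np.eye(len(ca_coords), dtype=bool)
    edges = np.argwhere(adjacency)                          # (num_edges, 2) index pairs
    return adjacency, edges

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coords = rng.normal(scale=15.0, size=(120, 3))          # stand-in for ESMFold output
    adj, edges = residue_graph(coords)
    print(adj.shape, edges.shape)
```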


Assuntos
Redes Neurais de Computação , Proteínas , Sítios de Ligação , Proteínas/química , RNA/metabolismo , DNA , Idioma
4.
Brief Bioinform ; 23(6)2022 11 19.
Article in English | MEDLINE | ID: mdl-36156661

ABSTRACT

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Of the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, for example as BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. In particular, we achieve F1 scores of 44.98%, 38.42% and 40.76% on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our case study on text generation further demonstrates the advantage of BioGPT in generating fluent descriptions for biomedical terms from the biomedical literature.
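
For readers who want to try generation, the publicly released BioGPT checkpoint can be loaded through the Hugging Face transformers API; the snippet below assumes the "microsoft/biogpt" checkpoint is reachable and uses standard beam-search generation, which may differ from the decoding settings used in the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")

prompt = "Aspirin is a drug that"
inputs = tokenizer(prompt, return_tensors="pt")
# Beam search over a short continuation; decoding hyper-parameters are illustrative.
outputs = model.generate(**inputs, max_new_tokens=40, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```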


Subjects
Data Mining, Natural Language Processing
5.
Brief Bioinform ; 23(4)2022 07 18.
Article in English | MEDLINE | ID: mdl-35649392

ABSTRACT

RNA binding proteins (RBPs) are critical for the post-transcriptional control of RNAs and play vital roles in a myriad of biological processes, such as RNA localization and gene regulation. Therefore, computational methods that are capable of accurately identifying RBPs are highly desirable and have important implications for biomedical and biotechnological applications. Here, we propose a two-stage deep transfer learning-based framework, termed RBP-TSTL, for accurate prediction of RBPs. In the first stage, the knowledge from the self-supervised pre-trained model was extracted as feature embeddings and used to represent the protein sequences, while in the second stage, a customized deep learning model was initialized based on an annotated pre-training RBPs dataset before being fine-tuned on each corresponding target species dataset. This two-stage transfer learning framework enables the RBP-TSTL model to be trained effectively and improves its prediction performance. Extensive performance benchmarking of the RBP-TSTL models trained using the features generated by the self-supervised pre-trained model against models trained using hand-crafted encoding features demonstrated the effectiveness of the proposed two-stage knowledge transfer strategy based on self-supervised pre-trained models. Using the best-performing RBP-TSTL models, we further conducted genome-scale RBP predictions for Homo sapiens, Arabidopsis thaliana, Escherichia coli, and Salmonella and established a computational compendium containing all the predicted putative RBP candidates. We anticipate that the proposed RBP-TSTL approach will be explored as a useful tool for the characterization of RNA-binding proteins and exploration of their sequence-structure-function relationships.
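
The two-stage idea, initializing a classifier on an annotated pre-training RBP dataset and then fine-tuning it on a target-species dataset, can be sketched as follows; the head architecture, learning rates, and the random tensors standing in for self-supervised embeddings are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def make_head(dim_in: int) -> nn.Module:
    # Small classifier over fixed-length protein embeddings (binary: RBP / non-RBP).
    return nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, 2))

def train(model, x, y, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

dim = 1024
# Stage 1: initialise on a large annotated pre-training RBP dataset.
x_pre, y_pre = torch.randn(2000, dim), torch.randint(0, 2, (2000,))
head = train(make_head(dim), x_pre, y_pre)

# Stage 2: fine-tune the same weights on a smaller target-species dataset,
# typically with a lower learning rate so stage-1 knowledge is preserved.
x_tgt, y_tgt = torch.randn(300, dim), torch.randint(0, 2, (300,))
head = train(head, x_tgt, y_tgt, lr=1e-4)
print(torch.softmax(head(torch.randn(1, dim)), dim=-1))
```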


Subjects
RNA-Binding Proteins, RNA, Binding Sites/genetics, Genome, Humans, Machine Learning, RNA/chemistry, RNA-Binding Proteins/metabolism, Sequence Analysis, RNA/methods
6.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34657153

ABSTRACT

Bacterial type IV secretion systems (T4SSs) are versatile, membrane-spanning apparatuses that mediate both genetic exchange and delivery of effector proteins to target eukaryotic cells. The secreted effectors (T4SEs) can affect gene expression and signal transduction of the host cells. As such, they often function as virulence factors and play an important role in bacterial pathogenesis. Existing T4SE prediction tools have utilized various machine learning algorithms, but their accuracy and speed remain to be improved. In this study, we apply a sequence embedding strategy from a pre-trained language model of protein sequences (TAPE) to the classification task of T4SEs. The training dataset is mainly derived from our updated type IV secretion system database SecReT4, with newly experimentally verified T4SEs. An online web server termed T4SEfinder was developed using TAPE and a multi-layer perceptron (MLP) for T4SE prediction after a comprehensive performance comparison with several candidate models; it achieves a slightly higher level of accuracy than the existing prediction tools. T4SEfinder takes only about 3 minutes to classify 5000 protein sequences, making its computational speed suitable for whole genome-scale T4SE detection in pathogenic bacteria. T4SEfinder may help meet the increasing demand for re-annotating secretion systems and effector proteins in sequenced bacterial genomes. T4SEfinder is freely accessible at https://tool2-mml.sjtu.edu.cn/T4SEfinder_TAPE/.


Subjects
Computational Biology, Language, Bacteria/genetics, Genome, Bacterial, Proteins/genetics, Type IV Secretion Systems/genetics
7.
Methods ; 220: 11-20, 2023 12.
Article in English | MEDLINE | ID: mdl-37871661

ABSTRACT

Secondary active transporters play pivotal roles in regulating ion and molecule transport across cell membranes, with implications in diseases like cancer. However, studying transporters via biochemical experiments poses challenges. We propose an effective computational approach to identify secondary active transporters from membrane protein sequences using pre-trained language models and deep learning neural networks. Our dataset comprised 290 secondary active transporters and 5,420 other membrane proteins from UniProt. Three types of features were extracted - one-hot encodings, position-specific scoring matrix profiles, and contextual embeddings from the ProtTrans language model. A multi-window convolutional neural network architecture scanned the ProtTrans embeddings using varying window sizes to capture multi-scale sequence patterns. The proposed model combining ProtTrans embeddings and multi-window convolutional neural networks achieved 86% sensitivity, 99% specificity and 98% overall accuracy in identifying secondary active transporters, outperforming conventional machine learning approaches. This work demonstrates the promise of integrating pre-trained language models like ProtTrans with multi-scale deep neural networks to effectively interpret transporter sequences for functional analysis. Our approach enables more accurate computational identification of secondary active transporters, advancing membrane protein research.
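
A compact sketch of a multi-window convolutional classifier over per-residue embeddings: parallel Conv1d branches with different kernel sizes are max-pooled and concatenated before a final linear layer. Window sizes, channel counts, and the embedding dimension are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiWindowCNN(nn.Module):
    """Multi-window 1D CNN over per-residue embeddings (illustrative hyper-parameters)."""

    def __init__(self, emb_dim: int = 1024, windows=(3, 5, 7), channels: int = 64):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, channels, kernel_size=w, padding=w // 2) for w in windows
        )
        self.classifier = nn.Linear(channels * len(windows), 1)

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) per-residue ProtTrans-style embeddings
        x = x.transpose(1, 2)                                      # (batch, emb_dim, seq_len)
        pooled = [conv(x).max(dim=-1).values for conv in self.convs]  # max over positions
        return torch.sigmoid(self.classifier(torch.cat(pooled, dim=-1)))

if __name__ == "__main__":
    model = MultiWindowCNN()
    dummy = torch.randn(4, 200, 1024)   # stand-in for ProtTrans embeddings of 4 sequences
    print(model(dummy).shape)           # torch.Size([4, 1]) transporter probabilities
```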


Assuntos
Aprendizado Profundo , Proteínas de Membrana , Redes Neurais de Computação , Aprendizado de Máquina , Sequência de Aminoácidos
8.
J Biomed Inform ; 155: 104657, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38772443

ABSTRACT

The increasing prevalence of overcrowding in Emergency Departments (EDs) threatens the effective delivery of urgent healthcare. Mitigation strategies include the deployment of monitoring systems capable of tracking and managing patient disposition to facilitate appropriate and timely care, which subsequently reduces patient revisits, optimizes resource allocation, and enhances patient outcomes. This study used approximately 250,000 emergency department visit records from Taipei Medical University-Shuang Ho Hospital to develop a natural language processing model using BlueBERT, a biomedical domain-specific pre-trained language model, to predict patient disposition status and unplanned readmissions. Data preprocessing and the integration of both structured and unstructured data were central to our approach. BlueBERT outperformed the other models examined owing to its pre-training on a diverse range of medical literature, enabling it to better comprehend the specialized terminology, relationships, and context present in ED data. We found that translating Chinese-English clinical narratives into English and textualizing numerical data into categorical representations significantly improved the prediction of patient disposition (AUROC = 0.9014) and 72-hour unscheduled return visits (AUROC = 0.6475). The study concludes that the BlueBERT-based model demonstrated superior prediction capabilities, surpassing the performance of prior patient disposition predictive models, thus offering promising applications in ED clinical practice.
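
The "textualizing numerical data into categorical representations" step can be illustrated with a small binning helper that turns vital-sign values into words appended to the clinical narrative; the field names, bin edges, and labels below are invented for the example and are not taken from the study.

```python
def textualize_vital(name: str, value: float, bins, labels) -> str:
    """Map a numeric measurement onto a categorical phrase via threshold bins."""
    for upper, label in zip(bins, labels):
        if value < upper:
            return f"{name} {label}"
    return f"{name} {labels[-1]}"

record = {"temperature_c": 38.9, "heart_rate": 118, "spo2": 91}   # hypothetical vitals
narrative = " ; ".join([
    textualize_vital("body temperature", record["temperature_c"],
                     bins=(36.0, 37.5, 38.5), labels=("low", "normal", "mildly elevated", "high")),
    textualize_vital("heart rate", record["heart_rate"],
                     bins=(60, 100), labels=("bradycardic", "normal", "tachycardic")),
    textualize_vital("oxygen saturation", record["spo2"],
                     bins=(90, 94), labels=("severely low", "low", "normal")),
])
print(narrative)  # appended to the translated clinical note before BERT tokenization
```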


Assuntos
Serviço Hospitalar de Emergência , Processamento de Linguagem Natural , Readmissão do Paciente , Serviço Hospitalar de Emergência/estatística & dados numéricos , Humanos , Readmissão do Paciente/estatística & dados numéricos , Feminino , Masculino , Adulto , Pessoa de Meia-Idade , Registros Eletrônicos de Saúde , Narração , Idoso
9.
Sensors (Basel) ; 24(13)2024 Jun 22.
Article in English | MEDLINE | ID: mdl-39000847

ABSTRACT

In the development of the Power Industry Internet of Things, the security of data interaction has always been an important challenge. In the blockchain-based power Industrial Internet of Things, node data interaction involves a large amount of sensitive data. In the current anti-leakage strategy for power business data interaction, regular expressions are used to identify and match sensitive data. This approach is only suitable for simple structured data; for unstructured data, practical matching strategies are lacking. Therefore, this paper proposes a deep learning-based anti-leakage method for power business data interaction, aiming to ensure the security of power business data exchanged between the State Grid business platform and third-party platforms. The method combines named entity recognition techniques, using regular expressions together with a DeBERTa (Decoding-enhanced BERT with disentangled attention)-BiLSTM (Bidirectional Long Short-Term Memory)-CRF (Conditional Random Field) model. The DeBERTa model performs pre-trained feature extraction, the BiLSTM captures contextual semantic features of the sequence, and the CRF layer finally yields the globally optimal tag sequence. Sensitive data matching is performed on interactive structured and unstructured data to identify privacy-sensitive information in the power business. The experimental results show that the F1 score of the proposed method for identifying sensitive data entities on the CLUENER 2020 dataset reaches 81.26%, which can effectively reduce the risk of power business data leakage and provides a practical solution for the power industry to ensure data security.
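
A hedged sketch of the BiLSTM-CRF tagging head is shown below; it assumes the third-party pytorch-crf package is installed and represents the upstream DeBERTa encoder by pre-computed embedding tensors, so it illustrates the tagging mechanism rather than the full pipeline.

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # assumes the third-party `pytorch-crf` package is installed

class BiLSTMCRFTagger(nn.Module):
    """BiLSTM emission scores followed by CRF training/decoding (illustrative sizes)."""

    def __init__(self, emb_dim: int = 768, hidden: int = 256, num_tags: int = 9):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, embeddings, tags):
        # In practice a padding mask would also be passed to the CRF.
        emissions = self.emit(self.bilstm(embeddings)[0])
        return -self.crf(emissions, tags)           # negative log-likelihood

    def decode(self, embeddings):
        emissions = self.emit(self.bilstm(embeddings)[0])
        return self.crf.decode(emissions)           # most likely tag path per sequence

if __name__ == "__main__":
    model = BiLSTMCRFTagger()
    emb = torch.randn(2, 20, 768)                   # stand-in for DeBERTa token embeddings
    tags = torch.randint(0, 9, (2, 20))
    print(model.loss(emb, tags).item())
    print(model.decode(emb)[0][:5])
```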

10.
Proteomics ; 23(23-24): e2200494, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37863817

ABSTRACT

Membrane proteins play a crucial role in various cellular processes and are essential components of cell membranes. Computational methods have emerged as a powerful tool for studying membrane proteins due to their complex structures and properties that make them difficult to analyze experimentally. Traditional features for protein sequence analysis based on amino acid types, composition, and pair composition have limitations in capturing higher-order sequence patterns. Recently, multiple sequence alignment (MSA) and pre-trained language models (PLMs) have been used to generate features from protein sequences. However, the significant computational resources required for MSA-based feature generation can be a major bottleneck for many applications. Several methods and tools have been developed to accelerate the generation of MSAs and reduce their computational cost, including heuristics and approximate algorithms. Additionally, the use of PLMs such as BERT has shown great potential in generating informative embeddings for protein sequence analysis. In this review, we provide an overview of traditional and more recent methods for generating features from protein sequences, with a particular focus on MSAs and PLMs. We highlight the advantages and limitations of these approaches and discuss the methods and tools developed to address the computational challenges associated with feature generation. Overall, the advancements in computational methods and tools provide a promising avenue for gaining deeper insights into the function and properties of membrane proteins, which can have significant implications in drug discovery and personalized medicine.


Assuntos
Algoritmos , Proteínas de Membrana , Animais , Cavalos , Alinhamento de Sequência , Sequência de Aminoácidos , Análise de Sequência de Proteína , Biologia Computacional/métodos
11.
Methods ; 203: 160-166, 2022 07.
Article in English | MEDLINE | ID: mdl-35378296

ABSTRACT

Abstractive summarization models can generate summaries auto-regressively, but the quality is often degraded by noise in the text. Learning cross-sentence relations is a crucial step in this task, and graph-based networks are more effective at capturing sentence relationships. Moreover, domain knowledge is essential for distinguishing noise in specialized texts. A novel model structure called UGDAS is proposed in this paper, which combines a sentence-level denoiser based on an unsupervised graph network with an auto-regressive generator. It utilizes domain knowledge and sentence position information to denoise the original text and further improve the quality of generated summaries. We use the recently introduced CORD-19 (COVID-19 Open Research Dataset), which contains large-scale data on coronaviruses, for the text summarization task. The experimental results show that our model achieves the SOTA (state-of-the-art) result on the CORD-19 dataset and outperforms the related baseline models on the PubMed Abstract dataset.


Subjects
COVID-19, Semantics, Concept Formation, Humans
12.
J Biomed Inform ; 145: 104456, 2023 09.
Article in English | MEDLINE | ID: mdl-37482171

ABSTRACT

Triplet extraction is one of the fundamental tasks in biomedical text mining. Compared with traditional pipeline approaches, joint methods can alleviate the error propagation problem from entity recognition to relation classification. However, existing methods face challenges in detecting overlapping entities and overlapping relations, which are ubiquitous in biomedical texts. In this work, we propose a novel pipeline method for end-to-end biomedical triplet extraction. In particular, a span-based detection strategy is used to detect overlapping triplets by enumerating possible candidate spans and entity pairs. The strategy is further used to capture different contextualized representations via an entity model and a relation model, respectively. Furthermore, to enhance the interrelation between spans, entity information from the output of the entity model is used to construct the input for the relation model without utilizing any external knowledge. Our approach is evaluated on the drug-drug interaction (DDI) and chemical-protein interaction (CHEMPROT) datasets, improving the absolute F1-score in relation extraction by 3.5%-3.7% compared with prior work. The experimental results highlight the importance of overlapping triplet detection using the span-based approach, the acquisition of various contextualized representations via different in-domain pre-trained language models, and the early fusion of entity information in the relation model.
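
The span-based detection strategy can be illustrated by enumerating all candidate spans up to a maximum width and then forming ordered span pairs for the relation model; the maximum width and the example sentence are arbitrary choices for illustration.

```python
from itertools import product

def enumerate_spans(tokens, max_width=4):
    """All candidate spans up to max_width tokens; (start, end) indices are inclusive."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start, min(start + max_width, len(tokens))):
            spans.append((start, end))
    return spans

def candidate_pairs(spans):
    """Ordered pairs of distinct spans handed to the relation model; overlapping
    spans are deliberately kept so overlapping triplets remain recoverable."""
    return [(h, t) for h, t in product(spans, repeat=2) if h != t]

tokens = "ketoconazole increases plasma concentrations of midazolam".split()
spans = enumerate_spans(tokens)
print(len(spans), spans[:3])
print(len(candidate_pairs(spans)))   # each pair is scored by the relation classifier
```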


Subjects
Data Mining, Language, Data Mining/methods, Natural Language Processing, Proteins, Drug Interactions
13.
J Biomed Inform ; 144: 104442, 2023 08.
Article in English | MEDLINE | ID: mdl-37429512

ABSTRACT

OBJECTIVE: We develop a deep learning framework based on the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model using unstructured clinical notes from electronic health records (EHRs) to predict the risk of disease progression from Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD). METHODS: We identified 3657 patients diagnosed with MCI together with their progress notes from Northwestern Medicine Enterprise Data Warehouse (NMEDW) between 2000 and 2020. The progress notes no later than the first MCI diagnosis were used for the prediction. We first preprocessed the notes by deidentification, cleaning and splitting into sections, and then pre-trained a BERT model for AD (named AD-BERT) based on the publicly available Bio+Clinical BERT on the preprocessed notes. All sections of a patient were embedded into a vector representation by AD-BERT and then combined by global MaxPooling and a fully connected network to compute the probability of MCI-to-AD progression. For validation, we conducted a similar set of experiments on 2563 MCI patients identified at Weill Cornell Medicine (WCM) during the same timeframe. RESULTS: Compared with the 7 baseline models, the AD-BERT model achieved the best performance on both datasets, with Area Under receiver operating characteristic Curve (AUC) of 0.849 and F1 score of 0.440 on NMEDW dataset, and AUC of 0.883 and F1 score of 0.680 on WCM dataset. CONCLUSION: The use of EHRs for AD-related research is promising, and AD-BERT shows superior predictive performance in modeling MCI-to-AD progression prediction. Our study demonstrates the utility of pre-trained language models and clinical notes in predicting MCI-to-AD progression, which could have important implications for improving early detection and intervention for AD.
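
The aggregation described above, per-section embeddings combined by global max pooling and a fully connected network, can be sketched as follows; hidden sizes and the sigmoid output are illustrative assumptions rather than the study's exact configuration.

```python
import torch
import torch.nn as nn

class NoteLevelClassifier(nn.Module):
    """Per-section vectors from a clinical BERT encoder are max-pooled across
    sections and scored by a small fully connected network."""

    def __init__(self, emb_dim: int = 768, hidden: int = 128):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, section_embeddings):
        # section_embeddings: (num_sections, emb_dim) for one patient
        pooled, _ = section_embeddings.max(dim=0)      # global max pooling over sections
        return torch.sigmoid(self.fc(pooled))          # MCI-to-AD progression probability

if __name__ == "__main__":
    sections = torch.randn(12, 768)    # stand-in for AD-BERT section embeddings
    print(NoteLevelClassifier()(sections).item())
```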


Subjects
Alzheimer Disease, Cognitive Dysfunction, Humans, Alzheimer Disease/diagnosis, Cognitive Dysfunction/diagnosis, Disease Progression
14.
Sensors (Basel) ; 23(6)2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36991693

ABSTRACT

People exchange emotions through conversations with others and provide different answers depending on the reasons for their emotions. During a conversation, it is important to find not only such emotions but also their cause. Emotion-cause pair extraction (ECPE) is a task used to determine emotions and their causes in a single pair within a text, and various studies have been conducted to accomplish ECPE tasks. However, existing studies have limitations in that some models conduct the task in two or more steps, whereas others extract only one emotion-cause pair for a given text. We propose a novel methodology for extracting multiple emotion-cause pairs simultaneously from a given conversation with a single model. Our proposed model is a token-classification-based emotion-cause pair extraction model, which applies the BIO (beginning-inside-outside) tagging scheme to efficiently extract multiple emotion-cause pairs in conversations. The proposed model showed the best performance on the RECCON benchmark dataset in comparative experiments with existing studies and was experimentally verified to efficiently extract multiple emotion-cause pairs in conversations.
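
A small sketch of the BIO side of the task: decoding a predicted tag sequence into emotion and cause spans and pairing them. The tag inventory and the nearest-emotion pairing rule are simplifications for illustration, not the paper's exact scheme.

```python
def bio_to_spans(tags):
    """Collapse BIO tags into (label, start, end) spans; end index is inclusive."""
    spans, current = [], None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [tag[2:], i, i]
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[2] = i
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [tuple(s) for s in spans]

# Hypothetical tag sequence over a tokenized conversation.
tags = ["O", "B-EMO", "I-EMO", "O", "B-CAU", "I-CAU", "I-CAU", "O", "B-EMO", "B-CAU"]
spans = bio_to_spans(tags)
emotions = [s for s in spans if s[0] == "EMO"]
causes = [s for s in spans if s[0] == "CAU"]
# Illustrative pairing rule: attach each cause to the nearest emotion span.
pairs = [(min(emotions, key=lambda e: abs(e[1] - c[1])), c) for c in causes]
print(pairs)
```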


Subjects
Communication, Emotions, Humans, Facial Expression
15.
J Biomed Inform ; 127: 103999, 2022 03.
Article in English | MEDLINE | ID: mdl-35104642

ABSTRACT

The coronavirus disease (COVID-19) has claimed the lives of over 350,000 people and infected more than 173 million people worldwide, prompting researchers from diverse fields to accelerate their research on diagnostics, therapies, and vaccines. Researchers also publish their recent research progress through scientific papers. However, manually writing the abstract of a paper is time-consuming and increases researchers' writing burden. Abstractive summarization techniques, which automatically provide researchers with reliable draft abstracts, can alleviate this problem. In this work, we propose a linguistically enriched SciBERT-based summarization model for COVID-19 scientific papers, named COVIDSum. Specifically, we first extract salient sentences from source papers and construct word co-occurrence graphs. Then, we adopt a SciBERT-based sequence encoder and a Graph Attention Networks-based graph encoder to encode sentences and word co-occurrence graphs, respectively. Finally, we fuse the above two encodings and generate an abstractive summary of each scientific paper. When evaluated on the publicly available COVID-19 open research dataset, our proposed model achieves a significant improvement over other document summarization models.


Subjects
COVID-19, Humans, Language, Publishing, SARS-CoV-2
16.
BMC Med Inform Decis Mak ; 22(Suppl 3): 235, 2022 09 06.
Article in English | MEDLINE | ID: mdl-36068551

ABSTRACT

BACKGROUND: Clinical trial protocols are the foundation for advancing medical sciences; however, the extraction of accurate and meaningful information from the original clinical trials is very challenging due to the complex and unstructured texts of such documents. Named entity recognition (NER) is a fundamental and necessary step to process and standardize the unstructured text in clinical trials using Natural Language Processing (NLP) techniques. METHODS: In this study we fine-tuned pre-trained language models to support the NER task on clinical trial eligibility criteria. We systematically investigated four pre-trained contextual embedding models for the biomedical domain (i.e., BioBERT, BlueBERT, PubMedBERT, and SciBERT) and two models for the open domain (BERT and SpanBERT) for NER tasks using three existing clinical trial eligibility criteria corpora. In addition, we also investigated the feasibility of data augmentation approaches and evaluated their performance. RESULTS: Our evaluation results using tenfold cross-validation show that domain-specific transformer models achieved better performance than the general transformer models, with the best performance obtained by the PubMedBERT model (F1-scores of 0.715, 0.836, and 0.622 for the three corpora respectively). The data augmentation results show that it is feasible to leverage additional corpora to improve NER performance. CONCLUSIONS: Findings from this study not only demonstrate the importance of contextual embeddings trained from domain-specific corpora, but also shed light on the benefits of leveraging multiple data sources for the challenging NER task in clinical trial eligibility criteria text.
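
A minimal fine-tuning setup for one of the investigated models can be expressed with the Hugging Face transformers token-classification API; the checkpoint name refers to the commonly used public PubMedBERT release (an assumption about availability, not the authors' exact weights) and the label set is invented for illustration.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
labels = ["O", "B-Condition", "I-Condition", "B-Drug", "I-Drug"]  # illustrative tag set

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(labels))

enc = tokenizer("Patients with type 2 diabetes on metformin", return_tensors="pt")
dummy_labels = torch.zeros_like(enc["input_ids"])   # all "O", just to exercise the loss
out = model(**enc, labels=dummy_labels)
print(out.loss.item(), out.logits.shape)            # training would back-propagate out.loss
```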


Subjects
Eligibility Determination, Names, Clinical Trials as Topic, Humans, Information Storage and Retrieval, Language, Medicine, Natural Language Processing
17.
Sensors (Basel) ; 22(24)2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36560289

ABSTRACT

A large volume of Chinese operational text data has been recorded during the operation and maintenance of the high-speed railway catenary system. Such defect text records can facilitate defect detection and defect severity analysis if mined efficiently and accurately. In this context, this paper focuses on a specific problem in defect text mining: efficiently extracting defect-relevant information from catenary defect text records and automatically identifying catenary defect severity. The task is cast as a machine learning problem of defect text classification. First, we summarize the characteristics of catenary defect texts and construct a text dataset. Second, we use BERT to learn from the defect texts and generate word embedding vectors with contextual features, which are fed into the classification model. Third, we develop a deep text categorization network (DTCN) to distinguish the catenary defect level, taking the contextualized semantic features into account. Finally, the effectiveness of our proposed method (BERT-DTCN) is validated using a catenary defect text dataset collected from 2016 to 2018 in the China Railway Administration in Chengdu, Lanzhou, and Hengshui. BERT-DTCN outperforms several competitive methods in terms of accuracy, precision, recall, and F1-score.


Assuntos
Semântica , Humanos , China , Mineração de Dados , Aprendizado de Máquina
18.
Entropy (Basel) ; 24(9)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36141091

ABSTRACT

Automated essay scoring aims to evaluate the quality of an essay automatically. It is one of the main educational applications of natural language processing. Recently, pre-training techniques have been used to improve performance on downstream tasks, and many studies have adopted pre-training followed by fine-tuning in essay scoring systems. However, obtaining better features, such as prompt-related features from the pre-trained encoder, is critical but not fully studied. In this paper, we create a prompt feature fusion method that is better suited for fine-tuning. In addition, we use multi-task learning, designing two auxiliary tasks, prompt prediction and prompt matching, to obtain better features. The experimental results show that both auxiliary tasks can improve model performance, and the combination of the two auxiliary tasks with the NEZHA pre-trained encoder produces the best results, improving Quadratic Weighted Kappa by 2.5% and Pearson's correlation coefficient by 2% on average across all results on the HSK dataset.

19.
BMC Bioinformatics ; 22(1): 272, 2021 May 26.
Article in English | MEDLINE | ID: mdl-34039273

ABSTRACT

BACKGROUND: Biomedical question answering (QA) is a sub-task of natural language processing in a specific domain, which aims to answer a question in the biomedical field based on one or more related passages and can provide people with accurate healthcare-related information. Recently, many approaches based on neural networks and large-scale pre-trained language models have substantially improved its performance. However, considering the lexical characteristics of biomedical corpora and the small scale of available datasets, there is still much room for improvement in biomedical QA tasks. RESULTS: Inspired by the importance of syntactic and lexical features in the biomedical corpus, we propose a new framework that extracts external features, such as part-of-speech and named-entity tags, and fuses them with the original text representation encoded by a pre-trained language model to enhance biomedical question answering performance. Our model achieves an overall improvement on all three metrics on the BioASQ 6b, 7b, and 8b factoid question answering tasks. CONCLUSIONS: The experiments on the BioASQ question answering dataset demonstrate the effectiveness of our external feature-enriched framework. They show that external lexical and syntactic features can improve the performance of pre-trained language models on biomedical question answering tasks.
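
The external-feature fusion idea can be sketched as projecting one-hot part-of-speech and named-entity features and concatenating them with PLM token representations; the dimensions and projection layers below are assumptions, not the authors' exact fusion architecture.

```python
import torch
import torch.nn as nn

class FeatureFusionEncoder(nn.Module):
    """Fuse external token-level features (one-hot POS and NER tags) with
    pre-trained language model token representations by projection + concatenation."""

    def __init__(self, plm_dim: int = 768, pos_dim: int = 17, ner_dim: int = 5, hidden: int = 64):
        super().__init__()
        self.pos_proj = nn.Linear(pos_dim, hidden)
        self.ner_proj = nn.Linear(ner_dim, hidden)
        self.mix = nn.Linear(plm_dim + 2 * hidden, plm_dim)

    def forward(self, plm_tokens, pos_onehot, ner_onehot):
        fused = torch.cat([plm_tokens, self.pos_proj(pos_onehot), self.ner_proj(ner_onehot)], dim=-1)
        return torch.relu(self.mix(fused))     # enriched token representation for the QA head

if __name__ == "__main__":
    enc = FeatureFusionEncoder()
    plm = torch.randn(2, 30, 768)              # stand-in for PLM outputs over a passage
    pos = torch.nn.functional.one_hot(torch.randint(0, 17, (2, 30)), 17).float()
    ner = torch.nn.functional.one_hot(torch.randint(0, 5, (2, 30)), 5).float()
    print(enc(plm, pos, ner).shape)            # torch.Size([2, 30, 768])
```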


Subjects
Natural Language Processing, Neural Networks, Computer, Humans, Language
20.
J Biomed Inform ; 113: 103628, 2021 01.
Article in English | MEDLINE | ID: mdl-33232839

ABSTRACT

Enriching a terminology base (TB) is an important and continuous process, since formal terms can be renamed and new term aliases emerge all the time. As a potential supplement for TB enrichment, the electronic health record (EHR) is a fundamental source for clinical research and practice. The task of aligning the set of external terms in EHRs to a TB can be regarded as entity alignment without structure information. Conventional approaches mainly use the internal structural information of multiple knowledge bases (KBs) to map entities and their counterparts among KBs. However, the external terms in EHRs are independent clinical terms, which lack interrelations. To achieve entity alignment in this case, we propose a novel automatic TB enrichment approach, named semantic & structure embeddings-based relevancy prediction (S2ERP). To obtain the semantic embedding of an external term, we feed it together with the formal entity into a pre-trained language model. Meanwhile, a graph convolutional network is used to obtain the structure embeddings of the synonyms and hyponyms in the TB. Afterwards, S2ERP combines both embeddings to measure the relevancy. Experimental results on a clinical indicator TB, collected from 38 top-class hospitals of the Shanghai Hospital Development Center, show that the proposed approach outperforms baseline methods by 14.16% in Hits@1.
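
A hedged sketch of the final relevancy prediction step: the semantic embedding (from the PLM) and the structure embedding (from the graph convolutional network) are concatenated and scored by a small feed-forward network; dimensions and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelevancyScorer(nn.Module):
    """Combine a semantic embedding (external term + formal entity encoded by a PLM)
    with a structure embedding (GCN over TB synonyms/hyponyms) to predict relevancy."""

    def __init__(self, sem_dim: int = 768, struct_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(sem_dim + struct_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, semantic_emb, structure_emb):
        return torch.sigmoid(self.score(torch.cat([semantic_emb, structure_emb], dim=-1)))

if __name__ == "__main__":
    scorer = RelevancyScorer()
    sem = torch.randn(4, 768)      # stand-in for PLM embeddings of (external term, TB entity) pairs
    struct = torch.randn(4, 128)   # stand-in for GCN embeddings of the TB entity
    print(scorer(sem, struct).squeeze(-1))   # relevancy scores used to rank candidate entities
```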


Subjects
Electronic Health Records, Knowledge Bases, China, Natural Language Processing, Semantics