Prompt Tuning in Biomedical Relation Extraction.
He, Jianping; Li, Fang; Li, Jianfu; Hu, Xinyue; Nian, Yi; Xiang, Yang; Wang, Jingqi; Wei, Qiang; Li, Yiming; Xu, Hua; Tao, Cui.
Affiliations
  • He J; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Li F; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Li J; Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL, USA.
  • Hu X; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Nian Y; Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL, USA.
  • Xiang Y; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Wang J; Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL, USA.
  • Wei Q; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Li Y; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Xu H; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Tao C; McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA.
J Healthc Inform Res; 8(2): 206-224, 2024 Jun.
Article in En | MEDLINE | ID: mdl-38681754
ABSTRACT
Biomedical relation extraction (RE) is critical in constructing high-quality knowledge graphs and databases as well as supporting many downstream text mining applications. This paper explores prompt tuning on biomedical RE and its few-shot scenarios, aiming to propose a simple yet effective model for this specific task. Prompt tuning reformulates natural language processing (NLP) downstream tasks into masked language problems by embedding specific text prompts into the original input, facilitating the adaptation of pre-trained language models (PLMs) to better address these tasks. This study presents a customized prompt tuning model designed explicitly for biomedical RE, including its applicability in few-shot learning contexts. The model's performance was rigorously assessed using the chemical-protein relation (CHEMPROT) dataset from BioCreative VI and the drug-drug interaction (DDI) dataset from SemEval-2013, showcasing its superior performance over conventional fine-tuned PLMs across both datasets, encompassing few-shot scenarios. This observation underscores the effectiveness of prompt tuning in enhancing the capabilities of conventional PLMs, though the extent of enhancement may vary by specific model. Additionally, the model demonstrated a harmonious balance between simplicity and efficiency, matching state-of-the-art performance without needing external knowledge or extra computational resources. The pivotal contribution of our study is the development of a suitably designed prompt tuning model, highlighting prompt tuning's effectiveness in biomedical RE. It offers a robust, efficient approach to the field's challenges and represents a significant advancement in extracting complex relations from biomedical texts.
Supplementary Information: The online version contains supplementary material available at 10.1007/s41666-024-00162-9.
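The reformulation the abstract describes can be sketched in a few lines: a cloze-style template with a [MASK] slot is wrapped around the original sentence, and a "verbalizer" maps the token a masked language model would predict at that slot back to a relation label. This is a minimal, self-contained illustration of the general prompt-tuning pattern; the template wording, the verbalizer entries, and the CHEMPROT-style labels below are hypothetical assumptions for demonstration, not the paper's actual design.

```python
# Illustrative sketch of prompt-based relation extraction (template and
# verbalizer are hypothetical, not taken from the paper).

# Verbalizer: maps a predicted mask-filler word to a relation label.
# The label strings here follow the CHEMPROT naming style but are assumptions.
RELATION_VERBALIZER = {
    "inhibitor": "CPR:4 (inhibition)",
    "activator": "CPR:3 (activation)",
}


def build_prompt(sentence: str, chem: str, prot: str,
                 mask_token: str = "[MASK]") -> str:
    """Embed a cloze-style prompt into the original input so a masked
    language model can fill in the relation word at the mask position."""
    return f"{sentence} In this sentence, {chem} is the {mask_token} of {prot}."


def verbalize(predicted_token: str) -> str:
    """Map the model's predicted mask filler to a relation label,
    falling back to the no-relation class for unmapped words."""
    return RELATION_VERBALIZER.get(predicted_token, "false (no relation)")


# Example: construct the prompt for a chemical-protein pair.
prompt = build_prompt("Aspirin irreversibly inactivates COX-1.",
                      "Aspirin", "COX-1")
```

In a full pipeline, `prompt` would be fed to a masked language model (e.g. a BioBERT-style PLM), the highest-probability filler for the mask compared against the verbalizer vocabulary, and `verbalize` applied to recover the relation label; in few-shot settings only a handful of labeled examples tune this pathway.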
Full text: 1 | Database: MEDLINE | Language: En | Journal: J Healthc Inform Res | Publication year: 2024 | Document type: Article