Asking the right questions for mutagenicity prediction from BioMedical text.
Acharya, Sathwik; Shinada, Nicolas K; Koyama, Naoki; Ikemori, Megumi; Nishioka, Tomoki; Hitaoka, Seiji; Hakura, Atsushi; Asakura, Shoji; Matsuoka, Yukiko; Palaniappan, Sucheendra K.
Affiliation
  • Acharya S; The Systems Biology Institute, Tokyo, Japan.
  • Shinada NK; The Systems Biology Institute, Tokyo, Japan.
  • Koyama N; SBX Corporation, Tokyo, Japan.
  • Ikemori M; Global Drug Safety, Eisai Co., Ltd., Tokyo, Japan.
  • Nishioka T; Planning Operation, hhc Data Creation Center, Eisai Co., Ltd., Tokyo, Japan.
  • Hitaoka S; 5D Integration Unit, hhc Data Creation Center, Eisai Co., Ltd., Tokyo, Japan.
  • Hakura A; 5D Integration Unit, hhc Data Creation Center, Eisai Co., Ltd., Tokyo, Japan.
  • Asakura S; Global Drug Safety, Eisai Co., Ltd., Tokyo, Japan.
  • Matsuoka Y; Global Drug Safety, Eisai Co., Ltd., Tokyo, Japan.
  • Palaniappan SK; The Systems Biology Institute, Tokyo, Japan.
NPJ Syst Biol Appl; 9(1): 63, 2023 Dec 18.
Article in En | MEDLINE | ID: mdl-38110446
ABSTRACT
Assessing the mutagenicity of chemicals is an essential task in the drug development process. Databases and other structured sources of Ames mutagenicity data exist, carefully and laboriously curated from scientific publications; as knowledge accumulates over time, keeping these databases up to date imposes a constant and often impractical overhead. In this paper, we first propose the problem of predicting the mutagenicity of chemicals from textual information in scientific publications. More concretely, given a chemical and natural-language evidence from publications describing its mutagenicity, the goal of the model is to predict whether the chemical is potentially mutagenic. For this, we first construct a gold-standard data set and then propose MutaPredBERT, a prediction model fine-tuned on BioLinkBERT using a question-answering formulation of the problem. We leverage transfer learning with large transformer-based models to achieve a macro F1 score of >0.88 even with relatively little data for fine-tuning. Our work establishes the utility of large language models for constructing structured knowledge bases directly from scientific publications.
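To make the question-answering formulation concrete, the sketch below shows how such a (question, evidence) pair could be encoded for a binary sequence classifier built on BioLinkBERT with the Hugging Face `transformers` library. The checkpoint name `michiyasunaga/BioLinkBERT-base` refers to the publicly released base model; the question template, label mapping, and `predict_mutagenicity` helper are illustrative assumptions, not the paper's released MutaPredBERT code or weights.

```python
# Minimal sketch of a QA-style mutagenicity classifier on BioLinkBERT.
# Template, label indices, and function names are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "michiyasunaga/BioLinkBERT-base"  # base model the paper fine-tunes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # binary head: non-mutagenic vs. mutagenic
)

def predict_mutagenicity(chemical: str, evidence: str) -> float:
    """Return P(mutagenic) for a chemical given textual evidence.

    The (question, evidence) pair is encoded as a standard sentence-pair
    input, mirroring a question-answering framing of the classification task.
    """
    question = f"Is {chemical} mutagenic?"  # assumed question template
    inputs = tokenizer(question, evidence, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Index 1 is taken to be the "mutagenic" label in this sketch.
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Example usage; with an untrained classification head the probability is
# not meaningful until the model is fine-tuned on labeled evidence.
print(predict_mutagenicity(
    "benzo[a]pyrene",
    "The compound induced revertant colonies in the Ames test with TA98.",
))
```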
Subjects

Full text: 1 Database: MEDLINE Main subject: Mutagens Language: En Publication year: 2023 Document type: Article