Multimodal learning on graphs for disease relation extraction.
Lin, Yucong; Lu, Keming; Yu, Sheng; Cai, Tianxi; Zitnik, Marinka.
Affiliation
  • Lin Y; Institute of Engineering Medicine, Beijing Institute of Technology, Beijing, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China.
  • Lu K; Viterbi School of Engineering, University of Southern California, Los Angeles, CA, 90007, USA.
  • Yu S; Center for Statistical Science, Tsinghua University, Beijing, China; Department of Industrial Engineering, Tsinghua University, Beijing, China.
  • Cai T; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, 02115, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA.
  • Zitnik M; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA; Broad Institute of MIT and Harvard, Boston, MA, 02142, USA; Harvard Data Science Initiative, Cambridge, MA, 02138, USA. Electronic address: marinka@hms.harvard.edu.
J Biomed Inform; 143: 104415, 2023 Jul.
Article in En | MEDLINE | ID: mdl-37276949
ABSTRACT
Disease knowledge graphs have emerged as a powerful tool for artificial intelligence to connect, organize, and access diverse information about diseases. Relations between disease concepts are often distributed across multiple datasets, including unstructured plain-text datasets and incomplete disease knowledge graphs. Extracting disease relations from multimodal data sources is thus crucial for constructing accurate and comprehensive disease knowledge graphs. We introduce REMAP, a multimodal machine learning approach for disease relation extraction. REMAP jointly embeds a partial, incomplete knowledge graph and a medical language dataset into a compact latent vector space, aligning the multimodal embeddings for optimal disease relation extraction. Additionally, REMAP uses a decoupled model structure that supports inference from single-modal data, making it applicable when one modality is missing. We apply REMAP to a disease knowledge graph with 96,913 relations and a text dataset of 1.24 million sentences. On a dataset annotated by human experts, fusing disease knowledge graphs with language information lets REMAP improve language-based disease relation extraction by 10.0% (accuracy) and 17.2% (F1-score). Furthermore, REMAP leverages text information to recommend new relationships in the knowledge graph, outperforming graph-based methods by 8.4% (accuracy) and 10.4% (F1-score). REMAP is a flexible multimodal approach for extracting disease relations by fusing structured knowledge and language information. This approach provides a powerful model to easily find, access, and evaluate relations between disease concepts.
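As a toy sketch of the cross-modal alignment idea the abstract describes: two encoders (one for the knowledge graph, one for text) embed the same disease concepts, and an alignment objective pulls the two embeddings of each shared concept together in the latent space. All names, dimensions, the squared-distance loss, and the DistMult-style relation score below are illustrative assumptions, not REMAP's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): 5 disease concepts, 4-dim embeddings.
# kg_emb[i] and text_emb[i] embed the SAME concept via two different encoders.
kg_emb = rng.normal(size=(5, 4))
text_emb = rng.normal(size=(5, 4))

def alignment_loss(a, b):
    """Mean squared distance between a concept's two modality embeddings."""
    return float(np.mean(np.sum((a - b) ** 2, axis=1)))

before = alignment_loss(kg_emb, text_emb)

# One gradient-descent step on the text embeddings toward the KG embeddings:
# the gradient of ||text - kg||^2 with respect to text is 2 * (text - kg).
lr = 0.1
text_emb = text_emb - lr * 2.0 * (text_emb - kg_emb)

after = alignment_loss(kg_emb, text_emb)
assert after < before  # the alignment objective decreases

# Once both modalities live in a shared space, a candidate relation between
# two concepts can be scored, e.g. with a DistMult-style bilinear score
# (again an assumption for illustration, not REMAP's exact decoder):
relation = rng.normal(size=4)  # a learned per-relation-type vector
score = float(np.sum(kg_emb[0] * relation * text_emb[1]))
```

Because each step scales the residual `text - kg` by a constant factor `(1 - 2 * lr)`, the alignment loss shrinks geometrically; a decoupled design as described in the abstract would still let either encoder score relations alone when the other modality is unavailable.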
Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Machine Learning Type of study: Prognostic_studies Limits: Humans Language: En Year: 2023 Type: Article