Multimodal learning with graphs.
Ektefaie, Yasha; Dasoulas, George; Noori, Ayush; Farhat, Maha; Zitnik, Marinka.
Affiliation
  • Ektefaie Y; Bioinformatics and Integrative Genomics Program, Harvard Medical School, Boston, MA 02115, USA.
  • Dasoulas G; Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA.
  • Noori A; Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA.
  • Farhat M; Harvard Data Science Initiative, Cambridge, MA 02138, USA.
  • Zitnik M; Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA.
Nat Mach Intell; 5(4): 340-350, 2023 Apr.
Article in En | MEDLINE | ID: mdl-38076673
Artificial intelligence for graphs has achieved remarkable success in modeling complex systems, ranging from dynamic networks in biology to interacting particle systems in physics. However, increasingly heterogeneous graph datasets call for multimodal methods that can combine different inductive biases, that is, the sets of assumptions that algorithms use to make predictions for inputs they have not encountered during training. Learning on multimodal datasets presents fundamental challenges because the inductive biases can vary by data modality and graphs might not be explicitly given in the input. To address these challenges, multimodal graph AI methods combine different modalities while leveraging cross-modal dependencies through graphs. Diverse datasets are combined using graphs and fed into sophisticated multimodal architectures, specified as image-intensive, knowledge-grounded and language-intensive models. Using this categorization, we introduce a blueprint for multimodal graph learning, use it to study existing methods and provide guidelines to design new models.
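
To make the idea concrete, the sketch below illustrates (in plain NumPy) the general pattern the abstract describes: nodes carrying features from different modalities are projected by modality-specific encoders into a shared space and then fused by message passing over the graph that encodes their cross-modal dependencies. This is not the authors' implementation; the toy graph, feature dimensions and projection matrices are hypothetical and chosen only for illustration.

    # Minimal, hypothetical sketch of multimodal fusion over a graph.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy multimodal graph: nodes 0-1 carry image-derived features,
    # nodes 2-3 carry text-derived features; edges link related nodes
    # within and across modalities (all values are made up).
    edges = [(0, 2), (1, 2), (1, 3), (2, 3)]
    image_feats = rng.normal(size=(2, 16))   # e.g. pooled image embeddings
    text_feats = rng.normal(size=(2, 32))    # e.g. sentence embeddings

    # Modality-specific projections into a shared latent space; these stand
    # in for the modality-specific encoders that handle differing inductive biases.
    d_shared = 8
    W_image = rng.normal(size=(16, d_shared)) * 0.1
    W_text = rng.normal(size=(32, d_shared)) * 0.1
    h = np.vstack([image_feats @ W_image, text_feats @ W_text])   # (4, d_shared)

    # One mean-aggregation message-passing step over the graph (with self-loops),
    # so each node mixes information from its cross-modal neighbours.
    n = h.shape[0]
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    h_fused = (A / A.sum(axis=1, keepdims=True)) @ h

    print(h_fused.shape)   # (4, 8): fused node representations

In practice this fusion step would be one layer of a learned graph neural network inside an image-intensive, knowledge-grounded or language-intensive architecture, rather than fixed random projections.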

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Nat Mach Intell Publication year: 2023 Document type: Article
