TransformerG2G: Adaptive time-stepping for learning temporal graph embeddings using transformers.
Varghese, Alan John; Bora, Aniruddha; Xu, Mengjia; Karniadakis, George Em.
Affiliation
  • Varghese AJ; School of Engineering, Brown University, Providence, RI 02912, USA.
  • Bora A; Division of Applied Mathematics, Brown University, Providence, RI 02912, USA.
  • Xu M; Department of Data Science, New Jersey Institute of Technology, Newark, NJ 07102, USA; Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Electronic address: mx6@njit.edu.
  • Karniadakis GE; School of Engineering, Brown University, Providence, RI 02912, USA; Division of Applied Mathematics, Brown University, Providence, RI 02912, USA; Pacific Northwest National Laboratory, Richland, WA 99354, USA.
Neural Netw; 172: 106086, 2024 Apr.
Article in En | MEDLINE | ID: mdl-38159511
ABSTRACT
Dynamic graph embedding has emerged as a highly effective technique for addressing diverse temporal graph analytic tasks (e.g., link prediction, node classification, recommender systems, anomaly detection, and graph generation) in various applications. Such temporal graphs exhibit heterogeneous transient dynamics, varying time intervals, and node features that evolve substantially over time. Hence, incorporating long-range dependencies from the historical graph context plays a crucial role in accurately learning their temporal dynamics. In this paper, we develop TransformerG2G, a graph embedding model with uncertainty quantification that exploits a transformer encoder to first learn intermediate node representations from each node's current state (timestamp t) and its previous context (timestamps t-1, ..., t-l, where l is the context length). We then employ two projection layers to generate lower-dimensional multivariate Gaussian distributions as each node's latent embedding at timestamp t. We consider diverse benchmarks with varying levels of "novelty," as measured by TEA (Temporal Edge Appearance) plots. Our experiments demonstrate that the proposed TransformerG2G model outperforms conventional multi-step methods and our prior work (DynG2G) in terms of both link prediction accuracy and computational efficiency, especially on graphs with a high degree of novelty. Furthermore, the learned time-dependent attention weights across multiple graph snapshots reveal that the transformer develops an automatic adaptive time-stepping scheme. Importantly, by examining the attention weights, we can uncover temporal dependencies, identify influential elements, and gain insights into the complex interactions within the graph structure. For example, we identified a strong correlation between attention weights and node degree at various stages of the evolution of the graph topology.
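The architecture summarized in the abstract lends itself to a compact sketch. Below is a minimal, hedged PyTorch example, not the authors' released code: the class name, layer sizes, head counts, and the ELU-based variance activation are all illustrative assumptions. It shows the core idea only: a transformer encoder consumes a node's states over a context window of l+1 timestamps, and two projection heads output the mean and positive variance of a lower-dimensional Gaussian embedding at timestamp t.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TransformerG2GSketch(nn.Module):
        # Minimal sketch: a transformer encoder reads a node's states over
        # timestamps t-l, ..., t, and two projection layers emit the mean and
        # (positive) variance of a Gaussian node embedding at timestamp t.
        # Layer sizes, head counts, and the variance activation are assumptions.
        def __init__(self, in_dim, hidden_dim, embed_dim, n_heads=4, n_layers=2):
            super().__init__()
            self.input_proj = nn.Linear(in_dim, hidden_dim)
            layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.mu_head = nn.Linear(hidden_dim, embed_dim)     # Gaussian mean
            self.sigma_head = nn.Linear(hidden_dim, embed_dim)  # Gaussian variance

        def forward(self, x):
            # x: (num_nodes, l + 1, in_dim) -- per-node states over the context
            # window; the last position corresponds to the current timestamp t.
            h = self.encoder(self.input_proj(x))
            h_t = h[:, -1, :]  # intermediate representation at timestamp t
            mu = self.mu_head(h_t)
            # ELU + 1 keeps the variance strictly positive (an assumption here).
            sigma = F.elu(self.sigma_head(h_t)) + 1.0 + 1e-6
            return mu, sigma

    # Usage: embed 100 nodes with 64-d input features over a context of l = 4
    # past snapshots plus the current one.
    model = TransformerG2GSketch(in_dim=64, hidden_dim=128, embed_dim=32)
    mu, sigma = model(torch.randn(100, 5, 64))

In this sketch the attention weights inside the encoder play the role the abstract attributes to adaptive time stepping: each snapshot in the window can be weighted differently depending on its relevance to the current state.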
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Benchmarking / Learning Language: En Journal: Neural Netw Journal subject: NEUROLOGY Publication year: 2024 Document type: Article Country of affiliation: United States
