Learning dynamic graph representations through timespan view contrasts.
Xu, Yiming; Peng, Zhen; Shi, Bin; Hua, Xu; Dong, Bo.
Affiliation
  • Xu Y; School of Computer Science and Technology, Xi'an Jiaotong University, PR China. Electronic address: xym0924@stu.xjtu.edu.cn.
  • Peng Z; School of Computer Science and Technology, Xi'an Jiaotong University, PR China. Electronic address: zhenpeng@xjtu.edu.cn.
  • Shi B; School of Computer Science and Technology, Xi'an Jiaotong University, PR China. Electronic address: shibin@xjtu.edu.cn.
  • Hua X; School of Computer Science and Technology, Xi'an Jiaotong University, PR China. Electronic address: huaxu@stu.xjtu.edu.cn.
  • Dong B; School of Distance Education, Xi'an Jiaotong University, PR China. Electronic address: dong.bo@xjtu.edu.cn.
Neural Netw; 176: 106384, 2024 Aug.
Article in En | MEDLINE | ID: mdl-38754286
ABSTRACT
The rich information underlying graphs has inspired further investigation of unsupervised graph representation. Existing studies mainly depend on node features and topological properties within static graphs to create self-supervised signals, neglecting the temporal components carried by real-world graph data, such as edge timestamps. To overcome this limitation, this paper explores how to model temporal evolution on dynamic graphs elegantly. Specifically, we introduce a new inductive bias, temporal translation invariance, which captures the tendency of the same node to retain similar labels across different timespans. Based on this assumption, we develop a dynamic graph representation framework, CLDG, that encourages nodes to maintain locally consistent temporal translation invariance through contrastive learning over different timespans. Beyond the standard CLDG, which considers only explicit topological links, our further proposed CLDG++ additionally employs graph diffusion to uncover global contextual correlations between nodes and designs a multi-scale contrastive objective composed of local-local, local-global, and global-global contrasts to strengthen representation capability. Interestingly, by measuring the consistency between different timespans to form anomaly indicators, CLDG and CLDG++ integrate seamlessly with the task of spotting anomalies on dynamic graphs, which has broad applications in high-impact domains such as finance, cybersecurity, and healthcare. Experiments demonstrate that both CLDG and CLDG++ exhibit desirable performance on downstream tasks, including node classification and dynamic graph anomaly detection. Moreover, CLDG significantly reduces time and space complexity by exploiting temporal cues implicitly rather than relying on complicated sequence models. The code and data are available at https://github.com/yimingxu24/CLDG.
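To make the timespan-view idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation (the official code is at the repository linked above): timestamped edges are split into consecutive timespan views, node embeddings from two views produced by a shared encoder are contrasted with an InfoNCE-style loss (local-local contrast), and cross-timespan inconsistency is used as an anomaly indicator. The function names split_into_timespans, cldg_contrastive_loss, anomaly_score, and the encoder referenced in the usage comments are hypothetical.

import torch
import torch.nn.functional as F

def split_into_timespans(edge_index, edge_time, num_views):
    # Partition timestamped edges into `num_views` consecutive timespan views.
    # edge_index: LongTensor of shape (2, E); edge_time: Tensor of shape (E,).
    t_min, t_max = float(edge_time.min()), float(edge_time.max())
    bounds = torch.linspace(t_min, t_max, num_views + 1)
    views = []
    for i in range(num_views):
        lo, hi = bounds[i], bounds[i + 1]
        if i == num_views - 1:
            mask = (edge_time >= lo) & (edge_time <= hi)  # include the last timestamp
        else:
            mask = (edge_time >= lo) & (edge_time < hi)
        views.append(edge_index[:, mask])
    return views

def cldg_contrastive_loss(z1, z2, temperature=0.5):
    # InfoNCE-style local-local contrast: the same node observed in two timespan
    # views forms a positive pair; every other node acts as a negative.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature                        # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)    # positives on the diagonal
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))

def anomaly_score(z_views):
    # Anomaly indicator: nodes whose embeddings drift across timespan views
    # violate temporal translation invariance and receive higher scores.
    z = torch.stack([F.normalize(z, dim=1) for z in z_views])   # (V, N, D)
    centroid = z.mean(dim=0, keepdim=True)                      # (1, N, D)
    return (1.0 - F.cosine_similarity(z, centroid.expand_as(z), dim=2)).mean(dim=0)  # (N,)

# Example usage with a hypothetical shared GNN encoder `encoder(x, edge_index)`:
# views = split_into_timespans(edge_index, edge_time, num_views=4)
# zs = [encoder(x, view) for view in views]
# loss = cldg_contrastive_loss(zs[0], zs[1])
# scores = anomaly_score(zs)

In the full method, the CLDG++ objective would add local-global and global-global terms over diffusion-augmented views; these are omitted here for brevity.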

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Neural Networks, Computer Limit: Humans Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year of publication: 2024 Document type: Article

...