Contrastive learning of graphs under label noise.
Li, Xianxian; Li, Qiyu; Li, De; Qian, Haodong; Wang, Jinyan.
Affiliation
  • Li X; Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China; Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China; School of Computer Science and Engineering, Guangxi Normal University, Guilin, 541004, China.
  • Li Q; School of Computer Science and Engineering, Guangxi Normal University, Guilin, 541004, China.
  • Li D; School of Computer Science and Engineering, Guangxi Normal University, Guilin, 541004, China.
  • Qian H; School of Computer Science and Engineering, Guangxi Normal University, Guilin, 541004, China.
  • Wang J; Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China; Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China; School of Computer Science and Engineering, Guangxi Normal University, Guilin, 541004, China.
Neural Netw ; 172: 106113, 2024 Apr.
Article in En | MEDLINE | ID: mdl-38232430
ABSTRACT
In the domain of graph-structured data learning, semi-supervised node classification is a critical task, relying mainly on information from unlabeled nodes and a small fraction of labeled nodes for training. However, real-world graph-structured data often suffer from label noise, which significantly undermines the performance of Graph Neural Networks (GNNs). The problem becomes increasingly severe when labels are scarce. To tackle sparse and noisy labels, we propose a novel approach, the Contrastive Robust Graph Neural Network (CR-GNN). First, considering label sparsity and noise, we employ an unsupervised contrastive loss and further incorporate the homophily of the graph structure, introducing a neighbor contrastive loss. Moreover, data augmentation is typically used to construct positive and negative samples in contrastive learning, which may yield inconsistent prediction outcomes. Based on this, we propose a dynamic cross-entropy loss, which selects nodes with consistent predictions as reliable nodes for the cross-entropy loss and helps mitigate overfitting to label noise. Finally, we propose cross-space consistency to narrow the semantic gap between the contrastive and classification spaces. Extensive experiments on multiple publicly available datasets demonstrate that CR-GNN notably outperforms existing methods in resisting label noise.
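The reliable-node selection described for the dynamic cross-entropy loss can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the use of NumPy, and the agreement-by-argmax criterion are assumptions made for illustration: nodes whose two augmented views predict the same class are kept, and cross-entropy is computed only over them.

```python
import numpy as np

def _softmax(z):
    # Row-wise softmax, shifted for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def dynamic_ce_loss(logits_view1, logits_view2, labels):
    """Hypothetical sketch of a 'dynamic cross-entropy' loss:
    only nodes whose two augmented views agree on the predicted
    class are treated as reliable and contribute to the loss."""
    p1 = _softmax(logits_view1)
    p2 = _softmax(logits_view2)
    # Reliable-node mask: predictions consistent across both views.
    consistent = p1.argmax(axis=1) == p2.argmax(axis=1)
    if not consistent.any():
        return 0.0  # no reliable nodes this step
    # Cross-entropy on the reliable subset only.
    probs = p1[consistent, labels[consistent]]
    return float(-np.log(probs + 1e-12).mean())
```

For example, a node whose two augmented views disagree on the argmax class is masked out and contributes nothing to the loss, which is what limits overfitting to noisy labels.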
Subject(s)
Key words

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Neural Networks, Computer / Learning Type of study: Prognostic_studies Language: En Journal: Neural Netw Journal subject: NEUROLOGIA Year: 2024 Document type: Article