Backdoor attacks on unsupervised graph representation learning.
Neural Netw; 180: 106668, 2024 Aug 29.
Article in En | MEDLINE | ID: mdl-39243511
ABSTRACT
Unsupervised graph learning techniques have garnered increasing interest among researchers. These methods maximize mutual information to generate representations of nodes and graphs. We show that they are susceptible to backdoor attacks, wherein an adversary poisons a small portion of the unlabeled graph data (e.g., node features and graph structure) by introducing triggers into the graph. This tampering corrupts the learned representations and puts various downstream applications at risk. Previous backdoor attacks in supervised learning operate primarily on the label space and are therefore ill-suited to unlabeled graph data. To tackle this challenge, we introduce GRBA, a gradient-based first-order backdoor attack method. To the best of our knowledge, this is the first investigation of backdoor attacks in unsupervised graph learning. The method requires no prior knowledge of downstream tasks, as it operates directly on representations, and it is versatile: it applies to node classification, node clustering, and graph classification alike. We evaluate GRBA on state-of-the-art unsupervised learning models, and the experimental results substantiate its effectiveness and evasiveness in both node-level and graph-level tasks.
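To make the poisoning step concrete, below is a minimal, hypothetical sketch of the kind of trigger injection the abstract describes: attaching a small adversary-chosen subgraph with a fixed feature pattern to a sampled subset of nodes. This is not the paper's GRBA method (which additionally optimizes the trigger via first-order gradients); the function name inject_trigger and the parameters poison_rate and trigger_size are illustrative assumptions.

# Hypothetical illustration of graph trigger injection. NOT the paper's
# GRBA implementation; all names and parameters here are assumptions.
import numpy as np

def inject_trigger(adj, feats, poison_rate=0.05, trigger_size=3, seed=0):
    """Attach a small fully connected trigger subgraph with a fixed
    feature pattern to a random subset of nodes (the poisoned portion)."""
    rng = np.random.default_rng(seed)
    n, d = feats.shape
    victims = rng.choice(n, size=max(1, int(poison_rate * n)), replace=False)

    t = trigger_size
    trigger_feats = np.ones((t, d))      # assumed fixed feature pattern
    new_adj = np.zeros((n + t, n + t))
    new_adj[:n, :n] = adj                # keep the original graph
    new_adj[n:, n:] = 1 - np.eye(t)      # clique among the trigger nodes
    for v in victims:                    # wire the trigger to each victim
        new_adj[v, n:] = 1
        new_adj[n:, v] = 1
    new_feats = np.vstack([feats, trigger_feats])
    return new_adj, new_feats, victims

# Usage on a toy random undirected graph
adj = (np.random.rand(20, 20) > 0.8).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T
feats = np.random.rand(20, 8)
p_adj, p_feats, victims = inject_trigger(adj, feats)
print(p_adj.shape, p_feats.shape, victims)

Because the data are unlabeled, a sketch like this can only tamper with the feature and structure spaces; the gradient-based optimization that makes GRBA effective against representation-level objectives is described in the paper itself.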
Full text: 1
Database: MEDLINE
Language: En
Publication year: 2024
Document type: Article