When to Pre-Train Graph Neural Networks? From Data Generation Perspective!
Cao, Yuxuan; Xu, Jiarong; Yang, Carl; Wang, Jiaan; Zhang, Yunchao; Wang, Chunping; Chen, Lei; Yang, Yang.
Affiliations
  • Cao Y; Zhejiang University, Fudan University.
  • Xu J; Fudan University.
  • Yang C; Emory University.
  • Wang J; Soochow University.
  • Zhang Y; Zhejiang University.
  • Wang C; Finvolution Group.
  • Chen L; Finvolution Group.
  • Yang Y; Zhejiang University.
KDD; 2023: 142-153, 2023 Aug.
Article in English | MEDLINE | ID: mdl-38333106
ABSTRACT
In recent years, graph pre-training has gained significant attention, focusing on acquiring transferable knowledge from unlabeled graph data to improve downstream performance. Despite these efforts, negative transfer remains a major concern when applying graph pre-trained models to downstream tasks. Previous studies have addressed the questions of what to pre-train and how to pre-train by designing a variety of graph pre-training and fine-tuning strategies. However, there are cases where even the most advanced "pre-train and fine-tune" paradigms fail to yield distinct benefits. This paper introduces a generic framework, W2PGNN, to answer the crucial question of when to pre-train (i.e., in what situations graph pre-training can be advantageous) before performing costly pre-training or fine-tuning. We start from a new perspective that explores the complex generative mechanisms linking the pre-training data to the downstream data. In particular, W2PGNN first fits the pre-training data into graphon bases, where each element of a graphon basis (i.e., a graphon) identifies a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of the graphon bases give rise to a generator space, and the graphs generated from it form the solution space of downstream data that can benefit from pre-training. In this way, the feasibility of pre-training can be quantified as the probability that the downstream data are generated by some generator in the generator space. W2PGNN offers three broad applications: providing the application scope of graph pre-trained models, quantifying the feasibility of pre-training, and assisting in selecting pre-training data to enhance downstream performance. We provide a theoretically sound solution for the first application and extensive empirical justification for the latter two.
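The abstract's pipeline lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: each graph is approximated by a step-function graphon (here via a simple degree-sorted block-averaging estimator, chosen purely for illustration), a graphon basis is built from the pre-training graphs, and a downstream graph is scored by its distance to the nearest convex combination of basis elements, which stands in for the generation probability. All function names and estimator choices are hypothetical assumptions, not the paper's exact method.

```python
# Illustrative sketch of the W2PGNN idea (assumptions, not the paper's code):
# fit step-function graphons to pre-training graphs, then measure how close a
# downstream graphon lies to the convex hull (the "generator space") of that basis.
import numpy as np
from scipy.optimize import minimize

def estimate_graphon(adj: np.ndarray, k: int = 16) -> np.ndarray:
    """Crude step-function graphon estimate: sort nodes by degree, then
    average the adjacency matrix over a k x k grid of node blocks."""
    order = np.argsort(-adj.sum(axis=1))          # degree-sorted node order
    a = adj[np.ix_(order, order)].astype(float)
    blocks = np.array_split(np.arange(a.shape[0]), k)
    w = np.empty((k, k))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            w[i, j] = a[np.ix_(bi, bj)].mean()    # edge density per block
    return w

def feasibility(basis: list[np.ndarray], target: np.ndarray) -> float:
    """Distance from the target graphon to the convex hull of the basis.
    Smaller distance = higher estimated benefit from pre-training."""
    B = np.stack([w.ravel() for w in basis])      # (m, k*k) basis matrix
    t = target.ravel()
    m = len(basis)
    obj = lambda alpha: np.sum((alpha @ B - t) ** 2)   # squared L2 residual
    cons = {"type": "eq", "fun": lambda a: a.sum() - 1.0}  # convex weights
    res = minimize(obj, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m, constraints=cons)
    return float(np.sqrt(res.fun))

# Usage on toy random graphs (stand-ins for real pre-training/downstream data).
rng = np.random.default_rng(0)
def random_graph(n, p):
    upper = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return upper + upper.T                        # symmetric, zero diagonal

basis = [estimate_graphon(random_graph(60, p)) for p in (0.1, 0.3, 0.5)]
downstream = estimate_graphon(random_graph(60, 0.2))
print(f"distance to generator space: {feasibility(basis, downstream):.4f}")
```

In this toy run the downstream density (0.2) lies between basis densities, so the distance is small; a downstream graphon far outside the hull would score a large distance, signaling that pre-training on this data is unlikely to help.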
Full text: 1 Database: MEDLINE Language: English Journal: KDD Year: 2023 Document type: Article