Frameless Graph Knowledge Distillation.
IEEE Trans Neural Netw Learn Syst; PP. 2024 Sep 04.
Article
in English
| MEDLINE
| ID: mdl-39231057
ABSTRACT
Knowledge distillation (KD) has shown great potential for transferring knowledge from a complex teacher model to a simple student model, so that the heavy learning task can be accomplished efficiently without losing too much prediction accuracy. Recently, many attempts have been made to apply the KD mechanism to graph representation learning models such as graph neural networks (GNNs) in order to accelerate inference via the student model. However, many existing KD-based GNNs use a multilayer perceptron (MLP) as a universal approximator in the student model to imitate the teacher model, without considering the graph knowledge from the teacher. In this work, we provide a KD-based framework for multiscale GNNs, known as graph framelets, and prove that by adequately utilizing the multiscale graph knowledge provided by graph framelet decomposition, the student model is capable of adapting to both homophilic and heterophilic graphs and has the potential to alleviate the oversquashing issue with a simple yet effective graph surgery. Furthermore, we show, from both algebraic and geometric perspectives, how the graph knowledge supplied by the teacher is learned and digested by the student model. Comprehensive experiments show that our proposed model can match or even surpass the learning accuracy of the teacher model while maintaining high inference speed.
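As background for the KD mechanism the abstract builds on, the following is a minimal PyTorch sketch of a generic soft-label distillation objective (Hinton-style KD): the student matches the teacher's temperature-softened output distribution while also fitting the hard labels. This is not the authors' framelet-specific method; the function name `distillation_loss` and the `temperature`/`alpha` parameters are illustrative assumptions, and the T^2 scaling on the KL term is the standard correction that keeps its gradient magnitude comparable to the cross-entropy term.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Temperature-softened teacher distribution (soft targets).
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Student log-probabilities at the same temperature.
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term, scaled by T^2 so its gradients stay comparable to CE.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Hard-label cross-entropy on the ground-truth classes.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: 8 nodes, 5 classes; in practice the teacher logits would
# come from the frozen teacher GNN and the student from a lighter model.
student_logits = torch.randn(8, 5, requires_grad=True)
teacher_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```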
Full text:
1
Collection:
01-international
Database:
MEDLINE
Language:
En
Journal:
IEEE Trans Neural Netw Learn Syst
Year:
2024
Document type:
Article
Country of publication:
United States