Self-Paced Co-Training of Graph Neural Networks for Semi-Supervised Node Classification.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9234-9247, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35312623
ABSTRACT
Graph neural networks (GNNs) have demonstrated great success in many graph data-based applications. Their impressive performance typically relies on the availability of a sufficient amount of labeled data for model training. In practice, however, obtaining a large number of annotations is prohibitively labor-intensive or even impossible. Co-training is a popular semi-supervised learning (SSL) paradigm that trains multiple models on a common training set while augmenting the limited labeled data used for training each model with pseudolabeled data generated from the predictions of the other models. Most existing co-training methods do not control the quality of the pseudolabeled data they use. Consequently, inaccurate pseudolabels generated by immature models in the early stage of training are likely to cause noticeable errors when used to augment the training data for other models. To address this issue, we propose a self-paced co-training framework for GNNs (SPC-GNN) for semi-supervised node classification. The framework trains multiple GNNs, with the same or different structures, on different representations of the same training data. Each GNN carries out SSL using both the originally available labeled data and the pseudolabeled data generated by the other GNNs. To control pseudolabel quality, a self-paced label augmentation strategy is designed so that pseudolabels generated at higher confidence levels are utilized earlier during training, mitigating the negative impact of inaccurate pseudolabels on training data augmentation and, in turn, on the subsequent training process. Finally, each trained GNN is evaluated on a validation set, and the best-performing one is chosen as the output. To improve training effectiveness, we devise a pretraining stage followed by a two-step optimization scheme to train the GNNs. Experimental results on the node classification task demonstrate that the proposed framework achieves significant improvement over state-of-the-art SSL methods.
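The self-paced pseudolabel selection described above can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the authors' implementation: the dense-adjacency two-layer GCN, the linear confidence-threshold schedule (tau_start to tau_end), and the averaging of the other models' predictions are assumptions made here for illustration; the paper's actual GNN architectures, confidence schedule, validation-based model selection, and pretraining-plus-two-step optimization scheme are not reproduced.

import torch
import torch.nn.functional as F

class DenseGCN(torch.nn.Module):
    """Minimal two-layer GCN over a dense normalized adjacency matrix
    (an illustrative stand-in for the paper's GNNs)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w1(x))
        return self.w2(a_hat @ h)  # per-node class logits

def self_paced_co_train(models, views, a_hat, y, labeled_mask,
                        epochs=200, tau_start=0.95, tau_end=0.6):
    """Co-train models[i] on feature view views[i]. Each model's training
    set is augmented with pseudolabels taken from the *other* models'
    predictions, accepted only when their softmax confidence exceeds a
    threshold that decays from tau_start to tau_end, so high-confidence
    pseudolabels are used earlier (self-paced schedule).
    NOTE: the schedule and prediction averaging are illustrative
    assumptions, not the paper's exact method."""
    opts = [torch.optim.Adam(m.parameters(), lr=0.01) for m in models]
    for epoch in range(epochs):
        tau = tau_start + (tau_end - tau_start) * epoch / (epochs - 1)
        with torch.no_grad():
            probs = [F.softmax(m(a_hat, v), dim=1)
                     for m, v in zip(models, views)]
        for i, (m, opt, v) in enumerate(zip(models, opts, views)):
            # Pseudolabels for model i come from the other models' averaged
            # predictions; only confident, currently unlabeled nodes are used.
            others = torch.stack(
                [p for j, p in enumerate(probs) if j != i]).mean(0)
            conf, pseudo = others.max(dim=1)
            aug_mask = (conf >= tau) & ~labeled_mask  # self-paced selection
            target = y.clone()
            target[aug_mask] = pseudo[aug_mask]
            m.train()
            opt.zero_grad()
            logits = m(a_hat, v)
            train_mask = labeled_mask | aug_mask
            loss = F.cross_entropy(logits[train_mask], target[train_mask])
            loss.backward()
            opt.step()
    return models

Because the threshold decays over epochs, only very confident pseudolabels augment each model's training set early on, which is the self-paced mechanism the abstract credits with mitigating the impact of inaccurate pseudolabels from immature models.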

Full text: 1 | Database: MEDLINE | Study type: Prognostic_studies | Language: English | Journal: IEEE Trans Neural Netw Learn Syst | Year: 2023 | Document type: Article
