1.
IEEE Trans Neural Netw Learn Syst; 34(8): 5144-5155, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34699371

ABSTRACT

Graph convolutional networks have attracted wide attention for their expressiveness and empirical success on graph-structured data. However, deeper graph convolutional networks with access to more information can often perform worse because their low-order Chebyshev polynomial approximation cannot learn adaptive and structure-aware representations. To solve this problem, many high-order graph convolution schemes have been proposed. In this article, we study why high-order schemes are able to learn structure-aware representations. We first prove that these high-order schemes are generalizations of the Weisfeiler-Lehman (WL) algorithm and conduct spectral analysis on these schemes to show that they correspond to polynomial filters in the graph spectral domain. Based on our analysis, we point out two limitations of existing high-order models: 1) they lack mechanisms to generate individual feature combinations for each node and 2) they fail to properly model the relationship between information from different distances. To enable a node-specific combination scheme and capture this inter-distance relationship for each node efficiently, we propose a new adaptive feature combination method inspired by the squeeze-and-excitation module that can recalibrate features from different distances by explicitly modeling interdependencies between them. Theoretical analysis shows that models with our new approach can effectively learn structure-aware representations, and extensive experimental results show that our new approach achieves significant performance gains compared with other high-order schemes.
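The abstract describes recalibrating features aggregated from different hop distances with a squeeze-and-excitation-style gate. The sketch below is a minimal PyTorch illustration of that idea only; the class name `HopwiseSERecalibration`, the tensor layout, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HopwiseSERecalibration(nn.Module):
    """Hypothetical sketch: squeeze-and-excitation-style gating over features
    aggregated at different propagation distances (hop orders)."""
    def __init__(self, num_hops: int, reduction: int = 2):
        super().__init__()
        hidden = max(num_hops // reduction, 1)
        # Small bottleneck MLP producing one gate per hop, per node.
        self.gate = nn.Sequential(
            nn.Linear(num_hops, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_hops),
            nn.Sigmoid(),
        )

    def forward(self, hop_feats: torch.Tensor) -> torch.Tensor:
        # hop_feats: [num_nodes, num_hops, feat_dim], one feature vector per
        # node for each distance (e.g., A^k X W for k = 1..num_hops).
        squeeze = hop_feats.mean(dim=-1)              # [num_nodes, num_hops]
        weights = self.gate(squeeze)                  # node-specific hop gates
        recalibrated = hop_feats * weights.unsqueeze(-1)
        return recalibrated.sum(dim=1)                # combine hops per node
```

Because the gates are computed per node, each node gets its own combination of distances, which is the node-specific behavior the abstract argues existing high-order schemes lack.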

2.
Article in English | MEDLINE | ID: mdl-36001511

ABSTRACT

Graph convolutional networks (GCNs) have shown success in many graph-based applications because they can combine node features and graph topology to obtain expressive embeddings. While there exist numerous GCN variants, a typical graph convolution layer uses neighborhood aggregation and fully-connected (FC) layers to extract topological and node-wise features, respectively. However, as the receptive field of a GCN grows, the tight coupling between the numbers of neighborhood-aggregation and FC layers increases the risk of over-fitting. Moreover, the FC layer between two successive aggregation operations mixes features across channels, introducing noise and making the node features in each channel harder to converge. In this article, we explore graph convolution without FC layers. We propose scale graph convolution, a new graph convolution that uses a channel-wise scale transformation to extract node features. We provide empirical evidence that our new method has lower over-fitting risk and needs fewer layers to converge. We show from both theoretical and empirical perspectives that models with scale graph convolution have lower computational and memory costs than traditional GCN models. Experimental results on various datasets show that our method can achieve state-of-the-art results in a cost-effective fashion.
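The abstract's core idea is to replace the per-layer FC transform with a channel-wise scale, so channels are never mixed between aggregations. The following is a minimal sketch of that idea, assuming a standard normalized-adjacency aggregation; the class name `ScaleGraphConvSketch` and the added bias term are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class ScaleGraphConvSketch(nn.Module):
    """Hypothetical sketch: graph convolution whose per-layer transform is a
    learnable channel-wise scale (plus bias) instead of an FC layer."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, num_channels]; adj_norm: normalized adjacency matrix.
        x = torch.sparse.mm(adj_norm, x) if adj_norm.is_sparse else adj_norm @ x
        # Channel-wise scaling keeps every channel independent, so successive
        # aggregations never mix (or "pollute") features across channels.
        return x * self.scale + self.bias
```

Compared with a d-by-d FC weight, the scale layer has only d parameters per layer, which is one way to read the abstract's claim of lower computational and memory cost.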
