Results 1 - 3 of 3
1.
IEEE Trans Image Process; 33: 3399-3412, 2024.
Article in English | MEDLINE | ID: mdl-38787665

ABSTRACT

Existing multi-view graph learning methods often rely on consistent information for similar nodes within and across views; however, they may lack adaptability when facing diversity challenges arising from noise, varied views, and complex data distributions. These challenges can be mainly categorized into: 1) view-specific diversity within each view, caused by noise and incomplete information; 2) cross-view diversity across views, caused by varied latent semantics; 3) cross-group diversity across groups, due to differences in data distributions. To this end, we propose a universal multi-view consensus graph learning framework that considers both original and generative graphs to balance consistency and diversity. Specifically, the proposed framework can be divided into four modules: i) a multi-channel graph module that extracts principal node information, ensuring view-specific and cross-view consistency while mitigating view-specific and cross-view diversity within the original graphs; ii) a generative module that produces cleaner and more realistic graphs, enriching graph structure while maintaining view-specific consistency and suppressing view-specific diversity; iii) a contrastive module that collaborates on generative semantics to facilitate cross-view consistency and reduce cross-view diversity within the generative graphs; iv) a consensus graph module that consolidates the learning of a consensual graph, pursuing cross-group consistency while accounting for cross-group diversity. Extensive experimental results on real-world datasets demonstrate the framework's effectiveness and superiority.
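To make the consensus idea more concrete, the following is a minimal NumPy sketch of fusing view-specific similarity graphs into a single consensus graph, where views that agree with the current consensus receive larger weights. The knn_graph and consensus_graph helpers and the inverse-distance weighting are illustrative assumptions for exposition only; they are not the multi-channel, generative, contrastive, or consensus modules proposed in the paper.

import numpy as np

def knn_graph(X, k=10):
    """Symmetric k-nearest-neighbour similarity graph from features X (n x d)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T          # pairwise squared distances
    S = np.exp(-d2 / (np.median(d2) + 1e-12))                # Gaussian similarities
    np.fill_diagonal(S, 0.0)
    idx = np.argsort(-S, axis=1)[:, :k]                      # keep k strongest edges per node
    A = np.zeros_like(S)
    rows = np.arange(S.shape[0])[:, None]
    A[rows, idx] = S[rows, idx]
    return np.maximum(A, A.T)                                # symmetrise

def consensus_graph(views, k=10, n_iter=20):
    """Illustrative fusion only (not the paper's modules): views close to the current
    consensus get larger weights (consistency), deviating views are down-weighted (diversity)."""
    graphs = [knn_graph(X, k) for X in views]
    w = np.full(len(graphs), 1.0 / len(graphs))
    C = np.mean(graphs, axis=0)                              # initial consensus
    for _ in range(n_iter):
        dist = np.array([np.linalg.norm(A - C) for A in graphs]) + 1e-12
        w = (1.0 / dist) / np.sum(1.0 / dist)                # inverse-distance view weights
        C = sum(wi * Ai for wi, Ai in zip(w, graphs))        # re-fuse
    return C, w

# Toy usage: two random 5-dimensional views of the same 100 samples.
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 5)), rng.normal(size=(100, 5))]
C, w = consensus_graph(views)
print(C.shape, w)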

2.
Article in English | MEDLINE | ID: mdl-37847634

ABSTRACT

The graph convolutional network (GCN) has gained widespread attention in semisupervised classification tasks. Recent studies show that GCN-based methods have achieved decent performance in numerous fields. However, most existing methods adopt a fixed graph that cannot dynamically capture both local and global relationships. This is because hidden yet important relationships may not be directly exhibited in the fixed structure, degrading the performance of semisupervised classification tasks. Moreover, the missing and noisy data in the fixed graph may result in incorrect connections, thereby disturbing the representation learning process. To cope with these issues, this article proposes a learnable GCN-based framework that aims to obtain optimal graph structures by jointly integrating graph learning and feature propagation in a unified network. In addition, to capture optimal graph representations, this article designs dual GCN-based meta-channels that simultaneously explore local and global relations during training. To minimize the interference of noisy data, a semisupervised graph information bottleneck (SGIB) is introduced to conduct graph structural learning (GSL) and acquire minimal sufficient representations. Concretely, SGIB aims to maximize the mutual information of both the same and different meta-channels by designing constraints between them, thereby improving node classification performance in downstream tasks. Extensive experimental results on real-world datasets demonstrate the robustness of the proposed model, which outperforms state-of-the-art methods built on fixed-structure graphs.
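For background, the sketch below shows the standard symmetric-normalized GCN propagation rule that such methods build on, plus a toy dual-channel forward pass in which one channel propagates over a "local" graph and the other over a "global" graph, with their outputs averaged. The two_channel_forward helper and the simple averaging are hypothetical illustrations of the dual meta-channel idea; they are not the SGIB model or its mutual-information objective.

import numpy as np

def normalize_adjacency(A):
    """Symmetric GCN normalisation: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def gcn_layer(A_norm, H, W):
    """One GCN layer: propagate features over the graph, then apply ReLU."""
    return np.maximum(A_norm @ H @ W, 0.0)

def two_channel_forward(A_local, A_global, X, params):
    """Hypothetical dual-channel pass (not the SGIB model): one channel uses a
    'local' graph (e.g. kNN), the other a 'global' graph; outputs are averaged."""
    Z_local = gcn_layer(normalize_adjacency(A_local), X, params["W_local"])
    Z_global = gcn_layer(normalize_adjacency(A_global), X, params["W_global"])
    return 0.5 * (Z_local + Z_global)

# Toy usage: a random symmetric graph on 6 nodes with 4-dimensional features.
rng = np.random.default_rng(0)
A = (rng.random((6, 6)) > 0.6).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)
X = rng.normal(size=(6, 4))
params = {"W_local": rng.normal(size=(4, 2)), "W_global": rng.normal(size=(4, 2))}
print(two_channel_forward(A, A, X, params).shape)            # (6, 2)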

3.
IEEE Trans Pattern Anal Mach Intell; 44(9): 5042-5055, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34018930

ABSTRACT

Sparsity-constrained optimization problems are common in machine learning, for example in sparse coding, low-rank minimization, and compressive sensing. However, most previous studies focused on constructing various hand-crafted sparse regularizers, while little work has been devoted to learning adaptive sparse regularizers from the input data for specific tasks. In this paper, we propose a deep sparse regularizer learning model that learns data-driven sparse regularizers adaptively. Via the proximal gradient algorithm, we find that learning the sparse regularizer is equivalent to learning a parameterized activation function. This motivates us to learn sparse regularizers within the deep learning framework. Therefore, we build a neural network composed of multiple blocks, each differentiable and reusable. All blocks contain learnable piecewise-linear activation functions, which correspond to the sparse regularizer to be learned. Furthermore, the proposed model is trained with backpropagation, and all of its parameters are learned end-to-end. We apply our framework to multi-view clustering and semi-supervised classification tasks to learn a latent compact representation. Experimental results demonstrate the superiority of the proposed framework over state-of-the-art multi-view learning models.
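To illustrate the proximal-gradient view underlying this abstract, here is a small NumPy sketch: one ISTA-style step performs a gradient step on the data-fitting term and then applies an activation that plays the role of the regularizer's proximal operator. soft_threshold corresponds to the hand-crafted l1 regularizer, while piecewise_linear stands in for a learnable piecewise-linear activation whose values would be trained by backpropagation. All function names and the toy recovery loop are illustrative assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: the classical hand-crafted sparse regularizer."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def piecewise_linear(x, knots, values):
    """Stand-in for a learnable piecewise-linear activation: interpolates 'values' at
    increasing 'knots'. In a trainable model the 'values' would be parameters learned by
    backpropagation, playing the role of the prox of a learned regularizer."""
    return np.interp(x, knots, values)

def ista_step(x, A, b, step, activation):
    """One proximal-gradient (ISTA-style) step for 0.5*||Ax - b||^2 + R(x):
    gradient step on the data term, then the fixed or learned prox/activation."""
    grad = A.T @ (A @ x - b)
    return activation(x - step * grad)

# Toy usage: recover a sparse vector with the fixed l1 prox (illustration only).
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))
x_true = np.zeros(50)
x_true[rng.choice(50, size=5, replace=False)] = rng.normal(size=5)
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2                       # 1 / Lipschitz constant
x = np.zeros(50)
for _ in range(200):
    x = ista_step(x, A, b, step, lambda z: soft_threshold(z, 0.01 * step))
print("reconstruction error:", np.linalg.norm(x - x_true))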
