Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-39159039

ABSTRACT

Object parts serve as crucial intermediate representations in various downstream tasks, but part-level representation learning has received less attention than other vision tasks. Previous research has established that Vision Transformers can learn instance-level attention without labels, extracting high-quality instance-level representations that boost downstream tasks. In this paper, we achieve unsupervised part-specific attention learning using a novel paradigm and further employ the part representations to improve part discovery performance. Specifically, paired images are generated from the same image with different geometric transformations, and multiple part representations are extracted from these paired images using a novel module, named PartFormer. The part representations from the paired images are then exchanged to improve invariance to geometric transformations. Subsequently, the part representations are aligned with the feature map extracted by a feature map encoder, achieving high similarity with the pixel representations of the corresponding part regions and low similarity in irrelevant regions. Finally, geometric and semantic constraints are applied to the part representations through the intermediate results of the alignment for part-specific attention learning, encouraging the PartFormer to focus locally and the part representations to explicitly encode the information of the corresponding parts. Moreover, the aligned part representations can further serve as a series of reliable detectors in the testing phase, predicting pixel masks for part discovery. Extensive experiments on four widely used datasets demonstrate that the proposed method achieves competitive performance and robustness thanks to its part-specific attention. The code will be released upon paper acceptance.
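The detection step described above, where aligned part representations score every pixel of the feature map, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (PartFormer's code is unreleased); the random arrays stand in for the encoder's feature map and the PartFormer's part vectors, and cosine similarity is used as the alignment score.

```python
import numpy as np

def part_attention_maps(part_reps, feature_map):
    """Cosine similarity between K part vectors and an H x W x C feature map,
    giving one attention map per part over the pixels."""
    h, w, c = feature_map.shape
    pixels = feature_map.reshape(-1, c)
    pixels = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    parts = part_reps / np.linalg.norm(part_reps, axis=1, keepdims=True)
    sim = parts @ pixels.T                        # (K, H*W) cosine similarities
    return sim.reshape(-1, h, w)

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((8, 8, 16))     # stand-in for encoder output
part_reps = rng.standard_normal((4, 16))          # stand-in for PartFormer output
maps = part_attention_maps(part_reps, feature_map)
masks = maps.argmax(axis=0)                       # per-pixel part assignment
```

Taking the argmax over the part axis turns the per-part similarity maps into the pixel masks used for part discovery at test time.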

2.
Article in English | MEDLINE | ID: mdl-38870002

ABSTRACT

As a pivotal subfield of time series forecasting, runoff forecasting plays a crucial role in water resource management and scheduling. Recent advances in applying artificial neural networks (ANNs) and attention mechanisms have markedly improved the accuracy of runoff forecasting models. This article introduces a hybrid model, ResTCN-DAM, which synergizes the strengths of deep residual networks (ResNet), temporal convolutional networks (TCNs), and dual attention mechanisms (DAMs). ResTCN-DAM is designed to leverage the complementary attributes of these three modules: the TCN processes time series data in parallel, and, combined with a modified ResNet, multiple TCN layers can be densely stacked to capture more hidden information in the temporal dimension; the DAM captures interdependencies in both the temporal and feature dimensions, accentuating relevant time steps and features while down-weighting less significant ones at minimal computational cost. Furthermore, the snapshot ensemble method obtains the effect of training multiple models in a single training run, which improves the accuracy and robustness of the forecasts. The deep integration and collaboration of these modules comprehensively enhance the model's forecasting capability. Ablation studies validate the efficacy of each module, and multiple sets of comparative experiments show that ResTCN-DAM performs consistently well across varying lead times. We also visualize heatmaps of the model's weights to enhance interpretability. Compared with prevailing neural-network-based runoff forecasting models, ResTCN-DAM exhibits state-of-the-art accuracy, temporal robustness, and interpretability.
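The two building blocks named above can be sketched in a few lines of NumPy. This is an illustration of the general techniques, not the authors' model: the causal dilated convolution is the standard TCN primitive (output at step t sees only inputs at steps ≤ t), and the dual attention here scores time steps and features by mean absolute activation as a stand-in for the learned scoring networks in a real DAM.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal dilated convolution: y[t] depends only on x[<= t], which is
    what lets a TCN process the whole series in parallel without leaking the
    future into the past."""
    pad = (len(w) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[pad + t - j * dilation] for j in range(len(w)))
        for t in range(len(x))
    ])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dual_attention(X):
    """X: (T, F) series. Score every time step and every feature, then
    reweight the input along both axes (temporal + feature attention)."""
    t_weights = softmax(np.abs(X).mean(axis=1))   # temporal weights, sum to 1
    f_weights = softmax(np.abs(X).mean(axis=0))   # feature weights, sum to 1
    return X * t_weights[:, None] * f_weights[None, :]

x = np.array([1.0, 0.0, 0.0, 0.0, 2.0, 0.0])
y = causal_dilated_conv(x, w=np.array([0.5, 0.25]), dilation=2)
```

With kernel [0.5, 0.25] and dilation 2, each output mixes the current step with the step two positions earlier, so stacking layers with growing dilation widens the receptive field exponentially, which is the usual motivation for stacking TCN layers with residual connections.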

3.
Neural Netw ; 172: 106097, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38286098

ABSTRACT

Graph Neural Networks (GNNs) are often viewed as black boxes due to their lack of transparency, which hinders their application in critical fields. Many explanation methods have been proposed to address the interpretability issue of GNNs. These methods reveal explanatory information about graphs from different perspectives; however, that information may also expose GNN models to attack. In this work, we explore this problem from the explanatory-subgraph perspective. To this end, we use a powerful GNN explanation method, SubgraphX, deployed locally to obtain explanatory subgraphs from given graphs, and we propose evasion and backdoor attacks based on this local explainer. In the evasion attack, the attacker obtains the explanatory subgraphs of test graphs from the local explainer and replaces them with an explanatory subgraph of another label, causing the target model to misclassify the test graphs. In the backdoor attack, the attacker employs the local explainer to select an explanatory trigger and to locate suitable injection positions. We validate the effectiveness of the proposed attacks on state-of-the-art GNN models and several datasets. The results also show that the proposed backdoor attack is more efficient, adaptable, and concealed than previous backdoor attacks.
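The subgraph-replacement step of the evasion attack can be sketched on plain adjacency matrices. This is a hypothetical illustration, not the paper's code: `evasion_by_subgraph_swap` and the toy 5-node graph are inventions for this sketch, where the node set flagged by the explainer has its internal edges wiped and replaced by the internal edges of a donor explanatory subgraph from a graph of another label, while all edges outside that region are left untouched.

```python
import numpy as np

def evasion_by_subgraph_swap(adj, expl_nodes, donor_sub):
    """Overwrite the edges inside the explanatory subgraph (the node set the
    explainer flagged as decisive) with the internal edges of a donor
    explanatory subgraph taken from a graph of a different label."""
    out = adj.copy()
    idx = np.asarray(expl_nodes)
    k = min(len(idx), donor_sub.shape[0])
    out[np.ix_(idx, idx)] = 0                          # wipe the old explanation
    out[np.ix_(idx[:k], idx[:k])] = donor_sub[:k, :k]  # splice in the donor
    return out

# a 5-node graph whose (assumed) explanation is the triangle {0, 1, 2}
adj = np.zeros((5, 5), dtype=int)
for a, b in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]:
    adj[a, b] = adj[b, a] = 1

donor = np.array([[0, 1, 0],                           # a path 0-1-2: the donor
                  [1, 0, 1],                           # explanation, no triangle
                  [0, 1, 0]])
attacked = evasion_by_subgraph_swap(adj, [0, 1, 2], donor)
```

After the swap, the classifier sees a graph whose decisive region now matches the donor label's explanation, which is what drives the misclassification; the rest of the graph is unchanged, keeping the perturbation localized.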


Subjects
Neural Networks, Computer