Neural Netw; 178: 106467, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38908168

ABSTRACT

In recent years, research on transferable feature-level adversarial attacks has become a hot topic because such attacks can successfully fool unknown deep neural networks. However, several problems limit their transferability. Existing feature-disruption methods often focus on computing feature weights precisely while overlooking noise in the feature maps, which leads to non-critical features being disturbed. Meanwhile, geometric augmentation algorithms are used to enhance image diversity but compromise information integrity, which hampers models from capturing comprehensive features. Furthermore, current feature-perturbation methods ignore the density distribution of object-relevant key features, which are concentrated mainly in the salient region and sparse in the more widespread background region, and therefore achieve limited transferability. To tackle these challenges, this paper proposes a feature distribution-aware transferable adversarial attack method, called FDAA, which applies distinct strategies to different image regions. A novel Aggregated Feature Map Attack (AFMA) is presented to significantly denoise feature maps, and an input transformation strategy, called Smixup, is introduced to help feature-disruption algorithms capture comprehensive features. Extensive experiments demonstrate that the proposed scheme achieves better transferability, with an average success rate of 78.6% on adversarially trained models.
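To make the general idea of a feature-level attack concrete, the sketch below perturbs an input so that responses in an intermediate feature map are suppressed, using a crude channel-mean "aggregated" map as the feature weight. This is a minimal illustration of feature disruption in PyTorch, not the paper's AFMA/Smixup procedure; the layer choice, step sizes, and aggregation rule are assumptions made for the example.

# Hedged sketch of a feature-level adversarial attack in PyTorch.
# The channel-mean aggregation and hyperparameters are illustrative only.
import torch
import torchvision.models as models

def feature_level_attack(model, layer, x, eps=16/255, alpha=2/255, steps=10):
    feats = {}
    def hook(module, inputs, output):
        feats["map"] = output                      # capture intermediate feature map
    handle = layer.register_forward_hook(hook)

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)
        fmap = feats["map"]                        # (B, C, H, W) features
        weight = fmap.mean(dim=1, keepdim=True)    # crude aggregated map (assumption)
        loss = (weight.detach() * fmap).sum()      # response in salient regions
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                 # suppress salient features
            x_adv = x + (x_adv - x).clamp(-eps, eps)            # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
    handle.remove()
    return x_adv.detach()

if __name__ == "__main__":
    model = models.resnet50(weights=None).eval()   # stand-in surrogate model
    x = torch.rand(1, 3, 224, 224)
    x_adv = feature_level_attack(model, model.layer3, x)
    print((x_adv - x).abs().max())                 # perturbation stays within eps

In practice, transferable attacks of this kind run the perturbed image through a surrogate model and rely on the disrupted intermediate features also misleading unseen target models; the abstract's contribution is in how the feature map is denoised and how the input is transformed before this optimization.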
