GLH: From Global to Local Gradient Attacks with High-Frequency Momentum Guidance for Object Detection.
Entropy (Basel); 25(3); 2023 Mar 06.
Article
En
| MEDLINE
| ID: mdl-36981349
Adversarial attacks are crucial to improving the robustness of deep learning models; they help improve the interpretability of deep learning and also increase the security of models in real-world applications. However, existing attack algorithms mainly focus on image classification tasks, and research targeting object detection is lacking. Adversarial attacks against image classification are global and do not focus on the intrinsic features of the image: they generate perturbations that cover the whole image, and every added perturbation is uniform and undifferentiated. In contrast, we propose a global-to-local adversarial attack on object detection that destroys important perceptual features of the object. More specifically, we differentially extract gradient features and use them to proportion the added perturbation when generating adversarial samples, since the magnitude of the gradient is highly correlated with the model's points of interest. In addition, we reduce unnecessary perturbation by dynamically suppressing excessive perturbation, which yields high-quality adversarial samples. After that, we improve the effectiveness of the attack by using the high-frequency feature gradient as momentum to guide the next gradient step. Numerous experiments and evaluations demonstrate the effectiveness and superior performance of our from-Global-to-Local gradient attack with High-frequency momentum guidance (GLH), which is more effective than previous attacks. The generated adversarial samples also show excellent black-box attack ability.
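The abstract only outlines the mechanism. The sketch below illustrates, under stated assumptions, what a gradient-magnitude-weighted, momentum-guided attack of this general kind can look like in PyTorch; the function name, hyper-parameters, masking rule, and the use of plain gradient momentum (in place of the paper's high-frequency feature guidance) are all illustrative assumptions, not the authors' implementation.

```python
import torch

def glh_style_attack(model, image, loss_fn, steps=10, eps=8/255, alpha=2/255,
                     top_frac=0.3, momentum_decay=1.0):
    """Illustrative sketch only: gradient-magnitude-weighted local perturbation
    with gradient momentum. Not the GLH paper's actual algorithm."""
    adv = image.clone().detach()
    grad_momentum = torch.zeros_like(adv)

    for _ in range(steps):
        adv.requires_grad_(True)
        # Assumed interface: loss_fn maps detector outputs to a scalar loss.
        loss = loss_fn(model(adv))
        grad = torch.autograd.grad(loss, adv)[0]

        # Local focus: perturb only pixels whose gradient magnitude falls in the
        # top fraction, approximating "important perceptual features".
        mag = grad.abs()
        thresh = torch.quantile(mag.flatten(), 1.0 - top_frac)
        mask = (mag >= thresh).float()

        # Momentum accumulation to guide the next step (plain gradient momentum
        # here; the paper uses high-frequency feature gradients as guidance).
        grad_momentum = momentum_decay * grad_momentum + grad / (mag.mean() + 1e-12)

        # Signed step on the masked region, then clip to the epsilon ball and
        # valid pixel range to suppress excessive perturbation.
        adv = adv.detach() + alpha * mask * grad_momentum.sign()
        adv = image + torch.clamp(adv - image, -eps, eps)
        adv = adv.clamp(0.0, 1.0)

    return adv.detach()
```

Masking by gradient magnitude is one straightforward way to localize the perturbation to regions the detector attends to; the paper's actual gradient-feature extraction and its dynamic suppression rule may differ from this sketch.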
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Study type: Diagnostic_studies / Guideline
Language: En
Journal: Entropy (Basel)
Year: 2023
Document type: Article
Affiliation country: China