Adversarial attacks and adversarial training for burn image segmentation based on deep learning.
Med Biol Eng Comput; 62(9): 2717-2735, 2024 Sep.
Article
En | MEDLINE | ID: mdl-38693327
ABSTRACT
Deep learning is widely applied to image classification and segmentation, yet adversarial attacks can degrade a model's segmentation and classification results. Medical images in particular often contain various forms of noise, owing to constraints such as shooting angle, ambient lighting, and the diversity of imaging devices. To address the impact of these physically meaningful disturbances on existing deep learning models for burn image segmentation, we simulate attack methods inspired by natural phenomena and propose an adversarial training approach designed specifically for burn image segmentation. The method is evaluated on our burn dataset. With defensive training under our approach, the segmentation accuracy on adversarial samples rises from 54% to 82.19%, a 1.97% improvement over conventional adversarial training methods, while substantially reducing training time. Ablation experiments validate the effectiveness of the individual losses, and we assess and compare training results for different adversarial samples using various metrics.
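The adversarial-training loop the abstract describes (craft perturbed inputs that increase the loss, then train on them alongside clean data) can be sketched on a toy model. The FGSM-style attack, the logistic classifier, and all names below are illustrative assumptions for exposition, not the paper's segmentation network or its nature-inspired attacks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(w, x, y):
    # Gradient of binary cross-entropy w.r.t. the *input* x (not the weights).
    p = sigmoid(x @ w)
    return (p - y)[:, None] * w[None, :]

def fgsm(w, x, y, eps):
    # Fast Gradient Sign Method: nudge each input in the direction that
    # increases the loss, bounded by eps in the L-infinity norm.
    return x + eps * np.sign(loss_grad_x(w, x, y))

# Toy data: two Gaussian blobs standing in for "clean" images.
x = np.vstack([rng.normal(-1, 0.3, (100, 2)), rng.normal(1, 0.3, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)
lr, eps = 0.5, 0.4
for _ in range(200):
    x_adv = fgsm(w, x, y, eps)      # craft adversarial samples
    x_mix = np.vstack([x, x_adv])   # adversarial training: clean + attacked
    y_mix = np.concatenate([y, y])
    p = sigmoid(x_mix @ w)
    grad_w = x_mix.T @ (p - y_mix) / len(y_mix)
    w -= lr * grad_w

# Accuracy on freshly attacked inputs after defensive training.
acc_adv = np.mean((sigmoid(fgsm(w, x, y, eps) @ w) > 0.5) == y)
print(f"adversarial accuracy: {acc_adv:.2f}")
```

The design choice mirrors the abstract's setup: the defense is simply to include attacked samples in each training step, so robustness to the simulated disturbance is learned rather than bolted on afterwards.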
Keywords
Full text: 1
Database: MEDLINE
Main subject: Image Processing, Computer-Assisted / Burns / Deep Learning
Limits: Humans
Language: En
Journal: Med Biol Eng Comput
Year: 2024
Document type: Article
Country of affiliation: China