Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging.
Joel, Marina Z; Avesta, Arman; Yang, Daniel X; Zhou, Jian-Ge; Omuro, Antonio; Herbst, Roy S; Krumholz, Harlan M; Aneja, Sanjay.
Affiliation
  • Joel MZ; Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA.
  • Avesta A; Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA.
  • Yang DX; Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA.
  • Zhou JG; Department of Chemistry, Physics and Atmospheric Science, Jackson State University, Jackson, MS 39217, USA.
  • Omuro A; Department of Neurology, Yale School of Medicine, New Haven, CT 06510, USA.
  • Herbst RS; Department of Medicine, Yale School of Medicine, New Haven, CT 06510, USA.
  • Krumholz HM; Department of Medicine, Yale School of Medicine, New Haven, CT 06510, USA.
  • Aneja S; Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA.
Cancers (Basel); 15(5), 2023 Mar 01.
Article in En | MEDLINE | ID: mdl-36900339
ABSTRACT
Deep learning (DL) models have demonstrated state-of-the-art performance in the classification of diagnostic imaging in oncology. However, DL models for medical images can be compromised by adversarial images, in which the pixel values of input images are manipulated to deceive the DL model. To address this limitation, our study investigates the detectability of adversarial images in oncology using multiple detection schemes. Experiments were conducted on thoracic computed tomography (CT) scans, mammography, and brain magnetic resonance imaging (MRI). For each dataset, we trained a convolutional neural network to classify the presence or absence of malignancy. We trained five DL and machine learning (ML)-based detection models and tested their performance in detecting adversarial images. Adversarial images generated using projected gradient descent (PGD) with a perturbation size of 0.004 were detected by the ResNet detection model with an accuracy of 100% for CT, 100% for mammography, and 90.0% for MRI. Overall, adversarial images were detected with high accuracy in settings where the adversarial perturbation exceeded set thresholds. Adversarial detection should be considered alongside adversarial training as a defense technique to protect DL models for cancer imaging classification from the threat of adversarial images.
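To illustrate the attack the abstract describes, below is a minimal sketch of PGD adversarial image generation against a binary malignancy classifier, written in PyTorch. This is an assumption-laden illustration, not the authors' code: the function name pgd_attack, the step size, and the number of iterations are hypothetical, and only the L-infinity perturbation bound eps=0.004 comes from the abstract. Inputs are assumed to be scaled to [0, 1].

```python
# Hedged sketch of projected gradient descent (PGD) for crafting
# adversarial medical images. Assumes `classifier` is a trained
# PyTorch model (e.g., a CNN for malignancy classification) and
# `images` is a batch of tensors scaled to [0, 1].
# eps=0.004 matches the perturbation size in the abstract;
# step_size and n_steps are illustrative choices.
import torch
import torch.nn.functional as F

def pgd_attack(classifier, images, labels, eps=0.004,
               step_size=0.001, n_steps=10):
    x_orig = images.detach()
    x_adv = x_orig.clone()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss in the sign-gradient direction.
            x_adv = x_adv + step_size * grad.sign()
            # Project back into the L-infinity ball of radius eps.
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
            # Keep pixel values in the valid range.
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

A detection model of the kind the study evaluates could then be trained as an ordinary binary classifier (e.g., a ResNet) on a dataset of clean images labeled 0 and pgd_attack outputs labeled 1.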
Full text: 1 Collection: 01-internacional Database: MEDLINE Type of study: Diagnostic_studies Language: En Journal: Cancers (Basel) Year: 2023 Document type: Article Affiliation country: United States