Adversarial attacks and adversarial robustness in computational pathology.
Ghaffari Laleh, Narmin; Truhn, Daniel; Veldhuizen, Gregory Patrick; Han, Tianyu; van Treeck, Marko; Buelow, Roman D; Langer, Rupert; Dislich, Bastian; Boor, Peter; Schulz, Volkmar; Kather, Jakob Nikolas.
Affiliation
  • Ghaffari Laleh N; Department of Medicine III, University Hospital RWTH Aachen, RWTH Aachen University, Aachen, Germany.
  • Truhn D; Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany.
  • Veldhuizen GP; Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany.
  • Han T; Department of Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany.
  • van Treeck M; Department of Medicine III, University Hospital RWTH Aachen, RWTH Aachen University, Aachen, Germany.
  • Buelow RD; Institute of Pathology, University Hospital RWTH Aachen, Aachen, Germany.
  • Langer R; Institute of Pathology, University of Bern, Bern, Switzerland; Institute of Pathology and Molecular Pathology, Kepler University Hospital, Johannes Kepler University Linz, Linz, Austria.
  • Dislich B; Institute of Pathology, University of Bern, Bern, Switzerland.
  • Boor P; Institute of Pathology, University Hospital RWTH Aachen, Aachen, Germany.
  • Schulz V; Department of Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany.
  • Kather JN; Department of Medicine III, University Hospital RWTH Aachen, RWTH Aachen University, Aachen, Germany.
Nat Commun; 13(1): 5711, 2022 Sep 29.
Article in English | MEDLINE | ID: mdl-36175413
Artificial intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly-supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used at inference. We demonstrate that vision transformers (ViTs) perform on par with CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared to CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbation of the input data, especially adversarial attacks.
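
To illustrate the attack family the abstract refers to, the sketch below (not taken from the paper; the model, data batch, and epsilon value are hypothetical placeholders) implements the fast gradient sign method (FGSM), a standard white-box attack that perturbs an input image by one signed-gradient step so as to increase the classifier's loss:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=2/255):
        # White-box attack: requires gradient access to the model.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step in the direction that increases the loss.
        adversarial = images + epsilon * images.grad.sign()
        # Keep pixel values in the valid [0, 1] range.
        return adversarial.clamp(0, 1).detach()

    # Hypothetical usage: compare clean vs. adversarial accuracy of a classifier.
    # x_adv = fgsm_attack(model, x_batch, y_batch)
    # clean_acc = (model(x_batch).argmax(1) == y_batch).float().mean()
    # adv_acc = (model(x_adv).argmax(1) == y_batch).float().mean()

A drop from clean to adversarial accuracy under such perturbations is the susceptibility the authors quantify; adversarially robust training, in its simplest form, mixes such perturbed batches back into the training loop.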
Subjects

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Neural Networks, Computer Language: English Year of publication: 2022 Document type: Article