Robust prostate disease classification using transformers with discrete representations.
Int J Comput Assist Radiol Surg; 2024 May 13.
Article | En | MEDLINE | ID: mdl-38740720
ABSTRACT
PURPOSE:
Automated prostate disease classification on multi-parametric MRI has recently shown promising results with the use of convolutional neural networks (CNNs). The vision transformer (ViT) is a convolution-free architecture that relies solely on the self-attention mechanism and has surpassed CNNs in some natural image classification tasks. However, these models are not very robust to textural shifts in the input space. In MRI, such textural shifts frequently arise from varying acquisition protocols. Here, we focus on the ability of models to generalise well to new magnet strengths for MRI.
METHOD:
We propose a new framework to improve the robustness of vision transformer-based models for disease classification by constructing discrete representations of the data using vector quantisation. We sample a subset of the discrete representations to form the input to a transformer-based model. Our transformer uses cross-attention to combine the discrete representations of T2-weighted and apparent diffusion coefficient (ADC) images.
RESULTS:
We analyse the robustness of our model by training on 1.5 T scanner data and testing on 3 T scanner data, and vice versa. Our approach achieves state-of-the-art performance for classification of lesions on prostate MRI and outperforms various other CNN- and transformer-based models in terms of robustness to domain shift and to perturbations in the input space.
CONCLUSION:
We develop a method to improve the robustness of transformer-based disease classification of prostate lesions on MRI using discrete representations of the T2-weighted and ADC images.
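The pipeline described in the abstract (vector-quantise each modality's embeddings against a learned codebook, sample a subset of the discrete tokens, then fuse modalities with cross-attention) can be sketched in a minimal NumPy form. This is an illustrative sketch only, not the authors' implementation: the codebook, token counts, embedding dimension, and the toy classifier head are all hypothetical, and the codebook here is random rather than learned.

```python
import numpy as np

def vector_quantise(z, codebook):
    # z: (N, D) continuous patch embeddings; codebook: (K, D) code vectors.
    # Replace each embedding with its nearest codebook entry (Euclidean).
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx

def cross_attention(q_tokens, kv_tokens):
    # Single-head scaled dot-product cross-attention: queries from one
    # modality attend to keys/values from the other modality.
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # row-wise softmax
    return w @ kv_tokens

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))            # K=8 codes, D=16 (hypothetical)
t2_z = rng.normal(size=(32, 16))               # stand-in T2-weighted embeddings
adc_z = rng.normal(size=(32, 16))              # stand-in ADC embeddings

t2_q, _ = vector_quantise(t2_z, codebook)      # discrete T2w representation
adc_q, _ = vector_quantise(adc_z, codebook)    # discrete ADC representation

# Sample a subset of the discrete tokens to form the transformer input.
sub = rng.choice(32, size=16, replace=False)
fused = cross_attention(t2_q[sub], adc_q[sub])           # (16, 16) fused tokens
logits = fused.mean(axis=0) @ rng.normal(size=(16, 2))   # toy 2-class head
```

In the actual framework the codebook would be trained (e.g. with a VQ-style commitment loss) and the cross-attention would sit inside a full transformer stack; the sketch only shows how discretisation constrains each token to a small, shared vocabulary, which is the property the paper links to robustness under textural shift.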
Full text: 1
Database: MEDLINE
Language: En
Journal: Int J Comput Assist Radiol Surg
Journal subject: RADIOLOGIA
Year: 2024
Document type: Article