Comparative Analysis of Vision Transformers and Conventional Convolutional Neural Networks in Detecting Referable Diabetic Retinopathy.
Goh, Jocelyn Hui Lin; Ang, Elroy; Srinivasan, Sahana; Lei, Xiaofeng; Loh, Johnathan; Quek, Ten Cheer; Xue, Cancan; Xu, Xinxing; Liu, Yong; Cheng, Ching-Yu; Rajapakse, Jagath C; Tham, Yih-Chung.
Affiliation
  • Goh JHL; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
  • Ang E; School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore.
  • Srinivasan S; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
  • Lei X; Institute of High-Performance Computing, A*STAR, Singapore, Singapore.
  • Loh J; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
  • Quek TC; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
  • Xue C; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
  • Xu X; Institute of High-Performance Computing, A*STAR, Singapore, Singapore.
  • Liu Y; Institute of High-Performance Computing, A*STAR, Singapore, Singapore.
  • Cheng CY; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
  • Rajapakse JC; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore.
  • Tham YC; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore, Singapore.
Ophthalmol Sci; 4(6): 100552, 2024.
Article in En | MEDLINE | ID: mdl-39165694
ABSTRACT

Objective:

Vision transformers (ViTs) have shown promising performance in various classification tasks previously dominated by convolutional neural networks (CNNs). However, the performance of ViTs in referable diabetic retinopathy (DR) detection is relatively underexplored. In this study, using retinal photographs, we evaluated the comparative performance of ViTs and CNNs in detecting referable DR.

Design:

Retrospective study.

Participants:

A total of 48 269 retinal images from the open-source Kaggle DR detection dataset, the Messidor-1 dataset and the Singapore Epidemiology of Eye Diseases (SEED) study were included.

Methods:

Using 41 614 retinal photographs from the Kaggle dataset, we developed 5 CNN models (Visual Geometry Group 19, ResNet50, InceptionV3, DenseNet201, and EfficientNetV2S) and 4 ViT models (VAN_small, CrossViT_small, ViT_small, and Hierarchical Vision Transformer using Shifted Windows [SWIN]_tiny) for the detection of referable DR. We defined the presence of referable DR as eyes with moderate or worse DR. The comparative performance of all 9 models was evaluated in the Kaggle internal test set (1045 study eyes) and in 2 external test sets: the SEED study (5455 study eyes) and Messidor-1 (1200 study eyes).
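As an illustrative sketch only (not the authors' code), the CNN and ViT backbones could be instantiated for binary referable-DR classification with a library such as timm; the model identifiers below are assumptions and may differ across library versions.

import timm
import torch

# Assumed timm identifiers for a subset of the backbones named above.
BACKBONES = [
    "vgg19", "resnet50", "inception_v3", "densenet201",        # CNN examples
    "vit_small_patch16_224", "swin_tiny_patch4_window7_224",   # ViT examples
]

def build_model(name: str) -> torch.nn.Module:
    # num_classes=1 yields a single logit for the binary referable-DR label.
    return timm.create_model(name, pretrained=True, num_classes=1)

models = {name: build_model(name) for name in BACKBONES}

Each backbone would then be fine-tuned on the Kaggle training photographs under the same training protocol so that only the architecture differs between comparisons.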

Main Outcome Measures:

Area under the receiver operating characteristic curve (AUC), specificity, and sensitivity.

Results:

Among all models, the SWIN transformer displayed the highest AUC of 95.7% on the internal test set, significantly outperforming the CNN models (all P < 0.001). The same observation was confirmed in the external test sets, with the SWIN transformer achieving an AUC of 97.3% in SEED and 96.3% in Messidor-1. When the specificity level was fixed at 80% in the internal test set, the SWIN transformer achieved the highest sensitivity of 94.4%, significantly better than all the CNN models (sensitivity levels ranging between 76.3% and 83.8%; all P < 0.001). This trend was also consistently observed in both external test sets.
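For clarity, the sensitivity-at-fixed-specificity metric reported above can be derived from an ROC curve as in the following sketch; the variable names and toy data are illustrative placeholders, not study data.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.80):
    # Scan ROC operating points and keep the best sensitivity (TPR) among
    # thresholds whose specificity (1 - FPR) meets the target level.
    fpr, tpr, _ = roc_curve(y_true, y_score)
    specificity = 1.0 - fpr
    eligible = specificity >= target_specificity
    return float(tpr[eligible].max()) if eligible.any() else 0.0

# Toy example: 1 = referable DR, scores are model output probabilities.
y_true = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.70, 0.20])
print("AUC:", roc_auc_score(y_true, y_score))
print("Sensitivity at 80% specificity:", sensitivity_at_specificity(y_true, y_score))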

Conclusions:

Our findings demonstrate that ViTs provide superior performance over CNNs in detecting referable DR from retinal photographs. These results point to the potential of utilizing ViT models to improve and optimize retinal photograph-based deep learning for referable DR detection.

Financial Disclosures:

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Language: English | Journal: Ophthalmol Sci | Year: 2024 | Document type: Article | Publication country: Netherlands
