Results 1 - 20 of 21
1.
BMC Med Inform Decis Mak ; 24(1): 37, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38321416

ABSTRACT

The most common eye disease in people with diabetes is diabetic retinopathy (DR). It can cause blurred vision or even total blindness. Early detection is therefore essential to prevent or mitigate the impact of DR. However, because symptoms may not be noticeable in the early stages, DR is difficult for doctors to identify. Numerous predictive models based on machine learning (ML) and deep learning (DL) have consequently been developed to determine all stages of DR. However, existing DR classification models either cannot classify every DR stage or rely on computationally heavy approaches. Common metrics such as accuracy, F1 score, precision, recall, and AUC-ROC score are not reliable for assessing DR grading because they do not account for two key factors: the severity of the discrepancy between the assigned and predicted grades and the ordered nature of the DR grading scale. This research proposes computationally efficient ensemble methods for the classification of DR. These methods leverage pre-trained model weights, reducing training time and resource requirements. In addition, data augmentation techniques are used to address data limitations, enrich features, and improve generalization. This combination offers a promising approach for accurate and robust DR grading. In particular, we take advantage of transfer learning using models trained on DR data and employ CLAHE for image enhancement and Gaussian blur for noise reduction. We propose a three-layer classifier that incorporates dropout and ReLU activation; this design aims to minimize overfitting while effectively extracting features and assigning DR grades. We prioritize the Quadratic Weighted Kappa (QWK) metric due to its sensitivity to label discrepancies, which is crucial for an accurate diagnosis of DR. This combined approach achieves state-of-the-art QWK scores (0.901, 0.967, and 0.944) on the EyePACS, APTOS, and Messidor datasets.
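The QWK metric the abstract prioritizes penalizes disagreements by the squared distance between grades, which is what makes it suited to ordinal DR scales. A minimal NumPy sketch of the standard definition (not the paper's code; names are illustrative):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic Weighted Kappa for ordinal DR grades 0..n_classes-1."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Observed agreement (confusion) matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: larger penalty for distant grades
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected matrix under chance agreement, scaled to the same total
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # 1.0
```

Perfect agreement yields 1.0; a prediction two grades off is penalized four times as heavily as one grade off, which accuracy or F1 would not capture.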


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Physicians , Humans , Diabetic Retinopathy/diagnosis , Algorithms , Machine Learning , Image Interpretation, Computer-Assisted/methods
2.
Ophthalmic Res ; 61(2): 100-106, 2019.
Article in English | MEDLINE | ID: mdl-30554213

ABSTRACT

BACKGROUND: Digital retinal imaging is the gold-standard technique for diabetic retinopathy (DR) and diabetic macular oedema (DME) assessment during DR screening. OBJECTIVES: To evaluate the diagnostic accuracy of digital retinal fundus image (DRFI) analysis in detecting DME using three manual grading systems (MGS) and to compare it with optical coherence tomography (OCT) findings. METHOD: A total of 287 DRFI of 287 eyes were analysed. Non-stereoscopic 45° images were acquired using a Kowa VX-20 camera and were graded according to three MGS: Early Treatment Diabetic Retinopathy Study (ETDRS), International Clinical Diabetic Retinopathy (ICDR), and United Kingdom National Screening Committee (UKNSC). The two graders were masked to the patients' clinical DR status. DME characteristics were analysed using OCT. RESULTS: Very good agreement in detecting DME was found, with Cohen's κ = 0.83 (ICDR vs. ETDRS), κ = 0.83 (ICDR vs. UKNSC), and κ = 0.82 (ETDRS vs. UKNSC). Sensitivity and specificity of DRFI analysis in DME assessment were 70.0 and 69.6% for UKNSC, 71.9 and 67.4% for ETDRS, and 70.9 and 65.2% for ICDR, respectively. Positive and negative predictive values were 91.7 and 32.7% for UKNSC, 91.4 and 33.3% for ETDRS, and 90.7 and 31.9% for ICDR, respectively. On OCT scans, micro-architectural damage of both inner and outer retinal layers and mean ganglion cell layer thickness showed a significant association with the presence of DME detected with DRFI analysis. CONCLUSIONS: Despite the low negative predictive value, the good specificity and sensitivity of DRFI in detecting DME make it a useful tool in a routine clinical setting, and its potential in diabetic eye screening is yet to be realized.
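The sensitivity, specificity, PPV, and NPV figures above all derive from a 2x2 confusion table. A small sketch of those definitions, using made-up counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for illustration, not the study's data
m = diagnostic_metrics(tp=70, fp=30, fn=30, tn=70)
print(m["sensitivity"], m["ppv"])  # 0.7 0.7
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on prevalence; the study's high PPV and low NPV reflect the high DME prevalence in its sample.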


Subject(s)
Diabetes Mellitus, Type 2/diagnostic imaging , Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Image Processing, Computer-Assisted/methods , Aged , Aged, 80 and over , Cross-Sectional Studies , Female , Fluorescein Angiography , Humans , Male , Middle Aged , Photoreceptor Cells, Vertebrate/pathology , Reproducibility of Results , Retinal Ganglion Cells/pathology , Retrospective Studies , Sensitivity and Specificity , Tomography, Optical Coherence/methods , Visual Acuity/physiology
3.
Comput Biol Med ; 175: 108459, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701588

ABSTRACT

Diabetic retinopathy (DR) is the most common diabetic complication, and it usually leads to retinal damage, vision loss, and even blindness. A computer-aided DR grading system has a significant impact on helping ophthalmologists with rapid screening and diagnosis. Recent advances in fundus photography have precipitated the development of novel retinal imaging cameras and their subsequent implementation in clinical practice. However, most deep learning-based algorithms for DR grading demonstrate limited generalization across domains. This inferior performance stems from variance in imaging protocols and devices inducing domain shifts. We posit that declining model performance between domains arises from learning spurious correlations in the data. Incorporating do-operations from causality analysis into model architectures may mitigate this issue and improve generalizability. Specifically, a novel universal structural causal model (SCM) was proposed to analyze spurious correlations in fundus imaging. Building on this, a causality-inspired diabetic retinopathy grading framework named CauDR was developed to eliminate spurious correlations and achieve more generalizable DR diagnostics. Furthermore, existing datasets were reorganized into a 4DR benchmark for the domain generalization (DG) scenario. Results demonstrate the effectiveness and state-of-the-art (SOTA) performance of CauDR.


Subject(s)
Diabetic Retinopathy , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/diagnosis , Humans , Fundus Oculi , Algorithms , Deep Learning , Image Interpretation, Computer-Assisted/methods
4.
Comput Methods Programs Biomed ; 249: 108160, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38583290

ABSTRACT

BACKGROUND AND OBJECTIVE: Early detection and grading of Diabetic Retinopathy (DR) is essential to determine adequate treatment and prevent severe vision loss. However, manual analysis of fundus images is time-consuming, and DR screening programs are challenged by the limited availability of human graders. Current automatic approaches for DR grading attempt the joint detection of all signs at the same time. However, the classification can be optimized if red lesions and bright lesions are processed independently, since the task gets divided and simplified. Furthermore, clinicians would greatly benefit from explainable artificial intelligence (XAI) to support the automatic model predictions, especially when the type of lesion is specified. As a novelty, we propose an end-to-end deep learning framework for automatic DR grading (5 severity degrees) based on separating the attention of the dark structures from the bright structures of the retina. As the main contribution, this approach allowed us to generate independent interpretable attention maps for red lesions, such as microaneurysms and hemorrhages, and bright lesions, such as hard exudates, while using image-level labels only. METHODS: Our approach is based on a novel attention mechanism which focuses separately on the dark and the bright structures of the retina by performing a prior image decomposition. This mechanism can be seen as an XAI approach which generates independent attention maps for red lesions and bright lesions. The framework includes an image quality assessment stage and deep learning-related techniques, such as data augmentation, transfer learning, and fine-tuning. We used the Xception architecture as a feature extractor and the focal loss function to deal with data imbalance. RESULTS: The Kaggle DR detection dataset was used for method development and validation. The proposed approach achieved 83.7% accuracy and a Quadratic Weighted Kappa of 0.78 in classifying DR among 5 severity degrees, which outperforms several state-of-the-art approaches. Nevertheless, the main result of this work is the generated attention maps, which reveal the pathological regions on the image, distinguishing the red lesions from the bright lesions. These maps provide explainability to the model predictions. CONCLUSIONS: Our results suggest that our framework is effective for automatically grading DR. The separate attention approach has proven useful for optimizing the classification. On top of that, the obtained attention maps facilitate visual interpretation for clinicians. Therefore, the proposed method could be a diagnostic aid for the early detection and grading of DR.
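The focal loss used here to handle class imbalance down-weights well-classified examples so training concentrates on hard ones. A hedged NumPy sketch of the standard formulation (the paper's exact hyperparameters are not stated in the abstract):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-12):
    """Multi-class focal loss: mean of -(1 - p_t)^gamma * log(p_t).

    probs: (N, C) predicted class probabilities; targets: (N,) integer labels.
    With gamma=0 this reduces to ordinary cross-entropy.
    """
    probs = np.asarray(probs, dtype=float)
    p_t = probs[np.arange(len(targets)), targets]  # probability of true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))

probs = np.array([[0.9, 0.05, 0.05],
                  [0.2, 0.7, 0.1]])
targets = np.array([0, 1])
```

Because `(1 - p_t)^gamma < 1` for confident predictions, the focal loss is strictly below the plain cross-entropy on the same data, shifting gradient mass toward the minority DR grades.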


Subject(s)
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Fundus Oculi
5.
Comput Biol Med ; 174: 108418, 2024 May.
Article in English | MEDLINE | ID: mdl-38593641

ABSTRACT

Domain adaptation (DA) is commonly employed in diabetic retinopathy (DR) grading with unannotated fundus images, allowing knowledge transfer from labeled color fundus images. Existing DA methods often struggle with domain disparities, hindering DR grading performance relative to clinical diagnosis. To improve DR diagnostic accuracy, we propose a source-free active domain adaptation method (SFADA) that generates features of color fundus images with noise, selects valuable ultra-wide-field (UWF) fundus images through local representation matching, and adapts models using DR lesion prototypes. Importantly, SFADA enhances data security and patient privacy by excluding source domain data. It reduces image resolution and boosts model training speed by modeling DR grade relationships directly. Experiments show SFADA significantly improves DR grading performance, increasing accuracy by 20.90% and quadratic weighted kappa by 18.63% over the baseline, reaching 85.36% and 92.38%, respectively. These results suggest SFADA's promise for real clinical applications.


Subject(s)
Diabetic Retinopathy , Fundus Oculi , Humans , Diabetic Retinopathy/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Algorithms
6.
Comput Biol Med ; 172: 108246, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38471350

ABSTRACT

Diabetic retinopathy (DR) is a severe ocular complication of diabetes that can lead to vision damage and even blindness. Currently, traditional deep convolutional neural networks (CNNs) used for DR grading tasks face two primary challenges: (1) insensitivity to minority classes due to imbalanced data distribution, and (2) neglect of the relationship between the left and right eyes, since the fundus image of only one eye is used for training without differentiating between them. To tackle these challenges, we propose the DRGCNN (DR Grading CNN) model. To solve the problem caused by imbalanced data distribution, our model adopts a more balanced strategy by allocating an equal number of channels to feature maps representing the various DR categories. Furthermore, we introduce a CAM-EfficientNetV2-M encoder dedicated to encoding input retinal fundus images for feature vector generation. The number of parameters of our encoder is 52.88 M, which is less than RegNet_y_16gf (80.57 M) and EfficientNetB7 (63.79 M), but the corresponding kappa value is higher. Additionally, in order to take advantage of the binocular relationship, we input fundus retinal images from both eyes of the patient into the network for feature fusion during the training phase. We achieved a kappa value of 86.62% on the EyePACS dataset and 86.16% on the Messidor-2 dataset. Experimental results on these representative diabetic retinopathy (DR) datasets demonstrate the exceptional performance of our DRGCNN model, establishing it as a highly competitive intelligent classification model in the field of DR. The code is available for use at https://github.com/Fat-Hai/DRGCNN.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Neural Networks, Computer , Fundus Oculi
7.
Quant Imaging Med Surg ; 14(2): 1820-1834, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38415109

ABSTRACT

Background: Diabetic retinopathy (DR) is one of the most common eye diseases. Convolutional neural networks (CNNs) have proven to be a powerful tool for learning DR features; however, accurate DR grading remains challenging due to the small lesions in optical coherence tomography angiography (OCTA) images and the small number of samples. Methods: In this article, we developed a novel deep-learning framework to achieve the fine-grained classification of DR; that is, the lightweight channel and spatial attention network (CSANet). Our CSANet comprises two modules: the baseline model, and the hybrid attention module (HAM) based on spatial attention and channel attention. The spatial attention module is used to mine small lesions and obtain a set of spatial position weights to address the problem of small lesions being ignored during the convolution process. The channel attention module uses a set of channel weights to focus on useful features and suppress irrelevant features. Results: The extensive experimental results for the OCTA-DR and diabetic retinopathy analysis challenge (DRAC) 2022 data sets showed that the CSANet achieved state-of-the-art DR grading results, showing the effectiveness of the proposed model. The CSANet had an accuracy rate of 97.41% for the OCTA-DR data set and 85.71% for the DRAC 2022 data set. Conclusions: Extensive experiments using the OCTA-DR and DRAC 2022 data sets showed that the proposed model effectively mitigated the problems of mutual confusion between DRs of different severity and small lesions being neglected in the convolution process, and thus improved the accuracy of DR classification.
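The hybrid channel/spatial attention described above follows a common pattern: channel weights from global average pooling through a small MLP, and spatial weights from channel-wise statistics. A toy NumPy sketch of that general idea with random weights (illustrative only, not the trained CSANet):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Reweight channels: global average pooling -> tiny 2-layer MLP -> sigmoid."""
    gap = feat.mean(axis=(1, 2))                 # (C,) per-channel summary
    w = sigmoid(w2 @ np.maximum(w1 @ gap, 0))    # (C,) weights in (0, 1)
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Reweight positions from channel-wise mean and max maps."""
    m = np.stack([feat.mean(axis=0), feat.max(axis=0)])  # (2, H, W)
    w = sigmoid(m.mean(axis=0))                  # crude stand-in for a conv: (H, W)
    return feat * w[None, :, :]

C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))
w2 = rng.standard_normal((C, C // 2))
out = spatial_attention(channel_attention(feat, w1, w2))
print(out.shape)  # (8, 4, 4)
```

The spatial branch is what lets small OCTA lesions contribute position weights instead of being averaged away; the channel branch suppresses uninformative feature maps.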

8.
Comput Biol Med ; 155: 106631, 2023 03.
Article in English | MEDLINE | ID: mdl-36805216

ABSTRACT

Diabetic Retinopathy (DR) is a common ocular complication in diabetes patients and one of the main causes of blindness worldwide. Automatic and efficient DR grading plays a vital role in timely treatment. However, it is difficult to effectively distinguish different types of distinct lesions (such as neovascularization in proliferative DR, microaneurysms in mild NPDR, etc.) using traditional convolutional neural networks (CNNs), which greatly affects the ultimate classification results. In this article, we propose a triple-cascade network model (Triple-DRNet) to solve the aforementioned issue. The Triple-DRNet effectively subdivides the classification of the five types of DR and improves grading performance through the following stages: (1) In the first stage, the network carries out binary classification: DR vs. No DR. (2) In the second stage, a cascade network distinguishes between PDR and NPDR. (3) The final cascade network differentiates the mild, moderate, and severe types of NPDR. Experimental results show that the accuracy of the Triple-DRNet on the APTOS 2019 Blindness Detection dataset reaches 92.08% and the QWK metric reaches 93.62%, which demonstrates the effectiveness of the devised Triple-DRNet compared with other mainstream models.
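The three-stage cascade reduces to a simple decision chain at inference time. A sketch with stub predicates standing in for the three trained sub-networks (the real model uses CNN classifiers at each stage):

```python
def cascade_grade(image, has_dr, is_pdr, npdr_severity):
    """Three-stage cascade in the spirit of Triple-DRNet.

    has_dr, is_pdr, and npdr_severity are stand-ins for the three trained
    classifiers. Returns one of: 'No DR', 'Mild', 'Moderate', 'Severe', 'PDR'.
    """
    if not has_dr(image):        # stage 1: DR vs. No DR
        return "No DR"
    if is_pdr(image):            # stage 2: PDR vs. NPDR
        return "PDR"
    return npdr_severity(image)  # stage 3: 'Mild' | 'Moderate' | 'Severe'

# Stub classifiers for illustration only
grade = cascade_grade("img", lambda x: True, lambda x: False, lambda x: "Moderate")
print(grade)  # Moderate
```

Each stage only ever sees the cases the previous stage passed along, which is how the design turns one hard 5-way problem into three easier ones.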


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Fundus Oculi , Neural Networks, Computer , Blindness , Neovascularization, Pathologic
9.
Phys Med Biol ; 69(1)2023 Dec 22.
Article in English | MEDLINE | ID: mdl-38035368

ABSTRACT

Objective. Diabetic retinopathy (DR) grading plays an important role in clinical diagnosis. However, automatic grading of DR is challenging due to the presence of intra-class variation and small lesions. On the one hand, deep features learned by convolutional neural networks often lose valid information about these small lesions. On the other hand, the great variability of lesion features, including differences in type and quantity, can exhibit considerable divergence even among fundus images of the same grade. To address these issues, we propose a novel multi-scale multi-attention network (MMNet). Approach. Firstly, to focus on different lesion features of fundus images, we propose a lesion attention module, which aims to encode multiple different lesion attention feature maps by combining channel attention and spatial attention, thus extracting global feature information and preserving diverse lesion features. Secondly, we propose a multi-scale feature fusion module to learn more feature information for small lesion regions, which combines complementary relationships between different convolutional layers to capture more detailed feature information. Furthermore, we introduce a Cross-layer Consistency Constraint Loss to overcome semantic differences between multi-scale features. Main results. The proposed MMNet obtains a high accuracy of 86.4% and a high kappa score of 88.4% for multi-class DR grading on the EyePACS dataset, along with 98.6% AUC, 95.3% accuracy, 92.7% recall, 95.0% precision, and a 93.3% F1-score for referral vs. non-referral classification on the Messidor-1 dataset. Extensive experiments on two challenging benchmarks demonstrate that our MMNet achieves significant improvements and outperforms other state-of-the-art DR grading methods. Significance. MMNet has improved the diagnostic efficiency and accuracy of diabetic retinopathy grading and promotes the application of computer-aided medical diagnosis in DR screening.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Benchmarking , Diagnosis, Computer-Assisted , Neural Networks, Computer
10.
Comput Biol Med ; 152: 106408, 2023 01.
Article in English | MEDLINE | ID: mdl-36516580

ABSTRACT

Diabetic retinopathy (DR) is the primary cause of blindness in adults. Incorporating machine learning into DR grading can improve the accuracy of medical diagnosis. However, problems such as severe data imbalance persist. Existing studies on DR grading also ignore the correlation between its labels. In this study, a category weighted network (CWN) was proposed to achieve data balance at the model level. In the CWN, a reference for weight settings is provided by calculating the category gradient norm, reducing the experimental overhead. We proposed using relation weighted labels instead of one-hot labels to investigate the distance relationship between labels. Experiments revealed that the proposed CWN achieved excellent performance on various DR datasets. Furthermore, relation weighted labels exhibit broad applicability and can improve other methods that use one-hot labels. The proposed method achieved kappa scores of 0.9431 and 0.9226 and accuracies of 90.94% and 86.12% on the DDR and APTOS datasets, respectively.
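One plausible reading of "relation weighted labels" is a soft target whose mass decays with the distance between grades, rather than a one-hot vector. A hypothetical NumPy sketch of that idea (the paper's exact weighting scheme may differ):

```python
import numpy as np

def relation_weighted_label(true_grade, n_classes=5, temperature=1.0):
    """Soft label that decays with squared grade distance.

    An illustrative guess at the general idea, not the paper's formula:
    weight_i proportional to exp(-(i - true_grade)^2 / temperature).
    """
    grades = np.arange(n_classes)
    logits = -((grades - true_grade) ** 2) / temperature
    w = np.exp(logits)
    return w / w.sum()  # normalize to a probability distribution

label = relation_weighted_label(2)
print(label.argmax())  # 2
```

Such a target still peaks at the true grade but assigns nonzero probability to neighboring grades, so a prediction of "moderate" for a "severe" case is penalized less than "no DR" would be.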


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Mass Screening/methods , Machine Learning , Fundus Oculi
11.
Heliyon ; 9(7): e17217, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37449186

ABSTRACT

Accurate diabetic retinopathy (DR) grading is crucial for making a proper treatment plan to reduce the damage caused by vision loss. This task is challenging because DR-related lesions are often small, subtle in visual differences, and subject to intra-class variations. Moreover, the relationships between the lesions and the DR levels are complicated. Although many deep learning (DL) DR grading systems have been developed with some success, there is still room for improvement in grading accuracy. A common issue is that little medical knowledge is used in these DL DR grading systems. As a result, the grading results are not properly interpreted by ophthalmologists, thus hindering the potential for practical applications. This paper proposes a novel fine-grained attention and knowledge-based collaborative network (FA+KC-Net) to address this concern. The fine-grained attention network dynamically divides the extracted feature maps into smaller patches and effectively captures small image features that are meaningful in the sense of its training from a large amount of retinopathy fundus images. The knowledge-based collaborative network extracts a-priori medical knowledge features, i.e., lesions such as microaneurysms (MAs), soft exudates (SEs), hard exudates (EXs), and hemorrhages (HEs). Finally, decision rules are developed to fuse the DR grading results from the fine-grained network and the knowledge-based collaborative network to make the final grading. Extensive experiments are carried out on four widely used datasets, the DDR, Messidor, APTOS, and EyePACS, to evaluate the efficacy of our method and compare it with other state-of-the-art (SOTA) DL models. Simulation results show that the proposed FA+KC-Net is accurate and stable, achieving the best performance on the DDR, Messidor, and APTOS datasets.

12.
J Biophotonics ; 16(11): e202300052, 2023 11.
Article in English | MEDLINE | ID: mdl-37421596

ABSTRACT

PURPOSE: Diabetic retinopathy (DR) is one of the most common diseases caused by diabetes and can lead to vision loss or even blindness. Wide-field optical coherence tomography (OCT) angiography is a non-invasive imaging technology and a convenient means of diagnosing DR. METHODS: A newly constructed Retinal OCT-Angiography Diabetic retinopathy (ROAD) dataset is utilized for segmentation and grading tasks. It contains 1200 normal images, 1440 DR images, and 1440 ground truths for DR image segmentation. To handle the problem of grading DR, we propose a novel and effective framework, named projective map attention-based convolutional neural network (PACNet). RESULTS: The experimental results demonstrate the effectiveness of our PACNet. The accuracy of the proposed framework for grading DR is 87.5% on the ROAD dataset. CONCLUSIONS: Information on ROAD can be viewed at https://mip2019.github.io/ROAD. The ROAD dataset will be helpful for the development of early DR detection and for future research. TRANSLATIONAL RELEVANCE: The novel framework for grading DR is a valuable research and clinical diagnosis method.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Tomography, Optical Coherence/methods , Fluorescein Angiography , Neural Networks, Computer , Early Diagnosis
13.
Diagnostics (Basel) ; 13(3)2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36766451

ABSTRACT

The number of people who suffer from diabetes worldwide has increased considerably in recent years, and the disease affects people of all ages. People who have had diabetes for a long time are affected by a condition called Diabetic Retinopathy (DR), which damages the eyes. Early automatic detection using new technologies can help avoid complications such as loss of vision. Currently, with the development of Artificial Intelligence (AI) techniques, especially Deep Learning (DL), DL-based methods are widely preferred for developing DR detection systems. To this end, this study surveys the existing literature on diabetic retinopathy diagnosis from fundus images using deep learning and provides a brief description of the current DL techniques used by researchers in this field. It then lists some of the commonly used datasets, followed by a performance comparison of the reviewed methods with respect to metrics commonly used in computer vision tasks.

14.
Comput Biol Med ; 157: 106750, 2023 05.
Article in English | MEDLINE | ID: mdl-36931202

ABSTRACT

Diabetic retinopathy (DR) is a common early diabetic complication and one of the main causes of blindness. In clinical diagnosis and treatment, regular screening with fundus imaging is an effective way to prevent the development of DR. However, the regular fundus images used in most DR screening work have a small imaging range and a narrow field of vision and cannot contain more complete lesion information, which leads to less ideal automatic DR grading results. In order to improve the accuracy of DR grading, we establish a dataset containing 101 ultra-wide-field (UWF) DR fundus images and propose a deep learning (DL) automatic classification method based on a new preprocessing method. The emerging UWF fundus images have the advantages of a large imaging range and a wide field of vision and contain more information about the lesions. In data preprocessing, we design a data denoising method for UWF images and use data enhancement methods to improve their contrast and brightness to improve the classification effect. In order to verify the efficiency of our dataset and the effectiveness of our preprocessing method, we design a series of experiments including a variety of DL classification models. The experimental results show that we can achieve high classification accuracy by using only the backbone model. The most basic ResNet50 model reaches an average classification accuracy (ACA) of 0.66, Macro F1 of 0.6559, and Kappa of 0.58. The best-performing Swin-S model reaches an ACA of 0.72, Macro F1 of 0.7018, and Kappa of 0.65. DR grading using UWF images can achieve higher accuracy and efficiency, which has practical significance and value in clinical applications.
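The preprocessing step combines denoising with contrast and brightness enhancement; the exact method is custom to the paper. As a generic stand-in for the enhancement part, a sketch of min-max contrast stretching followed by gamma correction:

```python
import numpy as np

def enhance(img, gamma=0.8):
    """Stretch intensities to [0, 1], then apply gamma (< 1 brightens).

    A generic stand-in; the paper's UWF-specific denoising and enhancement
    pipeline is not reproduced here.
    """
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.zeros_like(img)
    stretched = (img - lo) / (hi - lo)  # min-max contrast stretch
    return stretched ** gamma           # gamma correction

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
out = enhance(img)
print(out.min(), out.max())  # 0.0 1.0
```

In practice, fundus pipelines often use local methods such as CLAHE instead of this global stretch, since illumination varies across the retina.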


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Photography/methods
15.
Phys Med Biol ; 67(24)2022 12 06.
Article in English | MEDLINE | ID: mdl-36322995

ABSTRACT

Objective. Diabetic retinopathy (DR) grading is primarily performed by assessing fundus images. Many types of lesions, such as microaneurysms, hemorrhages, and soft exudates, can be present simultaneously in a single image. However, their sizes may be small, making it difficult to differentiate adjacent DR grades even using deep convolutional neural networks (CNNs). Recently, vision transformers have shown comparable or even superior performance to CNNs, and they also learn different visual representations from CNNs. Inspired by this finding, we propose a two-path contextual transformer with Xception network (CoT-XNet) to improve the accuracy of DR grading. Approach. The representations learned by CoT through one path and those by the Xception network through another path are concatenated before the fully connected layer. Meanwhile, dedicated pre-processing, data resampling, and test-time augmentation strategies are implemented. The performance of CoT-XNet is evaluated on the publicly available DDR, APTOS2019, and EyePACS datasets, which together include over 50 000 images. Ablation experiments and comprehensive comparisons with various state-of-the-art (SOTA) models have also been performed. Main results. Our proposed CoT-XNet shows better performance than available SOTA models; the accuracy and Kappa are 83.10% and 0.8496, 84.18% and 0.9000, and 84.10% and 0.7684, respectively, on the three datasets (listed above). Class activation maps of the CoT and Xception networks are different and complementary in most images. Significance. By concatenating the different visual representations learned by the CoT and Xception networks, CoT-XNet can accurately grade DR from fundus images and presents good generalizability. CoT-XNet will promote the application of artificial intelligence-based systems in DR screening of large-scale populations.
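Test-time augmentation, one of the strategies listed above, averages a model's predictions over several views of the same image. A minimal sketch with a toy stand-in model (the augmentation set here is an assumption; the paper's choices are not given in the abstract):

```python
import numpy as np

def tta_predict(model, img):
    """Average class probabilities over identity, horizontal, and vertical flips."""
    views = [img, img[:, ::-1], img[::-1, :]]
    preds = np.stack([model(v) for v in views])
    return preds.mean(axis=0)

def toy_model(img):
    """Toy 'classifier': softmax over coarse image statistics."""
    s = np.array([img.mean(), img.std(), img.max()])
    e = np.exp(s - s.max())
    return e / e.sum()

img = np.arange(16, dtype=float).reshape(4, 4)
probs = tta_predict(toy_model, img)
print(round(float(probs.sum()), 6))  # 1.0
```

Averaging over views reduces the variance introduced by any single orientation, at the cost of several forward passes per image.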


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Artificial Intelligence
16.
J Clin Med ; 11(11)2022 May 31.
Article in English | MEDLINE | ID: mdl-35683522

ABSTRACT

Poland has never had a widespread diabetic retinopathy (DR) screening program and subsequently has no purpose-trained graders and no established grader training scheme. Herein, we compare the performance and variability of three retinal specialists with no additional DR grading training in assessing images from 335 real-life screening encounters and contrast their performance against IDx-DR, a US Food and Drug Administration (FDA) approved DR screening suite. A total of 1501 fundus images from 670 eyes were assessed by each grader with a final grade on a per-eye level. Unanimous agreement between all graders was achieved for 385 eyes, and 110 patients, out of which 98% had a final grade of no DR. Thirty-six patients had final grades higher than mild DR, out of which only two had no grader disagreements regarding severity. A total of 28 eyes underwent adjudication due to complete grader disagreement. Four patients had discordant grades ranging from no DR to severe DR between the human graders and IDx-DR. Retina specialists achieved kappa scores of 0.52, 0.78, and 0.61. Retina specialists had relatively high grader variability and only a modest concordance with IDx-DR results. Focused training and verification are recommended for any potential DR graders before assessing DR screening images.

17.
Comput Biol Med ; 149: 105970, 2022 10.
Article in English | MEDLINE | ID: mdl-36058067

ABSTRACT

Diabetic retinopathy (DR) is currently considered to be one of the most common diseases that cause blindness. However, DR grading methods are still challenged by imbalanced class distributions, small lesions, low accuracy on small sample classes, and poor explainability. To address these issues, a resampling-based cost loss attention network for explainable imbalanced diabetic retinopathy grading is proposed. First, a progressively-balanced resampling strategy is put forward to create balanced training data by mixing two sets of samples obtained from instance-based sampling and class-based sampling. Subsequently, a neuron and normalized channel-spatial attention module (Neu-NCSAM) is designed to learn global features with 3-D weights, and a weight sparsity penalty is applied to the attention module to suppress irrelevant channels or pixels, thereby capturing detailed small-lesion information. Thereafter, a weighted loss function combining Cost-Sensitive (CS) regularization and Gaussian label smoothing loss, called cost loss, is proposed to intelligently penalize incorrect predictions and thus improve the grading accuracy of small sample classes. Finally, Gradient-weighted Class Activation Mapping (Grad-CAM) is performed to acquire localization maps of the questionable lesions in order to visually interpret and understand the effect of our model. Comprehensive experiments are carried out on two public datasets, and the subjective and objective results demonstrate that the proposed network outperforms state-of-the-art methods and achieves the best DR grading results with 83.46%, 60.44%, 65.18%, 63.69% and 92.26% for Kappa, BACC, MCC, F1 and mAUC, respectively.
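The progressively-balanced resampling strategy mixes instance-based sampling (proportional to class counts) with class-based sampling (uniform across classes). A sketch of that interpolation, with made-up DR grade counts:

```python
import numpy as np

def sampling_weights(class_counts, progress):
    """Per-class sampling probabilities interpolating from instance-based
    (proportional to counts) to class-based (uniform) as progress goes 0 -> 1.

    How 'progress' is scheduled over training is the paper's design choice;
    this only shows the mixing itself.
    """
    counts = np.asarray(class_counts, dtype=float)
    p_instance = counts / counts.sum()                 # follows the data
    p_class = np.full_like(p_instance, 1.0 / len(counts))  # uniform
    return (1.0 - progress) * p_instance + progress * p_class

counts = [700, 150, 100, 40, 10]  # imbalanced grade counts (made up)
print(sampling_weights(counts, 0.0)[0])  # 0.7
print(sampling_weights(counts, 1.0)[0])  # 0.2
```

Early training thus sees the natural distribution (stable feature learning), while later training oversamples rare grades (better minority-class accuracy).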


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Diabetic Retinopathy/pathology, Humans
18.
Ophthalmol Sci ; 2(2)2022 Jun.
Article in English | MEDLINE | ID: mdl-35647573

ABSTRACT

Purpose: To study wider-field swept-source optical coherence tomography angiography (WF SS-OCTA) metrics, especially the non-perfusion area (NPA), in the diagnosis and staging of diabetic retinopathy (DR). Design: Cross-sectional observational study (November 2018-September 2020). Participants: 473 eyes of 286 patients (69 eyes of 49 control patients and 404 eyes of 237 diabetic patients). Methods: Eyes were imaged using 6 mm × 6 mm and 12 mm × 12 mm angiograms on WF SS-OCTA. Images were analyzed using the ARI Network and FIJI ImageJ. Mixed-effects multiple regression models and receiver operating characteristic analysis were used for statistical analyses. Main Outcome Measures: Quantitative metrics such as vessel density (VD); vessel skeletonized density (VSD); foveal avascular zone (FAZ) area, circularity, and perimeter; and NPA in DR, and their relative performance for its diagnosis and grading. Results: Among patients with diabetes (median age 59 years), 51 eyes had no DR, 185 eyes (88 mild, 97 moderate-severe) had non-proliferative DR (NPDR), and 168 eyes had proliferative DR (PDR). Trend analysis revealed a progressive decline in superficial capillary plexus (SCP) VD and VSD, and increased NPA, with increasing DR severity. Additionally, there was a significant reduction in deep capillary plexus (DCP) VD and VSD in early DR (mild NPDR), but the progressive reduction in advanced DR stages was not significant. NPA was the best single parameter for diagnosing DR (AUC: 0.96), whereas all parameters combined on both angiograms efficiently diagnosed DR (AUC: 0.97) and differentiated between DR stages (AUC range: 0.83-0.97). The presence of diabetic macular edema was associated with reduced SCP and DCP VD and VSD within mild NPDR eyes, whereas VD and VSD were increased in the SCP among the moderate-severe NPDR group. Conclusions: Our work highlights the importance of NPA, which can be measured more readily and easily with WF SS-OCTA than with fluorescein angiography. It is additionally quick and non-invasive, and hence can be an important adjunct for DR diagnosis and management. In our study, a combination of all OCTA metrics on both 6 mm × 6 mm and 12 mm × 12 mm angiograms had the best diagnostic accuracy for DR and its severity. Further longitudinal studies are needed to assess NPA as a biomarker for progression or regression of DR severity.
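The receiver operating characteristic analysis behind the reported per-metric AUC values can be illustrated with a toy computation; the NPA values and labels below are synthetic and do not reflect the study's data:

```python
# Illustrative sketch: AUC for a single continuous OCTA metric (here a
# hypothetical non-perfusion area, in mm^2) as a discriminator between
# DR eyes (label 1) and control eyes (label 0). Synthetic numbers only.
from sklearn.metrics import roc_auc_score

labels  = [0, 0, 0, 0, 1, 1, 1, 1]
npa_mm2 = [0.2, 0.3, 0.5, 0.9, 0.8, 1.4, 2.1, 3.0]

auc = roc_auc_score(labels, npa_mm2)
print(f"AUC = {auc:.4f}")
```

Combining several metrics, as the study does, would typically mean fitting a regression model on all of them and scoring its predicted probabilities with the same function.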

19.
Med Image Anal ; 63: 101715, 2020 07.
Article in English | MEDLINE | ID: mdl-32434128

ABSTRACT

Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decisions by providing a medically interpretable explanation and an estimate of how uncertain each prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (κ) between 0.71 and 0.84 was achieved on five different datasets. We show that high κ values occur for images with low prediction uncertainty, indicating that this uncertainty is a valid measure of prediction quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.
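The quadratic-weighted Cohen's kappa used to score ordinal DR grades can be computed with standard tooling; the grade lists below are made up for illustration. Because the weighting grows quadratically with the distance between grades, confusing grade 0 with grade 4 costs far more than confusing adjacent grades:

```python
# Illustrative sketch: quadratic-weighted vs. unweighted Cohen's kappa on
# hypothetical 0-4 DR grades. All simulated errors here are between adjacent
# grades, which the quadratic weighting penalizes only lightly.
from sklearn.metrics import cohen_kappa_score

true_grades = [0, 0, 1, 2, 3, 4, 2, 1, 0, 3]
pred_grades = [0, 1, 1, 2, 4, 4, 2, 0, 0, 3]

qwk = cohen_kappa_score(true_grades, pred_grades, weights="quadratic")
unweighted = cohen_kappa_score(true_grades, pred_grades)
print(f"quadratic-weighted kappa = {qwk:.3f}, unweighted kappa = {unweighted:.3f}")
```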


Subject(s)
Deep Learning, Diabetes Mellitus, Diabetic Retinopathy, Diabetic Retinopathy/diagnostic imaging, Diagnosis, Computer-Assisted, Fundus Oculi, Humans, Uncertainty
20.
Transl Vis Sci Technol ; 9(2): 34, 2020 06.
Article in English | MEDLINE | ID: mdl-32832207

ABSTRACT

Purpose: To introduce a new technique that improves deep learning (DL) models designed for automatic grading of diabetic retinopathy (DR) from retinal fundus images by enhancing the consistency of their predictions. Methods: A convolutional neural network (CNN) was optimized in three different manners to predict DR grade from eye fundus images. The optimization criteria were (1) the standard cross-entropy (CE) loss; (2) CE supplemented with label smoothing (LS), a regularization approach widely employed in computer vision tasks; and (3) our proposed non-uniform label smoothing (N-ULS), a modification of LS that models the underlying structure of expert annotations. Results: Performance was measured in terms of quadratic-weighted κ score (quad-κ) and average area under the receiver operating curve (AUROC), as well as with metrics suitable for analyzing diagnostic consistency, such as weighted precision, recall, and F1 score, or the Matthews correlation coefficient. While LS generally harmed the performance of the CNN, N-ULS statistically significantly improved performance with respect to CE in terms of quad-κ score (73.17 vs. 77.69, P < 0.025), without any performance decrease in average AUROC. N-ULS achieved this while simultaneously increasing performance for all other analyzed metrics. Conclusions: For extending standard modeling approaches from DR detection to the more complex task of DR grading, it is essential to consider the underlying structure of expert annotations. The approach introduced in this article can be easily implemented in conjunction with deep neural networks to increase their consistency without sacrificing per-class performance. Translational Relevance: A straightforward modification of current standard CNN training practices can substantially improve consistency in DR grading, better modeling expert annotations and human variability.
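The non-uniform label smoothing idea, concentrating the smoothed probability mass on grades that are ordinally close to the true grade rather than spreading it uniformly, can be sketched as Gaussian-shaped soft targets. The width sigma below is an assumption for illustration, not the paper's setting:

```python
# Hedged sketch of non-uniform (Gaussian-shaped) soft targets for ordinal
# DR grades: neighboring grades receive more smoothing mass than distant ones.
import numpy as np

def gaussian_smoothed_target(true_grade, n_classes=5, sigma=0.5):
    grades = np.arange(n_classes)
    weights = np.exp(-0.5 * ((grades - true_grade) / sigma) ** 2)
    return weights / weights.sum()  # normalize to a probability distribution

# Soft target for true grade 2: peaked at 2, symmetric mass on grades 1 and 3
target = gaussian_smoothed_target(2)
print(np.round(target, 3))
```

Such targets would replace the one-hot labels in a standard cross-entropy loss, whereas uniform label smoothing would instead give every wrong grade the same small probability regardless of ordinal distance.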


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Humans, Neural Networks, Computer