Results 1 - 20 of 29
1.
Biomed Eng Online ; 22(1): 16, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36810105

ABSTRACT

BACKGROUND: Fundus fluorescein angiography (FA) can be used to diagnose fundus diseases by observing dynamic fluorescein changes that reflect vascular circulation in the fundus. Because FA may pose risks to patients, generative adversarial networks (GANs) have been used to convert retinal fundus images into fluorescein angiography images. However, available methods generate FA images of only a single phase, and the resolution of the generated images is too low for accurate diagnosis of fundus diseases. METHODS: We propose a network that generates multi-frame, high-resolution FA images. It consists of a low-resolution GAN (LrGAN) and a high-resolution GAN (HrGAN): LrGAN generates low-resolution, full-size FA images carrying global intensity information, while HrGAN takes the images generated by LrGAN as input and produces multi-frame high-resolution FA patches. Finally, the patches are merged into full-size FA images. RESULTS: Our approach combines supervised and unsupervised learning and achieves better quantitative and qualitative results than either method alone. Structural similarity (SSIM), normalized cross-correlation (NCC), and peak signal-to-noise ratio (PSNR) were used as quantitative metrics. The experimental results show that our method achieves a structural similarity of 0.7126, a normalized cross-correlation of 0.6799, and a peak signal-to-noise ratio of 15.77. Ablation experiments also demonstrate that the shared encoder and residual channel attention module in HrGAN are helpful for generating high-resolution images. CONCLUSIONS: Overall, our method better reproduces retinal vessel details and leakage structures across multiple critical phases, showing promising clinical diagnostic value.
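For readers who want to reproduce the three quantitative metrics reported in this abstract, the following is a minimal sketch, assuming scikit-image and NumPy are available; the `real` and `fake` arrays are illustrative stand-ins for a ground-truth and a generated FA frame, not data from the paper.

```python
# Sketch: computing the three image-quality metrics reported above
# (SSIM, NCC, PSNR) for a generated frame against its ground truth.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean NCC between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
real = rng.random((256, 256))                                  # stand-in "real" FA frame
fake = np.clip(real + 0.05 * rng.standard_normal(real.shape), 0, 1)  # stand-in "generated" frame

print("SSIM:", structural_similarity(real, fake, data_range=1.0))
print("NCC: ", normalized_cross_correlation(real, fake))
print("PSNR:", peak_signal_noise_ratio(real, fake, data_range=1.0))
```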


Subject(s)
Attention; Image Processing, Computer-Assisted; Humans; Fluorescein Angiography; Image Processing, Computer-Assisted/methods; Fundus Oculi; Signal-To-Noise Ratio
2.
Sensors (Basel) ; 24(1)2023 Dec 31.
Article in English | MEDLINE | ID: mdl-38203101

ABSTRACT

Glaucoma, a leading cause of blindness, damages the optic nerve, and the absence of initial symptoms makes early diagnosis challenging. Fundus images taken with a non-mydriatic retinograph help diagnose glaucoma by revealing structural changes, including those of the optic disc and cup. This research thoroughly analyzes saliency maps as a means of interpreting the decisions of convolutional neural networks diagnosing glaucoma from fundus images. These maps highlight the image regions that most influence the network's decisions. Various network architectures were trained and tested on 739 optic nerve head images using nine saliency methods, and several other popular datasets were used for further validation. The results reveal disparities among saliency maps, with some consensus between the folds corresponding to the same architecture. Concerning the significance of optic disc sectors, there is generally a lack of agreement with standard medical criteria. The background, nasal, and temporal sectors emerge as particularly influential for network decisions, with a likelihood of being the most relevant ranging from 14.55% to 28.16% on average across all evaluated datasets. We conclude that saliency maps are usually difficult to interpret, and even the areas indicated as most relevant can be very unintuitive. Their usefulness as an explanatory tool may therefore be compromised, at least in problems such as the one addressed in this study, where the features defining the model prediction are generally not consistently reflected in the relevant regions of the saliency maps and cannot always be related to standard medical criteria.
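As an illustration of the kind of method evaluated here, below is a minimal vanilla-gradient saliency map, one of the simpler saliency techniques; the ResNet-18 model and the random input are stand-ins, not the networks trained in the study. Assumes PyTorch and torchvision.

```python
# Sketch: vanilla-gradient saliency — the gradient of the top-class
# score with respect to the input pixels.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()            # stand-in classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
logits[0, logits.argmax()].backward()            # backprop the top class score

# Saliency = max absolute gradient over the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)                            # torch.Size([224, 224])
```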


Subject(s)
Glaucoma; Optic Disk; Humans; Fundus Oculi; Glaucoma/diagnostic imaging; Optic Disk/diagnostic imaging; Diagnostic Imaging; Neural Networks, Computer
3.
Sensors (Basel) ; 23(10)2023 May 14.
Article in English | MEDLINE | ID: mdl-37430650

ABSTRACT

This study analyzes the asymmetry between the two eyes of the same patient for the early diagnosis of glaucoma. Two imaging modalities, retinal fundus images and optical coherence tomography (OCT), were compared for their glaucoma-detection capabilities. From retinal fundus images, the inter-eye differences in cup/disc ratio and optic rim width were extracted; analogously, the thickness of the retinal nerve fiber layer was measured in spectral-domain OCT scans. These measurements were used as inter-eye asymmetry features in decision-tree and support-vector-machine models for classifying healthy and glaucoma patients. The main contribution of this work is the use of classification models with both imaging modalities, jointly exploiting the strengths of each for the same diagnostic purpose based on the asymmetry between the patient's eyes. The results show that the optimized classification models perform better with OCT asymmetry features (sensitivity 80.9%, specificity 88.2%, precision 66.7%, accuracy 86.5%) than with those extracted from retinographies, although a linear relationship was found between certain asymmetry features of the two modalities. The performance of the asymmetry-based models thus demonstrates their ability to differentiate healthy from glaucoma patients. Models trained on fundus characteristics are a useful option for glaucoma screening in the healthy population, although they perform below those trained on the thickness of the peripapillary retinal nerve fiber layer. In both imaging modalities, the asymmetry of morphological characteristics can serve as a glaucoma indicator, as detailed in this work.
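A minimal sketch of the modeling approach described above (an SVM on inter-eye asymmetry features), assuming scikit-learn; the feature matrix, label rule, and column meanings are synthetic placeholders, not the study's data.

```python
# Sketch: healthy vs. glaucoma classification from inter-eye asymmetry
# features with an SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Illustrative columns: |delta cup/disc ratio|, |delta rim width|, |delta RNFL thickness|.
X = rng.random((200, 3))
y = (X @ np.array([2.0, 1.0, 3.0]) + 0.3 * rng.standard_normal(200) > 3.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```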


Subject(s)
Glaucoma; Humans; Glaucoma/diagnostic imaging; Tomography, Optical Coherence; Retina/diagnostic imaging; Early Diagnosis; Fundus Oculi
4.
Sensors (Basel) ; 21(20)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34696149

ABSTRACT

The stage and duration of hypertension are linked to the occurrence of hypertensive retinopathy (HR). The few computerized systems developed to date recognize HR using only two grades, and defining specialized features for five grades of HR is difficult. Deep features have been used in the past, but classification accuracy has been unsatisfactory. In this research, a new hypertensive retinopathy (HYPER-RETINO) framework is developed to grade HR into five grades. The HYPER-RETINO system builds on pre-trained detection of HR-related lesions and is implemented in several steps: preprocessing, detection of HR-related lesions by semantic and instance-based segmentation, and a DenseNet architecture that classifies the stage of HR. Overall, the system identifies the local regions within input retinal fundus images that indicate each of the five grades. On average, a 10-fold cross-validation test on 1400 HR images obtained a sensitivity (SE) of 90.5%, specificity (SP) of 91.5%, accuracy (ACC) of 92.6%, precision (PR) of 91.7%, Matthews correlation coefficient (MCC) of 61%, F1-score of 92%, and area under the curve (AUC) of 0.915. These experimental findings verify the applicability of the HYPER-RETINO method for reliably diagnosing the stages of HR.
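The metrics quoted above all derive from a binary confusion matrix; a small sketch of those formulas follows, with placeholder counts rather than the paper's results.

```python
# Sketch: sensitivity, specificity, accuracy, precision, F1, and MCC
# from a binary confusion matrix.
import math

tp, fp, fn, tn = 90, 8, 10, 92  # placeholder counts

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
accuracy    = (tp + tn) / (tp + fp + fn + tn)
f1          = 2 * precision * sensitivity / (precision + sensitivity)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"SE={sensitivity:.3f} SP={specificity:.3f} ACC={accuracy:.3f} "
      f"PR={precision:.3f} F1={f1:.3f} MCC={mcc:.3f}")
```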


Asunto(s)
Aprendizaje Profundo , Retinopatía Diabética , Retinopatía Hipertensiva , Fondo de Ojo , Humanos , Retinopatía Hipertensiva/diagnóstico , Semántica
5.
Adv Exp Med Biol ; 1213: 121-132, 2020.
Article in English | MEDLINE | ID: mdl-32030667

ABSTRACT

Early detection of glaucoma is important to slow the progression of the disease and prevent total vision loss. Retinal fundus photography is frequently obtained for the diagnosis and documentation of various eye diseases and is a suitable screening exam owing to its simplicity and low cost. However, the number of ophthalmologists specialized in glaucoma diagnosis is limited. We have been studying automated schemes for the detection of nerve fiber layer defects and the analysis of optic disc deformation, two major signs of glaucoma, to assist ophthalmologists in making accurate and efficient diagnoses. In this chapter, our recent progress in computerized methods is discussed.


Subject(s)
Deep Learning; Fundus Oculi; Glaucoma/diagnostic imaging; Glaucoma/pathology; Humans; Nerve Fibers/pathology; Optic Disk/diagnostic imaging; Optic Disk/pathology
6.
Physiol Meas ; 45(5)2024 May 03.
Article in English | MEDLINE | ID: mdl-38599224

ABSTRACT

Objective. This study aims to automate the segmentation of retinal arterioles and venules (A/V) from digital fundus images (DFI), as changes in the spatial distribution of retinal microvasculature are indicative of cardiovascular diseases, positioning the eyes as windows to cardiovascular health. Approach. We utilized active learning to create a new DFI dataset with 240 crowd-sourced manual A/V segmentations performed by 15 medical students and reviewed by an ophthalmologist. We then developed LUNet, a novel deep learning architecture optimized for high-resolution A/V segmentation. The LUNet model features a double dilated convolutional block to widen the receptive field and reduce parameter count, alongside a high-resolution tail to refine segmentation details. A custom loss function was designed to prioritize the continuity of blood vessel segmentation. Main Results. LUNet significantly outperformed three benchmark A/V segmentation algorithms both on a local test set and on four external test sets that simulated variations in ethnicity, comorbidities, and annotators. Significance. The release of the new datasets and the LUNet model (www.aimlab-technion.com/lirot-ai) provides a valuable resource for the advancement of retinal microvasculature analysis. The improvements in A/V segmentation accuracy highlight LUNet's potential as a robust tool for diagnosing and understanding cardiovascular diseases through retinal imaging.
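As a rough illustration of the double dilated convolutional block described for LUNet, here is a sketch in PyTorch; the exact LUNet design is not reproduced here, so the branch layout, dilation rates, and activation are assumptions.

```python
# Sketch: two parallel 3x3 convolutions with different dilation rates,
# widening the receptive field at a modest parameter cost while keeping
# the spatial size unchanged (padding == dilation for kernel 3).
import torch
import torch.nn as nn

class DoubleDilatedBlock(nn.Module):
    def __init__(self, channels: int, d1: int = 1, d2: int = 3):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, 3, padding=d1, dilation=d1)
        self.branch2 = nn.Conv2d(channels, channels, 3, padding=d2, dilation=d2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the two receptive-field scales.
        return self.act(self.branch1(x) + self.branch2(x))

x = torch.rand(1, 16, 128, 128)
print(DoubleDilatedBlock(16)(x).shape)  # torch.Size([1, 16, 128, 128])
```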


Subject(s)
Deep Learning; Fundus Oculi; Image Processing, Computer-Assisted; Humans; Venules/diagnostic imaging; Venules/anatomy & histology; Image Processing, Computer-Assisted/methods; Arterioles/diagnostic imaging; Arterioles/anatomy & histology; Retinal Vessels/diagnostic imaging
7.
Front Endocrinol (Lausanne) ; 15: 1364519, 2024.
Article in English | MEDLINE | ID: mdl-38549767

ABSTRACT

Objective: To develop and validate an artificial intelligence diagnostic model based on fundus images for predicting carotid intima-media thickness (CIMT) in individuals with type 2 diabetes mellitus (T2DM). Methods: In total, 1236 patients with T2DM who had both retinal fundus images and CIMT ultrasound records within a single hospital stay were enrolled. Data were divided into normal and thickened groups and fed to eight deep learning models, all based on ResNet or ResNeXt convolutional neural networks. The models differed in their encoder and decoder modes: the standard mode, the parallel learning mode, and the Siamese mode. In addition to the six unimodal networks, two multimodal networks based on ResNeXt, under the parallel learning or Siamese mode, incorporated patient age. The performance of the eight models was compared via the confusion matrix, precision, recall, specificity, F1 value, and ROC curve, with recall as the main indicator. Grad-CAM was also used to visualize the decisions made by the best-performing network, the Siamese ResNeXt. Results: The comparison demonstrated the following: 1) ResNeXt showed a notable improvement over ResNet; 2) the networks that extracted features in parallel and independently exhibited slight performance enhancements over the traditional networks, and the Siamese networks in particular yielded significant improvements; 3) classification performance declined when the age factor was embedded in the network. Taken together, the Siamese ResNeXt unimodal model performed best in efficacy and robustness, achieving a recall rate of 88.0% and an AUC value of 90.88% in the validation subset. Additionally, heatmaps calculated by the Grad-CAM algorithm showed concentrated, orderly mappings around the optic disc vascular area in normal CIMT groups and dispersed, irregular patterns in thickened CIMT groups. Conclusion: We provide a Siamese ResNeXt neural network for predicting the carotid intima-media thickness of patients with T2DM from fundus images and confirm the correlation between fundus microvascular lesions and CIMT.
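A minimal Grad-CAM sketch on a plain (non-Siamese) ResNeXt, the visualization technique named above; the model, target-layer choice, and input are illustrative stand-ins, not the authors' trained network.

```python
# Sketch: Grad-CAM — channel-wise weights from averaged gradients,
# applied to the activations of the last convolutional stage.
import torch
import torch.nn.functional as F
from torchvision.models import resnext50_32x4d

model = resnext50_32x4d(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.rand(1, 3, 224, 224)
logits = model(x)
logits[0, logits.argmax()].backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)        # per-channel weights
cam = F.relu((w * feats["a"]).sum(dim=1))            # weighted activation sum
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
print(cam.shape)                                     # torch.Size([1, 1, 224, 224])
```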


Asunto(s)
Inteligencia Artificial , Diabetes Mellitus Tipo 2 , Humanos , Diabetes Mellitus Tipo 2/complicaciones , Diabetes Mellitus Tipo 2/diagnóstico por imagen , Grosor Intima-Media Carotídeo , Redes Neurales de la Computación , Algoritmos
8.
PeerJ Comput Sci ; 10: e2135, 2024.
Article in English | MEDLINE | ID: mdl-39314692

ABSTRACT

Background: Early diagnosis and treatment of diabetic eye disease (DED) improve prognosis and lessen the possibility of permanent vision loss. Screening of retinal fundus images is widely employed for diagnosing patients with DED and other eye problems, but detecting these conditions manually requires considerable time and effort. Methods: Deep learning approaches have attained superior performance for the binary classification of healthy versus pathological retinal fundus images; multi-class retinal eye disease classification, in contrast, remains a difficult task. Therefore, a two-phase transfer learning approach is developed in this research for automated classification and segmentation of multi-class DED pathologies. Results: In the first phase, a modified ResNet-50 model pre-trained on the ImageNet dataset was transferred and fine-tuned to classify normal images, diabetic macular edema (DME), diabetic retinopathy, glaucoma, and cataract. In the second phase, the defective regions of the various eye diseases are segmented using a transfer learning-based DenseUNet model. The suggested model is assessed on retinal fundus images from publicly accessible datasets. Our proposed model for multi-class classification achieves a maximum specificity of 99.73%, a sensitivity of 99.54%, and an accuracy of 99.67%.
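A sketch of the first phase described above, transferring an ImageNet-pretrained ResNet-50 by swapping its classification head; the class count and the freeze-the-backbone policy are illustrative assumptions, and a recent torchvision is assumed.

```python
# Sketch: transfer learning — replace the ImageNet head of ResNet-50
# with a new multi-class head and freeze the pretrained backbone.
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_CLASSES = 5  # e.g. normal, DME, DR, glaucoma, cataract (assumed)

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head
```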

9.
Heliyon ; 10(18): e36996, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39309959

ABSTRACT

Early diagnosis and continuous monitoring of patients with eye diseases are critical in computer-aided detection (CAD). Semantic segmentation, a key component of computer vision, enables pixel-level classification and provides detailed information about objects within images. In this study, we present three U-Net models designed for multi-class semantic segmentation, leveraging the U-Net architecture with transfer learning. To generate ground truth for the HRF dataset, we combine two U-Net models, MSU-Net and BU-Net, to predict probability maps for the optic disc and cup regions. Binary masks are then derived from these probability maps to extract the optic disc and cup from retinal images. The dataset used in this study also includes pre-existing blood vessel annotations and peripapillary atrophy zones (alpha and beta) manually annotated by expert ophthalmologists. The effectiveness of the proposed approach is validated by training nine pre-trained models on the HRF dataset of 45 retinal images, successfully segmenting the optic disc, cup, blood vessels, and peripapillary atrophy zones (alpha and beta). The results demonstrate 87.7% pixel accuracy, 87% intersection over union (IoU), 86.9% F1-score, 85% mean IoU (mIoU), and 15% model loss, contributing significantly to the early diagnosis and monitoring of glaucoma and optic nerve disorders.
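The probability-map-to-binary-mask step described above can be illustrated as follows; the maps, the 0.5 threshold, and the cup-inside-disc constraint are assumptions for the sketch, not the authors' exact procedure.

```python
# Sketch: thresholding per-class probability maps into binary masks
# for the optic disc and cup.
import numpy as np

rng = np.random.default_rng(1)
prob_disc = rng.random((256, 256))   # stand-in probability maps
prob_cup = rng.random((256, 256))

disc_mask = (prob_disc >= 0.5).astype(np.uint8)
# Anatomically the cup lies within the disc, so constrain it (assumption).
cup_mask = ((prob_cup >= 0.5) & (disc_mask == 1)).astype(np.uint8)
print(disc_mask.sum(), cup_mask.sum())
```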

10.
J Clin Med ; 12(10)2023 May 21.
Article in English | MEDLINE | ID: mdl-37240693

ABSTRACT

This article provides a comprehensive and up-to-date overview of repositories that contain color fundus images. We analyzed them regarding availability and legality, presented the datasets' characteristics, and identified labeled and unlabeled image sets. The aim of this study was to compile all publicly available color fundus image datasets into a central catalog.

11.
Biomedicines ; 11(6)2023 May 28.
Article in English | MEDLINE | ID: mdl-37371661

ABSTRACT

Diabetic retinopathy (DR) is the foremost cause of blindness in people with diabetes worldwide, and early diagnosis is essential for effective treatment. Unfortunately, present DR screening requires the skill of ophthalmologists and is time-consuming. In this study, we present an automated system for DR severity classification employing a fine-tuned Compact Convolutional Transformer (CCT) model. We assembled five datasets into a more extensive collection containing 53,185 raw images, and applied various image pre-processing techniques and 12 types of augmentation procedures to improve image quality and create a massive dataset. We propose DR-CCTNet, a modification of the original CCT model that addresses training-time concerns and works with a large amount of data. Our model delivers excellent accuracy even on low-pixel images and retains strong performance with fewer images, indicating that it is robust. We compared its performance with transfer learning models such as VGG19, VGG16, MobileNetV2, and ResNet50. The test accuracies of VGG19, ResNet50, VGG16, and MobileNetV2 were, respectively, 72.88%, 76.67%, 73.22%, and 71.98%; our proposed DR-CCTNet model outperformed all of these with a 90.17% test accuracy. This approach provides a novel and efficient method for DR detection that may lower the burden on ophthalmologists and expedite treatment for patients.
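As an illustration of the augmentation stage mentioned above, here is a small torchvision pipeline; the paper applies 12 procedures, and these few are illustrative choices rather than the authors' exact list.

```python
# Sketch: a fundus-image augmentation pipeline of the general kind
# described above.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
# Usage: augmented = augment(pil_image) for each raw fundus image.
```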

12.
PNAS Nexus ; 2(9): pgad290, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37746328

ABSTRACT

We present a structured approach that combines explainability of artificial intelligence (AI) with the scientific method for scientific discovery. We demonstrate its utility in a proof-of-concept study in which we uncover biomarkers from a convolutional neural network (CNN) model trained to classify patient sex in retinal images, a trait that diagnosticians do not currently recognize in retinal images, yet one that CNNs classify successfully. Our methodology consists of four phases. In Phase 1, CNN development, we train a visual geometry group (VGG) model to recognize patient sex in retinal images. In Phase 2, Inspiration, we review visualizations obtained from post hoc interpretability tools, make observations, and articulate exploratory hypotheses; here, we listed 14 hypotheses about retinal sex differences. In Phase 3, Exploration, we test all exploratory hypotheses on an independent dataset; nine of the 14 revealed significant differences. In Phase 4, Verification, we re-tested the nine flagged hypotheses on a new dataset. Five were verified, revealing (i) significantly greater length, (ii) more nodes, and (iii) more branches of retinal vasculature, (iv) greater retinal area covered by the vessels in the superior temporal quadrant, and (v) a darker peripapillary region in male eyes. Finally, we trained a group of ophthalmologists (N=26) to recognize the novel retinal features for sex classification. While their pre-training performance did not differ from chance level or from that of a nonexpert group (N=31), after training their performance increased significantly (p<0.001, d=2.63). These findings showcase the potential for retinal biomarker discovery through CNN applications, with the added utility of empowering medical practitioners with new diagnostic capabilities to enhance their clinical toolkit.
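The hypothesis tests in Phases 3 and 4 are two-group comparisons; a minimal sketch of one such test (Welch's t-test plus Cohen's d) follows, on synthetic placeholder data rather than the study's measurements. Assumes SciPy.

```python
# Sketch: comparing a retinal measurement between male and female eyes
# with a Welch t-test and an effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
male = rng.normal(10.5, 1.0, 120)     # e.g. total vessel length (arbitrary units)
female = rng.normal(10.0, 1.0, 120)

t, p = stats.ttest_ind(male, female, equal_var=False)
pooled_sd = np.sqrt((male.var(ddof=1) + female.var(ddof=1)) / 2)
cohens_d = (male.mean() - female.mean()) / pooled_sd
print(f"t={t:.2f}, p={p:.4f}, d={cohens_d:.2f}")
```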

13.
Diagnostics (Basel) ; 13(3)2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36766451

ABSTRACT

The number of people with diabetes worldwide has increased considerably in recent years, and the disease affects people of all ages. Long-standing diabetes can lead to diabetic retinopathy (DR), a condition that damages the eyes. Automatic early detection using new technologies can help avoid complications such as the loss of vision. With the development of artificial intelligence (AI) techniques, especially deep learning (DL), DL-based methods have become the preferred way to build DR detection systems. This study therefore surveys the existing literature on diabetic retinopathy diagnosis from fundus images using deep learning and briefly describes the DL techniques currently used by researchers in this field. It then lists some of the commonly used datasets, followed by a performance comparison of the reviewed methods with respect to metrics commonly used in computer vision tasks.

14.
Diagnostics (Basel) ; 13(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37189539

ABSTRACT

Hypertensive retinopathy (HR) is a serious eye disease in which the retinal arteries change, mainly as a result of high blood pressure. Cotton wool patches, retinal bleeding, and retinal artery constriction are characteristic lesions of HR. An ophthalmologist typically diagnoses such eye-related diseases by analyzing fundus images to identify the stages and symptoms of HR, and early detection of HR can significantly decrease the likelihood of vision loss. In the past, a few computer-aided diagnosis (CADx) systems were developed to automatically detect HR using machine learning (ML) and deep learning (DL) techniques. Compared with ML methods, the DL techniques these CADx systems use require hyperparameter tuning, domain expert knowledge, huge training datasets, and a high learning rate. Such systems are good at automating the extraction of complex features but suffer from class imbalance and overfitting. State-of-the-art efforts focus on performance enhancement while ignoring the small size of available HR datasets, the high level of computational complexity, and the lack of lightweight feature descriptors. In this study, we developed Mobile-HR, a lightweight HR diagnosis system based on a pretrained transfer learning (TL) MobileNet architecture into which dense blocks are integrated to optimize the network. A data augmentation technique was applied to increase the size of the training and test datasets. The experimental outcomes show that the suggested approach outperformed existing approaches in many cases, achieving an accuracy of 99% and an F1 score of 0.99 on different datasets; the results were verified by an expert ophthalmologist. These results indicate that the Mobile-HR CADx model produces positive outcomes and outperforms state-of-the-art HR systems in terms of accuracy.
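One plausible way to integrate a DenseNet-style block with a pretrained MobileNet backbone, in the spirit of the Mobile-HR design, is sketched below; the authors' exact integration is not specified here, so the block design, channel counts, and class count are assumptions.

```python
# Sketch: a small dense block (each conv sees the concatenation of all
# earlier feature maps) appended after a MobileNetV2 feature extractor.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class DenseBlock(nn.Module):
    def __init__(self, in_ch: int, growth: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(in_ch + growth, growth, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y1 = self.act(self.conv1(x))
        y2 = self.act(self.conv2(torch.cat([x, y1], dim=1)))
        return torch.cat([x, y1, y2], dim=1)   # in_ch + 2*growth channels

backbone = mobilenet_v2(weights=None).features   # untrained stand-in (1280 ch out)
head = nn.Sequential(DenseBlock(1280), nn.AdaptiveAvgPool2d(1),
                     nn.Flatten(), nn.Linear(1280 + 64, 5))
x = torch.rand(1, 3, 224, 224)
print(head(backbone(x)).shape)                   # torch.Size([1, 5])
```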

15.
Comput Biol Med ; 151(Pt A): 106277, 2022 12.
Article in English | MEDLINE | ID: mdl-36370579

ABSTRACT

Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Compared with computer-aided diagnosis systems, manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients for the prevention and/or treatment of these diseases. This paper conducts an extensive review of state-of-the-art methods for the detection and segmentation of retinal image features. Notable existing techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for the important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.


Subject(s)
Diabetic Retinopathy; Macular Degeneration; Retinal Diseases; Humans; Fundus Oculi; Retina/diagnostic imaging; Diabetic Retinopathy/diagnostic imaging; Retinal Diseases/diagnostic imaging; Algorithms
16.
Phys Eng Sci Med ; 45(3): 847-858, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35737221

ABSTRACT

Fundus-image-based eye screening detects eye diseases by segmenting the optic disc (OD) and optic cup (OC), both of which are still challenging to segment accurately. This work proposes a three-layer graph-based deep architecture with an enhanced fusion method for OD and OC segmentation. A CNN encoder-decoder architecture, an extended graph network, and approximation via a fusion-based rule are explored for connecting local and global information, and a graph-based model is developed to combine local and overall knowledge. Feature masking is extended to regularize repetitive features, with fusion used to combine channels. The performance of the proposed network is evaluated through different metrics: dice similarity coefficient (DSC), intersection over union (IoU), accuracy, specificity, and sensitivity. The methodology was verified experimentally on four publicly available benchmark datasets: DRISHTI-GS and RIM-ONE for OD and OC segmentation, and DRIONS-DB and HRF for optimizing the model's performance on OD segmentation. The DSC reached 0.97 and 0.96 on DRISHTI-GS and RIM-ONE, respectively, with corresponding IoU measures of 0.96 and 0.93 for OD segmentation. For OC segmentation, DSC and IoU were 0.93 and 0.90 for DRISHTI-GS and 0.83 and 0.82 for RIM-ONE. The proposed technique improved on most existing methods in terms of DSC and IoU for OD and OC segmentation.
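The two overlap metrics reported above, DSC and IoU, can be computed as follows; the masks are synthetic placeholders.

```python
# Sketch: Dice similarity coefficient and intersection over union for
# binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), bool); gt[15:45, 15:45] = True
print(f"DSC={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}")
```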


Subject(s)
Glaucoma; Optic Disk; Diagnostic Imaging; Fundus Oculi; Glaucoma/diagnostic imaging; Humans; Optic Disk/diagnostic imaging; Retina
17.
J Diabetes ; 14(2): 111-120, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34889059

ABSTRACT

BACKGROUND: The aim of our research was to prospectively explore the clinical value of a deep learning algorithm (DLA) for detecting referable diabetic retinopathy (DR) in subgroups stratified by type of diabetes, blood pressure, sex, BMI, age, glycosylated hemoglobin (HbA1c), diabetes duration, urine albumin-to-creatinine ratio (UACR), and estimated glomerular filtration rate (eGFR) at a real-world diabetes center in China. METHODS: A total of 1147 diabetic patients from Shanghai General Hospital were recruited from October 2018 to August 2019. Retinal fundus images were graded by the DLA, and the detection of referable DR (moderate nonproliferative DR or worse) was compared with a reference standard generated by a certified retinal specialist with more than 12 years of experience. The performance of the DLA across the subgroups listed above was evaluated. RESULTS: For all 1674 gradable images, the area under the receiver operating curve, sensitivity, and specificity of the DLA for referable DR were 0.942 (95% CI, 0.920-0.964), 85.1% (95% CI, 83.4%-86.8%), and 95.6% (95% CI, 94.6%-96.6%), respectively. The DLA performed consistently across most subgroups and was superior in the subgroups of patients with type 1 diabetes, UACR ≥ 30 mg/g, and eGFR < 90 mL/min/1.73 m². CONCLUSIONS: This study showed that the DLA is a reliable alternative method for the detection of referable DR and performs particularly well in patients with type 1 diabetes and diabetic nephropathy, who are prone to DR.
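A sketch of the subgroup-stratified AUROC evaluation described above, assuming scikit-learn; the scores, labels, and subgroup flag are synthetic placeholders, not the study's data.

```python
# Sketch: overall and per-subgroup AUROC for a binary screening model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 1000)                        # referable DR or not
scores = np.clip(y * 0.3 + rng.random(1000), 0, 1)  # stand-in model output
subgroup = rng.integers(0, 2, 1000)                 # e.g. type 1 vs. type 2 flag

print("overall AUC:", roc_auc_score(y, scores))
for g in (0, 1):
    m = subgroup == g
    print(f"subgroup {g} AUC:", roc_auc_score(y[m], scores[m]))
```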


Asunto(s)
Aprendizaje Profundo , Diabetes Mellitus , Retinopatía Diabética , Algoritmos , China , Retinopatía Diabética/diagnóstico , Humanos , Tamizaje Masivo
18.
Phys Eng Sci Med ; 44(3): 639-653, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34033015

ABSTRACT

Eye care professionals generally use fundoscopy to confirm the occurrence of diabetic retinopathy (DR) in patients. Early DR detection and accurate DR grading are critical for the care and management of this disease. This work proposes an automated DR grading method in which features are extracted from fundus images and categorized by severity using deep learning and machine learning (ML) algorithms. A multipath convolutional neural network (M-CNN) extracts global and local features from the images, and an ML classifier then categorizes the input according to severity. The proposed model is evaluated across publicly available databases (IDRiD, Kaggle (for DR detection), and MESSIDOR) and different ML classifiers (support vector machine (SVM), Random Forest, and J48). The metrics selected for model evaluation are the false positive rate (FPR), specificity, precision, recall, F1-score, K-score, and accuracy. The experiments show that the best response is produced by the M-CNN network with the J48 classifier. The classifiers are also evaluated against pre-trained network features and existing DR grading methods. The average accuracy obtained by the proposed work is 99.62% for DR grading. The experiments and evaluation results show that the proposed method works well for accurate DR grading and early disease detection.
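The hybrid pattern above (deep features fed to a classical classifier) can be sketched as follows; a stand-in ResNet-18 backbone and a random forest replace the authors' M-CNN and J48 (a C4.5 decision tree from Weka), which are not public here, and the labels are placeholders.

```python
# Sketch: CNN feature extraction followed by a classical ML classifier.
import torch
from torchvision.models import resnet18
from sklearn.ensemble import RandomForestClassifier

backbone = resnet18(weights=None)
backbone.fc = torch.nn.Identity()        # expose 512-d features
backbone.eval()

with torch.no_grad():
    X = backbone(torch.rand(40, 3, 224, 224)).numpy()  # fake image batch
y = (X[:, 0] > X[:, 0].mean()).astype(int)             # placeholder labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print("train accuracy:", clf.score(X, y))
```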


Asunto(s)
Diabetes Mellitus , Retinopatía Diabética , Algoritmos , Retinopatía Diabética/diagnóstico , Fondo de Ojo , Humanos , Aprendizaje Automático , Redes Neurales de la Computación
19.
Transl Vis Sci Technol ; 9(6): 28, 2020 11.
Article in English | MEDLINE | ID: mdl-33184590

ABSTRACT

Purpose: To evaluate deep learning technologies for detecting high accumulation of coronary artery calcium (CAC) from retinal fundus images, as an inexpensive and radiation-free screening method. Methods: Individuals who underwent bilateral retinal fundus imaging and CAC score (CACS) evaluation from coronary computed tomography scans on the same day were identified. With this database, the performance of deep learning algorithms (Inception-v3) in distinguishing high CACS from a CACS of 0 was evaluated at various thresholds for high CACS. Vessel-inpainted and fovea-inpainted images were also used as input to investigate the areas of interest in determining CACS. Results: A total of 44,184 images from 20,130 individuals were included. A deep learning algorithm discriminating no CAC from CACS >100 achieved an area under the receiver operating curve (AUROC) of 82.3% (79.5%-85.0%) and 83.2% (80.2%-86.3%) using unilateral and bilateral fundus images, respectively, under a 5-fold cross-validation setting. AUROC increased as the criterion for high CACS was raised, plateauing at 100 and showing no significant improvement thereafter. AUROC decreased when the fovea was inpainted and decreased further when the vessels were inpainted, whereas it increased when bilateral images were used as input. Conclusions: Deep learning algorithms could distinguish the visual patterns of retinal fundus images in subjects with CACS > 100 from those with no CAC. Exploiting bilateral images improves discrimination performance, and ablation studies removing the retinal vasculature or fovea suggest that the recognizable patterns reside mainly in these areas. Translational Relevance: Retinal fundus images can be used by deep learning algorithms to predict high CACS.
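The region-ablation idea above can be illustrated by comparing model outputs before and after masking a region; mean-filling stands in for true inpainting here, and the untrained Inception-v3 and the "fovea" box are purely illustrative.

```python
# Sketch: region ablation — compare predictions on an image before and
# after a region of interest is masked out.
import torch
from torchvision.models import inception_v3

model = inception_v3(weights=None, init_weights=True).eval()
img = torch.rand(1, 3, 299, 299)

mask = torch.zeros(1, 1, 299, 299, dtype=torch.bool)
mask[..., 120:180, 120:180] = True              # pretend fovea region (assumed)
ablated = torch.where(mask, img.mean(), img)    # mean-fill approximates inpainting

with torch.no_grad():
    print((model(img) - model(ablated)).abs().max())  # output shift from ablation
```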


Subject(s)
Coronary Vessels; Deep Learning; Algorithms; Coronary Vessels/diagnostic imaging; Fundus Oculi; Humans; Tomography, X-Ray Computed
20.
Phys Eng Sci Med ; 43(3): 927-945, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32648111

ABSTRACT

Diabetic retinopathy (DR) is a complication of diabetes mellitus that damages the blood vessels in the retina. DR is considered a serious vision-threatening impediment that most diabetic subjects are at risk of developing, and effective automatic detection of DR is challenging. Feature extraction plays an important role in the effective classification of the disease. Here we focus on a feature extraction technique that combines two feature extractors, speeded-up robust features (SURF) and binary robust invariant scalable keypoints (BRISK), to extract the relevant features from retinal fundus images. Selecting the top-ranked features with the mRMR (maximum relevance-minimum redundancy) feature selection and ranking method enhances classification efficiency. The system is evaluated across various classifiers (support vector machine, AdaBoost, naive Bayes, Random Forest, and multi-layer perceptron (MLP)) given image features extracted from standard datasets (IDRiD, MESSIDOR, and DIARETDB0). The performance of the classifiers was analyzed by comparing their specificity, precision, recall, false positive rate, and accuracy. We found that the proposed feature extraction and selection technique, used together with the MLP, outperforms all the other classifiers for all datasets in binary and multiclass classification.
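A minimal sketch of keypoint-descriptor extraction with OpenCV for one of the two extractors named above; BRISK is shown because SURF requires the non-free opencv-contrib build, the mRMR selection step is omitted, and the input image is synthetic.

```python
# Sketch: BRISK keypoint detection and description on a grayscale image.
import cv2
import numpy as np

img = (np.random.default_rng(5).random((256, 256)) * 255).astype(np.uint8)

brisk = cv2.BRISK_create()
keypoints, descriptors = brisk.detectAndCompute(img, None)
# descriptors is None when no keypoints are found (possible on noise).
print(len(keypoints), None if descriptors is None else descriptors.shape)
```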


Subject(s)
Algorithms; Diabetic Retinopathy/classification; Diabetic Retinopathy/diagnosis; Automation; Bayes Theorem; Databases as Topic; Diabetic Retinopathy/diagnostic imaging; Fundus Oculi; Humans; Neural Networks, Computer; Reproducibility of Results; Support Vector Machine