Results 1 - 4 of 4

1.
Int J Ophthalmol; 17(9): 1581-1591, 2024.
Article in English | MEDLINE | ID: mdl-39296560

ABSTRACT

AIM: To develop a deep learning-based model for automatic retinal vascular segmentation, to analyze and compare vascular parameters under different glucose metabolic statuses (normal, prediabetes, diabetes), and to assess the potential of artificial intelligence (AI) in image segmentation and retinal vascular parameters for predicting prediabetes and diabetes. METHODS: Retinal fundus photographs from 200 normal individuals, 200 prediabetic patients, and 200 diabetic patients (600 eyes in total) were used. The U-Net network served as the foundational architecture for retinal artery-vein segmentation. An automatic segmentation and evaluation system covering 26 retinal vascular parameters was trained. RESULTS: Significant differences were found across the normal, prediabetes, and diabetes groups in 10 retinal vascular parameters: artery diameter (P=0.008), fractal dimension (P=0.000), vein curvature (P=0.003), C-zone artery branching vessel count (P=0.049), C-zone vein branching vessel count (P=0.041), artery branching angle (P=0.005), vein branching angle (P=0.001), artery angle asymmetry degree (P=0.003), vessel length density (P=0.000), and vessel area density (P=0.000). CONCLUSION: The deep learning-based model enables identification and quantification of retinal vascular parameters and reveals significant between-group differences. These parameters show potential as biomarkers for prediabetes and diabetes.
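
To illustrate how one of the 26 vascular parameters can be quantified from a segmented vessel map, the following Python sketch estimates the fractal dimension of a binary vessel mask by box counting. It is a minimal example under assumed inputs (a 2D NumPy array of 0s and 1s produced by a segmentation model), not the authors' implementation.

```python
import numpy as np

def box_count_fractal_dimension(mask: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary vessel mask by box counting.

    `mask` is assumed to be a 2D array with non-zero pixels marking vessels,
    e.g. an artery or vein map produced by a U-Net-style segmenter.
    """
    mask = np.asarray(mask) > 0
    # Pad to a square power-of-two grid so boxes tile the image evenly.
    size = int(2 ** np.ceil(np.log2(max(mask.shape))))
    padded = np.zeros((size, size), dtype=bool)
    padded[:mask.shape[0], :mask.shape[1]] = mask

    box_sizes = 2 ** np.arange(1, int(np.log2(size)))  # 2, 4, 8, ...
    counts = []
    for b in box_sizes:
        # Count boxes of side b that contain at least one vessel pixel.
        view = padded.reshape(size // b, b, size // b, b)
        occupied = view.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))

    # Slope of log(count) vs log(1/box size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / box_sizes), np.log(counts), 1)
    return float(slope)

# Example: a synthetic diagonal "vessel" has dimension close to 1.
demo = np.eye(512, dtype=np.uint8)
print(round(box_count_fractal_dimension(demo), 2))
```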

2.
Int J Ophthalmol; 15(3): 495-501, 2022.
Article in English | MEDLINE | ID: mdl-35310049

ABSTRACT

AIM: To explore a more accurate quantitative diagnostic method for diabetic macular edema (DME) by displaying detailed 3D morphometry beyond the gold-standard quantification indicator, central retinal thickness (CRT), and to apply it in the follow-up of DME patients. METHODS: Optical coherence tomography (OCT) scans of 229 eyes from 160 patients were collected. Cystoid macular edema (CME), subretinal fluid (SRF), and the fovea were manually annotated as ground truth. Deep convolutional neural networks (DCNNs), including U-Net, sASPP, HRNetV2-W48, and HRNetV2-W48+Object-Contextual Representation (OCR), were constructed for fluid (CME+SRF) segmentation and fovea detection, based on which thickness maps of CME, SRF, and the retina were generated and divided according to the Early Treatment Diabetic Retinopathy Study (ETDRS) grid. RESULTS: In fluid segmentation, with the best-performing DCNN and loss function, the Dice similarity coefficients (DSC) reached 0.78 (CME), 0.82 (SRF), and 0.95 (retina). In fovea detection, the average deviation between the predicted fovea and the ground truth was 145.7±117.8 µm. The generated macular edema thickness maps can identify center-involved DME through intuitive morphometry and fluid volume that is missed by the traditional definition of CRT>250 µm. Thickness maps can also reveal fluid above or below the fovea center that is missed or underestimated by a single OCT B-scan. CONCLUSION: Compared with the traditional one-dimensional indicator, CRT, 3D macular edema thickness maps display more intuitive morphometry and more detailed statistics of DME, supporting more accurate diagnosis and follow-up of DME patients.
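
The segmentation quality reported above is summarized with the Dice similarity coefficient. The sketch below shows one common way to compute DSC between a predicted fluid mask and the annotated ground truth; the mask shapes and names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    `pred` and `truth` are assumed to be same-shaped arrays where non-zero
    pixels mark fluid (CME or SRF) on an OCT B-scan.
    """
    pred = np.asarray(pred) > 0
    truth = np.asarray(truth) > 0
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

# Toy example: two overlapping square "fluid" regions.
a = np.zeros((64, 64)); a[10:30, 10:30] = 1
b = np.zeros((64, 64)); b[15:35, 15:35] = 1
print(round(dice_similarity(a, b), 2))  # ~0.56 for this synthetic overlap
```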

3.
Front Med (Lausanne); 9: 839088, 2022.
Article in English | MEDLINE | ID: mdl-35652075

ABSTRACT

Purpose: To evaluate the performance of a deep learning (DL)-based artificial intelligence (AI) hierarchical diagnosis software, EyeWisdom V1, for diabetic retinopathy (DR). Materials and Methods: This prospective study was a multicenter, double-blind, self-controlled clinical trial. Non-dilated posterior-pole fundus images were evaluated by ophthalmologists and by EyeWisdom V1, with manual grading taken as the gold standard. Primary evaluation indices (sensitivity and specificity) and secondary indices such as positive predictive value (PPV) and negative predictive value (NPV) were calculated to evaluate the performance of EyeWisdom V1. Results: A total of 1,089 fundus images from 630 patients were included, with a mean age of 56.52±11.13 years. For any DR, the sensitivity, specificity, PPV, and NPV were 98.23% (95% CI 96.93-99.08%), 74.45% (95% CI 69.95-78.60%), 86.38% (95% CI 83.76-88.72%), and 96.23% (95% CI 93.50-98.04%), respectively; for sight-threatening DR (STDR, severe non-proliferative DR or worse), these indicators were 80.47% (95% CI 75.07-85.14%), 97.96% (95% CI 96.75-98.81%), 92.38% (95% CI 88.07-95.50%), and 94.23% (95% CI 92.46-95.68%); for referral DR (moderate non-proliferative DR or worse), the sensitivity and specificity were 92.96% (95% CI 90.66-94.84%) and 93.32% (95% CI 90.65-95.42%), with a PPV of 94.93% (95% CI 92.89-96.53%) and an NPV of 90.78% (95% CI 87.81-93.22%). For referral DR, the kappa score of EyeWisdom V1 was 0.860 (0.827-0.890) and the AUC was 0.958. Conclusion: EyeWisdom V1 can provide reliable DR grading and referral recommendations based on fundus images of diabetic patients.
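
For reference, the screening metrics reported above follow directly from the 2x2 confusion counts of the AI output against the manual gold standard. The short sketch below computes them for a binary referral decision; the variable names and example labels are illustrative, not data from the trial.

```python
import numpy as np

def screening_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Sensitivity, specificity, PPV, and NPV for a binary screening decision.

    `y_true` holds the manual-grading gold standard (1 = referral DR),
    `y_pred` holds the AI output for the same images.
    Assumes each denominator below is non-zero.
    """
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.sum(y_pred & y_true)    # true positives
    tn = np.sum(~y_pred & ~y_true)  # true negatives
    fp = np.sum(y_pred & ~y_true)   # false positives
    fn = np.sum(~y_pred & y_true)   # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy example with made-up labels (not trial data).
truth = np.array([1, 1, 1, 0, 0, 0, 0, 1])
ai_out = np.array([1, 1, 0, 0, 0, 1, 0, 1])
print(screening_metrics(truth, ai_out))
```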

4.
Int J Ophthalmol; 14(12): 1895-1902, 2021.
Article in English | MEDLINE | ID: mdl-34926205

ABSTRACT

AIM: To develop artificial intelligence (AI) methods based on deep learning (DL) that assist with retinal vein occlusion (RVO) screening, alleviating the workload of ophthalmologists and allowing RVO to be detected and treated as early as possible. METHODS: A total of 8600 color fundus photographs (CFPs) were included for training, validation, and testing of the disease recognition and lesion segmentation models. Four disease recognition models and four lesion segmentation models were established and compared, and the best of each was selected. Additionally, 224 CFPs from 130 patients were included as an external test set to evaluate the two selected models. RESULTS: Using the Inception-v3 model for disease recognition, the mean sensitivity, specificity, and F1 score for the three disease types and normal CFPs were 0.93, 0.99, and 0.95, respectively, and the mean area under the curve (AUC) was 0.99. Using the DeepLab-v3 model for lesion segmentation, the mean sensitivity, specificity, and F1 score for the four lesion types (abnormally dilated and tortuous blood vessels, cotton-wool spots, flame-shaped hemorrhages, and hard exudates) were 0.74, 0.97, and 0.83, respectively. CONCLUSION: DL models perform well when recognizing RVO and identifying its lesions on CFPs. Given the growing number of RVO patients and the increasing demand for trained ophthalmologists, DL models will be helpful for diagnosing RVO early and reducing vision impairment.
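
As a rough sketch of the disease-recognition setup described above, the following PyTorch snippet adapts an ImageNet-pretrained Inception-v3 to a four-class fundus classifier (three RVO-related disease types plus normal). The class count, layer choices, and weights are assumptions based on the abstract, not the authors' released code, and the snippet assumes a recent torchvision.

```python
import torch
from torch import nn
from torchvision.models import inception_v3, Inception_V3_Weights

NUM_CLASSES = 4  # assumption: three disease types + normal CFPs

# Load ImageNet-pretrained Inception-v3; it expects 299x299 RGB inputs.
model = inception_v3(weights=Inception_V3_Weights.IMAGENET1K_V1)

# Replace both classifier heads (main and auxiliary) for 4-way fundus grading.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

# Forward pass on a dummy batch; in eval mode only the main logits are returned.
model.eval()
with torch.no_grad():
    dummy = torch.randn(2, 3, 299, 299)  # stand-in for preprocessed CFPs
    logits = model(dummy)
print(logits.shape)  # torch.Size([2, 4])
```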
