Results 1 - 7 of 7
1.
Br J Ophthalmol ; 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37380352

ABSTRACT

PURPOSE: To determine the association of deprivation, measured with the Index of Multiple Deprivation (IMD) and its individual subdomains, with incident referable diabetic retinopathy/maculopathy (rDR).

METHODS: Anonymised demographic and screening data collected by the South-East London Diabetic Eye Screening Programme were extracted from September 2013 to December 2019. Multivariable Cox proportional hazards models were used to explore the association between the IMD, the IMD subdomains and rDR.

RESULTS: Of 118 508 people with diabetes who attended during the study period, 88 910 (75%) were eligible. The mean (±SD) age was 59.6 (±14.7) years; 53.94% were male, 52.58% identified as white, 94.28% had type 2 diabetes and the mean duration of diabetes was 5.81 (±6.9) years. rDR occurred in 7113 patients (8.00%). Known risk factors of younger age, Black ethnicity, type 2 diabetes, more severe baseline DR and longer diabetes duration conferred a higher risk of incident rDR. After adjusting for these known risk factors, the multivariable analysis did not show a significant association between the overall IMD (decile 1 vs decile 10) and rDR (HR: 1.08, 95% CI: 0.87 to 1.34, p=0.511). However, high deprivation (decile 1) in three IMD subdomains was associated with rDR, namely living environment (HR: 1.64, 95% CI: 1.12 to 2.41, p=0.011), education, skills and training (HR: 1.64, 95% CI: 1.12 to 2.41, p=0.011) and income (HR: 1.19, 95% CI: 1.02 to 1.38, p=0.024).

CONCLUSION: The IMD subdomains allow detection of associations between specific aspects of deprivation and rDR that may be missed when using the aggregate IMD. Generalisation of these findings outside the UK requires corroboration internationally.
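As a rough illustration of the modelling approach described above (not the authors' code), the Python sketch below fits a multivariable Cox proportional hazards model with lifelines; all column names (time_to_rdr_years, rdr_event, imd_decile, baseline_dr_grade, etc.) are hypothetical stand-ins for the screening dataset.

# Minimal sketch, assuming a tidy one-row-per-person dataframe; column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

def fit_rdr_cox_model(df: pd.DataFrame) -> CoxPHFitter:
    """df: follow-up time, event flag and covariates for each screened person."""
    categorical = ["imd_decile", "ethnicity", "diabetes_type", "baseline_dr_grade"]
    model_df = pd.get_dummies(
        df[["time_to_rdr_years", "rdr_event", "age", "diabetes_duration_years"] + categorical],
        columns=categorical, drop_first=True, dtype=float)  # one hazard ratio per level
    cph = CoxPHFitter()
    cph.fit(model_df, duration_col="time_to_rdr_years", event_col="rdr_event")
    return cph

# cph = fit_rdr_cox_model(screening_df)   # hypothetical dataframe
# cph.print_summary()                     # exp(coef) = hazard ratios with 95% CIs

The summary output would report adjusted hazard ratios such as the IMD decile 1 vs decile 10 contrast quoted in the abstract.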

2.
Sci Rep ; 13(1): 1392, 2023 01 25.
Article in English | MEDLINE | ID: mdl-36697482

ABSTRACT

Diabetic retinopathy (DR) at risk of vision loss (referable DR) needs to be identified by retinal screening and referred to an ophthalmologist. Existing automated algorithms have mostly been developed from images acquired with high-cost mydriatic retinal cameras and cannot be applied in the settings used in most low- and middle-income countries. In this prospective multicentre study, we developed a deep learning system (DLS) that detects referable DR from retinal images acquired with a handheld non-mydriatic fundus camera by non-technical field workers at 20 sites across India. Macula-centred and optic-disc-centred images from 16,247 eyes (9778 participants) were used to train and cross-validate the DLS and risk factor-based logistic regression models. The DLS achieved an AUROC of 0.99 (bootstrapped 1000 times, 95% CI 0.98-0.99) using two-field retinal images, with 93.86% (91.34-96.08) sensitivity and 96.00% (94.68-98.09) specificity at the Youden's index operating point. With single-field inputs, the DLS reached an AUROC of 0.98 (0.98-0.98) for the macula field and 0.96 (0.95-0.98) for the optic-disc field. Intergrader performance was 90.01% (88.95-91.01) sensitivity and 96.09% (95.72-96.42) specificity. The image-based DLS outperformed all risk factor-based models. The DLS demonstrated clinically acceptable performance for the identification of referable DR despite challenging image capture conditions.
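The operating point at Youden's index and the bootstrapped AUROC interval reported above can be reproduced with standard tools; this is a generic sketch (not the study's pipeline), assuming arrays y_true (referable DR labels) and y_score (DLS probabilities).

# Generic sketch: Youden-index operating point and bootstrapped AUROC CI.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_operating_point(y_true, y_score):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                                     # Youden's J at each threshold
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1.0 - fpr[best]   # threshold, sensitivity, specificity

def bootstrap_auroc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample eyes with replacement
        if len(np.unique(y_true[idx])) < 2:               # need both classes in the resample
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])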


Subject(s)
Deep Learning , Diabetic Retinopathy , Diagnostic Imaging , Humans , Diabetes Mellitus/pathology , Diabetic Retinopathy/diagnostic imaging , Mass Screening/methods , Mydriatics , Photography/methods , Prospective Studies , Retina/diagnostic imaging , Sensitivity and Specificity , Diagnostic Imaging/methods
3.
Sci Rep ; 12(1): 11196, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35778615

ABSTRACT

Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples that require curation, a laborious task to perform manually. We developed and validated single- and multi-output deep learning (DL) models for laterality, retinal presence, retinal field and gradability classification for automated curation. The internal dataset comprised 7743 images from DR screening in the UK, with 1479 external test images from Portugal and Paraguay. Internal vs external multi-output laterality AUROC were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC was 1.000 vs 1.000. Retinal field AUROC were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC was 0.985 vs 0.918. DL effectively detects the laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.
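A per-output AUROC comparison like the internal vs external one quoted above can be computed generically; the sketch below is not the authors' code and assumes dictionaries of label and score arrays keyed by task name.

# Generic sketch: per-output AUROC for a multi-output curation classifier.
from sklearn.metrics import roc_auc_score

def per_output_auroc(labels_by_task, scores_by_task):
    """Dicts keyed by task, e.g. 'laterality_right', 'retinal_presence', 'macula_field', 'gradable'."""
    return {task: roc_auc_score(labels_by_task[task], scores_by_task[task])
            for task in labels_by_task}

# internal_auroc = per_output_auroc(internal_labels, internal_scores)   # hypothetical arrays
# external_auroc = per_output_auroc(external_labels, external_scores)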


Subject(s)
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Macula Lutea , Diabetic Retinopathy/diagnostic imaging , Humans , Mass Screening/methods , Retina/diagnostic imaging
4.
J Clin Med ; 11(3), 2022 Jan 26.
Article in English | MEDLINE | ID: mdl-35160065

ABSTRACT

Artificial intelligence has showcased clear capabilities to automatically grade diabetic retinopathy (DR) on mydriatic retinal images captured by clinical experts on fixed table-top retinal cameras within hospital settings. However, in many low- and middle-income countries, screening for DR relies on minimally trained field workers using handheld non-mydriatic cameras in community settings. This prospective study evaluated the diagnostic accuracy of a deep learning algorithm developed by the Singapore Eye Research Institute using mydriatic retinal images, commercially available as Zeiss VISUHEALTH-AI DR, on images captured by field workers on a Zeiss Visuscout® 100 non-mydriatic handheld camera from people with diabetes in a house-to-house cross-sectional study across 20 regions in India. A total of 20,489 eyes from 11,199 patients were used to evaluate algorithm performance in identifying referable DR, non-referable DR and gradability. For each category, the algorithm achieved precision values of 29.60% (95% CI 27.40, 31.88), 92.56% (92.13, 92.97) and 58.58% (56.97, 60.19), recall values of 62.69% (59.17, 66.12), 85.65% (85.11, 86.18) and 65.06% (63.40, 66.69), and F-score values of 40.22% (38.25, 42.21), 88.97% (88.62, 89.31) and 61.65% (60.50, 62.80), respectively. Model performance reached 91.22% (90.79, 91.64) sensitivity and 65.06% (63.40, 66.69) specificity for detecting gradability, and 72.08% (70.68, 73.46) sensitivity and 85.65% (85.11, 86.18) specificity for detecting all referable eyes. Algorithm accuracy depends on the quality of the acquired retinal images, and this is a major limiting step for global implementation of community non-mydriatic DR screening with handheld cameras. This study highlights the need to develop and train deep learning-based screening tools under such conditions before implementation.
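The per-category precision, recall and F-score reported above follow the standard definitions; the following is a minimal scikit-learn sketch (not the study's code), with the per-eye labels and predictions shown as hypothetical placeholder values.

# Generic sketch: precision, recall and F1 per output category.
from sklearn.metrics import precision_recall_fscore_support

categories = ["referable", "non_referable", "ungradable"]
y_true = ["referable", "non_referable", "non_referable", "ungradable"]   # hypothetical per-eye labels
y_pred = ["referable", "non_referable", "referable", "ungradable"]       # hypothetical algorithm output

precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=categories, zero_division=0)

for name, p, r, f in zip(categories, precision, recall, f1):
    print(f"{name}: precision={p:.2%} recall={r:.2%} F1={f:.2%}")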

6.
J Clin Med ; 9(8), 2020 Aug 06.
Article in English | MEDLINE | ID: mdl-32781564

ABSTRACT

Reliable outcome measures are required for clinical trials investigating novel agents to prevent progression of capillary non-perfusion (CNP) in retinal vascular diseases. Currently, quantification of the topographical distribution of CNP on ultrawide-field fluorescein angiography (UWF-FA) by retinal experts is subjective and lacks standardisation. A U-Net-style network was trained to extract a dense segmentation of CNP from a newly created dataset of 75 UWF-FA images. A subset of 20 images was also segmented by a second expert grader for inter-grader reliability evaluation. Further, a circular grid centred on the foveal avascular zone (FAZ) was used to provide a standardised analysis of CNP distribution. The dense segmentation model was five-fold cross-validated, achieving an area under the receiver operating characteristic curve of 0.82 (0.03) and an area under the precision-recall curve of 0.73 (0.05). Inter-grader assessment on the 20-image subset achieved precision 59.34 (10.92), recall 76.99 (12.5) and dice similarity coefficient (DSC) 65.51 (4.91), while the centred operating point of the automated model reached precision 64.41 (13.66), recall 70.02 (16.2) and DSC 66.09 (13.32). Agreement of the CNP grid assessment reached kappa 0.55 (0.03), perfused intraclass correlation coefficient (ICC) 0.89 (0.77, 0.93) and non-perfused ICC 0.86 (0.73, 0.92); the corresponding inter-grader values were kappa 0.43 (0.03), perfused ICC 0.70 (0.48, 0.83) and non-perfused ICC 0.71 (0.48, 0.83). Automated dense segmentation of CNP in UWF-FA images achieves performance comparable to inter-grader agreement. A grid placed on the deep learning-based automatic segmentation of CNP provides a reliable, quantifiable measurement of CNP that overcomes the subjectivity of human graders.
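The pixel-wise overlap metrics used above (precision, recall, dice similarity coefficient) can be computed directly from two binary masks; this is a minimal NumPy sketch under the assumption of same-shape predicted and grader CNP masks, not the study's implementation.

# Generic sketch: overlap metrics between a predicted and a reference binary CNP mask.
import numpy as np

def overlap_metrics(pred, truth, eps=1e-8):
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # pixels called non-perfused by both
    fp = np.logical_and(pred, ~truth).sum()   # predicted but not in the reference mask
    fn = np.logical_and(~pred, truth).sum()   # missed by the prediction
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)  # dice similarity coefficient
    return precision, recall, dice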

7.
Br J Ophthalmol ; 104(4): 480-486, 2020 04.
Article in English | MEDLINE | ID: mdl-31266775

ABSTRACT

AIMS: To use optical coherence tomography angiography (OCTA) to characterise microvascular changes in the retinal plexuses and choriocapillaris (CC) of patients with MYO7A and USH2A mutations, and to correlate these with genotype, retinal structure and function.

METHODS: Twenty-seven patients with molecularly confirmed USH2A (n=21) and MYO7A (n=6) mutations underwent macular 6×6 mm OCTA using the AngioVue. Heidelberg spectral-domain OCT scans and MAIA microperimetry were also performed, and the preserved ellipsoid zone (EZ) band width and mean macular sensitivity (MS) were recorded. OCTA images of the inner retina, superficial capillary plexus (SCP), deep capillary plexus (DCP) and CC were analysed. Vessel density (VD) was calculated from the en face OCT angiograms of the retinal circulation.

RESULTS: Forty-eight eyes with either USH2A (n=37, mean age: 34.4±12.2 years) or MYO7A (n=11, mean age: 37.1±12.4 years) mutations, and 35 eyes from 18 age-matched healthy participants, were included. VD was significantly decreased in the retinal circulation of patients with USH2A and MYO7A mutations compared with controls (p<0.001). Changes were observed in both the SCP and DCP, but no differences in retinal perfusion were detected between the USH2A and MYO7A groups. No vascular defects were detected in the CC of the USH2A group, but peripheral defects were detected in older MYO7A patients from the fourth decade of life. VD in the DCP showed a strong association with MS and EZ width (Spearman's rho = 0.64 and 0.59, respectively, p<0.001).

CONCLUSION: OCTA detected similar retinal microvascular changes in patients with USH2A and MYO7A mutations. The CC was generally affected in MYO7A mutations. OCT angiography may further enhance our understanding of inherited eye diseases and their phenotype-genotype associations.
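Vessel density from an en face angiogram is commonly taken as the fraction of pixels classified as vessel after binarisation, and its association with macular sensitivity can be tested with Spearman's rho; the sketch below is generic (the binarisation threshold and the per-eye arrays are assumptions, not the study's protocol).

# Generic sketch: vessel density from a binarised en face OCTA slab and its
# correlation with mean macular sensitivity across eyes.
import numpy as np
from scipy.stats import spearmanr

def vessel_density(en_face, threshold):
    """Fraction of pixels above an intensity threshold, i.e. classified as vessel."""
    binary = np.asarray(en_face) > threshold
    return binary.mean()

# vd_dcp: vessel density of the deep capillary plexus, one value per eye (hypothetical array)
# ms:     mean macular sensitivity per eye from microperimetry (hypothetical array)
# rho, p = spearmanr(vd_dcp, ms)   # e.g. the rho = 0.64 association quoted in the abstract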


Subject(s)
Extracellular Matrix Proteins/genetics , Mutation , Myosin VIIa/genetics , Retinal Diseases/diagnosis , Retinal Vessels/pathology , Usher Syndromes/pathology , Adult , Choroid/blood supply , Choroid/diagnostic imaging , Female , Fluorescein Angiography , Humans , Male , Middle Aged , Retinal Diseases/genetics , Retinal Diseases/physiopathology , Retinal Vessels/diagnostic imaging , Tomography, Optical Coherence , Usher Syndromes/diagnostic imaging , Usher Syndromes/genetics , Visual Acuity/physiology , Visual Field Tests , Young Adult