Results 1 - 3 of 3

1.
medRxiv ; 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38585957

ABSTRACT

Purpose: To quantify relevant fundus autofluorescence (FAF) image features cross-sectionally and longitudinally in a large cohort of patients with inherited retinal diseases (IRDs).
Design: Retrospective study of imaging data (55-degree blue-FAF on Heidelberg Spectralis) from patients.
Participants: Patients with a clinically and molecularly confirmed diagnosis of IRD who underwent 55-degree FAF imaging at Moorfields Eye Hospital (MEH) and the Royal Liverpool Hospital (RLH) between 2004 and 2019.
Methods: Five FAF features of interest were defined: vessels, optic disc, perimacular ring of increased signal (ring), relative hypo-autofluorescence (hypo-AF) and hyper-autofluorescence (hyper-AF). Features were manually annotated by six graders in a subset of patients according to a defined grading protocol, producing segmentation masks used to train an AI model, AIRDetect, which was then applied to the entire imaging dataset.
Main Outcome Measures: Quantitative FAF imaging features, including area in mm² and vessel metrics, were analysed cross-sectionally by gene and age, and longitudinally to determine rate of progression. AIRDetect feature segmentation and detection were validated with Dice score and precision/recall, respectively.
Results: A total of 45,749 FAF images from 3,606 IRD patients from MEH covering 170 genes were automatically segmented using AIRDetect. Model-grader Dice scores for disc, hypo-AF, hyper-AF, ring and vessels were 0.86, 0.72, 0.69, 0.68 and 0.65, respectively. The five genes with the largest hypo-AF areas were CHM, ABCC6, ABCA4, RDH12, and RPE65, with mean per-patient areas of 41.5, 30.0, 21.9, 21.4, and 15.1 mm². The five genes with the largest hyper-AF areas were BEST1, CDH23, RDH12, MYO7A, and NR2E3, with mean areas of 0.49, 0.45, 0.44, 0.39, and 0.34 mm², respectively. The five genes with the largest ring areas were CDH23, NR2E3, CRX, EYS and MYO7A, with mean areas of 3.63, 3.32, 2.84, 2.39, and 2.16 mm². Vessel density was highest in EFEMP1, BEST1, TIMP3, RS1, and PRPH2 (10.6%, 10.3%, 9.8%, 9.7%, 8.9%) and lower in genes associated with Retinitis Pigmentosa (RP) and Leber Congenital Amaurosis. Longitudinal analysis of decreasing ring area in four RP genes (RPGR, USH2A, RHO, EYS) found EYS to be the fastest progressor, at -0.18 mm²/year.
Conclusions: We have conducted the first large-scale cross-sectional and longitudinal quantitative analysis of FAF features across a diverse range of IRDs using a novel AI approach.
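
As reported above, AIRDetect's feature segmentation was validated against grader annotations using the Dice score. The following Python snippet is a minimal sketch of that overlap metric for two binary masks; the function name, toy masks, and printed example are illustrative assumptions, not the study's actual pipeline.

import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 3x3 masks, illustrative only: 2 overlapping pixels out of 3 each.
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(f"Dice = {dice_score(pred, gt):.2f}")  # 0.67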

2.
Ophthalmol Sci ; 3(2): 100258, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36685715

ABSTRACT

Purpose: Rare disease diagnosis is challenging for medical image-based artificial intelligence because datasets are naturally class-imbalanced, which biases prediction models. Inherited retinal diseases (IRDs) are a research domain that particularly faces this issue. This study investigates whether synthetic data generated with generative adversarial networks (GANs) can improve artificial intelligence-enabled diagnosis of IRDs.
Design: Diagnostic study of gene-labeled fundus autofluorescence (FAF) IRD images using deep learning.
Participants: Moorfields Eye Hospital (MEH) dataset of 15,692 FAF images obtained from 1,800 patients with a confirmed genetic diagnosis of 1 of 36 IRD genes.
Methods: A StyleGAN2 model was trained on the IRD dataset to generate 512 × 512 resolution images. Convolutional neural networks were trained for classification on different synthetically augmented datasets, including real IRD images plus 1,800 or 3,600 synthetic images, and a fully rebalanced dataset. We also performed an experiment with only synthetic data. All models were compared against a baseline convolutional neural network trained only on real data.
Main Outcome Measures: Synthetic data quality was evaluated using a Visual Turing Test conducted with 4 ophthalmologists from MEH. Synthetic and real images were compared using feature space visualization, similarity analysis to detect memorized images, and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score for no-reference quality evaluation. Convolutional neural network diagnostic performance was determined on a held-out test set using the area under the receiver operating characteristic curve (AUROC) and Cohen's kappa (κ).
Results: The Visual Turing Test yielded an average true recognition rate of 63% and a fake recognition rate of 47%; thus, a considerable proportion of the synthetic images were classified as real by clinical experts. Similarity analysis showed that the synthetic images were not copies of the real images, indicating that the GAN was able to generalize. However, BRISQUE score analysis indicated that synthetic images were of significantly lower quality overall than real images (P < 0.05). Comparing the rebalanced model (RB) with the baseline (R), no significant change in the average AUROC and κ was found (R-AUROC = 0.86 [0.85-0.88], RB-AUROC = 0.88 [0.86-0.89], R-κ = 0.51 [0.49-0.53], and RB-κ = 0.52 [0.50-0.54]). The model trained on synthetic data alone (S) achieved performance similar to the baseline (S-AUROC = 0.86 [0.85-0.87], S-κ = 0.48 [0.46-0.50]).
Conclusions: Synthetic generation of realistic IRD FAF images is feasible. Synthetic data augmentation did not improve classification performance; however, synthetic data alone delivered performance similar to real data, and hence may be useful as a proxy for real data.
Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
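
The classifier comparison above rests on held-out AUROC and Cohen's κ. Below is a minimal sketch of how these two metrics can be computed with scikit-learn; the random placeholder labels and probabilities merely stand in for the study's CNN outputs and carry no relation to the reported results.

import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_classes, n_samples = 36, 500  # 36 IRD gene classes, as in the study
y_true = rng.integers(0, n_classes, n_samples)
y_prob = rng.dirichlet(np.ones(n_classes), n_samples)  # placeholder softmax outputs
y_pred = y_prob.argmax(axis=1)

# One-vs-rest AUROC averaged over classes, plus chance-corrected agreement.
auroc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
kappa = cohen_kappa_score(y_true, y_pred)
print(f"AUROC = {auroc:.2f}, kappa = {kappa:.2f}")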

3.
BMJ Open ; 13(3): e071043, 2023 03 20.
Article in English | MEDLINE | ID: mdl-36940949

ABSTRACT

INTRODUCTION: Inherited retinal diseases (IRDs) are a leading cause of visual impairment and blindness in the working-age population. Mutations in over 300 genes have been found to be associated with IRDs, and identifying the affected gene in patients by molecular genetic testing is the first step towards effective care and patient management. However, genetic diagnosis is currently slow, expensive and not widely accessible. The aim of the current project is to address the evidence gap in IRD diagnosis with an AI algorithm, Eye2Gene, to accelerate and democratise the IRD diagnosis service.
METHODS AND ANALYSIS: This data-only retrospective cohort study involves a target sample size of 10,000 participants, derived from the number of participants with IRDs at three leading UK eye hospitals: Moorfields Eye Hospital (MEH), Oxford University Hospital (OUH) and Liverpool University Hospital (LUH), as well as a Japanese hospital, the Tokyo Medical Centre (TMC). Eye2Gene aims to predict causative genes from retinal images of patients with a diagnosis of IRD. For this purpose, the 36 most common causative IRD genes have been selected to develop a training dataset, so that the software has enough examples for training and validation for detection of each gene. The Eye2Gene algorithm is composed of multiple deep convolutional neural networks, which will be trained on MEH IRD datasets and externally validated on OUH, LUH and TMC data.
ETHICS AND DISSEMINATION: This research ('Eye2Gene: accelerating the diagnosis of IRDs'; Integrated Research Application System (IRAS) project ID: 242050) was approved by the IRB and the UK Health Research Authority (Research Ethics Committee reference 22/WA/0049). All research adhered to the tenets of the Declaration of Helsinki. Findings will be reported in an open-access format.
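
The protocol describes Eye2Gene as multiple deep convolutional neural networks predicting the causative gene among 36 candidates. The following PyTorch sketch shows one plausible way such an ensemble could combine member predictions; the ResNet-18 backbone, three-member ensemble, and softmax averaging are assumptions for illustration, not the published architecture.

import torch
import torch.nn as nn
from torchvision import models

N_GENES = 36  # number of causative IRD gene classes, as in the protocol

def make_member() -> nn.Module:
    # Hypothetical backbone; the protocol does not specify the architecture.
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, N_GENES)
    return net

ensemble = [make_member() for _ in range(3)]  # member count is an assumption
for m in ensemble:
    m.eval()  # inference mode so batch-norm statistics are fixed

@torch.no_grad()
def predict_gene(image_batch: torch.Tensor) -> torch.Tensor:
    # Average softmax probabilities across members, then take the top gene.
    probs = torch.stack([m(image_batch).softmax(dim=1) for m in ensemble])
    return probs.mean(dim=0).argmax(dim=1)

batch = torch.randn(2, 3, 512, 512)  # placeholder stand-in for retinal images
print(predict_gene(batch))           # predicted gene index per image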


Subject(s)
Artificial Intelligence, Retinal Diseases, Humans, Retrospective Studies, Retinal Diseases/diagnosis, Retinal Diseases/genetics, Retina, Genetic Testing/methods