Results 1 - 4 of 4

1.
Asia Pac J Ophthalmol (Phila) ; 13(4): 100087, 2024.
Article in English | MEDLINE | ID: mdl-39069106

ABSTRACT

PURPOSE: Saliency maps (SMs) allow clinicians to better understand the opaque decision-making process in artificial intelligence (AI) models by visualising the features responsible for predictions, ultimately improving interpretability and confidence. In this work, we review the use case for SMs, exploring their impact on clinicians' understanding of and trust in AI models. We use the following ophthalmic conditions as examples: (1) glaucoma, (2) myopia, (3) age-related macular degeneration (AMD), and (4) diabetic retinopathy (DR). METHOD: A multi-field search of MEDLINE, Embase, and Web of Science was conducted using specific keywords. Only studies on the use of SMs in glaucoma, myopia, AMD, or DR were considered for inclusion. RESULTS: Findings reveal that SMs are often used to validate AI models and advocate for their adoption, potentially leading to biased claims. Studies frequently overlooked the technical limitations of SMs and assessed their quality and relevance only superficially. Uncertainties persist regarding the role of SMs in building trust in AI. It is crucial to improve understanding of SMs' technical constraints and the evaluation of their quality, impact, and suitability for specific tasks. Establishing a standardised framework for selecting and assessing SMs, as well as exploring their relationship with other sources of reliability (e.g. safety and generalisability), is essential for enhancing clinicians' trust in AI. CONCLUSION: We conclude that SMs are not beneficial for interpretability and trust-building purposes in their current forms. Instead, SMs may confer benefits to model debugging, model performance enhancement, and hypothesis testing (e.g. novel biomarkers).
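The abstract above critiques how saliency maps are used rather than how they are computed. For readers unfamiliar with the underlying idea, the following is a minimal, purely illustrative sketch of one simple SM technique, occlusion-based saliency, using a toy stand-in model rather than any real ophthalmic classifier; gradient-based methods such as Grad-CAM are more common in the reviewed literature but follow the same principle of attributing a prediction to image regions.

```python
# Illustrative occlusion-based saliency sketch (toy model, not a real SM
# method from the review). Importance of each patch is measured by how
# much the model's score drops when that patch is masked out.

def toy_model(image):
    """Toy classifier score: sum of the top-left 2x2 quadrant, standing
    in for a model that relies on one region of a retinal image."""
    return sum(image[r][c] for r in range(2) for c in range(2))

def occlusion_saliency(image, model, patch=2):
    n = len(image)
    base = model(image)
    saliency = [[0.0] * n for _ in range(n)]
    for r0 in range(0, n, patch):
        for c0 in range(0, n, patch):
            occluded = [row[:] for row in image]
            for r in range(r0, r0 + patch):
                for c in range(c0, c0 + patch):
                    occluded[r][c] = 0
            drop = base - model(occluded)  # score drop = patch importance
            for r in range(r0, r0 + patch):
                for c in range(c0, c0 + patch):
                    saliency[r][c] = drop
    return saliency

image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
sal = occlusion_saliency(image, toy_model)
print(sal[0][0], sal[3][3])  # only the top-left quadrant matters here
```

Note how the map faithfully reflects this toy model's reliance on one quadrant; the review's point is that real SMs are rarely validated this carefully against the model they explain.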


Subject(s)
Artificial Intelligence, Ophthalmologists, Humans, Trust, Glaucoma/physiopathology
2.
Nature ; 622(7981): 156-163, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37704728

ABSTRACT

Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders1. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications2. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging.
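The two-stage recipe in this abstract, self-supervised pretraining followed by label-efficient adaptation, can be sketched in miniature. The code below is an assumption-laden toy: `frozen_encoder` is a stand-in for a pretrained foundation model (not RETFound's actual architecture), and the adaptation step fits only a small linear head on a handful of labels, analogous to keeping the pretrained representations fixed.

```python
# Toy sketch of label-efficient adaptation: a frozen "foundation"
# encoder supplies features; only a small linear head is trained on a
# few labelled examples. All names and data here are illustrative.

def frozen_encoder(x):
    # Stand-in for self-supervised features: mean and max of the input.
    return [sum(x) / len(x), max(x)]

def train_linear_head(data, labels, lr=0.1, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = frozen_encoder(x)          # encoder weights never update
            score = w[0] * f[0] + w[1] * f[1] + b
            err = y - (1 if score > 0 else 0)  # perceptron-style update
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
            b += lr * err
    return w, b

# A few labelled examples (1 = "disease", 0 = "healthy"):
data   = [[0.9, 0.8, 1.0], [0.1, 0.0, 0.2], [0.8, 0.9, 0.7], [0.2, 0.1, 0.0]]
labels = [1, 0, 1, 0]
w, b = train_linear_head(data, labels)

def predict(x):
    f = frozen_encoder(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

preds = [predict(x) for x in data]
print(preds)
```

The design point mirrors the abstract's claim: because the expensive representation learning is done once without labels, each downstream task needs only enough labelled data to fit the small head.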


Subject(s)
Artificial Intelligence, Eye Diseases, Retina, Humans, Eye Diseases/complications, Eye Diseases/diagnostic imaging, Heart Failure/complications, Heart Failure/diagnosis, Myocardial Infarction/complications, Myocardial Infarction/diagnosis, Retina/diagnostic imaging, Supervised Machine Learning
3.
Ophthalmol Sci ; 3(2): 100258, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36685715

ABSTRACT

Purpose: Rare disease diagnosis is challenging in medical image-based artificial intelligence due to a natural class imbalance in datasets, leading to biased prediction models. Inherited retinal diseases (IRDs) are a research domain that particularly faces this issue. This study investigates the applicability of synthetic data in improving artificial intelligence-enabled diagnosis of IRDs using generative adversarial networks (GANs). Design: Diagnostic study of gene-labeled fundus autofluorescence (FAF) IRD images using deep learning. Participants: Moorfields Eye Hospital (MEH) dataset of 15 692 FAF images obtained from 1800 patients with confirmed genetic diagnosis of 1 of 36 IRD genes. Methods: A StyleGAN2 model is trained on the IRD dataset to generate 512 × 512 resolution images. Convolutional neural networks are trained for classification using different synthetically augmented datasets, including real IRD images plus 1800 and 3600 synthetic images, and a fully rebalanced dataset. We also perform an experiment with only synthetic data. All models are compared against a baseline convolutional neural network trained only on real data. Main Outcome Measures: We evaluated synthetic data quality using a Visual Turing Test conducted with 4 ophthalmologists from MEH. Synthetic and real images were compared using feature space visualization, similarity analysis to detect memorized images, and Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score for no-reference-based quality evaluation. Convolutional neural network diagnostic performance was determined on a held-out test set using the area under the receiver operating characteristic curve (AUROC) and Cohen's Kappa (κ). Results: An average true recognition rate of 63% and fake recognition rate of 47% was obtained from the Visual Turing Test. Thus, a considerable proportion of the synthetic images were classified as real by clinical experts. 
Similarity analysis showed that the synthetic images were not copies of the real images, indicating that the GAN was able to generalize rather than memorize its training data. However, BRISQUE score analysis indicated that synthetic images were of significantly lower quality overall than real images (P < 0.05). Comparing the rebalanced model (RB) with the baseline (R), no significant change in the average AUROC and κ was found (R-AUROC = 0.86 [0.85-0.88], RB-AUROC = 0.88 [0.86-0.89], R-κ = 0.51 [0.49-0.53], and RB-κ = 0.52 [0.50-0.54]). The model trained only on synthetic data (S) achieved performance similar to the baseline (S-AUROC = 0.86 [0.85-0.87], S-κ = 0.48 [0.46-0.50]). Conclusions: Synthetic generation of realistic IRD FAF images is feasible. Synthetic data augmentation does not deliver improvements in classification performance. However, synthetic data alone deliver performance similar to real data, and hence may be useful as a proxy for real data. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
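The "fully rebalanced dataset" in this study rests on simple arithmetic: each under-represented class receives enough synthetic images to match the most frequent class. The sketch below illustrates that quota calculation; the gene names and counts are hypothetical, not the actual Moorfields class distribution.

```python
# Sketch of the rebalancing arithmetic behind synthetic augmentation:
# top up each under-represented class with synthetic images until all
# classes match the largest one. Gene labels and counts are made up.

def synthetic_quota(class_counts):
    """Return how many synthetic images each class needs so that every
    class reaches the size of the largest one."""
    target = max(class_counts.values())
    return {gene: target - n for gene, n in class_counts.items()}

# Hypothetical gene-label counts, mimicking a long-tailed IRD dataset:
counts = {"ABCA4": 4000, "USH2A": 1200, "RPGR": 300, "BEST1": 50}
quota = synthetic_quota(counts)
print(quota)
```

In practice the generated images would then be drawn class-conditionally from the trained StyleGAN2 model, which is why the study's quality checks (Visual Turing Test, memorization analysis, BRISQUE) matter: a rebalanced count is only useful if the synthetic samples are faithful.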

4.
Transl Vis Sci Technol ; 11(7): 12, 2022 07 08.
Article in English | MEDLINE | ID: mdl-35833885

ABSTRACT

Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases. Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets. Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein scores are 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 in IDRID. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement. Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs. Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.
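The F1-scores reported for AutoMorph's segmentation modules are pixel-wise overlap measures between a predicted binary mask and an expert annotation. As a small illustration of the metric itself (not of AutoMorph's code, which is a separate open-source release), the sketch below computes F1 on a pair of toy vessel masks.

```python
# Sketch of the pixel-wise F1-score used to evaluate binary vessel
# segmentation: overlap between a predicted mask and an expert
# annotation, both flattened to 0/1 lists. Masks here are toy data.

def f1_score(pred, truth):
    """F1 = 2TP / (2TP + FP + FN) over flattened binary masks."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0  # empty masks: perfect match

# Toy 3x3 masks, flattened row by row (1 = vessel pixel):
truth = [1, 1, 0, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 0, 1, 0, 0, 0, 0]
print(round(f1_score(pred, truth), 3))  # 2 TP, 1 FP, 1 FN -> 0.667
```

Because vessel pixels are a small minority of each image, F1 (equivalently the Dice coefficient for binary masks) is preferred over plain accuracy for scores like the 0.73 on AV-WIDE reported above.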


Subject(s)
Deep Learning, Diagnostic Techniques, Ophthalmological, Fundus Oculi, Photography