Results 1 - 20 of 101
1.
Int J Retina Vitreous ; 10(1): 31, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589936

ABSTRACT

Artificial intelligence (AI) has emerged as a transformative technology across various fields, and its applications in the medical domain, particularly in ophthalmology, have gained significant attention. The vast amount of high-resolution image data, such as optical coherence tomography (OCT) images, has been a driving force behind AI growth in this field. Age-related macular degeneration (AMD) is one of the leading causes of blindness worldwide, affecting approximately 196 million people in 2020. Multimodal imaging has long been the gold standard for diagnosing patients with AMD; currently, however, treatment and follow-up in routine disease management are driven mainly by OCT imaging. Owing to their precision, reproducibility, and speed, AI-based algorithms have the potential to reliably quantify biomarkers, predict disease progression, and assist treatment decisions in clinical routine as well as in academic studies. This review paper aims to provide a summary of the current state of AI in AMD, focusing on its applications, challenges, and prospects.

2.
Ophthalmol Sci ; 4(4): 100466, 2024.
Article in English | MEDLINE | ID: mdl-38591046

ABSTRACT

Objective: To identify the individual progression of geographic atrophy (GA) lesions from baseline OCT images of patients in routine clinical care. Design: Clinical evaluation of a deep learning-based algorithm. Subjects: One hundred eighty-four eyes of 100 consecutively enrolled patients. Methods: OCT and fundus autofluorescence (FAF) images (both Spectralis, Heidelberg Engineering) of patients with GA secondary to age-related macular degeneration in routine clinical care were used for model validation. Fundus autofluorescence images were annotated manually by delineating the GA area by certified readers of the Vienna Reading Center. The annotated FAF images were anatomically registered in an automated manner to the corresponding OCT scans, resulting in 2-dimensional en face OCT annotations, which were taken as a reference for the model performance. A deep learning-based method for modeling the GA lesion growth over time from a single baseline OCT was evaluated. In addition, the ability of the algorithm to identify fast progressors for the top 10%, 15%, and 20% of GA growth rates was analyzed. Main Outcome Measures: Dice similarity coefficient (DSC) and mean absolute error (MAE) between manual and predicted GA growth. Results: The deep learning-based tool was able to reliably identify disease activity in GA using a standard OCT image taken at a single baseline time point. The mean DSC for the total GA region increased for the first 2 years of prediction (0.80-0.82). With increasing time intervals beyond 3 years, the DSC decreased slightly to a mean of 0.70. The MAE was low over the first year and with advancing time slowly increased, with mean values ranging from 0.25 mm to 0.69 mm for the total GA region prediction. The model achieved an area under the curve of 0.81, 0.79, and 0.77 for the identification of the top 10%, 15%, and 20% growth rates, respectively. 
Conclusions: The proposed algorithm is capable of fully automated GA lesion growth prediction from a single baseline OCT in a time-continuous fashion in the form of en face maps. The results are a promising step toward clinical decision support tools for therapeutic dosing and guidance of patient management because the first treatment for GA has recently become available. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
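The two outcome measures above, DSC and area MAE, can be computed directly from binary en face masks. A minimal sketch, illustrative only and not the authors' implementation (the mask shapes and pixel-area parameter are assumptions):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary en face masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def area_mae_mm2(pred: np.ndarray, ref: np.ndarray, px_area_mm2: float) -> float:
    """Absolute error between predicted and reference lesion areas in mm^2."""
    return abs(int(pred.sum()) - int(ref.sum())) * px_area_mm2

# toy 64x64 en face masks: the prediction slightly undershoots the reference
ref = np.zeros((64, 64), dtype=bool); ref[16:48, 16:48] = True
pred = np.zeros((64, 64), dtype=bool); pred[20:48, 16:48] = True
print(round(dice_coefficient(pred, ref), 3))  # -> 0.933
```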

3.
IEEE Trans Med Imaging ; PP, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656867

ABSTRACT

Self-supervised learning (SSL) has emerged as a powerful technique for improving the efficiency and effectiveness of deep learning models. Contrastive methods are a prominent family of SSL that extract similar representations of two augmented views of an image while pushing away others in the representation space as negatives. However, the state-of-the-art contrastive methods require large batch sizes and augmentations designed for natural images that are impractical for 3D medical images. To address these limitations, we propose a new longitudinal SSL method, 3DTINC, based on non-contrastive learning. It is designed to learn perturbation-invariant features for 3D optical coherence tomography (OCT) volumes, using augmentations specifically designed for OCT. We introduce a new non-contrastive similarity loss term that learns temporal information implicitly from intra-patient scans acquired at different times. Our experiments show that this temporal information is crucial for predicting progression of retinal diseases, such as age-related macular degeneration (AMD). After pretraining with 3DTINC, we evaluated the learned representations and the prognostic models on two large-scale longitudinal datasets of retinal OCTs where we predict the conversion to wet-AMD within a six-month interval. Our results demonstrate that each component of our contributions is crucial for learning meaningful representations useful in predicting disease progression from longitudinal volumetric scans.
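The non-contrastive objective described here — pulling embeddings of two views of the same volume together without explicit negatives — reduces, in its simplest form, to a negative cosine similarity between paired embeddings. An illustrative sketch, not the 3DTINC code (the embedding shapes and perturbation are arbitrary):

```python
import numpy as np

def l2_normalize(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def non_contrastive_loss(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Negative cosine similarity between paired embeddings of two views;
    no negative samples (and hence no large batch size) are required."""
    z_a, z_b = l2_normalize(z_a), l2_normalize(z_b)
    return float(-(z_a * z_b).sum(axis=-1).mean())

rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 16))          # embeddings of view A (batch of 4)
z2 = z1 + rng.normal(0, 0.1, (4, 16))  # view B: a mild perturbation of A
print(round(non_contrastive_loss(z1, z1), 3))  # identical views -> -1.0
```

Minimizing this loss drives the two views toward the same representation, which is the invariance property the abstract exploits for OCT-specific augmentations.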

4.
IEEE Trans Med Imaging ; PP, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635383

ABSTRACT

The lack of reliable biomarkers makes predicting the conversion from intermediate to neovascular age-related macular degeneration (iAMD, nAMD) a challenging task. We develop a Deep Learning (DL) model to predict the future risk of conversion of an eye from iAMD to nAMD from its current OCT scan. Although eye clinics generate vast amounts of longitudinal OCT scans to monitor AMD progression, only a small subset can be manually labeled for supervised DL. To address this issue, we propose Morph-SSL, a novel Self-supervised Learning (SSL) method for longitudinal data. It uses pairs of unlabelled OCT scans from different visits and involves morphing the scan from the previous visit to the next. The Decoder predicts the transformation for morphing and ensures a smooth feature manifold that can generate intermediate scans between visits through linear interpolation. Next, the Morph-SSL trained features are input to a Classifier which is trained in a supervised manner to model the cumulative probability distribution of the time to conversion with a sigmoidal function. Morph-SSL was trained on unlabelled scans of 399 eyes (3570 visits). The Classifier was evaluated with a five-fold cross-validation on 2418 scans from 343 eyes with clinical labels of the conversion date. The Morph-SSL features achieved an AUC of 0.779 in predicting the conversion to nAMD within the next 6 months, outperforming the same network when trained end-to-end from scratch or pre-trained with popular SSL methods. Automated prediction of the future risk of nAMD onset can enable timely treatment and individualized AMD management.
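The sigmoidal model of the cumulative probability of time to conversion can be sketched as below; `t_mid` and `scale` are hypothetical parameters standing in for what the trained Classifier would predict per eye, not values from the paper:

```python
import math

def conversion_probability(t_months: float, t_mid: float, scale: float) -> float:
    """Cumulative probability that an iAMD eye has converted to nAMD by time t,
    modeled as a sigmoid with midpoint t_mid (months) and slope 1/scale."""
    return 1.0 / (1.0 + math.exp(-(t_months - t_mid) / scale))

# hypothetical eye whose predicted conversion midpoint is month 12
for t in (0, 6, 12, 24):
    print(t, round(conversion_probability(t, t_mid=12.0, scale=3.0), 3))
```

Because the function is monotonically increasing in time, it behaves as a proper cumulative distribution: the predicted risk by month 24 can never be lower than the risk by month 6.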

5.
Am J Ophthalmol ; 264: 53-65, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38428557

ABSTRACT

PURPOSE: To investigate differences in volume and distribution of the main exudative biomarkers across all types and subtypes of macular neovascularization (MNV) using artificial intelligence (AI). DESIGN: Cross-sectional study. METHODS: An AI-based analysis was conducted on 34,528 OCT B-scans consisting of 281 (250 unifocal, 31 multifocal) MNV3, 55 MNV2, and 121 (30 polypoidal, 91 non-polypoidal) MNV1 treatment-naive eyes. Means (SDs), medians and heat maps of cystic intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachments (PED), and hyperreflective foci (HRF) volumes, as well as retinal thickness (RT) were compared among MNV types and subtypes. RESULTS: MNV3 had the highest mean IRF with 291 (290) nL, RT with 357 (49) µm, and HRF with 80 (70) nL, P ≤ .05. MNV1 showed the greatest mean SRF with 492 (586) nL, whereas MNV3 exhibited the lowest with 218 (382) nL, P ≤ .05. Heat maps showed IRF confined to the center, whereas SRF was scattered in all types. SRF, HRF, and PED were more distributed in the temporal macular half in MNV3. Means of IRF, HRF, and PED were higher in the multifocal than in the unifocal MNV3 with 416 (309) nL, 114 (95) nL, and 810 (850) nL, P ≤ .05. Compared to the non-polypoidal subtype, the polypoidal subtype had greater means of SRF with 695 (718) nL, HRF 69 (63) nL, RT 357 (45) µm, and PED 1115 (1170) nL, P ≤ .05. CONCLUSIONS: This novel quantitative AI analysis shows that SRF is a biomarker of choroidal origin in MNV1, whereas IRF, HRF, and RT are retinal biomarkers in MNV3. Polypoidal MNV1 and multifocal MNV3 present with higher exudation compared to other subtypes.

6.
Med Image Anal ; 93: 103104, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38350222

ABSTRACT

Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring, and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results for some complex lesions or datasets that commonly occur in the real world, e.g. due to variability of lesion phenotypes, image quality, or disease appearance. While several techniques have been proposed to improve them, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we experimentally show that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding the resulting output masks as an extra class for training the segmentation model. This provides additional semantic context without requiring extra manual labels. We empirically validated this strategy using two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly-guided segmentation approach to be used as an extra tool for improving lesion detection models.


Subject(s)
Semantics, Optical Coherence Tomography, Humans, Phenotype, Retina/diagnostic imaging
7.
Ophthalmol Sci ; 4(3): 100456, 2024.
Article in English | MEDLINE | ID: mdl-38317867

ABSTRACT

Objective: Treatment decisions in neovascular age-related macular degeneration (nAMD) are mainly based on subjective evaluation of OCT. The purpose of this cross-sectional study was to provide a comparison of qualitative and quantitative differences between OCT devices in a systematic manner. Design: Prospective, cross-sectional study. Subjects: One hundred sixty OCT volumes, 40 eyes of 40 patients with nAMD. Methods: Patients from clinical practice were imaged with 4 different OCT devices during one visit: (1) Spectralis Heidelberg; (2) Cirrus; (3) Topcon Maestro2; and (4) Topcon Triton. Intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED) were manually annotated in all cubes by trained human experts to establish fluid measurements based on expert-reader annotations. Intraretinal fluid, SRF, and PED volume were quantified in nanoliters (nL). Bland-Altman plots were created to analyze the agreement of measurements in the central 1 and 6 mm. The Friedman test was performed to test for significant differences in the central 1, 3, and 6 mm. Main Outcome Measures: Intraretinal fluid, SRF, and PED volume. Results: In the central 6 mm, there was a trend toward higher IRF and PED volumes in Spectralis images compared with the other devices and no differences in SRF volume. In the central 1 mm, the standard deviation of the differences ranged from ± 3 nL to ± 6 nL for IRF, from ± 3 nL to ± 4 nL for SRF, and from ± 7 nL to ± 10 nL for PED in all pairwise comparisons. Manually annotated IRF and SRF volumes showed no significant differences in the central 1 mm. Conclusions: Fluid volume quantification achieved excellent reliability in all 3 retinal compartments on images obtained from 4 OCT devices, particularly for clinically relevant IRF and SRF values. 
Although fluid volume quantification is reliable in all 4 OCT devices, switching OCT devices might lead to deviating fluid volume measurements with higher agreement in the central 1 mm compared with the central 6 mm, with highest agreement for SRF volume in the central 1 mm. Understanding device-dependent differences is essential for expanding the interpretation and implementation of pixel-wise fluid volume measurements in clinical practice and in clinical trials. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
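The Bland-Altman agreement statistics reported above can be computed from paired volume measurements. A minimal sketch with made-up volumes in nanoliters, not the study's data:

```python
import numpy as np

def bland_altman(vol_a, vol_b):
    """Bias and 95% limits of agreement for paired volume measurements
    (e.g. the same eye quantified on two different OCT devices)."""
    a, b = np.asarray(vol_a, float), np.asarray(vol_b, float)
    diff = a - b
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))  # SD of the pairwise differences
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# made-up SRF volumes (nL) for five eyes imaged on two devices
device_a = [10, 12, 8, 15, 11]
device_b = [9, 13, 7, 14, 12]
bias, sd, loa = bland_altman(device_a, device_b)
print(round(bias, 2), round(sd, 2))  # -> 0.2 1.1
```

The "± n nL" figures quoted in the abstract correspond to the standard deviation of the pairwise differences computed this way.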

8.
IEEE J Biomed Health Inform ; 28(4): 2235-2246, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38206782

ABSTRACT

The use of multimodal imaging has led to significant improvements in the diagnosis and treatment of many diseases. Similar to clinical practice, some works have demonstrated the benefits of multimodal fusion for automatic segmentation and classification using deep learning-based methods. However, current segmentation methods are limited to fusion of modalities with the same dimensionality (e.g., 3D + 3D, 2D + 2D), which is not always possible, and the fusion strategies implemented by classification methods are incompatible with localization tasks. In this work, we propose a novel deep learning-based framework for the fusion of multimodal data with heterogeneous dimensionality (e.g., 3D + 2D) that is compatible with localization tasks. The proposed framework extracts the features of the different modalities and projects them into the common feature subspace. The projected features are then fused and further processed to obtain the final prediction. The framework was validated on the following tasks: segmentation of geographic atrophy (GA), a late-stage manifestation of age-related macular degeneration, and segmentation of retinal blood vessels (RBV) in multimodal retinal imaging. Our results show that the proposed method outperforms the state-of-the-art monomodal methods on GA and RBV segmentation by up to 3.10% and 4.64% Dice, respectively.


Subject(s)
Retina, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods
9.
Sci Data ; 11(1): 99, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38245589

ABSTRACT

Pathologic myopia (PM) is a common blinding retinal degeneration affecting the highly myopic population. Early screening of this condition can reduce the damage caused by the associated fundus lesions and therefore prevent vision loss. Automated diagnostic tools based on artificial intelligence methods can benefit this process by aiding clinicians to identify disease signs or to screen mass populations using color fundus photographs as inputs. This paper provides insights about PALM, our open fundus imaging dataset for pathologic myopia recognition and anatomical structure annotation. Our database comprises 1200 images with associated labels for the pathologic myopia category and manual annotations of the optic disc, the position of the fovea, and delineations of lesions such as patchy retinal atrophy (including peripapillary atrophy) and retinal detachment. In addition, this paper elaborates on other details such as the labeling process used to construct the database and the quality and characteristics of the samples, and provides other relevant usage notes.


Subject(s)
Degenerative Myopia, Optic Disk, Retinal Degeneration, Humans, Artificial Intelligence, Fundus Oculi, Degenerative Myopia/diagnostic imaging, Degenerative Myopia/pathology, Optic Disk/diagnostic imaging
10.
IEEE Trans Med Imaging ; 43(1): 542-557, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37713220

ABSTRACT

The early detection of glaucoma is essential in preventing visual impairment. Artificial intelligence (AI) can be used to analyze color fundus photographs (CFPs) in a cost-effective manner, making glaucoma screening more accessible. While AI models for glaucoma screening from CFPs have shown promising results in laboratory settings, their performance decreases significantly in real-world scenarios due to the presence of out-of-distribution and low-quality images. To address this issue, we propose the Artificial Intelligence for Robust Glaucoma Screening (AIROGS) challenge. This challenge includes a large dataset of around 113,000 images from about 60,000 patients and 500 different screening centers, and encourages the development of algorithms that are robust to ungradable and unexpected input data. We evaluated solutions from 14 teams in this paper and found that the best teams performed similarly to a set of 20 expert ophthalmologists and optometrists. The highest-scoring team achieved an area under the receiver operating characteristic curve of 0.99 (95% CI: 0.98-0.99) for detecting ungradable images on-the-fly. Additionally, many of the algorithms showed robust performance when tested on three other publicly available datasets. These results demonstrate the feasibility of robust AI-enabled glaucoma screening.
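The area under the receiver operating characteristic curve used above equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney statistic). A small illustration with made-up screening scores, unrelated to the challenge data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case is scored above a negative
    case (Mann-Whitney statistic); ties count as one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# made-up referable-glaucoma scores: positives vs. negatives
print(round(roc_auc([0.9, 0.8, 0.75], [0.3, 0.4, 0.8]), 3))  # -> 0.833
```

On this reading, the challenge's AUC of 0.99 for ungradable-image detection means an ungradable image is ranked above a gradable one in roughly 99 of 100 random pairs.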


Subject(s)
Artificial Intelligence, Glaucoma, Humans, Glaucoma/diagnostic imaging, Fundus Oculi, Ophthalmologic Diagnostic Techniques, Algorithms
11.
Eye (Lond) ; 38(5): 863-870, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37875700

ABSTRACT

BACKGROUND/OBJECTIVES: To analyse short-term changes of mean photoreceptor thickness (PRT) on the ETDRS grid after vitrectomy and membrane peeling in patients with epiretinal membrane (ERM). SUBJECTS/METHODS: Forty-eight patients with idiopathic ERM were included in this prospective study. Study examinations comprised best-corrected visual acuity (BCVA) and optical coherence tomography (OCT) before surgery and 1 week (W1), 1 month (M1) and 3 months (M3) after surgery. Mean PRT was assessed using an automated algorithm and correlated with BCVA and central retinal thickness (CRT). RESULTS: Compared with baseline, PRT of the study eye decreased significantly at W1 in the 1 mm, 3 mm and 6 mm areas (all p-values < 0.001) and at M1 (p = 0.009) and M3 (p = 0.019) in the central 1 mm area, increased significantly at M3 in the 6 mm area (p < 0.001), and showed no significant change at M1 in the 3 mm and 6 mm areas or at M3 in the 3 mm area (all p-values > 0.05). BCVA increased significantly from baseline to M3 (0.3 logMAR to 0.15 logMAR; Snellen equivalent 20/40 to 20/28; p < 0.001). There was no correlation between baseline PRT and BCVA at any visit after surgery, nor between PRT and BCVA at any visit (all p-values > 0.05). The decrease in PRT in the 1 mm (p < 0.001), 3 mm (p = 0.013) and 6 mm (p = 0.034) areas after one week correlated with the increase in CRT (449.9 µm to 462.2 µm). CONCLUSIONS: Although the photoreceptor layer is morphologically affected by ERMs and by their surgical removal, these changes do not correlate with BCVA. Thus, patients with photoreceptor layer alterations due to ERM may still benefit from surgery and achieve good functional rehabilitation thereafter.


Subject(s)
Epiretinal Membrane, Humans, Epiretinal Membrane/surgery, Prospective Studies, Retrospective Studies, Retina, Optical Coherence Tomography/methods, Vitrectomy/methods
12.
Surv Ophthalmol ; 69(2): 165-172, 2024.
Article in English | MEDLINE | ID: mdl-37890677

ABSTRACT

There is a need to accurately identify prognostic factors that determine the progression of intermediate to late-stage age-related macular degeneration (AMD). Currently, clinicians cannot provide individualised prognoses of disease progression. Moreover, enriching clinical trials with rapid progressors may facilitate delivery of shorter intervention trials aimed at delaying or preventing progression to late AMD. Thus, we performed a systematic review to outline the reported prognostic factors for the progression of intermediate to late AMD and assess the accuracy of their reporting. A meta-analysis was originally planned. Synonyms of AMD and disease progression were used to search Medline and EMBASE for articles investigating AMD progression published between 1991 and 2021. The initial search returned 3229 articles. Predetermined eligibility criteria were employed to systematically screen papers by two reviewers working independently and in duplicate. Quality appraisal and data extraction were performed by a team of reviewers. Only 6 studies met the eligibility criteria. Based on these articles, exploratory prognostic factors for progression of intermediate to late AMD included phenotypic features (e.g. location and size of drusen), age, smoking status, ocular and systemic co-morbidities, race, and genotype. Overall, study heterogeneity precluded reporting by forest plots and meta-analysis. The most commonly reported prognostic factors were baseline drusen volume/size, which was associated with progression to neovascular AMD, and outer retinal thinning, which was linked to progression to geographic atrophy. In conclusion, the poor methodological quality of the included studies warrants cautious interpretation of our findings. Rigorous studies are warranted to provide robust evidence in the future.


Subject(s)
Retinal Drusen, Wet Macular Degeneration, Humans, Prognosis, Angiogenesis Inhibitors, Disease Progression, Visual Acuity, Vascular Endothelial Growth Factor A
13.
IEEE Trans Med Imaging ; 43(3): 1165-1179, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37934647

ABSTRACT

Robust forecasting of the future anatomical changes inflicted by an ongoing disease is an extremely challenging task that is out of grasp even for experienced healthcare professionals. Such a capability, however, is of great importance, since it can improve patient management by providing information on the speed of disease progression already at the admission stage, or it can enrich clinical trials with fast progressors and avoid the need for control arms by means of digital twins. In this work, we develop a deep learning method that models the evolution of age-related diseases by processing a single medical scan and providing a segmentation of the target anatomy at a requested future point in time. Our method represents a time-invariant physical process and solves a large-scale problem of modeling temporal pixel-level changes utilizing NeuralODEs. In addition, we demonstrate approaches to incorporate prior domain-specific constraints into our method and define a temporal Dice loss for learning temporal objectives. To evaluate the applicability of our approach across different age-related diseases and imaging modalities, we developed and tested the proposed method on datasets with 967 retinal OCT volumes of 100 patients with Geographic Atrophy and 2823 brain MRI volumes of 633 patients with Alzheimer's Disease. For Geographic Atrophy, the proposed method outperformed the related baseline models in atrophy growth prediction. For Alzheimer's Disease, the proposed method demonstrated remarkable performance in predicting the brain ventricle changes induced by the disease, achieving the state-of-the-art result on the TADPOLE cross-sectional prediction challenge dataset.


Subject(s)
Alzheimer Disease, Geographic Atrophy, Humans, Alzheimer Disease/diagnostic imaging, Cross-Sectional Studies, Magnetic Resonance Imaging/methods, Disease Progression
14.
Sci Rep ; 13(1): 19545, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37945665

ABSTRACT

Real-world retinal optical coherence tomography (OCT) scans are available in abundance in primary and secondary eye care centres. They contain a wealth of information to be analyzed in retrospective studies. The associated electronic health records alone are often not enough to generate a high-quality dataset for clinical, statistical, and machine learning analysis. We have developed a deep learning-based age-related macular degeneration (AMD) stage classifier to efficiently identify the first onset of the early/intermediate (iAMD), atrophic (GA), and neovascular (nAMD) stages of AMD in retrospective data. We trained a two-stage convolutional neural network to classify macula-centered 3D volumes from Topcon OCT images into 4 classes: Normal, iAMD, GA, and nAMD. In the first stage, a 2D ResNet50 is trained to identify the disease categories on individual OCT B-scans, while in the second stage, four smaller models (ResNets) use the concatenated B-scan-wise outputs from the first stage to classify the entire OCT volume. Classification uncertainty estimates are generated with Monte-Carlo dropout at inference time. The model was trained on a real-world OCT dataset (3765 scans of 1849 eyes) and extensively evaluated, reaching an average ROC AUC of 0.94 on a real-world test set.
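Monte-Carlo dropout at inference time, as used here, averages class probabilities over repeated stochastic forward passes. A toy sketch in which a noisy logit function stands in for a network with dropout left enabled; the four class names mirror the abstract, everything else is illustrative:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def mc_dropout_predict(stochastic_logits, n_samples: int = 50, seed: int = 0):
    """Average class probabilities over repeated stochastic forward passes and
    report predictive entropy as an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    probs = np.stack([softmax(stochastic_logits(rng)) for _ in range(n_samples)])
    mean_p = probs.mean(axis=0)
    entropy = float(-(mean_p * np.log(mean_p + 1e-12)).sum())
    return mean_p, entropy

# toy stand-in for a dropout-enabled network: noisy logits over
# the 4 classes (Normal, iAMD, GA, nAMD)
base = np.array([3.0, 0.5, 0.2, 0.1])
mean_p, uncertainty = mc_dropout_predict(lambda rng: base + rng.normal(0, 0.5, 4))
print(int(mean_p.argmax()))  # -> 0 (the class with the dominant logit)
```

Higher predictive entropy flags scans where the stochastic passes disagree, which is the signal such classifiers use to defer uncertain cases to a human grader.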


Subject(s)
Deep Learning, Macular Degeneration, Humans, Optical Coherence Tomography/methods, Retrospective Studies, Macular Degeneration/diagnostic imaging, Neural Networks (Computer)
15.
Can J Ophthalmol ; 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-37989493

ABSTRACT

OBJECTIVE: To investigate the effect of macular fluid volumes (subretinal fluid [SRF], intraretinal fluid [IRF], and pigment epithelium detachment [PED]) after initial treatment on functional and structural outcomes in neovascular age-related macular degeneration in a real-world cohort from Fight Retinal Blindness! METHODS: Treatment-naive neovascular age-related macular degeneration patients from Fight Retinal Blindness! (Zürich, Switzerland) were included. Macular fluid on optical coherence tomography was automatically quantified using an approved artificial intelligence algorithm. Follow-up of macular fluid, number of anti-vascular endothelial growth factor treatments, and the effect of fluid volumes after initial treatment (high, top 25%; low, bottom 75%) on best-corrected visual acuity and on the development of macular atrophy and fibrosis were investigated over 48 months. RESULTS: A total of 209 eyes (mean age, 78.3 years) were included. Patients with high IRF volumes after initial treatment differed by -2.6 letters (p = 0.021) and -7.4 letters (p = 0.007) at months 12 and 48, respectively. Eyes with high IRF received significantly more treatments (+1.6 [p < 0.001] and +5.3 [p = 0.002] at months 12 and 48, respectively). Patients with high SRF or PED had comparable best-corrected visual acuity outcomes but received significantly more treatments for SRF (+2.4 [p < 0.001] and +11.4 [p < 0.001] at months 12 and 48, respectively) and PED (+1.2 [p = 0.001] and +7.8 [p < 0.001] at months 12 and 48, respectively). DISCUSSION: Patients with high macular fluid after initial treatment are at risk of losing vision that may not be compensable with higher treatment frequency for IRF. Higher treatment frequency for SRF and PED may result in comparable treatment outcomes. Quantification of macular fluid in all compartments is essential to detect eyes at risk of aggressive disease.

16.
Med Image Anal ; 90: 102938, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37806020

ABSTRACT

Glaucoma is a chronic neurodegenerative condition that is one of the world's leading causes of irreversible but preventable blindness, generally resulting from a lack of timely detection and treatment. Early screening is thus essential for early treatment to preserve vision and maintain quality of life. Colour fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities have prominent biomarkers to indicate glaucoma suspects, such as the vertical cup-to-disc ratio (vCDR) on fundus images and retinal nerve fiber layer (RNFL) thickness on OCT volumes. In clinical practice, it is often recommended to undergo both screenings for a more accurate and reliable diagnosis. However, although numerous algorithms have been proposed for automated glaucoma detection based on fundus images or OCT volumes, few methods leverage both modalities. To fill this research gap, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both the 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus colour photography and 3D OCT volumes, which is the first multi-modality dataset for machine learning-based glaucoma grading. In addition, an evaluation framework was established to evaluate the performance of the submitted methods. During the challenge, 1272 results were submitted, and the ten best-performing teams were selected for the final stage. We analyse their results and summarize their methods in the paper. Since all the teams submitted their source code in the challenge, we conducted a detailed ablation study to verify the effectiveness of the particular modules proposed. Finally, we identify the proposed techniques and strategies that could be of practical value for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will serve as an essential guideline and benchmark for future research.


Subject(s)
Glaucoma, Humans, Glaucoma/diagnostic imaging, Retina, Fundus Oculi, Ophthalmologic Diagnostic Techniques, Blindness, Optical Coherence Tomography/methods
17.
Sci Rep ; 13(1): 16231, 2023 09 27.
Article in English | MEDLINE | ID: mdl-37758754

ABSTRACT

Deep neural networks have been increasingly proposed for automated screening and diagnosis of retinal diseases from optical coherence tomography (OCT), but often provide high-confidence predictions on out-of-distribution (OOD) cases, compromising their clinical usage. With this in mind, we performed an in-depth comparative analysis of the state-of-the-art uncertainty estimation methods for OOD detection in retinal OCT imaging. The analysis was performed within the use-case of automated screening and staging of age-related macular degeneration (AMD), one of the leading causes of blindness worldwide, where we achieved a macro-average area under the curve (AUC) of 0.981 for AMD classification. We focus on a few-shot Outlier Exposure (OE) method and the detection of near-OOD cases that share pathomorphological characteristics with the inlier AMD classes. Scoring the OOD case based on the Cosine distance in the feature space from the penultimate network layer proved to be a robust approach for OOD detection, especially in combination with OE. Using Cosine distance and only 8 outliers exposed per class, we were able to improve the near-OOD detection performance of the OE with Reject Bucket method by approximately 10% compared to without OE, reaching an AUC of 0.937. The Cosine distance served as a robust metric for OOD detection of both known and unknown classes and should thus be considered as an alternative to the reject bucket class probability in OE approaches, especially in the few-shot scenario. The inclusion of these methodologies did not come at the expense of classification performance, and can substantially improve the reliability and trustworthiness of the resulting deep learning-based diagnostic systems in the context of retinal OCT.
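Scoring out-of-distribution cases by cosine distance in feature space can be sketched as below; the feature vectors and per-class means are made up, standing in for penultimate-layer activations of a trained classifier rather than anything from the study:

```python
import numpy as np

def cosine_ood_score(feature: np.ndarray, class_means: list) -> float:
    """OOD score: minimum cosine distance between a feature vector and the
    mean feature of each inlier class; larger means more out-of-distribution."""
    f = feature / np.linalg.norm(feature)
    return min(1.0 - float(f @ (mu / np.linalg.norm(mu))) for mu in class_means)

# made-up 3-D features with two inlier class means
means = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
inlier = np.array([0.9, 0.1, 0.0])   # close to class 0
outlier = np.array([0.0, 0.0, 1.0])  # orthogonal to both classes
print(cosine_ood_score(inlier, means) < cosine_ood_score(outlier, means))  # -> True
```

Thresholding this score separates near-OOD samples from the inlier classes without retraining the classifier, which is what makes it attractive as an alternative to a reject-bucket probability.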


Subject(s)
Deep Learning, Macular Degeneration, Humans, Optical Coherence Tomography, Reproducibility of Results, Area Under Curve, Behavior Therapy, Macular Degeneration/diagnostic imaging
18.
Br J Ophthalmol ; 2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37775259

ABSTRACT

AIM: To predict anti-vascular endothelial growth factor (VEGF) treatment requirements, visual acuity and morphological outcomes in neovascular age-related macular degeneration (nAMD) using fluid quantification by artificial intelligence (AI) in a real-world cohort. METHODS: Spectral-domain optical coherence tomography data of 158 treatment-naïve patients with nAMD from the Fight Retinal Blindness! registry in Zurich were processed at baseline and after initial intravitreal anti-VEGF treatment to predict subsequent 1-year and 4-year outcomes. Intraretinal fluid, subretinal fluid and pigment epithelial detachment volumes were segmented using a deep learning algorithm (Vienna Fluid Monitor, RetInSight, Vienna, Austria). A machine learning model predicting future treatment requirements and morphological outcomes was built from the computed set of quantitative features. RESULTS: Two hundred and two eyes from 158 patients were evaluated. 107 eyes received a below-median (≤7) and 95 eyes an above-median (≥8) number of injections in the first year, which the model predicted with a mean area under the curve (AUC) of 0.77 (95% CI 0.71 to 0.83). Best-corrected visual acuity at baseline was the most relevant predictor of final visual outcomes after 1 year. Over 4 years, half of the eyes progressed to macular atrophy (MA), with the model distinguishing MA from non-MA eyes with a mean AUC of 0.70 (95% CI 0.61 to 0.79). Prediction of subretinal fibrosis reached an AUC of 0.74 (95% CI 0.63 to 0.81). CONCLUSIONS: Regulatory-approved AI-based fluid monitoring allows clinicians to use automated algorithms to prospectively guide patient treatment in nAMD. Furthermore, retinal fluid localisation and quantification can predict long-term morphological outcomes.
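The second stage described here — predicting injection burden from quantitative fluid features — can be illustrated with a generic sketch. The synthetic feature matrix, the specific feature layout (baseline and post-treatment IRF/SRF/PED volumes), and the random-forest choice are all assumptions for demonstration; the study's actual model and feature set are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per eye, columns = fluid volumes
# (IRF, SRF, PED) at baseline and after initial treatment.
# Label 1 = above-median (>=8) injections in the first year.
rng = np.random.default_rng(42)
n_eyes = 200
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_eyes, 6))

# Synthetic label: injection need driven by baseline fluid burden plus noise.
burden = X[:, :3].sum(axis=1)
y = (burden + rng.normal(0.0, 1.0, n_eyes) > np.median(burden)).astype(int)

# Cross-validated AUC, mirroring how such a predictor would be evaluated.
aucs = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5, scoring="roc_auc",
)
print(f"mean AUC: {aucs.mean():.2f}")
```

On real data the labels come from the registry's recorded injection counts rather than a synthetic rule, and feature importances can indicate which fluid compartment drives the prediction.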

19.
Transl Vis Sci Technol ; 12(8): 21, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37624605

ABSTRACT

Purpose: To investigate and compare novel volumetric microperimetry (MP)-derived metrics in intermediate age-related macular degeneration (iAMD), as current MP metrics show high variability and low sensitivity. Methods: This is a cross-sectional analysis of microperimetry baseline data from the multicenter, prospective PINNACLE study (ClinicalTrials.gov NCT04269304). The Visual Field Modeling and Analysis (VFMA) software and an open-source implementation (OSI) were applied to calculate MP-derived hill-of-vision (HOV) surface plots and the total volume (VTOT) beneath them. Bland-Altman plots were used for methodological comparison, and the association of retinal sensitivity metrics with explanatory variables was tested with mixed-effects models. Results: In total, 247 eyes of 189 participants (75 ± 7.3 years) were included in the analysis. The VTOT outputs of VFMA and OSI differed significantly (P < 0.0001). VFMA yielded slightly higher coefficients of determination than OSI and mean sensitivity (MS) in univariable and multivariable modeling, for example, in association with low-luminance visual acuity (LLVA) (marginal R2/conditional R2: VFMA 0.171/0.771, OSI 0.162/0.765, MS 0.133/0.755). In the multivariable analysis, LLVA was the only demonstrable predictor of VFMA VTOT (t-value, P-value: -7.5, <0.001) and MS (-6.5, <0.001). Conclusions: The HOV-derived metric VTOT exhibits favorable characteristics compared with MS for evaluating retinal sensitivity. The outputs of VFMA and OSI are not directly interchangeable in this cross-sectional analysis; longitudinal analysis is necessary to assess their ability to detect change. Translational Relevance: This study explores new volumetric MP endpoints for future application in therapeutic trials in iAMD and reports specific characteristics of the available HOV software applications.
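Conceptually, the total volume VTOT is the integral of the retinal sensitivity surface (in dB) over the tested visual-field area (in degrees). A minimal sketch of that computation on a regular grid follows; the function name and regular-grid simplification are assumptions for illustration (neither VFMA nor the OSI is reproduced here — both interpolate scattered test points before integrating):

```python
import numpy as np

def hov_total_volume(x_deg, y_deg, sens_db):
    """Trapezoidal integration of a hill-of-vision surface.

    x_deg, y_deg : 1D grid coordinates in degrees of visual field
    sens_db      : 2D sensitivity surface of shape (len(x_deg), len(y_deg))
    Returns the volume under the surface, V_TOT, in dB*deg^2.
    """
    # integrate along y for each x column (trapezoid rule) ...
    dy = np.diff(y_deg)
    inner = ((sens_db[:, :-1] + sens_db[:, 1:]) * 0.5 * dy).sum(axis=1)
    # ... then integrate the resulting curve along x
    dx = np.diff(x_deg)
    return float(((inner[:-1] + inner[1:]) * 0.5 * dx).sum())
```

A flat surface of 30 dB over a 10° x 10° field, for instance, yields a volume of 3000 dB·deg², which makes the unit of the metric concrete.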


Subject(s)
Benchmarking, Macular Degeneration, Humans, Cross-Sectional Studies, Prospective Studies, Visual Field Tests, Macular Degeneration/diagnosis, Retina/diagnostic imaging
20.
Biomed Opt Express ; 14(7): 3726-3747, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37497506

ABSTRACT

Optical coherence tomography (OCT) is the most widely used imaging modality in ophthalmology. There are multiple variations of OCT imaging capable of producing complementary information, so registering the resulting volumes is desirable in order to combine that information. In this work, we propose a novel automated pipeline to register OCT images produced by different devices. The pipeline consists of two steps: a multimodal 2D en-face registration based on deep learning, and a Z-axis (axial) registration based on retinal layer segmentation. We evaluate our method using data from a Heidelberg Spectralis and an experimental PS-OCT device. The empirical results demonstrate high-quality registrations, with mean errors of approximately 46 µm for the 2D registration and 9.59 µm for the Z-axis registration. These registrations may help in multiple clinical applications, such as the validation of layer segmentations, among others.
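The Z-axis step can be illustrated with a toy sketch: once a shared layer (e.g. the RPE) has been segmented in both volumes, the per-A-scan axial shift is simply the difference of the segmented depths, applied column-wise to the moving volume. The function names and the integer-pixel simplification are assumptions for illustration, not the paper's implementation (which would operate at sub-pixel precision):

```python
import numpy as np

def axial_shift_from_layers(layer_depth_fixed, layer_depth_moving):
    """Per-A-scan axial (Z) shift that aligns a moving OCT volume to a
    fixed one, using the segmented depth of a shared layer (e.g. the RPE)
    as the anchor. Inputs and output share the same unit (px or um)."""
    return layer_depth_fixed - layer_depth_moving

def apply_axial_shift(bscan, shifts_px):
    """Shift each A-scan (column) of a (z, x) B-scan by its own integer
    axial offset. Positive shifts move content toward larger z."""
    out = np.zeros_like(bscan)
    for i, s in enumerate(np.round(shifts_px).astype(int)):
        out[:, i] = np.roll(bscan[:, i], s)
    return out
```

In the full pipeline this axial alignment follows the deep-learning-based 2D en-face registration, so that each A-scan pair being compared already corresponds to the same retinal location.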
