ABSTRACT
PURPOSE: To develop a fully automated algorithm for accurate detection of the fovea location in atrophic age-related macular degeneration (AMD), based on spectral-domain optical coherence tomography (SD-OCT) scans. METHODS: Image processing was conducted on a cohort of patients affected by geographic atrophy (GA). SD-OCT images (cube volumes) from 55 eyes (51 patients) were extracted and processed with a layer segmentation algorithm to segment the ganglion cell layer (GCL) and inner plexiform layer (IPL). Their en face thickness projection was convolved with a 2D Gaussian filter to find the global maximum, which corresponded to the detected fovea. Detection accuracy was evaluated by computing the distance between the manual annotation and the predicted location. RESULTS: The mean total location error was 0.101 ± 0.145 mm; the mean errors along the horizontal and vertical en face axes were 0.064 ± 0.140 mm and 0.063 ± 0.060 mm, respectively. The mean error for foveal and extrafoveal retinal pigment epithelium and outer retinal atrophy (RORA) was 0.096 ± 0.070 mm and 0.107 ± 0.212 mm, respectively. Our method obtained a significantly smaller error than the fovea localization algorithm built into the OCT device (0.313 ± 0.283 mm, p < .001) or a method based on the thinnest central retinal thickness (0.843 ± 1.221 mm, p < .001). Significant outliers were flagged by the method's reliability score. CONCLUSION: Despite the retinal anatomical alterations related to GA, the presented algorithm was able to detect the foveal location on SD-OCT cubes with high reliability. Such an algorithm could be useful for studying structure-function correlations in atrophic AMD and could have further applications in other retinal pathologies.
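As a rough illustration of the detection step described above, the following sketch smooths a synthetic en face GCL+IPL thickness map with a 2D Gaussian filter and takes the global maximum. The filter width and map resolution are assumptions for the example, not the paper's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_fovea(thickness_map, sigma_px=10):
    """Return (row, col) of the global maximum of the Gaussian-smoothed
    en face GCL+IPL thickness projection (sigma is a hypothetical value)."""
    smoothed = gaussian_filter(thickness_map.astype(float), sigma=sigma_px)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)

# Toy en face map: a smooth bump whose peak stands in for the foveal signature.
yy, xx = np.mgrid[0:128, 0:128]
toy_map = np.exp(-((yy - 60) ** 2 + (xx - 70) ** 2) / (2 * 15.0 ** 2))
row, col = detect_fovea(toy_map)
```

In practice the thickness map would come from the layer segmentation of the SD-OCT cube rather than a synthetic bump.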
Subject(s)
Geographic Atrophy, Fovea Centralis/pathology, Geographic Atrophy/diagnosis, Humans, Reproducibility of Results, Retinal Pigment Epithelium/pathology, Optical Coherence Tomography/methods
ABSTRACT
INTRODUCTION: In this retrospective cohort study, we aimed to evaluate the performance of an artificial intelligence (AI) algorithm in detecting retinal fluid in spectral-domain OCT volume scans from a large cohort of patients with neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME), and to analyze the algorithm's behavior. METHODS: A total of 3,981 OCT volumes from 374 patients with AMD and 11,501 OCT volumes from 811 patients with DME were acquired with a Heidelberg Spectralis OCT device (Heidelberg Engineering Inc., Heidelberg, Germany) between 2013 and 2021. Each OCT volume was annotated for the presence or absence of intraretinal fluid (IRF) and subretinal fluid (SRF) by masked reading-center graders (ground truth). The performance of a previously published AI algorithm in detecting IRF and SRF separately, and of a combined fluid detector (IRF and/or SRF), was evaluated on the same OCT volumes. The sources of disagreement between annotation and prediction, and their relationship to central retinal thickness, were analyzed. We computed the mean areas under the receiver operating characteristic curves (AUC), areas under the precision-recall curves (AP), accuracy, sensitivity, specificity, and precision. RESULTS: The AUC for IRF was 0.92 and 0.98, and for SRF 0.98 and 0.99, in the AMD and DME cohorts, respectively. The AP for IRF was 0.89 and 1.00, and for SRF 0.97 and 0.93, in the AMD and DME cohorts, respectively. The accuracy, specificity, and sensitivity for IRF were 0.87, 0.88, and 0.84, and 0.93, 0.95, and 0.93, and for SRF 0.93, 0.93, and 0.93, and 0.95, 0.95, and 0.95, in the AMD and DME cohorts, respectively. For detecting any fluid, the AUC was 0.95 and 0.98, and the accuracy, specificity, and sensitivity were 0.89, 0.93, and 0.90, and 0.95, 0.88, and 0.93, in the AMD and DME cohorts, respectively. False positives occurred in the presence of retinal shadow artifacts and strong retinal deformation. False negatives were due to small hyporeflective areas in combination with poor image quality.
The combined detector correctly predicted more OCT volumes than the single detectors for IRF and SRF: 89.0% versus 81.6% in the AMD cohort and 93.1% versus 88.6% in the DME cohort. DISCUSSION/CONCLUSION: The AI-based fluid detector achieves high performance for retinal fluid detection in a very large dataset dedicated to AMD and DME. Combining the single detectors provides better fluid detection accuracy than considering them separately. The observed independence of the single detectors suggests that they learned features particular to IRF and SRF.
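The evaluation metrics named in the abstract can be computed per OCT volume as in this sketch. The toy labels and scores are invented, and the decision threshold of 0.5 is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

def fluid_detection_metrics(y_true, y_score, threshold=0.5):
    """AUC, AP, accuracy, sensitivity, and specificity for a per-volume
    fluid detector that outputs a score in [0, 1]."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, y_score),
        "ap": average_precision_score(y_true, y_score),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy example: 6 volumes, grader ground truth vs. detector scores.
m = fluid_detection_metrics([0, 0, 1, 1, 1, 0], [0.1, 0.4, 0.9, 0.8, 0.3, 0.2])
```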
Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Macular Degeneration, Macular Edema, Wet Macular Degeneration, Humans, Macular Edema/diagnosis, Diabetic Retinopathy/diagnosis, Optical Coherence Tomography/methods, Subretinal Fluid, Retrospective Studies, Artificial Intelligence, Macular Degeneration/diagnosis, Angiogenesis Inhibitors
ABSTRACT
AIM: To explore associations between artificial intelligence (AI)-based fluid compartment quantifications and 12-month visual outcomes in OCT images from a real-world, multicentre, national cohort of treatment-naïve neovascular age-related macular degeneration (nAMD) eyes. METHODS: Demographics, visual acuity (VA), drug, and number-of-injections data were collected using a validated web-based tool. Fluid compartment quantifications, including intraretinal fluid (IRF), subretinal fluid (SRF) and pigment epithelial detachment (PED) in the fovea (1 mm), parafovea (3 mm) and perifovea (6 mm), were measured in nanoliters (nL) using a validated AI tool. RESULTS: 452 naïve nAMD eyes showed a mean VA gain of +5.5 letters with a median of 7 injections over 12 months. Baseline foveal IRF was associated with poorer baseline (44.7 vs 63.4 letters) and final VA (52.1 vs 69.1); SRF with better final VA (67.1 vs 59.0) and greater VA gains (+7.1 vs +1.9); and PED with poorer baseline (48.8 vs 57.3) and final VA (55.1 vs 64.1). Predicted VA gains were greater for foveal SRF (+6.2 vs +0.6), parafoveal SRF (+6.9 vs +1.3), perifoveal SRF (+6.2 vs -0.1) and parafoveal IRF (+7.4 vs +3.6, all p<0.05). Fluid dynamics analysis revealed the greatest relative volume reduction for foveal SRF (-16.4 nL, -86.8%), followed by IRF (-17.2 nL, -84.7%) and PED (-19.1 nL, -28.6%). Subgroup analysis showed greater reductions in eyes with a higher number of injections. CONCLUSION: This real-world study describes an AI-based analysis of fluid dynamics and defines baseline OCT-based patient profiles associated with 12-month visual outcomes in a large nationwide cohort of treated naïve nAMD eyes.
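Fluid volumes in nanoliters follow directly from a voxel count and the scan's voxel spacing, as in this sketch; the spacing values below are hypothetical, not taken from the study.

```python
def voxels_to_nanoliters(n_voxels, dx_mm, dy_mm, dz_mm):
    """Convert a voxel count from a segmented fluid compartment into
    nanoliters. 1 mm^3 = 1 microliter = 1000 nL."""
    return n_voxels * dx_mm * dy_mm * dz_mm * 1000.0

# Hypothetical OCT voxel spacing (assumed for illustration): 11.3 um lateral,
# 3.9 um axial, 120 um between B-scans.
vol_nl = voxels_to_nanoliters(100_000, 0.0113, 0.0039, 0.120)
```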
Subject(s)
Macula Lutea, Macular Degeneration, Retinal Detachment, Wet Macular Degeneration, Humans, Ranibizumab/therapeutic use, Angiogenesis Inhibitors/therapeutic use, Vascular Endothelial Growth Factor A, Artificial Intelligence, Optical Coherence Tomography, Intravitreal Injections, Retinal Detachment/drug therapy, Macular Degeneration/drug therapy, Subretinal Fluid, Wet Macular Degeneration/diagnosis, Wet Macular Degeneration/drug therapy
ABSTRACT
Purpose: Diabetic retinopathy (DR) is the leading cause of vision impairment in working-age adults. Automated screening can increase DR detection at early stages at relatively low cost. We developed and evaluated a cloud-based screening tool that uses artificial intelligence (AI), the LuxIA algorithm, to detect DR from a single fundus image. Methods: Color fundus images that had previously been graded by expert readers were collected from the Canarian Health Service (Retisalud) and used to train LuxIA, a deep-learning-based algorithm for the detection of more-than-mild DR. The algorithm was deployed in the Discovery cloud platform to evaluate each test set. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve were computed using a bootstrapping method to evaluate the algorithm's performance, which was compared across different publicly available datasets. A usability test was performed to assess the integration into a clinical tool. Results: Three separate datasets, Messidor-2, APTOS, and a holdout set from Retisalud, were evaluated. Mean sensitivity and specificity with 95% confidence intervals (CIs) for these three datasets were 0.901 (0.901-0.902) and 0.955 (0.955-0.956), 0.995 (0.995-0.995) and 0.821 (0.821-0.823), and 0.911 (0.907-0.912) and 0.880 (0.879-0.880), respectively. The usability test confirmed the successful integration of LuxIA into Discovery. Conclusions: Clinical data were used to train the deep-learning-based algorithm LuxIA to expert-level performance. The whole process (image uploading and analysis) was integrated into the cloud-based platform Discovery, allowing more patients to access expert-level screening tools. Translational Relevance: Using the cloud-based LuxIA tool as part of a screening program may give diabetic patients greater access to specialist-level decisions, without the need for a consultation.
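The bootstrapped confidence intervals reported above can be reproduced in spirit with a percentile bootstrap, as in this sketch; the data, metric choice, and number of resamples are illustrative assumptions.

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a classification metric."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        stats.append(metric(y_true[idx], y_pred[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return metric(y_true, y_pred), (lo, hi)

def sensitivity(y_true, y_pred):
    pos = y_true == 1
    return (y_pred[pos] == 1).mean() if pos.any() else np.nan

# Toy example: referable-DR ground truth vs. algorithm output.
y_true = np.array([1] * 40 + [0] * 60)
y_pred = np.array([1] * 36 + [0] * 4 + [0] * 55 + [1] * 5)
point, (lo, hi) = bootstrap_ci(y_true, y_pred, sensitivity)
```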
Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Tool Use Behavior, Adult, Humans, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Cloud Computing, Algorithms
ABSTRACT
Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible thanks to the introduction of optical coherence tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of the accurate retinal depiction. In this study, we developed a fully automated algorithm segmenting retinal pigment epithelial and outer retinal atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into train and test sets. The training set was used to develop a convolutional neural network (CNN). The performance of the algorithm was established by cross-validation and by comparison on the test set against ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved the model performance. The proposed model identified RORA with performance matching human experts and has the potential to rapidly identify atrophy with high consistency.
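The Dice score, sensitivity, and precision used to compare against the two experts can be computed from binary masks as follows; the toy masks here are synthetic.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Pixel-wise Dice, sensitivity, and precision for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    return {
        "dice": 2 * tp / (pred.sum() + gt.sum()),
        "sensitivity": tp / gt.sum(),
        "precision": tp / pred.sum(),
    }

# Toy masks: a ground-truth RORA region vs. a prediction shifted by 2 pixels.
gt = np.zeros((64, 64), bool)
gt[20:40, 20:40] = True
pred = np.zeros((64, 64), bool)
pred[22:42, 20:40] = True
m = overlap_metrics(pred, gt)
```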
Subject(s)
Algorithms, Geographic Atrophy/diagnostic imaging, Macular Degeneration/diagnostic imaging, Optical Coherence Tomography/methods, Aged, Aged, 80 and over, Deep Learning, Female, Humans, Male, Neural Networks, Computer, Observer Variation, Pattern Recognition, Automated/methods, Pattern Recognition, Automated/statistics & numerical data, Retinal Pigment Epithelium/diagnostic imaging, Optical Coherence Tomography/statistics & numerical data
ABSTRACT
PURPOSE: To assess the potential of machine learning to predict low and high treatment demand in real life in patients with neovascular age-related macular degeneration (nAMD), retinal vein occlusion (RVO), and diabetic macular edema (DME) treated according to a treat-and-extend regimen (TER). DESIGN: Retrospective cohort study. PARTICIPANTS: Three hundred seventy-seven eyes (340 patients) with nAMD and 333 eyes (285 patients) with RVO or DME treated with anti-vascular endothelial growth factor (anti-VEGF) agents according to a predefined TER from 2014 through 2018. METHODS: Eyes were grouped by disease into low, moderate, and high treatment demand, defined by the average treatment interval (low, ≥10 weeks; high, ≤5 weeks; moderate, remaining eyes). Two random forest models were trained to predict the probability of the long-term treatment demand of a new patient. Both models use morphological features automatically extracted from the OCT volumes at baseline and after 2 consecutive visits, as well as patient demographic information. Evaluation of the models included 10-fold cross-validation, ensuring that no patient was present in both the training set (nAMD, approximately 339; RVO and DME, approximately 300) and the test set (nAMD, approximately 38; RVO and DME, approximately 33). MAIN OUTCOME MEASURES: Mean area under the receiver operating characteristic curve (AUC) of both models; contribution to the prediction and statistical significance of the input features. RESULTS: Based on the first 3 visits, it was possible to predict low and high treatment demand in nAMD eyes and in RVO and DME eyes with similar accuracy. The distribution of low, high, and moderate demanders was 127, 42, and 208, respectively, for nAMD and 61, 50, and 222, respectively, for RVO and DME. The nAMD-trained models yielded mean AUCs of 0.79 and 0.79 over the 10 cross-validation folds for low and high demand, respectively.
Models for RVO and DME showed similar results, with a mean AUC of 0.76 and 0.78 for low and high demand, respectively. Even more importantly, this study revealed that it is possible to predict low demand reasonably well at the first visit, before the first injection. CONCLUSIONS: Machine learning classifiers can predict treatment demand and may assist in establishing patient-specific treatment plans in the near future.
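A minimal sketch of the labeling rule and classifier family described above, with entirely synthetic features; the real models use OCT-derived morphological features and demographics, which are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def demand_class(mean_interval_weeks):
    """Group an eye by its average anti-VEGF treatment interval, as in the study."""
    if mean_interval_weeks >= 10:
        return "low"
    if mean_interval_weeks <= 5:
        return "high"
    return "moderate"

# Synthetic training data: 5 invented baseline features per eye.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Fabricated rule purely for the sketch: the first feature drives demand.
y = np.where(X[:, 0] > 0.8, "high", np.where(X[:, 0] < -0.8, "low", "moderate"))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
proba = clf.predict_proba(X[:5])  # per-class probabilities for new eyes
```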
Subject(s)
Diabetic Retinopathy/drug therapy, Machine Learning, Macular Edema/drug therapy, Ranibizumab/administration & dosage, Retinal Vein Occlusion/drug therapy, Wet Macular Degeneration/drug therapy, Aged, Aged, 80 and over, Angiogenesis Inhibitors/administration & dosage, Diabetic Retinopathy/complications, Female, Follow-Up Studies, Humans, Intravitreal Injections, Macular Edema/etiology, Male, Middle Aged, Prognosis, Retrospective Studies, Vascular Endothelial Growth Factor A
ABSTRACT
Purpose: To develop a reliable algorithm for the automated identification, localization, and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelium detachment (PED), using a deep-learning approach. Methods: One hundred seven spectral-domain optical coherence tomography (OCT) cube volumes were extracted from nAMD eyes. Manual annotation of IRF, SRF, and PED was performed. Ninety-two OCT volumes served as the training and validation set, and 15 OCT volumes from different patients as the test set. The performance of our fluid segmentation method was quantified by means of pixel-wise metrics and volume correlations and compared to other methods. Repeatability was tested on 42 other eyes with five OCT volume scans acquired on the same day. Results: The fully automated algorithm achieved good performance for the detection of IRF, SRF, and PED. The area under the curve for detection, sensitivity, and specificity was 0.97, 0.95, and 0.99, respectively. The correlation coefficients for the fluid volumes were 0.99, 0.99, and 0.91, respectively. The Dice score was 0.73, 0.67, and 0.82, respectively. For the largest volume quartiles, the Dice scores were >0.90. Including retinal layer segmentation contributed positively to the performance. The repeatability of volume prediction showed standard deviations of 4.0 nL, 3.5 nL, and 20.0 nL for IRF, SRF, and PED, respectively. Conclusions: The deep-learning algorithm simultaneously achieves a high level of performance for the identification and volume measurement of IRF, SRF, and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and a squeeze-and-excitation block in the network architecture were shown to boost the performance.
Translational Relevance: Potential applications include measurements of specific fluid compartments with high reproducibility, assistance in treatment decisions, and the diagnostic or scientific evaluation of relevant subgroups.
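One plausible way to summarize repeatability across five same-day scans is a pooled within-eye standard deviation; whether the study used exactly this summary is not stated, and the volumes below are invented.

```python
import numpy as np

def repeatability_sd(volumes_per_eye):
    """Mean within-eye standard deviation across repeated same-day scans."""
    sds = [np.std(v, ddof=1) for v in volumes_per_eye]
    return float(np.mean(sds))

# Toy example: predicted IRF volumes (nL) for three eyes, five scans each.
eyes = [[10.0, 11.0, 9.5, 10.5, 10.0],
        [50.0, 52.0, 49.0, 51.0, 50.5],
        [0.0, 0.5, 0.0, 0.2, 0.3]]
sd = repeatability_sd(eyes)
```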
Subject(s)
Deep Learning, Macular Degeneration, Angiogenesis Inhibitors/therapeutic use, Humans, Macular Degeneration/drug therapy, Ranibizumab/therapeutic use, Reproducibility of Results, Visual Acuity
ABSTRACT
Purpose: To develop and validate an automatic retinal pigment epithelial and outer retinal atrophy (RORA) progression prediction model for nonexudative age-related macular degeneration (AMD) cases in optical coherence tomography (OCT) scans. Methods: Longitudinal OCT data from 129 eyes (119 patients) with RORA were collected and separated into training and testing groups. RORA was automatically segmented in all scans and additionally manually annotated in the test scans. OCT-based features such as layer thicknesses, mean reflectivity, and a drusen height map served as input to the deep neural network. Based on the baseline OCT scan or the previous-visit OCT, en face RORA predictions were calculated for future patient visits. The performance was quantified over time by means of Dice scores and square-root area errors. Results: The average Dice score for segmentations at baseline was 0.85. When predicting progression from baseline OCTs, the Dice scores ranged from 0.73 to 0.80 for the total RORA area and from 0.46 to 0.72 for the RORA growth region. The square-root area error ranged from 0.13 mm to 0.33 mm. By providing continuous-time output, the model enabled the creation of a patient-specific atrophy risk map. Conclusions: We developed a machine learning method for RORA progression prediction that provides continuous-time output. It was used to compute atrophy risk maps, which indicate time to RORA conversion, a novel and clinically relevant way of representing disease progression. Translational Relevance: Application of recent advances in artificial intelligence to predict patient-specific progression of atrophic AMD.
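A plausible reading of the square-root area error above is the absolute difference of square-rooted areas, which yields a value in mm as reported; the paper's precise definition may differ.

```python
import numpy as np

def sqrt_area_error(pred_area_mm2, true_area_mm2):
    """Square-root area error (mm): |sqrt(A_pred) - sqrt(A_true)|.
    Comparing square roots de-emphasizes absolute lesion size."""
    return abs(np.sqrt(pred_area_mm2) - np.sqrt(true_area_mm2))

err = sqrt_area_error(4.41, 4.00)  # predicted vs. annotated RORA area in mm^2
```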
Subject(s)
Geographic Atrophy, Macular Degeneration, Artificial Intelligence, Atrophy, Disease Progression, Humans, Macular Degeneration/diagnostic imaging, Optical Coherence Tomography
ABSTRACT
To compare drusen volume between Heidelberg spectral-domain (SD-) and Zeiss swept-source (SS) PlexElite optical coherence tomography (OCT), determined by manual and automated segmentation methods. Thirty-two eyes of 24 patients with age-related macular degeneration (AMD) and drusen maculopathy were included. Drusen volumes were calculated and compared within the central 1- and 3-mm ETDRS circles. Drusen segmentation was performed using the automated manufacturer algorithms of the two OCT devices; the automated segmentation was then manually corrected and compared, and finally analyzed using customized software. While on SD-OCT there was a significant difference in mean drusen volume before and after manual correction (mean difference: 0.0188 ± 0.0269 mm3, p < 0.001, corr. p < 0.001, correlation of r = 0.90), no difference was found on SS-OCT (mean difference: 0.0001 ± 0.0003 mm3, p = 0.262, corr. p = 0.524, r = 1.0). Heidelberg-acquired mean drusen volume after manual correction was significantly different from Zeiss-acquired drusen volume after manual correction (mean difference: 0.1231 ± 0.0371 mm3, p < 0.001, corr. p < 0.001, r = 0.68). Using customized software, the difference in measurements between the two devices decreased and the correlation among the measurements improved (mean difference: 0.0547 ± 0.0744 mm3, p = 0.02, corr. p = 0.08, r = 0.937). Heidelberg SD-OCT, Zeiss PlexElite SS-OCT, and the customized software all measured significantly different drusen volumes; therefore, devices/algorithms may not be interchangeable. Third-party customized software helps to minimize differences, which may allow pooling of data from different devices, e.g., in multicenter trials.
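The "p / corr. p / r" reporting pattern above suggests paired tests with a Bonferroni-style correction and a Pearson correlation. The sketch below assumes a paired t-test, which may not match the authors' exact statistics, and uses invented volumes.

```python
import numpy as np
from scipy.stats import ttest_rel

def paired_comparison(a, b, n_tests=2):
    """Paired t-test with a Bonferroni-corrected p-value and Pearson r,
    mirroring the 'p' / 'corr. p' / 'r' reporting style of the abstract."""
    t, p = ttest_rel(a, b)
    r = np.corrcoef(a, b)[0, 1]
    return {"mean_diff": float(np.mean(np.array(a) - np.array(b))),
            "p": p,
            "p_corrected": min(1.0, p * n_tests),
            "r": r}

# Toy drusen volumes (mm^3) from two devices for the same five eyes.
sd_oct = [0.12, 0.08, 0.20, 0.15, 0.10]
ss_oct = [0.10, 0.07, 0.17, 0.13, 0.09]
res = paired_comparison(sd_oct, ss_oct)
```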
ABSTRACT
In this work, we evaluated customized postprocessing software for automatic retinal OCT B-scan enhancement (noise reduction, contrast enhancement, and improved depth quality), applicable to Heidelberg Engineering Spectralis OCT devices. A trained deep neural network was used to process images from an OCT dataset with ground-truth biomarker gradings. Performance was assessed by two expert graders, who rated image quality per B-scan and showed a clear preference for enhanced over original images. Objective measures such as SNR and noise estimation showed a significant improvement in quality. Presence grading of seven biomarkers (IRF, SRF, ERM, drusen, RPD, GA, and iRORA) resulted in similar overall intergrader agreement, with improved agreement for IRF and RPD and more disagreement for high-variance biomarkers such as GA and iRORA.
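SNR for a B-scan can be estimated in many ways; this sketch uses one common rough definition, peak signal over the standard deviation of a presumed noise-only background strip, which is an assumption rather than the paper's estimator.

```python
import numpy as np

def bscan_snr_db(bscan, background_rows=20):
    """Rough SNR estimate in dB: peak intensity over the standard deviation of
    a strip (top rows) assumed to contain only noise."""
    noise = bscan[:background_rows, :]
    return 20.0 * np.log10(bscan.max() / noise.std())

# Toy B-scan: weak noise everywhere, with a bright retina-like band below.
rng = np.random.default_rng(1)
img = np.abs(rng.normal(0, 1.0, (200, 300)))
img[100:140, :] += 100.0
snr = bscan_snr_db(img)
```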
Subject(s)
Fluorescein Angiography/methods, Ophthalmoscopy/methods, Retina/diagnostic imaging, Optical Coherence Tomography/methods, Algorithms, Humans, Neural Networks, Computer, Proof of Concept Study, Retina/physiopathology, Software
ABSTRACT
PURPOSE: We investigated which spectral-domain optical coherence tomography (SD-OCT) display setting is superior for measuring subfoveal choroidal thickness (CT) and compared the results to an automated segmentation software. METHODS: Thirty patients underwent enhanced depth imaging (EDI)-OCT. B-scans were extracted in six different settings (W+N = white background/normal contrast 9; W+H = white background/maximum contrast 16; B+N = black background/normal contrast 12; B+H = black background/maximum contrast 16; C+N = color-encoded image on black background at a predefined contrast of 9; and C+H = color-encoded image on black background at high/maximal contrast of 16), resulting in 180 images. Subfoveal CT was manually measured by nine graders and by automated segmentation software. Intraclass correlation (ICC) was assessed. RESULTS: ICC was higher in normal- than in high-contrast images, and better for achromatic black- than for white-background images. Achromatic images were better than color images. The highest ICC was achieved in B+N (ICC = 0.64), followed by B+H (ICC = 0.54), and W+N and W+H (ICC = 0.5 each). The weakest ICC was obtained with spectral color (ICC = 0.47). Mean manual CT versus mean computer-estimated CT showed a correlation of r = 0.6 (P = 0.001). CONCLUSION: A black background with a white image at normal contrast (B+N) seems the best setting to manually assess subfoveal CT. Automated assessment seems to be a reliable tool for CT measurement. TRANSLATIONAL RELEVANCE: Defining optimized OCT analysis settings improves the evaluation of in vivo imaging.
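The abstract does not state which ICC variant was used; this sketch implements the one-way random, single-rater ICC(1,1) as one standard option, computed from an (n subjects × k raters) matrix.

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random single-rater ICC(1,1): (MSB - MSW) / (MSB + (k-1)*MSW),
    where MSB/MSW are between- and within-subject mean squares. One standard
    ICC variant; the paper's exact form is not stated."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    msb = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With perfectly agreeing raters the ICC is 1; with rater noise it drops toward 0.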
ABSTRACT
Retinal swelling due to the accumulation of fluid is associated with the most vision-threatening retinal diseases. Optical coherence tomography (OCT) is the current standard of care for assessing the presence and quantity of retinal fluid and for image-guided treatment management. Deep learning methods have made their impact across medical imaging, and many retinal OCT analysis methods have been proposed. However, it is currently not clear how successful they are at interpreting retinal fluid on OCT, owing to the lack of standardized benchmarks. To address this, we organized the RETOUCH challenge in conjunction with MICCAI 2017, with eight teams participating. The challenge consisted of two tasks: fluid detection and fluid segmentation. For the first time, it featured all three retinal fluid types, with annotated images provided by two clinical centers, acquired with devices from the three most common OCT vendors, from patients with two different retinal diseases. The analysis revealed that in the detection task, the performance of automated fluid detection was within the inter-grader variability. In the segmentation task, however, fusing the automated methods produced segmentations superior to all individual methods, indicating the need for further improvements in segmentation performance.
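Fusing the automated methods, as found beneficial in the segmentation task, can be approximated by a per-pixel majority vote over binary masks; the challenge's actual fusion scheme may differ.

```python
import numpy as np

def fuse_segmentations(masks):
    """Fuse binary fluid segmentations from several methods by strict
    per-pixel majority vote, a simple stand-in for method fusion."""
    stack = np.stack([m.astype(int) for m in masks])
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 1, 0]])
c = np.array([[1, 1, 1, 0]])
fused = fuse_segmentations([a, b, c])
```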
Subject(s)
Image Interpretation, Computer-Assisted/methods, Retina/diagnostic imaging, Optical Coherence Tomography/methods, Algorithms, Databases, Factual, Humans, Retinal Diseases/diagnostic imaging
ABSTRACT
Deriving accurate three-dimensional (3-D) structural information about materials at the nanometre level is often crucial for understanding their properties. Tomography in transmission electron microscopy (TEM) is a powerful technique that provides such information. It is, however, demanding and sometimes inapplicable, as it requires the acquisition of multiple images within a large tilt arc and hence prolonged exposure to electrons. In some cases, prior knowledge about the structure can tremendously simplify the 3-D reconstruction if incorporated adequately. Here, a novel algorithm is presented that produces a full 3-D reconstruction of curvilinear structures from a stereo pair of TEM images acquired within a small tilt range spanning from only a few to tens of degrees. The reliability of the algorithm is demonstrated through the reconstruction of a model 3-D object from its simulated projections and is compared with that of conventional tomography. The method is experimentally demonstrated for the 3-D visualization of dislocation arrangements in a deformed metallic micro-pillar.
ABSTRACT
Retinoblastoma and uveal melanoma are fast-spreading eye tumors usually diagnosed using 2D fundus image photography (Fundus) and 2D ultrasound (US). Diagnosis and treatment planning of such diseases often require additional complementary imaging to confirm the tumor extent via 3D magnetic resonance imaging (MRI). In this context, automatic segmentations that estimate the size and distribution of the pathological tissue would be advantageous for tumor characterization. Until now, the alternative has been the manual delineation of eye structures, a rather time-consuming and error-prone task, to be conducted in multiple MRI sequences simultaneously. This situation, and the lack of tools for accurate eye MRI analysis, reduces the interest in MRI beyond the qualitative evaluation of optic nerve invasion and the confirmation of recurrent malignancies below calcified tumors. In this manuscript, we propose a new framework for the automatic segmentation of eye structures and ocular tumors in multi-sequence MRI. Our key contribution is the introduction of a pathological eye model from which eye patient-specific features (EPSF) can be computed. These features combine intensity and shape information of pathological tissue while embedded in healthy structures of the eye. We assess our work on a dataset of pathological patient eyes by computing the Dice similarity coefficient (DSC) of the sclera, the cornea, the vitreous humor, the lens, and the tumor. In addition, we quantitatively show the superior performance of our pathological eye model compared with the segmentation obtained using a healthy model (over 4% improvement in DSC) and demonstrate the relevance of our EPSF, which improve the final segmentation regardless of the classifier employed.
Subject(s)
Eye Neoplasms/diagnostic imaging, Eye/diagnostic imaging, Imaging, Three-Dimensional/methods, Magnetic Resonance Imaging/methods, Algorithms, Cornea/anatomy & histology, Cornea/diagnostic imaging, Eye/anatomy & histology, Eye Neoplasms/pathology, Humans, Lens, Crystalline/diagnostic imaging, Models, Anatomic, Sclera/anatomy & histology, Sclera/diagnostic imaging, Vitreous Body/anatomy & histology, Vitreous Body/diagnostic imaging
ABSTRACT
PURPOSE: To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patients' eyes. Imaging of the retina with it, however, poses a variety of problems: a shallow depth of focus, reflections from the optical system, a small field of view, and non-uniform illumination. The use of slit lamp images for documentation and analysis therefore remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose automatic retinal slit lamp video mosaicking, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. METHODS: Our method is composed of three parts: (i) viable content segmentation, (ii) global registration, and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pairwise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. RESULTS: Foreground is segmented successfully, with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results were compared with state-of-the-art methods and rated by ophthalmologists, showing a strong preference for the large field of view provided by our method. CONCLUSIONS: The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
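Pairwise translation between frames is estimated in the paper with SURF features and RANSAC; as a dependency-free stand-in, this sketch estimates an integer translation between two frames by phase correlation.

```python
import numpy as np

def translation_phase_corr(a, b):
    """Estimate the integer (dy, dx) such that image `a` equals image `b`
    cyclically shifted by (dy, dx), via phase correlation. A simplified
    stand-in for the SURF + RANSAC pairwise registration described above."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))  # moved down 5, left 3
dy, dx = translation_phase_corr(shifted, frame)
```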
Subject(s)
Lighting/instrumentation, Retina/diagnostic imaging, Slit Lamp Microscopy/methods, Video Recording, Humans
ABSTRACT
Ophthalmologists typically acquire different image modalities to diagnose eye pathologies, including, e.g., fundus photography, optical coherence tomography, computed tomography, and magnetic resonance imaging (MRI). These images are often complementary and express the same pathologies in different ways; some pathologies are visible only in a particular modality. It is therefore beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The goal of this paper is a fusion of fundus photography with segmented MRI volumes, which adds information to MRI that was not visible before, such as vessels and the macula. This paper's contributions include automatic detection of the optic disc, the fovea, and the optic axis, and automatic segmentation of the vitreous humor of the eye.
Subject(s)
Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Retinal Neoplasms/pathology, Retinoblastoma/pathology, Retinoscopy/methods, Subtraction Technique, Adolescent, Anatomic Landmarks, Computer Simulation, Female, Humans, Magnetic Resonance Imaging, Male, Models, Biological, Patient-Specific Modeling, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
PURPOSE: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning in retinoblastoma in infants, where it serves as a source of information complementary to fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D active shape models (ASM), validate the results, and present a proof of concept for automatically segmenting pathological eyes. METHODS AND MATERIALS: Manual and automatic segmentation were performed on 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM comprises the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method using leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC), and the mean distance error. RESULTS: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. CONCLUSION: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens using MRI. We additionally present a proof of concept for fully automatic segmentation of eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.