1.
Med Image Anal ; 93: 103104, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38350222

ABSTRACT

Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results on some complex lesions or datasets commonly occurring in real-world practice, e.g. due to variability in lesion phenotypes, image quality or disease appearance. While several techniques have been proposed to improve them, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we experimentally show that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding these output masks as an extra class for training the segmentation model. This provides additional semantic context without requiring extra manual labels. We empirically validated this strategy using two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly-guided segmentation approach as an extra tool for improving lesion detection models.
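The label-merging step described above can be sketched in a few lines. The function name and the class encoding (0 = background, 1 = lesion, 2 = anomaly) are illustrative assumptions, not the authors' code; only the idea of adding the anomaly model's output mask as an extra training class comes from the abstract:

```python
def add_anomaly_class(lesion_mask, anomaly_mask, anomaly_label=2):
    """Merge a manual lesion mask (1 = lesion) with a weak anomaly mask
    produced by a separate detection model, adding anomalies as an extra
    class for segmentation training. Expert lesion labels take precedence
    over the weak anomaly label."""
    h, w = len(lesion_mask), len(lesion_mask[0])
    target = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if lesion_mask[y][x]:
                target[y][x] = 1              # expert-annotated lesion
            elif anomaly_mask[y][x]:
                target[y][x] = anomaly_label  # weak semantic context
    return target
```

The segmentation network would then be trained on these multi-class maps instead of the original binary labels, at no extra annotation cost.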


Subject(s)
Semantics, Optical Coherence Tomography, Humans, Phenotype, Retina/diagnostic imaging
2.
Insights Imaging ; 15(1): 8, 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38228979

ABSTRACT

PURPOSE: To propose a new quality scoring tool, the METhodological RadiomICs Score (METRICS), to assess and improve the research quality of radiomics studies. METHODS: We conducted an online modified Delphi study with a group of international experts. It was performed in three consecutive stages: Stage#1, item preparation; Stage#2, panel discussion among EuSoMII Auditing Group members to identify the items to be voted on; and Stage#3, four rounds of the modified Delphi exercise by panelists to determine the items eligible for METRICS and their weights. The consensus threshold was 75%. The category and item weights were calculated from the median ranks derived from expert panel opinion and their rank-sum-based conversion to importance scores. RESULTS: In total, 59 panelists from 19 countries participated in the selection and ranking of the items and categories. The final METRICS tool included 30 items within 9 categories. According to their weights, the categories were, in descending order of importance: study design, imaging data, image processing and feature extraction, metrics and comparison, testing, feature processing, preparation for modeling, segmentation, and open science. A web application and a repository were developed to streamline the calculation of the METRICS score and to collect feedback from the radiomics community. CONCLUSION: In this work, we developed a scoring tool for assessing the methodological quality of radiomics research, with a large international panel and a modified Delphi protocol. With its conditional format covering methodological variations, it provides a well-constructed framework of the key methodological concepts for assessing the quality of radiomics research papers.
CRITICAL RELEVANCE STATEMENT: A quality assessment tool, METhodological RadiomICs Score (METRICS), is made available by a large group of international domain experts, with transparent methodology, aiming at evaluating and improving research quality in radiomics and machine learning. KEY POINTS: • A methodological scoring tool, METRICS, was developed for assessing the quality of radiomics research, with a large international expert panel and a modified Delphi protocol. • The proposed scoring tool presents expert opinion-based importance weights of categories and items with a transparent methodology for the first time. • METRICS accounts for varying use cases, from handcrafted radiomics to entirely deep learning-based pipelines. • A web application has been developed to help with the calculation of the METRICS score ( https://metricsscore.github.io/metrics/METRICS.html ) and a repository created to collect feedback from the radiomics community ( https://github.com/metricsscore/metrics ).
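The weighted scoring logic behind such a checklist tool can be illustrated with a minimal sketch. The item names, weights and the yes/no/n-a answer scheme below are hypothetical placeholders; only the ideas of expert-derived item weights and a conditional format (items can be "not applicable") come from the abstract:

```python
def metrics_style_score(item_weights, answers):
    """Weighted quality score in [0, 1]: the summed weight of items
    answered 'yes', divided by the total weight of applicable items.
    Items answered 'n/a' (conditional items) are excluded entirely,
    so studies are not penalized for steps that do not apply to them."""
    earned = total = 0.0
    for item, weight in item_weights.items():
        answer = answers.get(item, "no")
        if answer == "n/a":
            continue  # conditional item, not applicable to this study
        total += weight
        if answer == "yes":
            earned += weight
    return earned / total if total else 0.0
```

With hypothetical weights {"study_design": 2.0, "testing": 1.0, "open_science": 1.0} and answers yes/no/n-a, the score is 2.0 / 3.0.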

3.
Eur Radiol Exp ; 7(1): 32, 2023 06 07.
Article in English | MEDLINE | ID: mdl-37280478

ABSTRACT

BACKGROUND: International societies have issued guidelines for high-risk breast cancer (BC) screening, recommending contrast-enhanced magnetic resonance imaging (CE-MRI) of the breast as a supplemental diagnostic tool. In our study, we tested the applicability of deep learning-based anomaly detection to identify anomalous changes in negative breast CE-MRI screens associated with future lesion emergence. METHODS: In this prospective study, we trained a generative adversarial network on dynamic CE-MRI of 33 high-risk women who participated in a screening program but did not develop BC. We defined an anomaly score as the deviation of an observed CE-MRI scan from the model of normal breast tissue variability. We evaluated the anomaly score's association with future lesion emergence on the level of local image patches (104,531 normal patches, 455 patches of future lesion location) and entire CE-MRI exams (21 normal, 20 with future lesion). Associations were analyzed by receiver operating characteristic (ROC) curves on the patch level and logistic regression on the examination level. RESULTS: The local anomaly score on image patches was a good predictor for future lesion emergence (area under the ROC curve 0.804). An exam-level summary score was significantly associated with the emergence of lesions at any location at a later time point (p = 0.045). CONCLUSIONS: Breast cancer lesions are associated with anomalous appearance changes in breast CE-MRI occurring before the lesion emerges in high-risk women. These early image signatures are detectable and may be a basis for adjusting individual BC risk and personalized screening. RELEVANCE STATEMENT: Anomalies in screening MRI preceding lesion emergence in women at high-risk of breast cancer may inform individualized screening and intervention strategies. KEY POINTS: • Breast lesions are associated with preceding anomalies in CE-MRI of high-risk women. 
• Deep learning-based anomaly detection can help to adjust risk assessment for future lesions. • An appearance anomaly score may be used for adjusting screening interval times.
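The exam-level summary score mentioned above aggregates patch-level anomaly scores into one number per CE-MRI exam. The aggregation rule below (mean of the k most anomalous patches) is an illustrative assumption, not necessarily the statistic used in the paper:

```python
def exam_summary_score(patch_scores, top_k=10):
    """Summarize patch-level anomaly scores (deviation from the model of
    normal breast tissue variability) into one exam-level score by
    averaging the top_k most anomalous patches. Averaging several patches
    makes the summary robust to a single noisy outlier patch."""
    ranked = sorted(patch_scores, reverse=True)
    top = ranked[:top_k]
    return sum(top) / len(top)
```

Such an exam-level score could then enter a logistic regression against future lesion emergence, as in the study's examination-level analysis.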


Subject(s)
Breast Neoplasms, Deep Learning, Female, Humans, Prospective Studies, Feasibility Studies, Contrast Media, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Magnetic Resonance Imaging/methods
5.
Eye (Lond) ; 37(17): 3582-3588, 2023 12.
Article in English | MEDLINE | ID: mdl-37170011

ABSTRACT

OBJECTIVES: To evaluate the quantitative impact of drusen and hyperreflective foci (HRF) volumes on mesopic retinal sensitivity in non-exudative age-related macular degeneration (AMD). METHODS: In a standardized follow-up scheme of every three months, retinal sensitivity of patients with early or intermediate AMD was assessed by microperimetry using a custom pattern of 45 stimuli (Nidek MP-3, Gamagori, Japan). Eyes were consecutively scanned using Spectralis SD-OCT (20° × 20°, 1024 × 97 × 496). Fundus photographs obtained by the MP-3 made it possible to map the stimulus locations onto the corresponding OCT scans. The volume and mean thickness of drusen and HRF within a circle of 240 µm centred at each stimulus point were determined using automated AI-based image segmentation algorithms. RESULTS: 8055 individual stimuli from 179 visits of 51 eyes of 35 consecutive patients were matched with the respective OCT images in a point-to-point manner. The patients' mean age was 76.85 ± 6.6 years. Mean retinal sensitivity at baseline was 25.7 dB. 73.47% of all MP-spots covered drusen area and 2.02% of MP-spots covered HRF. A negative association was found between retinal sensitivity and the volume of underlying drusen (p < 0.001, estimate -0.991 dB/µm3) and HRF volume (p = 0.002, estimate -5.230 dB/µm3). During the observation time, no eye showed conversion to advanced AMD. CONCLUSION: A direct correlation between drusen and lower sensitivity of the overlying photoreceptors can be observed. For HRF, a small but significant correlation was shown, which is compromised by their small size. Biomarker quantification using AI methods makes it possible to determine the impact of sub-clinical features in the progression of AMD.
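The per-stimulus quantification step (integrating the segmented lesion volume inside a circle centred on each microperimetry stimulus) can be sketched as follows. The grid representation, function name and parameters are assumptions for illustration; the paper's 240 µm circle would be converted to `radius_px` using the scan's pixel spacing:

```python
def volume_under_stimulus(voxel_counts, cx, cy, radius_px, voxel_vol_um3):
    """Integrate a per-pixel lesion voxel-count map (e.g. drusen or HRF
    from an AI segmentation) inside a circular region centred on a
    microperimetry stimulus at pixel (cx, cy). Returns volume in um^3."""
    total = 0
    for y, row in enumerate(voxel_counts):
        for x, count in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius_px ** 2:
                total += count
    return total * voxel_vol_um3
```

The resulting per-stimulus volumes can then be regressed against the matched retinal sensitivity values (dB), as in the study's point-to-point analysis.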


Subject(s)
Macular Degeneration, Retinal Drusen, Humans, Aged, Aged, 80 and over, Retina/diagnostic imaging, Algorithms, Optical Coherence Tomography/methods, Japan
6.
Eye (Lond) ; 37(7): 1439-1444, 2023 05.
Article in English | MEDLINE | ID: mdl-35778604

ABSTRACT

BACKGROUND/OBJECTIVES: We aim to develop an objective, fully automated artificial intelligence (AI) algorithm for MNV lesion size and leakage area segmentation on fluorescein angiography (FA) in patients with neovascular age-related macular degeneration (nAMD). SUBJECTS/METHODS: Two FA image datasets collected from large prospective multicentre trials, consisting of 4710 images from 513 patients and 4558 images from 514 patients, were used to develop and evaluate a deep learning-based algorithm to detect CNV lesion size and leakage area automatically. Manual segmentation was performed by certified FA graders of the Vienna Reading Center. Precision, Recall and F1 score between AI predictions and manual annotations were computed. In addition, two masked retina experts conducted a clinical-applicability evaluation, comparing the quality of AI-based and manual segmentations. RESULTS: For CNV lesion size and leakage area segmentation, we obtained F1 scores of 0.73 and 0.65, respectively. Expert review resulted in a slight preference for the automated segmentations in both datasets. The quality of automated segmentations was slightly more often judged as good compared to manual annotations. CONCLUSIONS: CNV lesion size and leakage area can be segmented by our automated model at human-level performance, its output being well accepted during clinical-applicability testing. The results provide proof-of-concept that an automated deep learning approach can improve the efficacy of objective biomarker analysis in FA images and will be well suited for clinical application.
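The Precision/Recall/F1 comparison between AI predictions and manual annotations is a standard pixel-wise computation; a minimal sketch on flattened binary masks (the representation is an assumption):

```python
def precision_recall_f1(pred, truth):
    """Pixel-wise Precision, Recall and F1 between a predicted and a
    manually annotated binary mask, given as flattened 0/1 sequences."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For binary masks, F1 is identical to the Dice score, so the reported 0.73/0.65 values can also be read as Dice overlaps.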


Subject(s)
Choroidal Neovascularization, Deep Learning, Macular Degeneration, Humans, Prospective Studies, Artificial Intelligence, Fluorescein Angiography/methods, Choroidal Neovascularization/diagnosis, Macular Degeneration/diagnostic imaging
7.
Br J Ophthalmol ; 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36418144

ABSTRACT

BACKGROUND/AIMS: Image quality assessment (IQA) is crucial for both reading centres in clinical studies and routine practice, as only adequate quality allows clinicians to correctly identify diseases and treat patients accordingly. Here we aim to develop a neural network for automated real-time IQA in colour fundus (CF) and fluorescein angiography (FA) images. METHODS: Training and evaluation of two neural networks were conducted using 2272 CF and 2492 FA images, with binary labels in four (contrast, focus, illumination, shadow and reflection) and three (contrast, focus, noise) modality-specific categories plus an overall quality ranking. Performance was compared with a second human grader, evaluated on an external public dataset and in a clinical trial use-case. RESULTS: The networks achieved an F1 score / area under the receiver operating characteristic curve / area under the precision-recall curve of 0.907/0.963/0.966 for CF and 0.822/0.918/0.889 for FA in overall quality prediction, with similar results in most categories. A clear relation between model uncertainty and prediction error was observed. In the clinical trial use-case evaluation, the networks achieved an accuracy of 0.930 for CF and 0.895 for FA. CONCLUSION: The presented method allows automated IQA in real time, demonstrating human-level performance for CF as well as FA. Such models can help to overcome the problem of human intergrader and intragrader variability by providing objective and reproducible IQA results. It has particular relevance for real-time feedback in multicentre clinical studies, when images are uploaded to central reading centre portals. Moreover, automated IQA as a preprocessing step can support integrating automated approaches into clinical practice.

8.
Biomed Opt Express ; 13(5): 2566-2580, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35774310

ABSTRACT

In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. In order to extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take the variability and challenges of clinical FA into account, which are often of low quality and difficult to analyze. In this paper we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as a guidance during training. Hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work by using a multi-modal approach that leverages already available public datasets of color fundus pictures (CFPs) and their respective manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that the addition of weak RV labels as a guidance during training improves the FAZ segmentation significantly with respect to using only manual FAZ annotations.

9.
IEEE J Biomed Health Inform ; 26(8): 3927-3937, 2022 08.
Article in English | MEDLINE | ID: mdl-35394920

ABSTRACT

The fovea centralis is an essential landmark in the retina where the photoreceptor layer is entirely composed of cones responsible for sharp, central vision. The localization of this anatomical landmark in optical coherence tomography (OCT) volumes is important for assessing visual function correlates and treatment guidance in macular disease. In this study, the "PRE U-net" is introduced as a novel approach for fully automated fovea centralis detection, addressing the localization as a pixel-wise regression task. 2D B-scans are sampled from each image volume and are concatenated with spatial location information to train the deep network. A total of 5586 OCT volumes from 1541 eyes were used to train, validate and test the deep learning method. The test data comprise healthy subjects and patients affected by neovascular age-related macular degeneration (nAMD), diabetic macular edema (DME) and macular edema from retinal vein occlusion (RVO), covering the three major retinal diseases responsible for blindness. Our experiments demonstrate that the PRE U-net significantly outperforms state-of-the-art methods and improves the robustness of automated localization, which is of value for clinical practice.
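The input construction ("2D B-scans concatenated with spatial location information") can be sketched in CoordConv style. The exact encoding used by the PRE U-net is not specified in the abstract, so the channels below (normalized lateral coordinate plus the B-scan's normalized position in the volume) are an assumption:

```python
def with_position_channels(bscan, slice_idx, n_slices):
    """Concatenate spatial location information to a 2D B-scan before
    feeding it to the network: one channel holding the normalized x
    coordinate of each column and one constant channel holding the
    B-scan's normalized position inside the OCT volume.
    Returns a channels-first stack [intensity, x_coord, slice_pos]."""
    h, w = len(bscan), len(bscan[0])
    x_chan = [[x / (w - 1) for x in range(w)] for _ in range(h)]
    z_chan = [[slice_idx / (n_slices - 1)] * w for _ in range(h)]
    return [bscan, x_chan, z_chan]
```

Giving the network explicit coordinates is what lets a convolutional model, which is otherwise translation-invariant, regress an absolute landmark position.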


Subject(s)
Deep Learning, Diabetic Retinopathy, Macular Edema, Retinal Diseases, Fovea Centralis/diagnostic imaging, Humans, Macular Edema/diagnostic imaging, Retina/diagnostic imaging, Optical Coherence Tomography/methods
10.
Acta Ophthalmol ; 100(8): e1611-e1616, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35343651

ABSTRACT

PURPOSE: To develop and validate a deep learning model to automatically segment three structures in anterior segment optical coherence tomography (AS-OCT): the intraocular lens (IOL), the retrolental space (IOL to the posterior lens capsule) and Berger's space (BS; posterior capsule to the anterior hyaloid membrane). METHODS: An artificial intelligence (AI) approach based on a deep learning model to automatically segment the IOL, the retrolental space and BS in AS-OCT was trained using annotations from an experienced clinician. The training, validation and test set consisted of 92 cross-sectional OCT slices, acquired in 47 visits from 41 eyes. Annotations from a second experienced clinician in the test set were additionally evaluated to conduct an inter-reader variability analysis. RESULTS: The AI model achieved a Precision/Recall/Dice score of 0.97/0.90/0.93 for IOL, 0.54/0.65/0.55 for retrolental space, and 0.72/0.58/0.59 for BS. For inter-reader variability, Precision/Recall/Dice values were 0.98/0.98/0.98 for IOL, 0.74/0.59/0.62 for retrolental space, and 0.58/0.57/0.57 for BS. No statistical differences were observed between the automated algorithm and the inter-reader variability for BS segmentation. CONCLUSION: The deep learning model allows for fully automatic segmentation of all investigated structures, achieving human-level performance in BS segmentation. We therefore expect promising applications of the algorithm, particularly for BS, in automated big data analysis and real-time intra-operative support in ophthalmology, especially in conjunction with primary posterior capsulotomy in femtosecond laser-assisted cataract surgery.


Subject(s)
Deep Learning, Lenses, Intraocular, Humans, Lens Implantation, Intraocular, Artificial Intelligence, Cross-Sectional Studies, Optical Coherence Tomography/methods
11.
Ophthalmol Retina ; 6(6): 501-511, 2022 06.
Article in English | MEDLINE | ID: mdl-35134543

ABSTRACT

PURPOSE: The currently used measures of retinal function are limited by being subjective, nonlocalized, or taxing for patients. To address these limitations, we sought to develop and evaluate a deep learning (DL) method to automatically predict the functional end point (retinal sensitivity) based on structural OCT images. DESIGN: Retrospective, cross-sectional study. SUBJECTS: In total, 714 volumes of 289 patients were used in this study. METHODS: A DL algorithm was developed to automatically predict a comprehensive retinal sensitivity map from an OCT volume. Four hundred sixty-three spectral-domain OCT volumes from 174 patients and their corresponding microperimetry examinations (Nidek MP-1) were used for development and internal validation, with a total of 15 563 retinal sensitivity measurements. The patients presented with a healthy macula, early or intermediate age-related macular degeneration, choroidal neovascularization, or geographic atrophy. In addition, an external validation was performed using 251 volumes of 115 patients, comprising 3 different patient populations: those with diabetic macular edema, retinal vein occlusion, or epiretinal membrane. MAIN OUTCOME MEASURES: We evaluated the performance of the algorithm using the mean absolute error (MAE), limits of agreement (LoA), and correlation coefficients of point-wise sensitivity (PWS) and mean sensitivity (MS). RESULTS: The algorithm achieved an MAE of 2.34 dB and 1.30 dB, an LoA of 5.70 and 3.07, a Pearson correlation coefficient of 0.66 and 0.84, and a Spearman correlation coefficient of 0.68 and 0.83 for PWS and MS, respectively. In the external test set, the method achieved an MAE of 2.73 dB and 1.66 dB for PWS and MS, respectively. CONCLUSIONS: The proposed approach allows the prediction of retinal function at each measured location directly based on an OCT scan, demonstrating how structural imaging can serve as a surrogate of visual function. 
Prospectively, the approach may help to complement retinal function measures, explore the association between image-based information and retinal functionality, improve disease progression monitoring, and provide objective surrogate measures for future clinical trials.
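The two main agreement metrics reported above, mean absolute error and Bland-Altman 95% limits of agreement, can be sketched with the standard formulas (mean difference ± 1.96 × SD of differences); function name and representation are illustrative:

```python
from statistics import mean, pstdev

def mae_and_loa(predicted, measured):
    """Mean absolute error and Bland-Altman 95% limits of agreement
    between predicted and measured retinal sensitivity values (dB).
    LoA = mean difference +/- 1.96 * SD of the differences."""
    diffs = [p - m for p, m in zip(predicted, measured)]
    mae = mean(abs(d) for d in diffs)
    bias = mean(diffs)
    half_width = 1.96 * pstdev(diffs)
    return mae, (bias - half_width, bias + half_width)
```

The reported LoA values of 5.70 and 3.07 would correspond to the half-width of these intervals for point-wise and mean sensitivity, respectively.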


Subject(s)
Deep Learning, Diabetic Retinopathy, Macular Edema, Cross-Sectional Studies, Humans, Retrospective Studies, Optical Coherence Tomography/methods, Visual Field Tests/methods
12.
Br J Ophthalmol ; 106(1): 113-120, 2022 01.
Article in English | MEDLINE | ID: mdl-33087314

ABSTRACT

AIM: To objectively assess disease activity and treatment response in patients with retinal vein occlusion (RVO), neovascular age-related macular degeneration (nAMD) and centre-involved diabetic macular oedema (DME), using artificial intelligence-based fluid quantification. METHODS: Post hoc analysis of 2311 patients (11 151 spectral-domain optical coherence tomography volumes) from five clinical, multicentre trials, who received a flexible anti-vascular endothelial growth factor (anti-VEGF) therapy over a 12-month period. Fluid volumes were measured with a deep learning algorithm at baseline/months 1, 2, 3 and 12, for three concentric circles with diameters of 1, 3 and 6 mm (fovea, paracentral ring and pericentral ring), as well as four sectors surrounding the fovea (superior, nasal, inferior and temporal). RESULTS: In each disease, at every timepoint, most intraretinal fluid (IRF) per square millimetre was present at the fovea, followed by the paracentral ring and pericentral ring (p<0.0001). While this was also the case for subretinal fluid (SRF) in RVO/DME (p<0.0001), patients with nAMD showed more SRF in the paracentral ring than at the fovea up to month 3 (p<0.0001). Between sectors, patients with RVO/DME showed the highest IRF volumes temporally (p<0.001/p<0.0001). In each disease, more SRF was consistently found inferiorly than superiorly (p<0.02). At month 1/12, we measured the following median reductions of initial fluid volumes. For IRF: RVO, 95.9%/97.7%; nAMD, 91.3%/92.8%; DME, 37.3%/69.9%. For SRF: RVO, 94.7%/97.5%; nAMD, 98.4%/99.8%; DME, 86.3%/97.5%. CONCLUSION: Fully automated localisation and quantification of IRF/SRF over time shed light on the fluid dynamics in each disease. There is a specific anatomical response of IRF/SRF to anti-VEGF therapy in all diseases studied.
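Assigning each fluid location to one of the three concentric circles used above is a simple distance test; circles of 1, 3 and 6 mm diameter give radii of 0.5, 1.5 and 3.0 mm from the foveal centre. The function and label names are illustrative:

```python
def concentric_region(dist_mm):
    """Assign a retinal location to one of the three concentric circles
    used for fluid quantification, based on its distance in mm from the
    foveal centre: 1 mm fovea, 3 mm paracentral ring, 6 mm pericentral
    ring (diameters), i.e. radii of 0.5, 1.5 and 3.0 mm."""
    if dist_mm <= 0.5:
        return "fovea"
    if dist_mm <= 1.5:
        return "paracentral"
    if dist_mm <= 3.0:
        return "pericentral"
    return "outside"
```

Summing the deep learning model's per-voxel fluid labels within each region (and, analogously, within the four sectors) yields the localized IRF/SRF volumes compared across timepoints.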


Subject(s)
Macular Edema, Retinal Vein Occlusion, Wet Macular Degeneration, Angiogenesis Inhibitors/therapeutic use, Artificial Intelligence, Endothelial Growth Factors, Humans, Intravitreal Injections, Macular Edema/diagnosis, Macular Edema/drug therapy, Macular Edema/metabolism, Ranibizumab/therapeutic use, Retinal Vein Occlusion/diagnosis, Retinal Vein Occlusion/drug therapy, Retinal Vein Occlusion/metabolism, Subretinal Fluid, Optical Coherence Tomography/methods, Vascular Endothelial Growth Factor A/metabolism, Visual Acuity, Wet Macular Degeneration/diagnosis, Wet Macular Degeneration/drug therapy, Wet Macular Degeneration/metabolism
13.
Prog Retin Eye Res ; 86: 100972, 2022 01.
Article in English | MEDLINE | ID: mdl-34166808

ABSTRACT

Retinal fluid as the major biomarker in exudative macular disease is accurately visualized by high-resolution three-dimensional optical coherence tomography (OCT), which is used worldwide as a diagnostic gold standard largely replacing clinical examination. Artificial intelligence (AI), with its capability to objectively identify, localize and quantify fluid, introduces fully automated tools into OCT imaging for personalized disease management. Deep learning performance has already proven superior to human experts, including physicians and certified readers, in terms of accuracy and speed. Reproducible measurement of retinal fluid relies on precise AI-based segmentation methods that assign a label to each OCT voxel denoting its fluid type, such as intraretinal fluid (IRF) and subretinal fluid (SRF) or pigment epithelial detachment (PED), and its location within the central 1-, 3- and 6-mm macular area. Such reliable analysis is most relevant to reflect differences in pathophysiological mechanisms and impacts on retinal function, and the dynamics of fluid resolution during therapy with different regimens and substances. Yet, an in-depth understanding of the mode of action of supervised and unsupervised learning, the functionality of a convolutional neural net (CNN) and various network architectures is needed. Greater insight regarding adequate methods for performance and validation assessment, and device- and scanning-pattern-dependent variations, is necessary to empower ophthalmologists to become qualified AI users. Fluid/function correlation can lead to a better definition of valid fluid variables relevant for optimal outcomes on an individual and a population level. AI-based fluid analysis opens the way for precision medicine in real-world practice of the leading retinal diseases of modern times.


Subject(s)
Artificial Intelligence, Subretinal Fluid, Humans, Retina/diagnostic imaging, Subretinal Fluid/diagnostic imaging, Optical Coherence Tomography, Visual Acuity
14.
Sci Rep ; 10(1): 12954, 2020 07 31.
Article in English | MEDLINE | ID: mdl-32737379

ABSTRACT

Artificial intelligence has recently made a disruptive impact in medical imaging by successfully automating expert-level diagnostic tasks. However, replicating human-made decisions may inherently be biased by the fallible and dogmatic nature of human experts, in addition to requiring prohibitive amounts of training data. In this paper, we introduce an unsupervised deep learning architecture particularly designed for OCT representations for unbiased, purely data-driven biomarker discovery. We developed artificial intelligence technology that provides biomarker candidates without any restricting input or domain knowledge beyond raw images. Analyzing 54,900 retinal optical coherence tomography (OCT) volume scans of 1094 patients with age-related macular degeneration, we generated a vocabulary of 20 local and global markers capturing characteristic retinal patterns. The resulting markers were validated by linking them with clinical outcomes (visual acuity, lesion activity and retinal morphology) using correlation and machine learning regression. The newly identified features correlated well with specific biomarkers traditionally used in clinical practice (r up to 0.73), and outperformed them in correlating with visual acuity ([Formula: see text] compared to [Formula: see text] for conventional markers), despite representing an enormous compression of OCT imaging data (67 million voxels to 20 features). In addition, our method also discovered hitherto unknown, clinically relevant biomarker candidates. The presented deep learning approach identified known as well as novel medical imaging biomarkers without any prior domain knowledge. Similar approaches may be worthwhile across other medical imaging fields.


Subject(s)
Deep Learning, Macular Degeneration/diagnostic imaging, Retina/diagnostic imaging, Optical Coherence Tomography, Biomarkers, Female, Humans, Male
15.
IEEE Trans Med Imaging ; 39(4): 1291, 2020 04.
Article in English | MEDLINE | ID: mdl-32248087

ABSTRACT

The authors of "Exploiting Epistemic Uncertainty of Anatomy Segmentation for Anomaly Detection in Retinal OCT," which appeared in the January 2020 issue of this journal [1], would like to provide an updated Fig. 3 because there was an error in the published version. The output of the last convolutional layer says "2" in the number of channels, but it should be "11" (10 retinal layers plus the background).

16.
Biomed Opt Express ; 11(1): 346-363, 2020 Jan 01.
Article in English | MEDLINE | ID: mdl-32010521

ABSTRACT

Diagnosis and treatment in ophthalmology depend on modern retinal imaging by optical coherence tomography (OCT). The recent staggering results of machine learning in medical imaging have inspired the development of automated segmentation methods to identify and quantify pathological features in OCT scans. These models need to be sensitive to image features defining patterns of interest, while remaining robust to differences in imaging protocols. A dominant factor for such image differences is the type of OCT acquisition device. In this paper, we analyze the ability of recently developed unsupervised unpaired image translations based on cycle consistency losses (cycleGANs) to deal with image variability across different OCT devices (Spectralis and Cirrus). This evaluation was performed on two clinically relevant segmentation tasks in retinal OCT imaging: fluid and photoreceptor layer segmentation. Additionally, a visual Turing test designed to assess the quality of the learned translation models was carried out by a group of 18 participants with different background expertise. Results show that the learned translation models improve the generalization ability of segmentation models to other OCT-vendors/domains not seen during training. Moreover, relationships between model hyper-parameters and the realism as well as the morphological consistency of the generated images could be identified.

17.
IEEE Trans Med Imaging ; 39(1): 87-98, 2020 01.
Article in English | MEDLINE | ID: mdl-31170065

ABSTRACT

Diagnosis and treatment guidance are aided by detecting relevant biomarkers in medical images. Although supervised deep learning can perform accurate segmentation of pathological areas, it is limited by requiring a priori definitions of these regions, large-scale annotations, and a representative patient cohort in the training set. In contrast, anomaly detection is not limited to specific definitions of pathologies and allows for training on healthy samples without annotation. Anomalous regions can then serve as candidates for biomarker discovery. Knowledge about normal anatomical structure brings implicit information for detecting anomalies. We propose to take advantage of this property using Bayesian deep learning, based on the assumption that epistemic uncertainties will correlate with anatomical deviations from a normal training set. A Bayesian U-Net is trained on a well-defined healthy environment using weak labels of healthy anatomy produced by existing methods. At test time, we capture epistemic uncertainty estimates of our model using Monte Carlo dropout. A novel post-processing technique is then applied to exploit these estimates and transfer their layered appearance to smooth blob-shaped segmentations of the anomalies. We experimentally validated this approach in retinal optical coherence tomography (OCT) images, using weak labels of retinal layers. Our method achieved a Dice index of 0.789 in an independent anomaly test set of age-related macular degeneration (AMD) cases. The resulting segmentations allowed very high accuracy for separating healthy and diseased cases with late wet AMD, dry geographic atrophy (GA), diabetic macular edema (DME) and retinal vein occlusion (RVO). Finally, we qualitatively observed that our approach can also detect other deviations in normal scans such as cut edge artifacts.
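The core mechanism above, capturing epistemic uncertainty via Monte Carlo dropout, amounts to running the trained network several times with dropout kept active and measuring the spread of the predictions. A framework-free sketch of that aggregation (the `stochastic_forward` callable stands in for a Bayesian U-Net pass with active dropout and is an assumption of this sketch):

```python
def mc_dropout_uncertainty(stochastic_forward, x, n_samples=20, seed=0):
    """Epistemic uncertainty via Monte Carlo dropout: run the model
    n_samples times with dropout active (randomness supplied through a
    seeded RNG) and return per-output mean and variance across passes.
    High variance marks regions deviating from the normal training set,
    i.e. anomaly candidates."""
    import random
    rng = random.Random(seed)
    passes = [stochastic_forward(x, rng) for _ in range(n_samples)]
    n_out = len(passes[0])
    means = [sum(p[i] for p in passes) / n_samples for i in range(n_out)]
    variances = [sum((p[i] - means[i]) ** 2 for p in passes) / n_samples
                 for i in range(n_out)]
    return means, variances
```

In the paper's pipeline, the per-pixel uncertainty maps are then post-processed into smooth blob-shaped anomaly segmentations rather than used raw.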


Subject(s)
Image Processing, Computer-Assisted/methods, Retina/diagnostic imaging, Supervised Machine Learning, Optical Coherence Tomography/methods, Algorithms, Artifacts, Humans
18.
Med Image Anal ; 54: 30-44, 2019 05.
Article in English | MEDLINE | ID: mdl-30831356

ABSTRACT

Obtaining expert labels in clinical imaging is difficult since exhaustive annotation is time-consuming. Furthermore, not all possibly relevant markers may be known and sufficiently well described a priori to even guide annotation. While supervised learning yields good results if expert-labeled training data is available, the visual variability, and thus the vocabulary of findings we can detect and exploit, is limited to the annotated lesions. Here, we present fast AnoGAN (f-AnoGAN), a generative adversarial network (GAN) based unsupervised learning approach capable of identifying anomalous images and image segments that can serve as imaging biomarker candidates. We build a generative model of healthy training data, and propose and evaluate a fast mapping technique of new data to the GAN's latent space. The mapping is based on a trained encoder, and anomalies are detected via a combined anomaly score based on the building blocks of the trained model, comprising a discriminator feature residual error and an image reconstruction error. In the experiments on optical coherence tomography data, we compare the proposed method with alternative approaches, and provide comprehensive empirical evidence that f-AnoGAN outperforms alternative approaches and yields high anomaly detection accuracy. In addition, a visual Turing test with two retina experts showed that the generated images are indistinguishable from real normal retinal OCT images. The f-AnoGAN code is available at https://github.com/tSchlegl/f-AnoGAN.
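The combined anomaly score described above, an image reconstruction error plus a discriminator feature residual error, can be sketched as a weighted sum of two mean squared errors. The weighting parameter name `kappa` and the flat-vector representation are assumptions of this sketch, not taken from the f-AnoGAN code:

```python
def combined_anomaly_score(image, reconstruction,
                           disc_feat_image, disc_feat_recon, kappa=1.0):
    """f-AnoGAN-style combined anomaly score: the image reconstruction
    error (image vs. its generation from the mapped latent code) plus a
    weighted residual between discriminator features of the image and of
    its reconstruction. Healthy inputs the GAN can reproduce score low;
    anomalous inputs score high."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return (mse(image, reconstruction)
            + kappa * mse(disc_feat_image, disc_feat_recon))
```

Thresholding this score separates normal from anomalous samples, and the per-pixel residual image localizes the anomalous segments.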


Subject(s)
Diagnostic Techniques, Ophthalmological, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Retina/diagnostic imaging, Optical Coherence Tomography, Algorithms, Humans, Information Theory
19.
IEEE Trans Med Imaging ; 38(4): 1037-1047, 2019 04.
Article in English | MEDLINE | ID: mdl-30346281

ABSTRACT

The identification and quantification of markers in medical images is critical for diagnosis, prognosis, and disease management. Supervised machine learning enables the detection and exploitation of findings that are known a priori after annotation of training examples by experts. However, supervision does not scale well, due to the amount of necessary training examples, and the limitation of the marker vocabulary to known entities. In this proof-of-concept study, we propose unsupervised identification of anomalies as candidates for markers in retinal optical coherence tomography (OCT) imaging data without a constraint to a priori definitions. We identify and categorize marker candidates occurring frequently in the data and demonstrate that these markers show a predictive value in the task of detecting disease. A careful qualitative analysis of the identified data-driven markers reveals how their quantifiable occurrence aligns with our current understanding of disease course, in early and late age-related macular degeneration (AMD) patients. A multi-scale deep denoising autoencoder is trained on healthy images, and a one-class support vector machine identifies anomalies in new data. Clustering in the anomalies identifies stable categories. Using these markers to classify healthy, early AMD and late AMD cases yields an accuracy of 81.40%. In a second binary classification experiment on a publicly available data set (healthy versus intermediate AMD), the model achieves an area under the ROC curve of 0.944.


Subject(s)
Image Interpretation, Computer-Assisted/methods, Retina/diagnostic imaging, Optical Coherence Tomography/methods, Unsupervised Machine Learning, Algorithms, Biomarkers, Humans, Macular Degeneration/diagnostic imaging, ROC Curve