Results 1 - 19 of 19
1.
Med Image Anal ; 93: 103104, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38350222

ABSTRACT

Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results for some complex lesions or datasets that commonly occur in real-world practice, e.g. due to variability in lesion phenotypes, image quality or disease appearance. While several techniques have been proposed to improve them, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we experimentally show that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding these output masks as an extra class for training the segmentation model. This provides additional semantic context without requiring extra manual labels. We empirically validated this strategy using two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly-guided segmentation approach as an extra tool for improving lesion detection models.
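To make the described strategy concrete, here is a minimal sketch, assuming a separate anomaly detection model has already produced a binary mask per scan; the function name and array conventions are illustrative and not taken from the paper.

```python
import numpy as np

def build_targets(lesion_mask: np.ndarray, anomaly_mask: np.ndarray) -> np.ndarray:
    """Combine manual lesion labels with weak anomaly labels into one training label map.

    lesion_mask  : (H, W) int array, 0 = background, 1..K = manually annotated lesion classes
    anomaly_mask : (H, W) bool array produced by a separate anomaly detection model
    Returns an (H, W) int array where anomalous-but-unlabeled pixels receive class K + 1.
    """
    k = int(lesion_mask.max())
    targets = lesion_mask.copy()
    extra = anomaly_mask & (lesion_mask == 0)   # only where no manual label exists
    targets[extra] = k + 1                      # extra "anomaly" class, no added manual effort
    return targets
```

The segmentation network is then trained on these (K + 2)-class maps instead of the original (K + 1)-class ones, which is how the anomaly output injects semantic context.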


Subjects
Semantics, Optical Coherence Tomography, Humans, Phenotype, Retina/diagnostic imaging
2.
Sci Data ; 11(1): 99, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38245589

ABSTRACT

Pathologic myopia (PM) is a common blinding retinal degeneration affecting the highly myopic population. Early screening for this condition can reduce the damage caused by the associated fundus lesions and therefore prevent vision loss. Automated diagnostic tools based on artificial intelligence methods can benefit this process by aiding clinicians to identify disease signs or to screen mass populations using color fundus photographs as inputs. This paper provides insights into PALM, our open fundus imaging dataset for pathologic myopia recognition and anatomical structure annotation. Our database comprises 1200 images with associated labels for the pathologic myopia category and manual annotations of the optic disc, the position of the fovea and delineations of lesions such as patchy retinal atrophy (including peripapillary atrophy) and retinal detachment. In addition, this paper elaborates on details such as the labeling process used to construct the database and the quality and characteristics of the samples, and provides other relevant usage notes.
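As a rough illustration of what one annotated PALM sample contains, the sketch below uses a simple container with hypothetical field names; the actual file layout and naming are described in the data descriptor itself.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class PALMSample:
    """Illustrative container for one PALM image and its annotations (field names are hypothetical)."""
    fundus: np.ndarray                     # color fundus photograph, (H, W, 3)
    is_pathologic_myopia: bool             # image-level PM category label
    optic_disc_mask: np.ndarray            # binary segmentation of the optic disc
    fovea_xy: Tuple[float, float]          # manually marked fovea position (pixels)
    atrophy_mask: Optional[np.ndarray]     # patchy retinal atrophy (incl. peripapillary atrophy), if present
    detachment_mask: Optional[np.ndarray]  # retinal detachment delineation, if present
```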


Subjects
Degenerative Myopia, Optic Disc, Retinal Degeneration, Humans, Artificial Intelligence, Ocular Fundus, Degenerative Myopia/diagnostic imaging, Degenerative Myopia/pathology, Optic Disc/diagnostic imaging
3.
Med Image Anal ; 90: 102938, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37806020

ABSTRACT

Glaucoma is a chronic neuro-degenerative condition and one of the world's leading causes of irreversible but preventable blindness, which is generally caused by a lack of timely detection and treatment. Early screening is thus essential for early treatment to preserve vision and maintain quality of life. Colour fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities have prominent biomarkers that indicate glaucoma suspects, such as the vertical cup-to-disc ratio (vCDR) on fundus images and retinal nerve fiber layer (RNFL) thickness on OCT volumes. In clinical practice, it is often recommended to perform both screenings for a more accurate and reliable diagnosis. However, although numerous algorithms have been proposed for automated glaucoma detection based on fundus images or OCT volumes, few methods leverage both modalities to achieve this goal. To fill this research gap, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both the 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus colour photography and 3D OCT volumes, which is the first multi-modality dataset for machine-learning-based glaucoma grading. In addition, an evaluation framework was established to assess the performance of the submitted methods. During the challenge, 1272 results were submitted, and finally, the ten best-performing teams were selected for the final stage. We analyse their results and summarize their methods in this paper. Since all the teams submitted their source code in the challenge, we conducted a detailed ablation study to verify the effectiveness of the particular modules proposed. Finally, we identify the proposed techniques and strategies that could be of practical value for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will serve as an essential guideline and benchmark for future research.
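To illustrate the kind of fundus-plus-OCT fusion the challenge targets, here is a minimal two-branch sketch; the architecture, layer sizes and three-grade head are assumptions for illustration, not any participant's method.

```python
import torch
import torch.nn as nn

class DualBranchGrader(nn.Module):
    """Toy model: a 2D encoder for the fundus photograph and a 3D encoder for the OCT volume,
    fused by concatenation before a 3-way glaucoma grading head. Purely illustrative."""
    def __init__(self, n_grades: int = 3):
        super().__init__()
        self.fundus_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.oct_enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Linear(32 + 16, n_grades)

    def forward(self, fundus: torch.Tensor, oct_volume: torch.Tensor) -> torch.Tensor:
        # fundus: (N, 3, H, W); oct_volume: (N, 1, D, H, W)
        feats = torch.cat([self.fundus_enc(fundus), self.oct_enc(oct_volume)], dim=1)
        return self.head(feats)
```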


Subjects
Glaucoma, Humans, Glaucoma/diagnostic imaging, Retina, Ocular Fundus, Ophthalmological Diagnostic Techniques, Blindness, Optical Coherence Tomography/methods
4.
Brain Topogr ; 36(5): 644-660, 2023 09.
Article in English | MEDLINE | ID: mdl-37382838

ABSTRACT

Radiologists routinely analyze hippocampal asymmetries in magnetic resonance (MR) images as a biomarker for neurodegenerative conditions like epilepsy and Alzheimer's disease. However, current clinical tools rely on either subjective evaluations, basic volume measurements, or disease-specific models that fail to capture more complex differences in normal shape. In this paper, we overcome these limitations by introducing NORHA, a novel NORmal Hippocampal Asymmetry deviation index that uses machine learning novelty detection to objectively quantify hippocampal asymmetry from MR scans. NORHA is based on a One-Class Support Vector Machine model learned from a set of morphological features extracted from automatically segmented hippocampi of healthy subjects. Hence, at test time, the model automatically measures how far a new, unseen sample falls from the feature space of normal individuals. This avoids biases produced by standard classification models, which must be trained on diseased cases and therefore learn to characterize only the changes produced by those diseases. We evaluated our new index in multiple clinical use cases using public and private MRI datasets comprising control individuals and subjects with different levels of dementia or epilepsy. The index yielded high values for subjects with unilateral atrophies and remained low for controls or individuals with mild or severe symmetric bilateral changes. It also showed high AUC values for discriminating individuals with hippocampal sclerosis, further emphasizing its ability to characterize unilateral abnormalities. Finally, a positive correlation between NORHA and the functional cognitive test CDR-SB was observed, highlighting its promising application as a biomarker for dementia.
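A minimal sketch of the one-class novelty detection idea, assuming morphological asymmetry features have already been extracted per subject; the feature dimensionality, hyper-parameters and the sign convention for the index are illustrative, not the published configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# X_healthy: (n_subjects, n_features) morphological asymmetry features from healthy controls,
# e.g. left/right differences of volume and shape descriptors (synthetic here for illustration).
rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(200, 12))

model = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"))
model.fit(X_healthy)

# At test time, a more negative decision value means the new subject falls farther from the
# normal feature space; a NORHA-like asymmetry index can be taken as its negation.
x_new = rng.normal(size=(1, 12))
asymmetry_index = -model.decision_function(x_new)[0]
print(asymmetry_index)
```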


Subjects
Alzheimer Disease, Cognitive Dysfunction, Humans, Hippocampus/diagnostic imaging, Magnetic Resonance Imaging/methods, Alzheimer Disease/diagnostic imaging, Biomarkers
5.
Biomed Opt Express ; 13(5): 2566-2580, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35774310

ABSTRACT

In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. In order to extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take the variability and challenges of clinical FA into account, which are often of low quality and difficult to analyze. In this paper we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as guidance during training. Hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work by using a multi-modal approach that leverages already available public datasets of color fundus photographs (CFPs) and their respective manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that the addition of weak RV labels as guidance during training improves the FAZ segmentation significantly with respect to using only manual FAZ annotations.
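One simple way to inject RV labels as guidance is an auxiliary multi-task loss; the sketch below assumes a network with two output heads and binary targets, and the weighting is illustrative rather than the published setting.

```python
import torch
import torch.nn.functional as F

def guided_loss(faz_logits: torch.Tensor, rv_logits: torch.Tensor,
                faz_target: torch.Tensor, rv_target: torch.Tensor,
                rv_weight: float = 0.5) -> torch.Tensor:
    """Joint objective: the main FAZ segmentation loss plus an auxiliary retinal-vasculature (RV)
    term that lets the weak RV labels guide training. Weighting is an assumption."""
    faz_loss = F.binary_cross_entropy_with_logits(faz_logits, faz_target)
    rv_loss = F.binary_cross_entropy_with_logits(rv_logits, rv_target)
    return faz_loss + rv_weight * rv_loss
```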

6.
IEEE Trans Med Imaging ; 41(10): 2828-2847, 2022 10.
Article in English | MEDLINE | ID: mdl-35507621

ABSTRACT

Age-related macular degeneration (AMD) is the leading cause of visual impairment among the elderly worldwide. Early detection of AMD is of great importance, as the vision loss caused by this disease is irreversible and permanent. Color fundus photography is the most cost-effective imaging modality to screen for retinal disorders. Cutting-edge deep-learning-based algorithms have recently been developed for automatically detecting AMD from fundus images. However, there is still a lack of comprehensive annotated datasets and standard evaluation benchmarks. To deal with this issue, we set up the Automatic Detection challenge on Age-related Macular degeneration (ADAM), which was held as a satellite event of the ISBI 2020 conference. The ADAM challenge consisted of four tasks that cover the main aspects of detecting and characterizing AMD from fundus images: detection of AMD, detection and segmentation of the optic disc, localization of the fovea, and detection and segmentation of lesions. As part of the ADAM challenge, we have released a comprehensive dataset of 1200 fundus images with AMD diagnostic labels, pixel-wise segmentation masks for both the optic disc and AMD-related lesions (drusen, exudates, hemorrhages and scars, among others), as well as the coordinates corresponding to the location of the macular fovea. A uniform evaluation framework has been built to make a fair comparison of different models using this dataset. During the ADAM challenge, 610 results were submitted for online evaluation, and 11 teams finally participated in the onsite challenge. This paper introduces the challenge, the dataset and the evaluation methods, summarizes the participating methods, and analyzes their results for each task. In particular, we observed that the ensembling strategy and the incorporation of clinical domain knowledge were key to improving the performance of the deep learning models.
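The sketch below shows typical per-task metrics one might compute for the four ADAM tasks (Dice for segmentation, Euclidean distance for fovea localization, ROC-AUC for image-level AMD detection); the exact scoring rules of the official evaluation framework may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient for binary masks (optic disc / lesion segmentation tasks)."""
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

def fovea_error(pred_xy, gt_xy) -> float:
    """Euclidean distance (pixels) between predicted and annotated fovea coordinates."""
    return float(np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(gt_xy, float)))

def amd_detection_auc(scores, labels) -> float:
    """Image-level AMD detection evaluated with the area under the ROC curve."""
    return float(roc_auc_score(labels, scores))
```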


Subjects
Macular Degeneration, Aged, Ophthalmological Diagnostic Techniques, Ocular Fundus, Humans, Macular Degeneration/diagnostic imaging, Photography/methods, Reproducibility of Results
7.
Ophthalmol Retina ; 6(6): 501-511, 2022 06.
Article in English | MEDLINE | ID: mdl-35134543

ABSTRACT

PURPOSE: The currently used measures of retinal function are limited by being subjective, nonlocalized, or taxing for patients. To address these limitations, we sought to develop and evaluate a deep learning (DL) method to automatically predict the functional end point (retinal sensitivity) based on structural OCT images. DESIGN: Retrospective, cross-sectional study. SUBJECTS: In total, 714 volumes from 289 patients were used in this study. METHODS: A DL algorithm was developed to automatically predict a comprehensive retinal sensitivity map from an OCT volume. Four hundred sixty-three spectral-domain OCT volumes from 174 patients and their corresponding microperimetry examinations (Nidek MP-1) were used for development and internal validation, with a total of 15,563 retinal sensitivity measurements. The patients presented with a healthy macula, early or intermediate age-related macular degeneration, choroidal neovascularization, or geographic atrophy. In addition, an external validation was performed using 251 volumes from 115 patients, comprising three different patient populations: those with diabetic macular edema, retinal vein occlusion, or epiretinal membrane. MAIN OUTCOME MEASURES: We evaluated the performance of the algorithm using the mean absolute error (MAE), limits of agreement (LoA), and correlation coefficients of point-wise sensitivity (PWS) and mean sensitivity (MS). RESULTS: The algorithm achieved an MAE of 2.34 dB and 1.30 dB, an LoA of 5.70 and 3.07, a Pearson correlation coefficient of 0.66 and 0.84, and a Spearman correlation coefficient of 0.68 and 0.83 for PWS and MS, respectively. In the external test set, the method achieved an MAE of 2.73 dB and 1.66 dB for PWS and MS, respectively. CONCLUSIONS: The proposed approach allows the prediction of retinal function at each measured location directly from an OCT scan, demonstrating how structural imaging can serve as a surrogate of visual function. Prospectively, the approach may help to complement retinal function measures, explore the association between image-based information and retinal functionality, improve disease progression monitoring, and provide objective surrogate measures for future clinical trials.
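The outcome measures above can be computed as sketched below for paired predicted and measured sensitivities (in dB); this is a generic implementation of MAE, Bland-Altman limits of agreement and correlation, not the study's exact evaluation protocol.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def agreement_metrics(pred_db: np.ndarray, true_db: np.ndarray) -> dict:
    """MAE, 95% limits of agreement (Bland-Altman style) and correlation between predicted
    and measured retinal sensitivities. A sketch of the reported measures."""
    diff = pred_db - true_db
    half_width = 1.96 * diff.std(ddof=1)        # centred on the mean difference (bias)
    return {
        "mae": float(np.abs(diff).mean()),
        "bias": float(diff.mean()),
        "loa": (float(diff.mean() - half_width), float(diff.mean() + half_width)),
        "pearson": float(pearsonr(pred_db, true_db)[0]),
        "spearman": float(spearmanr(pred_db, true_db)[0]),
    }
```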


Subjects
Deep Learning, Diabetic Retinopathy, Macular Edema, Cross-Sectional Studies, Humans, Retrospective Studies, Optical Coherence Tomography/methods, Visual Field Tests/methods
8.
Med Image Anal ; 66: 101798, 2020 12.
Article in English | MEDLINE | ID: mdl-32896781

ABSTRACT

Angle closure glaucoma (ACG) is a more aggressive disease than open-angle glaucoma, in which abnormal anatomical structures of the anterior chamber angle (ACA) may cause elevated intraocular pressure, gradually leading to glaucomatous optic neuropathy and eventually to visual impairment and blindness. Anterior Segment Optical Coherence Tomography (AS-OCT) imaging provides a fast and contactless way to discriminate angle closure from open angle. Although many medical image analysis algorithms have been developed for glaucoma diagnosis, only a few studies have focused on AS-OCT imaging. In particular, there is no public AS-OCT dataset available for evaluating the existing methods in a uniform way, which limits progress in the development of automated techniques for angle closure detection and assessment. To address this, we organized the Angle closure Glaucoma Evaluation challenge (AGE), held in conjunction with MICCAI 2019. The AGE challenge consisted of two tasks: scleral spur localization and angle closure classification. For this challenge, we released a large dataset of 4800 annotated AS-OCT images from 199 patients, and also proposed an evaluation framework to benchmark and compare different models. During the AGE challenge, over 200 teams registered online, and more than 1100 results were submitted for online evaluation. Finally, eight teams participated in the onsite challenge. In this paper, we summarize these eight onsite challenge methods, analyze their corresponding results for the two tasks, and further discuss limitations and future directions. In the AGE challenge, the top-performing approach had an average Euclidean distance of 10 pixels (10 µm) in scleral spur localization, while in the task of angle closure classification, all the algorithms achieved satisfactory performance, with the two best obtaining an accuracy rate of 100%. These artificial intelligence techniques have the potential to promote new developments in AS-OCT image analysis and image-based angle closure glaucoma assessment in particular.
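A small sketch of how the two AGE tasks can be scored, assuming per-image predictions; the pixel-to-micron conversion factor is a parameter of the evaluation and is shown only as an assumption.

```python
import numpy as np

def scleral_spur_error_um(pred_xy, gt_xy, um_per_pixel: float = 1.0) -> float:
    """Euclidean distance between predicted and annotated scleral spur locations, in microns."""
    dist_px = np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(gt_xy, float))
    return float(dist_px * um_per_pixel)

def angle_closure_accuracy(pred_labels, gt_labels) -> float:
    """Fraction of AS-OCT images correctly classified as open angle vs. angle closure."""
    pred_labels, gt_labels = np.asarray(pred_labels), np.asarray(gt_labels)
    return float((pred_labels == gt_labels).mean())
```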


Subjects
Angle-Closure Glaucoma, Open-Angle Glaucoma, Anterior Eye Segment/diagnostic imaging, Artificial Intelligence, Angle-Closure Glaucoma/diagnostic imaging, Humans, Optical Coherence Tomography
9.
IEEE Trans Med Imaging ; 39(4): 1291, 2020 04.
Article in English | MEDLINE | ID: mdl-32248087

ABSTRACT

The authors of "Exploiting Epistemic Uncertainty of Anatomy Segmentation for Anomaly Detection in Retinal OCT," which appeared in the January 2020 issue of this journal [1], would like to provide an updated Fig. 3 because there was an error in the published version. The output of the last convolutional layer is labeled "2" in the number of channels, but it should be "11" (10 retinal layers plus the background).

10.
Sci Rep ; 10(1): 5619, 2020 03 27.
Article in English | MEDLINE | ID: mdl-32221349

ABSTRACT

Diabetic macular edema (DME) and retinal vein occlusion (RVO) are macular diseases in which central photoreceptors are affected due to pathological accumulation of fluid. Optical coherence tomography allows clinicians to visually assess and evaluate photoreceptor integrity, whose alteration has been observed to be an important biomarker of both diseases. However, the manual quantification of this layered structure is challenging, tedious and time-consuming. In this paper we introduce a deep learning approach for automatically segmenting and characterising photoreceptor alteration. The photoreceptor layer is segmented using an ensemble of four different convolutional neural networks. En-face representations of the layer thickness are produced to characterize the photoreceptors. The pixel-wise standard deviation of the score maps produced by the individual models is also used to indicate areas of photoreceptor abnormality or ambiguous results. Experimental results showed that our ensemble is able to produce results on par with a human expert, outperforming each of its constituent models. No statistically significant differences were observed between mean thickness estimates obtained from automated and manually generated annotations. Therefore, our model is able to reliably quantify photoreceptors, which can be used to improve the prognosis and management of macular diseases.
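A minimal sketch of the ensemble aggregation step described above, assuming the four networks each produce a per-pixel probability map; the threshold is illustrative.

```python
import numpy as np

def ensemble_scores(score_maps: np.ndarray):
    """Combine per-model photoreceptor score maps.

    score_maps : (n_models, H, W) array of per-pixel probabilities from the individual CNNs.
    Returns the thresholded mean map (final segmentation) and the pixel-wise standard deviation,
    which flags areas of photoreceptor abnormality or ambiguous results.
    """
    mean_map = score_maps.mean(axis=0)
    uncertainty = score_maps.std(axis=0)
    segmentation = mean_map > 0.5          # illustrative threshold
    return segmentation, uncertainty
```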


Subjects
Macular Edema/pathology, Photoreceptor Cells/pathology, Retina/pathology, Deep Learning, Diabetic Retinopathy/pathology, Humans, Neural Networks (Computer), Retinal Vein Occlusion/pathology, Optical Coherence Tomography/methods, Visual Acuity/physiology
11.
Biomed Opt Express ; 11(1): 346-363, 2020 Jan 01.
Article in English | MEDLINE | ID: mdl-32010521

ABSTRACT

Diagnosis and treatment in ophthalmology depend on modern retinal imaging by optical coherence tomography (OCT). The recent staggering results of machine learning in medical imaging have inspired the development of automated segmentation methods to identify and quantify pathological features in OCT scans. These models need to be sensitive to image features defining patterns of interest, while remaining robust to differences in imaging protocols. A dominant factor for such image differences is the type of OCT acquisition device. In this paper, we analyze the ability of recently developed unsupervised unpaired image translation based on cycle-consistency losses (CycleGANs) to deal with image variability across different OCT devices (Spectralis and Cirrus). This evaluation was performed on two clinically relevant segmentation tasks in retinal OCT imaging: fluid and photoreceptor layer segmentation. Additionally, a visual Turing test designed to assess the quality of the learned translation models was carried out by a group of 18 participants with different background expertise. Results show that the learned translation models improve the generalization ability of segmentation models to other OCT vendors/domains not seen during training. Moreover, relationships between model hyper-parameters and the realism as well as the morphological consistency of the generated images could be identified.
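For readers unfamiliar with the cycle-consistency idea, the core term is sketched below for two generators mapping between the two OCT domains; adversarial losses are omitted and the weight is a common default, not necessarily the paper's setting.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(real_a: torch.Tensor, real_b: torch.Tensor,
                           gen_ab, gen_ba, lam: float = 10.0) -> torch.Tensor:
    """Core CycleGAN term for unpaired translation between two OCT domains (e.g. Spectralis
    and Cirrus): images translated to the other domain and back should match the originals."""
    rec_a = gen_ba(gen_ab(real_a))   # A -> B -> A
    rec_b = gen_ab(gen_ba(real_b))   # B -> A -> B
    return lam * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))
```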

12.
IEEE Trans Med Imaging ; 39(1): 87-98, 2020 01.
Article in English | MEDLINE | ID: mdl-31170065

ABSTRACT

Diagnosis and treatment guidance are aided by detecting relevant biomarkers in medical images. Although supervised deep learning can perform accurate segmentation of pathological areas, it is limited by requiring a priori definitions of these regions, large-scale annotations, and a representative patient cohort in the training set. In contrast, anomaly detection is not limited to specific definitions of pathologies and allows for training on healthy samples without annotation. Anomalous regions can then serve as candidates for biomarker discovery. Knowledge about normal anatomical structure brings implicit information for detecting anomalies. We propose to take advantage of this property using Bayesian deep learning, based on the assumption that epistemic uncertainties will correlate with anatomical deviations from a normal training set. A Bayesian U-Net is trained on a well-defined healthy environment using weak labels of healthy anatomy produced by existing methods. At test time, we capture epistemic uncertainty estimates of our model using Monte Carlo dropout. A novel post-processing technique is then applied to exploit these estimates and transfer their layered appearance to smooth blob-shaped segmentations of the anomalies. We experimentally validated this approach in retinal optical coherence tomography (OCT) images, using weak labels of retinal layers. Our method achieved a Dice index of 0.789 in an independent anomaly test set of age-related macular degeneration (AMD) cases. The resulting segmentations allowed very high accuracy for separating healthy and diseased cases with late wet AMD, dry geographic atrophy (GA), diabetic macular edema (DME) and retinal vein occlusion (RVO). Finally, we qualitatively observed that our approach can also detect other deviations in normal scans such as cut edge artifacts.
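The epistemic-uncertainty estimation step can be sketched as Monte Carlo dropout over a trained segmentation model; the number of samples and the use of variance as the uncertainty measure are illustrative choices, and the paper's layer-aware post-processing is not shown.

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, image: torch.Tensor, n_samples: int = 20):
    """Monte Carlo dropout at test time: keep dropout active, run several stochastic forward
    passes and take the per-pixel variance of the softmax outputs as an epistemic-uncertainty
    map, which downstream anomaly post-processing can operate on."""
    model.train()                      # keeps dropout layers stochastic during inference
    probs = torch.stack([torch.softmax(model(image), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)
```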


Subjects
Computer-Assisted Image Processing/methods, Retina/diagnostic imaging, Supervised Machine Learning, Optical Coherence Tomography/methods, Algorithms, Artifacts, Humans
13.
Int J Comput Assist Radiol Surg ; 15(2): 183-192, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31392671

ABSTRACT

PURPOSE: In this paper, we propose to apply generative adversarial neural networks trained with a cycle consistency loss, or CycleGANs, to improve realism in ultrasound (US) simulation from computed tomography (CT) scans. METHODS: A ray-casting US simulation approach is used to generate intermediate synthetic images from abdominal CT scans. Then, an unpaired set of these synthetic and real US images is used to train CycleGANs with two alternative architectures for the generator, a U-Net and a ResNet. These networks are finally used to translate ray-casting based simulations into more realistic synthetic US images. RESULTS: Our approach was evaluated both qualitatively and quantitatively. A user study performed by 21 experts in US imaging shows that both networks significantly improve realism with respect to the original ray-casting algorithm ([Formula: see text]), with the ResNet model performing better than the U-Net ([Formula: see text]). CONCLUSION: Applying CycleGANs makes it possible to obtain better synthetic US images of the abdomen. These results can help reduce the gap between artificially generated and real US scans, which might positively impact applications such as semi-supervised training of machine learning algorithms and low-cost training of medical doctors and radiologists in US image interpretation.
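The two-stage pipeline described above reduces, at inference time, to pushing the ray-cast simulation through the trained generator; the sketch below assumes a generic single-channel image tensor and a generator module (U-Net or ResNet), both of which are assumptions for illustration.

```python
import torch

@torch.no_grad()
def refine_simulation(generator, raycast_us: torch.Tensor) -> torch.Tensor:
    """Stage 2 of the pipeline: a ray-casting simulator first produces an intermediate synthetic
    US image from CT; a CycleGAN generator trained on unpaired synthetic/real US images then
    translates it into a more realistic image. Tensor layout (C, H, W) is an assumption."""
    generator.eval()
    return generator(raycast_us.unsqueeze(0)).squeeze(0)
```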


Subjects
Abdomen/diagnostic imaging, Computer-Assisted Image Processing/methods, Machine Learning, Neural Networks (Computer), X-Ray Computed Tomography/methods, Ultrasonography/methods, Algorithms, Humans
14.
Med Image Anal ; 59: 101570, 2020 01.
Article in English | MEDLINE | ID: mdl-31630011

ABSTRACT

Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Color fundus photography (CFP) is the most cost-effective imaging modality to screen for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied for medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmarking strategy makes it difficult to compare existing methods in a uniform way. In order to overcome these issues we set up the Retinal Fundus Glaucoma Challenge, REFUGE (https://refuge.grand-challenge.org), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest one available. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task. Furthermore, the segmentation results were in general consistent with the ground truth annotations, with complementary outcomes that can be further exploited by ensembling the results.
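The vertical cup-to-disc ratio mentioned above can be derived from disc/cup segmentations such as the ones evaluated in REFUGE; the sketch below is one common way to compute it, with the row/column axis convention assumed for illustration.

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio (vCDR) from binary segmentations: ratio of the vertical
    extents (in rows) of the cup and the disc. Axis conventions are an assumption."""
    def vertical_extent(mask: np.ndarray) -> int:
        rows = np.where(mask.any(axis=1))[0]
        return 0 if rows.size == 0 else int(rows.max() - rows.min() + 1)

    disc_height = vertical_extent(disc_mask)
    return float(vertical_extent(cup_mask) / disc_height) if disc_height else 0.0
```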


Subjects
Deep Learning, Ophthalmological Diagnostic Techniques, Ocular Fundus, Glaucoma/diagnostic imaging, Photography, Datasets as Topic, Humans
15.
Comput Methods Programs Biomed ; 153: 115-127, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29157445

ABSTRACT

BACKGROUND AND OBJECTIVES: Diabetic retinopathy (DR) is one of the leading causes of preventable blindness in the world. Its earliest signs are red lesions, a general term that groups both microaneurysms (MAs) and hemorrhages (HEs). In daily clinical practice, these lesions are manually detected by physicians using fundus photographs. However, this task is tedious and time-consuming, and requires an intensive effort due to the small size of the lesions and their lack of contrast. Computer-assisted diagnosis of DR based on red lesion detection is being actively explored because it improves both the consistency and the accuracy of clinicians. Moreover, it provides comprehensive feedback that is easy for physicians to assess. Several methods for detecting red lesions have been proposed in the literature, most of them based on characterizing lesion candidates using hand-crafted features and classifying them into true or false positive detections. Deep learning based approaches, by contrast, are scarce in this domain due to the high expense of annotating the lesions manually. METHODS: In this paper we propose a novel method for red lesion detection based on combining both deep learned features and domain knowledge. Features learned by a convolutional neural network (CNN) are augmented by incorporating hand-crafted features. This ensemble vector of descriptors is then used to identify true lesion candidates using a Random Forest classifier. RESULTS: We empirically observed that combining both sources of information significantly improves results with respect to using each approach separately. Furthermore, our method reported the highest per-lesion performance on DIARETDB1 and e-ophtha, and the highest performance for screening and need for referral on MESSIDOR compared to a second human expert. CONCLUSIONS: Results highlight the fact that integrating manually engineered approaches with deep learned features is a relevant way to improve results when networks are trained from lesion-level annotated data. An open source implementation of our system is publicly available at https://github.com/ignaciorlando/red-lesion-detection.
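A minimal sketch of the hybrid descriptor idea, assuming per-candidate CNN and hand-crafted feature vectors are already available; shapes, synthetic data and hyper-parameters are assumptions, and the candidate-extraction step is omitted (the released repository contains the actual implementation).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-candidate CNN features are concatenated with hand-crafted ones and fed to a Random Forest
# that labels each candidate as a true red lesion or a false positive.
rng = np.random.default_rng(0)
cnn_feats = rng.normal(size=(500, 64))     # features learned by the CNN, one row per candidate
handcrafted = rng.normal(size=(500, 20))   # e.g. shape/intensity descriptors of each candidate
labels = rng.integers(0, 2, size=500)      # 1 = true red lesion, 0 = false positive

X = np.hstack([cnn_feats, handcrafted])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
lesion_probabilities = clf.predict_proba(X)[:, 1]
```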


Subjects
Diabetic Retinopathy/pathology, Ocular Fundus, Machine Learning, Diabetic Retinopathy/diagnostic imaging, Humans, Computer-Assisted Image Interpretation, Microaneurysm/diagnostic imaging, Neural Networks (Computer)
16.
Med Phys ; 44(12): 6425-6434, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29044550

ABSTRACT

PURPOSE: Diabetic retinopathy (DR) is one of the most widespread causes of preventable blindness in the world. The most dangerous stage of this condition is proliferative DR (PDR), in which the risk of vision loss is high and treatments are less effective. Fractal features of the retinal vasculature have been previously explored as potential biomarkers of DR, yet the current literature is inconclusive with respect to their correlation with PDR. In this study, we experimentally assess their discrimination ability to recognize PDR cases. METHODS: A statistical analysis of the viability of using three reference fractal characterization schemes - namely box, information, and correlation dimensions - to identify patients with PDR is presented. These descriptors are also evaluated as input features for training ℓ1- and ℓ2-regularized logistic regression classifiers, to estimate their performance. RESULTS: Our results on MESSIDOR, a public dataset of 1200 fundus photographs, indicate that patients with PDR are more likely to exhibit a higher fractal dimension than healthy subjects or patients with mild levels of DR (p ≤ 1.3×10⁻²). Moreover, a supervised classifier trained with both fractal measurements and red lesion-based features reports an area under the ROC curve of 0.93 for PDR screening and 0.96 for detecting patients with optic disc neovascularizations. CONCLUSIONS: The fractal dimension of the vasculature increases with the level of DR. Furthermore, PDR screening using multiscale fractal measurements is more feasible than using their derived fractal dimensions. Code and further resources are provided at https://github.com/ignaciorlando/fundus-fractal-analysis.
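As a worked illustration of the first of the three descriptors, the sketch below estimates the box (capacity) dimension of a binary vessel segmentation by counting occupied boxes at dyadic scales and fitting log N(s) against log(1/s); this is a standard estimator, not necessarily the exact implementation in the linked repository, and it assumes a non-empty mask.

```python
import numpy as np

def box_counting_dimension(vessel_mask: np.ndarray) -> float:
    """Estimate the box-counting (capacity) dimension of a binary vessel segmentation."""
    size = 2 ** int(np.floor(np.log2(min(vessel_mask.shape))))
    img = vessel_mask[:size, :size].astype(bool)       # crop to a power-of-two square
    scales, counts = [], []
    s = size
    while s >= 2:
        blocks = img.reshape(size // s, s, size // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes of side s containing vessel pixels
        scales.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(np.asarray(counts)), 1)
    return float(slope)
```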


Subjects
Factual Databases, Diabetic Retinopathy/diagnostic imaging, Diabetic Retinopathy/pathology, Diagnostic Imaging, Fractals, Computer-Assisted Image Processing/methods, Ophthalmological Diagnostic Techniques, Humans, Retina/diagnostic imaging, Retina/pathology
17.
IEEE Trans Biomed Eng ; 64(1): 16-27, 2017 01.
Article in English | MEDLINE | ID: mdl-26930672

ABSTRACT

GOAL: In this work, we present an extensive description and evaluation of our method for blood vessel segmentation in fundus images based on a discriminatively trained fully connected conditional random field model. METHODS: Standard segmentation priors such as a Potts model or total variation usually fail when dealing with thin and elongated structures. We overcome this difficulty by using a conditional random field model with more expressive potentials, taking advantage of recent results enabling inference of fully connected models almost in real time. Parameters of the method are learned automatically using a structured output support vector machine, a supervised technique widely used for structured prediction in a number of machine learning applications. RESULTS: Our method, trained with state-of-the-art features, is evaluated both quantitatively and qualitatively on four publicly available datasets: DRIVE, STARE, CHASEDB1, and HRF. Additionally, a quantitative comparison with respect to other strategies is included. CONCLUSION: The experimental results show that this approach outperforms other techniques when evaluated in terms of sensitivity, F1-score, G-mean, and Matthews correlation coefficient. Additionally, it was observed that the fully connected model is able to better distinguish the desired structures than the local neighborhood-based approach. SIGNIFICANCE: Results suggest that this method is suitable for the task of segmenting elongated structures, a feature that can be exploited to contribute to other medical and biological applications.
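The sketch below shows fully connected CRF refinement of a vessel probability map using the pydensecrf package, which implements the fast mean-field inference this line of work builds on; the pairwise parameters are illustrative defaults, not the ones learned with the structured output SVM described in the paper.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def dense_crf_vessels(rgb: np.ndarray, prob_vessel: np.ndarray, n_iter: int = 5) -> np.ndarray:
    """Refine a per-pixel vessel probability map with a fully connected CRF.

    rgb         : (H, W, 3) uint8 fundus image
    prob_vessel : (H, W) float probabilities in [0, 1]
    """
    h, w = prob_vessel.shape
    probs = np.stack([1.0 - prob_vessel, prob_vessel]).astype(np.float32)   # (2, H, W)
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))
    d.addPairwiseGaussian(sxy=3, compat=3)                                   # smoothness term
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(rgb), compat=5)        # appearance term
    q = np.array(d.inference(n_iter)).reshape(2, h, w)
    return q.argmax(axis=0).astype(np.uint8)
```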


Subjects
Fluorescein Angiography/methods, Computer-Assisted Image Interpretation/methods, Machine Learning, Ophthalmoscopy/methods, Retinal Artery/diagnostic imaging, Retinal Diseases/diagnostic imaging, Computer Simulation, Humans, Cardiovascular Models, Statistical Models, Automated Pattern Recognition/methods, Photography/methods, Reproducibility of Results, Retinal Artery/pathology, Retinal Diseases/pathology, Sensitivity and Specificity
18.
Int J Comput Assist Radiol Surg ; 11(8): 1397-407, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26811082

ABSTRACT

BACKGROUND: Intravascular ultrasound (IVUS) provides axial greyscale images, allowing the assessment of the vessel wall and the surrounding tissues. Several studies have described automatic segmentation of the luminal boundary and the media-adventitia interface by means of different image features. PURPOSE: The aim of the present study is to evaluate the capability of some of the most relevant state-of-the-art image features for segmenting IVUS images. The study is focused on Volcano 20 MHz frames not containing plaque or containing fibrotic plaques, and, in principle, it does not apply to frames containing shadows, calcified plaques, bifurcations or side vessels. METHODS: Several image filters, textural descriptors, edge detectors, and noise and spatial measures were taken into account. The assessment is based on classification techniques previously used for IVUS segmentation, assigning to each pixel a continuous likelihood value obtained using support vector machines (SVMs). To retrieve relevant features, sequential feature selection was performed, guided by the area under the precision-recall curve (AUC-PR). RESULTS: Subsets of relevant image features for lumen, plaque and surrounding tissue characterization were obtained, and SVMs trained with these features were able to accurately identify those regions. The experimental results were evaluated with respect to ground truth segmentations from a publicly available dataset, reaching AUC-PR values of up to 0.97 and a Jaccard index close to 0.85. CONCLUSION: Noise-reduction filters and Haralick's textural features proved relevant for identifying lumen and background, while Laws' textural features, local binary patterns, Gabor filters and edge detectors had less relevance in the selection process.
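A small sketch of the described selection-plus-classification pipeline using scikit-learn: forward sequential feature selection scored by average precision (an estimate of AUC-PR), followed by an SVM producing a continuous per-pixel likelihood. Data, feature counts and hyper-parameters are synthetic assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))        # per-pixel descriptors (filters, texture, edges, ...)
y = rng.integers(0, 2, size=300)      # 1 = lumen pixel, 0 = other (one-vs-rest illustration)

svm = SVC(kernel="rbf", gamma="scale")
selector = SequentialFeatureSelector(svm, n_features_to_select=6, direction="forward",
                                     scoring="average_precision", cv=3)
selector.fit(X, y)

# Continuous likelihood per pixel from the SVM decision function on the selected features.
scores = svm.fit(selector.transform(X), y).decision_function(selector.transform(X))
```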


Subjects
Arteries/diagnostic imaging, Atherosclerotic Plaque/diagnostic imaging, Interventional Ultrasonography/methods, Algorithms, Humans, Reproducibility of Results, Support Vector Machine
19.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 634-41, 2014.
Article in English | MEDLINE | ID: mdl-25333172

ABSTRACT

In this work, we present a novel method for blood vessel segmentation in fundus images based on a discriminatively trained, fully connected conditional random field model. Retinal image analysis is greatly aided by blood vessel segmentation, as the vessel structure may be considered either a key source of signal, e.g. in the diagnosis of diabetic retinopathy, or a nuisance, e.g. in the analysis of pigment epithelium or choroid-related abnormalities. Blood vessel segmentation in fundus images has been considered extensively in the literature, but remains a challenge largely because the desired structures are thin and elongated, a setting in which standard segmentation priors such as a Potts model or total variation perform particularly poorly. In this work, we overcome this difficulty using a discriminatively trained conditional random field model with more expressive potentials. In particular, we employ recent results enabling extremely fast inference in a fully connected model. We find that this rich but computationally efficient model family, combined with principled discriminative training based on a structured output support vector machine, yields a fully automated system that achieves results statistically indistinguishable from an expert human annotator. Implementation details are available at http://pages.saclay.inria.fr/matthew.blaschko/projects/retina/.


Subjects
Algorithms, Artificial Intelligence, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Retinal Vessels/anatomy & histology, Retinoscopy/methods, Subtraction Technique, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity