Results 1 - 5 of 5
1.
Biomed Eng Online; 18(1): 29, 2019 Mar 20.
Article in English | MEDLINE | ID: mdl-30894178

ABSTRACT

BACKGROUND: Most current algorithms for automatic glaucoma assessment using fundus images rely on handcrafted, segmentation-based features, which are affected by the performance of the chosen segmentation method and the extracted features. Among other characteristics, convolutional neural networks (CNNs) are known for their ability to learn highly discriminative features from raw pixel intensities. METHODS: In this paper, we employed five different ImageNet-trained models (VGG16, VGG19, InceptionV3, ResNet50 and Xception) for automatic glaucoma assessment using fundus images. Results from an extensive validation using cross-validation and cross-testing strategies were compared with previous works in the literature. RESULTS: Using five public databases (1707 images), the Xception architecture achieved an average AUC of 0.9605 (95% confidence interval: 95.92-97.07%), an average specificity of 0.8580 and an average sensitivity of 0.9346, significantly improving on other state-of-the-art works. Moreover, a new clinical database, ACRIMA, has been made publicly available, containing 705 labelled images (396 glaucomatous and 309 normal), making it the largest public database for glaucoma diagnosis. The high specificity and sensitivity of the proposed approach are supported by extensive validation using not only a cross-validation strategy but also cross-testing on, to the best of the authors' knowledge, all publicly available glaucoma-labelled databases. CONCLUSIONS: These results suggest that ImageNet-trained models are a robust alternative for automatic glaucoma screening systems. All images, CNN weights and software used to fine-tune and test the five CNNs are publicly available and can serve as a testbed for further comparisons.
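As a concrete illustration of the transfer-learning setup described above, the following is a minimal Keras sketch of fine-tuning an ImageNet-pretrained Xception for binary glaucoma classification; the input size, optimizer, learning rate and metric settings are assumptions for illustration, not the authors' released configuration.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained Xception for
# glaucoma-vs-normal classification of fundus images (assumed settings).
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet",      # start from ImageNet-trained filters
    include_top=False,       # drop the 1000-class ImageNet head
    input_shape=(299, 299, 3),
)
base.trainable = True        # fine-tune all layers rather than freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # glaucomatous vs. normal
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),        # assumed learning rate
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc")],      # AUC, as reported above
)
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # cropped fundus images
```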


Subject(s)
Fundus Oculi , Glaucoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Databases, Factual , Humans , Time Factors
2.
Med Image Anal; 95: 103207, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776843

ABSTRACT

The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, since manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of artificial intelligence (AI)-based annotation applications aimed at reducing the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focused on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label readily supports locally installed (3D Slicer) and web-based (OHIF) frontends, and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used directly off the shelf, plug-and-play, on any given dataset. Significantly reduced annotation times with the interactive model are observed on two public datasets.
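To make the active learning idea concrete, here is an illustrative Python sketch of an uncertainty-driven selection strategy; it is a stand-in for the concept, not MONAI Label's actual API, and the entropy criterion and data layout are assumptions.

```python
# Illustrative only (not MONAI Label's API): surface the unlabeled study whose
# current segmentation output is most uncertain, so the annotator's effort goes
# where the model needs it most.
import numpy as np

def mean_entropy(prob_map: np.ndarray) -> float:
    """Mean voxel-wise binary entropy of a predicted foreground probability map."""
    eps = 1e-7
    p = np.clip(prob_map, eps, 1.0 - eps)
    return float(np.mean(-p * np.log(p) - (1.0 - p) * np.log(1.0 - p)))

def next_study(predictions: dict) -> str:
    """predictions maps study IDs to probability maps; pick the least certain one."""
    return max(predictions, key=lambda sid: mean_entropy(predictions[sid]))

# The selected study would then be pushed to the clinician's frontend
# (e.g. 3D Slicer or OHIF) for interactive annotation.
```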


Subject(s)
Artificial Intelligence , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Algorithms , Software
3.
Med Image Anal; 74: 102228, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34563860

ABSTRACT

Shape reconstruction from sparse point clouds/images is a challenging and relevant task required for a variety of applications in computer vision and medical image analysis (e.g. surgical navigation, cardiac motion analysis, augmented/virtual reality systems). A subset of such methods, viz. 3D shape reconstruction from 2D contours, is especially relevant for computer-aided diagnosis and intervention applications involving meshes derived from multiple 2D image slices, views or projections. We propose a deep learning architecture, coined Mesh Reconstruction Network (MR-Net), which tackles this problem. MR-Net enables accurate 3D mesh reconstruction in real time despite missing data and sparse annotations. Using 3D cardiac shape reconstruction from 2D contours defined on short-axis cardiac magnetic resonance image slices as an exemplar, we demonstrate that our approach consistently outperforms state-of-the-art techniques for shape reconstruction from unstructured point clouds. Our approach reconstructs 3D cardiac meshes to within a 2.5 mm point-to-point error with respect to the ground-truth data (the original image spatial resolution is ~1.8 × 1.8 × 10 mm³). We further evaluate the robustness of the proposed approach to incomplete data and to contours estimated using an automatic segmentation algorithm. MR-Net is generic and could reconstruct the shapes of other organs, making it a compelling tool for various applications in medical image analysis.
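For reference, a hedged sketch of the point-to-point error metric quoted above, taken here to be the symmetric mean closest-point distance between the reconstructed and ground-truth mesh vertices; this is an assumed reading of the metric, not the MR-Net evaluation code.

```python
# Symmetric mean closest-point distance between two vertex sets, in mm
# (assumed definition of the "point-to-point error" reported above).
import numpy as np
from scipy.spatial import cKDTree

def point_to_point_error(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """pred_pts: (N, 3) reconstructed vertices; gt_pts: (M, 3) ground-truth vertices."""
    d_pred_to_gt = cKDTree(gt_pts).query(pred_pts)[0]   # nearest GT point per prediction
    d_gt_to_pred = cKDTree(pred_pts).query(gt_pts)[0]   # nearest prediction per GT point
    return 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())
```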


Subject(s)
Algorithms , Imaging, Three-Dimensional , Heart , Humans , Magnetic Resonance Imaging
4.
Med Image Anal; 59: 101570, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31630011

ABSTRACT

Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Color fundus photography (CFP) is the most cost-effective imaging modality for screening for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers, such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied in medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmarking strategy makes it difficult to compare existing methods in a uniform way. To overcome these issues, we set up the Retinal Fundus Glaucoma Challenge, REFUGE (https://refuge.grand-challenge.org), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground-truth segmentations and clinical glaucoma labels, currently the largest such data set in existence. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task. Furthermore, the segmentation results were in general consistent with the ground-truth annotations, with complementary outcomes that can be further exploited by ensembling the results.
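To illustrate the vertical cup-to-disc ratio biomarker mentioned above, here is a minimal Python sketch that derives it from binary optic disc and optic cup segmentation masks; the mask convention (rows as the vertical axis, nonzero as foreground) is an assumption, and this is not the REFUGE evaluation code.

```python
# Vertical cup-to-disc ratio (vCDR) from binary segmentation masks
# (assumed convention: 2D arrays, axis 0 vertical, nonzero = foreground).
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Height in pixels of the mask's foreground along the vertical axis."""
    rows = np.flatnonzero(mask.any(axis=1))
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def vertical_cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Larger vCDR values are associated with a higher likelihood of glaucoma."""
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else 0.0
```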


Subject(s)
Deep Learning , Diagnostic Techniques, Ophthalmological , Fundus Oculi , Glaucoma/diagnostic imaging , Photography , Datasets as Topic , Humans
5.
IEEE Trans Med Imaging; 38(9): 2211-2218, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30843823

ABSTRACT

Recent works show that generative adversarial networks (GANs) can be successfully applied to image synthesis and semi-supervised learning, where, given a small labeled database and a large unlabeled database, the goal is to train a powerful classifier. In this paper, we trained a retinal image synthesizer and a semi-supervised glaucoma classifier using an adversarial model on a small glaucoma-labeled database and a large unlabeled database. Various studies have shown that glaucoma can be monitored by analyzing the optic disc and its surroundings; for that reason, the images used in this paper were automatically cropped around the optic disc. The novelty of this paper is to propose a new retinal image synthesizer and a semi-supervised learning method for glaucoma assessment based on deep convolutional GANs. In addition, and to the best of our knowledge, this system is trained on an unprecedented number of publicly available images (86,926 images). Hence, this system is able not only to generate images synthetically but also to provide labels automatically. Synthetic images were qualitatively evaluated using t-SNE plots of features associated with the images, and their anatomical consistency was estimated by measuring the proportion of pixels corresponding to the anatomical structures around the optic disc. The resulting image synthesizer is able to generate realistic (cropped) retinal images, and, subsequently, the glaucoma classifier is able to classify them into glaucomatous and normal with high accuracy (AUC = 0.9017). The obtained retinal image synthesizer and glaucoma classifier could then be used to generate an unlimited number of cropped retinal images with glaucoma labels.
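For orientation, here is a minimal DCGAN-style generator sketch in Keras; the latent size, layer widths and 64×64 output resolution are assumptions for illustration and do not reproduce the paper's exact architecture.

```python
# Hedged DCGAN-style generator sketch: a latent vector is upsampled through
# transposed convolutions into a synthetic crop of a retinal image centred on
# the optic disc (assumed 64x64 RGB output; tanh maps pixels to [-1, 1]).
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim: int = 100) -> tf.keras.Model:
    z = tf.keras.Input(shape=(latent_dim,))
    x = layers.Dense(8 * 8 * 256, use_bias=False)(z)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Reshape((8, 8, 256))(x)
    # Three upsampling stages: 8x8 -> 16x16 -> 32x32 -> 64x64
    for filters in (128, 64):
        x = layers.Conv2DTranspose(filters, 5, strides=2, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    img = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model(z, img, name="retinal_generator")

# generator = build_generator()
# fake = generator(tf.random.normal((16, 100)))  # 16 synthetic 64x64 RGB crops
```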


Subject(s)
Glaucoma/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Retina/diagnostic imaging , Supervised Machine Learning , Algorithms , Databases, Factual , Diagnostic Techniques, Ophthalmological , Humans