Results 1 - 20 of 37
1.
J Digit Imaging ; 27(6): 794-804, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24895064

ABSTRACT

We propose a fully automated method for segmenting the cardiac right ventricle (RV) from magnetic resonance (MR) images. Given an MR test image, it is first oversegmented into superpixels and each superpixel is analyzed to detect the presence of RV regions using random forest (RF) classifiers. The superpixels containing RV regions constitute the region of interest (ROI), which is used to segment the actual RV. Probability maps are generated for each ROI pixel using a second set of RF classifiers, which give the probabilities of each pixel belonging to the RV or the background. The negative log-likelihoods of these maps are used as penalty costs in a graph cut segmentation framework. Low-level features such as intensity statistics, texture anisotropy and curvature asymmetry, as well as high-level context features, are used at different stages. Smoothness constraints are imposed based on semantic information (the importance of each feature to the classification task) derived from the second set of learned RF classifiers. Experimental results show that, compared to conventional methods, our algorithm achieves superior performance due to the inclusion of semantic knowledge and context information.
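
The abstract does not include an implementation, but the step of turning random-forest probability maps into graph-cut penalty costs can be illustrated with a minimal sketch. The snippet below uses placeholder features, labels and image sizes (all hypothetical), trains a scikit-learn random forest, and computes negative log-likelihood unary costs; the feature extraction and the actual graph-cut solver are omitted.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pixel features for an ROI of shape (H, W, F)
H, W, F = 64, 64, 8
roi_features = np.random.rand(H, W, F)

# Second-stage RF trained on labelled pixels (1 = RV, 0 = background); placeholder data
train_X = np.random.rand(500, F)
train_y = np.tile([0, 1], 250)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(train_X, train_y)

# Probability map: P(pixel belongs to RV) for every ROI pixel
probs = rf.predict_proba(roi_features.reshape(-1, F))[:, 1].reshape(H, W)

# Negative log-likelihoods used as unary (penalty) costs in the graph cut
eps = 1e-6
cost_rv = -np.log(probs + eps)        # cost of assigning label "RV"
cost_bg = -np.log(1.0 - probs + eps)  # cost of assigning label "background"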


Subjects
Heart Ventricles/anatomy & histology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Decision Trees , Humans , Image Interpretation, Computer-Assisted/methods , Semantics
2.
Med Image Anal ; 93: 103075, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38199069

ABSTRACT

Informative sample selection in an active learning (AL) setting helps a machine learning system attain optimum performance with minimum labeled samples, thus reducing annotation costs and boosting the performance of computer-aided diagnosis systems in the presence of limited labeled data. Another effective technique for enlarging datasets in a small labeled data regime is data augmentation. An intuitive active learning approach thus consists of combining informative sample selection and data augmentation to leverage their respective advantages and improve the performance of AL systems. In this paper, we propose a novel approach called GANDALF (Graph-based TrANsformer and Data Augmentation Active Learning Framework) to combine sample selection and data augmentation in a multi-label setting. Conventional sample selection approaches in AL have mostly focused on the single-label setting, where a sample has only one disease label. These approaches do not perform optimally when a sample can have multiple disease labels (e.g., in chest X-ray images). We improve upon state-of-the-art multi-label active learning techniques by representing disease labels as graph nodes and using graph attention transformers (GAT) to learn more effective inter-label relationships. We identify the most informative samples by aggregating GAT representations. Subsequently, we generate transformations of these informative samples by sampling from a learned latent space. From these generated samples, we identify informative samples via a novel multi-label informativeness score, which, going beyond the state of the art, ensures that (i) generated samples are not redundant with respect to the training data and (ii) they make important contributions to the training stage. We apply our method to two public chest X-ray datasets, as well as breast, dermatology, retina and kidney tissue microscopy MedMNIST datasets, and report improved results over state-of-the-art multi-label AL techniques in terms of model performance, learning rates, and robustness.
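
The exact GANDALF informativeness score is not given in the abstract; the sketch below is only a simplified stand-in that captures its two stated ingredients, uncertainty over multiple labels and non-redundancy with respect to the training data. All array shapes and data are placeholders.

import numpy as np

def informativeness(pred_probs, cand_feats, train_feats):
    # Toy multi-label informativeness: mean per-label entropy (uncertainty)
    # scaled by one minus the highest similarity to any training sample
    # (non-redundancy). Not the paper's formulation.
    ent = -(pred_probs * np.log(pred_probs + 1e-9)
            + (1 - pred_probs) * np.log(1 - pred_probs + 1e-9)).mean(axis=1)
    c = cand_feats / np.linalg.norm(cand_feats, axis=1, keepdims=True)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    redundancy = (c @ t.T).max(axis=1)
    return ent * (1.0 - redundancy)

# Example: 10 candidate samples, 5 disease labels, 32-d features (all hypothetical)
scores = informativeness(np.random.rand(10, 5),
                         np.random.rand(10, 32), np.random.rand(100, 32))
selected = np.argsort(scores)[::-1][:3]   # pick the 3 most informative candidates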


Subjects
Breast , Thorax , Humans , X-Rays , Radiography , Diagnosis, Computer-Assisted
3.
IEEE Trans Med Imaging ; PP, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39018216

ABSTRACT

In fully supervised learning-based medical image classification, the robustness of a trained model is influenced by its exposure to the range of candidate disease classes. Generalized Zero Shot Learning (GZSL) aims to correctly predict both seen and novel unseen classes. Current GZSL approaches have focused mostly on the single-label case. However, it is common for chest X-rays to be labelled with multiple disease classes. We propose a novel multi-modal multi-label GZSL approach that leverages feature disentanglement and multi-modal information to synthesize features of unseen classes. Disease labels are processed through a pre-trained BioBert model to obtain text embeddings that are used to create a dictionary encoding similarity among different labels. We then use disentangled features and graph aggregation to learn a second dictionary of inter-label similarities. A subsequent clustering step helps to identify representative vectors for each class. The multi-modal multi-label dictionaries and the class representative vectors are used to guide the feature synthesis step, the most important component of our pipeline, which generates realistic multi-label disease samples of seen and unseen classes. Our method is benchmarked against multiple competing methods and outperforms all of them in experiments conducted on the publicly available NIH and CheXpert chest X-ray datasets.
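
As a rough illustration of the text-based label-similarity dictionary described above (not the paper's code), the snippet below builds pairwise cosine similarities between label embeddings. The label names and the 768-dimensional random embeddings are placeholders standing in for BioBert outputs.

import numpy as np

# Hypothetical pre-computed text embeddings, one per disease label
labels = ["Atelectasis", "Cardiomegaly", "Effusion", "Pneumonia"]
emb = np.random.rand(len(labels), 768)   # placeholder 768-d embeddings

# Dictionary encoding pairwise label similarity (cosine)
norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sim = norm @ norm.T
label_similarity = {(labels[i], labels[j]): float(sim[i, j])
                    for i in range(len(labels))
                    for j in range(len(labels)) if i < j}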

4.
Med Image Anal ; 97: 103261, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39018722

ABSTRACT

State-of-the-art deep learning models often fail to generalize in the presence of distribution shifts between training (source) data and test (target) data. Domain adaptation methods are designed to address this issue using labeled samples (supervised domain adaptation) or unlabeled samples (unsupervised domain adaptation). Active learning is a method to select informative samples to obtain maximum performance from minimum annotations. Selecting informative target domain samples can improve model performance and robustness and reduce data demands. This paper proposes a novel pipeline called ALFREDO (Active Learning with FeatuRe disEntanglement and DOmain adaptation) that performs active learning under domain shift. We propose a novel feature disentanglement approach to decompose image features into domain-specific and task-specific components. Domain-specific components refer to features that provide source-specific information, e.g., scanners, vendors or hospitals. Task-specific components are discriminative features for classification, segmentation or other tasks. Thereafter, we define multiple novel cost functions that identify informative samples under domain shift. We test our proposed method for medical image classification using one histopathology dataset and two chest X-ray datasets. Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods, as well as state-of-the-art active domain adaptation methods.
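
A minimal sketch of the two-branch disentanglement idea (domain-specific versus task-specific features) is shown below; it is an assumption-laden toy in PyTorch, not the ALFREDO architecture. Layer sizes, the orthogonality-style penalty and the input features are all illustrative choices.

import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    # Toy two-branch encoder: one head for domain-specific features
    # (scanner/vendor/hospital style) and one for task-specific features
    # used by the downstream classifier. Layer sizes are arbitrary.
    def __init__(self, in_dim=512, feat_dim=128, n_domains=3, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.domain_head = nn.Linear(256, feat_dim)
        self.task_head = nn.Linear(256, feat_dim)
        self.domain_clf = nn.Linear(feat_dim, n_domains)
        self.task_clf = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        z_dom, z_task = self.domain_head(h), self.task_head(h)
        return self.domain_clf(z_dom), self.task_clf(z_task), z_dom, z_task

model = DisentangledEncoder()
x = torch.randn(8, 512)   # a batch of pooled image features (placeholder)
dom_logits, task_logits, z_dom, z_task = model(x)
# A simple disentanglement penalty: discourage correlation between the two parts
ortho_loss = (z_dom * z_task).mean().abs()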


Subjects
Deep Learning , Humans , Algorithms , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods , Machine Learning
5.
IEEE Trans Med Imaging ; PP, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137089

ABSTRACT

Deep learning models for medical image analysis easily suffer from distribution shifts caused by dataset artifact bias, camera variations, differences in imaging stations, etc., leading to unreliable diagnoses in real-world clinical settings. Domain generalization (DG) methods, which aim to train models on multiple domains so that they perform well on unseen domains, offer a promising direction to solve this problem. However, existing DG methods assume that domain labels for each image are available and accurate, which is typically feasible for only a limited number of medical datasets. To address these challenges, we propose a unified DG framework for medical image classification that does not rely on domain labels, called Prompt-driven Latent Domain Generalization (PLDG). PLDG consists of unsupervised domain discovery and prompt learning. The framework first discovers pseudo domain labels by clustering bias-associated style features, then leverages collaborative domain prompts to guide a Vision Transformer to learn knowledge from the discovered diverse domains. To facilitate cross-domain knowledge learning between different prompts, we introduce a domain prompt generator that enables knowledge sharing between domain prompts and a shared prompt. A domain mixup strategy is additionally employed to obtain more flexible decision margins and mitigate the risk of incorrect domain assignments. Extensive experiments on three medical image classification tasks and one debiasing task demonstrate that our method can achieve comparable or even superior performance to conventional DG algorithms without relying on domain labels. Our code is publicly available at https://github.com/SiyuanYan1/PLDG/tree/main.
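
The unsupervised domain-discovery step can be pictured as clustering style statistics; the snippet below is a minimal sketch under that assumption (channel-wise mean and standard deviation as a crude style descriptor, k-means into four pseudo domains), not the PLDG implementation.

import numpy as np
from sklearn.cluster import KMeans

def style_features(batch):
    # Channel-wise mean and std as a crude style descriptor for images
    # shaped (N, C, H, W); the actual PLDG style statistics may differ.
    mu = batch.mean(axis=(2, 3))
    sigma = batch.std(axis=(2, 3))
    return np.concatenate([mu, sigma], axis=1)

images = np.random.rand(100, 3, 64, 64)   # placeholder image batch
styles = style_features(images)
pseudo_domains = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(styles)
# pseudo_domains assigns each image to one of 4 discovered latent domains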

6.
J Digit Imaging ; 26(2): 173-82, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22688560

ABSTRACT

We propose a joint segmentation and groupwise registration method for dynamic cardiac perfusion images that uses temporal information. The nature of perfusion images makes groupwise registration especially attractive, as the temporal information from the entire image sequence can be used. Registration aims to maximize the smoothness of the intensity signal, while segmentation minimizes a pixel's dissimilarity with other pixels having the same segmentation label. The cost function is optimized in an iterative fashion using B-splines. Tests on real patient datasets show that, compared with two other methods, our method achieves lower registration error and higher segmentation accuracy. This is attributed to the use of temporal information for groupwise registration and to combining mutually complementary registration and segmentation information in one framework, while the other methods solve the two problems separately.
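
The registration objective rewards temporally smooth intensity curves; one simple way to express such a cost (an illustrative assumption, not the paper's exact formulation) is the sum of squared second-order temporal differences over a registered sequence:

import numpy as np

def temporal_smoothness_cost(sequence):
    # Sum of squared second-order temporal differences of pixel intensities
    # for a registered perfusion sequence of shape (T, H, W). A well-aligned
    # sequence yields smooth intensity curves and therefore a low cost.
    second_diff = sequence[2:] - 2.0 * sequence[1:-1] + sequence[:-2]
    return float((second_diff ** 2).sum())

seq = np.random.rand(30, 64, 64)   # placeholder dynamic perfusion frames
cost = temporal_smoothness_cost(seq)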


Subjects
Algorithms , Diffusion Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted , Imaging, Three-Dimensional , Heart Ventricles/diagnostic imaging , Humans , Models, Cardiovascular , Myocardium/pathology , Perfusion Imaging/methods , Sensitivity and Specificity , Subtraction Technique
7.
J Digit Imaging ; 26(4): 721-30, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23319109

ABSTRACT

In this paper, we propose a novel method for segmentation of the left ventricle, right ventricle, and myocardium from cine cardiac magnetic resonance images of the STACOM database. Our method incorporates prior shape information in a graph cut framework to achieve segmentation. Poor edge information and large within-patient shape variation of the different parts necessitate the inclusion of prior shape information. However, large interpatient shape variability makes it difficult to build a generalized shape model. Therefore, for every dataset the shape prior is chosen as a single image clearly showing the different parts. Prior shape information is obtained from a combination of distance functions and orientation angle histograms of each pixel relative to the prior shape. To account for shape changes, pixels near the boundary are allowed to change their labels through appropriate formulation of the penalty and smoothness costs. Our method consists of two stages. In the first stage, segmentation is performed using only intensity information; this serves as the starting point for the second stage, which combines intensity and shape information to obtain the final segmentation. Experimental results on different subsets of 30 real patient datasets show higher segmentation accuracy when shape information is used, and our method's superior performance over other competing methods.


Subjects
Heart Ventricles/pathology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Pattern Recognition, Automated/methods , Humans , Image Processing, Computer-Assisted/methods , Models, Statistical , Myocardium/pathology , Reproducibility of Results
8.
J Digit Imaging ; 26(5): 898-908, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23354341

ABSTRACT

In this paper, we propose a graph cut method to segment the cardiac right ventricle (RV) and left ventricle (LV) by using context information from each other. Contextual information is very helpful in medical image segmentation because the relative arrangement of different organs is consistent across patients. In addition to the conventional log-likelihood penalty, we include a "context penalty" that captures the geometric relationship between the RV and LV. Contextual information for the RV is obtained by learning its geometrical relationship with respect to the LV. Similarly, the RV provides geometrical context information for LV segmentation. The smoothness cost is formulated as a function of the learned context, which helps in accurate labeling of pixels. Experimental results on real patient datasets from the STACOM database show the efficacy of our method in accurately segmenting the LV and RV. We also conduct experiments on simulated datasets to investigate our method's robustness to noise and inaccurate segmentations.


Subjects
Heart Ventricles/pathology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Myocardium/pathology , Pattern Recognition, Automated/methods , Humans , Reproducibility of Results
9.
J Digit Imaging ; 26(5): 920-31, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23392736

ABSTRACT

The increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive, while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive alternative. Current MRI approaches assess the rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features like intensity and texture are used together with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and on higher-order statistics like skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
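
The low-level and higher-order statistical features mentioned above can be computed with standard library calls; the sketch below is a simplified illustration on a placeholder patch (it does not reproduce the paper's entropy-based shape-asymmetry measure).

import numpy as np
from scipy.stats import skew, kurtosis, entropy

def region_features(patch):
    # Intensity statistics plus an entropy term for a candidate region,
    # in the spirit of the features listed in the abstract.
    vals = patch.ravel()
    hist, _ = np.histogram(vals, bins=32, density=True)
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "skewness": float(skew(vals)),
        "kurtosis": float(kurtosis(vals)),
        "entropy": float(entropy(hist + 1e-9)),
    }

feats = region_features(np.random.rand(32, 32))   # placeholder bowel-wall patch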


Subjects
Crohn Disease/diagnosis , Crohn Disease/pathology , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/statistics & numerical data , Adult , Aged , Colon/pathology , Diagnosis, Differential , Female , Humans , Imaging, Three-Dimensional/methods , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity , Young Adult
10.
IEEE Trans Med Imaging ; 42(3): 661-673, 2023 03.
Article in English | MEDLINE | ID: mdl-36240033

ABSTRACT

While supervised learning techniques have demonstrated state-of-the-art performance in many medical image analysis tasks, the role of sample selection is important. Selecting the most informative samples contributes to the system attaining optimum performance with minimum labeled samples, which translates to fewer expert interventions and lower cost. Active Learning (AL) methods for informative sample selection are effective in boosting the performance of computer-aided diagnosis systems when limited labels are available. Conventional approaches to AL have mostly focused on the single-label setting, where a sample has only one disease label from the set of possible labels. These approaches do not perform optimally in the multi-label setting, where a sample can have multiple disease labels (e.g., in chest X-ray images). In this paper we propose a novel sample selection approach based on graph analysis to identify informative samples in a multi-label setting. For every analyzed sample, each class label is denoted as a separate node of a graph. Building on findings from the interpretability of deep learning models, edge interactions in this graph characterize similarity between the corresponding interpretability saliency map model encodings. We explore different types of graph aggregation to identify informative samples for active learning. We apply our method to public chest X-ray and medical image datasets, and report improved results over state-of-the-art AL techniques in terms of model performance, learning rates, and robustness.


Subjects
Diagnosis, Computer-Assisted , Thorax
11.
Bioengineering (Basel) ; 11(1), 2023 Dec 23.
Article in English | MEDLINE | ID: mdl-38247890

ABSTRACT

Oropharyngeal Squamous Cell Carcinoma (OPSCC) is one of the common forms of head and neck cancer and exhibits considerable heterogeneity. Infection with human papillomavirus (HPV) has been identified as a major risk factor for OPSCC. Therefore, differentiating HPV-positive and HPV-negative cases in OPSCC patients is an essential diagnostic factor influencing future treatment decisions. In this study, we investigated the accuracy of a deep learning-based method for image interpretation that automatically detects the HPV status of OPSCC in routinely acquired Computed Tomography (CT) and Positron Emission Tomography (PET) images. We introduce a 3D CNN-based multi-modal feature fusion architecture for HPV status prediction in primary tumor lesions. The architecture is composed of an ensemble of CNN networks and merges image features in a softmax classification layer. The pipeline separately learns the intensity, contrast variation, shape, texture heterogeneity, and metabolic assessment from CT and PET tumor volume regions and fuses those multi-modal features for the final HPV status classification. The precision, recall, and AUC scores of the proposed method are computed, and the results are compared with other existing models. The experimental results demonstrate that the multi-modal ensemble model with soft voting outperformed single-modality PET/CT, with an AUC of 0.76 and an F1 score of 0.746 on the publicly available TCGA and MAASTRO datasets. On the MAASTRO dataset, our model achieved an AUC score of 0.74 over primary tumor volumes of interest (VOIs). In the future, validation on more extensive cohorts may further improve diagnostic accuracy and provide a preliminary assessment before biopsy.
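
Soft voting over the two modalities can be sketched as a weighted average of class probabilities followed by an argmax; the snippet below uses equal placeholder weights and random probabilities, not the tuned ensemble from the study.

import numpy as np

def soft_vote(prob_ct, prob_pet, w_ct=0.5, w_pet=0.5):
    # Weighted average of per-modality class probabilities, then argmax.
    fused = w_ct * prob_ct + w_pet * prob_pet
    return fused, fused.argmax(axis=1)

# Placeholder predicted probabilities (N patients x 2 classes: HPV- / HPV+)
prob_ct = np.random.dirichlet([1, 1], size=10)
prob_pet = np.random.dirichlet([1, 1], size=10)
fused_probs, hpv_status = soft_vote(prob_ct, prob_pet)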

12.
J Digit Imaging ; 25(6): 802-14, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22354704

ABSTRACT

In this paper, we propose a novel technique for skull stripping of infant (neonatal) brain magnetic resonance images using prior shape information within a graph cut framework. Skull stripping plays an important role in brain image analysis and is a major challenge for neonatal brain images. Popular methods like the brain surface extractor (BSE) and the brain extraction tool (BET) do not produce satisfactory results for neonatal images due to poor tissue contrast, weak boundaries between brain and non-brain regions, and low spatial resolution. Inclusion of prior shape information helps in accurate identification of brain and non-brain tissues. Prior shape information is obtained from a set of labeled training images. The probability of a pixel belonging to the brain is obtained from the prior shape mask and included in the penalty term of the cost function. An extra smoothness term based on gradient information helps identify the weak boundaries between the brain and non-brain regions. Experimental results on real neonatal brain images show that, compared to BET, BSE, and other methods, our method achieves superior segmentation performance for neonatal brain images and comparable performance for adult brain images.


Subjects
Brain/anatomy & histology , Magnetic Resonance Imaging/methods , Skull/anatomy & histology , Algorithms , Alzheimer Disease/pathology , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Infant, Newborn , Middle Aged , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
13.
Med Image Anal ; 81: 102551, 2022 10.
Article in English | MEDLINE | ID: mdl-35932546

ABSTRACT

Deep learning methods provide state-of-the-art performance for supervised learning-based medical image analysis. However, it is essential that trained models extract clinically relevant features for downstream tasks as, otherwise, shortcut learning and generalization issues can occur. Furthermore, in the medical field, trustworthiness and transparency of deep learning systems are much-desired properties. In this paper we propose an interpretability-guided inductive bias approach enforcing that learned features yield more distinctive and spatially consistent saliency maps for different class labels of trained models, leading to improved model performance. We achieve our objectives by incorporating a class-distinctiveness loss and a spatial-consistency regularization loss term. Experimental results for medical image classification and segmentation tasks show that our proposed approach outperforms conventional methods while yielding saliency maps in higher agreement with clinical experts. Additionally, we show how information from unlabeled images can be used to further boost performance. In summary, the proposed approach is modular, applicable to existing network architectures used for medical imaging applications, and yields improved learning rates, model robustness, and model interpretability.
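
The two loss terms named above can be pictured as (i) penalizing overlap between per-class saliency maps and (ii) penalizing spatial roughness; the PyTorch sketch below is one plausible reading under those assumptions, not the paper's exact losses, and the weighting is arbitrary.

import torch

def class_distinctiveness_loss(saliency):
    # Penalize overlap between saliency maps of different classes.
    # saliency: (B, C, H, W) non-negative maps, one per class.
    s = saliency / (saliency.sum(dim=(2, 3), keepdim=True) + 1e-8)
    b, c = s.shape[:2]
    flat = s.view(b, c, -1)
    overlap = torch.einsum("bcn,bdn->bcd", flat, flat)
    off_diag = overlap - torch.diag_embed(torch.diagonal(overlap, dim1=1, dim2=2))
    return off_diag.mean()

def spatial_consistency_loss(saliency):
    # Total-variation style penalty encouraging spatially coherent maps.
    dh = (saliency[..., 1:, :] - saliency[..., :-1, :]).abs().mean()
    dw = (saliency[..., :, 1:] - saliency[..., :, :-1]).abs().mean()
    return dh + dw

maps = torch.rand(4, 3, 32, 32)   # placeholder saliency maps
loss = class_distinctiveness_loss(maps) + 0.1 * spatial_consistency_loss(maps)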


Subjects
Deep Learning , Diagnostic Imaging , Humans , Image Processing, Computer-Assisted/methods
14.
IEEE Trans Med Imaging ; 41(9): 2443-2456, 2022 09.
Article in English | MEDLINE | ID: mdl-35349437

ABSTRACT

In many real-world medical image classification settings, access to samples of all disease classes is not feasible, affecting the robustness of a system expected to have high performance in analyzing novel test data. This is a case of generalized zero shot learning (GZSL), which aims to recognize seen and unseen classes. We propose a GZSL method that uses self-supervised learning (SSL) for: 1) selecting representative vectors of disease classes; and 2) synthesizing features of unseen classes. We also propose a novel approach to generate GradCAM saliency maps that highlight diseased regions with greater accuracy. We exploit information from these saliency maps to improve the clustering process by: 1) enforcing that the saliency maps of different classes be different; and 2) ensuring that clusters in the space of image and saliency features yield class centroids having similar semantic information. This ensures the anchor vectors are representative of each class. Different from previous approaches, our proposed approach does not require class attribute vectors, which are an essential part of GZSL methods for natural images but are not available for medical images. Using a simple architecture, the proposed method outperforms state-of-the-art SSL-based GZSL methods for natural images as well as multiple types of medical images. We also conduct extensive ablation studies to investigate the influence of different loss terms in our method.


Subjects
Semantics , Humans
15.
IEEE Trans Med Imaging ; 41(6): 1533-1546, 2022 06.
Article in English | MEDLINE | ID: mdl-34995185

ABSTRACT

Deep neural networks are known to be data-driven, and label noise can have a marked impact on model performance. Recent studies have shown great robustness in classic image recognition tasks even under high noise rates. In medical applications, learning from datasets with label noise is more challenging, since medical imaging datasets tend to have instance-dependent noise (IDN) and suffer from high observer variability. In this paper, we systematically discuss the two common types of label noise in medical images - disagreement label noise arising from inconsistent expert opinions and single-target label noise arising from biased aggregation of individual annotations. We then propose an uncertainty estimation-based framework to handle these two types of label noise in the medical image classification task. We design a dual-uncertainty estimation approach to measure the disagreement label noise and single-target label noise via improved Direct Uncertainty Prediction and Monte-Carlo Dropout. A boosting-based curriculum training procedure is then introduced for robust learning. We demonstrate the effectiveness of our method by conducting extensive experiments on three different diseases with synthesized and real-world label noise: skin lesions, prostate cancer, and retinal diseases. We also release a large re-engineered database that consists of annotations from more than ten ophthalmologists, with an unbiased gold-standard dataset for evaluation and benchmarking. The dataset is available at https://mmai.group/peoples/julie/.
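
Monte-Carlo Dropout, one of the two uncertainty estimators named above, can be sketched in a few lines: keep dropout active at inference and use the spread of predictions across stochastic passes as uncertainty. The model, layer sizes and number of passes below are placeholders; this is not the paper's dual-uncertainty pipeline.

import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=128, n_classes=5, p=0.3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Dropout(p), nn.Linear(64, n_classes))
    def forward(self, x):
        return self.net(x)

def mc_dropout_uncertainty(model, x, n_passes=20):
    # Keep dropout stochastic at inference and use the variance of the
    # softmax outputs across passes as an uncertainty estimate.
    model.train()
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_passes)])
    return probs.mean(dim=0), probs.var(dim=0)

model = SmallClassifier()
mean_probs, uncertainty = mc_dropout_uncertainty(model, torch.randn(16, 128))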


Subjects
Diagnostic Imaging , Neural Networks, Computer , Noise , Radiography , Uncertainty
16.
IEEE Trans Med Imaging ; 40(10): 2548-2562, 2021 10.
Article in English | MEDLINE | ID: mdl-33625979

ABSTRACT

In supervised learning for medical image analysis, sample selection methodologies are fundamental to attaining optimum system performance promptly and with minimal expert interactions (e.g., label querying in an active learning setup). In this article we propose a novel sample selection methodology based on deep features that leverages information contained in interpretability saliency maps. In the absence of ground truth labels for informative samples, we use a novel self-supervised learning-based approach for training a classifier that learns to identify the most informative sample in a given batch of images. We demonstrate the benefits of the proposed approach, termed Interpretability-Driven Sample Selection (IDEAL), in an active learning setup aimed at lung disease classification and histopathology image segmentation. We analyze three different approaches to determine sample informativeness from interpretability saliency maps: (i) an observational model stemming from findings of previous uncertainty-based sample selection approaches, (ii) a radiomics-based model, and (iii) a novel data-driven self-supervised approach. We compare IDEAL to other baselines using the publicly available NIH chest X-ray dataset for lung disease classification and a public histopathology segmentation dataset (GLaS), demonstrating the potential of using interpretability information for sample selection in active learning systems. Results show that our proposed self-supervised approach outperforms other approaches in selecting informative samples, leading to state-of-the-art performance with fewer samples.


Subjects
Lung , Supervised Machine Learning , Uncertainty
17.
IEEE J Biomed Health Inform ; 25(10): 3709-3720, 2021 10.
Article in English | MEDLINE | ID: mdl-33465032

ABSTRACT

The need for comprehensive and automated screening methods for retinal image classification has long been recognized. Images annotated by well-qualified doctors are very expensive to obtain, and only a limited amount of data is available for various retinal diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). Some studies show that retinal diseases such as DR and AMD share common features like haemorrhages and exudation, yet most classification algorithms train these disease models independently when only a single label per image is available. We take inspiration from multi-task learning, where additional supervisory signals from various sources are beneficial for training a robust model. We propose a method called synergic adversarial label learning (SALL), which leverages relevant retinal disease labels in both semantic and feature space as additional signals and trains the model in a collaborative manner using knowledge distillation. Our experiments on DR and AMD fundus image classification tasks demonstrate that the proposed method can significantly improve the accuracy of the model for grading diseases, by 5.91% and 3.69% respectively. In addition, we conduct additional experiments to show the effectiveness of SALL from the perspectives of reliability and interpretability in the context of medical imaging applications.
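
SALL trains collaboratively via knowledge distillation; the snippet below is only a sketch of a standard distillation objective (temperature-softened KL term plus cross-entropy), not the synergic adversarial formulation of the paper, and the temperature and mixing weight are illustrative.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # KL divergence between temperature-softened teacher and student
    # distributions, mixed with the usual cross-entropy on ground-truth labels.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 4), torch.randn(8, 4),
                         torch.randint(0, 4, (8,)))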


Subjects
Diabetic Retinopathy , Retinal Diseases , Algorithms , Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Humans , Reproducibility of Results
18.
IEEE Trans Med Imaging ; 40(12): 3413-3423, 2021 12.
Article in English | MEDLINE | ID: mdl-34086562

ABSTRACT

Detecting various types of cells in and around the tumor matrix holds a special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up the pathologists' time for higher value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset has over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw a wide participation from across the world, and the top methods were able to match inter-human concordance for the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by various participants. We have released the MoNuSAC2020 dataset to the public.


Subjects
Algorithms , Cell Nucleus , Humans , Image Processing, Computer-Assisted
19.
Comput Med Imaging Graph ; 71: 30-39, 2019 01.
Article in English | MEDLINE | ID: mdl-30472408

ABSTRACT

Anatomical landmark segmentation and pathology localisation are important steps in the automated analysis of medical images. They are particularly challenging when the anatomy or pathology is small, as in retinal images (e.g. vasculature branches or microaneurysm lesions) and cardiac MRI, or when the image is of low quality due to device acquisition parameters, as in magnetic resonance (MR) scanners. We propose an image super-resolution method using progressive generative adversarial networks (P-GANs) that takes a low-resolution image as input and generates a high-resolution image of the desired scaling factor. The super-resolved images can be used for more accurate detection of landmarks and pathologies. Our primary contribution is a multi-stage model in which the output image quality of one stage is progressively improved in the next stage by using a triplet loss function. The triplet loss enables stepwise image quality improvement by using the output of the previous stage as the baseline. This facilitates the generation of super-resolved images of high scaling factor while maintaining good image quality. Experimental results for image super-resolution show that our proposed multi-stage P-GAN outperforms competing methods and baseline GANs. The super-resolved images, when used for landmark and pathology detection, result in accuracy levels close to those obtained when using the original high-resolution images. We also demonstrate our method's effectiveness on MR images, thus establishing its broader applicability.
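
The stage-wise triplet objective can be pictured as pulling the current stage's output toward the high-resolution target while pushing it away from the previous stage's output (the stated baseline). The sketch below assumes precomputed embeddings of those three images and uses PyTorch's standard triplet margin loss; it is an illustration of the idea, not the paper's loss.

import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)

hr_target = torch.randn(4, 256)        # placeholder embeddings of the HR image
stage_k_out = torch.randn(4, 256)      # embeddings of the current stage output
stage_k_minus_1 = torch.randn(4, 256)  # embeddings of the previous stage output

# Anchor: current stage; positive: HR target; negative: previous-stage baseline
loss = triplet(stage_k_out, hr_target, stage_k_minus_1)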


Subjects
Algorithms , Heart Ventricles/diagnostic imaging , Image Enhancement/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Retinal Vessels/diagnostic imaging , Anatomic Landmarks , Fundus Oculi , Humans
20.
Comput Med Imaging Graph ; 55: 28-41, 2017 01.
Article in English | MEDLINE | ID: mdl-27590198

ABSTRACT

We present a novel method to segment retinal images using ensemble learning-based convolutional neural network (CNN) architectures. An entropy sampling technique is used to select informative points, reducing computational complexity while performing better than uniform sampling. The sampled points are used to design a novel learning framework for convolutional filters based on boosting. Filters are learned in several layers, with the output of previous layers serving as the input to the next layer. A softmax logistic classifier is subsequently trained on the output of all learned filters and applied to test images. The output of the classifier is refined by an unsupervised graph cut algorithm followed by a convex hull transformation to obtain the final segmentation. Our proposed algorithm for optic cup and disc segmentation outperforms existing methods on the public DRISHTI-GS data set on several metrics.
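
Entropy sampling of informative points can be sketched as drawing pixel locations with probability proportional to their local prediction entropy, so ambiguous regions are sampled more often. The snippet below assumes a coarse per-pixel foreground probability map (here random) and is an illustrative stand-in, not the paper's sampler.

import numpy as np

def entropy_sample(prob_map, n_points, rng=None):
    # Sample pixel locations with probability proportional to binary entropy.
    # prob_map: (H, W) foreground probabilities from a coarse classifier.
    if rng is None:
        rng = np.random.default_rng(0)
    p = np.clip(prob_map, 1e-6, 1 - 1e-6)
    ent = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    weights = (ent / ent.sum()).ravel()
    idx = rng.choice(weights.size, size=n_points, replace=False, p=weights)
    return np.column_stack(np.unravel_index(idx, prob_map.shape))

points = entropy_sample(np.random.rand(128, 128), n_points=500)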


Subjects
Entropy , Glaucoma/diagnostic imaging , Optic Disk/diagnostic imaging , Pattern Recognition, Automated/methods , Unsupervised Machine Learning , Humans , Logistic Models , Neural Networks, Computer