Results 1 - 20 of 23
1.
J Clin Med ; 12(8)2023 Apr 20.
Article in English | MEDLINE | ID: mdl-37109349

ABSTRACT

Patients diagnosed with exudative neovascular age-related macular degeneration are commonly treated with anti-vascular endothelial growth factor (anti-VEGF) agents. However, response to treatment is heterogeneous, without a clinical explanation. Predicting suboptimal response at baseline will enable more efficient clinical trial designs for novel, future interventions and facilitate individualised therapies. In this multicentre study, we trained a multi-modal artificial intelligence (AI) system to identify suboptimal responders to the loading-phase of the anti-VEGF agent aflibercept from baseline characteristics. We collected clinical features and optical coherence tomography scans from 1720 eyes of 1612 patients between 2019 and 2021. We evaluated our AI system as a patient selection method by emulating hypothetical clinical trials of different sizes based on our test set. Our method detected up to 57.6% more suboptimal responders than random selection, and up to 24.2% more than any alternative selection criteria tested. Applying this method to the entry process of candidates into randomised controlled trials may contribute to the success of such trials and further inform personalised care.
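The trial-emulation idea above can be illustrated as follows (a toy sketch; the function names, scores, and labels are invented and do not reflect the study's protocol): rank candidates by a model's predicted risk of suboptimal response, enrol the top k, and compare the number of suboptimal responders captured against random enrolment.

```python
import random

def enrolled_suboptimal(scores, labels, k, strategy="model", seed=0):
    """Count suboptimal responders (label == 1) among k enrolled candidates.

    strategy == "model": enrol the k candidates with the highest predicted score.
    strategy == "random": enrol k candidates uniformly at random.
    """
    idx = list(range(len(scores)))
    if strategy == "model":
        idx.sort(key=lambda i: scores[i], reverse=True)
    else:
        random.Random(seed).shuffle(idx)
    return sum(labels[i] for i in idx[:k])

# Toy cohort: a higher score should indicate higher risk of suboptimal response.
scores = [0.9, 0.8, 0.7, 0.2, 0.1, 0.05]
labels = [1,   1,   0,   0,   1,   0]   # ground-truth suboptimal responders
```

With a well-calibrated model, the top-k enrolment captures more suboptimal responders than a random draw of the same size, which is exactly the enrichment the abstract quantifies.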

2.
Ophthalmol Glaucoma ; 4(1): 102-112, 2021.
Article in English | MEDLINE | ID: mdl-32826205

ABSTRACT

PURPOSE: To evaluate the accuracy with which visual field global indices can be estimated from OCT scans of the retina using deep neural networks, and to quantify the contributions to the estimates of the macula (MAC) and the optic nerve head (ONH). DESIGN: Observational cohort study. PARTICIPANTS: A total of 10 370 eyes from 109 healthy patients, 697 glaucoma suspects, and 872 patients with glaucoma over multiple visits (median = 3). METHODS: Three-dimensional convolutional neural networks were trained to estimate global visual field indices derived from automated Humphrey perimetry (SITA 24-2) tests (Zeiss, Dublin, CA), using OCT scans centered on the MAC, the ONH, or both (MAC + ONH) as inputs. MAIN OUTCOME MEASURES: Spearman's rank correlation coefficient, Pearson's correlation coefficient, and absolute error, calculated for 2 indices: the visual field index (VFI) and mean deviation (MD). RESULTS: The combined MAC + ONH input achieved a Spearman's correlation of 0.76 and a Pearson's correlation of 0.87 for VFI and MD. The median absolute error was 2.7 for VFI and 1.57 decibels (dB) for MD. Estimates from MAC or ONH alone were significantly less correlated and less accurate. Accuracy depended on the OCT signal strength and the stage of glaucoma severity. CONCLUSIONS: The accuracy of global visual field index estimates is improved by integrating information from the MAC and the ONH in advanced glaucoma, suggesting that structural changes in the 2 regions have different time courses across the disease severity spectrum.
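Spearman's rank correlation, one of the outcome measures above, is the Pearson correlation of the ranks. A minimal sketch without tie handling (statistics packages handle ties; this is only to make the measure concrete):

```python
def rank(values):
    """1-based ranks of the values (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```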


Subjects
Glaucoma; Optic Disk; Glaucoma/diagnosis; Humans; Neural Networks, Computer; Optic Disk/diagnostic imaging; Tomography, Optical Coherence; Visual Fields
3.
IEEE J Biomed Health Inform ; 24(12): 3421-3430, 2020 12.
Article in English | MEDLINE | ID: mdl-32750930

ABSTRACT

The direct analysis of 3D optical coherence tomography (OCT) volumes enables deep learning (DL) models to learn spatial structural information and discover new biomarkers that are relevant to glaucoma. Downsampling 3D input volumes is the state-of-the-art solution to accommodate the limited number of training volumes as well as the available computing resources. However, this limits the network's ability to learn from small retinal structures in OCT volumes. In this paper, our goal is to improve performance by providing guidance to the DL model during training so that it learns from finer ocular structures in 3D OCT volumes. We therefore propose an end-to-end attention-guided 3D DL model for glaucoma detection and for estimating visual function from retinal structures. The model consists of three pathways with the same network architecture but different inputs. One input is the original 3D-OCT cube; the other two are computed during training, guided by 3D gradient class activation heatmaps. Each pathway outputs a class label, and the whole model is trained concurrently to minimize the sum of the losses from the three pathways. The final output is obtained by fusing the predictions of the three pathways. To explore the robustness and generalizability of the proposed model, we apply it to a classification task for glaucoma detection as well as a regression task to estimate the visual field index (VFI) (a value between 0 and 100). Five-fold cross-validation with a total of 3782 and 10,370 OCT scans is used to train and evaluate the classification and regression models, respectively. The glaucoma detection model achieved an area under the curve (AUC) of 93.8%, compared with 86.8% for a baseline model without the attention-guided component. The model also outperformed six different feature-based machine learning approaches that use scanner-computed measurements for training. Further, we assessed the contribution of different retinal layers that are relevant to glaucoma. The VFI estimation model achieved a Pearson correlation and median absolute error of 0.75 and 3.6%, respectively, on a test set of 3100 cubes.
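The sum-of-losses training and prediction fusion described above can be sketched as follows (a toy illustration with invented numbers; the paper's pathways are full 3D CNNs, not scalar predictors):

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p and label y."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def total_loss(pathway_probs, y):
    """Train all pathways concurrently: the loss is the sum over pathways."""
    return sum(bce(p, y) for p in pathway_probs)

def fused_prediction(pathway_probs):
    """Fuse pathway outputs; simple averaging is one plausible fusion rule."""
    return sum(pathway_probs) / len(pathway_probs)
```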


Subjects
Glaucoma/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Tomography, Optical Coherence/methods; Databases, Factual; Deep Learning; Humans
4.
Ophthalmol Glaucoma ; 3(1): 14-24, 2020.
Article in English | MEDLINE | ID: mdl-32647810

ABSTRACT

Purpose: To develop a machine learning model that forecasts future circumpapillary retinal nerve fiber layer (cpRNFL) thickness in eyes of healthy, glaucoma suspect, and glaucoma participants from multimodal temporal data. Design: Retrospective analysis of a longitudinal clinical cohort. Participants: Longitudinal clinical cohort of healthy, glaucoma suspect, and glaucoma participants. Methods: The forecasting models used multimodal patient information, including clinical (age and intraocular pressure), structural (cpRNFL thickness derived from scans as well as deep learning-derived OCT image features), and functional (visual field test parameters) data, together with the intervisit interval, to predict cpRNFL thickness at the next visit. Four models were developed based on the number of visits used (n = 1 to 4). Longitudinal data from 1089 participants (mean observation period, 3.65±1.73 years) were used, with 80% of the cohort for model development. The results of our models were compared with those of a commonly adopted linear regression model, which we refer to here as linear trend-based estimation (LTBE). Main Outcome Measures: The mean absolute difference and Pearson's correlation coefficient between the true and forecasted values of cpRNFL in the healthy, glaucoma suspect, and glaucoma participants. Results: The best cpRNFL forecasting model used 3 visits and incorporated deep learning-derived OCT image features. Its mean error was 1.10±0.60 µm, 1.79±1.73 µm, and 1.87±1.85 µm in eyes of healthy, glaucoma suspect, and glaucoma participants, respectively. Our method significantly outperformed the LTBE model for glaucoma suspect and glaucoma participants (P < 0.001); LTBE showed mean errors of 1.55±1.16 µm, 2.40±2.67 µm, and 3.02±3.06 µm in the three groups, respectively. The Pearson's correlation coefficient between the forecasted and measured thickness was ρ = 0.96 (P < 0.01), ρ = 0.95 (P < 0.01), and ρ = 0.96 (P < 0.01) for the three groups, respectively. Conclusions: The performance of the proposed cpRNFL forecasting model is consistent across glaucoma suspect and glaucoma patients, which implies robustness against disease state. The forecasted values may help personalize patient care by determining the most appropriate intervisit schedule for timely interventions.
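The LTBE baseline amounts to fitting an ordinary least-squares line to a patient's past cpRNFL measurements and extrapolating to the next visit time. A sketch with illustrative values:

```python
def ltbe_forecast(times, thicknesses, t_next):
    """Linear trend-based estimation: ordinary least-squares line through
    past (visit time, cpRNFL thickness) pairs, evaluated at the next visit."""
    n = len(times)
    mt = sum(times) / n
    my = sum(thicknesses) / n
    num = sum((t - mt) * (y - my) for t, y in zip(times, thicknesses))
    den = sum((t - mt) ** 2 for t in times)
    slope = num / den
    intercept = my - slope * mt
    return intercept + slope * t_next

# Three past visits (years) with a thinning trend of -1 µm/year from 90 µm.
forecast = ltbe_forecast([0.0, 1.0, 2.0], [90.0, 89.0, 88.0], 3.0)
```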


Subjects
Forecasting; Glaucoma/diagnosis; Intraocular Pressure/physiology; Nerve Fibers/pathology; Optic Disk/pathology; Retinal Ganglion Cells/pathology; Tomography, Optical Coherence/methods; Adult; Aged; Aged, 80 and over; Female; Follow-Up Studies; Glaucoma/physiopathology; Humans; Male; Middle Aged; Retrospective Studies; Visual Fields/physiology; Young Adult
5.
IEEE J Biomed Health Inform ; 24(2): 577-585, 2020 02.
Article in English | MEDLINE | ID: mdl-30990451

ABSTRACT

Psoriasis is a chronic skin condition. Its clinical assessment involves four measures: erythema, scales, induration, and area. In this paper, we introduce a scale severity scoring framework for two-dimensional psoriasis skin images. Specifically, we leverage the bag-of-visual-words (BoVW) model for lesion feature extraction, using superpixels as key points. The BoVW model builds a vocabulary with a specific number of words (i.e., the codebook size) by applying a clustering algorithm to local features extracted from a constructed set of key points. This is followed by three-class machine learning classifiers for scale scoring, using a support vector machine (SVM) and random forest. In addition, we examine eight different local color and texture descriptors, namely: color histogram, local binary patterns, edge histogram descriptor, color layout descriptor, scalable color descriptor, color and edge directivity descriptor (CEDD), fuzzy color and texture histogram, and brightness and texture directionality histogram. Further, the selection of codebook and superpixel sizes is studied extensively. A psoriasis image set consisting of 96 images is used in this study. The conducted experiments show that color descriptors yield the highest performance measures for scale severity scoring, followed by the combined color and texture descriptors, with texture-based descriptors last. Moreover, the K-means algorithm shows better results in vocabulary building than the Gaussian mixture model, in terms of accuracy and computation time. Finally, the proposed method yields a scale severity scoring accuracy of 80.81% using the following setup: a superpixel of size [Formula: see text], a combined color and texture descriptor (i.e., CEDD), a codebook of size 128 constructed using K-means, and an SVM for scale scoring.
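The bag-of-visual-words encoding step can be sketched as nearest-codeword assignment followed by a normalised histogram (toy one-dimensional descriptors here; the paper uses color and texture descriptors extracted over superpixels):

```python
def nearest_codeword(descriptor, codebook):
    """Index of the codeword closest to the descriptor (squared Euclidean)."""
    dists = [sum((d - c) ** 2 for d, c in zip(descriptor, word)) for word in codebook]
    return dists.index(min(dists))

def bovw_histogram(descriptors, codebook):
    """Normalised bag-of-visual-words histogram: one image's feature vector."""
    hist = [0.0] * len(codebook)
    for desc in descriptors:
        hist[nearest_codeword(desc, codebook)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]

codebook = [(0.0,), (1.0,), (2.0,)]             # 3 codewords
descriptors = [(0.1,), (0.9,), (1.1,), (2.2,)]  # 4 local descriptors
hist = bovw_histogram(descriptors, codebook)
```

The resulting histogram is what the SVM or random forest classifier is trained on.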


Subjects
Psoriasis/physiopathology; Severity of Illness Index; Skin/pathology; Algorithms; Cluster Analysis; Humans; Machine Learning
6.
PLoS One ; 14(7): e0219126, 2019.
Article in English | MEDLINE | ID: mdl-31260494

ABSTRACT

Optical coherence tomography (OCT) based measurements of retinal layer thickness, such as the retinal nerve fibre layer (RNFL) and the ganglion cell with inner plexiform layer (GCIPL) are commonly employed for the diagnosis and monitoring of glaucoma. Previously, machine learning techniques have relied on segmentation-based imaging features such as the peripapillary RNFL thickness and the cup-to-disc ratio. Here, we propose a deep learning technique that classifies eyes as healthy or glaucomatous directly from raw, unsegmented OCT volumes of the optic nerve head (ONH) using a 3D Convolutional Neural Network (CNN). We compared the accuracy of this technique with various feature-based machine learning algorithms and demonstrated the superiority of the proposed deep learning based method. Logistic regression was found to be the best performing classical machine learning technique with an AUC of 0.89. In direct comparison, the deep learning approach achieved a substantially higher AUC of 0.94 with the additional advantage of providing insight into which regions of an OCT volume are important for glaucoma detection. Computing Class Activation Maps (CAM), we found that the CNN identified neuroretinal rim and optic disc cupping as well as the lamina cribrosa (LC) and its surrounding areas as the regions significantly associated with the glaucoma classification. These regions anatomically correspond to the well established and commonly used clinical markers for glaucoma diagnosis such as increased cup volume, cup diameter, and neuroretinal rim thinning at the superior and inferior segments.
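Class Activation Maps, used above to localise the regions driving the CNN's decision, are a weighted sum of the final convolutional feature maps, with the weights taken from the classification layer for the target class. A minimal 2-D sketch with invented numbers:

```python
def class_activation_map(feature_maps, class_weights):
    """CAM: per-location weighted sum of the final conv feature maps.

    feature_maps: list of K maps, each an HxW nested list.
    class_weights: K output-layer weights for the class of interest.
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * fmap[i][j]
    return cam

maps = [[[1.0, 0.0], [0.0, 0.0]],   # unit activation at top-left
        [[0.0, 0.0], [0.0, 1.0]]]   # unit activation at bottom-right
cam = class_activation_map(maps, [0.9, 0.1])
```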


Subjects
Deep Learning; Glaucoma/diagnostic imaging; Tomography, Optical Coherence/methods; Adult; Aged; Aged, 80 and over; Algorithms; Female; Glaucoma/classification; Glaucoma/pathology; Humans; Logistic Models; Machine Learning; Male; Middle Aged; Neural Networks, Computer; Optic Disk/diagnostic imaging; Optic Disk/pathology; Retinal Ganglion Cells/pathology; Tomography, Optical Coherence/statistics & numerical data; Young Adult
7.
PLoS One ; 14(5): e0203726, 2019.
Article in English | MEDLINE | ID: mdl-31083678

ABSTRACT

Spectral-domain optical coherence tomography (SDOCT) is a non-invasive imaging modality that generates high-resolution volumetric images. This modality finds widespread usage in ophthalmology for the diagnosis and management of various ocular conditions. The volumes generated can contain 200 or more B-scans. Manual inspection of such a large quantity of scans is time-consuming and error-prone in most clinical settings. Here, we present a method for the generation of visual summaries of SDOCT volumes, wherein a small set of B-scans that highlight the most clinically relevant features in a volume are extracted. The method was trained and evaluated on data acquired from age-related macular degeneration patients, and "relevance" was defined as the presence of visibly discernible structural abnormalities. The summarisation system consists of a detection module, where relevant B-scans are extracted from the volume, and a set of rules that determines which B-scans are included in the visual summary. Two deep learning approaches are presented and compared for the classification of B-scans: transfer learning and de novo learning. Both approaches performed comparably, with AUCs of 0.97 and 0.96, respectively, obtained on an independent test set. The de novo network, however, was 98% smaller than the transfer learning approach, and its run time was also significantly shorter.
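The rule-based selection stage could look roughly like this (the threshold, spacing rule, and summary size below are invented for illustration; the abstract does not specify the actual rules): keep B-scans whose relevance score exceeds a threshold, and enforce a minimum index spacing so the summary spans the volume.

```python
def summarise(scores, threshold=0.5, min_gap=2, max_scans=3):
    """Pick up to max_scans B-scan indices with score > threshold,
    greedily by descending score, at least min_gap slices apart."""
    candidates = sorted(
        (i for i, s in enumerate(scores) if s > threshold),
        key=lambda i: scores[i], reverse=True)
    chosen = []
    for i in candidates:
        if all(abs(i - j) >= min_gap for j in chosen):
            chosen.append(i)
        if len(chosen) == max_scans:
            break
    return sorted(chosen)

# Per-B-scan relevance scores from the detection module (toy values).
scores = [0.1, 0.9, 0.8, 0.2, 0.95, 0.3]
summary = summarise(scores)
```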


Subjects
Deep Learning; Image Processing, Computer-Assisted; Tomography, Optical Coherence; Algorithms; Area Under Curve; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Neural Networks, Computer; Reproducibility of Results; Tomography, Optical Coherence/methods; Tomography, Optical Coherence/standards
8.
IEEE J Biomed Health Inform ; 23(2): 570-577, 2019 03.
Article in English | MEDLINE | ID: mdl-29993590

ABSTRACT

This paper presents a QuadTree-based melanoma detection system inspired by dermatologists' color perception. Clinical color assessment in dermoscopy images is challenging because of subtle differences in shades, location-dependent color information, poor color contrast, and wide variation among images of the same class. To overcome these challenges, color enhancement and automatic color identification techniques, based on QuadTree segmentation and modeled after expert color assessments, are developed. The approach presented in this paper is shown to provide an accurate model of expert color assessment. Specifically, the proposed model is shown to: 1) identify significantly more colors in melanomas than in benign skin lesions; 2) identify a higher frequency in melanomas of three colors: blue-gray, black, and pink; and 3) delineate locations of melanoma colors by quintiles, specifically predilection for blue-gray and pink in the periphery and a trend for white and black in the lesion center. Performance of the proposed method is evaluated using four classifiers. The kernel support vector machine classifier is found to achieve the best results, with an area under the receiver operating characteristic (ROC) curve of 0.93, compared to average area under the ROC curve of 0.82 achieved by the dermatologists in this study. The results indicate that the biologically inspired method of automatic color detection proposed in this paper has the potential to play an important role in melanoma diagnosis in the clinic.
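QuadTree segmentation, the basis of the color identification above, recursively splits a region into quadrants until it is homogeneous. A toy single-channel sketch (the paper operates on dermoscopy color data; the homogeneity test here is a simple value-range threshold chosen for illustration):

```python
def quadtree_regions(image, x0, y0, size, threshold):
    """Recursively split a square region into four quadrants until the
    region's value range is within threshold; return homogeneous regions
    as (x, y, size, mean_value) tuples."""
    values = [image[y][x] for y in range(y0, y0 + size)
                          for x in range(x0, x0 + size)]
    if max(values) - min(values) <= threshold or size == 1:
        return [(x0, y0, size, sum(values) / len(values))]
    half = size // 2
    regions = []
    for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
        regions += quadtree_regions(image, x0 + dx, y0 + dy, half, threshold)
    return regions

# 4x4 image: uniform dark background with a contrasting bottom-right quadrant.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
regions = quadtree_regions(img, 0, 0, 4, threshold=1)
```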


Subjects
Dermoscopy/methods; Image Interpretation, Computer-Assisted/methods; Melanoma/diagnostic imaging; Skin Neoplasms/diagnostic imaging; Algorithms; Color; Humans; Melanoma/pathology; Skin/diagnostic imaging; Skin/pathology; Skin Neoplasms/pathology; Skin Pigmentation/physiology
9.
Comput Med Imaging Graph ; 71: 30-39, 2019 01.
Article in English | MEDLINE | ID: mdl-30472408

ABSTRACT

Anatomical landmark segmentation and pathology localisation are important steps in the automated analysis of medical images. They are particularly challenging when the anatomy or pathology is small, as in retinal images (e.g., vasculature branches or microaneurysm lesions) and cardiac MRI, or when the image is of low quality due to device acquisition parameters, as in magnetic resonance (MR) scanners. We propose an image super-resolution method using progressive generative adversarial networks (P-GANs) that takes a low-resolution image as input and generates a high-resolution image at a desired scaling factor. The super-resolved images can then be used for more accurate detection of landmarks and pathologies. Our primary contribution is a multi-stage model in which the output image quality of one stage is progressively improved in the next by using a triplet loss function. The triplet loss enables stepwise image quality improvement by using the output of the previous stage as the baseline, facilitating the generation of super-resolved images at high scaling factors while maintaining good image quality. Experimental results for image super-resolution show that our proposed multi-stage P-GAN outperforms competing methods and baseline GANs. When used for landmark and pathology detection, the super-resolved images yield accuracy levels close to those obtained with the original high-resolution images. We also demonstrate our method's effectiveness on magnetic resonance (MR) images, establishing its broader applicability.
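A generic margin-based triplet loss of the kind described can be sketched as follows (the anchor/positive/negative roles shown are an assumption based on the abstract's stepwise-improvement description, with the previous stage's output as the baseline; the paper's exact formulation may differ):

```python
def l2(a, b):
    """Euclidean distance between two flat feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on (d(a,p) - d(a,n) + margin): pushes the current stage's
    output (positive) closer to the target (anchor) than the previous
    stage's output (negative)."""
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)

target = [1.0, 1.0]   # high-resolution ground truth (anchor)
stage2 = [0.9, 1.0]   # current stage output, close to the target
stage1 = [0.0, 0.0]   # previous stage output, further away
loss = triplet_loss(target, stage2, stage1, margin=0.5)
```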


Subjects
Algorithms; Heart Ventricles/diagnostic imaging; Image Enhancement/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Retinal Vessels/diagnostic imaging; Anatomic Landmarks; Fundus Oculi; Humans
10.
Comput Med Imaging Graph ; 66: 44-55, 2018 06.
Article in English | MEDLINE | ID: mdl-29524784

ABSTRACT

Psoriasis is a chronic skin disease that can be life-threatening. Accurate severity scoring helps dermatologists decide on treatment. In this paper, we present a semi-supervised computer-aided system for automatic erythema severity scoring in psoriasis images. First, the unsupervised stage includes a novel image representation method: we construct a dictionary, which is then used in a sparse representation for local feature extraction, and an aggregation method is applied over the local features to obtain the final image representation vector. Second, in the supervised phase, various multi-class machine learning (ML) classifiers are trained for erythema severity scoring. Finally, we compare the proposed system with two popular unsupervised feature extraction methods, namely the bag-of-visual-words (BoVW) model and a pretrained AlexNet model. Root mean square error (RMSE) and F1 score are used as performance measures for the learned dictionaries and the trained ML models, respectively. A psoriasis image set consisting of 676 images is used in this study. Experimental results demonstrate that the proposed procedure provides accurate and consistent erythema scoring. The results also reveal that dictionaries with a large number of atoms and small patch sizes yield the most representative erythema severity features. Further, random forest (RF) outperforms the other classifiers with an F1 score of 0.71, followed by support vector machine (SVM) and boosting with scores of 0.66 and 0.64, respectively. Furthermore, the comparative studies confirm the effectiveness of the proposed approach, with improvements of 9% and 12% over BoVW and AlexNet based features, respectively.
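The two evaluation measures named above, RMSE for the learned dictionaries and the per-class F1 score for the classifiers, can be sketched directly:

```python
def rmse(predicted, actual):
    """Root mean square error between two equal-length sequences."""
    n = len(predicted)
    return (sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n) ** 0.5

def f1_score(predicted, actual, positive):
    """F1 for one class: harmonic mean of precision and recall."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == positive and a == positive)
    fp = sum(1 for p, a in zip(predicted, actual) if p == positive and a != positive)
    fn = sum(1 for p, a in zip(predicted, actual) if p != positive and a == positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```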


Subjects
Erythema/diagnostic imaging; Erythema/physiopathology; Image Interpretation, Computer-Assisted/methods; Psoriasis/diagnostic imaging; Algorithms; Humans; Machine Learning; Severity of Illness Index; Support Vector Machine
11.
Biomed Opt Express ; 9(12): 6205-6221, 2018 Dec 01.
Article in English | MEDLINE | ID: mdl-31065423

ABSTRACT

Optical coherence tomography (OCT) images of the retina are a powerful tool for diagnosing and monitoring eye disease. However, they are plagued by speckle noise, which reduces image quality and the reliability of assessment. This paper introduces a novel speckle reduction method inspired by the recent successes of deep learning in medical imaging. We present two versions of the network to reflect the needs and preferences of different end-users. Specifically, we train a convolutional neural network to denoise cross-sections from OCT volumes of healthy eyes using either (1) a mean-squared error loss, or (2) a generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. We then interrogate the success of both methods with extensive quantitative and qualitative metrics on cross-sections from both healthy and glaucomatous eyes. The results show that the former approach provides state-of-the-art improvement in quantitative metrics such as PSNR and SSIM, and aids layer segmentation. However, the latter approach, which puts more weight on visual perception, performed better in qualitative comparisons based on accuracy, clarity, and personal preference. Overall, our results demonstrate the effectiveness and efficiency of a deep learning approach to denoising OCT images while maintaining subtle details in the images.
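PSNR, one of the quantitative metrics reported, can be computed with its standard definition (this is the textbook formula, not code from the paper):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two flat image arrays."""
    n = len(reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```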

12.
J Med Imaging (Bellingham) ; 4(4): 044004, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29152533

ABSTRACT

Psoriasis is a chronic skin disease that is assessed visually by dermatologists. The Psoriasis Area and Severity Index (PASI) is the current gold standard used to measure lesion severity by evaluating four parameters, namely, area, erythema, scaliness, and thickness. In this context, psoriasis skin lesion segmentation is required as the basis for PASI scoring. We outline an automatic lesion segmentation method that leverages multiscale superpixels and [Formula: see text]-means clustering. Specifically, we apply a superpixel segmentation strategy on the CIE-[Formula: see text] color space using different scales, and we suppress the superpixels that belong to nonskin areas. Once similar regions on different scales are obtained, the [Formula: see text]-means algorithm is used to cluster each superpixel scale separately into normal and lesion skin areas. Features from both the [Formula: see text] and [Formula: see text] color bands are used in the clustering process. Furthermore, majority voting is performed to fuse the segmentation results from different scales to obtain the final output. The proposed method is extensively evaluated on a set of 457 psoriasis digital images acquired from the Royal Melbourne Hospital, Melbourne, Australia. Experimental results show that the method is effective and efficient, even for images containing hairy skin and lesions of diverse size, shape, and severity. It has also been ascertained that CIE-[Formula: see text] outperforms other color spaces for psoriasis lesion analysis and segmentation. In addition, we use three evaluation metrics, namely, the Dice coefficient, the Jaccard index, and pixel accuracy, where scores of 0.783, 0.698, and 86.99% have been achieved by the proposed method for the three metrics, respectively. Finally, compared with existing methods that employ either skin decomposition and a support vector machine classifier or Euclidean distance in the hue-chrome plane, our multiscale superpixel-based method achieves markedly better performance, with at least a 20% accuracy enhancement.
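The cross-scale fusion step can be sketched as per-pixel majority voting over the binary masks produced at each superpixel scale (toy one-dimensional masks for brevity):

```python
def majority_vote(masks):
    """Fuse binary segmentation masks by per-pixel majority voting.
    With an odd number of scales there are no ties."""
    n = len(masks)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*masks)]

scale_masks = [
    [1, 1, 0, 0],   # fine scale
    [1, 0, 0, 1],   # medium scale
    [1, 1, 1, 0],   # coarse scale
]
fused = majority_vote(scale_masks)
```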

13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 3260-3264, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28269003

ABSTRACT

Glaucoma is one of the leading causes of blindness. Manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method that accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using the circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method that iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient-boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method is applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85, respectively. A comparative study shows that our proposed method outperforms state-of-the-art optic cup and disc segmentation methods.
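The Dice metric reported above measures the overlap between a predicted mask and a reference mask:

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks (flat sequences):
    2*|A intersect B| / (|A| + |B|)."""
    intersection = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    size = sum(pred) + sum(truth)
    return 2.0 * intersection / size if size else 1.0
```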


Subjects
Fundus Oculi; Image Processing, Computer-Assisted; Optic Disk/anatomy & histology; Algorithms; Glaucoma/diagnosis; Humans; Regression Analysis
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 3855-3858, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28269127

ABSTRACT

Asymmetry is one of the key characteristics for early diagnosis of melanoma according to medical algorithms such as ABCD and CASH. Besides shape information, cues such as the irregular distribution of colors and structures within the lesion area are assessed by dermatologists to determine lesion asymmetry. Motivated by clinical practice, we use the Kullback-Leibler divergence of color histograms and the structural similarity metric as measures of these irregularities. We present the performance of several classifiers using these features on the publicly available PH2 dataset. The obtained results show better asymmetry classification than previously reported in the literature. Besides setting a new benchmark, the proposed technique can be used for early diagnosis of melanoma by both clinical experts and other automated diagnosis systems.
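The Kullback-Leibler color measure can be sketched as a symmetrised KL divergence between the color histograms of two lesion halves (the smoothing constant is our addition to avoid zero bins; the paper's exact variant is not specified in the abstract):

```python
import math

def kl(p, q):
    """KL divergence D(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def normalise(hist, eps=1e-9):
    """Smooth zero bins, then normalise the histogram to a distribution."""
    smoothed = [v + eps for v in hist]
    total = sum(smoothed)
    return [v / total for v in smoothed]

def symmetric_kl(hist_a, hist_b):
    """Symmetrised KL between two (possibly unnormalised) color histograms;
    large values indicate dissimilar color distributions (asymmetry cue)."""
    a, b = normalise(hist_a), normalise(hist_b)
    return kl(a, b) + kl(b, a)
```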


Subjects
Image Processing, Computer-Assisted; Skin Diseases/pathology; Skin Neoplasms/pathology; Skin/pathology; Algorithms; Color; Databases, Factual; Dermoscopy/methods; Humans
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 1304-1307, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268565

ABSTRACT

Retinal image quality assessment (IQA) algorithms typically train classifiers on hand-crafted features without considering the workings of the human visual system (HVS), which plays an important role in IQA. We propose a convolutional neural network (CNN) based approach that determines image quality using the underlying principles of the HVS. CNNs provide a principled approach to feature learning and hence higher accuracy in decision making. Experimental results demonstrate the superior performance of our proposed algorithm over competing methods.


Subjects
Retina; Algorithms; Humans; Neural Networks, Computer; Neurobiology
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 1361-1364, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268578

ABSTRACT

This paper presents a robust segmentation method based on multi-scale classification to identify the lesion boundary in dermoscopic images. Our proposed method leverages a collection of classifiers trained at various resolutions to categorize each pixel as "lesion" or "surrounding skin". In the detection phase, the trained classifiers are applied to new images, and their outputs are fused at the pixel level to build probability maps that represent lesion saliency maps. Next, Otsu thresholding is applied to convert the saliency maps to binary masks, which determine the border of the lesions. We compared our proposed method with existing lesion segmentation methods from the literature on two dermoscopy data sets (International Skin Imaging Collaboration and Pedro Hispano Hospital), demonstrating the superiority of our method, with a Dice coefficient of 0.91 and an accuracy of 94%.
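Otsu thresholding, used above to binarise the saliency maps, picks the gray level that maximises the between-class variance of the values. A minimal sketch over a small gray-level histogram (the toy histogram is illustrative):

```python
def otsu_threshold(histogram):
    """Return the gray level t maximising between-class variance,
    where pixels <= t form the background class."""
    total = sum(histogram)
    sum_all = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t, h in enumerate(histogram[:-1]):
        w_bg += h
        sum_bg += t * h
        w_fg = total - w_bg
        if w_bg == 0 or w_fg == 0:
            continue
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: dark mode around level 1, bright mode around level 6.
hist = [4, 8, 4, 0, 0, 3, 7, 3]
t = otsu_threshold(hist)
```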


Subjects
Image Processing, Computer-Assisted/methods; Skin Neoplasms/diagnostic imaging; Skin/diagnostic imaging; Algorithms; Databases, Factual; Dermoscopy/methods; Humans; Machine Learning; Nevus/diagnostic imaging; Nevus/pathology; Skin/pathology; Skin Neoplasms/pathology
17.
Stud Health Technol Inform ; 216: 691-5, 2015.
Article in English | MEDLINE | ID: mdl-26262140

ABSTRACT

Advanced techniques in machine learning combined with scalable "cloud" computing infrastructure are driving the creation of new and innovative health diagnostic applications. We describe a service and application for performing image training and recognition, tailored to dermatology and melanoma identification. The system implements new machine learning approaches to provide a feedback-driven training loop. This training sequence enhances classification performance by incrementally retraining the classifier model from expert responses. To easily provide this application and associated web service to clinical practices, we also describe a scalable cloud infrastructure, deployable in public cloud infrastructure and private, on-premise systems.


Subjects
Cloud Computing; Expert Systems; Image Interpretation, Computer-Assisted/methods; Machine Learning; Melanoma/pathology; Skin Neoplasms/pathology; Algorithms; Dermoscopy/methods; Feedback; Humans; Pattern Recognition, Automated/methods; User-Computer Interface
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 2977-80, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26736917

ABSTRACT

Multi-atlas segmentation first registers each atlas image to the target image and transfers the labels of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method that aggregates discriminative learning and generative modeling for the segmentation of cardiac MR images. First, a probabilistic random forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian mixture model for each label, providing the likelihood of the voxel belonging to that label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results than four other label fusion methods.
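The Bayesian combination of the discriminative prior and the generative likelihood can be sketched per voxel (the numbers below are invented; in the paper the prior comes from a random forest and the likelihood from a per-label Gaussian mixture model over patches):

```python
def fuse_posterior(priors, likelihoods):
    """Per-voxel label posterior under Bayes' rule: prior (discriminative
    classifier) times likelihood (generative patch model), renormalised.

    priors, likelihoods: dicts mapping label -> probability/density."""
    unnormalised = {lab: priors[lab] * likelihoods[lab] for lab in priors}
    z = sum(unnormalised.values())
    return {lab: v / z for lab, v in unnormalised.items()}

# The prior slightly favours 'epicardium'; the patch likelihood strongly
# favours it, so the fused posterior should favour it even more.
posterior = fuse_posterior(
    priors={"epicardium": 0.6, "background": 0.4},
    likelihoods={"epicardium": 0.9, "background": 0.1},
)
```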


Subjects
Heart; Algorithms; Bayes Theorem; Magnetic Resonance Imaging; Probability
19.
Article in English | MEDLINE | ID: mdl-26737789

ABSTRACT

Analysis and characterization of anatomical segments in the left ventricle (LV) of the heart in cardiac MRI carries clinical significance. Based on the standard defined by the American Heart Association (AHA), the LV is divided into 17 anatomical segments. In this paper, we propose a novel method to automatically partition the LV into the 17 segments, enabling automated analysis of each segment. Our method starts by assigning each slice a section tag (basal, mid-cavity, apical, or apex), using the papillary muscles and the LV cavity as references. It then partitions each slice into 4 or 6 segments by extracting the relevant points on the outer circle of a fitted cylinder and identifying the image orientation using the lung as a reference. We evaluate our method on 45 patients with different cardiac conditions. The partition of the mid-cavity section has the best agreement with the ground truth, followed by the basal and then the apical sections, for all groups of patients.
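The AHA convention assigns 6 angular segments each to the basal and mid-cavity rings, 4 to the apical ring, and 1 to the apex, for 17 in total. A sketch of mapping a (section, angle) pair to a segment number (the angular origin and numbering order here are illustrative; the AHA standard fixes the exact anatomical boundaries):

```python
def aha_segment(section, angle_deg):
    """Map a short-axis section and an angle (degrees) to a segment
    number 1-17. Sections: 'basal' (1-6), 'mid' (7-12),
    'apical' (13-16), 'apex' (17)."""
    angle = angle_deg % 360.0
    if section == "basal":
        return 1 + int(angle // 60)    # six 60-degree sectors
    if section == "mid":
        return 7 + int(angle // 60)
    if section == "apical":
        return 13 + int(angle // 90)   # four 90-degree sectors
    if section == "apex":
        return 17                      # single apex segment
    raise ValueError("unknown section: %r" % section)
```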


Subjects
Heart Ventricles/anatomy & histology; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Models, Cardiovascular; Humans; Papillary Muscles/anatomy & histology
20.
IEEE Trans Inf Technol Biomed ; 16(6): 1239-52, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22893445

ABSTRACT

This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based, and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done with four classifiers, namely: Support Vector Machine, Random Forest, Logistic Model Tree, and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation, and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained by complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
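Gain-Ratio feature selection scores a feature by its information gain normalised by the feature's own split entropy (the C4.5-style definition). A sketch for one discrete feature against binary class labels (illustrative, not the study's implementation):

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a sequence of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gain_ratio(feature_values, labels):
    """Information gain of splitting on the feature, divided by the
    feature's split entropy."""
    n = len(labels)
    split_groups = {}
    for v, y in zip(feature_values, labels):
        split_groups.setdefault(v, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in split_groups.values())
    gain = entropy(labels) - cond
    split_info = entropy(feature_values)
    return gain / split_info if split_info > 0 else 0.0

# A feature that perfectly separates malignant (1) from benign (0):
perfect = gain_ratio(["a", "a", "b", "b"], [1, 1, 0, 0])
```

Features are ranked by this ratio, and the top-scoring ones (23 in the study) are kept for classification.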


Assuntos
Dermoscopia/métodos , Interpretação de Imagem Assistida por Computador/métodos , Melanoma/diagnóstico , Análise de Ondaletas , Árvores de Decisões , Humanos , Melanoma/patologia , Reprodutibilidade dos Testes , Máquina de Vetores de Suporte