Results 1 - 20 of 41
1.
Neuroimage; 225: 117366, 2021 Jan 15.
Article in English | MEDLINE | ID: mdl-33039617

ABSTRACT

Deep learning (DL) has shown great potential in medical image enhancement problems, such as super-resolution or image synthesis. However, to date, most existing approaches are based on deterministic models, neglecting the presence of different sources of uncertainty in such problems. Here we introduce methods to characterise different components of uncertainty, and demonstrate the ideas using diffusion MRI super-resolution. Specifically, we propose to account for intrinsic uncertainty through a heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference, and integrate the two to quantify predictive uncertainty over the output image. Moreover, we introduce a method to propagate the predictive uncertainty on a multi-channelled image to derived scalar parameters, and separately quantify the effects of intrinsic and parameter uncertainty therein. The methods are evaluated for super-resolution of two different signal representations of diffusion MR images (Diffusion Tensor images and Mean Apparent Propagator MRI) and their derived quantities such as mean diffusivity and fractional anisotropy, on multiple datasets of both healthy and pathological human brains. Results highlight three key potential benefits of modelling uncertainty for improving the safety of DL-based image enhancement systems. Firstly, modelling uncertainty improves the predictive performance even when test data departs from training data ("out-of-distribution" datasets). Secondly, the predictive uncertainty correlates highly with reconstruction errors, and is therefore capable of detecting predictive "failures". Results on both healthy subjects and patients with brain glioma or multiple sclerosis demonstrate that such an uncertainty measure enables subject-specific and voxel-wise risk assessment of the super-resolved images that can be accounted for in subsequent analysis. Thirdly, we show that the method for decomposing predictive uncertainty into its independent sources provides high-level "explanations" for the model performance by separately quantifying how much uncertainty arises from the inherent difficulty of the task or the limited training examples. The introduced concepts of uncertainty modelling extend naturally to many other imaging modalities and data enhancement applications.
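A minimal sketch of the variance decomposition this abstract describes, assuming a network sampled T times (e.g. via Monte Carlo dropout over the weights) that outputs a per-voxel mean and heteroscedastic variance on each pass; all array names are hypothetical and random numbers stand in for network outputs:

```python
import numpy as np

# Hypothetical shapes: T stochastic forward passes over an image of V voxels.
# means[t, v]: predicted mean of voxel v on pass t
# variances[t, v]: predicted heteroscedastic (intrinsic) variance on pass t
def decompose_uncertainty(means: np.ndarray, variances: np.ndarray):
    """Split predictive variance into intrinsic and parameter components
    (law of total variance)."""
    intrinsic = variances.mean(axis=0)   # expected noise variance
    parameter = means.var(axis=0)        # spread due to model weights
    predictive = intrinsic + parameter   # total predictive variance
    return intrinsic, parameter, predictive

# Toy usage with random numbers in place of real network outputs.
rng = np.random.default_rng(0)
T, V = 20, 1000
means = rng.normal(size=(T, V))
variances = rng.uniform(0.1, 0.5, size=(T, V))
intrinsic, parameter, predictive = decompose_uncertainty(means, variances)
```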


Subjects
Brain/diagnostic imaging; Deep Learning; Diffusion Magnetic Resonance Imaging/methods; Image Enhancement/methods; Neuroimaging/methods; Uncertainty; Diffusion Tensor Imaging; Humans; Image Processing, Computer-Assisted
2.
Neuroimage; 152: 283-298, 2017 May 15.
Article in English | MEDLINE | ID: mdl-28263925

ABSTRACT

This paper introduces a new computational imaging technique called image quality transfer (IQT). IQT uses machine learning to transfer the rich information available from one-off experimental medical imaging devices to the abundant but lower-quality data from routine acquisitions. The procedure uses matched pairs to learn mappings from low-quality to corresponding high-quality images. Once learned, these mappings then augment unseen low-quality images, for example by enhancing image resolution or information content. Here, we demonstrate IQT using a simple patch-regression implementation and the uniquely rich diffusion MRI data set from the Human Connectome Project (HCP). Results highlight potential benefits of IQT in both brain connectivity mapping and microstructure imaging. In brain connectivity mapping, IQT reveals, from standard data sets, thin connection pathways that tractography normally requires specialised data to reconstruct. In microstructure imaging, IQT shows potential in estimating, from standard "single-shell" data (one non-zero b-value), maps of microstructural parameters that normally require specialised multi-shell data. Further experiments show strong generalisability, highlighting IQT's benefits even when the training set does not directly represent the application domain. The concept extends naturally to many other imaging modalities and reconstruction problems.
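As an illustration of the patch-regression idea, here is a minimal sketch assuming matched low/high-quality patch pairs have already been extracted; random arrays stand in for real data, and a scikit-learn random forest stands in for the regressor (the related MICCAI 2014 paper listed below does use random forest regression):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data: X_lo holds flattened low-quality patches, Y_hi the
# corresponding high-quality output blocks from matched training pairs.
rng = np.random.default_rng(0)
X_lo = rng.normal(size=(5000, 5 * 5 * 5))   # 5x5x5 low-quality patches
Y_hi = rng.normal(size=(5000, 8))           # 2x2x2 high-quality blocks

model = RandomForestRegressor(n_estimators=50, n_jobs=-1)
model.fit(X_lo, Y_hi)                       # learn the low->high mapping

# Enhancement: slide over an unseen low-quality image, predict each patch,
# then stitch the predicted blocks back into a volume.
new_patches = rng.normal(size=(10, 5 * 5 * 5))
enhanced = model.predict(new_patches)
```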


Subjects
Brain/anatomy & histology; Connectome/methods; Diffusion Magnetic Resonance Imaging/methods; Image Enhancement; Adolescent; Adult; Aged; Animals; Child; Chlorocebus aethiops; Diffusion Tensor Imaging/methods; Female; Humans; Machine Learning; Male; Middle Aged; White Matter/anatomy & histology; Young Adult
3.
NPJ Digit Med; 6(1): 168, 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37696899

ABSTRACT

Waist-to-hip circumference ratio (WHR) is now recognized as among the strongest shape biometrics linked with health outcomes, although use of this phenotypic marker remains limited due to the inaccuracy and inconvenience of flexible tape measurements made in clinical and home settings. Here we report that accurate and reliable WHR estimation in adults is possible with a smartphone application based on novel computer vision algorithms. The developed application runs a convolutional neural network model, referred to as MeasureNet, that predicts a person's body circumferences and WHR using front, side, and back color images. MeasureNet bridges the gap between measurements conducted by trained professionals in clinical environments, which can be inconvenient, and self-measurements performed by users at home, which can be unreliable. MeasureNet's accuracy and reliability are evaluated using 1200 participants, each measured by a trained staff member. The developed smartphone application, which is a part of Amazon Halo, is a major advance in digital anthropometry, filling a long-existing gap in convenient, accurate WHR measurement capabilities.

4.
NPJ Digit Med; 5(1): 79, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35768575

ABSTRACT

Body composition is a key component of health in both individuals and populations, and excess adiposity is associated with an increased risk of developing chronic diseases. Body mass index (BMI) and other clinical or commercially available tools for quantifying body fat (BF) such as DXA, MRI, CT, and photonic scanners (3DPS) are often inaccurate, cost prohibitive, or cumbersome to use. The aim of the current study was to evaluate the performance of a novel automated computer vision method, visual body composition (VBC), that uses two-dimensional photographs captured via a conventional smartphone camera to estimate percentage total body fat (%BF). The VBC algorithm is based on a state-of-the-art convolutional neural network (CNN). The hypothesis is that VBC yields better accuracy than other consumer-grade fat measurement devices. 134 healthy adults ranging in age (21-76 years), sex (61.2% women), race (60.4% White; 23.9% Black), and body mass index (BMI, 18.5-51.6 kg/m2) were evaluated at two clinical sites (N = 64 at MGH, N = 70 at PBRC). Each participant had %BF measured with VBC and with three consumer and two professional bioimpedance analysis (BIA) systems. The PBRC participants also underwent air displacement plethysmography (ADP). %BF measured by dual-energy x-ray absorptiometry (DXA) was set as the reference against which all other %BF measurements were compared. To test this hypothesis we ran multiple pair-wise Wilcoxon signed-rank tests, comparing each competing measurement tool (VBC, BIA, …) against the same ground truth (DXA). Relative to DXA, VBC had the lowest mean absolute error and standard deviation (2.16 ± 1.54%) of all the evaluated methods (p < 0.05 for all comparisons). %BF measured by VBC also had good concordance with DXA (Lin's concordance correlation coefficient, CCC: all 0.96; women 0.93; men 0.94), whereas BMI had very poor concordance (CCC: all 0.45; women 0.40; men 0.74). Bland-Altman analysis of VBC revealed the tightest limits of agreement (LOA) and absence of significant bias relative to DXA (bias -0.42%, R2 = 0.03; p = 0.062; LOA -5.5% to +4.7%), whereas all other evaluated methods had significant (p < 0.01) bias and wider limits of agreement. Bias in Bland-Altman analyses is defined as the discordance between the y = 0 axis and the line regressed from the data in the plot. In this first validation study of a novel, accessible, and easy-to-use system, VBC body fat estimates were accurate and without significant bias compared to DXA as the reference; VBC performance exceeded that of all other BIA and ADP methods evaluated. The wide availability of smartphones suggests that the VBC method for evaluating %BF could play an important role in quantifying adiposity levels in a wide range of settings. Trial registration: ClinicalTrials.gov Identifier: NCT04854421.
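A small sketch of the statistics reported above, with synthetic numbers standing in for the real measurements; `lins_ccc` and `bland_altman` are hypothetical helper names:

```python
import numpy as np
from scipy.stats import wilcoxon

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def bland_altman(x, ref):
    """Bias and 95% limits of agreement of x relative to a reference."""
    diff = x - ref
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic stand-ins for %BF estimates: a DXA reference, an accurate
# VBC-like tool, and a noisier BIA-like competitor.
rng = np.random.default_rng(0)
dxa = rng.uniform(10, 45, size=134)
vbc = dxa + rng.normal(0.0, 2.0, size=134)
bia = dxa + rng.normal(1.0, 4.0, size=134)

print(lins_ccc(vbc, dxa))      # concordance with the reference
print(bland_altman(vbc, dxa))  # bias and limits of agreement
print(wilcoxon(np.abs(vbc - dxa), np.abs(bia - dxa)))  # paired error comparison
```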

5.
Neuroimage; 57(2): 378-90, 2011 Jul 15.
Article in English | MEDLINE | ID: mdl-21497655

ABSTRACT

A new algorithm is presented for the automatic segmentation of Multiple Sclerosis (MS) lesions in 3D Magnetic Resonance (MR) images. It builds on a discriminative random decision forest framework to provide a voxel-wise probabilistic classification of the volume. The method uses multi-channel MR intensities (T1, T2, and FLAIR), knowledge of tissue classes and long-range spatial context to discriminate lesions from background. A symmetry feature is introduced accounting for the fact that some MS lesions tend to develop in an asymmetric way. Quantitative evaluation of the proposed method is carried out on publicly available labeled cases from the MICCAI MS Lesion Segmentation Challenge 2008 dataset. When tested on the same data, the presented method compares favorably to all earlier methods. In an a posteriori analysis, we show how the features selected during classification can be ranked according to their discriminative power, revealing the most important ones.
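For illustration, a hedged sketch of voxel-wise forest classification with a simple symmetry feature: the FLAIR difference to the voxel mirrored across the mid-sagittal plane (assumed here to be axis 0). A scikit-learn forest replaces the authors' implementation, and the long-range context features of the paper are omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(t1, t2, flair):
    """Per-voxel channel intensities plus a mirrored-FLAIR symmetry feature."""
    symmetry = flair - flair[::-1, :, :]   # assumes axis 0 is left-right
    feats = np.stack([t1, t2, flair, symmetry], axis=-1)
    return feats.reshape(-1, 4)

# Synthetic volumes and a sparse lesion mask standing in for real MR data.
rng = np.random.default_rng(0)
shape = (32, 32, 16)
t1, t2, flair = (rng.normal(size=shape) for _ in range(3))
labels = (rng.uniform(size=shape) < 0.05).astype(int).ravel()

clf = RandomForestClassifier(n_estimators=30, n_jobs=-1)
clf.fit(voxel_features(t1, t2, flair), labels)
lesion_prob = clf.predict_proba(voxel_features(t1, t2, flair))[:, 1].reshape(shape)
```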


Subjects
Algorithms; Brain Mapping/methods; Decision Trees; Image Interpretation, Computer-Assisted/methods; Multiple Sclerosis/pathology; Humans; Magnetic Resonance Imaging/methods
6.
J Med Imaging (Bellingham); 6(3): 034002, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31423456

ABSTRACT

Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. We propose to use both types of training data (fully annotated and weakly annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for segmentation and classification tasks to exploit the information contained in weakly annotated images while preventing the network from learning features that are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in magnetic resonance images from the Brain Tumor Segmentation 2018 Challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly annotated and fully annotated images available for training.
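A minimal PyTorch sketch of the joint-training idea, assuming 2D inputs and a toy encoder (not the paper's architecture); a per-image mask lets weakly annotated images contribute only to the classification loss:

```python
import torch
import torch.nn as nn

class JointSegClassNet(nn.Module):
    """Shared encoder with a segmentation head and an image-level
    classification branch, so weakly annotated images can still train."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(16, 2, 1)   # per-pixel logits
        self.cls_head = nn.Linear(16, 2)      # image-level logits

    def forward(self, x):
        h = self.encoder(x)
        return self.seg_head(h), self.cls_head(h.mean(dim=(2, 3)))

net = JointSegClassNet()
x = torch.randn(4, 1, 64, 64)
seg_target = torch.randint(0, 2, (4, 64, 64))   # used only where masks exist
cls_target = torch.randint(0, 2, (4,))          # cheap global labels
seg_logits, cls_logits = net(x)
has_mask = torch.tensor([1., 1., 0., 0.])       # 0 = weakly annotated image
seg_loss = nn.functional.cross_entropy(seg_logits, seg_target, reduction='none')
seg_loss = (seg_loss.mean(dim=(1, 2)) * has_mask).sum() / has_mask.sum()
cls_loss = nn.functional.cross_entropy(cls_logits, cls_target)
loss = seg_loss + cls_loss                      # joint objective
```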

7.
Comput Med Imaging Graph; 73: 60-72, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30889541

ABSTRACT

We present an efficient deep learning approach for the challenging task of tumor segmentation in multisequence MR images. In recent years, Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in a large variety of recognition tasks in medical imaging. Because of the considerable computational cost of CNNs, large volumes such as MRI are typically processed by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D patches. In this paper we introduce a CNN-based model which efficiently combines the advantages of short-range 3D context and long-range 2D context. Furthermore, we propose a network architecture with modality-specific subnetworks in order to be more robust to the problem of missing MR sequences during the training phase. To overcome the limitations of specific choices of neural network architectures, we describe a hierarchical decision process to combine outputs of several segmentation models. Finally, a simple and efficient algorithm for training large CNN models is introduced. We evaluate our method on the public benchmark of the BRATS 2017 challenge on the task of multiclass segmentation of malignant brain tumors. Our method achieves good performance and produces accurate segmentations with median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854 (enhancing core).


Subjects
Brain Neoplasms/diagnostic imaging; Imaging, Three-Dimensional; Neural Networks, Computer; Algorithms; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods
8.
Phys Med Biol; 63(23): 235002, 2018 Nov 22.
Article in English | MEDLINE | ID: mdl-30465543

ABSTRACT

Machine learning for image segmentation could provide expedited clinic workflow and better standardization of contour delineation. We evaluated a new model using deep decision forests of image features in order to contour pelvic anatomy on treatment planning CTs. 193 CT scans from one UK and two US institutions for patients undergoing radiotherapy treatment for prostate cancer from 2012-2016 were anonymized. A decision forest autosegmentation model was trained on a random selection of 94 images from Institution 1 and tested on 99 scans from Institutions 1, 2, and 3. The accuracy of model contours was measured with the Dice similarity coefficient (DSC) and the median slice-wise Hausdorff distance (MSHD), using clinical contours as the ground-truth reference. Two comparison studies were performed. The accuracy of the model was compared to four commercial software packages on twenty randomly selected images. Additionally, inter-observer variability (IOV) of contours between three radiation oncology experts and the original contours was evaluated on ten randomly selected images. The highest median values of DSC across all institutions were 0.94-0.97 for bladder (with interquartile range, or IQR, of 0.92-0.98) and 0.96-0.97 (IQR 0.94-0.97) for femurs. Good agreement was seen for prostate, with median DSC 0.75-0.76 (IQR 0.67-0.82), and rectum, with median DSC 0.71-0.82 (IQR 0.63-0.87). The lowest median scores were 0.49-0.70 for seminal vesicles (IQR 0.31-0.79). For the commercial software comparison, model-based segmentation produced higher DSC than atlas-based segmentation, with decision forests producing the highest DSC for all organs of interest. For the inter-observer study, variability in DSC between observers was similar to the agreement between the model and ground truth. Deep decision forests of radiomic features can generate contours of pelvic anatomy with reasonable agreement with physician contours. This method could be useful for automated treatment planning, and autosegmentation may improve efficiency and increase standardization in the clinic.
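A hedged sketch of the two accuracy metrics reported above, assuming binary masks in voxel units; the slice-wise Hausdorff distance here uses all mask voxels per axial slice rather than extracted contour points, a simplification:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def median_slicewise_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Median over axial slices of the symmetric Hausdorff distance
    between the slices' point sets (in voxel units)."""
    dists = []
    for k in range(a.shape[2]):
        pa, pb = np.argwhere(a[:, :, k]), np.argwhere(b[:, :, k])
        if len(pa) and len(pb):
            d = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
            dists.append(d)
    return float(np.median(dists))

# Toy masks standing in for a model contour and the clinical reference.
rng = np.random.default_rng(0)
model = rng.uniform(size=(64, 64, 20)) < 0.3
truth = rng.uniform(size=(64, 64, 20)) < 0.3
print(dice(model, truth), median_slicewise_hausdorff(model, truth))
```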


Subjects
Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Prostate/anatomy & histology; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Humans; Male; Models, Anatomic; Observer Variation; Prostate/diagnostic imaging
9.
IEEE Trans Med Imaging; 36(2): 607-617, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27831863

ABSTRACT

We investigate uncertainty quantification under a sparse Bayesian model of medical image registration. Bayesian modelling has proven powerful to automate the tuning of registration hyperparameters, such as the trade-off between the data and regularization functionals. Sparsity-inducing priors have recently been used to render the parametrization itself adaptive and data-driven. The sparse prior on transformation parameters effectively favors the use of coarse basis functions to capture the global trends in the visible motion while finer, highly localized bases are introduced only in the presence of coherent image information and motion. In earlier work, approximate inference under the sparse Bayesian model was tackled in an efficient Variational Bayes (VB) framework. In this paper we are interested in the theoretical and empirical quality of uncertainty estimates derived under this approximate scheme vs. under the exact model. We implement an (asymptotically) exact inference scheme based on reversible jump Markov Chain Monte Carlo (MCMC) sampling to characterize the posterior distribution of the transformation and compare the predictions of the VB and MCMC based methods. The true posterior distribution under the sparse Bayesian model is found to be meaningful: orders of magnitude for the estimated uncertainty are quantitatively reasonable, the uncertainty is higher in textureless regions and lower in the direction of strong intensity gradients.
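For intuition, a toy fixed-dimension random-walk Metropolis sampler (the paper's reversible-jump MCMC additionally jumps between parametrizations of different dimension); the two-parameter posterior below is invented for illustration:

```python
import numpy as np

def metropolis(log_post, x0, steps=5000, scale=0.1, seed=0):
    """Random-walk Metropolis sampling of a posterior; a fixed-dimension
    relative of the reversible-jump sampler used in the paper."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(steps):
        proposal = x + scale * rng.normal(size=x.shape)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = proposal, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Invented posterior over two transformation parameters: Gaussian data fit
# around observed displacements plus a Laplace (sparsity-like) prior.
observed = np.array([0.8, -0.2])
log_post = lambda t: -0.5 * np.sum((t - observed) ** 2) / 0.1 - np.abs(t).sum()
samples = metropolis(log_post, x0=[0.0, 0.0])
print(samples.mean(axis=0), samples.std(axis=0))   # posterior mean, uncertainty
```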


Subjects
Bayes Theorem; Humans; Markov Chains; Monte Carlo Method; Motion (Physics); Uncertainty
10.
Med Image Anal; 36: 79-97, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27870999

ABSTRACT

We extend Bayesian models of non-rigid image registration to allow not only for the automatic determination of registration parameters (such as the trade-off between image similarity and regularization functionals), but also for a data-driven, multiscale, spatially adaptive parametrization of deformations. Adaptive parametrizations have been used with success to promote both the regularity and accuracy of registration schemes, but so far on non-probabilistic grounds, either as part of multiscale heuristics or on the basis of sparse optimization. Under the proposed model, a sparsity-inducing prior on transformation parameters complements the classical smoothness-inducing prior, and favors parametrizations that use few degrees of freedom. As a result, finer bases get introduced only in the presence of coherent image information and motion, while coarser bases ensure better extrapolation of the motion to textureless, uninformative regions. The space of possible parametrizations consists of arbitrary combinations of basis functions chosen among any preset, widely overcomplete (and typically multiscale) dictionary. Inference is tackled in an efficient Variational Bayes framework. In addition we propose a flexible mixture-of-Gaussian model of data that proves to be more faithful for a variety of image modalities than the sum-of-squared differences. The performance of the proposed approach is demonstrated on time series of (cine and tagged) magnetic resonance and echocardiographic cardiac images. The proposed algorithm matches the state-of-the-art on benchmark datasets evaluating accuracy of motion and strain, and is highly automated.


Subjects
Algorithms; Bayes Theorem; Echocardiography/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Heart/diagnostic imaging; Heuristics; Humans; Motion (Physics)
11.
IEEE Trans Pattern Anal Mach Intell; 28(9): 1480-92, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16929733

ABSTRACT

This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly, and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo-match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good-quality composite video output.


Subjects
Algorithms; Colorimetry/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Photogrammetry/methods; Subtraction Technique; Video Recording/methods; Artificial Intelligence; Image Enhancement/methods; Information Storage and Retrieval/methods
12.
IEEE Trans Vis Comput Graph; 21(5): 571-83, 2015 May.
Article in English | MEDLINE | ID: mdl-26357205

ABSTRACT

Recovery from tracking failure is essential in any simultaneous localization and mapping (SLAM) system. In this context, we explore an efficient keyframe-based relocalization method based on frame encoding using randomized ferns. The method enables automatic discovery of keyframes through online harvesting in tracking mode, and fast retrieval of pose candidates when tracking is lost. Frame encoding is achieved by applying simple binary feature tests which are stored in the nodes of an ensemble of randomized ferns. The concatenation of small block codes generated by each fern yields a global compact representation of camera frames. Based on those representations we define the frame dissimilarity as the block-wise Hamming distance (BlockHD). Dissimilarities between an incoming query frame and a large set of keyframes can be efficiently evaluated by simply traversing the nodes of the ferns and counting image co-occurrences in corresponding code tables. In tracking mode, those dissimilarities decide whether a frame/pose pair is considered as a novel keyframe. For tracking recovery, poses of the most similar keyframes are retrieved and used for reinitialization of the tracking algorithm. The integration of our relocalization method into a hand-held KinectFusion system allows seamless continuation of mapping even when tracking is frequently lost.
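A compact sketch of fern-based frame encoding and BlockHD retrieval, assuming grayscale frames in [0, 1]; test locations and thresholds are drawn at random as in the paper, but all sizes here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, F, B = 48, 64, 32, 8   # frame size, number of ferns, tests per fern

# Each fern holds B random binary tests: compare a pixel to a threshold.
ys = rng.integers(0, H, size=(F, B))
xs = rng.integers(0, W, size=(F, B))
thresholds = rng.uniform(0, 1, size=(F, B))

def encode(frame: np.ndarray) -> np.ndarray:
    """Each fern yields a B-bit block code; together they form the descriptor."""
    return (frame[ys, xs] > thresholds).astype(np.uint8)   # shape (F, B)

def block_hd(code_a: np.ndarray, code_b: np.ndarray) -> int:
    """Block-wise Hamming distance: number of ferns whose codes differ."""
    return int((code_a != code_b).any(axis=1).sum())

# Harvested keyframe codes, then retrieval of the most similar keyframe.
keyframes = [encode(rng.uniform(size=(H, W))) for _ in range(100)]
query = encode(rng.uniform(size=(H, W)))
nearest = min(range(len(keyframes)), key=lambda i: block_hd(query, keyframes[i]))
```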

13.
JMIR Hum Factors; 2(1): e11, 2015 Jun 24.
Article in English | MEDLINE | ID: mdl-27025782

ABSTRACT

BACKGROUND: Sensor-based recordings of human movements are becoming increasingly important for the assessment of motor symptoms in neurological disorders beyond rehabilitative purposes. ASSESS MS is a movement recording and analysis system being developed to automate the classification of motor dysfunction in patients with multiple sclerosis (MS) using depth-sensing computer vision. It aims to provide a more consistent and finer-grained measurement of motor dysfunction than currently possible. OBJECTIVE: To test the usability and acceptability of ASSESS MS with health professionals and patients with MS. METHODS: A prospective, mixed-methods study was carried out at 3 centers. After a 1-hour training session, a convenience sample of 12 health professionals (6 neurologists and 6 nurses) used ASSESS MS to capture recordings of standardized movements performed by 51 volunteer patients. Metrics for effectiveness, efficiency, and acceptability were defined and used to analyze data captured by ASSESS MS, video recordings of each examination, feedback questionnaires, and follow-up interviews. RESULTS: All health professionals were able to complete recordings using ASSESS MS, achieving high levels of standardization on 3 of 4 metrics (movement performance, lateral positioning, and clear camera view but not distance positioning). Results were unaffected by patients' level of physical or cognitive disability. ASSESS MS was perceived as easy to use by both patients and health professionals with high scores on the Likert-scale questions and positive interview commentary. ASSESS MS was highly acceptable to patients on all dimensions considered, including attitudes to future use, interaction (with health professionals), and overall perceptions of ASSESS MS. Health professionals also accepted ASSESS MS, but with greater ambivalence arising from the need to alter patient interaction styles. There was little variation in results across participating centers, and no differences between neurologists and nurses. CONCLUSIONS: In typical clinical settings, ASSESS MS is usable and acceptable to both patients and health professionals, generating data of a quality suitable for clinical analysis. An iterative design process appears to have been successful in accounting for factors that permit ASSESS MS to be used by a range of health professionals in new settings with minimal training. The study shows the potential of shifting ubiquitous sensing technologies from research into the clinic through a design approach that gives appropriate attention to the clinic environment.

14.
IEEE Trans Med Imaging; 34(10): 1993-2024, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25494501

ABSTRACT

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
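A flat (non-hierarchical) per-voxel majority vote is sketched below; the paper's hierarchical variant additionally votes per nested tumor sub-region, which this simplification omits:

```python
import numpy as np

def majority_vote(segmentations: np.ndarray) -> np.ndarray:
    """Fuse label maps from several algorithms by per-voxel majority vote.
    segmentations: (n_algorithms, *volume_shape) integer label maps."""
    n_labels = segmentations.max() + 1
    votes = np.stack([(segmentations == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Toy example: three algorithms labelling a small volume with labels 0/1/2
# (e.g. background, edema, core); the fused map follows the per-voxel majority.
rng = np.random.default_rng(0)
segs = rng.integers(0, 3, size=(3, 16, 16, 8))
fused = majority_vote(segs)
```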


Subjects
Magnetic Resonance Imaging; Neuroimaging; Algorithms; Benchmarking; Glioma/pathology; Humans; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Neuroimaging/methods; Neuroimaging/standards
15.
IEEE Trans Image Process; 13(9): 1200-12, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15449582

ABSTRACT

A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: 1) "texture synthesis" algorithms for generating large image regions from sample textures and 2) "inpainting" techniques for filling in small image gaps. The former has been demonstrated for "textures", repeating two-dimensional patterns with some stochasticity; the latter focus on linear "structures" which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which the confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual color values are computed using exemplar-based synthesis. In this paper, the simultaneous propagation of texture and structure information is achieved by a single, efficient algorithm. Computational efficiency is achieved by a block-based sampling process. A number of examples on real and synthetic images demonstrate the effectiveness of our algorithm in removing large occluding objects, as well as thin scratches. Robustness with respect to the shape of the manually selected target region is also demonstrated. Our results compare favorably to those obtained by existing techniques.
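A heavily simplified single-channel sketch of the confidence-driven, exemplar-based fill: it omits the gradient-based "data" term of the published priority, ignores the border region, and uses brute-force patch search rather than block-based sampling:

```python
import numpy as np

def inpaint(img, mask, r=3):
    """Greedy exemplar-based fill: repeatedly pick the fill-front patch with
    highest confidence and copy the best-matching fully-known source patch
    into its unknown pixels."""
    img, mask = img.astype(float).copy(), mask.astype(bool).copy()
    conf = (~mask).astype(float)
    H, W = img.shape
    while mask.any():
        # Fill front: unknown pixels with a known 8-neighbour, away from borders.
        front = [(y, x) for y, x in zip(*np.where(mask))
                 if r <= y < H - r and r <= x < W - r
                 and (~mask[y - 1:y + 2, x - 1:x + 2]).any()]
        if not front:                 # only border pixels remain: stop
            break
        prio = lambda p: conf[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1].mean()
        y, x = max(front, key=prio)   # highest-confidence patch first
        tgt = img[y-r:y+r+1, x-r:x+r+1]
        known = ~mask[y-r:y+r+1, x-r:x+r+1]
        best, best_d = None, np.inf
        for sy in range(r, H - r):    # exhaustive source-patch search (SSD)
            for sx in range(r, W - r):
                if mask[sy-r:sy+r+1, sx-r:sx+r+1].any():
                    continue          # source patches must be fully known
                d = ((img[sy-r:sy+r+1, sx-r:sx+r+1] - tgt)[known] ** 2).sum()
                if d < best_d:
                    best_d, best = d, img[sy-r:sy+r+1, sx-r:sx+r+1]
        hole = ~known
        img[y-r:y+r+1, x-r:x+r+1][hole] = best[hole]
        conf[y-r:y+r+1, x-r:x+r+1][hole] = prio((y, x))  # propagate confidence
        mask[y-r:y+r+1, x-r:x+r+1] = False
    return img

# Toy usage: remove an 8x8 block from a smooth synthetic image.
image = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
hole = np.zeros_like(image, dtype=bool)
hole[28:36, 28:36] = True
restored = inpaint(image, hole)
```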


Subjects
Algorithms; Computer Graphics; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated; Signal Processing, Computer-Assisted; Subtraction Technique; Hypermedia; Information Storage and Retrieval/methods; Numerical Analysis, Computer-Assisted; Paintings; Reproducibility of Results; Sensitivity and Specificity
16.
Med Image Comput Comput Assist Interv; 17(Pt 2): 496-504, 2014.
Article in English | MEDLINE | ID: mdl-25485416

ABSTRACT

This paper presents a new, efficient and accurate technique for the semantic segmentation of medical images. The paper builds upon the successful random decision forests model and improves on it by modifying the way in which randomness is injected into the tree training process. The contribution of this paper is two-fold. First, we replace the conventional bagging procedure (the uniform sampling of training images) with a guided bagging approach, which exploits the inherent structure and organization of the training image set. This allows the creation of decision trees that are specialized to a specific sub-type of images in the training set. Second, the segmentation of a previously unseen image happens via selection and application of only the trees that are relevant to the given test image. Tree selection is done automatically, via the learned image embedding, more precisely a Laplacian eigenmap. We therefore call the proposed approach Laplacian Forests. We validate Laplacian Forests on a dataset of 256 manually segmented 3D CT scans of patients showing high variability in scanning protocols, resolution, body shape and anomalies. Compared with conventional decision forests, Laplacian Forests yield both higher training efficiency, due to the local analysis of the training image space, as well as higher segmentation accuracy, due to the specialization of the forest to image sub-types.


Subjects
Algorithms; Artificial Intelligence; Pattern Recognition, Automated/methods; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Humans; Reproducibility of Results; Sensitivity and Specificity
17.
Med Image Comput Comput Assist Interv; 17(Pt 1): 235-42, 2014.
Article in English | MEDLINE | ID: mdl-25333123

ABSTRACT

We propose a Sparse Bayesian framework for non-rigid registration. Our principled approach is flexible, in that it efficiently finds an optimal, sparse model to represent deformations among any preset, widely overcomplete range of basis functions. It addresses open challenges in state-of-the-art registration, such as the automatic joint estimate of model parameters (e.g. noise and regularization levels). We demonstrate the feasibility and performance of our approach on cine MR, tagged MR and 3D US cardiac images, and show state-of-the-art results on benchmark datasets evaluating accuracy of motion and strain.


Subjects
Bayes Theorem; Echocardiography, Three-Dimensional/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging, Cine/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Algorithms; Humans; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
18.
Med Image Comput Comput Assist Interv; 17(Pt 3): 225-32, 2014.
Article in English | MEDLINE | ID: mdl-25320803

ABSTRACT

This paper introduces image quality transfer. The aim is to learn the fine structural detail of medical images from high quality data sets acquired with long acquisition times or from bespoke devices and transfer that information to enhance lower quality data sets from standard acquisitions. We propose a framework for solving this problem using random forest regression to relate patches in the low-quality data set to voxel values in the high quality data set. Two examples in diffusion MRI demonstrate the idea. In both cases, we learn from the Human Connectome Project (HCP) data set, which uses an hour of acquisition time per subject, just for diffusion imaging, using custom built scanner hardware and rapid imaging techniques. The first example, super-resolution of diffusion tensor images (DTIs), enhances spatial resolution of standard data sets with information from the high-resolution HCP data. The second, parameter mapping, constructs neurite orientation dispersion and density imaging (NODDI) parameter maps, which usually require specialist data sets with two b-values, from standard single-shell high angular resolution diffusion imaging (HARDI) data sets with b = 1000 s/mm2. Experiments quantify the improvement against alternative image reconstructions in comparison to ground truth from the HCP data set in both examples and demonstrate efficacy on a standard data set.


Subjects
Algorithms; Brain/cytology; Diffusion Magnetic Resonance Imaging/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Data Interpretation, Statistical; Humans; Regression Analysis; Reproducibility of Results; Sensitivity and Specificity
19.
Med Image Comput Comput Assist Interv; 17(Pt 2): 429-37, 2014.
Article in English | MEDLINE | ID: mdl-25485408

ABSTRACT

This paper presents new learning-based techniques for measuring disease progression in Multiple Sclerosis (MS) patients. Our system aims to augment conventional neurological examinations by adding quantitative evidence of disease progression. An off-the-shelf depth camera is used to image the patient at the examination, during which he/she is asked to perform carefully selected movements. Our algorithms then automatically analyze the videos, assessing the quality of each movement and classifying them as healthy or non-healthy. Our contribution is three-fold: We i) introduce ensembles of randomized SVM classifiers and compare them with decision forests on the task of depth video classification; ii) demonstrate automatic selection of discriminative landmarks in the depth videos, showing their clinical relevance; iii) validate our classification algorithms quantitatively on a new dataset of 1041 videos of both MS patients and healthy volunteers. We achieve average Dice scores well in excess of the 80% mark, confirming the validity of our approach in practical applications. Our results suggest that this technique could be fruitful for depth-camera supported clinical assessments for a range of conditions.
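One plausible reading of "ensembles of randomized SVM classifiers" is bagged SVMs trained on random subsets of samples and features, sketched below with synthetic per-video descriptors; this is an assumption, not the authors' exact construction:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

# Synthetic stand-ins: one fixed-length descriptor per movement video
# (e.g. summarised joint trajectories) and binary healthy/non-healthy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 2, size=300)

# Randomization enters through per-estimator sample and feature subsampling.
ensemble = BaggingClassifier(
    estimator=SVC(kernel="rbf", probability=True),
    n_estimators=25, max_samples=0.8, max_features=0.5, random_state=0)
ensemble.fit(X, y)
scores = ensemble.predict_proba(X)[:, 1]   # non-healthy movement score
```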


Subjects
Diagnostic Techniques, Neurological; Imaging, Three-Dimensional/methods; Movement Disorders/diagnosis; Multiple Sclerosis/diagnosis; Pattern Recognition, Automated/methods; Video Recording/methods; Whole Body Imaging/methods; Artificial Intelligence; Disease Progression; Humans; Image Interpretation, Computer-Assisted/methods; Movement Disorders/etiology; Multiple Sclerosis/complications; Reproducibility of Results; Sensitivity and Specificity
20.
Article in English | MEDLINE | ID: mdl-24505745

ABSTRACT

We propose a method for multi-atlas label propagation based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This negatively affects the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). At test time, each AF yields a probabilistic label estimate, and fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, incorporation of new scans is possible without retraining, and target-specific selection of atlases remains possible. The evaluation on three different databases shows accuracy at the level of the state of the art, at a significantly lower runtime.
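A minimal sketch of the Atlas Forest idea: one small forest per aligned atlas, fused at test time by averaging probabilistic label estimates. Intensity-plus-coordinate features stand in for the paper's features with respect to the aligned probabilistic atlas, and all data below are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: each atlas is an (intensity volume, label volume) pair
# already aligned to a common probabilistic atlas space.
rng = np.random.default_rng(0)
atlases = [(rng.normal(size=(16, 16, 8)), rng.integers(0, 3, size=(16, 16, 8)))
           for _ in range(5)]

def features(vol):
    """Per-voxel intensity plus normalised coordinates, a crude stand-in
    for location relative to the aligned probabilistic atlas."""
    idx = np.indices(vol.shape).reshape(3, -1).T / np.array(vol.shape)
    return np.column_stack([vol.ravel(), idx])

# One forest per atlas ("Atlas Forest"); adding an atlas needs no retraining.
forests = []
for vol, lab in atlases:
    f = RandomForestClassifier(n_estimators=8, random_state=0)
    f.fit(features(vol), lab.ravel())
    forests.append(f)

# Label propagation: average the per-atlas probabilistic estimates.
target = rng.normal(size=(16, 16, 8))
probs = np.mean([f.predict_proba(features(target)) for f in forests], axis=0)
segmentation = probs.argmax(axis=1).reshape(target.shape)
```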


Subjects
Brain/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Magnetic Resonance Imaging/methods; Models, Anatomic; Pattern Recognition, Automated/methods; Subtraction Technique; Algorithms; Computer Simulation; Data Interpretation, Statistical; Humans; Image Enhancement/methods; Models, Neurological; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity; Staining and Labeling