Results 1 - 20 of 35
1.
MAGMA ; 29(2): 155-95, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26811173

ABSTRACT

Cardiovascular magnetic resonance (CMR) has become a key imaging modality in clinical cardiology practice due to its unique capabilities for non-invasive imaging of the cardiac chambers and great vessels. A wide range of CMR sequences have been developed to assess various aspects of cardiac structure and function, and significant advances have also been made in terms of imaging quality and acquisition times. Considerable research has been dedicated to the development of global and regional quantitative CMR indices that help distinguish between health and pathology. The goal of this review paper is to discuss the structural and functional CMR indices that have been proposed thus far for clinical assessment of the cardiac chambers. We include the definitions of the indices, the requirements for their calculation, exemplar applications in cardiovascular diseases, and the corresponding normal ranges. Furthermore, we review the most recent state-of-the-art techniques for the automatic segmentation of the cardiac boundaries, which are necessary for the calculation of the CMR indices. Finally, we provide a detailed discussion of the existing literature and of the future challenges that need to be addressed to enable a more robust and comprehensive assessment of the cardiac chambers in clinical practice.
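
Many of the global functional indices covered by such reviews reduce to simple arithmetic on chamber volumes obtained from the segmentations. The sketch below is a minimal illustration, not taken from the paper: it computes end-diastolic volume, end-systolic volume, stroke volume and ejection fraction from a hypothetical time series of binary left-ventricular cavity masks; the voxel spacing, array shapes and function names are assumptions.

```python
import numpy as np

def lv_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary LV cavity mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def global_lv_indices(masks, voxel_spacing_mm=(1.8, 1.8, 8.0)):
    """Compute EDV, ESV, stroke volume and ejection fraction.

    masks: array of shape (T, Z, Y, X) with one binary LV cavity mask per
    cardiac phase (hypothetical output of a segmentation model).
    """
    volumes = np.array([lv_volume_ml(m, voxel_spacing_mm) for m in masks])
    edv, esv = volumes.max(), volumes.min()   # end-diastole / end-systole
    sv = edv - esv                            # stroke volume
    ef = 100.0 * sv / edv                     # ejection fraction in percent
    return {"EDV_ml": edv, "ESV_ml": esv, "SV_ml": sv, "EF_percent": ef}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy sequence of 20 phases on a 10x64x64 grid, purely for demonstration.
    masks = rng.random((20, 10, 64, 64)) < 0.2
    print(global_lv_indices(masks))
```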


Subject(s)
Heart Diseases/diagnostic imaging , Heart Diseases/pathology , Heart Function Tests/methods , Heart/diagnostic imaging , Magnetic Resonance Imaging, Cine/methods , Myocardium/pathology , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Machine Learning , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
2.
Comput Methods Programs Biomed ; 250: 108158, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38604010

ABSTRACT

BACKGROUND AND OBJECTIVE: In radiotherapy treatment planning, respiration-induced motion introduces uncertainty that, if not appropriately considered, could result in dose delivery problems. 4D cone-beam computed tomography (4D-CBCT) has been developed to provide imaging guidance by reconstructing a pseudo-motion sequence of CBCT volumes through binning projection data into breathing phases. However, it suffers from artefacts and erroneously characterizes the averaged breathing motion. Furthermore, conventional 4D-CBCT can only be generated post hoc using the full sequence of kV projections after the treatment is complete, limiting its utility. Hence, our purpose is to develop a deep-learning motion model for estimating 3D+t CT images from treatment kV projection series. METHODS: We propose an end-to-end learning-based 3D motion modelling and 4DCT reconstruction model named 4D-Precise, abbreviated from Probabilistic reconstruction of image sequences from CBCT kV projections. The model estimates voxel-wise motion fields and simultaneously reconstructs a 3DCT volume at any time point of the input projections by transforming a reference CT volume. A Torch-DRR module was developed to enable end-to-end training by computing Digitally Reconstructed Radiographs (DRRs) in PyTorch. During training, DRRs with projection angles matching the input kVs are automatically extracted from the reconstructed volumes, and their structural dissimilarity to the inputs is penalised. We introduced a novel loss function to regulate spatio-temporal motion field variations across the CT scan, leveraging the planning 4DCT for prior motion distribution estimation. RESULTS: The model is trained patient-specifically using three kV scan series, each including over 1200 angular/temporal projections, and tested on three other scan series. Imaging data from five patients are analysed here. The model is also validated on a simulated paired 4DCT-DRR dataset created using the Surrogate Parametrised Respiratory Motion Modelling (SuPReMo) framework. The results demonstrate that the volumes reconstructed by 4D-Precise closely resemble the ground-truth volumes in terms of Dice, volume similarity, mean contour distance, and Hausdorff distance, while 4D-Precise achieves smoother deformations and fewer negative Jacobian determinants compared to SuPReMo. CONCLUSIONS: Unlike conventional 4DCT reconstruction techniques that ignore inter-cycle breathing motion variations, the proposed model computes both intra-cycle and inter-cycle motions. It represents motion over an extended timeframe, covering several minutes of kV scan series.
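
The key ingredient that makes such a model trainable end to end is a differentiable DRR operator. The sketch below is only a simplified parallel-beam stand-in, not the paper's Torch-DRR module (which uses the actual cone-beam projection geometry); it shows the basic idea in PyTorch: rotate the CT volume to a gantry angle, integrate along the ray direction, and let gradients flow back into the volume. All shapes and the function name are assumptions.

```python
import math
import torch

def parallel_beam_drr(volume, angle_deg):
    """Differentiable toy DRR: rotate the volume about the z-axis slice-wise
    and integrate along one in-plane axis (parallel-beam approximation).

    volume: tensor of shape (D, H, W); angle_deg: gantry angle in degrees.
    """
    d, h, w = volume.shape
    theta = math.radians(angle_deg)
    cos, sin = math.cos(theta), math.sin(theta)
    rot = torch.tensor([[cos, -sin, 0.0], [sin, cos, 0.0]],
                       dtype=volume.dtype).unsqueeze(0)
    grid = torch.nn.functional.affine_grid(
        rot.expand(d, -1, -1), size=(d, 1, h, w), align_corners=False)
    rotated = torch.nn.functional.grid_sample(
        volume.unsqueeze(1), grid, align_corners=False).squeeze(1)
    return rotated.sum(dim=1)  # integrate along rays -> projection of shape (D, W)

if __name__ == "__main__":
    ct = torch.rand(32, 64, 64, requires_grad=True)  # toy CT volume
    drr = parallel_beam_drr(ct, angle_deg=30.0)
    drr.mean().backward()                 # gradients reach the CT volume
    print(drr.shape, ct.grad is not None)
```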


Subject(s)
Cone-Beam Computed Tomography , Four-Dimensional Computed Tomography , Radiotherapy Planning, Computer-Assisted , Respiration , Four-Dimensional Computed Tomography/methods , Humans , Cone-Beam Computed Tomography/methods , Radiotherapy Planning, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Algorithms , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Movement , Motion , Deep Learning
3.
Med Image Anal ; 93: 103097, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38325154

ABSTRACT

Determining early-stage prognostic markers and stratifying patients for effective treatment are two key challenges for improving outcomes for melanoma patients. Previous studies have used tumour transcriptome data to stratify patients into immune subgroups, which were associated with differential melanoma-specific survival and potential predictive biomarkers. However, acquiring transcriptome data is a time-consuming and costly process. Moreover, it is not routinely used in the current clinical workflow. Here, we attempt to overcome this by developing deep learning models to classify gigapixel haematoxylin and eosin (H&E) stained pathology slides, which are well established in clinical workflows, into these immune subgroups. We systematically assess six different multiple instance learning (MIL) frameworks, using five different image resolutions and three different feature extraction methods. We show that pathology-specific self-supervised models using 10x resolution patches generate superior representations for the classification of immune subtypes. In addition, in a primary melanoma dataset, we achieve a mean area under the receiver operating characteristic curve (AUC) of 0.80 for classifying histopathology images into 'high' or 'low immune' subgroups, and a mean AUC of 0.82 in an independent TCGA melanoma dataset. Furthermore, we show that these models are able to stratify patients into 'high' and 'low immune' subgroups with significantly different melanoma-specific survival outcomes (log rank test, P < 0.005). We anticipate that MIL methods will allow us to find new biomarkers of high importance, act as a tool for clinicians to infer the immune landscape of tumours and stratify patients, without needing to carry out additional expensive genetic tests.
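
A common way to connect slide-level labels to gigapixel images in MIL frameworks such as those benchmarked here is an attention-pooling head over pre-extracted patch features. The sketch below shows the standard attention-based MIL formulation (Ilse et al., 2018) in PyTorch as a generic illustration; the feature dimensions and class count are assumptions, and this is not the authors' exact pipeline.

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Generic attention-based MIL head: patch embeddings -> bag-level logits."""

    def __init__(self, feat_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):           # (n_patches, feat_dim)
        scores = self.attention(patch_feats)  # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)
        bag_feat = (weights * patch_feats).sum(dim=0)   # attention-weighted average
        return self.classifier(bag_feat), weights.squeeze(-1)

if __name__ == "__main__":
    feats = torch.randn(1000, 512)     # e.g. self-supervised patch features
    head = AttentionMILHead()
    logits, attn = head(feats)
    print(logits.shape, attn.shape)    # torch.Size([2]) torch.Size([1000])
```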


Subject(s)
Melanoma , Humans , Melanoma/diagnostic imaging , Melanoma/genetics , ROC Curve , Staining and Labeling , Workflow , Biomarkers
4.
J Prosthodont ; 22(4): 330-3, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23279141

ABSTRACT

An immediate denture is fabricated before all the remaining teeth have been removed. Its advantages include maintenance of the patient's appearance, muscle tone, facial height, tongue size, and normal speech, as well as reduction of postoperative pain. The purpose of this study is to describe the use of a patient's fixed prosthesis for fabricating an interim immediate partial denture in one appointment. With this procedure, occlusion, occlusal vertical dimension, and facial support are maintained during the healing period.


Subject(s)
Denture Design , Denture, Partial, Immediate , Denture, Partial, Temporary , Dental Clasps , Dental Impression Technique , Dental Occlusion , Denture Bases , Denture Rebasing , Denture, Partial, Fixed , Face/anatomy & histology , Female , Humans , Middle Aged , Time Factors , Dental Tissue Conditioning/methods , Vertical Dimension
5.
Med Image Anal ; 83: 102678, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36403308

ABSTRACT

Deformable image registration (DIR) can be used to track cardiac motion. Conventional DIR algorithms aim to establish a dense and non-linear correspondence between independent pairs of images. They are, nevertheless, computationally intensive and do not consider temporal dependencies to regulate the estimated motion in a cardiac cycle. In this paper, leveraging deep learning methods, we formulate a novel hierarchical probabilistic model, termed DragNet, for fast and reliable spatio-temporal registration in cine cardiac magnetic resonance (CMR) images and for generating synthetic heart motion sequences. DragNet is a variational inference framework, which takes an image from the sequence in combination with the hidden states of a recurrent neural network (RNN) as inputs to an inference network per time step. As part of this framework, we condition the prior probability of the latent variables on the hidden states of the RNN utilised to capture temporal dependencies. We further condition the posterior of the motion field on a latent variable from the hierarchy and on features from the moving image. Subsequently, the RNN updates the hidden state variables based on the feature maps of the fixed image and the latent variables. Different from traditional methods, DragNet performs registration on unseen sequences in a forward pass, which significantly expedites the registration process. In addition, DragNet enables the generation of a large number of realistic synthetic image sequences given only one frame, where the corresponding deformations are also retrieved. The probabilistic framework allows for computing spatio-temporal uncertainties in the estimated motion fields. Our results show that DragNet's performance is comparable with state-of-the-art methods in terms of registration accuracy, with the advantage of offering analytical pixel-wise motion uncertainty estimation across a cardiac cycle and of being a motion generator. We will make our code publicly available.
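
At the core of any such registration model is the step of warping a reference frame with a dense, voxel-wise motion field. The sketch below illustrates that operation alone, in 2D and with SciPy rather than the paper's network: a hypothetical displacement field is applied to an image by backward warping with linear interpolation. The field values and function name are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_2d(image, displacement):
    """Warp a 2D image with a dense displacement field (backward warping).

    displacement: array of shape (2, H, W) giving per-pixel offsets (dy, dx)
    that map target coordinates to source coordinates.
    """
    h, w = image.shape
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

if __name__ == "__main__":
    fixed = np.zeros((64, 64)); fixed[24:40, 24:40] = 1.0
    # Hypothetical motion field: shift the whole frame 3 pixels to the right.
    disp = np.zeros((2, 64, 64)); disp[1] = -3.0
    warped = warp_2d(fixed, disp)
    print(warped[30, 20:45].round(1))
```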

6.
Article in English | MEDLINE | ID: mdl-37022057

ABSTRACT

Modern radiotherapy delivers treatment plans optimised on an individual patient level, using CT-based 3D models of patient anatomy. This optimisation is fundamentally based on simple assumptions about the relationship between the radiation dose delivered to the cancer (increased dose will increase cancer control) and to normal tissue (increased dose will increase the rate of side effects). The details of these relationships are still not well understood, especially for radiation-induced toxicity. We propose a convolutional neural network based on multiple instance learning to analyse toxicity relationships for patients receiving pelvic radiotherapy. A dataset comprising 315 patients was included in this study, with 3D dose distributions, pre-treatment CT scans with annotated abdominal structures, and patient-reported toxicity scores provided for each participant. In addition, we propose a novel mechanism for segregating the attentions over space and dose/imaging features independently, for a better understanding of the anatomical distribution of toxicity. Quantitative and qualitative experiments were performed to evaluate the network performance. The proposed network could predict toxicity with 80% accuracy. Attention analysis over space demonstrated that there was a significant association between radiation dose to the anterior and right iliac regions of the abdomen and patient-reported toxicity. Experimental results showed that the proposed network had outstanding performance for toxicity prediction, localisation and explanation, with the ability to generalise to an unseen dataset.

7.
Med Image Anal ; 75: 102276, 2022 01.
Article in English | MEDLINE | ID: mdl-34753021

ABSTRACT

Automatic shape anomaly detection in large-scale imaging data can be useful for screening suboptimal segmentations and pathologies altering the cardiac morphology without intensive manual labour. We propose a deep probabilistic model for local anomaly detection in sequences of heart shapes, modelled as point sets, in a cardiac cycle. A deep recurrent encoder-decoder network captures the spatio-temporal dependencies to predict the next shape in the cycle and thus derive the outlier points that are attributed to excessive deviations from the network prediction. A predictive mixture distribution models the inlier and outlier classes via Gaussian and uniform distributions, respectively. A Gibbs sampling Expectation-Maximisation (EM) algorithm computes soft anomaly scores of the points via the posterior probabilities of each class in the E-step and estimates the parameters of the network and the predictive distribution in the M-step. We demonstrate the versatility of the method using two shape datasets derived from: (i) one million biventricular CMR images from 20,000 participants in the UK Biobank (UKB), and (ii) routine diagnostic imaging from the Multi-Centre, Multi-Vendor, and Multi-Disease Cardiac Image (M&Ms) dataset. Experiments show that the detected shape anomalies in the UKB dataset are mostly associated with poor segmentation quality, and the predicted shape sequences show significant improvement over the input sequences. Furthermore, evaluations on U-Net based shapes from the M&Ms dataset reveal that the anomalies are attributable to the underlying pathologies that affect the ventricles. The proposed model can therefore be used as an effective mechanism to sift shape anomalies in large-scale cardiac imaging pipelines for further analysis.
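
The soft anomaly scores described here come from the posterior probability of the outlier class under a Gaussian-plus-uniform predictive mixture. The sketch below shows only that responsibility computation, for scalar residuals and with illustrative parameter values; it is a plain E-step stand-in, not the Gibbs-sampling EM of the paper, and the function name and parameters are assumptions.

```python
import numpy as np

def anomaly_scores(residuals, sigma=1.0, outlier_range=20.0, pi_outlier=0.1):
    """Posterior outlier probability under a Gaussian-plus-uniform mixture.

    residuals: deviations of observed points from the network prediction.
    The Gaussian component models inliers, the uniform component outliers.
    """
    gauss = (1.0 - pi_outlier) * np.exp(-0.5 * (residuals / sigma) ** 2) \
            / (np.sqrt(2 * np.pi) * sigma)
    unif = pi_outlier / outlier_range          # uniform density over the range
    return unif / (gauss + unif)               # soft anomaly score in [0, 1]

if __name__ == "__main__":
    r = np.array([0.1, -0.5, 1.2, 8.0, -9.5])  # last two are clear outliers
    print(anomaly_scores(r).round(3))
```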


Subject(s)
Algorithms , Models, Statistical , Heart/diagnostic imaging , Heart Ventricles/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Motion
8.
IEEE Trans Med Imaging ; 39(5): 1380-1391, 2020 05.
Article in English | MEDLINE | ID: mdl-31647422

ABSTRACT

Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques in digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set with 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset with 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on the average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends observed that contributed to increased accuracy were the use of color normalization and heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask-RCNN were popularly used, typically based on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
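
The aggregated Jaccard index used for ranking rewards instance-level agreement rather than pixel-wise overlap alone. The sketch below is an illustrative re-implementation of the usual AJI definition on labelled instance masks (greedy matching of each ground-truth nucleus to its best-overlapping prediction, with unmatched predictions penalising the union); it is not the challenge's official evaluation code, and the toy masks are invented for demonstration.

```python
import numpy as np

def aggregated_jaccard_index(gt_labels, pred_labels):
    """Aggregated Jaccard Index (AJI) for instance segmentation label images.

    gt_labels, pred_labels: integer label images where 0 is background and
    each positive integer is one nucleus instance.
    """
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    pred_ids = [j for j in np.unique(pred_labels) if j != 0]
    pred_areas = {j: int((pred_labels == j).sum()) for j in pred_ids}
    used, inter_sum, union_sum = set(), 0, 0

    for i in gt_ids:
        g = gt_labels == i
        # Unmatched ground truth contributes its own area to the union.
        best_iou, best_j, best_inter, best_union = 0.0, None, 0, int(g.sum())
        for j in np.unique(pred_labels[g]):
            if j == 0:
                continue
            p = pred_labels == j
            inter = int(np.logical_and(g, p).sum())
            union = int(np.logical_or(g, p).sum())
            if inter / union > best_iou:
                best_iou, best_j = inter / union, j
                best_inter, best_union = inter, union
        inter_sum += best_inter
        union_sum += best_union
        if best_j is not None:
            used.add(best_j)

    # Predictions never matched to any nucleus only enlarge the union.
    union_sum += sum(pred_areas[j] for j in pred_ids if j not in used)
    return inter_sum / union_sum if union_sum else 0.0

if __name__ == "__main__":
    gt = np.zeros((8, 8), int); gt[1:4, 1:4] = 1; gt[5:8, 5:8] = 2
    pr = np.zeros((8, 8), int); pr[1:4, 2:5] = 1; pr[0:2, 6:8] = 2
    print(round(aggregated_jaccard_index(gt, pr), 3))
```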


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Cell Nucleus , Humans
9.
IEEE J Biomed Health Inform ; 23(2): 509-518, 2019 03.
Article in English | MEDLINE | ID: mdl-29994323

ABSTRACT

Lesion segmentation is the first step in most automatic melanoma recognition systems. Deficiencies and difficulties in dermoscopic images, such as color inconstancy, hair occlusion, dark corners, and color charts, make lesion segmentation an intricate task. In order to detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images, based on discriminative regional feature integration (DRFI). The DRFI method incorporates multilevel segmentation, regional contrast, property and background descriptors, and a random forest regressor to create saliency scores for each region in the image. In our improved saliency detection method, mDRFI, we have added new features to the regional property descriptors. Also, in order to achieve more robust regional background descriptors, a thresholding algorithm is proposed to obtain a new pseudo-background region. Findings reveal that mDRFI is superior to DRFI in detecting the lesion as the salient object in dermoscopic images. The proposed overall lesion segmentation framework uses the detected saliency map to construct an initial mask of the lesion through thresholding and postprocessing operations. The initial mask is then evolved in a level-set framework to better fit the lesion's boundaries. The results of evaluation tests on three public datasets show that our proposed segmentation method outperforms conventional state-of-the-art segmentation algorithms, and its performance is comparable with recent approaches that are based on deep convolutional neural networks.


Subject(s)
Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods , Skin Neoplasms/diagnostic imaging , Algorithms , Databases, Factual , Humans , Skin/diagnostic imaging , Supervised Machine Learning
10.
IEEE Trans Image Process ; 28(7): 3246-3260, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30703023

ABSTRACT

The recognition of different cell compartments, the types of cells, and their interactions is a critical aspect of quantitative cell biology. However, automating this problem has proven to be non-trivial and requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and irregularly shaped structures. To alleviate this, graphical models are useful due to their ability to make use of prior knowledge and to model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, using trees, only a limited set of inter-class constraints can be captured. To overcome this limitation, we propose polytree graphical models that capture label proximity relations more naturally compared to tree-based approaches. A novel recursive mechanism based on two-pass message passing was developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. The algorithm is evaluated on simulated data and on two publicly available fluorescence microscopy datasets, outperforming directed trees and three state-of-the-art convolutional neural networks, namely SegNet, DeepLab, and PSPNet. Polytrees are shown to outperform directed trees in predicting segmentation error by highlighting areas in the segmented image that do not comply with prior knowledge. This paves the way to uncertainty measures on the resulting segmentation and guides subsequent segmentation refinement.

11.
Med Image Anal ; 53: 47-63, 2019 04.
Article in English | MEDLINE | ID: mdl-30684740

ABSTRACT

A probabilistic framework for registering generalised point sets comprising multiple voxel-wise data features, such as positions, orientations and scalar-valued quantities, is proposed. It is employed for the analysis of magnetic resonance diffusion tensor image (DTI)-derived quantities, such as fractional anisotropy (FA) and fibre orientation, across multiple subjects. A hybrid Student's t-Watson-Gaussian mixture model-based non-rigid registration framework is formulated for the joint registration and clustering of voxel-wise DTI-derived data acquired from multiple subjects. The proposed approach jointly estimates the non-rigid transformations necessary to register an unbiased mean template (represented as a 7-dimensional hybrid point set comprising spatial positions, fibre orientations and FA values) to white matter regions of interest (ROIs), and approximates the joint distribution of voxel spatial positions, their associated principal diffusion axes, and FA. Specific white matter ROIs, namely the corpus callosum and cingulum, are analysed across healthy control (HC) subjects (K = 20 samples) and patients diagnosed with mild cognitive impairment (MCI) (K = 20 samples) or Alzheimer's disease (AD) (K = 20 samples) using the proposed framework, facilitating inter-group comparisons of FA and fibre orientations. Group-wise analyses of the latter are not afforded by conventional approaches such as tract-based spatial statistics (TBSS) and voxel-based morphometry (VBM).


Subject(s)
Alzheimer Disease/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Diffusion Magnetic Resonance Imaging , Image Interpretation, Computer-Assisted/methods , Algorithms , Anisotropy , Corpus Callosum/diagnostic imaging , Humans , White Matter/diagnostic imaging
12.
Med Image Anal ; 56: 26-42, 2019 08.
Article in English | MEDLINE | ID: mdl-31154149

ABSTRACT

Population imaging studies generate data for developing and implementing personalised health strategies to prevent, or more effectively treat, disease. Large prospective epidemiological studies acquire imaging for pre-symptomatic populations. These studies enable the early discovery of alterations due to impending disease and the early identification of individuals at risk. Such studies pose new challenges requiring automatic image analysis. To date, few large-scale population-level cardiac imaging studies have been conducted. One such study stands out for its sheer size, careful implementation, and availability of top-quality expert annotation: the UK Biobank (UKB). The resulting massive imaging datasets (targeting ca. 100,000 subjects) have put published approaches for cardiac image quantification to the test. In this paper, we present and evaluate a cardiac magnetic resonance (CMR) image analysis pipeline that properly scales up and can provide a fully automatic analysis of the UKB CMR study. Without manual user interactions, our pipeline performs end-to-end image analytics from multi-view cine CMR images all the way to anatomical and functional bi-ventricular quantification, while maintaining relevant quality controls of the CMR input images and the resulting image segmentations. To the best of our knowledge, this is the first published attempt to fully automate the extraction of global and regional reference ranges of all key functional cardiovascular indexes, from both the left and right cardiac ventricles, for a population of 20,000 subjects imaged at 50 time frames per subject, for a total of one million CMR volumes. In addition, our pipeline provides 3D anatomical bi-ventricular models of the heart. These models enable the extraction of detailed information on the morphodynamics of the two ventricles for subsequent association with genetic, omics, lifestyle-habit, exposure, and other information provided in population imaging studies. We validated our proposed CMR analytics pipeline against manual expert readings on a reference cohort of 4620 subjects with contour delineations and corresponding clinical indexes. Our results show broad, significant agreement between the manually obtained reference indexes and those automatically computed via our framework. 80.67% of subjects were processed with a mean contour distance of less than 1 pixel, and 17.50% with a mean contour distance between 1 and 2 pixels. Finally, we compare our pipeline with a recently published deep-learning-based approach reporting on UKB data. Our comparison shows similar performance in terms of segmentation accuracy with respect to human experts.
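
Two of the agreement figures quoted above are mean contour distances between automatic and manual delineations. As a generic illustration of that kind of measure, not the pipeline's own evaluation code, the sketch below computes a symmetric mean contour distance between two 2D point sets with SciPy; the contour points and function name are invented for the example.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mean_contour_distance(contour_a, contour_b):
    """Symmetric mean contour distance between point sets of shape (N, 2)/(M, 2)."""
    d = cdist(contour_a, contour_b)            # pairwise Euclidean distances
    a_to_b = d.min(axis=1).mean()              # each A point to its nearest B point
    b_to_a = d.min(axis=0).mean()              # each B point to its nearest A point
    return 0.5 * (a_to_b + b_to_a)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    manual = np.c_[30 * np.cos(t), 30 * np.sin(t)]          # reference contour
    auto = np.c_[31 * np.cos(t), 31 * np.sin(t)] + 0.2      # automatic contour
    print(round(mean_contour_distance(manual, auto), 2))    # roughly 1 pixel
```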


Subject(s)
Heart Ventricles/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Models, Statistical , Neural Networks, Computer , Biological Specimen Banks , Female , Humans , Imaging, Three-Dimensional , Male , Pattern Recognition, Automated , United Kingdom
13.
IEEE Trans Image Process ; 17(8): 1295-312, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18632340

ABSTRACT

In this paper, a level-set-based geometric regularization method is proposed that has the ability to estimate the local orientation of the evolving front and utilize it as shape-induced information for anisotropic propagation. We show that preserving anisotropic fronts can improve the elongation of the extracted structures, while minimizing the risk of leakage. To that end, for an evolving front, using its shape-offset level-set representation, a novel energy functional is defined. It is shown that constrained optimization of this functional results in an anisotropic expansion flow which is useful for vessel segmentation. We have validated our method using synthetic data sets, 2-D retinal angiogram images and magnetic resonance angiography volumetric data sets. A comparison has been made with two state-of-the-art vessel segmentation methods. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our regularization method is a promising tool to improve the efficiency of both techniques.


Subject(s)
Artificial Intelligence , Fluorescein Angiography/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Retinal Vessels/anatomy & histology , Retinoscopy/methods , Algorithms , Humans , Reproducibility of Results , Sensitivity and Specificity
14.
IEEE Trans Pattern Anal Mach Intell ; 40(4): 891-904, 2018 04.
Article in English | MEDLINE | ID: mdl-28475045

ABSTRACT

Inferring a probability density function (pdf) for shape from a population of point sets is a challenging problem. The lack of point-to-point correspondences and the non-linearity of the shape spaces undermine linear models. Methods based on manifolds model the shape variations naturally; however, statistics are often limited to a single geodesic mean and an arbitrary number of variation modes. We relax the manifold assumption and consider a piece-wise linear form, implementing a mixture of distinctive shape classes. The pdf for point sets is defined hierarchically, modeling a mixture of Probabilistic Principal Component Analyzers (PPCA) in higher dimensions. A Variational Bayesian approach is designed for unsupervised learning of the posteriors of point set labels, local variation modes, and point correspondences. By maximizing the model evidence, the numbers of clusters, modes of variation, and points on the mean models are automatically selected. Using the predictive distribution, we project a test shape to the spaces spanned by the local PPCAs. The method is applied to point sets from: i) synthetic data, ii) healthy versus pathological heart morphologies, and iii) lumbar vertebrae. The proposed method selects models with the expected numbers of clusters and variation modes, achieving lower generalization-specificity errors compared to the state-of-the-art.
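
The building block of such a hierarchical mixture is the single probabilistic principal component analyzer. The sketch below shows the classical closed-form maximum-likelihood PPCA fit (Tipping and Bishop, 1999) for one analyzer only; the variational Bayesian machinery over clusters, modes, and correspondences described in the abstract is not reproduced, and the toy data and function name are illustrative.

```python
import numpy as np

def fit_ppca(X, q):
    """Closed-form maximum-likelihood PPCA fit.

    X: data matrix of shape (N, D); q: number of latent dimensions.
    Returns the mean, the loading matrix W and the isotropic noise variance.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Noise variance: average of the discarded eigenvalues.
    sigma2 = eigvals[q:].mean() if q < X.shape[1] else 0.0
    # Loadings: leading eigenvectors scaled by sqrt(eigenvalue - noise).
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return mu, W, sigma2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(500, 2))                    # latent coordinates
    A = rng.normal(size=(2, 5))
    X = Z @ A + 0.1 * rng.normal(size=(500, 5))      # 5-D data near a 2-D plane
    mu, W, sigma2 = fit_ppca(X, q=2)
    print(W.shape, round(sigma2, 4))                 # (5, 2), close to 0.01
```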

15.
Med Image Anal ; 44: 156-176, 2018 02.
Article in English | MEDLINE | ID: mdl-29248842

ABSTRACT

A probabilistic group-wise similarity registration technique based on Student's t-mixture model (TMM) and a multi-resolution extension of the same (mrTMM) are proposed in this study, to robustly align shapes and establish valid correspondences, for the purpose of training statistical shape models (SSMs). Shape analysis across large cohorts requires automatic generation of the requisite training sets. Automated segmentation and landmarking of medical images often result in shapes with varying proportions of outliers and consequently require a robust method of alignment and correspondence estimation. Both TMM and mrTMM are validated by comparison with state-of-the-art registration algorithms based on Gaussian mixture models (GMMs), using both synthetic and clinical data. Four clinical data sets are used for validation: (a) 2D femoral heads (K = 1000 samples generated from DXA images of healthy subjects); (b) control-hippocampi (K = 50 samples generated from T1-weighted magnetic resonance (MR) images of healthy subjects); (c) MCI-hippocampi (K = 28 samples generated from MR images of patients diagnosed with mild cognitive impairment); and (d) heart shapes comprising the left and right ventricular endocardium and epicardium (K = 30 samples generated from short-axis MR images of 10 healthy subjects, 10 patients diagnosed with pulmonary hypertension and 10 diagnosed with hypertrophic cardiomyopathy). The proposed methods significantly outperformed the state-of-the-art in terms of registration accuracy in the experiments involving synthetic data, with mrTMM offering significant improvement over TMM. With the clinical data, both methods performed comparably to the state-of-the-art for the hippocampi and heart data sets, which contained few outliers. They outperformed the state-of-the-art for the femur data set, which contained large proportions of outliers, in terms of alignment accuracy and the quality of the SSMs trained, quantified in terms of generalization, compactness and specificity.


Subject(s)
Absorptiometry, Photon , Algorithms , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Cardiomyopathy, Hypertrophic/diagnostic imaging , Femur Head/diagnostic imaging , Heart/diagnostic imaging , Hippocampus/diagnostic imaging , Humans , Hypertension, Pulmonary/diagnostic imaging , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity
16.
Med Phys ; 2018 Jul 05.
Article in English | MEDLINE | ID: mdl-29974971

ABSTRACT

PURPOSE: This work proposes a new reliable computer-aided diagnostic (CAD) system for the diagnosis of breast cancer from breast ultrasound (BUS) images. The system can be useful to reduce the number of biopsies and pathological tests, which are invasive, costly, and often unnecessary. METHODS: The proposed CAD system classifies breast tumors into benign and malignant classes using morphological and textural features extracted from BUS images. The images are first preprocessed to enhance the edges and filter the speckles. The tumor is then segmented semiautomatically using the watershed method. Given the tumor contour, a set of 855 features, including 21 shape-based, 810 contour-based, and 24 textural features, is extracted from each tumor. Then, a Bayesian Automatic Relevance Detection (ARD) mechanism is used for computing the discrimination power of the different features and for dimensionality reduction. Finally, a logistic regression classifier computes the posterior probabilities of malignant vs. benign tumors using the reduced set of features. RESULTS: A dataset of 104 BUS images of breast tumors, including 72 benign and 32 malignant tumors, was used for evaluation with eightfold cross-validation. The algorithm outperformed six state-of-the-art methods for BUS image classification by large margins, achieving 97.12% accuracy, 93.75% sensitivity, and 98.61% specificity. CONCLUSIONS: Using ARD, the proposed CAD system selects five new features for breast tumor classification and outperforms the state-of-the-art, making it a reliable and complementary tool to help clinicians diagnose breast cancer.
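
The final classification stage described here is a logistic regression over a reduced feature set, evaluated by cross-validation. The sketch below is a generic scikit-learn stand-in for that stage only: the Bayesian ARD feature selection is not reproduced, an L1 penalty is used purely as an illustrative sparsity-inducing substitute, and the feature matrix is random (only the sample counts are taken from the abstract).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(104, 855))        # 104 tumours x 855 candidate features (random stand-in)
y = np.r_[np.zeros(72), np.ones(32)]   # 72 benign, 32 malignant, as reported

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=2000))
cv = StratifiedKFold(n_splits=8, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(scores.mean().round(3))          # meaningless on random features; shows the workflow only
```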

17.
Article in English | MEDLINE | ID: mdl-30475705

ABSTRACT

Cardiac magnetic resonance (CMR) images play a growing role in the diagnostic imaging of cardiovascular diseases. Full coverage of the left ventricle (LV), from base to apex, is a basic criterion for CMR image quality and necessary for accurate measurement of cardiac volume and functional assessment. Incomplete coverage of the LV is identified through visual inspection, which is time-consuming and usually done retrospectively in the assessment of large imaging cohorts. This paper proposes a novel automatic method for determining LV coverage from CMR images by using Fisher-discriminative three-dimensional (FD3D) convolutional neural networks (CNNs). In contrast to our previous method employing 2D CNNs, this approach utilizes spatial contextual information in CMR volumes, extracts more representative high-level features and enhances the discriminative capacity of the baseline 2D CNN learning framework, thus achieving superior detection accuracy. A two-stage framework is proposed to identify missing basal and apical slices in measurements of CMR volume. First, the FD3D CNN extracts high-level features from the CMR stacks. These image representations are then used to detect the missing basal and apical slices. Compared to the traditional 3D CNN strategy, the proposed FD3D CNN minimizes within-class scatter and maximizes between-class scatter. We performed extensive experiments to validate the proposed method on more than 5,000 independent volumetric CMR scans from the UK Biobank study, achieving low error rates for missing basal/apical slice detection (4.9%/4.6%). The proposed method can also be adopted for assessing LV coverage for other types of CMR image data.
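
The discriminative part of the FD3D objective amounts to shaping the learned feature space so that within-class scatter shrinks and between-class scatter grows. The sketch below is a toy PyTorch criterion in that spirit, applied to a batch of embeddings; the exact formulation, weighting, and margin used by the authors are assumptions not taken from the paper.

```python
import torch

def fisher_discriminative_loss(features, labels, margin=1.0):
    """Toy Fisher-style criterion: small within-class scatter, large between-class scatter.

    features: (N, D) embeddings; labels: (N,) integer class ids.
    """
    classes = labels.unique()
    centroids = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    # Within-class scatter: mean squared distance to the class centroid.
    within = torch.stack([
        ((features[labels == c] - centroids[i]) ** 2).sum(dim=1).mean()
        for i, c in enumerate(classes)]).mean()
    # Between-class scatter: mean squared distance between class centroids.
    diffs = centroids.unsqueeze(0) - centroids.unsqueeze(1)
    between = (diffs ** 2).sum(dim=-1).sum() / (len(classes) * (len(classes) - 1))
    return within + torch.relu(margin - between)

if __name__ == "__main__":
    feats = torch.randn(16, 64, requires_grad=True)
    labels = torch.tensor([0] * 8 + [1] * 8)   # e.g. complete vs missing-slice stacks
    loss = fisher_discriminative_loss(feats, labels)
    loss.backward()
    print(float(loss))
```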

18.
IEEE J Biomed Health Inform ; 22(2): 503-515, 2018 03.
Article in English | MEDLINE | ID: mdl-28103561

ABSTRACT

Statistical shape modeling is a powerful tool for visualizing and quantifying geometric and functional patterns of the heart. After myocardial infarction (MI), the left ventricle typically remodels in response to physiological challenges. Several methods have been proposed in the literature to describe statistical shape changes. Which method best characterizes left ventricular remodeling after MI is an open research question. A better descriptor of remodeling is expected to provide a more accurate evaluation of disease status in MI patients. We therefore designed a challenge to test shape characterization in MI given a set of three-dimensional left ventricular surface points. The training set comprised 100 MI patients and 100 asymptomatic volunteers (AV). The challenge was initiated in 2015 at the Statistical Atlases and Computational Models of the Heart workshop, in conjunction with the MICCAI conference. The training set with labels was provided to participants, who were asked to submit the likelihood of MI for a different (validation) set of 200 cases (100 AV and 100 MI). Sensitivity, specificity, accuracy and area under the receiver operating characteristic curve were used as the outcome measures. The goals of this challenge were to (1) establish a common dataset for evaluating statistical shape modeling algorithms in MI, and (2) test whether statistical shape modeling provides additional information characterizing MI patients over standard clinical measures. Eleven groups with a wide variety of classification and feature extraction approaches participated in this challenge. All methods achieved excellent classification results, with accuracies ranging from 0.83 to 0.98. The areas under the receiver operating characteristic curves were all above 0.90. Four methods showed significantly higher performance than standard clinical measures. The dataset and software for evaluation are available from the Cardiac Atlas Project website.

19.
Comput Methods Programs Biomed ; 151: 139-149, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28946995

ABSTRACT

BACKGROUND AND OBJECTIVE: Retinal vascular tree extraction plays an important role in computer-aided diagnosis and surgical operations. Junction point detection and classification provide useful information about the structure of the vascular network, facilitating objective analysis of retinal diseases. METHODS: In this study, we present a new machine learning algorithm for the joint classification and tracking of retinal blood vessels. Our method is based on a hierarchical probabilistic framework, where the local intensity cross sections are classified as either junction or vessel points. Gaussian basis functions are used for intensity interpolation, and the corresponding linear coefficients are assumed to be samples from class-specific Gamma distributions. Hence, a directed Probabilistic Graphical Model (PGM) is proposed, and the hyperparameters are estimated using a Maximum Likelihood (ML) solution based on the Laplace approximation. RESULTS: The performance of the proposed method is evaluated using precision and recall rates on the REVIEW database. Our experiments show that the proposed approach achieves promising results in bifurcation point detection and classification, reaching 88.67% precision and 88.67% recall. CONCLUSIONS: This technique results in a classifier with high precision and recall when compared with Xu's method.
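
The abstract models local intensity cross sections as linear combinations of Gaussian basis functions. The sketch below fits such coefficients by ordinary least squares as a minimal illustration of that representation; the class-specific Gamma priors and the Laplace approximation of the full model are not reproduced, and the profile, basis centres and widths are invented numbers.

```python
import numpy as np

def gaussian_design_matrix(x, centers, width):
    """Each column is one Gaussian basis function evaluated at positions x."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

# Synthetic dark-vessel intensity cross section and its basis-function fit.
x = np.linspace(-10, 10, 41)                          # positions across a vessel
profile = 1.0 - 0.8 * np.exp(-0.5 * (x / 2.5) ** 2)   # bright background, dark centre
Phi = gaussian_design_matrix(x, centers=np.linspace(-10, 10, 9), width=2.5)
coeffs, *_ = np.linalg.lstsq(Phi, profile, rcond=None)  # linear coefficients
reconstruction = Phi @ coeffs
print(np.abs(reconstruction - profile).max().round(4))  # maximum residual of the fit
```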


Subject(s)
Algorithms , Diagnosis, Computer-Assisted , Machine Learning , Retinal Vessels/diagnostic imaging , Humans , Likelihood Functions , Models, Statistical , Normal Distribution
20.
Med Phys ; 44(7): 3615-3629, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28409834

ABSTRACT

PURPOSE: The aim of this study was to develop a novel technique for lung nodule detection using an optimized feature set. This feature set has been achieved after rigorous experimentation, which has helped in reducing the false positives significantly. METHOD: The proposed method starts with preprocessing, removing any noise present in the input images, followed by lung segmentation using optimal thresholding. Then the image is enhanced using multiscale dot enhancement filtering prior to nodule detection and feature extraction. Finally, classification of lung nodules is achieved using a Support Vector Machine (SVM) classifier. The feature set consists of intensity, shape (2D and 3D) and texture features, which have been selected to optimize the sensitivity and reduce false positives. In addition to SVM, other supervised classifiers, such as K-Nearest-Neighbor (KNN), Decision Tree and Linear Discriminant Analysis (LDA), have also been used for performance comparison. The extracted features have also been compared class-wise to determine the most relevant features for lung nodule detection. The proposed system has been evaluated using 850 scans from the Lung Image Database Consortium (LIDC) dataset and a k-fold cross-validation scheme. RESULTS: The overall sensitivity has been improved compared to previous methods, and false positives per scan have been reduced significantly. The achieved sensitivities at the detection and classification stages are 94.20% and 98.15%, respectively, with only 2.19 false positives per scan. CONCLUSIONS: It is very difficult to achieve high performance using only a single feature class; therefore, a hybrid approach to feature selection remains a better choice. Choosing the right set of features can improve the overall accuracy of the system by improving sensitivity and reducing false positives.
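
The classification stage here is an SVM separating nodule candidates from false positives using the hybrid feature vector, assessed with k-fold cross-validation. The sketch below is a generic scikit-learn stand-in for that stage only: feature extraction from the LIDC scans is not shown, and the feature matrix, its dimensionality and the labels are random placeholders used just to show the workflow.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))                 # candidate x feature matrix (placeholder)
y = (rng.random(600) < 0.3).astype(int)        # 1 = nodule, 0 = false positive (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
sens = cross_val_score(clf, X, y, cv=cv, scoring="recall")   # sensitivity per fold
print("mean sensitivity:", sens.mean().round(3))             # meaningless on random data
```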


Subject(s)
Algorithms , Diagnosis, Computer-Assisted , Solitary Pulmonary Nodule/diagnostic imaging , Support Vector Machine , Tomography, X-Ray Computed , Humans , Lung Neoplasms , Radiographic Image Interpretation, Computer-Assisted , Sensitivity and Specificity