Results 1 - 20 of 31
1.
J Magn Reson Imaging ; 41(5): 1242-50, 2015 May.
Article in English | MEDLINE | ID: mdl-24862942

ABSTRACT

PURPOSE: To develop and validate an automated segmentation method that extracts the interventricular septum (IS) from myocardial black-blood images for T2* measurement in thalassemia patients. MATERIALS AND METHODS: A total of 144 thalassemia major patients (age range, 11-51 years; 73 males) were scanned with a black-blood multi-echo gradient-echo sequence on a 1.5 Tesla Siemens Sonata system (flip angle 20°, sampling bandwidth 810 Hz/pixel, voxel size 1.56 × 1.56 × 10 mm³, and variable fields of view of (20-30) × 40 cm² depending on patient size). An improved Chan-Vese model with automated initialization by the circular Hough transform was implemented to segment the endocardial and epicardial margins of the left ventricle (LV). The IS was then extracted by analyzing the anatomical relation between the LV and the blood pool of the right ventricle, identified by intensity thresholding. The proposed automated IS segmentation (AISS) method was compared with the conventional manual method using Bland-Altman analysis and the coefficient of variation (CoV). RESULTS: The T2* measurements obtained with the AISS method were in good agreement with those measured manually by experienced observers, with a mean difference of 1.71% and a CoV of 4.15% (P < 0.001). CONCLUSION: Black-blood myocardial T2* measurement can be fully automated with the proposed AISS method.
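For readers who want to experiment with this kind of pipeline, the following is a minimal sketch (not the authors' implementation) of seeding a Chan-Vese segmentation of the LV with a circular Hough transform using scikit-image; the candidate radii, edge-detection sigma, and regularization weight are illustrative assumptions.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.segmentation import chan_vese

def segment_lv(slice2d):
    """Rough LV segmentation of one short-axis slice (illustrative sketch)."""
    edges = canny(slice2d, sigma=2.0)                      # edge map of the slice
    radii = np.arange(8, 30)                               # candidate LV radii in pixels (assumed)
    accums = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(accums, radii, total_num_peaks=1)
    yy, xx = np.mgrid[:slice2d.shape[0], :slice2d.shape[1]]
    # Initial level set: a disk centred on the strongest detected circle.
    init = ((xx - cx[0]) ** 2 + (yy - cy[0]) ** 2 <= r[0] ** 2).astype(float)
    # Chan-Vese evolution refines the endocardial boundary from the Hough seed.
    return chan_vese(slice2d, mu=0.1, init_level_set=init)
```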


Subject(s)
Heart Septum/pathology , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Thalassemia/pathology , Ventricular Septum/physiology , Adolescent , Adult , Algorithms , Child , Female , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity , Young Adult
2.
J Digit Imaging ; 26(3): 578-93, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23149587

ABSTRACT

We present a novel method for the automatic segmentation of the vertebral bodies from 2D sagittal magnetic resonance (MR) images of the spine. First, a new affinity matrix is constructed by incorporating neighboring information, so that local intensity is used to describe the image and suppress noise effectively. Second, a Gaussian kernel function weights the chi-square distance based on the neighboring information, introducing the vital spatial structure of the image to improve segmentation accuracy. Third, an adaptive local scaling parameter is used to facilitate the segmentation and avoid manual tuning of the controlling parameter. The encouraging results on spinal MR images demonstrate the advantage of the proposed method over other methods in terms of both efficiency and robustness.
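As a concrete illustration of this construction, here is a hedged sketch of an affinity matrix built from a chi-square distance between local-neighborhood feature vectors, weighted by a Gaussian kernel with self-tuning local scales and fed to spectral clustering; the descriptors, the k used for local scaling, and the number of clusters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import spectral_clustering

def chi2_dist(X):
    # X: (n_samples, n_features) nonnegative local-intensity descriptors
    num = (X[:, None, :] - X[None, :, :]) ** 2
    den = X[:, None, :] + X[None, :, :] + 1e-10
    return 0.5 * (num / den).sum(axis=2)

def local_scaled_affinity(X, k=7):
    D = chi2_dist(X)
    # Adaptive local scale: distance to each sample's k-th nearest neighbor.
    sigma = np.sort(D, axis=1)[:, k]
    W = np.exp(-D / (np.outer(sigma, sigma) + 1e-10))
    np.fill_diagonal(W, 0.0)
    return W

# Example use (X is an (n, d) array of patch descriptors):
# labels = spectral_clustering(local_scaled_affinity(X), n_clusters=2)
```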


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Spinal Diseases/diagnosis , Spine/anatomy & histology , Chi-Square Distribution , Humans , Sensitivity and Specificity
3.
Med Phys ; 39(11): 6929-42, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23127086

ABSTRACT

PURPOSE: A content-based image retrieval (CBIR) method for T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors is presented as a diagnostic aid. The method is thoroughly evaluated on a large image dataset. METHODS: Using the tumor region as a query, the authors' CBIR system attempts to retrieve tumors of the same pathological category. Aside from commonly used intensity, texture, and shape features, the authors use a margin information descriptor (MID), which is capable of describing the characteristics of the tissue surrounding a tumor, to represent image content. In addition, the authors designed a distance metric learning algorithm, called Maximum mean average Precision Projection (MPP), that maximizes a smooth approximation of the mean average precision (mAP) to optimize retrieval performance. RESULTS: The effectiveness of the MID and MPP algorithms was evaluated using a brain CE-MRI dataset consisting of 3108 2D scans acquired from 235 patients with three categories of brain tumors (meningioma, glioma, and pituitary tumor). By combining MID with the other features, the mAP of retrieval increased by more than 6% with the learned distance metrics. The distance metric learned by MPP significantly outperformed the other two existing distance metric learning methods in terms of mAP. The CBIR system using the proposed strategies achieved a mAP of 87.3% and a precision of 89.3% when the top 10 images were returned by the system. Compared with the scale-invariant feature transform, the MID, which uses the intensity profile as its descriptor, achieves better retrieval performance. CONCLUSIONS: Incorporating tumor margin information represented by the MID with the distance metric learned by the MPP algorithm can substantially improve retrieval performance for brain tumors in CE-MRI.
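The mAP criterion that MPP is trained to maximize can be computed as in the following generic evaluation sketch; the inputs are illustrative and this is not the authors' code.

```python
import numpy as np

def average_precision(ranked_labels, query_label):
    """AP of one ranked retrieval list against the query's category."""
    rel = (np.asarray(ranked_labels) == query_label).astype(float)
    if rel.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

def mean_average_precision(ranked_lists, query_labels):
    return float(np.mean([average_precision(r, q)
                          for r, q in zip(ranked_lists, query_labels)]))

# Example: two queries, each returning a ranked list of tumor categories.
print(mean_average_precision([[1, 1, 0], [2, 0, 2]], [1, 2]))
```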


Subject(s)
Brain Neoplasms/diagnosis , Brain Neoplasms/pathology , Contrast Media , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Humans
4.
J Digit Imaging ; 25(6): 708-19, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22692772

ABSTRACT

This paper is aimed at developing and evaluating a content-based retrieval method for contrast-enhanced liver computed tomographic (CT) images using bag-of-visual-words (BoW) representations of single and multiple phases. The BoW histograms are extracted using the raw intensity as local patch descriptor for each enhance phase by densely sampling the image patches within the liver lesion regions. The distance metric learning algorithms are employed to obtain the semantic similarity on the Hellinger kernel feature map of the BoW histograms. The different visual vocabularies for BoW and learned distance metrics are evaluated in a contrast-enhanced CT image dataset comprised of 189 patients with three types of focal liver lesions, including 87 hepatomas, 62 cysts, and 60 hemangiomas. For each single enhance phase, the mean of average precision (mAP) of BoW representations for retrieval can reach above 90 % which is significantly higher than that of intensity histogram and Gabor filters. Furthermore, the combined BoW representations of the three enhance phases can improve mAP to 94.5 %. These preliminary results demonstrate that the BoW representation is effective and feasible for retrieval of liver lesions in contrast-enhanced CT images.
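A hedged sketch of the BoW-plus-Hellinger-map representation described above, using k-means for the visual vocabulary; the vocabulary size and data layout are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def bow_hellinger(patches_per_lesion, n_words=128, random_state=0):
    """Build a visual vocabulary and return Hellinger-mapped BoW features."""
    all_patches = np.vstack(patches_per_lesion)            # (N, patch_dim) raw intensities
    vocab = KMeans(n_clusters=n_words, random_state=random_state).fit(all_patches)
    feats = []
    for patches in patches_per_lesion:
        words = vocab.predict(patches)
        hist = np.bincount(words, minlength=n_words).astype(float)
        hist /= max(hist.sum(), 1.0)                       # L1 normalization
        feats.append(np.sqrt(hist))                        # explicit Hellinger kernel map
    return np.vstack(feats), vocab
```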


Subject(s)
Liver Diseases/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, Spiral Computed , Algorithms , Artificial Intelligence , Contrast Media , Humans , Medical Informatics/methods , Pattern Recognition, Automated/methods , Radiology Information Systems
5.
Curr Med Imaging ; 18(14): 1486-1502, 2022.
Article in English | MEDLINE | ID: mdl-35578861

ABSTRACT

BACKGROUND: Ovarian tumors are common female genital tumors, and the malignant ones have a poor prognosis: about 70% of patients with ovarian cancer survive less than 5 years, whereas benign ovarian tumors have a much better outcome, so early diagnosis of ovarian cancer is important for treatment and prognosis. OBJECTIVES: Our aim is to establish a classification model for ovarian tumors. METHODS: We extracted radiomics and deep learning features from patients' CT images. The four-step feature selection algorithm proposed in this paper was used to obtain the optimal combination of features; a classification model was then developed by combining the selected features with a support vector machine. The receiver operating characteristic curve and the area under the curve (AUC) were used to evaluate the performance of the classification model in both the training and test cohorts. RESULTS: The classification model combining radiomics features with deep learning features demonstrated better classification performance than the radiomics-only model in the training cohort (AUC 0.9289 vs. 0.8804, P < 0.0001; accuracy 0.8970 vs. 0.7993, P < 0.0001) and significantly improved performance in the test cohort (AUC 0.9089 vs. 0.8446, P = 0.001; accuracy 0.8296 vs. 0.7259, P < 0.0001). CONCLUSION: The experiments showed that deep learning features play an active role in the construction of the classification model, and the proposed model achieved excellent classification performance, so it can potentially become a new auxiliary diagnostic tool.
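A minimal sketch of the fusion-and-classification step: radiomics and deep feature vectors are concatenated, a simple univariate selector stands in for the paper's four-step feature selection (which is not detailed here), and an RBF SVM is scored by cross-validated ROC AUC. Function and parameter names are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_model(radiomics, deep_feats, labels, k=30):
    X = np.hstack([radiomics, deep_feats])                 # feature-level fusion
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=min(k, X.shape[1])),
                        SVC(kernel="rbf", probability=True))
    auc = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean()
    return clf.fit(X, labels), auc
```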


Subject(s)
Deep Learning , Ovarian Neoplasms , Humans , Female , ROC Curve , Support Vector Machine , Algorithms , Ovarian Neoplasms/diagnostic imaging
6.
Article in English | MEDLINE | ID: mdl-32086210

ABSTRACT

Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue by running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and ignores the correlation among the models. To handle these flaws of the MC approach, we propose in this paper a lightweight deep model, the One-pass Multi-task Network (OM-Net), which solves class imbalance better than MC does while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters that learn joint features and task-specific parameters that learn discriminative features. Second, to optimize OM-Net more effectively, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on the category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and the BraTS 2017 online validation set. Using the proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code will be made publicly available at https://github.com/chenhong-zhou/OM-Net.
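The published OM-Net code is linked above; as a rough, simplified illustration of the cross-task guidance idea, the PyTorch module below recalibrates channel-wise feature responses from category-wise statistics of a previous task's prediction map. It is a stand-in sketch, not the authors' CGA implementation.

```python
import torch
import torch.nn as nn

class GuidedChannelAttention(nn.Module):
    """Channel recalibration driven by a previous task's softmax output (sketch)."""
    def __init__(self, channels, num_classes, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_classes, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, feats, prev_prob):
        # feats: (B, C, D, H, W); prev_prob: (B, num_classes, D, H, W) softmax map
        stats = prev_prob.mean(dim=(2, 3, 4))              # category-specific global statistics
        weights = self.fc(stats).view(feats.size(0), -1, 1, 1, 1)
        return feats * weights                             # recalibrated feature responses
```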

7.
Nan Fang Yi Ke Da Xue Xue Bao ; 39(9): 1023-1029, 2019 Sep 30.
Article in Chinese | MEDLINE | ID: mdl-31640953

ABSTRACT

OBJECTIVE: To compare the effectiveness and sensitivity of entropy and regional homogeneity (ReHo) for identifying irritable bowel syndrome (IBS) based on functional magnetic resonance imaging (fMRI). METHODS: Voxel-based approximate entropy (ApEn) was calculated from resting-state fMRI of 54 patients with IBS and 54 healthy control subjects. Feature selection was performed using an independent-samples t-test, and a support vector machine was then used to classify the two groups. The classification performance obtained with ApEn was compared with that obtained with ReHo. RESULTS: Significant differences between the two groups were found in the left triangular part of the inferior frontal gyrus, the right angular gyrus of the inferior parietal lobule, the left inferior temporal gyrus, the left middle temporal gyrus, the left lingual gyrus, the bilateral middle occipital gyrus and the bilateral superior occipital gyrus for ReHo (P < 0.05), and in the bilateral postcentral gyrus, right precentral gyrus, right inferior temporal gyrus, bilateral middle temporal gyrus and left superior occipital gyrus for ApEn (P < 0.05). ApEn consistently showed better performance than ReHo regardless of variations in the number of features. The classification accuracy, specificity and sensitivity of ApEn were 93.5185%, 90.7407% and 96.2963%, respectively, compared with 86.1111%, 85.1852% and 87.037% for ReHo. CONCLUSIONS: Entropy analysis based on fMRI can be more sensitive and effective than ReHo for identification of IBS.
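For reference, a straightforward implementation of the voxel-wise approximate entropy feature used above (standard Pincus-style ApEn); the embedding dimension m and tolerance factor are the usual illustrative defaults, not values taken from the paper.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """ApEn of a 1D time series (e.g., one voxel's resting-state BOLD signal)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])     # embedded m-length vectors
        # Chebyshev distance between all pairs of embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).sum(axis=1) / n                    # self-match included, as usual
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# Example: ApEn of a 180-point simulated BOLD series.
print(approximate_entropy(np.random.randn(180)))
```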


Subject(s)
Irritable Bowel Syndrome/diagnostic imaging , Magnetic Resonance Imaging , Brain/diagnostic imaging , Brain Mapping , Case-Control Studies , Entropy , Humans
8.
Comput Biol Med ; 114: 103432, 2019 11.
Article in English | MEDLINE | ID: mdl-31521897

ABSTRACT

BACKGROUND: Parotid ducts (PDs) play an important role in the diagnosis and treatment of parotid lesions, and segmentation of PDs from cone-beam computed tomography (CBCT) images has a significant impact on the pathological analysis of the parotid gland. Although level set methods (LSMs) have achieved considerable success in medical image segmentation, it remains challenging for existing LSMs to precisely and adaptively segment PDs from images affected by noise, intensity inhomogeneity, and vague boundaries. In this paper, we propose a novel Self-adaptive Weighted level set method via Local intensity Difference (SWLD) to comprehensively address these issues. METHOD: First, a new adaptive weighted operator based on the local intensity variance difference is proposed to overcome the parameter sensitivity of previous LSMs and thereby achieve automatic segmentation. Second, we introduce the local intensity mean difference into the energy function to improve the efficiency of curve evolution. Third, we eliminate the effects of intensity inhomogeneity, noise, and boundary blur in the parotid image through a local similarity factor with two different neighborhood sizes. RESULTS: Using the same dataset, segmentation of the PDs was performed with the proposed SWLD algorithm and with existing LSM algorithms. The mean Dice score for the proposed algorithm is 91.3%, and the corresponding mean Hausdorff distance (HD) is 1.746. CONCLUSION: Experimental results demonstrate that the proposed algorithm is superior to many existing level set segmentation algorithms and can accurately and automatically segment the PDs even in regions with complex gradient boundaries.


Subject(s)
Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Parotid Gland/diagnostic imaging , Algorithms , Humans
9.
IEEE Trans Med Imaging ; 38(10): 2352-2363, 2019 10.
Article in English | MEDLINE | ID: mdl-30908198

ABSTRACT

Conducting an accurate motion correction of liver dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging remains challenging because of intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, we propose a correlation-weighted sparse representation framework to separate the contrast agent from the original liver DCE-MR images. This framework allows the robust registration of motion components over time without interference from intensity variations. Existing sparse coding techniques recover a 3D image containing only contrast agents (the contrast enhancement component) from a manually labeled dictionary whose columns have the same size as the original 3D volume (3D-t mode). The high dimension of the recovery target (a 3D volume) and the indistinguishability between the unenhanced and enhanced images make accurate coding difficult. In this paper, we predefine an ideal time-intensity curve containing only contrast agents (the contrast agent curve) and recover it from the transposed dictionary (t-3D mode), whose columns are the original time-intensity curves. The low dimension of the target (a 1D curve) and the significant intergroup difference between contrast agent curves and non-contrast-agent curves make it possible to estimate a series of pure contrast agent curves. A correlation-weighted constraint is introduced to select a coding subset with more contrast agent curves, leading to an efficient and accurate sparse recovery process. The contrast enhancement component can then be estimated from the solved sparse coefficient map and the ideal curve and subtracted from the original DCE-MRI. Finally, we register the de-enhanced images and apply the obtained deformation fields to the original DCE-MRI to achieve motion correction. We conduct experiments on both simulated and real liver DCE-MR data. Compared with other state-of-the-art DCE-MRI registration methods, the experimental results show that our method achieves better registration performance at a lower computational cost.
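A hedged sketch of the correlation-weighted sparse coding idea in t-3D mode: the dictionary columns are training time-intensity curves, only the atoms most correlated with the observed curve are kept, and a sparse solver reconstructs the enhancement component. The Lasso solver, top-k selection, and parameter values are simplifying assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_enhancement(curve, dictionary, top_k=50, alpha=0.01):
    # curve: (T,) observed time-intensity curve; dictionary: (T, N) training curves
    corr = np.array([np.corrcoef(curve, dictionary[:, j])[0, 1]
                     for j in range(dictionary.shape[1])])
    keep = np.argsort(-np.abs(corr))[:top_k]               # most correlated atoms only
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False)
    coder.fit(dictionary[:, keep], curve)
    return dictionary[:, keep] @ coder.coef_               # reconstructed enhancement curve
```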


Subject(s)
Imaging, Three-Dimensional/methods , Liver/diagnostic imaging , Magnetic Resonance Imaging/methods , Algorithms , Contrast Media , Humans
10.
IEEE Trans Med Imaging ; 38(10): 2271-2280, 2019 10.
Article in English | MEDLINE | ID: mdl-30908202

ABSTRACT

Hippocampus segmentation plays a significant role in the diagnosis of diseases such as Alzheimer's disease and epilepsy. The patch-based multi-atlas segmentation (PBMAS) approach is a popular method for hippocampus segmentation and has achieved promising results. However, it incurs a high computational cost due to registration, and its segmentation accuracy depends on the registration accuracy. In this paper, we propose a novel method based on iterative local linear mapping (ILLM) with a representative, local structure-preserving feature embedding to achieve accurate and robust hippocampus segmentation without registration. In the proposed approach, a semi-supervised deep autoencoder (SSDA) exploits an unsupervised deep autoencoder and local structure-preserving manifold regularization to nonlinearly transform the extracted magnetic resonance (MR) patch into an embedded feature manifold whose adjacency relations resemble those of the signed distance map (SDM) patch manifold. Local linear mapping is used to preliminarily predict the SDM patch corresponding to the MR patch, and threshold segmentation then generates a preliminary segmentation. The ILLM refines the segmentation result iteratively by enforcing the local constraints of the embedded feature manifold and the SDM patch manifold through a space-constrained dictionary update. Thus, a refined segmentation is obtained without registration. Experiments on 135 subjects from the ADNI dataset show that the proposed approach is superior to state-of-the-art PBMAS and classification-based approaches, with mean Dice similarity coefficients of 0.8852 ± 0.0203 and 0.8783 ± 0.0251 for bilateral hippocampus segmentation on the 1.5T and 3.0T datasets, respectively.
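The signed distance map (SDM) target that the embedding is trained to predict can be computed from a binary mask with standard distance transforms, as in the sketch below; the sign convention (negative inside, positive outside) is an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """SDM of a binary hippocampus mask: positive outside, negative inside."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)                # distance to the object
    inside = distance_transform_edt(mask)                  # distance to the background
    return outside - inside

# Thresholding a predicted SDM at zero recovers a binary segmentation:
# segmentation = predicted_sdm <= 0
```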


Subject(s)
Deep Learning , Hippocampus/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Neuroimaging/methods , Algorithms , Alzheimer Disease/diagnostic imaging , Humans , Magnetic Resonance Imaging
11.
Appl Bionics Biomech ; 2019: 9806464, 2019.
Article in English | MEDLINE | ID: mdl-31341514

ABSTRACT

BACKGROUND AND OBJECTIVE: When radiologists diagnose lung diseases on chest radiographs, they may miss lung nodules that overlap with ribs or clavicles. Dual-energy subtraction (DES) imaging performs well because it can produce soft-tissue images in which the bone components of the chest radiograph are almost completely suppressed while the visibility of nodules and lung vessels is maintained. However, most routinely available X-ray machines do not have a DES function. We therefore present a data-driven decomposition model that performs a virtual DES function, decomposing a single conventional chest radiograph into soft-tissue and bone images. METHODS: For a given chest radiograph, similar chest radiographs with corresponding DES soft-tissue and bone images are selected from the training database as exemplars for decomposition. The correspondence fields between the observed chest radiograph and the exemplars are solved by a hierarchically dense matching algorithm. Then, nonparametric priors for the soft-tissue and bone components are constructed by sampling image patches from the selected soft-tissue and bone images according to the correspondence fields. Finally, these nonparametric priors are integrated into our decomposition model, whose energy function is efficiently optimized by an iteratively reweighted least-squares (IRLS) scheme. RESULTS: The decomposition method is evaluated on a dataset of posterior-anterior DES radiographs (503 cases), as well as on the JSRT dataset. The proposed method can produce soft-tissue and bone images similar to those produced by an actual DES system. CONCLUSIONS: The proposed method can markedly reduce the visibility of bony structures in chest radiographs and shows potential to enhance diagnosis.
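The IRLS optimizer named above is generic; the sketch below shows it on a plain robust linear model A x ≈ b to illustrate the reweighting loop, not the paper's decomposition energy.

```python
import numpy as np

def irls(A, b, p=1.0, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares for an L_p data-fitting objective."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]               # ordinary LS initialization
    for _ in range(n_iter):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)                   # residual-dependent weights
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)      # weighted normal equations
    return x
```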

12.
Nan Fang Yi Ke Da Xue Xue Bao ; 38(1): 55-61, 2018 Jan 30.
Article in Chinese | MEDLINE | ID: mdl-33177032

ABSTRACT

OBJECTIVE: To establish a model for discriminating between benign and malignant gastrointestinal stromal tumors (GIST) by analyzing texture features extracted from computed tomography (CT) images. METHODS: CT datasets were collected from 110 patients with GIST (80 in the training cohort and 30 in the validation cohort). Feature set reduction was performed with the 0.632+ bootstrap method on the initial feature set, followed by stepwise forward feature selection in the feature subset, and the classification model was generated by logistic regression. RESULTS: The classification model based on six texture features successfully discriminated between benign and malignant GIST in both the training and validation cohorts, with AUCs of 0.93 and 0.91, sensitivities of 0.88 and 0.87, specificities of 0.85 and 0.86, and accuracies of 0.87 and 0.86 in the two cohorts, respectively. CONCLUSIONS: This classification model established by radiomics analysis is capable of discriminating between benign and malignant GIST and can assist in the preoperative diagnosis of GIST.
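A hedged sketch of the model-building step: stepwise forward selection of texture features wrapped around logistic regression and scored by cross-validated AUC. The cross-validation stands in for the 0.632+ bootstrap used in the paper, and the cap of six features mirrors the final model size.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_features=6):
    """Greedy forward selection of feature columns by cross-validated AUC."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = [(cross_val_score(LogisticRegression(max_iter=1000),
                                   X[:, selected + [j]], y, cv=5,
                                   scoring="roc_auc").mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)                   # feature giving the best AUC
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```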

13.
Nan Fang Yi Ke Da Xue Xue Bao ; 38(12): 1485-1491, 2018 Dec 30.
Article in Chinese | MEDLINE | ID: mdl-30613018

ABSTRACT

OBJECTIVE: To establish a fast adaptive active contour model based on local gray-level differences for parotid duct image segmentation. METHODS: On the basis of the LBF model, we added the mean difference of the local gray levels inside and outside the contour as an energy term driving curve evolution, and the local gray-level variance difference was used in place of λ1 and λ2 as the control term for the energy parameter values. Two local similarity factors with different neighborhood sizes were introduced to correct for gray-level inhomogeneity and boundary blur and thereby improve segmentation efficiency. RESULTS: During image segmentation, the algorithm adaptively adjusted the evolution direction, velocity and energy weights of the internal and external regions according to the differences in gray-level mean and variance between these regions. The algorithm could also detect the actual boundary in regions with complex gradient boundaries, enabling the evolution curve to approach the target boundary quickly and accurately. CONCLUSIONS: The proposed algorithm is superior to existing segmentation algorithms and allows fast and accurate segmentation of the parotid duct with well-preserved image details.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Parotid Gland/diagnostic imaging , Salivary Ducts/diagnostic imaging , Color
14.
IEEE J Biomed Health Inform ; 22(3): 842-851, 2018 05.
Article in English | MEDLINE | ID: mdl-28368835

ABSTRACT

Lung field segmentation in chest radiographs (CXRs) is an essential preprocessing step in automatically analyzing such images. We present a method for lung field segmentation that is built on a high-quality boundary map detected by an efficient modern boundary detector, namely a structured edge detector (SED). A SED is trained beforehand to detect lung boundaries in CXRs with manually outlined lung fields. Then, an ultrametric contour map (UCM) is transformed from the masked and marked boundary map. Finally, the contours with the highest confidence level in the UCM are extracted as lung contours. Our method is evaluated using the public Japanese Society of Radiological Technology database of scanned films. The average Jaccard index of our method is 95.2%, which is comparable with those of other state-of-the-art methods (95.4%). The computation time of our method is less than 0.1 s for a CXR when executed on an ordinary laptop. Our method is also validated on CXRs acquired with different digital radiography units. The results demonstrate the generalization of the trained SED model and the usefulness of our method.
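For reference, the Jaccard (intersection-over-union) index used above to compare predicted lung-field masks with manual outlines:

```python
import numpy as np

def jaccard(pred_mask, ref_mask):
    """Jaccard index between a predicted and a reference binary lung mask."""
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0
```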


Subject(s)
Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Radiography, Thoracic/methods , Algorithms , Databases, Factual , Humans
15.
IEEE Trans Med Imaging ; 37(4): 977-987, 2018 04.
Article in English | MEDLINE | ID: mdl-29610076

ABSTRACT

Attenuation correction for positron-emission tomography (PET)/magnetic resonance (MR) hybrid imaging systems and dose planning for MR-based radiation therapy remain challenging due to insufficient high-energy photon attenuation information. We present a novel approach that uses learned nonlinear local descriptors and feature matching to predict pseudo computed tomography (pCT) images from T1-weighted and T2-weighted magnetic resonance imaging (MRI) data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset. The pCT patches are then estimated through k-nearest-neighbor regression. The proposed method for pCT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects. Our method generates pCT images with a mean absolute error (MAE) of 75.25 ± 18.05 Hounsfield units, a peak signal-to-noise ratio of 30.87 ± 1.15 dB, a relative MAE of 1.56 ± 0.5% in PET attenuation correction, and a dose relative structure volume difference of 0.055 ± 0.107%, as compared with true CT. The experimental results also show that our method outperforms four state-of-the-art methods.
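A minimal sketch of the final regression step described above: pseudo-CT patches are predicted from MR patch descriptors by k-nearest-neighbor regression. The descriptor extraction, explicit feature map, and spatial search-range constraint are omitted, and k is an illustrative assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def predict_pct_patches(train_mr_desc, train_ct_patches, test_mr_desc, k=5):
    # train_mr_desc: (N, d) MR descriptors; train_ct_patches: (N, p) CT patch vectors
    knn = KNeighborsRegressor(n_neighbors=k, weights="distance")
    knn.fit(train_mr_desc, train_ct_patches)
    return knn.predict(test_mr_desc)                       # (M, p) predicted CT patches
```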


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Algorithms , Brain/diagnostic imaging , Humans , Models, Statistical , Nonlinear Dynamics
16.
Sci Rep ; 7(1): 4274, 2017 06 27.
Article in English | MEDLINE | ID: mdl-28655897

ABSTRACT

In this paper, we present an original multiple-atlas level set framework (MALSF) for automatic, accurate and robust thalamus segmentation in magnetic resonance images (MRI). The contributions of the MALSF method are twofold. First, the main technical contribution is a novel label fusion strategy within the level set framework. Label fusion is achieved by seeking an optimal level set function that minimizes an energy functional with three terms: a label fusion term, an image-based term, and a regularization term. This strategy integrates shape priors, image information and the regularity of the thalamus. Second, we use labels propagated by multiple registration methods with different parameters to take full advantage of the complementary information of different registration methods. Since different registration methods and different atlases yield complementary information, multiple registrations and multiple atlases can be incorporated into the level set framework to improve segmentation performance. Experiments show that the MALSF method improves segmentation accuracy for the thalamus. Compared to ground truth segmentation, the mean Dice metrics of our method are 0.9239 and 0.9200 for the left and right thalamus, respectively.
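As a simplified stand-in for the level-set label fusion above (which minimizes an energy functional rather than voting), the sketch below averages propagated binary labels from several registrations and thresholds the result:

```python
import numpy as np

def fuse_propagated_labels(propagated_labels):
    # propagated_labels: list of binary thalamus masks from different
    # registration methods / atlases, all resampled to the target image space
    prob = np.mean([m.astype(float) for m in propagated_labels], axis=0)
    return prob >= 0.5                                     # fused binary segmentation
```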


Subject(s)
Brain Mapping , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Thalamus/physiology , Algorithms , Humans , Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Models, Theoretical
17.
Med Image Anal ; 35: 421-433, 2017 01.
Article in English | MEDLINE | ID: mdl-27589577

ABSTRACT

Suppression of bony structures in chest radiographs (CXRs) is potentially useful for radiologists and computer-aided diagnostic schemes. In this paper, we present an effective deep learning method for bone suppression in single conventional CXRs using deep convolutional neural networks (ConvNets) as basic prediction units. The deep ConvNets were adapted to learn the mapping between the gradients of the CXRs and the corresponding bone images. We propose a cascade architecture of ConvNets (called CamsNet) that progressively refines the predicted bone gradients, in which the ConvNets work at successively increased resolutions. The predicted bone gradients at different scales from the CamsNet are fused in a maximum-a-posteriori framework to produce the final estimate of a bone image. This estimated bone image is subtracted from the original CXR to produce a soft-tissue image in which the bone components are eliminated. Our method was evaluated on a dataset consisting of 504 cases of real two-exposure dual-energy subtraction chest radiographs (404 cases for training and 100 cases for testing). The results demonstrate that our method can produce high-quality and high-resolution bone and soft-tissue images. The average relative mean absolute error of the produced bone images and the peak signal-to-noise ratio of the produced soft-tissue images were 3.83% and 38.7 dB, respectively. The average bone suppression ratio of our method was 83.8% for CXRs with pixel sizes of nearly 0.194 mm. Furthermore, we applied the trained CamsNet model to CXRs acquired by various types of X-ray machines, including scanned films, and our method still produced visually appealing bone and soft-tissue images.


Subject(s)
Bone and Bones , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Radiography, Thoracic/methods , Algorithms
18.
Sci Rep ; 7: 45501, 2017 04 03.
Article in English | MEDLINE | ID: mdl-28368016

ABSTRACT

We propose local linear mapping (LLM), a novel fusion framework based on the distance field (DF), to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted averaging method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively.
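A hedged sketch of the core local linear mapping step: a test MR patch is represented by a least-squares combination of its nearest MR dictionary atoms, and the same coefficients are reused on the paired DF atoms to predict its distance-field patch. Dictionary construction, patch extraction, and k are assumptions.

```python
import numpy as np

def llm_predict_df(mr_dict, df_dict, mr_patch, k=10):
    # mr_dict, df_dict: (N, d) paired MR / distance-field dictionary atoms
    d2 = ((mr_dict - mr_patch) ** 2).sum(axis=1)
    nn = np.argsort(d2)[:k]                                # local neighborhood on the MR manifold
    w, *_ = np.linalg.lstsq(mr_dict[nn].T, mr_patch, rcond=None)
    return df_dict[nn].T @ w                               # locally linear DF prediction
```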


Subject(s)
Hippocampus/diagnostic imaging , Magnetic Resonance Imaging , Algorithms , Brain Mapping , Humans , Image Interpretation, Computer-Assisted
19.
J Nucl Med ; 57(10): 1635-1641, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27230932

ABSTRACT

Attenuation correction is important for PET reconstruction. In PET/MR, MR intensities are not directly related to the attenuation coefficients needed in PET imaging. The attenuation coefficient map can be derived from CT images; therefore, predicting CT substitutes from MR images is desirable for attenuation correction in PET/MR. METHODS: This study presents a patch-based method for CT prediction from MR images, generating attenuation maps for PET reconstruction. Because no global relation exists between MR and CT intensities, we propose local diffeomorphic mapping (LDM) for CT prediction. In LDM, we assume that MR and CT patches are located on two nonlinear manifolds and that the mapping from the MR manifold to the CT manifold approximates a diffeomorphism under a local constraint. Locality is important in LDM and is enforced by the following techniques. The first is local dictionary construction, wherein, for each patch in the testing MR image, a local search window is used to extract patches from training MR/CT pairs to construct MR and CT dictionaries; the k-nearest neighbors and an outlier detection strategy are then used to constrain the locality of the MR and CT dictionaries. The second is local linear representation, wherein local anchor embedding is used to solve the MR dictionary coefficients that represent the MR testing sample. Under these local constraints, the dictionary coefficients are linearly transferred from the MR manifold to the CT manifold and used to combine CT training samples into CT predictions. RESULTS: Our dataset contains 13 healthy subjects, each with T1- and T2-weighted MR and CT brain images. The method provides CT predictions with a mean absolute error of 110.1 Hounsfield units, a Pearson linear correlation of 0.82, a peak signal-to-noise ratio of 24.81 dB, and a Dice coefficient in bone regions of 0.84 compared with real CT. CT-substitute-based PET reconstruction has a regression slope of 1.0084 and an R² of 0.9903 compared with real CT-based PET. CONCLUSION: In this method, no image segmentation or accurate registration is required. Our method demonstrates superior performance in CT prediction and PET reconstruction compared with competing methods.


Subject(s)
Brain Mapping , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Multimodal Imaging , Positron-Emission Tomography , Tomography, X-Ray Computed , Brain/cytology , Healthy Volunteers , Humans
20.
Sci Rep ; 6: 34461, 2016 Sep 29.
Article in English | MEDLINE | ID: mdl-27681452

ABSTRACT

A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation between two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold by gradually deforming each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration, eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on registrations of clinical datasets. Experiments show that the proposed method effectively reduces motion while preserving the topology of contrast-enhancing structures and provides improved registration performance.
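As a rough illustration of the manifold idea (not the paper's method, which also uses robust PCA to separate motion from enhancement), the sketch below embeds the frames of a DCE-MR series with Isomap and picks as template the frame closest to the embedding centroid, a crude approximation of the geodesic mean.

```python
import numpy as np
from sklearn.manifold import Isomap

def choose_template(frames, n_neighbors=5, n_components=2):
    """Index of the frame nearest the centroid of a low-dimensional embedding."""
    X = np.stack([f.ravel() for f in frames])              # one row per frame
    emb = Isomap(n_neighbors=n_neighbors, n_components=n_components).fit_transform(X)
    centroid = emb.mean(axis=0)
    return int(np.argmin(np.linalg.norm(emb - centroid, axis=1)))
```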
