Results 1 - 20 of 38
1.
J Magn Reson Imaging ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38826142

ABSTRACT

BACKGROUND: The number of focal liver lesions (FLLs) detected by imaging has increased worldwide, highlighting the need for a robust, objective system for automatically detecting FLLs. PURPOSE: To assess the performance of deep learning-based artificial intelligence (AI) software in identifying and measuring lesions on contrast-enhanced magnetic resonance imaging (MRI) in patients with FLLs. STUDY TYPE: Retrospective. SUBJECTS: 395 patients with 1149 FLLs. FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T scanners, including T1-weighted, T2-weighted, diffusion-weighted, in/out-of-phase, and dynamic contrast-enhanced imaging. ASSESSMENT: The diagnostic performance of the AI, radiologists, and their combination was compared. Using 20 mm as the cut-off value, the lesions were divided into two groups and further into four subgroups (<10, 10-20, 20-40, and ≥40 mm) to evaluate the sensitivity of radiologists and the AI in detecting lesions of different sizes. The pathologic sizes of 122 surgically resected lesions were compared with measurements obtained by the AI and by radiologists. STATISTICAL TESTS: McNemar test, Bland-Altman analyses, Friedman test, Pearson's chi-squared test, Fisher's exact test, Dice coefficient, and intraclass correlation coefficients. A P-value <0.05 was considered statistically significant. RESULTS: The average Dice coefficient of the AI in segmenting liver lesions was 0.62. The combination of AI and radiologist outperformed the radiologist alone, with a significantly higher detection rate (0.894 vs. 0.825) and sensitivity (0.883 vs. 0.806). The AI showed significantly higher sensitivity than radiologists in detecting lesions <20 mm (0.848 vs. 0.788). Both the AI and radiologists achieved excellent detection performance for lesions ≥20 mm (0.867 vs. 0.881, P = 0.671). Average tumor sizes agreed closely among the three measurements (P = 0.174). DATA CONCLUSION: Deep learning-based AI software showed practical value in automatically identifying and measuring liver lesions. TECHNICAL EFFICACY: Stage 2.
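The reported Dice coefficient and size-stratified sensitivity are standard quantities; a minimal numpy sketch of how they might be computed (function names and the (diameter, detected) record format are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |A and B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def sensitivity_by_size(lesions):
    """lesions: iterable of (diameter_mm, detected) pairs.
    Buckets follow the paper's cut-offs: <10, 10-20, 20-40, >=40 mm."""
    bins = {"<10": [], "10-20": [], "20-40": [], ">=40": []}
    for diameter, detected in lesions:
        if diameter < 10:
            bins["<10"].append(detected)
        elif diameter < 20:
            bins["10-20"].append(detected)
        elif diameter < 40:
            bins["20-40"].append(detected)
        else:
            bins[">=40"].append(detected)
    return {k: (float(np.mean(v)) if v else None) for k, v in bins.items()}
```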

2.
Neuroimage ; 194: 105-119, 2019 Jul 01.
Article in English | MEDLINE | ID: mdl-30910724

ABSTRACT

Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, providing a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNN) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D-based methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield the best performance among CNN approaches on detailed whole brain segmentation (>100 labels), yet their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to two challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCN) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks are used, each learning contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
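SLANT's core mechanism is to cover the normalized brain with a grid of overlapping 3D tiles, run a location-specific network on each tile, and fuse the overlapping label probabilities. A hedged numpy sketch of that tiling-and-fusion bookkeeping (the networks are stubbed as callables returning softmax maps; all names and the grid/overlap values are illustrative):

```python
import numpy as np
from itertools import product

def tile_slices(shape, grid=(3, 3, 3), overlap=16):
    """Cover a 3D volume with a grid of overlapping tiles (SLANT-style)."""
    tiles = []
    for idx in product(*(range(g) for g in grid)):
        sl = []
        for i, g, s in zip(idx, grid, shape):
            step = s // g
            lo = max(0, i * step - overlap // 2)
            hi = min(s, (i + 1) * step + overlap // 2)
            sl.append(slice(lo, hi))
        tiles.append(tuple(sl))
    return tiles

def fuse_tiles(volume, networks, tiles, n_labels):
    """Run each location-specific network on its tile; average overlaps."""
    votes = np.zeros((n_labels,) + volume.shape, dtype=np.float32)
    count = np.zeros(volume.shape, dtype=np.float32)
    for net, sl in zip(networks, tiles):
        votes[(slice(None),) + sl] += net(volume[sl])  # (n_labels, *tile)
        count[sl] += 1.0
    return np.argmax(votes / np.maximum(count, 1.0), axis=0)
```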


Subject(s)
Brain/anatomy & histology; Deep Learning; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Atlases as Topic; Humans; Magnetic Resonance Imaging/methods; Neuroimaging/methods
3.
IEEE Trans Med Imaging ; 43(5): 1995-2009, 2024 May.
Article in English | MEDLINE | ID: mdl-38224508

ABSTRACT

Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often small and only partially labeled, i.e., only a subset of organs is annotated. It is therefore crucial to investigate how to learn a unified model from the available partially labeled datasets and leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses of prior techniques. We revisit the problem from the perspective of partial-label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework, COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using the two ground-truth-based signals and then iteratively incorporate the pseudo-label signal into this model via self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that COSST achieves significant improvement over the baseline, i.e., individual networks trained on each partially labeled dataset. Compared to state-of-the-art partial-label segmentation methods, COSST demonstrates consistently superior performance on various segmentation tasks and with different training data sizes.
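The reliability step can be sketched as embedding each pseudo-labeled case and discarding latent-space outliers before the next self-training round; the sketch below uses scikit-learn's IsolationForest as a stand-in detector (the detector choice, names, and the stubbed training loop are assumptions, not the paper's implementation):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_pseudo_labels(latent_feats: np.ndarray, keep_fraction: float = 0.9):
    """latent_feats: (n_cases, d) embeddings of pseudo-labeled cases.
    Returns indices of the cases kept for the next self-training round."""
    detector = IsolationForest(contamination=1.0 - keep_fraction, random_state=0)
    flags = detector.fit_predict(latent_feats)  # +1 inlier, -1 outlier
    return np.where(flags == 1)[0]

# Two-stage skeleton (hypothetical helpers):
# model = train_initial(labeled_sets)                  # ground-truth signals
# for _ in range(n_rounds):                            # self-training stage
#     pseudo = [model.predict(x) for x in unlabeled]
#     keep = filter_pseudo_labels(embed_cases(pseudo))
#     model = retrain(model, labeled_sets, [pseudo[i] for i in keep])
```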


Subject(s)
Databases, Factual; Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Tomography, X-Ray Computed/methods; Supervised Machine Learning
4.
Med Image Anal ; 90: 102939, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37725868

ABSTRACT

Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. The transformer reformats the image into separate patches and realizes global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and losing it can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissue of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address these challenges, and inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT) employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets consisting of multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. Notably, the model performs the complete whole brain segmentation task with 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively. Code, pre-trained models, and a use-case pipeline are available at: https://github.com/MASILab/UNesT.
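Hierarchical aggregation of spatially adjacent patch tokens can be illustrated with a 2x2x2 token-merging layer, similar in spirit to the nesting the abstract describes; this PyTorch sketch is illustrative only and is not the UNesT block itself:

```python
import torch
import torch.nn as nn

class PatchMerge3D(nn.Module):
    """Aggregate each 2x2x2 neighborhood of patch tokens into one coarser token."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(8 * dim, 2 * dim)  # concatenate 8 neighbors, project

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, D, H, W, C) grid of patch embeddings; D, H, W assumed even
        B, D, H, W, C = x.shape
        x = x.reshape(B, D // 2, 2, H // 2, 2, W // 2, 2, C)
        x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(B, D // 2, H // 2, W // 2, 8 * C)
        return self.proj(x)

# tokens = torch.randn(1, 8, 8, 8, 96)  # 8^3 patch tokens, 96-dim embeddings
# coarser = PatchMerge3D(96)(tokens)    # -> (1, 4, 4, 4, 192)
```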

5.
Med Phys ; 39(10): 5981-9, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039636

ABSTRACT

PURPOSE: Malignant gliomas represent an aggressive class of central nervous system neoplasms. Correlating interventional outcomes with tumor morphometry data necessitates 3D segmentation of tumors (typically based on magnetic resonance imaging). Expert delineation is the long-held gold standard for tumor segmentation but is exceptionally resource intensive and subject to intra-rater and inter-rater variability. Automated tumor segmentation algorithms have been demonstrated for a variety of imaging modalities and tumor phenotypes, but translating these methods across clinical study designs is problematic given variation in image acquisition, tumor characteristics, segmentation objectives, and validation criteria. Herein, the authors demonstrate an alternative approach for high-throughput tumor segmentation using Internet-based collaborative labeling. METHODS: In a study of 85 human raters and 98 tumor patients, raters were recruited from a general university campus population (i.e., no specific medical knowledge), given minimal training, and provided web-based tools to label MRI images based on 2D cross sections. The labeling goal was to extract the enhancing tumor cores on T1-weighted MRI and the bright abnormality on T2-weighted MRI. An experienced rater manually constructed the ground-truth volumes for a randomly sampled subcohort of 48 tumor subjects (for both T1w and T2w). Raters' task-wise individual observations, as well as volume-wise truth estimates obtained via statistical fusion, were evaluated on the subjects with ground truth. RESULTS: Individual raters were able to reliably characterize the gadolinium-enhancing cores and extent of the edematous areas (Dice similarity coefficient, DSC, >0.8) only slightly more than half of the time. Yet, human raters were efficient in providing these highly variable segmentations (less than 20 s per slice). When statistical fusion was used to combine the results of seven raters per slice for all slices in the datasets, the 3D agreement of the fused results with expertly delineated segmentations was on par with the inter-rater reliability observed between experienced raters using traditional 3D tools (approximately 0.85 DSC). The cumulative time spent per tumor patient with the collaborative approach was equivalent to that of an experienced rater, but the collaborative approach required less training time, fewer resources, and allowed efficient parallelization. CONCLUSIONS: Collaborative labeling is a promising technique with potentially wide applicability to cost-effective manual labeling of medical images.
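The paper fuses seven raters per slice with a statistical fusion method; a per-voxel majority vote is the simplest stand-in and conveys the idea (the actual fusion, e.g. STAPLE-style performance weighting, is more involved):

```python
import numpy as np

def majority_vote(rater_masks: np.ndarray) -> np.ndarray:
    """rater_masks: (n_raters, H, W) binary labels for one slice.
    Returns the fused binary mask; ties break toward foreground."""
    n = rater_masks.shape[0]
    return (rater_masks.sum(axis=0) * 2 >= n).astype(np.uint8)

# fused = majority_vote(np.stack([r1, r2, r3, r4, r5, r6, r7]))
```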


Subject(s)
Cooperative Behavior; Glioma/diagnosis; Image Processing, Computer-Assisted/methods; Internet; Blood-Brain Barrier/metabolism; Data Interpretation, Statistical; Edema/complications; Glioma/complications; Glioma/metabolism; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging
6.
Radiat Oncol ; 17(1): 129, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35869525

ABSTRACT

BACKGROUND: We describe and evaluate a deep network algorithm that automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning. METHODS: The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products "syngo.via RT Image Suite VB50" and "AI-Rad Companion Organs RT VA20" (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics for volume, overlap, and distance, e.g., Dice similarity coefficient (DSC) and Hausdorff distance (HD95). The contours were also compared visually, slice by slice. RESULTS: We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by the heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm), and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreement, with some exceptions for the heart and rectum. CONCLUSIONS: The DI2IN algorithm automatically generated contours for organs at risk close to those of a human expert, making the contouring step in radiation treatment planning simpler and faster. A few cases still required manual corrections, mainly for the heart and rectum.
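DSC and HD95 are the two headline metrics here; a common voxel-based sketch using scipy's Euclidean distance transform follows (a simplified approximation that samples all structure voxels rather than an explicit surface mesh):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric distance (mm) between binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    d = np.concatenate([dist_to_b[a], dist_to_a[b]])  # both directions
    return float(np.percentile(d, 95))
```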


Subject(s)
Deep Learning; Tomography, X-Ray Computed; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Organs at Risk; Radiotherapy Planning, Computer-Assisted/methods; Thorax; Tomography, X-Ray Computed/methods
7.
Article in English | MEDLINE | ID: mdl-34531632

ABSTRACT

Renal segmentation on contrast-enhanced computed tomography (CT) provides distinct spatial context and morphology. Current studies of renal segmentation are highly dependent on manual effort, which is time-consuming and tedious. Hence, an automatic framework for segmenting the renal cortex, medulla, and pelvicalyceal system is important for quantitative assessment of renal morphometry. Recent innovations in deep methods have driven performance toward levels for which clinical translation is appealing. However, the segmentation of renal structures can be challenging due to the limited field-of-view (FOV) and variability among patients. In this paper, we propose a method to automatically label the renal cortex, medulla, and pelvicalyceal system. First, we retrieved 45 clinically acquired, de-identified arterial-phase CT scans (45 patients, 90 kidneys) without diagnosis codes (ICD-9) involving kidney abnormalities. Second, an interpreter performed manual segmentation of the pelvis, medulla, and cortex slice-by-slice on all retrieved subjects under expert supervision. Finally, we propose a patch-based deep neural network to automatically segment the renal structures. Compared to the automatic baseline algorithm (3D U-Net) and a conventional hierarchical method (3D U-Net Hierarchy), our proposed method improves the mean Dice score across the three classes to 0.7968, versus 0.6749 for 3D U-Net and 0.7482 for 3D U-Net Hierarchy (p-value < 0.001, paired t-tests between our method and 3D U-Net Hierarchy). In summary, the proposed algorithm provides a precise and efficient method for labeling renal structures.
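The reported comparison boils down to per-kidney mean Dice over the three classes plus a paired t-test; a short scipy sketch (array names are illustrative):

```python
import numpy as np
from scipy.stats import ttest_rel

def compare_methods(dice_ours: np.ndarray, dice_baseline: np.ndarray):
    """Each array holds per-kidney mean Dice over the three renal classes
    (cortex, medulla, pelvicalyceal system), aligned by kidney."""
    t_stat, p_value = ttest_rel(dice_ours, dice_baseline)  # paired t-test
    return {"mean_ours": float(dice_ours.mean()),
            "mean_baseline": float(dice_baseline.mean()),
            "p_value": float(p_value)}
```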

8.
J Nucl Med ; 61(12): 1786-1792, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32332147

ABSTRACT

Prostate-specific membrane antigen (PSMA)-targeted PET imaging is becoming the reference standard for prostate cancer staging, especially in advanced disease. Yet, the implications of PSMA PET-derived whole-body tumor volume for overall survival are poorly elucidated to date. This might be because semiautomated quantification of whole-body tumor volume as a PSMA PET biomarker is an unmet clinical challenge. Therefore, in the present study we propose and evaluate software that enables semiautomated quantification of PSMA PET biomarkers such as whole-body tumor volume. Methods: The proposed quantification is implemented as a research prototype. PSMA-accumulating foci were automatically segmented by a percentage threshold (50% of local SUVmax). Neural networks were trained to segment organs in PET/CT acquisitions (training CTs: 8,632; validation CTs: 53). Thereby, PSMA foci within organs of physiologic PSMA uptake were semiautomatically excluded from the analysis. Pretherapeutic PSMA PET/CTs of 40 consecutive patients treated with 177Lu-PSMA-617 were evaluated in this analysis. The whole-body tumor volume (PSMATV50), SUVmax, SUVmean, and other whole-body imaging biomarkers were calculated for each patient. Semiautomatically derived results were compared with manual readings in a subcohort (by 1 nuclear medicine physician). Additionally, an interobserver evaluation of the semiautomated approach was performed in a subcohort (by 2 nuclear medicine physicians). Results: Manually and semiautomatically derived PSMA metrics were highly correlated (PSMATV50: R2 = 1.000, P < 0.001; SUVmax: R2 = 0.988, P < 0.001). The interobserver agreement of the semiautomated workflow was also high (PSMATV50: R2 = 1.000, P < 0.001, intraclass correlation coefficient = 1.000; SUVmax: R2 = 0.988, P < 0.001, intraclass correlation coefficient = 0.997). PSMATV50 (ml) was a significant predictor of overall survival (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006, P = 0.002) and remained so in a multivariate regression including other biomarkers (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006, P = 0.004). Conclusion: PSMATV50 is a promising PSMA PET biomarker that is reproducible and easily quantified by the proposed semiautomated software. Moreover, PSMATV50 is a significant predictor of overall survival in patients with advanced prostate cancer who receive 177Lu-PSMA-617 therapy.
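The segmentation rule the abstract describes (grow each focus to 50% of its local SUVmax, then exclude organs of physiologic uptake) can be sketched with scipy connected components; the mask names, dilation radius, and seed-detection step are assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_psma_foci(suv, seed_mask, physiologic_organs, frac=0.5):
    """Grow each detected focus to voxels >= frac * local SUVmax, then drop
    voxels inside organs of physiologic uptake (liver, kidneys, ...)."""
    labels, n = ndimage.label(seed_mask)
    tumor = np.zeros(suv.shape, dtype=bool)
    for i in range(1, n + 1):
        focus = labels == i
        local_max = suv[focus].max()
        region = ndimage.binary_dilation(focus, iterations=5)  # search margin
        tumor |= region & (suv >= frac * local_max)
    return tumor & ~physiologic_organs.astype(bool)

# PSMATV50 in ml: tumor.sum() * voxel_volume_ml
```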


Subject(s)
Edetic Acid/analogs & derivatives; Oligopeptides; Positron Emission Tomography Computed Tomography; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Tumor Burden; Aged; Automation; Biomarkers, Tumor/metabolism; Gallium Isotopes; Gallium Radioisotopes; Humans; Image Processing, Computer-Assisted; Male; Observer Variation; Prostatic Neoplasms/blood; Prostatic Neoplasms/metabolism; Software; Survival Analysis
9.
ArXiv ; 2020 Nov 18.
Article in English | MEDLINE | ID: mdl-32550252

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19-confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and the present (April 2020). Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; the remaining 2 had PO between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormality associated with COVID-19 and computes (PO, PHO) and (LSS, LHOS) severity scores.
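Given lesion, lung, and lobe masks, the global and lobe-wise scores reduce to simple volume ratios; a hedged sketch follows (the 0-4-points-per-lobe grading is a common convention for lobe severity scores and is an assumption here, not quoted from the paper):

```python
import numpy as np

def percent_opacity(lesion: np.ndarray, lungs: np.ndarray) -> float:
    """PO: percentage of total lung volume occupied by abnormality."""
    return 100.0 * np.logical_and(lesion, lungs).sum() / lungs.sum()

def lobe_severity_score(lesion: np.ndarray, lobes: np.ndarray) -> int:
    """LSS-style score: each of the 5 lobes (labeled 1..5) contributes 0-4
    points by percent involvement (0, 1-25, 26-50, 51-75, 76-100%)."""
    score = 0
    for lobe_id in range(1, 6):
        lobe = lobes == lobe_id
        pct = 100.0 * np.logical_and(lesion, lobe).sum() / max(lobe.sum(), 1)
        score += 0 if pct == 0 else min(int(np.ceil(pct / 25.0)), 4)
    return score
```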

10.
Radiol Artif Intell ; 2(4): e200048, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33928255

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19-confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and the present (April 2020). Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; the remaining 2 had PO between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormality associated with COVID-19 and computes (PO, PHO) and (LSS, LHOS) severity scores.

11.
IEEE Trans Med Imaging ; 38(5): 1185-1196, 2019 May.
Article in English | MEDLINE | ID: mdl-30442602

ABSTRACT

Splenomegaly, abnormal enlargement of the spleen, is a non-invasive clinical biomarker for liver and spleen disease. Automated segmentation methods are essential for efficiently quantifying splenomegaly from clinically acquired abdominal magnetic resonance imaging (MRI) scans. However, the task is challenging due to: 1) large anatomical and spatial variations of splenomegaly; 2) large inter- and intra-scan intensity variations on multi-modal MRI; and 3) the limited number of labeled splenomegaly scans. In this paper, we propose the Splenomegaly Segmentation Network (SS-Net) to introduce deep convolutional neural network (DCNN) approaches to multi-modal MRI splenomegaly segmentation. Large convolutional kernel layers were used to address the spatial and anatomical variations, while conditional generative adversarial networks were employed to improve the segmentation performance of SS-Net in an end-to-end manner. A clinically acquired cohort containing both T1-weighted (T1w) and T2-weighted (T2w) MRI splenomegaly scans was used to train and evaluate the performance of multi-atlas segmentation (MAS), 2D DCNNs, and a 3D DCNN. In the experiments, the DCNN methods achieved superior performance to the state-of-the-art MAS method, and the proposed SS-Net achieved the highest median and mean Dice scores among the investigated baseline DCNN methods.


Subject(s)
Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Splenomegaly/diagnostic imaging; Humans; Imaging, Three-Dimensional/methods; Spleen/diagnostic imaging
12.
IEEE Trans Biomed Eng ; 65(2): 336-343, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29364118

ABSTRACT

OBJECTIVE: Magnetic resonance imaging (MRI) is an essential imaging modality in noninvasive splenomegaly diagnosis. However, it is challenging to achieve spleen volume measurement from three-dimensional MRI given the diverse structural variations of human abdomens and the wide variety of clinical MRI acquisition schemes. Multi-atlas segmentation (MAS) approaches have been widely used and validated to handle heterogeneous anatomical scenarios. In this paper, we propose to use MAS for clinical MRI spleen segmentation for splenomegaly. METHODS: First, an automated segmentation method using selective and iterative method for performance level estimation (SIMPLE) atlas selection is used to address the inhomogeneity concerns of clinical splenomegaly MRI. Then, to further control outliers, semiautomated craniocaudal spleen-length-based SIMPLE atlas selection (L-SIMPLE) is proposed to integrate a spatial prior in a Bayesian fashion and guide iterative atlas selection. Last, a graph-cuts refinement is employed to obtain the final segmentation from the MAS probability maps. RESULTS: A clinical cohort of 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate both the automated and the semiautomated methods. CONCLUSION: Both methods achieved a median Dice > 0.9, and outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.97 Pearson correlation of volume measurements with manual segmentation. SIGNIFICANCE: This work demonstrates MAS-based spleen segmentation on clinical splenomegaly MRI.
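SIMPLE-style atlas selection iterates between fusing the currently selected atlases and discarding atlases that agree poorly with the consensus; a hedged numpy sketch (majority-vote fusion and a mean-minus-std cutoff, with illustrative parameter values):

```python
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def simple_atlas_selection(atlas_labels, n_iter=5, alpha=1.0):
    """atlas_labels: (n_atlases, *vol) registered binary atlas segmentations
    for one target image. Returns indices of the retained atlases."""
    keep = np.arange(len(atlas_labels))
    for _ in range(n_iter):
        consensus = atlas_labels[keep].mean(axis=0) >= 0.5  # vote fusion
        scores = np.array([dice(atlas_labels[i], consensus) for i in keep])
        cutoff = scores.mean() - alpha * scores.std()       # drop low scorers
        survivors = keep[scores >= cutoff]
        if len(survivors) == len(keep) or len(survivors) < 2:
            break
        keep = survivors
    return keep
```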


Subject(s)
Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Spleen/diagnostic imaging; Splenomegaly/diagnostic imaging; Algorithms; Humans; Reproducibility of Results
13.
Article in English | MEDLINE | ID: mdl-30334788

ABSTRACT

A key limitation of deep convolutional neural network (DCNN) based image segmentation methods is the lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. These manual efforts can be alleviated if the manually traced images in one imaging modality (e.g., MRI) can train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) that trains a segmentation network for a target imaging modality without manual labels in that modality. SynSeg-Net is trained using (1) unpaired intensity images from the source and target modalities, and (2) manual labels from the source modality only. SynSeg-Net is enabled by recent advances in cycle generative adversarial networks (CycleGAN) and DCNNs. We evaluate the performance of SynSeg-Net in two experiments: (1) MRI-to-CT synthetic splenomegaly segmentation for abdominal images, and (2) CT-to-MRI synthetic total intracranial volume (TICV) segmentation for brain images. The proposed end-to-end approach achieved superior performance to two-stage methods. Moreover, SynSeg-Net achieved performance comparable to a traditional segmentation network using target-modality labels in certain scenarios. The source code of SynSeg-Net is publicly available.

14.
Article in English | MEDLINE | ID: mdl-29887666

ABSTRACT

Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (an abnormally enlarged spleen) on magnetic resonance imaging (MRI) scans. In recent years, deep convolutional neural network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both the size and shape of the spleen on MRI images may result in large numbers of false positive and false negative labels when deploying DCNN-based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1-weighted and T2-weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.

15.
Proc SPIE Int Soc Opt Eng ; 10133: 2017 Feb 11.
Article in English | MEDLINE | ID: mdl-28649156

ABSTRACT

Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach for handling heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that applies multi-atlas segmentation to splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, an automated and a novel semi-automated atlas selection approach are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control outliers, semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior that guides the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate the different methods. Both the automated and semi-automated methods achieved median DSC > 0.9. Outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.9713 Pearson correlation with manual segmentation. The results demonstrate that multi-atlas segmentation can achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.

16.
Proc SPIE Int Soc Opt Eng ; 10133: 2017 Feb 11.
Article in English | MEDLINE | ID: mdl-28736468

ABSTRACT

Automatic spleen segmentation on CT is challenging due to the complexity of abdominal structures. Multi-atlas segmentation (MAS) has been shown to be a promising approach for spleen segmentation. To deal with the substantial registration errors between heterogeneous abdominal CT images, the context learning method for performance level estimation (CLSIMPLE) was previously proposed. The context learning method generates a probability map for a target image using a Gaussian mixture model (GMM) as the prior in a Bayesian framework. However, CLSIMPLE typically trains a single GMM from the entire heterogeneous training atlas set, so the estimated spatial prior maps might not represent specific target images accurately. Rather than using all training atlases, we propose an adaptive GMM-based context learning technique (AGMMCL) that trains the GMM on subsets of the training data tailored to different target images. Training sets are selected adaptively based on the similarity between atlases and the target image using craniocaudal length, which is derived manually from the target image. To validate the proposed method, a heterogeneous dataset with a large variation of spleen sizes (100 cc to 9000 cc) is used. We designate a size metric to differentiate groups of spleens, with 0 to 100 cc as small, 200 to 500 cc as medium, 500 to 1000 cc as large, 1000 to 2000 cc as XL, and 2000 cc and above as XXL. The results show that AGMMCL leads to more accurate spleen segmentations by training GMMs adaptively for different target images.
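The spatial-prior idea can be sketched by fitting a GMM to spleen voxel coordinates pooled from the selected atlas subset and scoring every target voxel; this scikit-learn sketch is illustrative (the feature choice of raw coordinates, the component count, and the normalization are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_spatial_prior(atlas_masks, target_shape, n_components=3):
    """atlas_masks: registered binary spleen masks from the selected atlases.
    Returns a [0, 1] spatial prior map over the target volume."""
    coords = np.concatenate([np.argwhere(m) for m in atlas_masks], axis=0)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(coords)
    grid = np.indices(target_shape).reshape(3, -1).T   # every voxel coordinate
    density = np.exp(gmm.score_samples(grid)).reshape(target_shape)
    return density / density.max()
```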

18.
Proc SPIE Int Soc Opt Eng ; 9784: 2016 Feb 27.
Article in English | MEDLINE | ID: mdl-27127333

ABSTRACT

The abdominal wall is an important structure differentiating the subcutaneous and visceral compartments and is intimately involved in maintaining abdominal structure. Segmentation of the whole abdominal wall on routinely acquired computed tomography (CT) scans remains challenging due to the variations and complexities of the wall and surrounding tissues. In this study, we propose a slice-wise augmented active shape model (AASM) approach to robustly segment both the outer and inner surfaces of the abdominal wall. Multi-atlas label fusion (MALF) and level set (LS) techniques are integrated into the traditional ASM framework. The AASM approach globally optimizes the landmark updates in the presence of complicated underlying local anatomical contexts. The proposed approach was validated on 184 axial slices of 20 CT scans. The Hausdorff distance against the manual segmentation was significantly reduced using the proposed approach compared to ASM, MALF, and LS individually. Our segmentation of the whole abdominal wall enables subcutaneous and visceral fat measurement, with high correlation to measurements derived from manual segmentation. This study presents the first generic algorithm that combines ASM, MALF, and LS, and demonstrates a practical application for automatically capturing visceral and subcutaneous fat volumes.

19.
J Med Imaging (Bellingham) ; 3(3): 036002, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27610400

ABSTRACT

Active shape models (ASMs) have been widely used for extracting human anatomies from medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) that integrates multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied to the probability map generated from MALF. This augmentation effectively extends the search range of correspondent landmarks while reducing sensitivity to the image context, improving segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity, and apply it to abdominal CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to measurements derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting the white/gray matter of the SC.

20.
IEEE Trans Biomed Eng ; 63(8): 1563-72, 2016 08.
Article in English | MEDLINE | ID: mdl-27254856

ABSTRACT

OBJECTIVE: This work evaluates current 3D image registration tools on clinically acquired abdominal computed tomography (CT) scans. METHODS: Thirteen abdominal organs were manually labeled on a set of 100 CT images, and the 100 labeled images (i.e., atlases) were pairwise registered based on intensity information with six registration tools (FSL, ANTS-CC, ANTS-QUICK-MI, IRTK, NIFTYREG, and DEEDS). The Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance were calculated for the registered organs individually. Permutation tests and indifference-zone ranking were performed to examine statistical and practical significance, respectively. RESULTS: The results suggest that DEEDS yielded the best registration performance. However, given the overall low DSC values and the substantial portion of low-performing outliers, great care must be taken when image registration is used for local interpretation of abdominal CT. CONCLUSION: There is substantial room for improvement in image registration for abdominal CT. SIGNIFICANCE: All data and source code are available so that innovations in registration can be directly compared with the current generation of tools without excessive duplication of effort.
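The permutation tests here compare paired per-registration metric values between tools; a minimal sign-flip permutation test on the mean DSC difference (a standard construction; function and array names are illustrative):

```python
import numpy as np

def permutation_test(dsc_a, dsc_b, n_perm=10000, seed=0):
    """Paired sign-flip permutation test on the mean DSC difference between
    two registration tools; dsc_a and dsc_b are aligned per-pair values."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(dsc_a, float) - np.asarray(dsc_b, float)
    observed = diffs.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    null = (signs * diffs).mean(axis=1)          # null distribution of means
    return float(np.mean(np.abs(null) >= abs(observed)))  # two-sided p-value
```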


Subject(s)
Abdomen/diagnostic imaging; Image Processing, Computer-Assisted/methods; Radiography, Abdominal/methods; Tomography, X-Ray Computed/methods; Algorithms; Humans