1.
J Magn Reson Imaging ; 2024 Jun 03.
Article En | MEDLINE | ID: mdl-38826142

BACKGROUND: The number of focal liver lesions (FLLs) detected by imaging has increased worldwide, highlighting the need for a robust, objective system for automatically detecting FLLs. PURPOSE: To assess the performance of deep learning-based artificial intelligence (AI) software in identifying and measuring lesions on contrast-enhanced magnetic resonance imaging (MRI) in patients with FLLs. STUDY TYPE: Retrospective. SUBJECTS: 395 patients with 1149 FLLs. FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T scanners; T1-weighted, T2-weighted, diffusion-weighted, in/out-of-phase, and dynamic contrast-enhanced imaging. ASSESSMENT: The diagnostic performance of the AI, the radiologists, and their combination was compared. Using 20 mm as the cut-off value, the lesions were divided into two groups and further into four size subgroups (<10, 10-20, 20-40, and ≥40 mm) to evaluate the sensitivity of radiologists and AI in detecting lesions of different sizes. The pathologic sizes of 122 surgically resected lesions were compared with measurements obtained using AI and those made by radiologists. STATISTICAL TESTS: McNemar test, Bland-Altman analyses, Friedman test, Pearson's chi-squared test, Fisher's exact test, Dice coefficient, and intraclass correlation coefficients. A P-value <0.05 was considered statistically significant. RESULTS: The average Dice coefficient of the AI in segmenting liver lesions was 0.62. The combination of AI and radiologist outperformed the radiologist alone, with a significantly higher detection rate (0.894 vs. 0.825) and sensitivity (0.883 vs. 0.806). The AI showed significantly higher sensitivity than radiologists in detecting lesions <20 mm (0.848 vs. 0.788). Both AI and radiologists achieved excellent detection performance for lesions ≥20 mm (0.867 vs. 0.881, P = 0.671). Average tumor sizes agreed closely among the three measurements (P = 0.174). DATA CONCLUSION: AI software based on deep learning exhibited practical value in automatically identifying and measuring liver lesions. TECHNICAL EFFICACY: Stage 2.
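[Editor's note] This record reports segmentation quality as a Dice coefficient (mean 0.62). As a point of reference, the sketch below shows how a Dice similarity coefficient between an AI mask and a reference mask is conventionally computed; the mask arrays are hypothetical and not taken from the study.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Hypothetical 3D lesion masks (1 = lesion voxel), for illustration only:
ai_mask = np.zeros((64, 64, 32), dtype=np.uint8)
gt_mask = np.zeros((64, 64, 32), dtype=np.uint8)
ai_mask[20:40, 20:40, 10:20] = 1
gt_mask[25:45, 22:42, 10:20] = 1
print(f"Dice = {dice_coefficient(ai_mask, gt_mask):.3f}")
```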

2.
IEEE Trans Med Imaging ; 43(5): 1995-2009, 2024 May.
Article En | MEDLINE | ID: mdl-38224508

Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often low in sample size and only partially labeled, i.e., only a subset of organs is annotated. Therefore, it is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses of prior techniques. We revisit the problem from the perspective of partial-label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using the two ground-truth-based signals and then iteratively incorporate the pseudo-label signal into the initial model using self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that our proposed COSST achieves significant improvement over the baseline method, i.e., individual networks trained on each partially labeled dataset. Compared with state-of-the-art partial-label segmentation methods, COSST demonstrates consistently superior performance across segmentation tasks and training data sizes.
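[Editor's note] The abstract's key safeguard is excluding unreliable pseudo labels via outlier detection in latent space. The sketch below illustrates that general idea with a simple centroid-distance detector on per-case latent features; the feature extraction, the detector, and the drop fraction are illustrative assumptions, not the exact procedure used by COSST.

```python
import numpy as np

def filter_unreliable_pseudo_labels(latent_feats: np.ndarray,
                                    drop_fraction: float = 0.1) -> np.ndarray:
    """Return indices of pseudo-labeled cases kept after excluding the most
    outlying fraction in latent space (distance to the feature centroid)."""
    centroid = latent_feats.mean(axis=0)
    dists = np.linalg.norm(latent_feats - centroid, axis=1)
    cutoff = np.quantile(dists, 1.0 - drop_fraction)
    return np.where(dists <= cutoff)[0]

# Hypothetical 16-D latent vectors for 100 pseudo-labeled cases:
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
kept = filter_unreliable_pseudo_labels(feats, drop_fraction=0.1)
print(f"kept {kept.size} of 100 cases for the next self-training round")
```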


Databases, Factual , Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Tomography, X-Ray Computed/methods , Supervised Machine Learning
3.
Med Image Anal ; 90: 102939, 2023 Dec.
Article En | MEDLINE | ID: mdl-37725868

Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissue of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address these challenges, and inspired by the nested hierarchical structures in vision transformers, we propose UNesT, a novel 3D medical image segmentation method employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets spanning multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model performs whole brain segmentation with all 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively. Code, pre-trained models, and a use case pipeline are available at: https://github.com/MASILab/UNesT.

5.
Radiat Oncol ; 17(1): 129, 2022 Jul 22.
Article En | MEDLINE | ID: mdl-35869525

BACKGROUND: We describe and evaluate a deep network algorithm that automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning. METHODS: The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products "syngo.via RT Image Suite VB50" and "AI-Rad Companion Organs RT VA20" (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics of volume, overlap, and distance, e.g., Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD95). The contours were also compared visually slice by slice. RESULTS: We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm), and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreement with some exceptions for heart and rectum. CONCLUSIONS: The DI2IN algorithm automatically generated contours for organs at risk close to those of a human expert, making the contouring step in radiation treatment planning simpler and faster. Few cases still required manual corrections, mainly for heart and rectum.
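[Editor's note] This evaluation relies on overlap and distance metrics; the Dice computation is sketched under record 1 above, and a minimal HD95 (95th-percentile symmetric surface distance) is sketched below. The brute-force pairwise distance is fine for small masks but would be replaced by a distance transform in practice; the masks and voxel spacing here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def hd95(mask_a: np.ndarray, mask_b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (mm) between binary masks."""
    def surface(mask):
        mask = mask.astype(bool)
        border = mask & ~binary_erosion(mask)          # voxels on the object boundary
        return np.argwhere(border) * np.asarray(spacing)
    pts_a, pts_b = surface(mask_a), surface(mask_b)
    d = cdist(pts_a, pts_b)                            # all pairwise surface distances
    return max(np.percentile(d.min(axis=1), 95),       # directed a -> b
               np.percentile(d.min(axis=0), 95))       # directed b -> a

# Illustration with two slightly offset spheres (hypothetical masks):
zz, yy, xx = np.mgrid[:48, :48, :48]
a = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 15 ** 2
b = (zz - 26) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 15 ** 2
print(f"HD95 = {hd95(a, b):.1f} mm")
```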


Deep Learning , Tomography, X-Ray Computed , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Organs at Risk , Radiotherapy Planning, Computer-Assisted/methods , Thorax , Tomography, X-Ray Computed/methods
6.
Article En | MEDLINE | ID: mdl-34531632

Renal segmentation on contrast-enhanced computed tomography (CT) provides distinct spatial context and morphology. Current studies of renal segmentation depend heavily on manual effort, which is time-consuming and tedious. Hence, an automatic framework for segmenting the renal cortex, medulla, and pelvicalyceal system is important for quantitative assessment of renal morphometry. Recent innovations in deep learning have driven performance toward levels at which clinical translation is appealing. However, segmentation of renal structures can be challenging due to the limited field-of-view (FOV) and variability among patients. In this paper, we propose a method to automatically label the renal cortex, medulla, and pelvicalyceal system. First, we retrieved 45 clinically acquired, de-identified arterial-phase CT scans (45 patients, 90 kidneys) without diagnosis codes (ICD-9) involving kidney abnormalities. Second, an interpreter manually segmented the pelvicalyceal system, medulla, and cortex slice-by-slice on all retrieved subjects under expert supervision. Finally, we propose a patch-based deep neural network to automatically segment the renal structures. Our proposed method improves the mean Dice score across the three classes to 0.7968, compared with 0.6749 for the automatic baseline (3D U-Net) and 0.7482 for the conventional hierarchical method (3D U-Net Hierarchy) (p-value < 0.001, paired t-tests between our method and 3D U-Net Hierarchy). In summary, the proposed algorithm provides a precise and efficient method for labeling renal structures.

7.
ArXiv ; 2020 Nov 18.
Article En | MEDLINE | ID: mdl-32550252

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures, percentage of opacity (PO) and percentage of high opacity (PHO), is global; the second, lung severity score (LSS) and lung high opacity score (LHOS), is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19-confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and April 2020. Ground truth is established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; the remaining 2 were between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared with 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO) and (LSS, LHOS) severity scores.
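[Editor's note] Of the reported measures, the percentage of opacity is the most directly computable from the segmentations: the segmented abnormality volume as a fraction of the containing region's volume, globally over both lungs or per lobe. A minimal sketch follows; the masks here are random stand-ins, since in the paper they come from the segmentation networks.

```python
import numpy as np

def percent_opacity(lesion_mask: np.ndarray, region_mask: np.ndarray) -> float:
    """Percentage of a region's volume occupied by segmented abnormality."""
    region_vox = region_mask.sum()
    if region_vox == 0:
        return 0.0
    return 100.0 * np.logical_and(lesion_mask, region_mask).sum() / region_vox

# Global PO over both lungs; lobe-wise values would use per-lobe masks instead.
rng = np.random.default_rng(1)
lungs = rng.random((32, 32, 32)) < 0.5
lesions = lungs & (rng.random((32, 32, 32)) < 0.1)
print(f"PO = {percent_opacity(lesions, lungs):.1f}%")
```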

8.
J Nucl Med ; 61(12): 1786-1792, 2020 12.
Article En | MEDLINE | ID: mdl-32332147

Prostate-specific membrane antigen (PSMA)-targeted PET imaging is becoming the reference standard for prostate cancer staging, especially in advanced disease. Yet the implications of PSMA PET-derived whole-body tumor volume for overall survival are poorly elucidated to date, perhaps because semiautomated quantification of whole-body tumor volume as a PSMA PET biomarker remains an unmet clinical challenge. Therefore, in the present study we propose and evaluate software that enables semiautomated quantification of PSMA PET biomarkers such as whole-body tumor volume. Methods: The proposed quantification is implemented as a research prototype. PSMA-accumulating foci were automatically segmented by a relative threshold (50% of local SUVmax). Neural networks were trained to segment organs in PET/CT acquisitions (training CTs: 8,632; validation CTs: 53), so that PSMA foci within organs of physiologic PSMA uptake could be semiautomatically excluded from the analysis. Pretherapeutic PSMA PET/CTs of 40 consecutive patients treated with 177Lu-PSMA-617 were evaluated. The whole-body tumor volume (PSMATV50), SUVmax, SUVmean, and other whole-body imaging biomarkers were calculated for each patient. Semiautomatically derived results were compared with manual readings in a subcohort (by 1 nuclear medicine physician). Additionally, an interobserver evaluation of the semiautomated approach was performed in a subcohort (by 2 nuclear medicine physicians). Results: Manually and semiautomatically derived PSMA metrics were highly correlated (PSMATV50: R2 = 1.000, P < 0.001; SUVmax: R2 = 0.988, P < 0.001). The interobserver agreement of the semiautomated workflow was also high (PSMATV50: R2 = 1.000, P < 0.001, intraclass correlation coefficient = 1.000; SUVmax: R2 = 0.988, P < 0.001, intraclass correlation coefficient = 0.997). PSMATV50 (mL) was a significant predictor of overall survival (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006; P = 0.002) and remained so in a multivariate regression including other biomarkers (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006; P = 0.004). Conclusion: PSMATV50 is a promising PSMA PET biomarker that is reproducible and easily quantified by the proposed semiautomated software. Moreover, PSMATV50 is a significant predictor of overall survival in patients with advanced prostate cancer who receive 177Lu-PSMA-617 therapy.
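[Editor's note] The thresholding step is simple enough to sketch: each detected focus is kept at voxels above 50% of its local SUVmax. In this sketch the initial detection cutoff (`detect_suv`) is a hypothetical value, and the organ-exclusion step is only indicated in a comment; both are handled by the paper's own workflow.

```python
import numpy as np
from scipy import ndimage

def segment_foci_50pct(suv: np.ndarray, detect_suv: float = 4.0,
                       rel: float = 0.5) -> np.ndarray:
    """Segment PSMA-avid foci at a relative threshold of 50% of local SUVmax.
    detect_suv is a hypothetical detection cutoff, not from the paper."""
    comps, n = ndimage.label(suv >= detect_suv)      # candidate foci as components
    out = np.zeros(suv.shape, dtype=bool)
    for i in range(1, n + 1):
        in_comp = comps == i
        local_max = suv[in_comp].max()
        out |= in_comp & (suv >= rel * local_max)    # voxels above 50% of local SUVmax
    return out

# PSMATV50 would then be out.sum() * voxel_volume_ml, after excluding foci
# inside CNN-segmented organs of physiologic uptake (liver, kidneys, etc.).
```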


Edetic Acid/analogs & derivatives , Oligopeptides , Positron Emission Tomography Computed Tomography , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Tumor Burden , Aged , Automation , Biomarkers, Tumor/metabolism , Gallium Isotopes , Gallium Radioisotopes , Humans , Image Processing, Computer-Assisted , Male , Observer Variation , Prostatic Neoplasms/blood , Prostatic Neoplasms/metabolism , Software , Survival Analysis
9.
Radiol Artif Intell ; 2(4): e200048, 2020 Jul.
Article En | MEDLINE | ID: mdl-33928255

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures, percentage of opacity (PO) and percentage of high opacity (PHO), is global; the second, lung severity score (LSS) and lung high opacity score (LHOS), is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19-confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and April 2020. Ground truth is established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; the remaining 2 were between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared with 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO) and (LSS, LHOS) severity scores.

10.
Neuroimage ; 194: 105-119, 2019 07 01.
Article En | MEDLINE | ID: mdl-30910724

Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, providing a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D-based methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. Patch-based high-resolution 3D methods typically yield the best performance among CNN approaches on detailed whole brain segmentation (>100 labels), yet their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to two challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks were used, each learning contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
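[Editor's note] SLANT's core idea is that each network covers a fixed spatial tile in a registered common space, and overlapping tile predictions are fused. The sketch below shows only the generic overlap-fusion step with a stand-in per-tile predictor; the tile size, stride, label count, and dummy `tile_net` are illustrative assumptions, and the actual pipeline additionally involves affine registration to a common space and label fusion.

```python
import itertools
import numpy as np

def fuse_tiles(volume, tile_net, tile=(32, 32, 32), stride=(16, 16, 16), n_labels=5):
    """Average overlapping per-tile label probabilities, then take the argmax.
    tile_net(subvolume) -> (n_labels, *subvolume.shape) stands in for one FCN."""
    acc = np.zeros((n_labels,) + volume.shape, dtype=np.float32)
    cnt = np.zeros(volume.shape, dtype=np.float32)
    starts = [range(0, max(s - t, 0) + 1, st)
              for s, t, st in zip(volume.shape, tile, stride)]
    for z, y, x in itertools.product(*starts):
        sl = (slice(z, z + tile[0]), slice(y, y + tile[1]), slice(x, x + tile[2]))
        acc[(slice(None),) + sl] += tile_net(volume[sl])
        cnt[sl] += 1.0
    probs = acc / np.maximum(cnt, 1.0)   # mean probability where tiles overlap
    return probs.argmax(axis=0)

# Toy usage with a random "network" (real tiles would be 3D FCN outputs):
vol = np.random.rand(64, 64, 64).astype(np.float32)
dummy_net = lambda sub: np.random.rand(5, *sub.shape).astype(np.float32)
seg = fuse_tiles(vol, dummy_net)
```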


Brain/anatomy & histology , Deep Learning , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Atlases as Topic , Humans , Magnetic Resonance Imaging/methods , Neuroimaging/methods
11.
IEEE Trans Med Imaging ; 38(5): 1185-1196, 2019 05.
Article En | MEDLINE | ID: mdl-30442602

Splenomegaly, the abnormal enlargement of the spleen, is a non-invasive clinical biomarker for liver and spleen disease. Automated segmentation methods are essential for efficiently quantifying splenomegaly from clinically acquired abdominal magnetic resonance imaging (MRI) scans. However, the task is challenging due to: 1) large anatomical and spatial variations of splenomegaly; 2) large inter- and intra-scan intensity variations on multi-modal MRI; and 3) limited numbers of labeled splenomegaly scans. In this paper, we propose the Splenomegaly Segmentation Network (SS-Net) to introduce deep convolutional neural network (DCNN) approaches to multi-modal MRI splenomegaly segmentation. Large convolutional kernel layers were used to address the spatial and anatomical variations, while conditional generative adversarial networks were employed to improve the segmentation performance of SS-Net in an end-to-end manner. A clinically acquired cohort containing both T1-weighted (T1w) and T2-weighted (T2w) MRI splenomegaly scans was used to train and evaluate multi-atlas segmentation (MAS), 2D DCNN networks, and a 3D DCNN network. In the experiments, the DCNN methods achieved performance superior to the state-of-the-art MAS method, and the proposed SS-Net achieved the highest median and mean Dice scores among the investigated baseline DCNN methods.


Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Splenomegaly/diagnostic imaging , Humans , Imaging, Three-Dimensional/methods , Spleen/diagnostic imaging
12.
Article En | MEDLINE | ID: mdl-30334788

A key limitation of deep convolutional neural network (DCNN)-based image segmentation methods is their lack of generalizability: manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. This manual effort can be alleviated if manually traced images in one imaging modality (e.g., MRI) can train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) that trains a segmentation network for a target imaging modality without manual labels in that modality. SynSeg-Net is trained using (1) unpaired intensity images from the source and target modalities, and (2) manual labels from the source modality only. SynSeg-Net is enabled by recent advances in cycle-consistent generative adversarial networks (CycleGAN) and DCNNs. We evaluate the performance of SynSeg-Net in two experiments: (1) MRI-to-CT synthetic splenomegaly segmentation for abdominal images, and (2) CT-to-MRI synthetic total intracranial volume (TICV) segmentation for brain images. The proposed end-to-end approach achieved performance superior to two-stage methods, and in certain scenarios was comparable to a traditional segmentation network trained with target-modality labels. The source code of SynSeg-Net is publicly available.

13.
Article En | MEDLINE | ID: mdl-29887666

Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (an abnormally enlarged spleen) on magnetic resonance imaging (MRI) scans. In recent years, deep convolutional neural network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variation in both the size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN-based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variation when segmenting extraordinarily large spleens. SSNet was designed on the framework of image-to-image conditional generative adversarial networks (cGAN): the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1-weighted and T2-weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.

14.
IEEE Trans Biomed Eng ; 65(2): 336-343, 2018 02.
Article En | MEDLINE | ID: mdl-29364118

OBJECTIVE: Magnetic resonance imaging (MRI) is an essential imaging modality in noninvasive splenomegaly diagnosis. However, it is challenging to measure spleen volume from three-dimensional MRI given the diverse structural variations of human abdomens and the wide variety of clinical MRI acquisition schemes. Multi-atlas segmentation (MAS) approaches have been widely used and validated for handling heterogeneous anatomical scenarios. In this paper, we propose to use MAS for clinical MRI spleen segmentation in splenomegaly. METHODS: First, an automated segmentation method using selective and iterative method for performance level estimation (SIMPLE) atlas selection is used to address the concerns of inhomogeneity in clinical splenomegaly MRI. Then, to further control outliers, semiautomated craniocaudal spleen length-based SIMPLE atlas selection (L-SIMPLE) is proposed to integrate a spatial prior in a Bayesian fashion and guide the iterative atlas selection. Last, a graph cuts refinement is employed to obtain the final segmentation from the MAS probability maps. RESULTS: A clinical cohort of 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate both the automated and semiautomated methods. CONCLUSION: Both methods achieved median Dice > 0.9, and outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.97 Pearson correlation of volume measurements with manual segmentation. SIGNIFICANCE: This work demonstrates that MAS enables accurate spleen segmentation on clinically acquired splenomegaly MRI.
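[Editor's note] The SIMPLE-style atlas selection described above can be summarized as: fuse all propagated atlas labels, score each atlas against the consensus, drop poor performers, and repeat. A minimal sketch follows; the Dice-based score, the mean-minus-sigma cutoff, and the stopping rule are common choices but illustrative assumptions here, and the L-SIMPLE length prior is omitted.

```python
import numpy as np

def dice(a, b):
    s = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / s if s else 1.0

def simple_atlas_selection(propagated_labels, n_iter=10, keep_sigma=0.5):
    """Iteratively keep atlases whose propagated labels agree with the current
    majority-vote consensus; return kept indices and the fused probability map."""
    labels = [np.asarray(l, dtype=bool) for l in propagated_labels]
    kept = list(range(len(labels)))
    for _ in range(n_iter):
        consensus = np.mean([labels[i] for i in kept], axis=0) >= 0.5
        scores = np.array([dice(labels[i], consensus) for i in kept])
        cutoff = scores.mean() - keep_sigma * scores.std()     # drop low performers
        survivors = [i for i, s in zip(kept, scores) if s >= cutoff]
        if len(survivors) == len(kept) or len(survivors) < 2:  # converged / too few
            break
        kept = survivors
    return kept, np.mean([labels[i] for i in kept], axis=0)
```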


Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Spleen/diagnostic imaging , Splenomegaly/diagnostic imaging , Algorithms , Humans , Reproducibility of Results
15.
Proc SPIE Int Soc Opt Eng ; 10133, 2017 Feb 11.
Article En | MEDLINE | ID: mdl-28736468

Automatic spleen segmentation on CT is challenging due to the complexity of abdominal structures. Multi-atlas segmentation (MAS) has been shown to be a promising approach for spleen segmentation. To deal with the substantial registration errors between heterogeneous abdominal CT images, the context learning method for performance level estimation (CLSIMPLE) was previously proposed. The context learning method generates a probability map for a target image using a Gaussian mixture model (GMM) as the prior in a Bayesian framework. However, CLSIMPLE typically trains a single GMM from the entire heterogeneous training atlas set, so the estimated spatial prior maps might not represent specific target images accurately. Rather than using all training atlases, we propose an adaptive GMM-based context learning technique (AGMMCL) that trains the GMM on subsets of the training data tailored to different target images. Training sets are selected adaptively based on the similarity between atlases and the target image using craniocaudal length, which is derived manually from the target image. To validate the proposed method, a heterogeneous dataset with a large variation of spleen sizes (100 cc to 9000 cc) is used. We designate a metric of size to differentiate groups of spleens, with 0 to 100 cc as small, 200 to 500 cc as medium, 500 to 1000 cc as large, 1000 to 2000 cc as XL, and 2000 cc and above as XXL. The results show that AGMMCL leads to more accurate spleen segmentation by training GMMs adaptively for different target images.
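[Editor's note] As a concrete reading of the context-learning step, the sketch below fits a GMM to spatial coordinates plus intensity of spleen voxels pooled from a selected atlas subset (assumed already registered to the target space) and evaluates it over the target to produce a spatial prior map. The feature choice and component count are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_spatial_prior(atlas_imgs, atlas_masks, target_img, n_components=3):
    """Fit a GMM on (z, y, x, intensity) of spleen voxels from the selected
    atlas subset; evaluate on every target voxel to form a prior map."""
    feats = []
    for img, mask in zip(atlas_imgs, atlas_masks):
        m = mask.astype(bool)
        feats.append(np.column_stack([np.argwhere(m), img[m]]))
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(np.vstack(feats))

    coords = np.indices(target_img.shape).reshape(3, -1).T  # (z, y, x) per voxel
    tgt = np.column_stack([coords, target_img.ravel()])
    log_p = gmm.score_samples(tgt)
    prior = np.exp(log_p - log_p.max())                     # rescale to (0, 1]
    return prior.reshape(target_img.shape)
```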

16.
Proc SPIE Int Soc Opt Eng ; 10133, 2017 Feb 11.
Article En | MEDLINE | ID: mdl-28649156

Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images and modalities. Multi-atlas segmentation has been shown to be a promising approach for handling heterogeneous data and difficult anatomical scenarios. In this paper, we propose multi-atlas segmentation frameworks for MRI spleen segmentation in splenomegaly. To the best of our knowledge, this is the first work to apply multi-atlas segmentation to splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control outliers, semi-automated craniocaudal length-based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior that guides the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. Outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.9713 Pearson correlation with manual segmentation. The results demonstrate that multi-atlas segmentation achieves accurate spleen segmentation from multi-contrast splenomegaly MRI scans.

17.
J Med Imaging (Bellingham) ; 3(3): 036002, 2016 Jul.
Article En | MEDLINE | ID: mdl-27610400

Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) that integrates multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. In AASM, landmark updates are optimized globally via a region-based LS evolution applied to the probability map generated by MALF. This augmentation effectively extends the search range for corresponding landmarks while reducing sensitivity to image context, improving segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity, and apply it to abdominal CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to measurements derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in the SC.

18.
Acad Radiol ; 23(10): 1214-20, 2016 10.
Article En | MEDLINE | ID: mdl-27519156

OBJECTIVES: Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomic structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically acquired computed tomography (CT) scans. MATERIALS AND METHODS: Under an institutional review board approval, we obtained 294 de-identified (Health Insurance Portability and Accountability Act-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1 - manual segmentation of all scans; Pipeline 2 - automated segmentation of all scans; Pipeline 3 - automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check; and Pipelines 4 and 5 - volumes derived from a unidimensional measurement of craniocaudal spleen length and from three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracies of Pipelines 2-5 (Dice similarity coefficient, Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1-5. RESULTS: Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, an average absolute volume deviation of 23.7 cm³, and a time cost of 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient of 0.98, an absolute deviation of 46.92 cm³, and a time cost of 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION: A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency.
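[Editor's note] Pipelines 4 and 5 estimate volume from linear measurements rather than segmentation. As a sketch of the Pipeline-5-style approach, the snippet below computes a three-dimensional splenic index (product of three orthogonal diameters) and maps it to a volume with a linear regression; the default coefficients follow a commonly cited published formula (Prassopoulos et al.), not coefficients from this study.

```python
def splenic_index(length_cm: float, width_cm: float, thickness_cm: float) -> float:
    """Three-dimensional splenic index: product of three orthogonal diameters (cm)."""
    return length_cm * width_cm * thickness_cm

def volume_from_index(index: float, slope: float = 0.58, intercept: float = 30.0) -> float:
    """Linear regression from splenic index to volume (cm^3). Default coefficients
    are from a commonly cited published regression, NOT from this study."""
    return intercept + slope * index

# Example: a hypothetical 12 x 7 x 4 cm spleen.
idx = splenic_index(12.0, 7.0, 4.0)
print(f"index = {idx:.0f}, estimated volume = {volume_from_index(idx):.0f} cm^3")
```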


Spleen/anatomy & histology , Spleen/diagnostic imaging , Tomography, X-Ray Computed/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Organ Size
19.
IEEE Trans Biomed Eng ; 63(8): 1563-72, 2016 08.
Article En | MEDLINE | ID: mdl-27254856

OBJECTIVE: This work evaluates current 3D image registration tools on clinically acquired abdominal computed tomography (CT) scans. METHODS: Thirteen abdominal organs were manually labeled on a set of 100 CT images, and the 100 labeled images (i.e., atlases) were pairwise registered based on intensity information with six registration tools (FSL, ANTS-CC, ANTS-QUICK-MI, IRTK, NIFTYREG, and DEEDS). The Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance were calculated for each registered organ individually. Permutation tests and indifference-zone ranking were performed to examine statistical and practical significance, respectively. RESULTS: The results suggest that DEEDS yielded the best registration performance. However, given the overall low DSC values and the substantial portion of low-performing outliers, great care must be taken when image registration is used for local interpretation of abdominal CT. CONCLUSION: There is substantial room for improvement in image registration for abdominal CT. SIGNIFICANCE: All data and source code are available so that innovations in registration can be directly compared with the current generation of tools without excessive duplication of effort.


Abdomen/diagnostic imaging , Image Processing, Computer-Assisted/methods , Radiography, Abdominal/methods , Tomography, X-Ray Computed/methods , Algorithms , Humans
20.
Proc SPIE Int Soc Opt Eng ; 9784, 2016 Feb 27.
Article En | MEDLINE | ID: mdl-27064328

Modern magnetic resonance imaging (MRI) brain atlases are high-quality 3D volumes with specific structures labeled in the volume. Atlases are essential for providing a common space for interpreting results across studies, for anatomical education, and for quantitative image-based navigation. Extensive work has been devoted to atlas construction for humans, macaques, and several non-primate species (e.g., the rat). One notable gap in the literature is the common squirrel monkey, for which the primary published atlases date from the 1960s. The common squirrel monkey has been used extensively as a surrogate for humans in biomedical studies, given its neuroanatomical similarities and practical considerations. This work describes the continued development of a multi-modal MRI atlas for the common squirrel monkey, for which a structural imaging space and gray matter parcels were previously constructed. This study adds white matter (WM) tracts to the atlas. The new atlas includes 49 WM tracts, defined using diffusion tensor imaging (DTI) in three animals, and combines these data to define the anatomical locations of the tracts in a standardized coordinate system compatible with the previous development. An anatomist reviewed the resulting tracts, and inter-animal reproducibility (i.e., the Dice index of each WM parcel across animals in common space) was assessed. The Dice indices range from 0.05 to 0.80, reflecting differences in local registration quality and variation in WM tract position across individuals. Nevertheless, the combined WM labels from the three animals represent the general locations of the WM parcels, adding basic connectivity information to the atlas.

...