1.
Med Biol Eng Comput ; 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38658497

ABSTRACT

The assessment of deformable registration uncertainty is important for the safety and reliability of registration methods in clinical applications, but it is typically carried out by a manual, time-consuming procedure. We propose a novel automatic method to predict registration uncertainty based on multi-category features and supervised learning. Three types of features, namely deformation field statistical features, deformation field physiologically realistic features, and image similarity features, are introduced and calculated to train a random forest regressor for local registration uncertainty prediction. Deformation field statistical features represent the numerical stability of registration optimization and are correlated with the uncertainty of deformation fields; deformation field physiologically realistic features represent the biomechanical properties of organ motion and mathematically reflect the physiological plausibility of the deformation; image similarity features reflect the similarity between the warped image and the fixed image. Together, these multi-category features comprehensively reflect registration uncertainty. A strategy of spatially adaptive random perturbations is also introduced to accurately simulate the spatial distribution of registration uncertainty, making the deformation field statistical features more discriminative. Experiments were conducted on three publicly available thoracic CT image datasets: 17 randomly selected image pairs were used to train the random forest model, and 9 image pairs were used to evaluate it. Quantitative experiments on lung CT images show that the proposed method outperforms the baseline method for uncertainty prediction of both classical iterative optimization-based registration and deep learning-based registration across different registration qualities.
The proposed method achieves good performance and has great potential to improve the accuracy of registration uncertainty prediction.
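The supervised step described above can be sketched as follows. This is an illustrative toy example, not the authors' code: the feature dimensions, patch count, and synthetic labels are all assumptions standing in for the three real feature categories and the ground-truth local registration error.

```python
# Hypothetical sketch: concatenate the three feature categories per local
# patch and fit a random forest regressor to predict local registration
# uncertainty. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-ins for the three feature categories per patch: deformation-field
# statistics, physiological-realism measures, and image similarity.
n_patches = 500
stat_feats = rng.normal(size=(n_patches, 4))
physio_feats = rng.normal(size=(n_patches, 3))
sim_feats = rng.normal(size=(n_patches, 2))
X = np.hstack([stat_feats, physio_feats, sim_feats])

# Synthetic "ground-truth" local registration error to regress against.
y = 0.5 * X[:, 0] - 0.3 * X[:, 5] + rng.normal(scale=0.1, size=n_patches)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])  # one uncertainty estimate per held-out patch
```

In the paper's setting, the regression target would come from the simulated perturbations rather than a synthetic linear model.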

2.
Quant Imaging Med Surg ; 13(11): 7504-7522, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37969634

ABSTRACT

Background: Supervised machine learning methods [both radiomics and convolutional neural network (CNN)-based deep learning] are usually employed to develop artificial intelligence models from medical images for computer-assisted diagnosis and prognosis of diseases. A classical machine learning-based modeling workflow involves a series of interconnected components and various algorithms, which makes it challenging, tedious, and labor-intensive for radiologists and researchers to build customized models for specific clinical applications if they lack expertise in machine learning. Methods: We developed a user-friendly artificial intelligence-assisted diagnosis modeling software (AIMS) platform, which supplies standardized machine learning-based modeling workflows for computer-assisted diagnosis and prognosis with medical images. In contrast to existing software platforms, AIMS contains both radiomics and CNN-based deep learning workflows, making it an all-in-one platform for machine learning-based medical image analysis. Its modular design allows users to build machine learning models easily, test them comprehensively, and fairly compare the performance of different models in a specific application. The graphical user interface (GUI) enables users to process large numbers of medical images without programming or scripting. Furthermore, AIMS provides a flexible image processing toolkit (e.g., semiautomatic segmentation, registration, morphological operations) to rapidly create lesion labels for multiphase analysis, multiregion analysis of an individual tumor (e.g., tumor mass and peritumor), and multimodality analysis. Results: The functionality and efficiency of AIMS were demonstrated in 3 independent experiments in radiation oncology, involving multiphase, multiregion, and multimodality analyses, respectively.
For clear cell renal cell carcinoma (ccRCC) Fuhrman grading with multiphase analysis (sample size = 187), the area under the curve (AUC) of AIMS was 0.776; for ccRCC Fuhrman grading with multiregion analysis (sample size = 177), the AUC was 0.848; for prostate cancer Gleason grading with multimodality analysis (sample size = 206), the AUC was 0.980. Conclusions: AIMS provides a user-friendly infrastructure for radiologists and researchers, lowering the barrier to building customized machine learning-based computer-assisted diagnosis models for medical image analysis.

3.
Front Oncol ; 13: 1167328, 2023.
Article in English | MEDLINE | ID: mdl-37692840

ABSTRACT

Objective: This study aimed to evaluate the effectiveness of multi-phase-combined contrast-enhanced CT (CECT) radiomics methods for noninvasive Fuhrman grade prediction of clear cell renal cell carcinoma (ccRCC). Methods: A total of 187 patients with four-phase CECT images were retrospectively enrolled and categorized into a training cohort (n = 126) and a testing cohort (n = 61). All patients were confirmed as having ccRCC by histopathological reports. A total of 110 3D classical radiomics features were extracted from each CECT phase for each ccRCC lesion, and contrast-enhanced variation features were also calculated as derived radiomics features. These features were concatenated, and redundant features were removed by Pearson correlation analysis. The discriminative features were selected by the minimum redundancy maximum relevance (mRMR) method and then input into a C-support vector classifier to build multi-phase-combined CECT radiomics models. Prediction performance was evaluated by the area under the curve (AUC) of the receiver operating characteristic (ROC). Results: The multi-phase-combined CECT radiomics model showed better prediction performance (AUC = 0.777) than the single-phase CECT radiomics model (AUC = 0.711) in the testing cohort (p = 0.039). Conclusion: The multi-phase-combined CECT radiomics model is a potentially effective way to noninvasively predict the Fuhrman grade of ccRCC. The concatenation of first-order and texture features extracted from the corticomedullary and nephrographic phases is a discriminative feature representation.
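The redundancy-removal and classification steps described above can be sketched as below. This is an illustrative example, not the authors' pipeline: the synthetic data, the 0.9 correlation threshold, and the train/test split sizes are assumptions.

```python
# Hypothetical sketch: drop redundant features by pairwise Pearson
# correlation, then train a C-support vector classifier on the rest.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 20))
X[:, 1] = X[:, 0] + rng.normal(scale=0.01, size=120)  # redundant near-copy
y = (X[:, 0] + rng.normal(scale=0.5, size=120) > 0).astype(int)

# Greedy redundancy removal: keep a feature only if it is not highly
# correlated (|r| >= 0.9, an assumed threshold) with any kept feature.
corr = np.abs(np.corrcoef(X, rowvar=False))
keep = []
for j in range(X.shape[1]):
    if all(corr[j, k] < 0.9 for k in keep):
        keep.append(j)

clf = SVC(kernel="rbf", C=1.0).fit(X[:90][:, keep], y[:90])
acc = clf.score(X[90:][:, keep], y[90:])
```

In the paper, an mRMR ranking step would additionally sit between the correlation filter and the classifier.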

4.
Biomed Eng Online ; 22(1): 91, 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37726780

ABSTRACT

Deformable multimodal image registration plays a key role in medical image analysis, yet it remains challenging to find accurate dense correspondences between multimodal images due to significant intensity distortion and large deformation. We propose macJNet, a weakly-supervised deformable multimodal image registration method that uses a joint learning framework and a multi-sampling cascaded modality-independent neighborhood descriptor (macMIND). The joint learning framework consists of a multimodal image registration network and two segmentation networks. The proposed macMIND is a modality-independent image structure descriptor that provides dense correspondence for registration, incorporating multi-orientation and multi-scale sampling patterns to build self-similarity context; it greatly enhances the representation ability of cross-modal features in the registration network. The semi-supervised segmentation networks generate anatomical labels that provide semantic correspondence for registration, and the registration network in turn helps improve multimodal image segmentation by enforcing the consistency of anatomical labels. A 3D CT-MR liver image dataset with 118 samples was built for evaluation, and comprehensive experiments demonstrate that macJNet achieves superior performance over state-of-the-art multimodality medical image registration methods.


Subject(s)
Learning , Semantics , Tomography, X-Ray Computed
5.
Comput Med Imaging Graph ; 108: 102260, 2023 09.
Article in English | MEDLINE | ID: mdl-37343325

ABSTRACT

PURPOSE: Multimodal registration is a key task in medical image analysis. Due to the large differences between multimodal images in intensity scale and texture pattern, it is a great challenge to design distinctive similarity metrics to guide deep learning-based multimodal image registration. Moreover, because of their limited receptive fields, existing deep learning-based methods are mainly suited to small deformations and struggle with large ones. To address these issues, we present an unsupervised multimodal image registration method based on a multiscale integrated spatial-weight module and dual similarity guidance. METHODS: In this method, a U-shaped network with our multiscale integrated spatial-weight module is embedded into a multi-resolution image registration architecture to achieve end-to-end large-deformation registration. The spatial-weight module effectively highlights regions with large deformation and aggregates discriminative features, while the multi-resolution architecture further helps solve the network's optimization problem in a coarse-to-fine fashion. Furthermore, we introduce a loss function based on dual similarity, representing both global gray-scale similarity and local feature similarity, to optimize the unsupervised multimodal registration network. RESULTS: We verified the effectiveness of the proposed method on liver CT-MR images. Experimental results indicate that the proposed method achieves the best DSC and TRE values of 92.70 ± 1.75% and 6.52 ± 2.94 mm, compared with other state-of-the-art registration algorithms. CONCLUSION: The proposed method can accurately estimate large deformation fields by aggregating multiscale features, achieving higher registration accuracy with fast registration speed. Comparative experiments also demonstrate the effectiveness and generalization ability of the algorithm.


Subject(s)
Algorithms , Tomography, X-Ray Computed , Liver/diagnostic imaging , Image Processing, Computer-Assisted/methods
6.
Med Phys ; 50(4): 2279-2289, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36412164

ABSTRACT

BACKGROUND: The Gleason Grade Group (GG) is essential in assessing the malignancy of prostate cancer (PCa) and is typically obtained by invasive biopsy procedures, in which sampling errors can lead to inaccurately scored GGs. With the gradually recognized value of bi-parametric magnetic resonance imaging (bpMRI) in PCa, it is beneficial to noninvasively predict GGs from bpMRI for early diagnosis and treatment planning of PCa. However, it is challenging to establish the connection between bpMRI features and GGs. PURPOSE: In this study, we propose a dual attention-guided multiscale neural network (DAMS-Net) to predict the five-level GG from bpMRI and design a training curriculum to further improve the prediction performance. METHODS: The proposed DAMS-Net incorporates a feature pyramid network (FPN) to fully extract multiscale features for lesions of varying sizes, and a dual attention module to focus on the lesion and surrounding regions while avoiding the influence of irrelevant ones. Furthermore, to enhance the ability to discriminate lesions with inter-grade similarity and intra-grade variation in bpMRI, the training process employs a specially designed curriculum based on the differences between the radiological evaluations and the ground-truth GGs. RESULTS: Extensive experiments were conducted on a private dataset of 382 patients and the public PROSTATEx-2 dataset. On the private dataset, the proposed network performed better than the plain baseline model for GG prediction, achieving a mean quadratic weighted Kappa (Kw) of 0.4902 and a mean positive predictive value for predicting clinically significant cancer (PPV_GG>1) of 0.9098. With the application of curriculum learning, the mean Kw and PPV_GG>1 further increased to 0.5144 and 0.9118, respectively. On the public dataset, the proposed method achieved state-of-the-art results of 0.5413 Kw and 0.9747 PPV_GG>1.
CONCLUSION: The proposed DAMS-Net trained with curriculum learning can effectively predict GGs from bpMRI, which may assist clinicians in early diagnosis and treatment planning for PCa patients.
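The quadratic weighted Kappa reported above penalizes a prediction by the squared distance between the predicted and true grade groups, so off-by-one grading errors cost far less than gross ones. A quick sketch with scikit-learn on made-up five-grade labels (the labels are illustrative, not from the paper):

```python
# Quadratic weighted Kappa (Kw) on toy 5-level grade labels.
from sklearn.metrics import cohen_kappa_score

y_true = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
y_pred = [1, 2, 3, 4, 5, 2, 2, 4, 4, 5]  # two off-by-one errors

kw = cohen_kappa_score(y_true, y_pred, weights="quadratic")
# Perfect agreement yields Kw = 1; chance-level agreement yields Kw ≈ 0.
```

With only small (adjacent-grade) errors, Kw stays high, which is exactly why it suits ordinal targets like the GG better than plain accuracy.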


Subject(s)
Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Neoplasm Grading , Curriculum , Neural Networks, Computer
7.
Front Oncol ; 12: 963925, 2022.
Article in English | MEDLINE | ID: mdl-36046035

ABSTRACT

Objective: To develop and validate a radiomics nomogram incorporating clinicopathological characteristics and an ultrasound (US)-based radiomics signature to noninvasively predict Ki-67 expression level in patients with breast cancer (BC) preoperatively. Methods: A total of 328 breast lesions from 324 patients with BC, pathologically confirmed in our hospital from June 2019 to October 2020, were included and divided into high and low Ki-67 expression level groups. Routine US and shear wave elastography (SWE) were performed for each lesion, and the ipsilateral axillary lymph nodes (ALNs) were scanned for abnormal changes. The dataset was randomly divided into training and validation cohorts at a ratio of 7:3. Correlation analysis and the least absolute shrinkage and selection operator (LASSO) were used to select radiomics features obtained from gray-scale US images, and a radiomics score (Rad-score) was calculated for each lesion. Multivariate logistic regression analysis was then used to establish a radiomics nomogram based on the radiomics signature and clinicopathological characteristics. The prediction performance of the nomogram was assessed by the area under the receiver operating characteristic curve (AUC), the calibration curve, and decision curve analysis (DCA), using immunohistochemistry results as the gold standard. Results: The radiomics signature, consisting of eight selected radiomics features, achieved moderate prediction efficacy, with AUCs of 0.821 (95% CI: 0.764-0.880) and 0.713 (95% CI: 0.612-0.814) in the training and validation cohorts, respectively.
The radiomics nomogram, incorporating the maximum diameter of lesions, the stiff rim sign, US-reported ALN status, and the radiomics signature, showed promising performance for predicting Ki-67 expression level, with AUCs of 0.904 (95% CI: 0.860-0.948) and 0.890 (95% CI: 0.817-0.964) in the training and validation cohorts, respectively. The calibration curve and DCA indicated good consistency and clinical applicability. Conclusion: The proposed US-based radiomics nomogram could be used to noninvasively predict Ki-67 expression level in BC patients preoperatively and to assist clinicians in making reliable clinical decisions.
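The LASSO-based Rad-score construction described above can be sketched as follows. This is a hedged illustration with synthetic data, not the study's code: the feature count, the proxy label, and the use of `LassoCV` to pick the penalty are all assumptions.

```python
# Hypothetical sketch: LASSO selects a sparse set of radiomics features;
# the Rad-score is the fitted linear combination of the retained ones.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))  # 30 candidate radiomics features (synthetic)
y = X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.3, size=200)  # proxy target

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)

selected = np.flatnonzero(lasso.coef_)            # retained feature indices
rad_score = lasso.intercept_ + Xs @ lasso.coef_   # per-lesion Rad-score
```

In the study the target would be the binarized Ki-67 level and the Rad-score would then enter the multivariate logistic regression alongside the clinicopathological variables.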

8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2810-2814, 2021 11.
Article in English | MEDLINE | ID: mdl-34891833

ABSTRACT

Supervised machine learning methods are usually used to build custom models for disease diagnosis and auxiliary prognosis in radiomics studies. A classical machine learning pipeline involves a series of steps and multiple algorithms, making it a great challenge to find an appropriate combination of algorithms and an optimal hyper-parameter set for radiomics model building. We developed a freely available software package for radiomics model building. It can be used for lesion labeling, feature extraction, feature selection, classifier training, and visualization of statistical results. The software provides a user-friendly graphical interface and flexible I/O for radiologists and researchers to develop radiomics models automatically. Moreover, it can extract features from corresponding lesion regions in multi-modality images, labeled by semi-automatic or fully automatic segmentation algorithms. It is designed in a loosely coupled architecture and programmed with Qt, VTK, and Python. To evaluate the availability and effectiveness of the software, we used it to build a CT-based radiomics model containing peritumoral features for malignancy grading of clear cell renal cell carcinoma. The final model achieved good grading performance with AUC = 0.848 on an independent validation dataset. Clinical Relevance - The developed software provides convenient and powerful toolboxes for radiologists and researchers to build radiomics models in clinical studies.


Subject(s)
Machine Learning , Software , Algorithms , Retrospective Studies , Supervised Machine Learning
9.
Abdom Radiol (NY) ; 46(6): 2690-2698, 2021 06.
Article in English | MEDLINE | ID: mdl-33427908

ABSTRACT

OBJECTIVE: To evaluate the efficiency of CT-based peritumoral radiomics signatures of clear cell renal cell carcinoma (ccRCC) for preoperative malignancy grade prediction. MATERIALS AND METHODS: 203 patients with pathologically confirmed ccRCC were retrospectively enrolled in this study and categorized into a training set (n = 122) and a validation set (n = 81). For each patient, two types of volumes of interest (VOI) were masked on CT images. The first was the tumor mass volume (TMV), masked by radiologists delineating the outline of all contiguous slices of the entire tumor; the second was the peritumoral tumor volume (PTV), created automatically by an image morphological method. 1760 radiomics features were calculated from each VOI, and discriminative radiomics features were then selected by Pearson correlation analysis for reproducibility and redundancy. The validity of the selected features for building radiomics signatures was investigated with the mRMR feature-ranking method. Finally, the top-ranked features, used as radiomics signatures, were input into a classifier for malignancy grading. Prediction performance was evaluated by the receiver operating characteristic (ROC) curve in an independent validation cohort. RESULTS: The radiomics signature of the PTV showed better performance for malignancy grade prediction of ccRCC, with AUCs of 0.807 (95% CI 0.800-0.834) on the training data and 0.848 (95% CI 0.760-0.936) on the validation data, versus AUCs of 0.773 (95% CI 0.744-0.802) and 0.810 (95% CI 0.706-0.914) for the radiomics signature of the TMV. CONCLUSION: The CT-based peritumoral radiomics signature is a potential noninvasive tool for preoperatively predicting the malignancy grade of ccRCC.
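One plausible way the morphological PTV construction described above could work is sketched below. This is an assumption-laden illustration, not the study's actual method: the mask shape, the binary-dilation approach, and the 3-voxel margin are all hypothetical.

```python
# Hypothetical sketch: derive a peritumoral shell (PTV) from a tumor mask
# (TMV) by binary dilation and subtraction of the original mask.
import numpy as np
from scipy.ndimage import binary_dilation

tmv = np.zeros((32, 32, 32), dtype=bool)
tmv[12:20, 12:20, 12:20] = True   # toy tumor mass volume (8x8x8 cube)

dilated = binary_dilation(tmv, iterations=3)  # assumed 3-voxel margin
ptv = dilated & ~tmv                          # shell around, not inside, tumor
```

Radiomics features would then be computed separately over the voxels in `tmv` and `ptv`.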


Subject(s)
Carcinoma, Renal Cell , Kidney Neoplasms , Carcinoma, Renal Cell/diagnostic imaging , Humans , Kidney Neoplasms/diagnostic imaging , Reproducibility of Results , Retrospective Studies , Tomography, X-Ray Computed
10.
Front Oncol ; 11: 792456, 2021.
Article in English | MEDLINE | ID: mdl-35127499

ABSTRACT

PURPOSE: To compare the performance of radiomics to that of the Prostate Imaging Reporting and Data System (PI-RADS) v2.1 scoring system in the detection of clinically significant prostate cancer (csPCa), based on biparametric magnetic resonance imaging (bpMRI) vs. multiparametric MRI (mpMRI). METHODS: A total of 204 patients with pathological results were enrolled between January 2018 and December 2019, with 142 patients in the training cohort and 62 patients in the testing cohort. The radiomics model was compared with PI-RADS v2.1 for the diagnosis of csPCa based on bpMRI and mpMRI using receiver operating characteristic (ROC) curve analysis. RESULTS: The radiomics models based on bpMRI and mpMRI signatures both showed high predictive efficiency, with no significant difference between them (AUC = 0.975 vs. 0.981, p = 0.687 in the training cohort, and 0.953 vs. 0.968, p = 0.287 in the testing cohort, respectively). In addition, the radiomics model outperformed PI-RADS v2.1 in the diagnosis of csPCa, whether based on bpMRI (AUC = 0.975 vs. 0.871, p = 0.030 in the training cohort and AUC = 0.953 vs. 0.853, p = 0.024 in the testing cohort) or mpMRI (AUC = 0.981 vs. 0.880, p = 0.030 in the training cohort and AUC = 0.968 vs. 0.863, p = 0.016 in the testing cohort). CONCLUSIONS: Our study suggests that the performance of bpMRI- and mpMRI-based radiomics models shows no significant difference, indicating that omitting DCE imaging in radiomics can simplify the analysis. Adding radiomics to PI-RADS v2.1 may improve the performance of csPCa prediction.

11.
Sci Rep ; 8(1): 8742, 2018 06 07.
Article in English | MEDLINE | ID: mdl-29880859

ABSTRACT

A new accurate and robust non-rigid point set registration method, named DSMM, is proposed for registration in the presence of significant amounts of missing correspondences and outliers. The key idea of this algorithm is to treat the relationship between the point sets as random variables and model the prior probabilities via a Dirichlet distribution. We assign varying prior probabilities for each point's correspondences in a Student's-t mixture model. We then incorporate the local spatial representation of the point sets by expressing the posterior probabilities through a linear smoothing filter, obtaining closed-form mixture proportions and thus a computationally efficient registration algorithm compared to other Student's-t mixture model-based methods. Finally, by introducing hidden random variables in a Bayesian framework, we propose a general mixture model family that generalizes mixture-model-based point set registration, with existing methods as members of the proposed family. We evaluate DSMM against other state-of-the-art finite mixture model-based point set registration algorithms on artificial point sets and various 2D and 3D point sets, where DSMM demonstrates superior statistical accuracy and robustness, outperforming the competing algorithms.

12.
Biomed Eng Online ; 17(1): 77, 2018 Jun 15.
Article in English | MEDLINE | ID: mdl-29903023

ABSTRACT

BACKGROUND: In diffusion-weighted magnetic resonance imaging (DWI) using single-shot echo planar imaging (ss-EPI), reduced field-of-view (FOV) excitation and sensitivity encoding (SENSE) can each increase in-plane resolution to some degree. However, when the two techniques are combined to further increase resolution without pronounced geometric distortion, the resulting images are often corrupted by high levels of noise and artifacts due to the numerical restrictions of SENSE. This study therefore aimed to provide a reconstruction method that addresses this problem. METHODS: The proposed reconstruction method considers all imaging data jointly while incorporating the motion-induced phase variations among excitations, in order to suppress the high levels of noise and artifacts arising when reduced-FOV imaging is combined with traditional SENSE. In vivo human spine diffusion images from ten subjects were acquired at 1.5 T, reconstructed using the proposed method, and compared with SENSE magnitude-average results over a range of reduction factors in the reduced FOV. The images were evaluated by two radiologists using visual scores (considering distortion, noise, and artifact levels) from 1 to 10. RESULTS: The proposed method reconstructed images with greatly reduced noise and artifacts compared to SENSE magnitude averaging. The mean g-factors remained close to 1, along with enhanced signal-to-noise ratio efficiency. The image quality scores of the proposed method were significantly higher (P < 0.01) than those of SENSE magnitude averaging for all evaluated reduction factors. CONCLUSION: The proposed method improves the combination of SENSE and reduced FOV for high-resolution ss-EPI DWI with reduced noise and artifacts.


Subject(s)
Diffusion Magnetic Resonance Imaging/methods , Echo-Planar Imaging , Signal-To-Noise Ratio , Artifacts , Cervical Vertebrae/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Spinal Cord/diagnostic imaging
14.
PLoS One ; 11(4): e0153369, 2016.
Article in English | MEDLINE | ID: mdl-27077923

ABSTRACT

Many modalities of magnetic resonance imaging (MRI) have been confirmed to be of great diagnostic value in glioma grading. Contrast-enhanced T1-weighted imaging allows the recognition of blood-brain barrier breakdown, while perfusion-weighted imaging and MR spectroscopic imaging enable quantitative measurement of perfusion parameters and metabolic alterations, respectively. These modalities can potentially improve glioma grading if combined properly. In this study, a Bayesian network, a powerful and flexible method for probabilistic analysis under uncertainty, was used to combine features extracted from contrast-enhanced T1-weighted imaging, perfusion-weighted imaging, and MR spectroscopic imaging. The networks were constructed using the K2 algorithm along with manual determination, and the distribution parameters were learned by maximum likelihood estimation. Grading performance was evaluated in a leave-one-out analysis, achieving an overall accuracy of 92.86% and an area under the curve of 0.9577 in the receiver operating characteristic analysis, given all available features observed in the 56 patients. The results show that Bayesian networks are promising for combining features from multiple MRI modalities to improve grading performance.


Subject(s)
Brain Neoplasms/pathology , Glioma/pathology , Adult , Aged , Aged, 80 and over , Algorithms , Area Under Curve , Bayes Theorem , Brain Neoplasms/diagnostic imaging , Echo-Planar Imaging , Female , Glioma/diagnostic imaging , Humans , Likelihood Functions , Magnetic Resonance Imaging , Male , Middle Aged , Neoplasm Grading , ROC Curve , Radiography , Signal-To-Noise Ratio