Results 1 - 20 of 62
1.
Heliyon ; 10(9): e29897, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38694030

ABSTRACT

Gliomas are the most common type of cerebral tumor; their incidence has increased over the last decade, and they carry a high mortality rate. Efficient treatment requires fast, accurate diagnosis and grading of tumors. At present, tumor grade is established by histopathological evaluation, a time-consuming procedure that relies on the pathologist's experience. Here we propose a supervised machine learning procedure for tumor grading that uses quantitative phase images of unstained tissue samples acquired by digital holographic microscopy. The algorithm uses an extensive set of statistical and texture parameters computed from these images. The procedure was able to classify six classes of images (normal tissue and five glioma subtypes) and to distinguish between glioma types from grades II to IV (with the highest sensitivity and specificity for grade II astrocytoma and grade III oligodendroglioma, and very good scores in recognizing grade III anaplastic astrocytoma and grade IV glioblastoma). The procedure bolsters clinical diagnostic accuracy, offering a swift and reliable means of tumor characterization and grading, ultimately enhancing the treatment decision-making process.
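The overall shape of such a pipeline — statistical descriptors computed per image, fed to a supervised classifier — can be sketched as below. The feature set, synthetic "phase images", and random-forest model are illustrative placeholders, not the parameters or classifier the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def first_order_features(img):
    """A small illustrative subset of statistical descriptors of one image."""
    flat = img.ravel().astype(float)
    mu, sd = flat.mean(), flat.std()
    return np.array([
        mu, sd, np.median(flat),
        ((flat - mu) ** 3).mean() / (sd ** 3 + 1e-12),   # skewness
        ((flat - mu) ** 4).mean() / (sd ** 4 + 1e-12),   # kurtosis
    ])

rng = np.random.default_rng(0)
# Synthetic stand-in for phase images of two tissue classes
images = [rng.normal(loc=c, scale=1.0 + c, size=(64, 64)) for c in (0, 1) for _ in range(30)]
labels = [c for c in (0, 1) for _ in range(30)]
X = np.stack([first_order_features(im) for im in images])
scores = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5)
print(round(scores.mean(), 2))
```

In the actual study the feature set is far larger and includes texture parameters; the point of the sketch is only the images-to-features-to-classifier flow.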

2.
J Integr Neurosci ; 23(5): 100, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38812383

ABSTRACT

BACKGROUND: Multiple radiomics models have been proposed for grading glioma using different algorithms, features, and magnetic resonance imaging sequences. This research seeks to assess the current overall performance of radiomics for glioma grading. METHODS: A systematic literature review of the Ovid MEDLINE, PubMed, and Ovid EMBASE databases was performed for publications on radiomics for glioma grading published between 2012 and 2023. The systematic review was carried out following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses criteria. RESULTS: In the meta-analysis, a total of 7654 patients from 40 articles were assessed. The R package mada was used for modeling the joint estimates of specificity (SPE) and sensitivity (SEN). Pooled event rates across studies were computed with a random-effects meta-analysis. The heterogeneity of SPE and SEN was assessed with the χ2 test. Overall values of SPE and SEN for differentiating high-grade gliomas (HGGs) from low-grade gliomas (LGGs) were 84% and 91%, respectively. For discriminating World Health Organization (WHO) grade 4 from WHO grade 3, the overall SPE was 81% and the SEN was 89%. Modern non-linear classifiers showed a better trend, whereas textural features tended to be the best-performing (29%) and the most used. CONCLUSIONS: Our findings confirm that the present diagnostic performance of radiomics for glioma grading is superior in terms of SEN and SPE for the HGGs vs. LGGs discrimination task compared to the WHO grade 4 vs. 3 task.
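The pooling step behind such overall sensitivity estimates can be sketched as a simplified univariate DerSimonian-Laird random-effects pooling on the logit scale. This is not the bivariate model the R package mada implements, and the per-study true-positive/false-negative counts below are invented for illustration.

```python
import numpy as np

def pool_random_effects(tp, fn):
    """DerSimonian-Laird random-effects pooling of per-study sensitivities
    on the logit scale (simplified univariate sketch)."""
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    p = (tp + 0.5) / (tp + fn + 1.0)             # continuity-corrected sensitivity
    y = np.log(p / (1 - p))                       # logit transform
    v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)       # approximate within-study variance
    w = 1.0 / v
    y_fixed = (w * y).sum() / w.sum()             # fixed-effect estimate
    q = (w * (y - y_fixed) ** 2).sum()            # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled_logit = (w_star * y).sum() / w_star.sum()
    return 1.0 / (1.0 + np.exp(-pooled_logit))    # back-transform to a proportion

sens = pool_random_effects(tp=[90, 45, 120], fn=[10, 5, 20])
print(round(sens, 3))
```

A full diagnostic meta-analysis would model sensitivity and specificity jointly, since they are negatively correlated across thresholds.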


Subject(s)
Brain Neoplasms , Glioma , Magnetic Resonance Imaging , Neoplasm Grading , Glioma/diagnostic imaging , Glioma/pathology , Humans , Magnetic Resonance Imaging/standards , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Neuroimaging/standards , Neuroimaging/methods , Radiomics
3.
J Imaging Inform Med ; 37(4): 1711-1727, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38413460

ABSTRACT

Gliomas are primary brain tumors that arise from neural stem cells or glial precursors. Diagnosis of glioma is based on histological evaluation of pathological cell features and molecular markers. Gliomas are infiltrated by myeloid cells that accumulate preferentially in malignant tumors, and their abundance inversely correlates with survival, which is of interest for cancer immunotherapies. To avoid time-consuming and laborious manual examination of images, we propose a deep learning approach for automatic multiclass classification of tumor grades. As an alternative way of investigating characteristics of brain tumor grades, we implemented a protocol for learning, discovering, and quantifying tumor microenvironment elements on our glioma dataset. Using only single-stained biopsies, we derived characteristic, differentiating tumor microenvironment phenotypic neighborhoods. The study was complicated by the small size of the available human leukocyte antigen-stained glioma tissue microarray dataset (206 images in 5 classes) as well as its imbalanced class distribution. This challenge was addressed by image augmentation for underrepresented classes. In practice, we considered two scenarios: whole-slide supervised learning classification, and an unsupervised cell-to-cell analysis looking for patterns in the microenvironment. In the supervised learning investigation, we evaluated 6 distinct model architectures. Experiments revealed that a DenseNet121 architecture surpasses the baseline's test-set accuracy by a significant margin of 9%, achieving a score of 69% and increasing accuracy in discerning challenging WHO grade 2 and 3 cases. All experiments were carried out in a cross-validation manner. The tumor microenvironment analysis suggested an important role for myeloid cells and their accumulation in characterizing glioma grades. These promising approaches can be used as an additional diagnostic tool to improve assessment during intraoperative examination or for subtyping tissues for treatment selection, potentially easing the workflow of pathologists and oncologists.
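Augmenting underrepresented classes to balance a small dataset, as this abstract describes, can be sketched with simple geometric transforms. The flip/rotate choices and the toy data below are illustrative assumptions; the study's exact augmentation pipeline is not specified here.

```python
import numpy as np

def augment_to_balance(images, labels, rng):
    """Oversample minority classes with random 90-degree rotations and flips
    until every class matches the majority-class count (illustrative sketch)."""
    images, labels = list(images), list(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    target = max(counts.values())
    for c, n in counts.items():
        pool = [im for im, lab in zip(images, labels) if lab == c]
        for _ in range(target - n):
            im = pool[rng.integers(len(pool))]
            im = np.rot90(im, k=rng.integers(4))   # random 0/90/180/270 rotation
            if rng.random() < 0.5:
                im = np.fliplr(im)                 # random horizontal flip
            images.append(im)
            labels.append(c)
    return images, labels

rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) for _ in range(12)]
labs = [0] * 9 + [1] * 3                           # imbalanced: 9 vs 3
imgs_b, labs_b = augment_to_balance(imgs, labs, rng)
print(labs_b.count(0), labs_b.count(1))
```

For histology images, rotations and flips are usually safe augmentations because tissue has no canonical orientation.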


Subject(s)
Brain Neoplasms , Deep Learning , Glioma , Neoplasm Grading , Tumor Microenvironment , Humans , Glioma/pathology , Brain Neoplasms/pathology , Neoplasm Grading/methods , Image Interpretation, Computer-Assisted/methods
4.
Med Image Anal ; 91: 102990, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37864912

ABSTRACT

The fusion of multi-modal data, e.g., pathology slides and genomic profiles, can provide complementary information and benefit glioma grading. However, genomic profiles are difficult to obtain due to high costs and technical challenges, limiting the clinical application of multi-modal diagnosis. In this work, we investigate the realistic setting where paired pathology-genomic data are available during training, while only pathology slides are accessible at inference. To solve this problem, a comprehensive learning and adaptive teaching framework is proposed to improve the performance of pathological grading models by transferring privileged knowledge from a multi-modal teacher to a pathology student. For comprehensive learning by the multi-modal teacher, we propose a novel Saliency-Aware Masking (SA-Mask) strategy that explores richer disease-related features in both modalities by masking the most salient features. For adaptive teaching of the pathology student, we first devise a Local Topology Preserving and Discrepancy Eliminating Contrastive Distillation (TDC-Distill) module to align the feature distributions of the teacher and student models. Furthermore, because the multi-modal teacher may include incorrect information, we propose a Gradient-guided Knowledge Refinement (GK-Refine) module that builds a knowledge bank and adaptively absorbs reliable knowledge according to agreement in the gradient space. Experiments on the TCGA GBM-LGG dataset show that our proposed distillation framework improves pathological glioma grading and outperforms other knowledge distillation (KD) methods. Notably, with pathology slides alone, our method achieves performance comparable to existing multi-modal methods. The code is available at https://github.com/CUHK-AIM-Group/MultiModal-learning.
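The teacher-student transfer underlying this kind of framework is knowledge distillation. The sketch below shows the generic logit-distillation loss (label cross-entropy plus temperature-scaled KL to the teacher), which is a simplification: the paper's TDC-Distill and GK-Refine modules operate on features and gradients, not raw logits.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic distillation objective: cross-entropy on hard labels plus
    temperature-scaled KL divergence toward the teacher's soft targets."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    # T^2 rescaling keeps the soft-target gradient magnitude comparable
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

logits = np.array([[2.0, 0.0], [0.0, 2.0]])
loss = distillation_loss(logits, logits, np.array([0, 1]))   # teacher == student
print(round(loss, 3))
```

When the student matches the teacher exactly, the KL term vanishes and only the label cross-entropy remains.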


Subject(s)
Glioma , Learning , Humans
5.
Bioengineering (Basel) ; 10(8)2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37627772

ABSTRACT

Deep networks have shown strong performance in glioma grading; however, interpreting their decisions remains challenging due to glioma heterogeneity. To address these challenges, we propose the Causal Segmentation Framework (CSF), which aims to accurately predict high- and low-grade gliomas while simultaneously highlighting key subregions. Our framework utilizes a shrinkage segmentation method to identify subregions containing essential decision information. Moreover, we introduce a glioma grading module that combines deep learning and traditional approaches for precise grading. Our proposed model achieves the best performance among all models, with an AUC of 96.14%, an F1 score of 93.74%, an accuracy of 91.04%, a sensitivity of 91.83%, and a specificity of 88.88%. Additionally, our model exhibits efficient resource utilization, completing predictions within 2.31 s and occupying only 0.12 GB of memory during the test phase. Furthermore, our approach provides clear and specific visualizations of key subregions, surpassing other methods in terms of interpretability. In conclusion, the CSF demonstrates its effectiveness at accurately predicting glioma grades and identifying key subregions. The inclusion of causality in the CSF model enhances the reliability and accuracy of preoperative decision-making for gliomas, and its interpretable results can assist clinicians in assessment and treatment planning.

6.
Comput Biol Med ; 165: 107332, 2023 10.
Article in English | MEDLINE | ID: mdl-37598632

ABSTRACT

Accurate grading of brain tumors plays a crucial role in the diagnosis and treatment of glioma. While convolutional neural networks (CNNs) have shown promising performance in this task, their clinical applicability is still constrained by the interpretability and robustness of the models. In the conventional framework, the classification model is trained first, and then visual explanations are generated. However, this approach often leads to models that prioritize classification performance or complexity, making it difficult to achieve a precise visual explanation. Motivated by these challenges, we propose the Unified Visualization and Classification Network (UniVisNet), a novel framework that aims to improve both the classification performance and the generation of high-resolution visual explanations. UniVisNet addresses attention misalignment by introducing a subregion-based attention mechanism, which replaces traditional down-sampling operations. Additionally, multiscale feature maps are fused to achieve higher resolution, enabling the generation of detailed visual explanations. To streamline the process, we introduce the Unified Visualization and Classification head (UniVisHead), which directly generates visual explanations without the need for additional separation steps. Through extensive experiments, our proposed UniVisNet consistently outperforms strong baseline classification models and prevalent visualization methods. Notably, UniVisNet achieves remarkable results on the glioma grading task, including an AUC of 94.7%, an accuracy of 89.3%, a sensitivity of 90.4%, and a specificity of 85.3%. Moreover, UniVisNet provides visually interpretable explanations that surpass existing approaches. In conclusion, UniVisNet innovatively generates visual explanations in brain tumor grading by simultaneously improving the classification performance and generating high-resolution visual explanations. 
This work contributes to the clinical application of deep learning, empowering clinicians with comprehensive insights into the spatial heterogeneity of glioma.


Subject(s)
Brain Neoplasms , Glioma , Humans , Magnetic Resonance Imaging , Glioma/diagnostic imaging , Neural Networks, Computer , Brain Neoplasms/diagnostic imaging , Brain/pathology
7.
Med Image Anal ; 88: 102874, 2023 08.
Article in English | MEDLINE | ID: mdl-37423056

ABSTRACT

The fusion of multi-modal data, e.g., medical images and genomic profiles, can provide complementary information and further benefit disease diagnosis. However, multi-modal disease diagnosis confronts two challenges: (1) how to produce discriminative multi-modal representations by exploiting complementary information while avoiding noisy features from different modalities; and (2) how to obtain an accurate diagnosis when only a single modality is available in real clinical scenarios. To tackle these two issues, we present a two-stage disease diagnostic framework. In the first, multi-modal learning stage, we propose a novel Momentum-enriched Multi-Modal Low-Rank (M3LR) constraint to explore the high-order correlations and complementary information among different modalities, thus yielding more accurate multi-modal diagnosis. In the second stage, the privileged knowledge of the multi-modal teacher is transferred to the unimodal student via our proposed Discrepancy Supervised Contrastive Distillation (DSCD) and Gradient-guided Knowledge Modulation (GKM) modules, which benefit unimodal diagnosis. We have validated our approach on two tasks: (i) glioma grading based on pathology slides and genomic data, and (ii) skin lesion classification based on dermoscopy and clinical images. Experimental results on both tasks demonstrate that our proposed method consistently outperforms existing approaches in both multi-modal and unimodal diagnosis.


Subject(s)
Glioma , Humans , Learning , Motion , Skin
8.
Bioengineering (Basel) ; 10(6)2023 May 23.
Article in English | MEDLINE | ID: mdl-37370560

ABSTRACT

Three-dimensional (3D) image analyses are frequently applied to classification tasks. 3D-based machine learning systems generally follow one of two designs: a 3D-based deep learning model or a 3D-based task-specific framework. However, apart from a new approach named 3t2FTS, feature transforms operating from 3D to two-dimensional (2D) space have not been efficiently investigated for classification applications in 3D magnetic resonance imaging (3D MRI). In other words, no state-of-the-art feature transform strategy is available that achieves high accuracy while enabling the adaptation of 2D-based deep learning models to 3D MRI-based classification. To this end, this paper presents a new version of the 3t2FTS approach (3t2FTS-v2) that applies a transfer learning model to tumor categorization of 3D MRI data. For performance evaluation, the BraTS 2017/2018 dataset is used, which comprises high-grade glioma (HGG) and low-grade glioma (LGG) samples in four different sequences/phases. 3t2FTS-v2 effectively transforms features from 3D to 2D space using two textural feature sets: first-order statistics (FOS) and the gray-level run-length matrix (GLRLM). In 3t2FTS-v2, the normalization analyses differ from those of 3t2FTS in order to transform the spatial information accurately, in addition to the use of GLRLM features. The ResNet50 architecture is chosen for the HGG/LGG classification due to its remarkable performance in tumor grading. As a result, the proposed model achieves 99.64% accuracy in classifying the 3D data, guiding the literature on the importance of 3t2FTS-v2, which can be utilized not only for tumor grading but also for whole-brain tissue-based disease classification.
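The core idea of a 3D-to-2D feature transform — collapsing a volume into a 2D array of per-slice descriptors that a 2D network can consume — can be sketched with first-order statistics. This is loosely in the spirit of the approach, not the exact 3t2FTS-v2 recipe (which also uses GLRLM features and specific normalization).

```python
import numpy as np

def fos_slice_map(volume):
    """Collapse a 3D volume into a 2D array where each row holds first-order
    statistics of one axial slice (illustrative 3D-to-2D transform)."""
    feats = []
    for sl in volume:                        # iterate over axial slices
        flat = sl.ravel().astype(float)
        feats.append([flat.mean(), flat.std(), flat.min(), flat.max(),
                      np.percentile(flat, 10), np.percentile(flat, 90)])
    return np.asarray(feats)                 # shape: (n_slices, n_features)

vol = np.random.default_rng(0).random((8, 16, 16))   # toy 8-slice volume
fmap = fos_slice_map(vol)
print(fmap.shape)
```

The resulting 2D array can then be treated as an "image" and passed to a pre-trained 2D backbone for transfer learning.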

9.
Heliyon ; 9(3): e14654, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37009333

ABSTRACT

Glioma grading is critical for treatment selection, and the fine classification between glioma grades II and III remains a pathological challenge. Traditional systems based on a single deep learning (DL) model achieve only relatively low accuracy in distinguishing glioma grades II and III. By introducing ensemble DL models that combine DL and ensemble learning techniques, we achieved annotation-free glioma grading (grade II or III) from pathological images. We established multiple tile-level DL models using the residual network ResNet-18 architecture and then used these DL models as component classifiers to develop ensemble DL models for patient-level glioma grading. Whole-slide images of 507 subjects with low-grade glioma (LGG) from The Cancer Genome Atlas (TCGA) were included. The 30 DL models exhibited an average area under the curve (AUC) of 0.7991 in patient-level glioma grading. Single DL models showed large variation, and the median between-model cosine similarity was 0.9524, significantly smaller than the threshold of 1.0. The ensemble model based on logistic regression (LR) methods with 14 component DL classifiers (LR-14) demonstrated a mean patient-level accuracy and AUC of 0.8011 and 0.8945, respectively. Our proposed LR-14 ensemble DL model achieved state-of-the-art performance in glioma grade II and III classification based on unannotated pathological images.
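Combining component classifiers through a logistic-regression meta-learner, as the LR-14 model does, is a stacking ensemble. The sketch below uses invented per-patient probabilities in place of real tile-level DL outputs; note it evaluates on the training data for brevity, whereas a real pipeline would score held-out patients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
grade = rng.integers(0, 2, n)                  # 0 = grade II, 1 = grade III
# Hypothetical outputs of 14 component models: noisy views of the true label,
# mimicking the between-model variation the study reports
component_probs = np.clip(grade[:, None] + rng.normal(0, 0.4, (n, 14)), 0, 1)
# Logistic regression learns how much to trust each component classifier
stacker = LogisticRegression().fit(component_probs, grade)
auc = roc_auc_score(grade, stacker.predict_proba(component_probs)[:, 1])
print(auc > 0.8)
```

The learned coefficients effectively down-weight unreliable component models, which is why stacking can beat the average single model.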

10.
Biomed Phys Eng Express ; 9(3)2023 03 23.
Article in English | MEDLINE | ID: mdl-36898146

ABSTRACT

Purpose. To determine glioma grade by applying radiomic analysis or deep convolutional neural networks (DCNN) and to benchmark both approaches on broader validation sets. Methods. Seven public datasets were considered: (1) low-grade glioma or high-grade glioma (369 patients, BraTS'20); (2) well-differentiated liposarcoma or lipoma (115, LIPO); (3) desmoid-type fibromatosis or extremity soft-tissue sarcomas (203, Desmoid); (4) primary solid liver tumors, either malignant or benign (186, LIVER); (5) gastrointestinal stromal tumors (GISTs) or intra-abdominal gastrointestinal tumors radiologically resembling GISTs (246, GIST); (6) colorectal liver metastases (77, CRLM); and (7) lung metastases of metastatic melanoma (103, Melanoma). Radiomic analysis was performed on 464 radiomic features for the BraTS'20 dataset and 2016 features for the other datasets. Random forests (RF), Extreme Gradient Boosting (XGBoost), and a voting algorithm comprising both classifiers were tested. The parameters of the classifiers were optimized using a repeated nested stratified cross-validation process. The feature importance of each classifier was computed using the Gini index or permutation feature importance. DCNN was performed on 2D axial and sagittal slices encompassing the tumor. A balanced database was created, when necessary, using smart slice selection. ResNet50, Xception, EfficientNetB0, and EfficientNetB3 were transferred from the ImageNet application to the tumor classification task and fine-tuned. Five-fold stratified cross-validation was performed to evaluate the models. Classification performance was measured using multiple indices, including the area under the receiver operating characteristic curve (AUC). Results. The best radiomic approach was based on XGBoost for all datasets; AUC was 0.934 (BraTS'20), 0.86 (LIPO), 0.73 (LIVER), 0.844 (Desmoid), 0.76 (GIST), 0.664 (CRLM), and 0.577 (Melanoma). The best DCNN was based on EfficientNetB0; AUC was 0.99 (BraTS'20), 0.982 (LIPO), 0.977 (LIVER), 0.961 (Desmoid), 0.926 (GIST), 0.901 (CRLM), and 0.89 (Melanoma). Conclusion. Tumor classification can be accurately determined by adapting state-of-the-art machine learning algorithms to the medical context.
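Nested stratified cross-validation, as used here for hyperparameter optimization, can be sketched with scikit-learn: an inner search selects hyperparameters inside each outer fold, so the outer AUC is an unbiased estimate. The feature table is synthetic, and a random forest stands in for XGBoost to keep the example dependency-free.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic stand-in for a radiomic feature table (the study extracted 464-2016 features)
X, y = make_classification(n_samples=120, n_features=30, n_informative=8, random_state=0)

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)   # tunes hyperparameters
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # estimates performance
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"max_depth": [3, None]}, cv=inner, scoring="roc_auc")
scores = cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
print(round(scores.mean(), 2))
```

Tuning and evaluating on the same folds would leak information and inflate the AUC; nesting the search inside the outer split avoids that.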


Subject(s)
Deep Learning , Glioma , Radiomics , Glioma/diagnostic imaging , Glioma/pathology , Neoplasm Grading , Humans , Datasets as Topic
11.
J Clin Neurosci ; 110: 92-99, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36848737

ABSTRACT

BACKGROUND: To explore the diagnostic value and feasibility of shear-wave elastography and superb microvascular imaging for the intraoperative grading of glioma. MATERIALS AND METHODS: Forty-nine patients with glioma were included in this study. B-mode ultrasonography, Young's modulus in shear-wave elastography (SWE), and vascular architecture in superb microvascular imaging (SMI) of tumor tissue and peritumoral tissue were analyzed. Receiver operating characteristic (ROC) curve analysis was used to evaluate the diagnostic performance of SWE. A logistic regression model was used to calculate the predicted probability of an HGG diagnosis. RESULTS: Compared with LGG, HGG was often characterized by peritumoral edema in B mode (P < 0.05). There was a significant difference in Young's modulus between HGG and LGG; the diagnostic threshold between HGG and LGG was 13.05 kPa, with a sensitivity of 78.3% and a specificity of 76.9%. The vascular architectures of the tumor tissue and peritumoral tissue of HGG and LGG were significantly different (P < 0.05). The vascular architecture of peritumoral tissue in HGG was often characterized by distorted blood flow signals surrounding the tumor (14/26, 53.8%); in the tumor tissue, HGG often presented as dilated and bent vessels (19/26, 73.1%). The elasticity value in SWE and the tumor vascular architecture in SMI were correlated with the diagnosis of HGG. CONCLUSION: Intraoperative ultrasound (ioUS), especially SWE and SMI, is beneficial for differentiating HGG from LGG and may help optimize clinical surgical procedures.
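A diagnostic threshold with paired sensitivity/specificity, like the 13.05 kPa cutoff above, typically comes from ROC analysis with Youden's J. The sketch below uses invented stiffness readings (only the direction — stiffer HGG — is assumed from the abstract).

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Hypothetical Young's modulus readings (kPa): HGG assumed stiffer on average
kpa = np.concatenate([rng.normal(10, 3, 60),    # LGG-like values
                      rng.normal(18, 4, 60)])   # HGG-like values
is_hgg = np.array([0] * 60 + [1] * 60)
fpr, tpr, thr = roc_curve(is_hgg, kpa)
best = np.argmax(tpr - fpr)                     # Youden's J = sensitivity + specificity - 1
print(round(float(thr[best]), 1),               # operating threshold
      round(float(tpr[best]), 2),               # sensitivity at that threshold
      round(float(1 - fpr[best]), 2))           # specificity at that threshold
```

Every point on the ROC curve is a candidate cutoff; Youden's J simply picks the one maximizing sensitivity plus specificity.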


Subject(s)
Elasticity Imaging Techniques , Glioma , Humans , Elasticity Imaging Techniques/methods , Sensitivity and Specificity , Ultrasonography , ROC Curve , Glioma/diagnostic imaging , Glioma/surgery
12.
Eur J Radiol ; 160: 110721, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36738600

ABSTRACT

OBJECTIVES: To noninvasively assess the diagnostic performance of diffusion-weighted imaging (DWI), bi-exponential intravoxel incoherent motion imaging (IVIM), and three-dimensional pseudo-continuous arterial spin labeling (3D pCASL) in differentiating lower-grade gliomas (LGGs) from high-grade gliomas (HGGs) and in predicting isocitrate dehydrogenase (IDH) mutation status. MATERIALS AND METHODS: Ninety-five patients with pathologically confirmed grade 2-4 gliomas who underwent preoperative DWI, IVIM, and 3D pCASL were enrolled in this study. Student's t test and the Mann-Whitney U test were used to evaluate differences in DWI, IVIM, and 3D pCASL parameters between LGG and HGG, as well as between mutant and wild-type IDH in grade 2 and 3 diffuse astrocytoma; receiver operating characteristic (ROC) analysis was used to assess the diagnostic performance. RESULTS: The values of ADCmean, ADCmin, Dmean, and Dmin in HGGs were lower than in LGGs, while the values of CBFmean and CBFmax in HGGs were higher than in LGGs. In ROC analysis, the AUC values of Dmean, Dmin, and CBFmax were 0.827, 0.878, and 0.839, respectively. The combination of CBFmax and Dmin displayed the highest diagnostic performance in distinguishing LGGs from HGGs, with an AUC of 0.906, a sensitivity of 82.4%, and a specificity of 86.4%. In grade 2 and 3 diffuse astrocytoma patients, ADCmin, Dmean, Dmin, CBFmean, and CBFmax showed significant differences between the IDH-mutant and IDH-wild-type groups (p < 0.05, 0.001, 0.001, 0.01, and 0.001, respectively), and the AUC values were 0.709, 0.849, 0.919, 0.755, and 0.873, respectively. Similarly, the combination of CBFmax and Dmin demonstrated the highest AUC (0.938) in predicting IDH mutation status, with a sensitivity of 92.9% and a specificity of 95.5%. CONCLUSION: The combination of IVIM and 3D pCASL can be used to noninvasively predict the histologic grade and IDH mutation status of glioma.
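"Combining" two imaging parameters for a single ROC analysis is usually done by fitting a logistic regression on both and scoring its predicted probability. The values below are invented; only the directions reported in the abstract (higher CBFmax and lower Dmin in HGG) are assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 95
hgg = rng.integers(0, 2, n)                         # 1 = high-grade, 0 = lower-grade
# Hypothetical parameter values with the abstract's reported directions
cbf_max = rng.normal(60 + 40 * hgg, 15)             # perfusion: higher in HGG
d_min = rng.normal(1.1 - 0.4 * hgg, 0.15)           # diffusion: lower in HGG
X = np.column_stack([cbf_max, d_min])
# Standardize so the two parameters' very different scales don't dominate the fit
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, hgg)
auc = roc_auc_score(hgg, model.predict_proba(X)[:, 1])
print(auc > 0.85)
```

The combined probability acts as a single continuous marker, so the usual ROC machinery (AUC, threshold selection) applies unchanged.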


Subject(s)
Astrocytoma , Brain Neoplasms , Glioma , Humans , Isocitrate Dehydrogenase/genetics , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/genetics , Brain Neoplasms/pathology , Spin Labels , Neoplasm Grading , Glioma/diagnostic imaging , Glioma/genetics , Glioma/pathology , Diffusion Magnetic Resonance Imaging/methods , Astrocytoma/diagnostic imaging , Astrocytoma/genetics , Mutation , Magnetic Resonance Imaging/methods , Retrospective Studies
13.
Phys Med ; 107: 102538, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36796177

ABSTRACT

PURPOSE: Analysis pipelines based on the computation of radiomic features on medical images are widely used exploration tools across a large variety of image modalities. This study aims to define a robust processing pipeline based on radiomics and machine learning (ML) to analyze multiparametric Magnetic Resonance Imaging (MRI) data and discriminate between high-grade (HGG) and low-grade (LGG) gliomas. METHODS: The dataset consists of 158 multiparametric MRI scans of patients with brain tumors, publicly available on The Cancer Imaging Archive and preprocessed by the BraTS organization committee. Three different image intensity normalization algorithms were applied, and 107 features were extracted for each tumor region, setting the intensity values according to different discretization levels. The predictive power of radiomic features in the LGG versus HGG categorization was evaluated using random forest classifiers. The impact of the normalization techniques and of the different image discretization settings was studied in terms of classification performance. A set of MRI-reliable features was defined by selecting the features extracted with the most appropriate normalization and discretization settings. RESULTS: The results show that using MRI-reliable features improves the performance in glioma grade classification (AUC = 0.93 ± 0.05) with respect to the use of raw features (AUC = 0.88 ± 0.08) and robust features (AUC = 0.83 ± 0.08), the latter defined as those not depending on image normalization and intensity discretization. CONCLUSIONS: These results confirm that image normalization and intensity discretization strongly impact the performance of ML classifiers based on radiomic features. Thus, special attention should be paid to the image preprocessing step before typical radiomic and ML analyses are carried out.
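The two preprocessing steps the study varies — intensity normalization and discretization — can be sketched as below. The z-score normalization and equal-width binning shown are common choices in radiomics pipelines, presented as one plausible configuration rather than the study's exact settings.

```python
import numpy as np

def zscore_normalize(img, mask):
    """Normalize intensities using the mean/std inside a region mask."""
    vals = img[mask]
    return (img - vals.mean()) / (vals.std() + 1e-12)

def discretize_fixed_bins(img, n_bins=32):
    """Map intensities to n_bins equal-width gray levels, as done before
    computing texture matrices (GLCM, GLRLM, ...)."""
    lo, hi = img.min(), img.max()
    levels = ((img - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    return np.clip(levels, 0, n_bins - 1)

rng = np.random.default_rng(0)
img = rng.normal(100, 20, (64, 64))          # toy image
mask = np.ones_like(img, dtype=bool)
norm = zscore_normalize(img, mask)
levels = discretize_fixed_bins(norm)
print(int(levels.min()), int(levels.max()))
```

Because texture features are computed from counts over these gray levels, changing the bin count or the normalization changes the feature values themselves, which is exactly the sensitivity the study quantifies.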


Subject(s)
Brain Neoplasms , Glioma , Multiparametric Magnetic Resonance Imaging , Humans , Glioma/diagnostic imaging , Glioma/pathology , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Machine Learning , Magnetic Resonance Imaging/methods , Retrospective Studies
14.
Magn Reson Imaging ; 99: 91-97, 2023 06.
Article in English | MEDLINE | ID: mdl-36803634

ABSTRACT

PURPOSE: To evaluate the diagnostic performance of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) metrics for glioma grading on a point-to-point basis. METHODS: Forty patients with treatment-naïve glioma underwent DCE-MR examination and stereotactic biopsy. DCE-derived parameters, including the endothelial transfer constant (Ktrans), volume of the extravascular-extracellular space (ve), fractional plasma volume (fpv), and reflux transfer rate (kep), were measured within ROIs on DCE maps accurately matched with the biopsies used for histologic grade diagnosis. Differences in parameters between grades were evaluated by Kruskal-Wallis tests. The diagnostic accuracy of each parameter and their combination was assessed using receiver operating characteristic curves. RESULTS: Eighty-four independent biopsy samples from 40 patients were analyzed in our study. Statistically significant differences in Ktrans and ve were observed between grades, except for ve between grades 2 and 3. Ktrans showed good to excellent accuracy in discriminating grade 2 from 3, 3 from 4, and 2 from 4 (area under the curve (AUC) = 0.802, 0.801, and 0.971, respectively). Ve indicated good accuracy in discriminating grade 3 from 4 and 2 from 4 (AUC = 0.874 and 0.899, respectively). The combined parameter demonstrated fair to excellent accuracy in discriminating grade 2 from 3, 3 from 4, and 2 from 4 (AUC = 0.794, 0.899, and 0.982, respectively). CONCLUSION: Our study identified Ktrans, ve, and their combination as accurate predictors for glioma grading.
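The Kruskal-Wallis test used here compares a parameter's distribution across three or more grades without assuming normality. The sketch below uses invented Ktrans values; only the upward trend with grade is assumed for illustration.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Hypothetical Ktrans readings (min^-1) per histologic grade (values invented)
ktrans_by_grade = {
    2: rng.normal(0.05, 0.02, 25),
    3: rng.normal(0.12, 0.04, 30),
    4: rng.normal(0.25, 0.08, 29),
}
# Kruskal-Wallis: rank-based test for any distributional difference among groups
stat, p = kruskal(*ktrans_by_grade.values())
print(p < 0.05)
```

A significant omnibus result only says some grades differ; pairwise comparisons (as in the abstract's grade 2 vs. 3, 3 vs. 4, 2 vs. 4 analyses) are needed to say which.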


Subject(s)
Brain Neoplasms , Glioma , Humans , Brain Neoplasms/pathology , Neoplasm Grading , Contrast Media , Glioma/pathology , Magnetic Resonance Imaging/methods , Biopsy
15.
MAGMA ; 36(1): 43-53, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36326937

ABSTRACT

OBJECTIVE: Despite the critical role of Magnetic Resonance Imaging (MRI) in the diagnosis of brain tumours, there are still many pitfalls in their exact grading, in particular for gliomas. In this regard, we aimed to examine the potential of Transfer Learning (TL) and Machine Learning (ML) algorithms for the accurate grading of gliomas on MRI images. MATERIALS AND METHODS: The dataset included four types of axial MRI images of glioma brain tumours of grades I-IV: T1-weighted, T2-weighted, FLAIR, and T1-weighted Contrast-Enhanced (T1-CE). Images were resized, normalized, and randomly split into training, validation, and test sets. ImageNet pre-trained Convolutional Neural Networks (CNNs) were utilized for feature extraction and classification, using Adam and SGD optimizers. Logistic Regression (LR) and Support Vector Machine (SVM) methods were also implemented for classification in place of Fully Connected (FC) layers, taking advantage of the features extracted by each CNN. RESULTS: Evaluation metrics were computed to find the model with the best performance; the highest overall accuracy, 99.38%, was achieved by the model combining an SVM classifier with features extracted by pre-trained VGG-16. DISCUSSION: It was demonstrated that developing Computer-aided Diagnosis (CAD) systems using pre-trained CNNs and classification algorithms is a functional approach to automatically determining the grade of glioma brain tumours in MRI images. Using these models is an excellent alternative to invasive methods and helps doctors diagnose more accurately before treatment.
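Replacing a CNN's fully connected head with an SVM amounts to training the SVM on the backbone's feature vectors. The sketch below substitutes random Gaussian clusters for real VGG-16 embeddings, so only the features-to-SVM step is illustrated, not the CNN extraction itself.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in for CNN feature vectors (e.g., a frozen pre-trained backbone's
# penultimate-layer outputs) for four glioma grades
n_per, dim = 40, 64
X = np.vstack([rng.normal(g, 1.0, (n_per, dim)) for g in range(4)])
y = np.repeat(np.arange(4), n_per)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)   # SVM stands in for the FC layers
acc = accuracy_score(y_te, clf.predict(X_te))
print(acc > 0.9)
```

With real embeddings the workflow is identical: run images through the frozen CNN once, cache the feature vectors, then fit the SVM on those.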


Subject(s)
Brain Neoplasms , Glioma , Humans , Magnetic Resonance Imaging , Glioma/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Machine Learning
16.
Comput Biol Med ; 152: 106457, 2023 01.
Article in English | MEDLINE | ID: mdl-36571937

ABSTRACT

In this paper, a novel magnetic resonance imaging (MRI)-oriented attention-based glioma grading network (AGGN) is proposed. By applying a dual-domain attention mechanism, both channel and spatial information are considered when assigning weights, which helps highlight the key modalities and locations in the feature maps. Multi-branch convolution and pooling operations are applied in a multi-scale feature extraction module to separately obtain shallow and deep features from each modality, and a multi-modal information fusion module is adopted to thoroughly merge low-level detailed and high-level semantic features, promoting synergistic interaction among the different modalities. The proposed AGGN is comprehensively evaluated through extensive experiments, and the results demonstrate its effectiveness and superiority over other advanced models, as well as high generalization ability and strong robustness. In addition, even without manually labeled tumor masks, the AGGN matches the performance of other state-of-the-art algorithms, alleviating the excessive reliance on supervised information in the end-to-end learning paradigm.
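The dual-domain (channel plus spatial) attention idea can be sketched in a few lines: a gate over channels re-weights modalities, and a gate over pixel positions re-weights locations. This is a generic numpy illustration of the concept, not the AGGN implementation.

```python
import numpy as np

def dual_attention(feat):
    """Weight a (C, H, W) feature map with a channel gate, then a spatial gate
    (generic sketch of dual-domain attention)."""
    # Channel attention: squeeze spatial dims, gate each channel with a sigmoid
    ch = 1 / (1 + np.exp(-feat.mean(axis=(1, 2))))      # shape (C,)
    out = feat * ch[:, None, None]
    # Spatial attention: squeeze channels, gate each location with a sigmoid
    sp = 1 / (1 + np.exp(-out.mean(axis=0)))            # shape (H, W)
    return out * sp[None, :, :]

x = np.random.default_rng(0).normal(size=(8, 4, 4))     # toy 8-channel feature map
y = dual_attention(x)
print(y.shape == x.shape)
```

In a trainable network the pooled statistics would pass through small learned layers before the sigmoid, so the gates adapt to the task instead of being fixed functions of the mean.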


Subject(s)
Glioma , Humans , Glioma/diagnostic imaging , Algorithms , Learning , Semantics
17.
Quant Imaging Med Surg ; 12(11): 5171-5183, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36330178

ABSTRACT

Background: Accurate grading of gliomas is a challenge in imaging diagnosis. This study aimed to evaluate the performance of a machine learning (ML) approach based on multiparametric diffusion-weighted imaging (DWI) in differentiating low- and high-grade adult gliomas. Methods: A model was developed from an initial cohort containing 74 patients with pathology-confirmed gliomas, who underwent 3 tesla (3T) diffusion magnetic resonance imaging (MRI) with 21 b values. In all, 112 histogram features were extracted from 16 parameters derived from seven diffusion models [monoexponential, intravoxel incoherent motion (IVIM), diffusion kurtosis imaging (DKI), fractional order calculus (FROC), continuous-time random walk (CTRW), stretched-exponential, and statistical]. Feature selection and model training were performed using five randomly permuted five-fold cross-validations. An internal test set (15 cases of the primary dataset) and an external cohort (n=55) imaged on a different scanner were used to validate the model. The diagnostic performance of the model was compared with that of a single DWI model and DWI radiomics using accuracy, sensitivity, specificity, and the area under the curve (AUC). Results: Seven significant multiparametric DWI features (two from the stretched-exponential and FROC models, and three from the CTRW model) were selected to construct the model. The multiparametric DWI model achieved the highest AUC (0.84, versus 0.71 for the single DWI model, P<0.05), an accuracy of 0.80 in the internal test, and both AUC and accuracy of 0.76 in the external test. Conclusions: Our multiparametric DWI model differentiated low- (LGG) from high-grade glioma (HGG) with better generalization performance than the established single DWI model. This result suggests that the application of an ML approach with multiple DWI models is feasible for the preoperative grading of gliomas.
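The histogram features this model is built on — per-map summary statistics inside the tumor mask — can be sketched as below. The parameter map and mask are synthetic stand-ins, and the seven descriptors are an illustrative subset of the 112 features described in the abstract.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def histogram_features(param_map, mask):
    """Histogram descriptors of one diffusion-parameter map inside a tumor mask."""
    v = param_map[mask].astype(float)
    return {
        "mean": v.mean(), "std": v.std(), "median": np.median(v),
        "p10": np.percentile(v, 10), "p90": np.percentile(v, 90),
        "skewness": skew(v), "kurtosis": kurtosis(v),
    }

rng = np.random.default_rng(0)
adc_like = rng.normal(1.0, 0.2, (32, 32))   # hypothetical parameter map
mask = adc_like > 0.8                        # crude stand-in for a tumor ROI
feats = histogram_features(adc_like, mask)
print(sorted(feats))
```

Repeating this over each of the 16 parameter maps and concatenating the results yields the kind of feature table the study's selection and cross-validation steps operate on.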

18.
J Neurooncol ; 160(3): 577-589, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36434486

ABSTRACT

PURPOSE: Gliomas are the most commonly occurring brain tumour in adults, and there remains no cure; treatment strategies are based on tumour grade. All treatment options aim to prolong survival, maintain quality of life, and slow the inevitable progression from low-grade to high-grade. Despite imaging advancements, the only reliable method to grade a glioma is biopsy, and even this is prone to undergrading. Positron emission tomography (PET) imaging with amino acid tracers such as [18F]fluorodopa (18F-FDOPA), [11C]methionine (11C-MET), and [18F]fluoroethyltyrosine (18F-FET) is being increasingly used in the diagnosis and management of gliomas. METHODS: In this review we discuss the literature available on the ability of 18F-FDOPA-PET to distinguish low- from high-grade in newly diagnosed gliomas. RESULTS: In 2016 the Response Assessment in Neuro-Oncology (RANO) working group and the European Association for Neuro-Oncology (EANO) published recommendations on the clinical use of PET imaging in gliomas. Since these recommendations, a number of studies have examined whether 18F-FDOPA-PET can identify areas of high-grade transformation before the typical radiological features of transformation, such as contrast enhancement, are visible on standard magnetic resonance imaging (MRI). CONCLUSION: Larger studies are needed to validate 18F-FDOPA-PET as a non-invasive marker of glioma grade and a predictor of tumour molecular characteristics that could guide decisions surrounding surgical resection.


Subject(s)
Brain Neoplasms , Glioma , Adult , Humans , Quality of Life , Neoplasm Grading , Glioma/pathology , Positron-Emission Tomography/methods , Brain Neoplasms/pathology , Magnetic Resonance Imaging
19.
Comput Methods Programs Biomed ; 226: 107165, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36215857

ABSTRACT

BACKGROUND AND OBJECTIVE: Gliomas are graded using multimodal magnetic resonance imaging, which provides important information for treatment and prognosis. When modalities are missing, grading performance degrades. We propose a robust brain tumor grading model that can handle missing modalities. METHODS: Our method was developed and tested on the Brain Tumor Segmentation Challenge 2017 dataset (n = 285) via nested five-fold cross-validation. It adopts adversarial learning to generate, in the latent space, the features of missing modalities relative to the features obtained from the full set of modalities. An attention-based fusion block then fuses the features of each available modality into a shared representation. Our method is compared with two competing models: one in which the 15 missing-modality scenarios are explicitly considered, and one that uses joint training with random dropouts. RESULTS: Our method outperforms the two competing methods in classifying high-grade gliomas (HGGs) and low-grade gliomas (LGGs), achieving an area under the curve of 87.76% on average across all missing-modality scenarios. The activation maps derived with our method confirm that it focuses on the enhancing portion of the tumor in HGGs and on the edema and non-enhancing portions of the tumor in LGGs, which is consistent with prior expertise. An ablation study shows the added benefits of the fusion block and adversarial learning for handling missing modalities. CONCLUSION: Our method grades gliomas robustly in all cases of missing modalities. The proposed network might have positive implications for glioma care by learning features robust to missing modalities.
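The attention-based fusion described above learns a weight per available modality and combines the per-modality features into one shared vector. As a rough stdlib sketch (not the paper's implementation — the scoring function here is just the feature-vector norm standing in for a learned attention score):

```python
import math

def attention_fuse(modality_feats):
    """Fuse per-modality feature vectors into a shared representation
    using a softmax attention weight per available modality."""
    # Stand-in attention score: the L2 norm of each modality's features
    scores = [math.sqrt(sum(x * x for x in f)) for f in modality_feats]
    # Numerically stable softmax over the available modalities
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum across modalities, dimension by dimension
    dim = len(modality_feats[0])
    fused = [sum(w * f[i] for w, f in zip(weights, modality_feats))
             for i in range(dim)]
    return fused, weights

# Only two of the four MRI modalities available (e.g. T1w and FLAIR):
fused, w = attention_fuse([[1.0, 0.0], [0.0, 2.0]])
```

Because the softmax is taken only over the modalities actually present, the same fusion block works unchanged in every missing-modality scenario, which is the property the paper exploits.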


Subject(s)
Brain Neoplasms , Glioma , Humans , Neoplasm Grading , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Glioma/diagnostic imaging , Glioma/pathology , Magnetic Resonance Imaging/methods , Brain/pathology
20.
Phys Med Biol ; 67(15)2022 07 19.
Article in English | MEDLINE | ID: mdl-35767979

ABSTRACT

Objective. Glioma, one of the most fatal cancers worldwide, is divided into low-grade glioma (LGG) and high-grade glioma (HGG), and its image-based grading has become an active topic of research. Magnetic resonance imaging (MRI) is a vital diagnostic tool for brain tumor detection, analysis, and surgical planning, and accurate, automatic glioma grading is crucial for speeding up diagnosis and treatment planning. To address the (1) large number of parameters, (2) complex computation, and (3) poor speed of current deep-learning glioma grading algorithms, this paper proposes a lightweight 3D UNet deep learning framework that improves classification accuracy over existing methods. Approach. To improve efficiency while maintaining accuracy, depthwise separable convolution is applied in place of the standard 3D convolution of the existing 3D UNet to reduce the number of network parameters, and a spatial and channel squeeze & excitation module reweights the feature maps, suppressing redundant parameters and strengthening the performance of the model. Main results. A total of 560 patients with glioma were retrospectively reviewed; all underwent MRI before surgery. Experiments were carried out on T1w, T2w, fluid-attenuated inversion recovery (FLAIR), and contrast-enhanced T1w (CET1w) images. Additionally, a way of marking the tumor area with a cube bounding box is presented, which shows no significant difference in model performance from the manually drawn ground truth. Evaluated on the test dataset, the proposed model achieved good results (accuracy of 89.29%). Significance. This work achieves LGG/HGG grading with a simple, effective, and non-invasive diagnostic approach, providing diagnostic suggestions for clinical usage and thereby helping to hasten treatment decisions.
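The parameter saving from replacing a standard 3D convolution with a depthwise separable one (a per-channel k×k×k depthwise convolution followed by a 1×1×1 pointwise convolution) is simple arithmetic, sketched below; the layer sizes are illustrative, not taken from the paper.

```python
def conv3d_params(c_in, c_out, k=3):
    """Weights in a standard 3D convolution (bias terms ignored)."""
    return k ** 3 * c_in * c_out

def sep_conv3d_params(c_in, c_out, k=3):
    """Depthwise 3D conv (one k*k*k filter per input channel)
    followed by a 1x1x1 pointwise conv mapping c_in -> c_out."""
    return k ** 3 * c_in + c_in * c_out

std = conv3d_params(64, 128)      # 27 * 64 * 128 = 221184
sep = sep_conv3d_params(64, 128)  # 1728 + 8192   = 9920
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this layer the separable variant uses roughly 22x fewer weights, which is the kind of reduction that makes a "lightweight" 3D UNet feasible.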


Subject(s)
Brain Neoplasms , Glioma , Brain Neoplasms/pathology , Glioma/diagnostic imaging , Glioma/pathology , Humans , Magnetic Resonance Imaging/methods , Neoplasm Grading , Retrospective Studies