Results 1 - 12 of 12
2.
Eur Radiol ; 34(2): 1190-1199, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37615767

ABSTRACT

OBJECTIVES: Existing brain extraction models should be further optimized to provide more information for oncological analysis. We aimed to develop an nnU-Net-based deep learning model for automated brain extraction on contrast-enhanced T1-weighted (T1CE) images in the presence of brain tumors. METHODS: This is a multi-center, retrospective study involving 920 patients. A total of 720 cases with four types of intracranial tumors from private institutions were collected and used as the training group and the internal test group. The Mann-Whitney U test (U test) was used to investigate whether model performance was associated with pathological type and tumor characteristics. The generalization of the model was then independently tested on public datasets consisting of 100 glioma and 100 vestibular schwannoma cases. RESULTS: In the internal test, the model achieved promising performance, with a median Dice similarity coefficient (DSC) of 0.989 (interquartile range (IQR), 0.988-0.991) and a Hausdorff distance (HD) of 6.403 mm (IQR, 5.099-8.426 mm). The U test suggested slightly lower performance in the meningioma and vestibular schwannoma groups. The U test also suggested a significant difference in the peritumoral edema group, with a median DSC of 0.990 (IQR, 0.989-0.991, p = 0.002) and a median HD of 5.916 mm (IQR, 5.000-8.000 mm, p = 0.049). In the external test, our model also showed robust performance, with a median DSC of 0.991 (IQR, 0.983-0.998) and HD of 8.972 mm (IQR, 6.164-13.710 mm). CONCLUSIONS: For automated processing of MRI neuroimaging data in the presence of brain tumors, the proposed model can perform brain extraction that preserves important superficial structures for oncological analysis. CLINICAL RELEVANCE STATEMENT: The proposed model serves as a radiological tool for image preprocessing in tumor cases, focusing on superficial brain structures, which could streamline the workflow and enhance the efficiency of subsequent radiological assessments.
KEY POINTS: • The nnU-Net-based model is capable of segmenting significant superficial structures in brain extraction. • The proposed model showed feasible performance regardless of pathological type or tumor characteristics. • The model generalized well to the public datasets.


Subjects
Brain Neoplasms; Meningeal Neoplasms; Neuroma, Acoustic; Humans; Retrospective Studies; Neuroma, Acoustic/diagnostic imaging; Image Processing, Computer-Assisted/methods; Brain; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging
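[Editor's note] The DSC and HD reported in the record above are the standard overlap and boundary metrics for evaluating segmentation masks. A minimal NumPy sketch of both, on toy 2D masks (illustrative only; the study worked with 3D volumes and presumably an optimized implementation):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between foreground point sets.

    Brute force, O(n*m) memory; fine for small masks, illustrative only.
    """
    p = np.argwhere(pred)
    g = np.argwhere(gt)
    # Pairwise Euclidean distances between foreground voxels.
    d = np.sqrt(((p[:, None, :] - g[None, :, :]) ** 2).sum(axis=-1))
    # Largest distance from any point in one set to the nearest point in the other.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 4x4 squares shifted by one column.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 3:7] = True
print(dice_coefficient(pred, gt))    # 0.75
print(hausdorff_distance(pred, gt))  # 1.0
```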
3.
J Med Internet Res ; 25: e44119, 2023 12 15.
Article in English | MEDLINE | ID: mdl-38100181

ABSTRACT

BACKGROUND: Convolutional neural networks (CNNs) have produced state-of-the-art results in meningioma segmentation on magnetic resonance imaging (MRI). However, images obtained from different institutions, protocols, or scanners may show significant domain shift, leading to performance degradation and challenging model deployment in real clinical scenarios. OBJECTIVE: This research aims to investigate the realistic performance of a well-trained meningioma segmentation model when deployed across different health care centers and to verify methods to enhance its generalization. METHODS: This study was performed in four centers. A total of 606 patients with 606 MRIs were enrolled between January 2015 and December 2021. Manual segmentations, determined through consensus readings by neuroradiologists, were used as the ground truth mask. The model was previously trained using a standard supervised CNN called Deeplab V3+ and was deployed and tested separately in the four health care centers. To determine the appropriate approach to mitigating the observed performance degradation, two methods were used: unsupervised domain adaptation and supervised retraining. RESULTS: The trained model showed state-of-the-art performance in tumor segmentation in two health care institutions, with a Dice ratio of 0.887 (SD 0.108, 95% CI 0.903-0.925) in center A and a Dice ratio of 0.874 (SD 0.800, 95% CI 0.854-0.894) in center B. In the other two health care institutions, however, performance declined, with Dice ratios of 0.631 (SD 0.157, 95% CI 0.556-0.707) in center C and 0.649 (SD 0.187, 95% CI 0.566-0.732) in center D, as their MRIs were obtained using different scanning protocols. Unsupervised domain adaptation showed a significant improvement in performance scores, with Dice ratios of 0.842 (SD 0.073, 95% CI 0.820-0.864) in center C and 0.855 (SD 0.097, 95% CI 0.826-0.886) in center D.
Nonetheless, it did not outperform supervised retraining, which achieved Dice ratios of 0.899 (SD 0.026, 95% CI 0.889-0.906) in center C and 0.886 (SD 0.046, 95% CI 0.870-0.903) in center D. CONCLUSIONS: Deploying the trained CNN model in different health care institutions may show significant performance degradation due to the domain shift of MRIs. Under this circumstance, the use of unsupervised domain adaptation or supervised retraining should be considered, taking into account the balance between clinical requirements, model performance, and the size of the available data.


Subjects
Meningeal Neoplasms; Meningioma; Humans; Meningioma/diagnostic imaging; Consensus; Neural Networks, Computer; Retrospective Studies; Meningeal Neoplasms/diagnostic imaging
4.
Eur Radiol ; 33(11): 7482-7493, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37488296

ABSTRACT

OBJECTIVES: To investigate whether morphological changes after surgery and delta-radiomics of the optic chiasm obtained from routine MRI could help predict postoperative visual recovery of pituitary adenoma patients. METHODS: A total of 130 pituitary adenoma patients were retrospectively enrolled and divided into the recovery group (n = 87) and non-recovery group (n = 43) according to visual outcome 1 year after endoscopic endonasal transsphenoidal surgery. Morphological parameters of the optic chiasm were measured preoperatively and postoperatively, including chiasmal thickness, deformed angle, and suprasellar extension. Delta-radiomics of the optic chiasm were calculated based on features extracted from preoperative and postoperative coronal T2-weighted images, followed by machine learning modeling using the least absolute shrinkage and selection operator wrapped with a support vector machine through fivefold cross-validation in the development set. The delta-radiomics model was independently evaluated in the test set and compared with a combined model incorporating delta-radiomics and significant clinical and morphological parameters. RESULTS: Postoperative morphological changes of the optic chiasm were not significant predictors of the visual outcome. In contrast, the delta-radiomics model showed good performance in predicting visual recovery, with an AUC of 0.821 in the development set and 0.811 in the independent test set. Moreover, the combined model incorporating age and delta-radiomics features of the optic chiasm achieved the highest AUCs of 0.841 and 0.840 in the development set and independent test set, respectively. CONCLUSIONS: Our proposed machine learning models based on delta-radiomics of the optic chiasm can be used to predict postoperative visual recovery of pituitary adenoma patients.
CLINICAL RELEVANCE STATEMENT: Our delta-radiomics-based models from MRI enable accurate visual recovery predictions in pituitary adenoma patients who underwent endoscopic endonasal transsphenoidal surgery, facilitating better clinical decision-making and ultimately improving patient outcomes. KEY POINTS: • Prediction of the postoperative visual outcome for pituitary adenoma patients is important but challenging. • Delta-radiomics of the optic chiasm after surgical decompression showed better prognostic performance than its morphological changes. • The proposed machine learning models can serve as novel approaches to predict visual recovery for pituitary adenoma patients in clinical practice.


Assuntos
Adenoma , Neoplasias Hipofisárias , Humanos , Neoplasias Hipofisárias/diagnóstico por imagem , Neoplasias Hipofisárias/cirurgia , Quiasma Óptico/diagnóstico por imagem , Estudos Retrospectivos , Imageamento por Ressonância Magnética/métodos , Prognóstico , Adenoma/diagnóstico por imagem , Adenoma/cirurgia
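[Editor's note] The record above computes "delta" features from pre- and postoperative images. The abstract does not state the exact delta definition; a common convention is the relative change of each feature, sketched below with hypothetical feature matrices (assumption, not the study's confirmed method):

```python
import numpy as np

def delta_features(pre, post, eps=1e-8):
    """Relative change of each radiomic feature after surgery.

    pre, post: (n_patients, n_features) arrays of features extracted from
    pre- and postoperative images. The relative-difference form used here
    is one common convention, not necessarily the study's exact formula.
    """
    return (post - pre) / (np.abs(pre) + eps)

# Hypothetical feature matrices: two patients, two features each.
pre = np.array([[10.0, 2.0], [4.0, 8.0]])
post = np.array([[12.0, 1.0], [4.0, 10.0]])
delta = delta_features(pre, post)
```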
5.
Int J Surg ; 109(4): 896-904, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-36999782

ABSTRACT

BACKGROUND: Predicting the postoperative visual outcome of pituitary adenoma patients is important but remains challenging. This study aimed to identify a novel prognostic predictor that can be automatically obtained from routine MRI using a deep learning approach. MATERIALS AND METHODS: A total of 220 pituitary adenoma patients were prospectively enrolled and stratified into the recovery and nonrecovery groups according to the visual outcome at 6 months after endoscopic endonasal transsphenoidal surgery. The optic chiasm was manually segmented on preoperative coronal T2WI, and its morphometric parameters were measured, including suprasellar extension distance, chiasmal thickness, and chiasmal volume. Univariate and multivariate analyses were conducted on clinical and morphometric parameters to identify predictors of visual recovery. Additionally, a deep learning model for automated segmentation and volumetric measurement of the optic chiasm was developed with the nnU-Net architecture and evaluated in a multicenter dataset covering 1026 pituitary adenoma patients from four institutions. RESULTS: A larger preoperative chiasmal volume was significantly associated with better visual outcomes (P = 0.001). Multivariate logistic regression suggested it could serve as an independent predictor of visual recovery (odds ratio = 2.838, P < 0.001). The auto-segmentation model showed good performance and generalizability in the internal (Dice = 0.813) and three independent external test sets (Dice = 0.786, 0.818, and 0.808, respectively). Moreover, the model achieved accurate volumetric evaluation of the optic chiasm, with an intraclass correlation coefficient of more than 0.83 in both internal and external test sets. CONCLUSION: The preoperative volume of the optic chiasm could be used as a prognostic predictor of visual recovery in pituitary adenoma patients after surgery.
Moreover, the proposed deep learning-based model allows automated segmentation and volumetric measurement of the optic chiasm on routine MRI.


Subjects
Adenoma; Pituitary Neoplasms; Humans; Optic Chiasm/diagnostic imaging; Optic Chiasm/surgery; Pituitary Neoplasms/diagnostic imaging; Pituitary Neoplasms/surgery; Pituitary Neoplasms/complications; Cohort Studies; Endoscopy; Prognosis; Adenoma/diagnostic imaging; Adenoma/surgery
6.
Eur Radiol ; 33(4): 2665-2675, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36396792

ABSTRACT

OBJECTIVES: To develop a U-Net-based deep learning model for automated segmentation of craniopharyngioma. METHODS: A total of 264 patients diagnosed with craniopharyngioma were included in this research. Pre-treatment MRIs were collected, annotated, and used as ground truth to train and evaluate the deep learning model. Thirty-eight patients from another institution were used for independent external testing. The proposed segmentation model was constructed based on a U-Net architecture. The Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (95HD), Jaccard value, true positive rate (TPR), and false positive rate (FPR) of each case were calculated. One-way ANOVA was used to investigate whether model performance was associated with the radiological characteristics of the tumors. RESULTS: The proposed model performed well in segmentation, with an average DSC of 0.840, Jaccard of 0.734, TPR of 0.820, FPR of 0.000, and 95HD of 3.669 mm. It performed feasibly on the independent external test set, with an average DSC of 0.816, Jaccard of 0.704, TPR of 0.765, FPR of 0.000, and 95HD of 4.201 mm. One-way ANOVA also suggested that performance was not statistically associated with radiological characteristics, including predominant composition (p = 0.370), lobulated shape (p = 0.353), compressed or enclosed ICA (p = 0.809), and cavernous sinus invasion (p = 0.283). CONCLUSIONS: The proposed deep learning model shows promising results for the automated segmentation of craniopharyngioma. KEY POINTS: • The segmentation model based on U-Net showed good performance in segmentation of craniopharyngioma. • The proposed model showed good performance regardless of the radiological characteristics of craniopharyngioma. • The model proved feasible on the independent external dataset obtained from another center.


Subjects
Craniopharyngioma; Deep Learning; Pituitary Neoplasms; Humans; Craniopharyngioma/diagnostic imaging; Neural Networks, Computer; Magnetic Resonance Imaging/methods; Pituitary Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
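[Editor's note] The per-case metrics in the record above (DSC, Jaccard, TPR, FPR) all derive from the voxelwise confusion counts of predicted versus ground-truth masks. A minimal sketch on toy masks (illustrative only, not the study's implementation):

```python
import numpy as np

def voxel_metrics(pred, gt):
    """Segmentation metrics from voxelwise confusion counts."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)     # voxels correctly labeled foreground
    fp = np.sum(pred & ~gt)    # spurious foreground voxels
    fn = np.sum(~pred & gt)    # missed foreground voxels
    tn = np.sum(~pred & ~gt)   # correctly labeled background
    return {
        "dsc": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "tpr": tp / (tp + fn),  # sensitivity
        "fpr": fp / (fp + tn),
    }

# Toy 8x8 masks: two 4x4 squares offset by one row and one column.
pred = np.zeros((8, 8), dtype=bool); pred[1:5, 1:5] = True
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
m = voxel_metrics(pred, gt)
```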
7.
Transl Cancer Res ; 11(11): 4079-4088, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36523299

ABSTRACT

Background: The aim of this study was to investigate whether texture analysis-based machine learning could be used in the presurgical differentiation of high-grade gliomas in adults. Methods: This is a single-center retrospective study involving 150 patients diagnosed with glioblastoma (GBM) (n=50), anaplastic astrocytoma (AA) (n=50), or anaplastic oligodendroglioma (AO) (n=50). Patients were randomly split into training and validation groups at a 4:1 ratio. Forty texture features were extracted from contrast-enhanced T1-weighted images using LIFEx software. Two feature-selection methods, distance correlation (DC) and least absolute shrinkage and selection operator (LASSO), were separately used to select optimal features. The selected features were fed into a linear discriminant analysis (LDA) classifier and a support vector machine (SVM) classifier to establish multiple classification models. Performance was evaluated using the accuracy, Kappa value, and area under the receiver operating characteristic curve (AUC) of each model. Results: The overall diagnostic accuracies of the LDA-based models in the validation group were 76.0% (DC + LDA) and 74.3% (LASSO + LDA), while those of the SVM-based models were 58.0% (DC + SVM) and 63.3% (LASSO + SVM). The combination of DC and LDA reached the highest diagnostic accuracy, with AUCs of 0.999, 0.834, and 0.865 for GBM, AA, and AO, respectively, indicating that this model could distinguish GBM from AA and AO well, whereas the differentiation between AA and AO was relatively difficult. Conclusions: This study indicated that MRI texture analysis combined with the LDA algorithm has the potential to be used to distinguish the subtypes of high-grade glioma.

8.
J Clin Med ; 11(24)2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36556097

ABSTRACT

PURPOSE: The goal of this study was to develop end-to-end convolutional neural network (CNN) models that can noninvasively discriminate papillary craniopharyngioma (PCP) from adamantinomatous craniopharyngioma (ACP) on MR images, requiring no manual segmentation. MATERIALS AND METHODS: A total of 97 patients diagnosed with ACP or PCP were included. Pretreatment contrast-enhanced T1-weighted images were collected and used as the input of the CNNs. Six models were established based on six networks: VGG16, ResNet18, ResNet50, ResNet101, DenseNet121, and DenseNet169. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were used to assess the performance of these deep neural networks. Five-fold cross-validation was applied to evaluate the models. RESULTS: The six networks yielded feasible performance, with AUCs of at least 0.78 for classification. The model based on ResNet50 achieved the highest AUC of 0.838 ± 0.062, with an accuracy of 0.757 ± 0.052, a sensitivity of 0.608 ± 0.198, and a specificity of 0.845 ± 0.034. Moreover, the results indicated that the CNN method had competitive performance compared to the radiomics-based method, which required manual segmentation for feature extraction and further feature selection. CONCLUSIONS: MRI-based deep neural networks can noninvasively differentiate ACP from PCP to facilitate the personalized assessment of craniopharyngiomas.
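[Editor's note] The accuracy, sensitivity, and specificity reported for these classifiers all come from the binary confusion matrix. A minimal sketch with hypothetical labels (the class coding, 1 = PCP, is an assumption for illustration):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
    }

# Hypothetical predictions on ten cases (1 = PCP, 0 = ACP).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
m = classification_metrics(y_true, y_pred)
print(m["accuracy"])  # 0.8
```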

9.
J Pers Med ; 11(10)2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34683132

ABSTRACT

Preoperative prediction of visual recovery after pituitary adenoma surgery remains a challenge. We aimed to investigate the value of MRI-based radiomics of the optic chiasm in predicting postoperative visual field outcome using machine learning. A total of 131 pituitary adenoma patients were retrospectively enrolled and divided into the recovery group (N = 79) and the non-recovery group (N = 52) according to visual field outcome following surgical chiasmal decompression. Radiomic features were extracted from the optic chiasm on preoperative coronal T2-weighted imaging. Least absolute shrinkage and selection operator regression was first used to select optimal features. Three machine learning algorithms were then employed to develop radiomic models to predict visual recovery: support vector machine (SVM), random forest, and linear discriminant analysis. The prognostic performance of the models was evaluated via five-fold cross-validation. The radiomic models built with the different machine learning algorithms all achieved areas under the curve (AUCs) over 0.750. The SVM-based model showed the best predictive performance for visual field recovery, with the highest AUC of 0.824. In conclusion, machine learning-based radiomics of the optic chiasm on routine MR imaging could potentially serve as a novel approach to preoperatively predict visual recovery and allow personalized counseling for individual pituitary adenoma patients.
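[Editor's note] The five-fold cross-validation used to evaluate these radiomic models partitions the cohort so every patient is validated exactly once. A plain-NumPy sketch of the split logic (the study presumably used a library routine such as scikit-learn's `KFold`; this is illustrative):

```python
import numpy as np

def kfold_splits(n, k=5, seed=42):
    """Yield shuffled k-fold (train, validation) index splits."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)            # shuffle patient indices once
    folds = np.array_split(idx, k)      # k near-equal folds
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

# 131 patients, as in the study above; each case validates exactly once.
splits = list(kfold_splits(131, k=5))
```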

10.
J Pers Med ; 11(8)2021 Aug 12.
Article in English | MEDLINE | ID: mdl-34442431

ABSTRACT

The purpose of this study was to determine whether a deep-learning-based assessment system could facilitate preoperative grading of meningioma. This was a retrospective study conducted at two institutions covering 643 patients. The system, designed with a cascade network structure, was developed using deep-learning technology for automatic tumor detection, visual assessment, and grading prediction. Specifically, a modified U-Net convolutional neural network was first established to segment tumor images. The segmentations were then fed into rendering algorithms for spatial reconstruction and into a DenseNet convolutional neural network for grading prediction. The trained models were integrated as a system, and its robustness was tested on an external dataset from the second institution involving different magnetic resonance imaging platforms. The segmentation model achieved noteworthy performance, with Dice coefficients of 0.920 ± 0.009 in the validation group. With accurately segmented tumor images, the rendering model delicately reconstructed the tumor body and clearly displayed the important intracranial vessels. The DenseNet model also achieved high accuracy, with an area under the curve of 0.918 ± 0.006 and accuracy of 0.901 ± 0.039 when classifying tumors into low-grade and high-grade meningiomas. Moreover, the system exhibited good performance on the external validation dataset.

11.
Front Oncol ; 11: 521313, 2021.
Article in English | MEDLINE | ID: mdl-34141605

ABSTRACT

PURPOSE: To investigate the diagnostic ability of radiomics-based machine learning in differentiating atypical low-grade astrocytoma (LGA) from anaplastic astrocytoma (AA). METHODS: The current study involved 175 patients diagnosed with LGA (n = 95) or AA (n = 80) and treated in the Neurosurgery Department of West China Hospital from April 2010 to December 2019. Radiomics features were extracted from pre-treatment contrast-enhanced T1-weighted imaging (T1C). Nine diagnostic models were established with three selection methods [distance correlation, least absolute shrinkage and selection operator (LASSO), and gradient boosting decision tree (GBDT)] and three classification algorithms [linear discriminant analysis (LDA), support vector machine (SVM), and random forest (RF)]. The sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) of each model were calculated, and the diagnostic ability of each model was evaluated based on these indices. RESULTS: Nine radiomics-based machine learning models with promising diagnostic performance were established. Among the LDA-based models, the optimal one was the combination of LASSO + LDA, with an AUC of 0.825. Among the SVM-based models, Distance Correlation + SVM showed the most promising diagnostic performance, with an AUC of 0.808. Among the RF-based models, Distance Correlation + RF was the optimal model, with an AUC of 0.821. CONCLUSION: Radiomics-based machine learning has the potential to be used in differentiating atypical LGA from AA with reliable diagnostic performance.
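[Editor's note] The AUC used throughout these records to rank models is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal sketch of that formulation (illustrative; the studies presumably used library implementations):

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Count positive-negative pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical classifier scores on four cases.
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(auc_score(y, s))  # 0.75
```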

12.
Front Oncol ; 9: 1371, 2019.
Article in English | MEDLINE | ID: mdl-31921635

ABSTRACT

Objectives: To investigate the ability of radiomics features from MRI to differentiate anaplastic oligodendroglioma (AO) from atypical low-grade oligodendroglioma using machine-learning algorithms. Methods: A total of 101 qualified patients (50 with AO and 51 with atypical low-grade oligodendroglioma) were enrolled in this retrospective, single-center study. Forty radiomics features of tumor images derived from six matrices were extracted from contrast-enhanced T1-weighted (T1C) images and fluid-attenuated inversion recovery (FLAIR) images. Three selection methods were used to select the optimal features for the classifiers: distance correlation, least absolute shrinkage and selection operator (LASSO), and gradient boosting decision tree (GBDT). Three machine-learning classifiers were then adopted to generate discriminative models: linear discriminant analysis, support vector machine, and random forest (RF). Receiver operating characteristic analysis was conducted to evaluate the discriminative performance of each model. Results: Nine predictive models were established based on radiomics features from T1C images and FLAIR images. All of the classifiers showed feasible differentiation ability, with AUCs above 0.840 when combined with a suitable selection method. For models based on T1C images, the combination of LASSO and the RF classifier achieved the highest AUC of 0.904 in the validation group. For models based on FLAIR images, the combination of GBDT and the RF classifier showed the highest AUC of 0.861 in the validation group. Conclusion: A radiomics-based machine-learning approach could potentially serve as a feasible method for distinguishing AO from atypical low-grade oligodendroglioma.
