A multitask classification framework based on vision transformer for predicting molecular expressions of glioma.
Xu, Qian; Xu, Qian Qian; Shi, Nian; Dong, Li Na; Zhu, Hong; Xu, Kai.
Affiliation
  • Xu Q; Department of Radiology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou City, Jiangsu Province 221002, China.
  • Xu QQ; School of Medical Information and Engineering, Xuzhou Medical University, Xuzhou City, Jiangsu Province 221002, China.
  • Shi N; School of Medical Imaging, Xuzhou Medical University, Xuzhou City, Jiangsu Province 221002, China.
  • Dong LN; Department of Radiology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou City, Jiangsu Province 221002, China.
  • Zhu H; School of Medical Information and Engineering, Xuzhou Medical University, Xuzhou City, Jiangsu Province 221002, China. Electronic address: zhuhong@xzhmu.edu.cn.
  • Xu K; Department of Radiology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou City, Jiangsu Province 221002, China. Electronic address: xukaixz@126.com.
Eur J Radiol; 157: 110560, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36327857
PURPOSE: To develop a Vision Transformer model with a multitask classification framework for simultaneously predicting four molecular expressions of glioma from MR imaging.

MATERIALS AND METHODS: A total of 188 glioma patients (grades II-IV) with an immunohistochemical diagnosis of IDH, MGMT, Ki67 and P53 expression were enrolled in the study. A Vision Transformer (ViT) model comprising three independent networks based on T2WI, T1CWI and T2 + T1CWI (T2-net, T1C-net and TU-net) was developed to predict the four glioma molecular expressions simultaneously. Model performance was evaluated with accuracy, recall, precision, F1-score, and area under the receiver operating characteristic curve (AUC).

RESULTS: The proposed ViT model achieved high accuracy in predicting IDH, MGMT, Ki67 and P53 expression in gliomas. Among the three networks, TU-net achieved the best results, with the highest accuracy (range, 0.937-0.969), precision (range, 0.949-0.972), recall (range, 0.873-0.991), F1-score (range, 0.910-0.981) and AUC (range, 0.976-0.984). The proposed ViT model also outperformed existing convolutional neural network (CNN)-based models in direct comparisons.

CONCLUSION: The Vision Transformer is a reliable approach for predicting glioma molecular biomarkers and can be a viable alternative to CNNs.
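The abstract does not include implementation details, so the following is only a minimal sketch of the general idea it describes: a shared ViT encoder feeding four independent binary heads (one each for IDH, MGMT, Ki67 and P53), trained with a joint binary cross-entropy objective. The torchvision ViT-B/16 backbone, the input preprocessing, and all names and hyperparameters below are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical multitask ViT sketch: shared encoder, four binary heads.
# Backbone choice, input handling, and loss weighting are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16


class MultitaskViT(nn.Module):
    def __init__(self, num_tasks: int = 4):
        super().__init__()
        backbone = vit_b_16(weights=None)           # shared ViT-B/16 encoder
        embed_dim = backbone.heads.head.in_features
        backbone.heads = nn.Identity()               # drop the ImageNet classifier
        self.backbone = backbone
        # One binary-classification head per molecular marker.
        self.heads = nn.ModuleList(
            [nn.Linear(embed_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)                               # (B, embed_dim)
        return torch.cat([h(features) for h in self.heads], dim=1)  # (B, num_tasks)


model = MultitaskViT()
images = torch.randn(2, 3, 224, 224)   # e.g. MR slices resized/stacked to 3x224x224 (assumed)
logits = model(images)                 # one logit per marker: IDH, MGMT, Ki67, P53
labels = torch.randint(0, 2, (2, 4)).float()
loss = nn.BCEWithLogitsLoss()(logits, labels)  # joint binary cross-entropy over the four tasks
```

In this sketch, the T2-net, T1C-net and TU-net variants would correspond to training the same architecture on different input channels (T2WI only, T1CWI only, or both combined); how the sequences are actually fused in the paper is not specified in the abstract.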

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Brain Neoplasms / Glioma Study type: Prognostic_studies / Risk_factors_studies Limits: Humans Language: En Journal: Eur J Radiol Year: 2022 Document type: Article Country of affiliation: China Country of publication: Ireland