1.
Oral Dis; 29(8): 3325-3336, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36520552

ABSTRACT

OBJECTIVES: Imaging-based interpretation of the benignancy or malignancy of parotid gland tumors (PGTs) is a critical consideration prior to surgery, given the therapeutic and prognostic value of this distinction. This study investigates the application of a deep learning-based method for preoperative stratification of PGTs. MATERIALS AND METHODS: Using the 3D DenseNet-121 architecture and a dataset of 117 volumetric arterial-phase contrast-enhanced CT scans, we developed and tested a binary classifier for PGT discrimination. We compared the discriminative performance of the model on the test set to that of 12 junior and 12 senior head and neck clinicians. In addition, the potential clinical utility of the model was evaluated by measuring changes between the unassisted and model-assisted performance of junior clinicians. RESULTS: The model achieved a sensitivity, specificity, PPV, NPV and F1-score of 0.955 (95% CI 0.751-0.998), 0.667 (95% CI 0.241-0.940), 0.913 (95% CI 0.705-0.985), 0.800 (95% CI 0.299-0.989) and 0.933, respectively, comparable to those of practicing clinicians. Furthermore, there were statistically significant increases in junior clinicians' specificity, PPV, NPV and F1-score in differentiating benign from malignant PGTs when their unassisted and model-assisted performance were compared. CONCLUSION: Our results provide evidence that a deep learning-based method may offer assistance in the binary classification of PGTs.
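The five metrics reported above are all derived from a 2x2 confusion matrix. The sketch below is hypothetical — the abstract does not give the test-set counts — but the illustrative counts (TP=21, FP=2, TN=4, FN=1) happen to reproduce the reported point estimates:

```python
# Hypothetical sketch: computing the binary-classification metrics reported
# in the abstract from confusion-matrix counts. The counts used in the usage
# example are illustrative, not taken from the study.

def binary_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, PPV, NPV and F1 for the positive class."""
    sensitivity = tp / (tp + fn)              # recall for the malignant class
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                      # positive predictive value (precision)
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return sensitivity, specificity, ppv, npv, f1

# With TP=21, FP=2, TN=4, FN=1 this yields roughly
# 0.955, 0.667, 0.913, 0.800 and 0.933, matching the abstract's values.
sens, spec, ppv, npv, f1 = binary_metrics(tp=21, fp=2, tn=4, fn=1)
```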


Subjects
Deep Learning; Parotid Neoplasms; Humans; Parotid Gland/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Tomography, X-Ray Computed; Parotid Neoplasms/diagnostic imaging; Retrospective Studies
2.
Front Oncol; 12: 841262, 2022.
Article in English | MEDLINE | ID: mdl-35463386

ABSTRACT

Tongue squamous cell carcinoma (TSCC) is the most common oral malignancy. The proliferation status of tumor cells, as indicated by the Ki-67 index, has a great impact on the tumor microenvironment, therapeutic decision-making, and patient prognosis. However, proliferation status is most commonly obtained through immunohistochemical staining of biopsy or surgical specimens; a noninvasive preoperative method remains a challenge. Hence, in this study, we aimed to validate a novel method for predicting the proliferation status of TSCC from contrast-enhanced CT (CECT) based on artificial intelligence (AI). CECT images of the lesion area from 179 TSCC patients were analyzed using a convolutional neural network (CNN). Patients were divided into high and low proliferation status groups according to the Ki-67 index, with the cohort median of 20% as the cutoff. The model was trained and the test set was then automatically classified. Results on the test set showed an accuracy of 65.38% and an AUC of 0.7172, suggesting that the majority of samples were classified correctly and the model was stable. Our study demonstrates the possibility of predicting the proliferation status of TSCC noninvasively before operation using AI on CECT.
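The labelling step described above — dichotomizing patients at the cohort median Ki-67 index — can be sketched as follows. This is a minimal illustration, not the study's code; whether the 20% cutoff itself falls in the high or low group is an assumption, since the abstract does not specify the threshold handling:

```python
# Hypothetical sketch of the Ki-67 labelling described in the abstract:
# patients are split into high- vs low-proliferation groups using the
# cohort median Ki-67 index (20%) as the cutoff. Assigning the boundary
# value to the "high" group is an assumption.

KI67_CUTOFF = 20.0  # percent; the cohort median reported in the abstract

def proliferation_label(ki67_percent):
    """Return 1 for high proliferation status, 0 for low."""
    return 1 if ki67_percent >= KI67_CUTOFF else 0

# Illustrative Ki-67 indices for three hypothetical patients.
labels = [proliferation_label(k) for k in (5.0, 20.0, 45.0)]
```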

3.
Front Oncol; 11: 793417, 2021.
Article in English | MEDLINE | ID: mdl-35155194

ABSTRACT

OBJECTIVE: The purpose of this study was to use a convolutional neural network (CNN) to make preoperative differential diagnoses between ameloblastoma (AME) and odontogenic keratocyst (OKC) on cone-beam CT (CBCT). METHODS: The CBCT images of 178 AMEs and 172 OKCs were retrospectively retrieved from the Hospital of Stomatology, Wuhan University. The dataset was randomly split into a training set of 272 cases and a test set of 78 cases. Slices containing lesions were retained and then cropped into suitable patches for training. The Inception v3 deep learning algorithm was used, and its diagnostic performance was compared with that of oral and maxillofacial surgeons. RESULTS: The sensitivity, specificity, accuracy, and F1-score were 87.2%, 82.1%, 84.6%, and 85.0%, respectively. By comparison, the average scores on the same indices for 7 senior oral and maxillofacial surgeons were 60.0%, 71.4%, 65.7%, and 63.6%, respectively, and those of 30 junior oral and maxillofacial surgeons were 63.9%, 53.2%, 58.5%, and 60.7%, respectively. CONCLUSION: The deep learning model was able to differentiate these two lesions with better diagnostic accuracy than the clinical surgeons. The results indicate that the CNN may provide assistance for clinical diagnosis, especially for inexperienced surgeons.
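The preprocessing step above — cropping lesion-containing slices into fixed-size patches — could be sketched as below. The patch size (299x299, Inception v3's default input resolution) and the zero-padding strategy at image borders are assumptions; the abstract gives no implementation details:

```python
import numpy as np

# Hypothetical sketch of the preprocessing described in the abstract:
# CBCT slices containing the lesion are cropped to fixed-size patches
# centred on the lesion before being fed to Inception v3. Regions that
# fall outside the slice are zero-padded.

PATCH = 299  # Inception v3's default input size (assumed here)

def crop_patch(slice_2d, cy, cx, size=PATCH):
    """Crop a size x size patch centred on (cy, cx), zero-padding at borders."""
    h, w = slice_2d.shape
    out = np.zeros((size, size), dtype=slice_2d.dtype)
    y0, x0 = cy - size // 2, cx - size // 2        # top-left corner, may be negative
    ys, xs = max(y0, 0), max(x0, 0)                # clipped source start
    ye, xe = min(y0 + size, h), min(x0 + size, w)  # clipped source end
    out[ys - y0:ye - y0, xs - x0:xe - x0] = slice_2d[ys:ye, xs:xe]
    return out

# Usage: crop a centred patch from a synthetic 400x400 slice.
img = np.arange(400 * 400, dtype=np.float32).reshape(400, 400)
patch = crop_patch(img, 200, 200)
```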
