Results 1 - 7 of 7
1.
Sci Rep ; 13(1): 11676, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37468501

ABSTRACT

This study aims to identify histological classifiers from histopathological images of oral squamous cell carcinoma using convolutional neural network (CNN) deep learning models, and shows how the results can improve diagnosis. Histopathological samples of oral squamous cell carcinoma were prepared by oral pathologists. Images were divided into tiles on a virtual slide, and labels (squamous cell carcinoma, normal, and others) were applied. VGG16 and ResNet50 were used with the optimizers stochastic gradient descent with momentum and sharpness-aware minimization (SAM), with and without a learning rate scheduler. The conditions for achieving good CNN performance were identified by examining performance metrics. We used the area under the receiver operating characteristic curve (ROC-AUC) to statistically evaluate the improvement in diagnostic performance of six oral pathologists when the results of the selected CNN model were used for assisted diagnosis. VGG16 with SAM showed the best performance, with accuracy = 0.8622 and AUC = 0.9602. The diagnostic performance of the oral pathologists improved significantly when the diagnostic results of the deep learning model were used as supplementary diagnoses (p = 0.031). By considering the outputs of deep learning classifiers, the diagnostic accuracy of pathologists can be improved. This study contributes to the application of highly reliable deep learning models to oral pathological diagnosis.


Subject(s)
Squamous Cell Carcinoma, Deep Learning, Head and Neck Neoplasms, Mouth Neoplasms, Humans, Squamous Cell Carcinoma/diagnosis, Squamous Cell Carcinoma of the Head and Neck, Pathologists, Mouth Neoplasms/diagnosis
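The SAM optimizer named in the abstract above works in two steps: it first perturbs the weights toward the worst nearby point within a small radius, then updates the original weights using the gradient taken at that perturbed point. A minimal sketch on a toy one-dimensional loss (not the paper's CNN; the loss, learning rate, and radius are illustrative only):

```python
def loss_grad(w):
    """Gradient of the toy loss f(w) = (w - 3) ** 2."""
    return 2.0 * (w - 3.0)

def sam_step(w, lr=0.1, rho=0.05):
    g = loss_grad(w)
    # Step 1: move to the (approximate) worst point within radius rho
    # by stepping along the normalized gradient.
    eps = rho * g / (abs(g) + 1e-12)
    # Step 2: update the ORIGINAL weights with the gradient taken there.
    return w - lr * loss_grad(w + eps)

w = 0.0
for _ in range(100):
    w = sam_step(w)
# w settles close to the minimizer at 3.0
```

The intuition behind SAM's reported robustness is that minimizing this worst-case loss steers training toward flat minima, which tend to generalize better than sharp ones.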
2.
Sci Rep ; 12(1): 684, 2022 01 13.
Article in English | MEDLINE | ID: mdl-35027629

ABSTRACT

The Pell and Gregory and Winter's classifications are frequently used to classify mandibular third molars and are crucial for safe tooth extraction. This study aimed to evaluate the classification accuracy of convolutional neural network (CNN) deep learning models on cropped panoramic radiographs based on these classifications. We compared the diagnostic accuracy of single-task and multi-task learning after labeling 1330 images of mandibular third molars from digital radiographs taken at the Department of Oral and Maxillofacial Surgery of a general hospital (2014-2021). The mandibular third molar classifications were analyzed using a VGG16 CNN model. We statistically evaluated performance metrics [accuracy, precision, recall, F1 score, and area under the curve (AUC)] for each prediction. We found that single-task learning was superior to multi-task learning on all metrics (all p < 0.05), with large effect sizes and low p-values. Recall and F1 scores for position classification showed medium effect sizes in both single- and multi-task learning. To our knowledge, this is the first deep learning study to compare single-task and multi-task learning for the classification of mandibular third molars. Our results demonstrate the efficacy of implementing the Pell and Gregory and Winter's classifications as separate tasks.


Subject(s)
Deep Learning, Mandible, Third Molar/diagnostic imaging, Neural Networks (Computer), Area Under the Curve, Humans, Third Molar/anatomy & histology, Third Molar/surgery, Panoramic Radiography, Tooth Extraction/methods, Impacted Tooth/surgery
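The single-task vs. multi-task distinction compared above can be pictured as follows: a multi-task model attaches two classification heads (one per classification scheme) to one shared feature extractor, whereas a single-task model trains a separate backbone per scheme. A hypothetical sketch of the multi-task forward pass in plain Python; the feature values, layer sizes, and class counts are illustrative, not the paper's:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def linear(weights, bias, x):
    """Dense layer: one output logit per weight row."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Shared backbone features for one cropped radiograph (stand-in values).
features = [0.2, -1.0, 0.5, 1.5]

# Two task-specific heads on the SAME features (multi-task learning);
# 3 and 4 output classes are placeholders for the two classification schemes.
w_pell = [[0.1, 0.0, 0.2, -0.1], [0.0, 0.3, 0.1, 0.2], [-0.2, 0.1, 0.0, 0.4]]
b_pell = [0.0, 0.0, 0.0]
w_winter = [[0.2] * 4, [0.1] * 4, [-0.1] * 4, [0.0] * 4]
b_winter = [0.0] * 4

p_pell = softmax(linear(w_pell, b_pell, features))
p_winter = softmax(linear(w_winter, b_winter, features))
```

In multi-task training the two heads' losses are summed and backpropagated through the shared backbone together; the study's finding that single-task learning won suggests the two schemes did not share enough structure to benefit from a joint representation.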
3.
Sci Rep ; 12(1): 16925, 2022 10 08.
Article in English | MEDLINE | ID: mdl-36209283

ABSTRACT

In this study, deep learning was used to evaluate the accuracy of diagnosing the positional relationship between the inferior alveolar canal and the mandibular third molar. In the contact analysis, we investigated the diagnostic performance for the presence or absence of contact between the mandibular third molar and the inferior alveolar canal. As a continuity analysis, we also evaluated the diagnostic performance for bone continuity diagnosed based on computed tomography. A dataset of 1279 images of mandibular third molars from digital radiographs taken at the Department of Oral and Maxillofacial Surgery of a general hospital (2014-2021) was used for validation. The deep learning models were ResNet50 and ResNet50v2, with stochastic gradient descent and sharpness-aware minimization (SAM) as optimizers. The performance metrics were accuracy, precision, recall, specificity, F1 score, and area under the receiver operating characteristic curve (AUC). The results indicated that ResNet50v2 with SAM performed excellently in both the contact and continuity analyses. The accuracy and AUC were 0.860 and 0.890 for the contact analysis, and 0.766 and 0.843 for the continuity analysis. In the contact analysis, SAM and the deep learning model performed effectively. However, in the continuity analysis, none of the deep learning models demonstrated significant classification performance.


Subject(s)
Deep Learning, Third Molar, Mandible/diagnostic imaging, Mandibular Nerve/diagnostic imaging, Molar, Third Molar/diagnostic imaging, Panoramic Radiography
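The abstracts in this listing report the same set of binary-classification metrics (accuracy, precision, recall, specificity, F1). As a minimal reference for how those are computed from confusion-matrix counts; the example counts below are made up, not taken from any of the studies:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)          # of predicted positives, how many correct
    recall = tp / (tp + fn)             # sensitivity: true positives recovered
    specificity = tn / (tn + fp)        # true negatives recovered
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

m = binary_metrics(tp=80, fp=20, tn=70, fn=30)
# accuracy 0.75, precision 0.80, recall ~0.727, specificity ~0.778, F1 ~0.762
```

AUC, by contrast, is threshold-free: it is computed by sweeping the decision threshold over the model's predicted scores and integrating the resulting ROC curve, which is why the studies report it alongside the fixed-threshold metrics above.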
4.
Sci Rep ; 12(1): 13281, 2022 08 02.
Article in English | MEDLINE | ID: mdl-35918498

ABSTRACT

The use of sharpness-aware minimization (SAM) as an optimizer that achieves high performance for convolutional neural networks (CNNs) is attracting attention in various fields of deep learning. We used deep learning to perform classification diagnosis in oral exfoliative cytology and analyzed its performance, using SAM as the optimization algorithm to improve classification accuracy. Whole images of oral exfoliative cytology slides were cut into tiles and labeled by an oral pathologist. The CNN was VGG16, and stochastic gradient descent (SGD) and SAM were used as optimizers. Each was analyzed with and without a learning rate scheduler over 300 epochs. The performance metrics used were accuracy, precision, recall, specificity, F1 score, and AUC, together with statistical significance and effect size. All optimizers performed better with the learning rate scheduler. In particular, SAM showed very large effect sizes for accuracy (11.2) and AUC (11.0). SAM with a learning rate scheduler had the best performance of all models (AUC = 0.9328) and tended to suppress overfitting compared with SGD. In oral exfoliative cytology classification, CNNs using SAM with a learning rate scheduler showed the highest classification performance. These results suggest that SAM can play an important role in primary screening in the oral cytological diagnostic environment.


Subject(s)
Deep Learning, Algorithms, Neural Networks (Computer)
5.
PLoS One ; 17(7): e0269016, 2022.
Article in English | MEDLINE | ID: mdl-35895591

ABSTRACT

The attention mechanism, a means of determining which parts of the input data to emphasize, has attracted attention in various fields of deep learning in recent years. The purpose of this study was to evaluate the performance of the attention branch network (ABN) for implant classification using convolutional neural networks (CNNs). The data consisted of 10191 dental implant images of 13 implant brands, cropped as preprocessing to the site containing the dental implant, from digital panoramic radiographs of patients who underwent surgery at Kagawa Prefectural Central Hospital between 2005 and 2021. ResNet18, 50, and 152 were evaluated as CNN models, compared with and without the ABN. We used accuracy, precision, recall, specificity, F1 score, and area under the receiver operating characteristic curve (AUC) as performance metrics. We also performed statistical and effect size evaluations of the performance metrics over 30 runs of the plain CNNs and the ABN models. ResNet18 with ABN significantly improved dental implant classification performance on all performance metrics, with effect sizes equivalent to "huge" throughout. In contrast, the classification performance of ResNet50 and 152 deteriorated when the attention mechanism was added. ResNet18 showed considerably high compatibility with the ABN model in dental implant classification (AUC = 0.9993) despite its small number of parameters. A limitation of this study is that only ResNet was verified as a CNN; further studies are required for other CNN models.


Subject(s)
Deep Learning, Dental Implants, Humans, Neural Networks (Computer), ROC Curve, Panoramic Radiography
6.
Biomolecules ; 11(6)2021 05 30.
Article in English | MEDLINE | ID: mdl-34070916

ABSTRACT

It is necessary to accurately identify dental implant brands and the stage of treatment to ensure efficient care. Thus, the purpose of this study was to use multi-task deep learning to investigate a classifier that categorizes implant brands and treatment stages from dental panoramic radiographic images. For objective labeling, 9767 dental implant images of 12 implant brands and their treatment stages were obtained from the digital panoramic radiographs of patients who underwent procedures at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2020. Five deep convolutional neural network (CNN) models (ResNet18, 34, 50, 101 and 152) were evaluated. The accuracy, precision, recall, specificity, F1 score, and area under the curve score were calculated for each CNN. We also compared the multi-task and single-task accuracies of brand classification and implant treatment stage classification. Our analysis revealed that the larger the number of parameters and the deeper the network, the better the performance for both classifications. Multi-tasking significantly improved brand classification on all performance indicators except recall, and significantly improved all metrics in treatment stage classification. Using CNNs conferred high validity in the classification of dental implant brands and treatment stages. Furthermore, multi-task learning improved classification accuracy.


Subject(s)
Deep Learning, Dental Implants, Computer-Assisted Image Processing, Panoramic Radiography, Female, Humans, Japan, Male, Middle Aged
7.
Biomolecules ; 10(7)2020 07 01.
Article in English | MEDLINE | ID: mdl-32630195

ABSTRACT

In this study, we used panoramic X-ray images to classify different dental implant brands and clarify classification accuracy via deep convolutional neural networks (CNNs) with transfer-learning strategies. For objective labeling, 8859 implant images of 11 implant systems were used from digital panoramic radiographs obtained from patients who underwent dental implant treatment at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2019. Five deep CNN models (a basic CNN with three convolutional layers, VGG16 and VGG19 transfer-learning models, and finely tuned VGG16 and VGG19) were evaluated for implant classification. Among the five models, the finely tuned VGG16 exhibited the highest implant classification performance, followed by the finely tuned VGG19 and then the plain transfer-learning VGG16. We confirmed that the finely tuned VGG16 and VGG19 CNNs could accurately classify the 11 dental implant systems from panoramic X-ray images.


Subject(s)
Dental Implants/classification, Computer-Assisted Radiographic Image Interpretation/methods, Panoramic Radiography/methods, Deep Learning, Humans, Neural Networks (Computer)
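The plain transfer-learning vs. fine-tuning contrast in the study above comes down to which pretrained layers are allowed to update: plain transfer learning freezes every pretrained block and trains only the new classifier head, while fine-tuning also unfreezes the deepest block(s). A hedged, framework-free sketch of that freezing logic; the layer names, gradient values, and learning rate are all illustrative, not the paper's:

```python
# One scalar "parameter" per layer stands in for a whole weight tensor.
params = {"block1": 1.0, "block2": 1.0, "block3": 1.0, "classifier": 1.0}

def sgd_update(params, grads, frozen, lr=0.1):
    """Apply an SGD step only to layers that are not frozen."""
    return {name: (value if name in frozen else value - lr * grads[name])
            for name, value in params.items()}

grads = {name: 0.5 for name in params}  # stand-in gradients

# Plain transfer learning: all pretrained blocks frozen, only the head trains.
transfer = sgd_update(params, grads, frozen={"block1", "block2", "block3"})

# Fine-tuning: the deepest pretrained block is unfrozen as well.
finetune = sgd_update(params, grads, frozen={"block1", "block2"})
```

The usual rationale, consistent with the study's ranking, is that early layers learn generic edge and texture features that transfer as-is, while deeper layers encode task-specific patterns that benefit from adaptation to the radiograph domain.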