Results 1 - 20 of 40
1.
Odontology ; 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38607582

ABSTRACT

The objective of this study was to create a mutual conversion system between contrast-enhanced computed tomography (CECT) and non-CECT images of the internal jugular region using a cycle generative adversarial network (cycleGAN). Image patches were cropped from the CT images of 25 patients who underwent both CECT and non-CECT imaging. Using the cycleGAN, synthetic CECT and non-CECT images were generated from the original non-CECT and CECT images, respectively. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were calculated. Visual Turing tests were used to determine whether oral and maxillofacial radiologists could distinguish synthetic from original images, and receiver operating characteristic (ROC) analyses were used to assess the radiologists' performance in discriminating lymph nodes from blood vessels. The PSNR of non-CECT images was higher than that of CECT images, whereas the SSIM was higher for CECT images. The Visual Turing test showed higher perceptual quality for CECT images. The area under the ROC curve showed almost perfect performance for synthetic as well as original CECT images. In conclusion, synthetic CECT images created by the cycleGAN appear to have the potential to provide useful information for patients who cannot receive contrast enhancement.
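A minimal sketch of how PSNR and SSIM might be computed for an original/synthetic patch pair with scikit-image, assuming 8-bit grayscale patches stored as NumPy arrays; this is illustrative only, not the authors' pipeline.

```python
# Sketch: PSNR and SSIM between an original and a synthetic CT patch.
# Assumes 8-bit grayscale patches of equal size (illustrative, not the study's data).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_patches(original: np.ndarray, synthetic: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) for a pair of image patches."""
    psnr = peak_signal_noise_ratio(original, synthetic, data_range=255)
    ssim = structural_similarity(original, synthetic, data_range=255)
    return psnr, ssim

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    noisy = np.clip(img.astype(int) + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    print(compare_patches(img, noisy))
```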

2.
Imaging Sci Dent ; 54(1): 33-41, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571775

ABSTRACT

Purpose: The aims of this study were to create a deep learning model to distinguish between nasopalatine duct cysts (NDCs), radicular cysts, and no-lesion (normal) cases in the midline region of the anterior maxilla on panoramic radiographs and to compare its performance with that of dental residents. Materials and Methods: One hundred patients with a confirmed diagnosis of NDC (53 men, 47 women; average age, 44.6±16.5 years), 100 with radicular cysts (49 men, 51 women; average age, 47.5±16.4 years), and 100 normal controls (56 men, 44 women; average age, 34.4±14.6 years) were enrolled in this study. Cases were randomly assigned to the training dataset (80%) and the test dataset (20%). Then, 20% of the training data were randomly assigned as validation data. A learning model was created using a customized DetectNet built in DIGITS version 5.0 (NVIDIA, Santa Clara, USA). The performance of the deep learning system was assessed and compared with that of two dental residents. Results: The performance of the deep learning system was superior to that of the dental residents except for the recall of radicular cysts. The areas under the curve (AUCs) for NDCs and radicular cysts were significantly higher for the deep learning system than for the dental residents. For the dental residents, the AUCs differed significantly between the NDC and normal groups. Conclusion: The deep learning system showed superior performance in detecting NDCs and radicular cysts and in distinguishing these lesions from normal cases.
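As an illustration of the AUC comparison described here, a per-class AUC for a model versus a human reader might be computed as in the following sketch with scikit-learn; the labels and scores are invented placeholders, not the study's data.

```python
# Sketch: compare AUCs of a model and a resident on the same test labels.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])            # 1 = NDC present, 0 = normal (placeholder)
model_scores = np.array([0.9, 0.8, 0.65, 0.3, 0.2, 0.4, 0.7, 0.1])
resident_scores = np.array([0.8, 0.5, 0.6, 0.4, 0.3, 0.6, 0.55, 0.2])

print("model AUC:   ", roc_auc_score(y_true, model_scores))
print("resident AUC:", roc_auc_score(y_true, resident_scores))
```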

3.
Imaging Sci Dent ; 54(1): 25-31, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571781

ABSTRACT

Purpose: The purpose of this study was to clarify differences in the panoramic images of cleft alveolus patients with and without a cleft palate, with emphasis on the visibility of the line formed by the junction between the nasal septum and nasal floor (the upper line) and the appearance of the maxillary lateral incisor. Materials and Methods: Panoramic radiographs of 238 patients with cleft alveolus were analyzed for the visibility of the upper line (clear, obscure, or invisible) and the appearance of the maxillary lateral incisor (congenital absence, incomplete growth, delayed eruption, or medial inclination). Differences in the distributions of visibility and appearance between patients with and without a cleft palate were assessed using the chi-square test. Results: There was a significant difference in the visibility distribution of the upper line between patients with and without a cleft palate (p<0.05). In most patients with a cleft palate, the upper line was not observed. In unilateral cleft alveolus patients, medial inclination of the maxillary lateral incisor was observed more frequently in patients with a cleft palate than in those without. Conclusion: Two differences were identified in panoramic appearance: the disappearance (invisibility) of the upper line in patients with a cleft palate, and medial inclination of the maxillary lateral incisor on the affected side in unilateral cleft alveolus patients with a cleft palate.
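A minimal sketch of such a chi-square test on a 2×3 contingency table with SciPy; the counts below are invented for illustration and are not the study data.

```python
# Sketch: chi-square test of the upper-line visibility distribution between
# cleft-alveolus patients with and without a cleft palate (illustrative counts).
from scipy.stats import chi2_contingency

#                clear  obscure  invisible
with_cp    = [     10,      25,        85]
without_cp = [     70,      30,        18]

chi2, p, dof, expected = chi2_contingency([with_cp, without_cp])
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```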

4.
J Endod ; 50(5): 627-636, 2024 May.
Article in English | MEDLINE | ID: mdl-38336338

ABSTRACT

INTRODUCTION: The purposes of this study were to evaluate the effect of combining object detection with classification for identifying the C-shaped canal anatomy of the mandibular second molar on panoramic radiographs and to perform an external validation on a multicenter dataset. METHODS: The panoramic radiographs of 805 patients were collected from 4 institutes in 2 countries. The CBCT data of the same patients were used as the ground truth. Five datasets were generated: one for training and validation, and 4 for external validation. Workflow 1 used manual cropping to prepare image patches of the mandibular second molars, followed by classification with EfficientNet. Workflow 2 combined the two methods: object detection (YOLOv7) automatically generated the image patches, which were then classified with EfficientNet. Workflow 3 classified the root canal anatomy directly from the panoramic radiographs using the YOLOv7 prediction outcomes. The classification performance of the 3 workflows was evaluated and compared across the 4 external validation datasets. RESULTS: For Workflows 1, 2, and 3, the area under the receiver operating characteristic curve (AUC) values were 0.863, 0.861, and 0.876, respectively, for the AGU dataset; 0.935, 0.945, and 0.863, respectively, for the ASU dataset; 0.854, 0.857, and 0.849, respectively, for the ODU dataset; and 0.821, 0.797, and 0.831, respectively, for the ODU low-resolution dataset. No significant differences existed between the AUC values of Workflows 1, 2, and 3 across the 4 datasets. CONCLUSIONS: The deep learning systems of all 3 workflows predicted the C-shaped canal in mandibular second molars with high accuracy across all test datasets.
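A minimal sketch of Workflow 2's detection-then-classification idea, assuming bounding boxes from a YOLOv7-style detector are already available; the hypothetical `classify_crops` helper and the generic ImageNet-pretrained EfficientNet from timm are illustrative stand-ins, not the study's trained models.

```python
# Sketch: crop detector boxes from a panoramic radiograph and classify each patch.
import timm
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Generic EfficientNet-B0 with a 2-class head (placeholder for the study's model).
classifier = timm.create_model("efficientnet_b0", pretrained=True, num_classes=2)
classifier.eval()

def classify_crops(panorama: Image.Image, boxes: list[tuple[int, int, int, int]]) -> list[int]:
    """Crop each (left, upper, right, lower) box and return predicted class indices
    (e.g., 0 = non-C-shaped, 1 = C-shaped canal)."""
    preds = []
    for box in boxes:
        patch = preprocess(panorama.crop(box).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            preds.append(int(classifier(patch).argmax(dim=1)))
    return preds
```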


Subject(s)
Dental Pulp Cavity , Mandible , Molar , Radiography, Panoramic , Humans , Molar/diagnostic imaging , Molar/anatomy & histology , Mandible/diagnostic imaging , Mandible/anatomy & histology , Dental Pulp Cavity/diagnostic imaging , Dental Pulp Cavity/anatomy & histology , Female , Male , Cone-Beam Computed Tomography/methods , Adult
5.
Oral Radiol ; 2024 Feb 03.
Article in English | MEDLINE | ID: mdl-38308723

ABSTRACT

OBJECTIVE: This systematic review was performed to examine the usefulness of salivary gland ultrasound elastography (USE) as a diagnostic tool for Sjögren's syndrome (SjS). METHODS: Electronic databases (MEDLINE, EMBASE, the Cochrane Library, and Web of Science: Science Citation Index) were searched from database inception to 15 July 2022 to identify studies using USE to diagnose SjS. The primary outcome was improved diagnostic accuracy for SjS with the use of USE. Risk of bias and applicability concerns were assessed using the GRADE system developed by the GRADE Working Group. RESULTS: Among 4550 screened studies, 24 full-text articles describing the applications of USE to diagnose SjS were reviewed. The overall risk of bias was determined to be low for 17 of the 24 articles, medium for 5, and high for 2. Articles comparing patients with SjS and healthy subjects reported high diagnostic accuracy of USE, with most results showing statistically significant differences (parotid glands: 15 of 16 articles; submandibular glands: 11 of 14 articles). CONCLUSIONS: This systematic review suggests that assessment of the salivary glands using USE is a useful diagnostic tool for SjS.

6.
Aust Endod J ; 50(1): 157-162, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37964478

ABSTRACT

A cemental tear (CeT) is a definitive clinical entity, and its radiographic appearance is well known in single-rooted teeth. However, the imaging features of CeT in multi-rooted teeth have not been clarified. We report a case of CeT that arose in a maxillary first molar and exhibited an unusual appearance on cone-beam computed tomography images. The torn structure was verified as cementum by micro-computed tomography and histological analysis. The hypercementosis, most likely induced by occlusal force, might have been torn from the root by a stronger occlusal force caused by the mandibular implant. An unusual bridging structure was created between the two buccal roots. These features may occur in multi-rooted teeth with long-standing deep pockets and abscesses that are resistant to treatment.


Subject(s)
Dental Cementum , Lacerations , Humans , Dental Cementum/diagnostic imaging , X-Ray Microtomography , Molar/diagnostic imaging , Cone-Beam Computed Tomography/methods , Tooth Root/diagnostic imaging
7.
Oral Radiol ; 40(2): 93-108, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38001347

ABSTRACT

OBJECTIVES: This systematic review of generative adversarial network (GAN) architectures for dental image analysis provides a comprehensive overview of current GAN trends in dental imaging and potential future applications. METHODS: Electronic databases (PubMed/MEDLINE, Scopus, Embase, and Cochrane Library) were searched to identify studies involving GANs for dental image analysis. Eighteen full-text articles describing the applications of GANs in dental imaging were reviewed. Risk of bias and applicability concerns were assessed using the QUADAS-2 tool. RESULTS: GANs were applied to various imaging modalities, including both two-dimensional and three-dimensional images. In dental imaging, GANs were utilized for tasks such as artifact reduction, denoising, super-resolution, domain transfer, image generation for augmentation, outcome prediction, and identification. The generated images were incorporated into tasks such as landmark detection, object detection, and classification. Because of heterogeneity among the studies, a meta-analysis could not be conducted. Most studies (72%) had a low risk of bias in all four domains; however, only three studies (17%) had low applicability concerns. CONCLUSIONS: This extensive analysis of GANs in dental imaging highlighted their broad application potential within the dental field. Future studies should address limitations related to the stability, repeatability, and overall interpretability of GAN architectures. By overcoming these challenges, the applicability of GANs in dentistry can be enhanced, ultimately benefiting the field's use of GANs and artificial intelligence.


Subject(s)
Artifacts , Artificial Intelligence , Image Processing, Computer-Assisted , MEDLINE
8.
Sci Rep ; 13(1): 18038, 2023 10 21.
Article in English | MEDLINE | ID: mdl-37865655

ABSTRACT

This study evaluated the performance of generative adversarial network (GAN)-synthesized periapical images for classifying C-shaped root canals, which are challenging to diagnose because of their complex morphology. GANs have emerged as a promising technique for generating realistic images, offering a potential solution for data augmentation in scenarios with limited training datasets. Periapical images were synthesized using the StyleGAN2-ADA framework, and their quality was evaluated based on the average Fréchet inception distance (FID) and a visual Turing test. The average FID was 35.353 (± 4.386) for synthesized C-shaped canal images and 25.471 (± 2.779) for non-C-shaped canal images. The visual Turing test, conducted by two radiologists on 100 randomly selected images, revealed that distinguishing between real and synthetic images was difficult. These results indicate that GAN-synthesized images exhibit satisfactory visual quality. The classification performance of the neural network improved when augmented with GAN data compared with using real data alone, which could be advantageous under class-imbalanced data conditions. GAN-generated images thus proved to be an effective data augmentation method, addressing the limitations of small training datasets and computational resources in diagnosing dental anomalies.
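For orientation, the FID compares the feature statistics of real and synthetic image sets: FID = ||μ1 − μ2||² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}). A minimal sketch follows, with random feature vectors standing in for InceptionV3 pool features; this is not the study's implementation.

```python
# Sketch: Frechet distance between two sets of feature vectors (FID-style).
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):        # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 64)), rng.normal(0.1, 1.0, size=(200, 64))))
```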


Subject(s)
Dental Pulp Cavity , Neural Networks, Computer , Humans , Dental Pulp Cavity/diagnostic imaging , Radiologists , Vision Tests
9.
Imaging Sci Dent ; 53(1): 27-34, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37006785

ABSTRACT

Purpose: The aim of this study was to clarify the influence of training with a different kind of lesion on the performance of a target model. Materials and Methods: A total of 310 patients (211 men, 99 women; average age, 47.9±16.1 years) were selected, and their panoramic images were used in this study. A source model was created using panoramic radiographs containing mandibular radiolucent cyst-like lesions (radicular cyst, dentigerous cyst, odontogenic keratocyst, and ameloblastoma), and transfer learning was then simulated by further training this model on images of Stafne's bone cavity. The learning models were created using a customized DetectNet built in DIGITS version 5.0 (NVIDIA, Santa Clara, CA). Two machines (Machines A and B) with identical specifications were used to simulate transfer learning: the source model was created from the data consisting of ameloblastoma, odontogenic keratocyst, dentigerous cyst, and radicular cyst on Machine A, and was then transferred to Machine B and trained on additional data of Stafne's bone cavity to create target models. To investigate the effect of the number of cases, several target models were created with different numbers of Stafne's bone cavity cases. Results: When the Stafne's bone cavity data were added to the training, both the detection and classification performances for this pathology improved. Even for lesions other than Stafne's bone cavity, the detection sensitivities tended to increase with the number of Stafne's bone cavity cases. Conclusion: This study showed that using different lesions for transfer learning improves the performance of the model.
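A minimal, hedged sketch of the general transfer-learning pattern described here (freeze a source backbone, extend and retrain the classification head for the new lesion type), using a generic torchvision ResNet as a stand-in for the study's DetectNet/DIGITS setup; the class counts and layer choices are illustrative assumptions.

```python
# Sketch: simulate transfer from a source lesion classifier to a target model
# that also covers a new lesion type (e.g., Stafne's bone cavity).
import torch
import torch.nn as nn
from torchvision import models

# Generic ImageNet backbone as a placeholder for the source model.
source_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone learned on the source lesions, replace the final layer
# so the target model also covers the new class, and train only that layer.
for param in source_model.parameters():
    param.requires_grad = False
num_source_classes, num_new_classes = 4, 1          # 4 cyst-like lesions + 1 new lesion type
source_model.fc = nn.Linear(source_model.fc.in_features,
                            num_source_classes + num_new_classes)

optimizer = torch.optim.Adam(source_model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ...a training loop over the target dataset would go here...
```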

10.
Dentomaxillofac Radiol ; 52(8): 20210436, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35076259

ABSTRACT

OBJECTIVES: The purpose of this study was to evaluate the difference in performance of deep-learning (DL) models with respect to the image classes and amount of training data in order to create an effective DL model for detecting both unilateral cleft alveoli (UCAs) and bilateral cleft alveoli (BCAs) on panoramic radiographs. METHODS: Model U was created using UCA and normal images, and Model B was created using BCA and normal images. Models C1 and C2 were created using the combined data of UCA, BCA, and normal images. The same number of CAs was used for training Models U, B, and C1, whereas Model C2 was created with a larger amount of data. The performance of all four models was evaluated with the same test data and compared with that of two human observers. RESULTS: The recall values were 0.60, 0.73, 0.80, and 0.88 for Models U, B, C1, and C2, respectively. Model C2 achieved the highest precision and F-measure (0.98 and 0.92), almost the same as those of the human observers. Significant differences were found in the ratios of detected to undetected CAs between Models U and C1 (p = 0.01), Models U and C2 (p < 0.001), and Models B and C2 (p = 0.036). CONCLUSIONS: The DL models trained using both UCA and BCA data (Models C1 and C2) achieved high detection performance. Moreover, the performance of a DL model may depend on the amount of training data.
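For reference, recall, precision, and F-measure follow directly from the true-positive, false-positive, and false-negative counts; a minimal sketch with placeholder counts (not the study data):

```python
# Sketch: detection metrics from TP/FP/FN counts.
def detection_metrics(tp: int, fp: int, fn: int) -> dict[str, float]:
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "f_measure": f_measure}

print(detection_metrics(tp=44, fp=1, fn=6))   # e.g., detected vs. missed cleft alveoli
```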


Subject(s)
Deep Learning , Humans , Radiography, Panoramic
11.
Oral Radiol ; 39(2): 349-354, 2023 04.
Article in English | MEDLINE | ID: mdl-35984588

ABSTRACT

OBJECTIVES: The aim of the present study was to create effective deep learning-based models for diagnosing the presence or absence of cleft palate (CP) in patients with unilateral or bilateral cleft alveolus (CA) on panoramic radiographs. METHODS: The panoramic images of 491 patients with unilateral or bilateral cleft alveolus were used to create two models. Model A, which detects the upper incisor area on panoramic radiographs and classifies it by the presence or absence of CP, was created using both the object detection and classification functions of DetectNet. Model B, which directly classifies the presence or absence of CP on panoramic radiographs, was created from the same data using the classification function of VGG-16. The performance of both models was evaluated with the same test data and compared with that of two radiologists. RESULTS: The recall, precision, and F-measure were all 1.00 for Model A. The area under the receiver operating characteristic curve (AUC) values were 0.95, 0.93, 0.70, and 0.63 for Model A, Model B, and the two radiologists, respectively. The AUCs of the models were significantly higher than those of the radiologists. CONCLUSIONS: The deep learning-based models developed in the present study have potential for use in supporting observer interpretation of the presence of cleft palate on panoramic radiographs.


Subject(s)
Cleft Palate , Deep Learning , Humans , Cleft Palate/diagnostic imaging , Radiography, Panoramic , Incisor
12.
Dentomaxillofac Radiol ; 51(4): 20210515, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35113725

ABSTRACT

OBJECTIVE: The purpose of this study was to establish a deep-learning model for segmenting the cervical lymph nodes of oral cancer patients and diagnosing metastatic versus non-metastatic lymph nodes from contrast-enhanced computed tomography (CT) images. METHODS: CT images of 158 metastatic and 514 non-metastatic lymph nodes were prepared and assigned to training, validation, and test datasets. For the training and validation datasets, images with the lymph nodes colored were prepared together with the original images. Learning was performed for 200 epochs using the U-net neural network, and performance in segmenting lymph nodes and diagnosing metastasis was evaluated. RESULTS: Segmentation of metastatic lymph nodes achieved a recall of 0.742, a precision of 0.942, and an F1 score of 0.831. The recall for metastatic lymph nodes at level II was 0.875, the highest value. The diagnostic performance in identifying metastasis showed an area under the curve (AUC) of 0.950, which was significantly higher than that of the radiologists (0.896). CONCLUSIONS: A deep-learning model was created to automatically segment the cervical lymph nodes of oral squamous cell carcinomas. Segmentation performance still requires improvement, but metastases in the segmented lymph nodes were diagnosed more accurately than by human evaluation.
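A minimal sketch of how pixel-wise recall, precision, and F1 (equivalently, the Dice coefficient) might be computed for a predicted segmentation mask against a ground-truth mask; the binary masks here are toy placeholders, not the study's U-net output.

```python
# Sketch: overlap metrics for a binary segmentation mask vs. ground truth.
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray) -> dict[str, float]:
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {"recall": tp / (tp + fn),
            "precision": tp / (tp + fp),
            "f1": 2 * tp / (2 * tp + fp + fn)}      # F1 == Dice for binary masks

truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool); pred[22:42, 22:42] = True
print(segmentation_scores(pred, truth))
```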


Subject(s)
Deep Learning , Mouth Neoplasms , Humans , Lymph Nodes/diagnostic imaging , Lymph Nodes/pathology , Lymphatic Metastasis/diagnostic imaging , Mouth Neoplasms/diagnostic imaging , Technology , Tomography, X-Ray Computed/methods
13.
J Clin Med ; 10(19)2021 Sep 29.
Article in English | MEDLINE | ID: mdl-34640523

ABSTRACT

This study was performed to evaluate the diagnostic performance of deep learning systems using ultrasonography (USG) images of the submandibular glands (SMGs) in three different conditions: obstructive sialoadenitis, Sjögren's syndrome (SjS), and normal glands. Fifty USG images with a confirmed diagnosis of obstructive sialoadenitis, 50 USG images with a confirmed diagnosis of SjS, and 50 USG images with no SMG abnormalities were included in the study. The training group comprised 40 obstructive sialoadenitis images, 40 SjS images, and 40 control images, and the test group comprised 10 obstructive sialoadenitis images, 10 SjS images, and 10 control images for deep learning analysis. The performance of the deep learning system was calculated and compared with that of two experienced radiologists. The sensitivity of the deep learning system in the obstructive sialoadenitis, SjS, and control groups was 55.0%, 83.0%, and 73.0%, respectively, and the total accuracy was 70.3%. The corresponding sensitivities of the radiologists were 64.0%, 72.0%, and 86.0%, respectively, and the total accuracy was 74.0%. This study revealed that the deep learning system was more sensitive than the experienced radiologists in diagnosing SjS on USG images of the SMGs.
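A minimal sketch of how per-class sensitivity and total accuracy could be tabulated from a three-class confusion matrix with scikit-learn; the labels below are invented placeholders, not the study's results.

```python
# Sketch: per-class sensitivity and total accuracy for a 3-class test set
# (0 = obstructive sialoadenitis, 1 = Sjogren's syndrome, 2 = normal).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0]*10 + [1]*10 + [2]*10)
y_pred = np.array([0]*6 + [1]*2 + [2]*2 +      # obstructive sialoadenitis cases
                  [1]*8 + [0]*2 +              # SjS cases
                  [2]*7 + [1]*3)               # normal cases

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
sensitivity = cm.diagonal() / cm.sum(axis=1)
print("per-class sensitivity:", sensitivity)
print("total accuracy:", cm.trace() / cm.sum())
```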

14.
Sci Rep ; 11(1): 16044, 2021 08 06.
Article in English | MEDLINE | ID: mdl-34363000

ABSTRACT

Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of this study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique with and without normal data in the learning process, to verify its performance in comparison with human observers, and to clarify characteristic appearances likely related to that performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP) or without cleft palate (CA only) and 210 patients without CA (normal) were used to create two models on DetectNet. Models 1 and 2 were developed from the data without and with normal subjects, respectively, to detect CAs and classify them as with or without CP. Model 2 reduced the false-positive rate (1/30) compared with Model 1 (12/30), and its overall accuracy was higher than that of Model 1 and the human observers. The model created in this study appears to have the potential to detect and classify CAs on panoramic radiographs and might be useful in assisting human observers.


Subject(s)
Alveolar Process/pathology , Cleft Lip/pathology , Cleft Palate/classification , Deep Learning , Radiography, Panoramic/methods , Alveolar Process/diagnostic imaging , Child , Cleft Lip/diagnostic imaging , Cleft Palate/diagnostic imaging , Cleft Palate/pathology , Female , Humans , Male
15.
Imaging Sci Dent ; 51(2): 129-136, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34235058

ABSTRACT

PURPOSE: This study investigated the effects of 1 year of training on imaging diagnosis, using static ultrasonography (US) salivary gland images of Sjögren syndrome patients. MATERIALS AND METHODS: This study involved 3 inexperienced radiologists at different stages of training, who received training 1 or 2 days a week under the supervision of experienced radiologists. The training program included collecting patient histories and performing physical and imaging examinations for various maxillofacial diseases. The 3 radiologists (observers A, B, and C) evaluated 400 static US images of salivary glands twice at a 1-year interval. For comparison, 2 experienced radiologists evaluated the same images. Diagnostic performance was compared between the 2 evaluations using the area under the receiver operating characteristic curve (AUC). RESULTS: Observer A, who was in the second year of the training program, exhibited no significant difference in AUC between the first and second evaluations, with results consistently comparable to those of the experienced radiologists. After 1 year of training, observer B showed significantly higher AUCs than before training, reaching the level of the experienced radiologists for parotid gland assessment but differing from them for submandibular gland assessment. For observer C, who did not complete the training, there was no significant difference in AUC between the first and second evaluations, both of which differed significantly from those of the experienced radiologists. CONCLUSION: These preliminary results suggest that the training program effectively helped inexperienced radiologists reach the level of experienced radiologists for US examinations.

16.
Oral Radiol ; 37(2): 236-244, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32303973

ABSTRACT

OBJECTIVES: The present study aimed to clarify the characteristic computed tomography (CT) features that indicate synovial chondromatosis (SC) in cases with only a few small calcified bodies or no calcification on panoramic images, and to discuss how these features differ from those of temporomandibular disorder (TMD). METHODS: Panoramic and CT images from 11 patients with histologically verified SC of the temporomandibular joint were investigated. Based on the panoramic images, the patients were classified into a distinct group (5 patients) with typical features of calcified loose bodies and an indistinct group (6 patients) without such bodies. On the CT images, findings of high-density structures suggesting calcified loose bodies, joint space widening, and bony changes in the articular eminence and glenoid fossa (eminence/fossa) and condyle were analyzed. RESULTS: All 5 distinct-group patients showed high-density structures on CT images, whereas 2 of the 6 indistinct-group patients showed no high-density structures even on soft-tissue window CT images. A significant difference was found in the joint space distance between the affected and unaffected sides. A low-density area relative to the surrounding muscles, suggesting joint space widening, was observed on the affected side in 2 indistinct-group patients. All 11 patients, regardless of distinct or indistinct classification, showed bony changes in the eminence/fossa, with predominant findings of extended sclerosis and erosion. CONCLUSION: Eminence/fossa osseous changes, including extended sclerosis and erosion, may be effective CT features for differentiating SC from TMD even when calcified loose bodies cannot be identified.


Subject(s)
Chondromatosis, Synovial , Joint Loose Bodies , Temporomandibular Joint Disorders , Chondromatosis, Synovial/diagnostic imaging , Humans , Joint Loose Bodies/diagnostic imaging , Temporomandibular Joint/diagnostic imaging , Temporomandibular Joint Disorders/diagnostic imaging , Tomography, X-Ray Computed
17.
Dentomaxillofac Radiol ; 50(1): 20200171, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-32618480

ABSTRACT

OBJECTIVE: The first aim of this study was to determine the performance of a deep learning object detection technique in detecting maxillary sinuses on panoramic radiographs. The second aim was to clarify its performance in classifying maxillary sinus lesions compared with healthy maxillary sinuses. METHODS: The imaging data for healthy maxillary sinuses (587 sinuses, Class 0), inflamed maxillary sinuses (416 sinuses, Class 1), and cysts of the maxillary sinus regions (171 sinuses, Class 2) were assigned to training, testing 1, and testing 2 datasets. A learning process of 1000 epochs with the training images and labels was performed using DetectNet, and a learning model was created. The testing 1 and testing 2 images were applied to the model, and the detection sensitivities and false-positive rates per image were calculated. Accuracies, sensitivities, and specificities were determined for distinguishing the inflammation group (Class 1) and cyst group (Class 2) from the healthy group (Class 0). RESULTS: Detection sensitivities for healthy (Class 0) and inflamed (Class 1) maxillary sinuses were 100% for both the testing 1 and testing 2 datasets, whereas they were 98% and 89% for cysts of the maxillary sinus regions (Class 2). False-positive rates per image were nearly 0.00. Accuracies, sensitivities, and specificities for diagnosing maxillary sinusitis were 90-91%, 88-85%, and 91-96%, respectively; for cysts of the maxillary sinus regions, these values were 97-100%, 80-100%, and 100-100%, respectively. CONCLUSION: Deep learning could reliably detect the maxillary sinuses and identify maxillary sinusitis and cysts of the maxillary sinus regions. ADVANCES IN KNOWLEDGE: This study using a deep learning object detection technique showed that the detection sensitivities for maxillary sinuses were high and that the performance of maxillary sinus lesion identification was ≥80%; in particular, the performance of sinusitis identification was ≥90%.
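A minimal sketch of how detection sensitivity and false positives per image might be computed by matching predicted boxes to ground-truth boxes with an IoU threshold; the box coordinates and the 0.5 threshold are illustrative assumptions, not the study's evaluation code.

```python
# Sketch: count a detection as a true positive when it overlaps a ground-truth
# box with IoU >= 0.5, then report sensitivity and false positives per image.
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def evaluate(images):
    """images: list of (predicted_boxes, ground_truth_boxes) per radiograph."""
    tp = fp = fn = 0
    for preds, truths in images:
        matched = set()
        for p in preds:
            hit = next((i for i, t in enumerate(truths)
                        if i not in matched and iou(p, t) >= 0.5), None)
            if hit is None:
                fp += 1
            else:
                tp += 1
                matched.add(hit)
        fn += len(truths) - len(matched)
    return {"sensitivity": tp / (tp + fn), "fp_per_image": fp / len(images)}

print(evaluate([([(10, 10, 50, 50)], [(12, 12, 52, 52)]),
                ([(5, 5, 20, 20), (60, 60, 90, 90)], [(6, 6, 22, 22)])]))
```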


Subject(s)
Deep Learning , Maxillary Sinusitis , Humans , Maxillary Sinus/diagnostic imaging , Maxillary Sinusitis/diagnostic imaging , Radiography, Panoramic , Technology
19.
Oral Radiol ; 37(3): 487-493, 2021 07.
Article in English | MEDLINE | ID: mdl-32948938

ABSTRACT

OBJECTIVES: This study aimed to examine the performance of deep learning object detection technology for detecting and identifying maxillary cyst-like lesions on panoramic radiography. METHODS: Altogether, 412 patients with maxillary cyst-like lesions (including several benign tumors) were enrolled. All panoramic radiographs were arbitrarily assigned to the training, testing 1, and testing 2 datasets of the study. The deep learning process on the training images and labels was performed for 1000 epochs using the DetectNet neural network. The testing 1 and testing 2 images were applied to the created learning model, and the detection performance was evaluated. For lesions that could be detected, the classification performance (sensitivity) for identifying radicular cysts versus other lesions was examined. RESULTS: The recall, precision, and F-1 score for detecting maxillary cysts were 74.6%/77.1%, 89.8%/90.0%, and 81.5%/83.1% for the testing 1/testing 2 datasets, respectively. Recall was higher in the anterior regions and for radicular cysts, and the sensitivity was higher for identifying radicular cysts than for other lesions. CONCLUSIONS: Using deep learning object detection technology, approximately 75-77% of maxillary cyst-like lesions could be detected.


Subject(s)
Cysts , Deep Learning , Humans , Neural Networks, Computer , Radiography, Panoramic
20.
Article in English | MEDLINE | ID: mdl-32507560

ABSTRACT

OBJECTIVE: This investigation aimed to verify and compare the performance of 3 deep learning systems for classifying maxillary impacted supernumerary teeth (ISTs) in patients with fully erupted incisors. STUDY DESIGN: In total, the study included 550 panoramic radiographs obtained from 275 patients with at least 1 IST and 275 patients without ISTs in the maxillary incisor region. Three learning models were created using AlexNet, VGG-16, and DetectNet. Four hundred images were randomly selected as training data, 100 images were assigned as validation and test data, and the remaining 50 images were used as new test data. The sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve were calculated, and detection performance was evaluated using recall, precision, and F-measure. RESULTS: DetectNet generally produced the highest values of diagnostic efficacy, whereas VGG-16 yielded significantly lower values than DetectNet and AlexNet. For DetectNet, the recall, precision, and F-measure for detection in the incisor region were all 1.0, indicating perfect detection. CONCLUSIONS: DetectNet and AlexNet appear to have potential for classifying the presence of ISTs in the maxillary incisor region on panoramic radiographs. Additionally, DetectNet would be suitable for automatic detection of this abnormality.


Subject(s)
Deep Learning , Tooth, Impacted , Tooth, Supernumerary , Humans , Incisor/diagnostic imaging , Maxilla/diagnostic imaging , Radiography, Panoramic , Tooth, Impacted/diagnostic imaging , Tooth, Supernumerary/diagnostic imaging