ABSTRACT
Dental fillings are frequently used in dentistry to restore lost dental tissue, but they can cause problems when they do not conform to the anatomical contours and physiology of the dental and periodontal tissues. Our study aims to detect the prevalence and distribution of normal and overhanging filling restorations on panoramic radiography images using a deep CNN architecture trained through supervised learning. A total of 10480 fillings and 2491 overhanging fillings were labeled using CranioCatch software from 2473 and 1850 images, respectively. After data collection, training (80%), validation (10%), and test (10%) groups were formed from the images for both label types. The YOLOv5x architecture was used to develop the AI model. The model's performance was assessed through a confusion matrix, and its sensitivity, precision, and F1 score were calculated. For fillings, the sensitivity, precision, and F1 score were 0.95, 0.97, and 0.96; for overhanging fillings, they were 0.86, 0.89, and 0.87, respectively. The results demonstrate that the YOLOv5 algorithm can segment dental radiographs efficiently and accurately, and that it is proficient in detecting and distinguishing between normal and overhanging filling restorations.
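The 80/10/10 partitioning used throughout these studies can be sketched in a few lines of Python; `image_ids` and the fixed seed are illustrative placeholders, not part of the original pipeline.

```python
import random

def split_dataset(image_ids, seed=42):
    """Randomly split image IDs into 80% training, 10% validation, 10% test."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# e.g. the 2473 filling-labeled images would split 1978 / 247 / 248
train, val, test = split_dataset(range(2473))
print(len(train), len(val), len(test))  # 1978 247 248
```

Splitting at the image level (rather than the label level) keeps all annotations from one radiograph in the same subset, which avoids leakage between training and testing.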
Subjects
Dental Restoration, Permanent; Radiography, Panoramic; Humans; Dental Restoration, Permanent/methods; Reproducibility of Results; Artificial Intelligence; Reference Values; Algorithms
ABSTRACT
OBJECTIVES: Accurate identification and numbering of teeth on radiographs is essential for clinicians. The aim of the present study was to validate the hypothesis that YOLOv5, a type of artificial intelligence model, can be trained to detect and number teeth in periapical radiographs. MATERIALS AND METHODS: Six thousand four hundred forty-six anonymized periapical radiographs without motion-related artifacts were randomly selected from the database. All periapical radiographs in which all boundaries of every tooth could be distinguished were included in the study. The radiographic images were randomly divided into three groups: 80% training, 10% validation, and 10% testing. A confusion matrix was used to examine model success. RESULTS: During the test phase, 2578 labelings were performed on 644 periapical radiographs. There were 2434 true positives (94.4%), 115 false positives (4.4%), and 29 false negatives (1.2%). The recall, precision, and F1 scores were 0.9882, 0.9548, and 0.9712, respectively. Moreover, the model yielded an area under the curve (AUC) of 0.603 on the receiver operating characteristic (ROC) curve. CONCLUSIONS: This study showed that YOLOv5 is nearly perfect for numbering teeth on periapical radiographs. Although high success rates were achieved, it should not be forgotten that artificial intelligence can currently only guide dentists toward accurate and rapid diagnosis. CLINICAL RELEVANCE: Dentists may shorten radiographic examination time, and inexperienced dentists may reduce their error rate, by using YOLOv5. Additionally, YOLOv5 can also be used in the education of dental students.
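The reported recall, precision, and F1 follow directly from the TP/FP/FN counts given above; a quick pure-Python check (the abstract's 0.9548 and 0.9712 appear to be truncated rather than rounded values):

```python
def detection_metrics(tp, fp, fn):
    """Recall, precision, and F1 score from confusion-matrix counts."""
    recall = tp / (tp + fn)          # of all real teeth, how many were found
    precision = tp / (tp + fp)       # of all detections, how many were correct
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return recall, precision, f1

recall, precision, f1 = detection_metrics(tp=2434, fp=115, fn=29)
print(round(recall, 4), round(precision, 4), round(f1, 4))  # 0.9882 0.9549 0.9713
```

The same three formulas underlie the sensitivity/precision/F1 triples quoted in every abstract in this listing.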
Subjects
Artificial Intelligence; Humans; Radiography, Dental/methods; Tooth/diagnostic imaging
ABSTRACT
BACKGROUND: Automated segmentation of the maxillofacial complex could replace traditional segmentation methods and increase the effectiveness of virtual workflows. The use of deep learning (DL) systems for detecting the maxillary sinus and its pathologies will both facilitate the work of physicians and serve as a support mechanism before planned surgeries. OBJECTIVE: The aim was to use a modified You Only Look Once v5x (YOLOv5x) architecture with transfer learning capabilities to segment both maxillary sinuses and maxillary sinus diseases on cone-beam computed tomography (CBCT) images. METHODS: The dataset consists of 307 anonymized CBCT images of patients (173 female and 134 male) obtained from the radiology archive of the Department of Oral and Maxillofacial Radiology. Bilateral maxillary sinus CBCT scans were used to identify mucous retention cysts (MRC), mucosal thickenings (MT), total and partial opacifications, and healthy maxillary sinuses without any radiological features. RESULTS: Recall, precision, and F1 score values were 1, 0.985, and 0.992 for total maxillary sinus segmentation; 1, 0.931, and 0.964 for healthy maxillary sinus segmentation; 0.858, 0.923, and 0.889 for MT segmentation; 0.977, 0.877, and 0.924 for MRC segmentation; and 1, 0.942, and 0.970 for sinusitis segmentation, respectively. CONCLUSION: This study demonstrates that maxillary sinuses can be segmented, and maxillary sinus diseases accurately detected, using the AI model.
Subjects
Cone-Beam Computed Tomography; Deep Learning; Maxillary Sinus; Humans; Cone-Beam Computed Tomography/methods; Maxillary Sinus/diagnostic imaging; Maxillary Sinus/pathology; Female; Male; Paranasal Sinus Diseases/diagnostic imaging; Paranasal Sinus Diseases/pathology; Paranasal Sinus Diseases/classification; Adult; Middle Aged
ABSTRACT
BACKGROUND: This study aims to evaluate the performance of a deep learning system for assessing tooth development stages on images obtained from panoramic radiographs of child patients. METHODS: The study collected a total of 1500 images obtained from panoramic radiographs of child patients between the ages of 5 and 14 years. YOLOv5, a convolutional neural network (CNN)-based object detection model, was used to automatically detect the calcification states of teeth. The images were used to train and test the YOLOv5 algorithm. True-positive (TP), false-positive (FP), and false-negative (FN) counts were calculated, and a confusion matrix was used to evaluate the performance of the model. RESULTS: Among the 146 test group images with 1022 labels, there were 828 TPs, 308 FPs, and 1 FN. The sensitivity, precision, and F1-score values of the tooth development stage detection model were 0.99, 0.72, and 0.84, respectively. CONCLUSIONS: Utilizing a deep learning-based approach for the detection of dental development stages on pediatric panoramic radiographs may facilitate a precise evaluation of the chronological correlation between tooth development stages and age. This can help clinicians make treatment decisions and aid dentists in finding more accurate treatment options.
Subjects
Algorithms; Deep Learning; Radiography, Panoramic; Humans; Child; Adolescent; Child, Preschool; Female; Male; Artificial Intelligence; Tooth/growth & development; Tooth/diagnostic imaging; Age Determination by Teeth/methods; Neural Networks, Computer
ABSTRACT
OBJECTIVE: To investigate the effectiveness of YOLO-v5x in detecting fixed prosthetic restorations in panoramic radiographs. STUDY DESIGN: Descriptive study. Place and Duration of the Study: Department of Oral and Maxillofacial Radiology, Eskisehir Osmangazi University, Eskisehir, Turkiye, from November 2022 to April 2023. METHODOLOGY: For the labelling of fixed prosthetic restorations, 8,000 panoramic radiographs were evaluated using the YOLO-v5x architecture. In creating the dataset, fixed prosthetic restorations were categorised as dental implant, pontic, crown, or implant-supported crown on dental panoramic radiographs. The labelled images were then randomly split into three groups: 80% for training, 10% for validation, and 10% for testing. The labelled panoramic images constituted the model's training dataset, and leveraging the knowledge acquired during this learning stage, the model generated predictions in the testing phase. RESULTS: The majority of the labels were crown restorations. The precision and sensitivity of YOLO-v5x were 0.99 and 0.98 for crowns, 0.98 and 0.99 for implants, 0.99 and 0.99 for pontics, and 0.99 and 0.99 for implant-supported crowns, respectively. CONCLUSION: The results demonstrate a satisfactory success rate of YOLO-v5x in detecting dental prosthetic restorations. The high precision and sensitivity of the model indicate its strong potential to enhance clinical performance and contribute to the development of more efficient dental health services. KEY WORDS: Artificial intelligence, Dentistry, Dental prosthesis, Panoramic radiography.
Subjects
Artificial Intelligence; Radiography, Panoramic; Humans; Crowns
ABSTRACT
OBJECTIVES: This study aimed to assess the effectiveness of deep convolutional neural network (CNN) algorithms for the detection and segmentation of overhanging dental restorations in bitewing radiographs. METHODS: A total of 1160 anonymized bitewing radiographs were used to develop the artificial intelligence (AI) system for the detection and segmentation of overhanging restorations. The data were divided into three groups: 80% for training (930 images, 2399 labels), 10% for validation (115 images, 273 labels), and 10% for testing (115 images, 306 labels). A CNN model known as You Only Look Once (YOLOv5) was trained to detect overhanging restorations in bitewing radiographs. After the remaining 115 radiographs were used to evaluate the efficacy of the proposed CNN model, the accuracy, sensitivity, precision, F1 score, and area under the receiver operating characteristic curve (AUC) were computed. RESULTS: The model demonstrated a precision of 90.9%, a sensitivity of 85.3%, and an F1 score of 88.0%. Furthermore, the model achieved an AUC of 0.859 on the receiver operating characteristic (ROC) curve. The mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5 was notably high at 0.87. CONCLUSIONS: The findings suggest that deep CNN algorithms are highly effective in the detection and diagnosis of overhanging dental restorations in bitewing radiographs. The high precision, sensitivity, and F1 score, along with the strong AUC and mAP values, underscore the potential of these advanced deep learning techniques to transform dental diagnostic procedures.
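mAP at an IoU threshold of 0.5 counts a predicted box as correct when it overlaps the ground truth by at least half; a minimal sketch of the IoU computation for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # union = sum of areas - overlap

# two 10x10 boxes sharing half their area overlap by 50/150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

A detection whose IoU with a ground-truth box is below the 0.5 threshold is scored as a false positive when computing mAP@0.5.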
Subjects
Deep Learning; Dental Restoration, Permanent; Radiography, Bitewing; Humans; Dental Restoration, Permanent/methods; Radiography, Bitewing/methods; Algorithms; Neural Networks, Computer; Sensitivity and Specificity
ABSTRACT
OBJECTIVES: In the interpretation of panoramic radiographs (PRs), the identification and numbering of teeth is an important part of correct diagnosis. This study evaluates the effectiveness of YOLO-v5 in the automatic detection, segmentation, and numbering of deciduous and permanent teeth in pediatric patients with mixed dentition based on PRs. METHODS: A total of 3854 PRs of pediatric patients with mixed dentition were labelled for deciduous and permanent teeth using the CranioCatch labeling program. The dataset was divided into three subsets: training (n = 3093, 80% of the total), validation (n = 387, 10% of the total) and test (n = 385, 10% of the total). An artificial intelligence (AI) algorithm using YOLO-v5 models was developed. RESULTS: The sensitivity, precision, F-1 score, and mean average precision-0.5 (mAP-0.5) values were 0.99, 0.99, 0.99, and 0.98, respectively, for tooth detection, and 0.98, 0.98, 0.98, and 0.98, respectively, for tooth segmentation. CONCLUSIONS: YOLO-v5-based models have the potential to detect and enable accurate segmentation of deciduous and permanent teeth on PRs of pediatric patients with mixed dentition.
Subjects
Deep Learning; Dentition, Mixed; Pediatric Dentistry; Radiography, Panoramic; Tooth; Radiography, Panoramic/methods; Deep Learning/standards; Tooth/diagnostic imaging; Humans; Child, Preschool; Child; Adolescent; Male; Female; Pediatric Dentistry/methods
ABSTRACT
INTRODUCTION: Oral squamous cell carcinomas (OSCC) seen in the oral cavity are a category of disease that dentists may diagnose and even treat. This study evaluated the performance of diagnostic computer software developed to detect oral cancer lesions in retrospective intraoral patient images. MATERIALS AND METHODS: Oral cancer lesions were labeled using the CranioCatch labeling program (CranioCatch, Eskisehir, Turkey) with the polygonal labeling method on a total of 65 anonymized retrospective intraoral images of oral mucosa from individuals in our clinic diagnosed with oral cancer histopathologically by incisional biopsy. All images were rechecked and verified by experienced experts. This dataset was divided into training (n = 53), validation (n = 6) and test (n = 6) sets. The artificial intelligence model was developed using the YOLOv5 architecture, a deep learning approach. Model success was evaluated with a confusion matrix. RESULTS: When performance was evaluated on the test images withheld from training, the F1, sensitivity and precision of the artificial intelligence model obtained using the YOLOv5 architecture were found to be 0.667, 0.667 and 0.667, respectively. CONCLUSIONS: Our study reveals that OSCC lesions have discriminative visual appearances that can be identified by a deep learning algorithm. Artificial intelligence shows promise in the prediagnosis of oral cancer lesions. Success rates will increase as training datasets are formed with more images.
Subjects
Carcinoma, Squamous Cell; Deep Learning; Mouth Neoplasms; Humans; Mouth Neoplasms/diagnosis; Mouth Neoplasms/pathology; Retrospective Studies; Carcinoma, Squamous Cell/diagnosis; Carcinoma, Squamous Cell/pathology; Male; Female; Sensitivity and Specificity
ABSTRACT
OBJECTIVES: The purpose of this study was to evaluate the effectiveness of dental caries segmentation on panoramic radiographs taken from children in primary, mixed, and permanent dentition with artificial intelligence (AI) models developed using the deep learning method. METHODS: This study used 6075 panoramic radiographs taken from children aged between 4 and 14 to develop the AI model. The radiographs were divided into three groups: primary dentition (n = 1857), mixed dentition (n = 1406), and permanent dentition (n = 2812). The U-Net model, implemented with the PyTorch library, was used for the segmentation of caries lesions, and a confusion matrix was used to evaluate model performance. RESULTS: The sensitivity, precision, and F1 scores calculated using the confusion matrix were 0.8525, 0.9128, and 0.8816 in the primary dentition group; 0.7377, 0.9192, and 0.8185 in the mixed dentition group; and 0.8271, 0.9125, and 0.8677 in the permanent dentition group, respectively. In the total group including primary, mixed, and permanent dentition, they were 0.8269, 0.9123, and 0.8675, respectively. CONCLUSIONS: Deep learning-based AI models are promising tools for the detection and diagnosis of caries on panoramic radiographs taken from children with different dentitions.
ABSTRACT
BACKGROUND: The purpose of this study was to investigate the morphology of the maxillary first premolar mesial root concavity and to analyse its relation to periodontal bone loss (BL) using cone beam computed tomography (CBCT) and panoramic radiographs. METHODS: The mesial root concavity of maxillary premolar teeth was analysed via CBCT. The sex and age of the patients, starting position and depth of the root concavity, apicocoronal length of the concavity on the crown or root starting from the cementoenamel junction (CEJ), total apicocoronal length of the concavity, amount of bone loss in both CBCT images and panoramic radiographs, location of the furcation, length of the buccal and palatal roots, and buccopalatal cervical root width were measured. RESULTS: A total of 610 patients' CBCT images were examined, and 100 were included in the study. The total number of upper premolar teeth was 200. The patients were aged between 18 and 65 years, with a mean age of 45.21 ± 13.13 years. All the teeth in the study presented a mesial root concavity (100%, n = 200). The starting point of the concavity was mostly in the cervical third of the root (58.5%). The mean depth and buccolingual length measurements were 0.96 mm and 4.32 mm, respectively. Depth was significantly related to the amount of alveolar bone loss (F = 5.834, p = 0.001). The highest average concavity depth was 1.29 mm in the group with 50% bone loss. The data indicated a significant relationship between the location of the furcation and bone loss (χ² = 25.215, p = 0.003). Bone loss exceeded 50% in 100% of patients in whom the furcation was in the cervical third and in only 9.5% of patients in whom the furcation was in the apical third (p = 0.003). CONCLUSIONS: According to the results of this study, the depth of the mesial root concavity and the coronal position of the furcation may increase the amount of alveolar bone loss.
Clinicians should be aware of these anatomical factors to ensure accurate treatment planning and successful patient management.
Subjects
Alveolar Bone Loss; Bicuspid; Cone-Beam Computed Tomography; Maxilla; Radiography, Panoramic; Tooth Root; Humans; Bicuspid/diagnostic imaging; Male; Female; Alveolar Bone Loss/diagnostic imaging; Alveolar Bone Loss/pathology; Tooth Root/diagnostic imaging; Tooth Root/anatomy & histology; Tooth Root/pathology; Adult; Middle Aged; Adolescent; Maxilla/diagnostic imaging; Aged; Young Adult; Tooth Cervix/diagnostic imaging; Tooth Cervix/pathology
ABSTRACT
Objectives The aim of this artificial intelligence (AI) study was to develop a deep learning algorithm capable of automatically classifying periapical and bitewing radiography images as either periodontally healthy or unhealthy and to assess the algorithm's diagnostic success. Materials and methods The study sample consisted of 1120 periapical radiographs (560 periodontally healthy, 560 periodontally unhealthy) and 1498 bitewing radiographs (749 periodontally healthy, 749 periodontally unhealthy). From the main datasets of both radiography types, three sub-datasets were randomly created: a training set (80%), a validation set (10%), and a test set (10%). Using these sub-datasets, a deep learning algorithm was developed with the YOLOv8-cls model (Ultralytics, Los Angeles, California, United States) and trained over 300 epochs. The success of the developed algorithm was evaluated using the confusion matrix method. Results The AI algorithm achieved classification accuracies of 75% or higher for both radiograph types. For bitewing radiographs, the sensitivity, specificity, precision, accuracy, and F1 score values were 0.8243, 0.7162, 0.7439, 0.7703, and 0.7821, respectively. For periapical radiographs, they were 0.7500, 0.7500, 0.7500, 0.7500, and 0.7500, respectively. Conclusion The AI models developed in this study demonstrated considerable success in classifying periodontal disease. Future applications may involve employing AI algorithms to assess periodontal status across various types of radiography images and to automate disease detection.
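For a binary healthy/unhealthy classifier, sensitivity, specificity, precision, accuracy, and F1 all derive from the 2×2 confusion matrix; the counts below are hypothetical reconstructions chosen to be consistent with the bitewing rates above (the abstract reports only the derived rates, not the raw counts):

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, precision, accuracy, F1 from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                  # true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, accuracy, f1

# Hypothetical counts for illustration only; they reproduce the bitewing rates
print([round(m, 4) for m in binary_metrics(tp=61, fp=21, fn=13, tn=53)])
# [0.8243, 0.7162, 0.7439, 0.7703, 0.7821]
```

Unlike the detection studies above, a classification task has true negatives (healthy images correctly labeled healthy), which is what makes specificity and accuracy well defined here.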
ABSTRACT
This study aims to evaluate the effectiveness of a deep learning approach for the automated detection of pulp stones in panoramic imaging. A comprehensive dataset comprising 2409 panoramic radiography images (7564 labels) was labeled using the CranioCatch labeling program, developed in Eskisehir, Turkey. The dataset was stratified into three subsets: training (n = 1929, 80% of the total), validation (n = 240, 10% of the total), and test (n = 240, 10% of the total) sets. To optimize the visual clarity of labeled regions, a 3 × 3 contrast-limited adaptive histogram equalization (CLAHE) operation was applied to the images. The YOLOv5 architecture was employed for artificial intelligence modeling, yielding F1, sensitivity, and precision metrics of 0.7892, 0.8026, and 0.7762, respectively, on the test dataset. The deep learning-based detection of pulp stones on panoramic radiographs thus achieved remarkable success. It is expected that success rates will increase as models are trained on datasets with larger numbers of images. Artificial intelligence-supported clinical decision support system software has the potential to increase the efficiency and effectiveness of dentists.
ABSTRACT
OBJECTIVE: The aim of this study was to assess the efficacy of a deep learning methodology for the automated identification and numbering of permanent teeth in bitewing radiographs. STUDY DESIGN: A total of 1248 bitewing radiography images were annotated using the CranioCatch labeling program, developed in Eskisehir, Turkey. The dataset was partitioned into 3 subsets: training (n = 1000, 80% of the total), validation (n = 124, 10% of the total), and test (n = 124, 10% of the total) sets. A 3 × 3 contrast-limited adaptive histogram equalization (CLAHE) operation was applied to the images to enhance the clarity of the labeled regions. RESULTS: The F1, sensitivity, and precision of the artificial intelligence model obtained using the YOLOv5 architecture on the test dataset were 0.9913, 0.9954, and 0.9873, respectively. CONCLUSION: Numerical identification of teeth with deep learning-based artificial intelligence algorithms applied to bitewing radiographs demonstrated notable efficacy. Clinical decision support system software augmented by artificial intelligence has the potential to enhance the efficiency and effectiveness of dental practitioners.
Subjects
Artificial Intelligence; Radiography, Bitewing; Humans; Pilot Projects; Radiography, Bitewing/methods; Algorithms; Tooth/diagnostic imaging; Deep Learning; Sensitivity and Specificity; Turkey; Radiographic Image Interpretation, Computer-Assisted
ABSTRACT
BACKGROUND: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study was to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in the detection of white spot lesions in post-orthodontic oral photographs using the limited data available, and to serve as a preliminary study for fully automated models that can be clinically integrated in the future. METHODS: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software and labeled for white spot lesions. The labeled images were resized to 640 × 320 while maintaining their aspect ratio before model training, then randomly divided into three groups (training: 349 images (1589 labels), validation: 43 images (181 labels), test: 43 images (215 labels)). The YOLOv5x algorithm was used for deep learning. The segmentation performance of the tested model was visualized and analyzed using ROC analysis and a confusion matrix, and true positive (TP), false positive (FP), and false negative (FN) values were determined. RESULTS: Among the test group images, there were 133 TPs, 36 FPs, and 82 FNs. The model's precision, recall, and F1 score for detecting white spot lesions were 0.786, 0.618, and 0.692, respectively. The AUC value obtained from the ROC analysis was 0.712, and the mAP value obtained from the precision-recall curve was 0.425. CONCLUSIONS: The model's accuracy and sensitivity in detecting white spot lesions remained lower than expected for practical application, but represent a promising and acceptable detection rate compared with previous studies. The current study provides preliminary insight and can be further improved by increasing the training dataset and applying modifications to the deep learning algorithm.
CLINICAL RELEVANCE: Deep learning systems can help clinicians distinguish white spot lesions that may be missed during visual inspection.
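The AUC values quoted in these abstracts come from integrating the ROC curve; given a few (FPR, TPR) operating points, the trapezoidal rule is enough (the points below are illustrative, not taken from the study):

```python
def roc_auc(points):
    """Area under an ROC curve given (fpr, tpr) points, integrated by trapezoids."""
    pts = sorted(points)  # ensure points run left to right along the FPR axis
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2  # trapezoid between consecutive points
    return area

# Illustrative operating points from (0,0) to (1,1)
print(roc_auc([(0.0, 0.0), (0.2, 0.5), (0.5, 0.8), (1.0, 1.0)]))  # 0.695
```

A chance-level classifier traces the diagonal from (0, 0) to (1, 1), which this function correctly scores as 0.5.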
Subjects
Algorithms; Deep Learning; Photography, Dental; Humans; Image Processing, Computer-Assisted/methods; Photography, Dental/methods; Pilot Projects
ABSTRACT
OBJECTIVES: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model. METHODS: In 101 CBCT scans, MS were annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into 3 parts: 80 CBCT scans for training the model, 11 for validation, and 10 for testing. Training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The model's ability to automatically segment the MS on CBCT scans was assessed by several parameters, including F1-score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and intersection over union (IoU) values. RESULTS: The F1-score, accuracy, sensitivity, and precision values were 0.96, 0.99, 0.96, and 0.96, respectively, for the successful segmentation of the maxillary sinus in CBCT images. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. CONCLUSIONS: Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.
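The Dice coefficient and IoU reported above are overlap measures between the predicted and ground-truth masks; for binary masks they reduce to simple set arithmetic (a sketch with flat 0/1 lists standing in for CBCT voxel volumes):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU for two binary masks of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))  # voxels marked in both masks
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)
    iou = inter / (p_sum + t_sum - inter)  # union = |pred| + |truth| - overlap
    return dice, iou

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
print(dice_and_iou(pred, truth))  # (0.75, 0.6)
```

The two scores are monotonically related (IoU = Dice / (2 - Dice)), so Dice is always the larger of the pair, consistent with the 0.96 versus 0.93 reported above.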
Subjects
Artificial Intelligence; Cone-Beam Computed Tomography; Maxillary Sinus; Cone-Beam Computed Tomography/methods; Humans; Maxillary Sinus/diagnostic imaging; Software; Female; Male; Adult
ABSTRACT
OBJECTIVE: The aim of this study was to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar using panoramic images before surgery. MATERIALS AND METHODS: The dataset consists of 708 panoramic radiographs of patients who presented to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored based on depth (V), angulation (H), relation with the maxillary sinus (S), and relation with the ramus (R) on panoramic images. The YOLOv5x architecture was used to perform automatic segmentation and classification. To prevent images used in training from being re-tested, the dataset was subdivided as follows: 80% training, 10% validation, and 10% test groups. RESULTS: The impacted upper third molar segmentation model showed the best performance, with sensitivity, precision and F1 score of 0.9705, 0.9428 and 0.9565, respectively. The S-model had lower sensitivity, precision and F1 score than the other models, at 0.8974, 0.6194 and 0.7329, respectively. CONCLUSION: The results showed that the proposed DL model could be effective for predicting the surgical difficulty of an impacted maxillary third molar using panoramic radiographs, and this approach might serve as a decision support mechanism for clinicians in the perisurgical period.
Subjects
Deep Learning; Maxilla; Molar, Third; Radiography, Panoramic; Tooth Extraction; Tooth, Impacted; Humans; Molar, Third/surgery; Molar, Third/diagnostic imaging; Tooth, Impacted/surgery; Tooth, Impacted/diagnosis; Tooth, Impacted/diagnostic imaging; Tooth, Impacted/epidemiology; Maxilla/surgery; Maxilla/diagnostic imaging; Maxilla/pathology; Tooth Extraction/methods; Tooth Extraction/statistics & numerical data; Female; Male; Adult; Adolescent; Young Adult
ABSTRACT
One of the most common congenital anomalies of the head and neck region is cleft lip and palate. This retrospective case-control study aimed to compare maxillary sinus volumes in individuals with bilateral cleft lip and palate (BCLP) with those of a non-cleft control group. The study comprised 72 participants: 36 patients with BCLP and 36 gender- and age-matched control subjects. All scans were obtained using cone beam computed tomography (CBCT) for diagnostic purposes, and 3D Dolphin software was used for sinus segmentation. Volumetric measurements were taken in cubic millimeters. No significant differences were found between the sex and age distributions of the two groups. Additionally, no statistically significant difference was observed between the right and left sides in either the BCLP or the control group (p > 0.05). However, the mean maxillary sinus volume of BCLP patients (8014.26 ± 2841.03 mm³) was significantly lower than that of the healthy control group (11,085.21 ± 3146.12 mm³) (p < 0.05). The findings of this study suggest that clinicians should be aware of the lower maxillary sinus volumes in BCLP patients when planning surgical interventions. The utilization of CBCT and sinus segmentation allowed for precise measurement of maxillary sinus volumes, contributing to the existing literature on anatomical variations in BCLP patients.
Subjects
Cleft Lip; Cleft Palate; Humans; Cleft Lip/diagnostic imaging; Cleft Palate/diagnostic imaging; Cleft Palate/surgery; Maxillary Sinus/diagnostic imaging; Retrospective Studies; Cone-Beam Computed Tomography/methods
ABSTRACT
BACKGROUND: This retrospective study aimed to develop a deep learning algorithm for the interpretation of panoramic radiographs and to examine the performance of this algorithm in the detection of periodontal bone loss and bone loss patterns. METHODS: A total of 1121 panoramic radiographs were used in this study. Bone loss in the maxilla and mandible (total alveolar bone loss) (n = 2251), interdental bone loss (n = 25303), and furcation defects (n = 2815) were labeled using the segmentation method. In addition, interdental bone loss was divided into horizontal (n = 21839) and vertical (n = 3464) patterns. A convolutional neural network (CNN)-based artificial intelligence (AI) system was developed using the U-Net architecture. The performance of the deep learning algorithm was statistically evaluated by confusion matrix and ROC curve analysis. RESULTS: The system showed the highest diagnostic performance in the detection of total alveolar bone loss (AUC = 0.951) and the lowest in the detection of vertical bone loss (AUC = 0.733). The sensitivity, precision, F1 score, accuracy, and AUC values were 1, 0.995, 0.997, 0.994, and 0.951 for total alveolar bone loss; 0.947, 0.939, 0.943, 0.892, and 0.910 for horizontal bone loss; 0.558, 0.846, 0.673, 0.506, and 0.733 for vertical bone loss; and 0.892, 0.933, 0.912, 0.837, and 0.868 for furcation defects, respectively. CONCLUSIONS: AI systems offer promising results in determining periodontal bone loss patterns and furcation defects from dental radiographs. This suggests that CNN algorithms can also be used to provide more detailed information, such as automatic determination of periodontal disease severity and treatment planning, from various dental radiographs.
Subjects
Alveolar Bone Loss; Deep Learning; Furcation Defects; Humans; Alveolar Bone Loss/diagnostic imaging; Radiography, Panoramic/methods; Retrospective Studies; Furcation Defects/diagnostic imaging; Artificial Intelligence; Algorithms
ABSTRACT
The objective of this study was to use a deep-learning model based on a CNN architecture to detect second mesiobuccal (MB2) canals, which are seen as a variation in maxillary molar root canals. In the current study, 922 axial sections from the cone beam computed tomography (CBCT) images of 153 patients were used. The segmentation method was employed to identify the MB2 canals in maxillary molars that had not previously undergone endodontic treatment. Labeled images were divided into training (80%), validation (10%) and test (10%) groups. The artificial intelligence (AI) model was trained using the You Only Look Once v5 (YOLOv5x) architecture with 500 epochs and a learning rate of 0.01. A confusion matrix and receiver-operating characteristic (ROC) analysis were used in the statistical evaluation of the results. The sensitivity of the MB2 canal segmentation model was 0.92, the precision was 0.83, and the F1 score was 0.87. The area under the curve (AUC) in the ROC graph of the model was 0.84. The mAP value at 0.5 intersection over union (IoU) was found to be 0.88. The deep-learning algorithm showed high success in detecting the MB2 canal. The success of endodontic treatment can be increased, and clinicians' time preserved, by using newly created artificial intelligence-based models to identify variations in root canal anatomy before treatment.