Results 1 - 20 of 45
1.
BMC Med Imaging ; 24(1): 172, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38992601

ABSTRACT

OBJECTIVES: In the interpretation of panoramic radiographs (PRs), the identification and numbering of teeth is an important part of correct diagnosis. This study evaluates the effectiveness of YOLO-v5 in the automatic detection, segmentation, and numbering of deciduous and permanent teeth in pediatric patients with mixed dentition, based on PRs. METHODS: PRs of 3854 pediatric patients with mixed dentition were labelled for deciduous and permanent teeth using the CranioCatch labeling program. The dataset was divided into three subsets: training (n = 3093, 80% of the total), validation (n = 387, 10% of the total), and test (n = 385, 10% of the total). An artificial intelligence (AI) algorithm using YOLO-v5 models was developed. RESULTS: The sensitivity, precision, F1 score, and mean average precision at 0.5 (mAP-0.5) values were 0.99, 0.99, 0.99, and 0.98, respectively, for tooth detection. The sensitivity, precision, F1 score, and mAP-0.5 values were 0.98, 0.98, 0.98, and 0.98, respectively, for tooth segmentation. CONCLUSIONS: YOLO-v5-based models have the potential to detect and enable accurate segmentation of deciduous and permanent teeth on PRs of pediatric patients with mixed dentition.
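The 80/10/10 split described above can be sketched as follows. This is an illustrative helper, not the study's code; the seed is arbitrary, and the paper's reported counts (3093/387/385) differ slightly from a plain proportional split.

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle and partition items into train/val/test subsets (hypothetical helper)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 3854 image IDs, as in the study's dataset size
train_set, val_set, test_set = split_dataset(range(3854))
print(len(train_set), len(val_set), len(test_set))  # 3083 385 386
```

Shuffling before splitting keeps the three subsets statistically similar; a fixed seed makes the partition reproducible.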


Subjects
Deep Learning, Mixed Dentition, Pediatric Dentistry, Panoramic Radiography, Tooth, Panoramic Radiography/methods, Deep Learning/standards, Tooth/diagnostic imaging, Humans, Preschool Child, Child, Adolescent, Male, Female, Pediatric Dentistry/methods
2.
Odontology ; 112(2): 552-561, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37907818

ABSTRACT

The objective of this study is to use a deep-learning model based on a CNN architecture to detect second mesiobuccal (MB2) canals, which are seen as a variation in the root canals of maxillary molars. In the current study, 922 axial sections from the cone beam computed tomography (CBCT) images of 153 patients were used. The segmentation method was employed to identify the MB2 canals in maxillary molars that had not previously undergone endodontic treatment. Labeled images were divided into training (80%), validation (10%), and testing (10%) groups. The artificial intelligence (AI) model was trained using the You Only Look Once v5 (YOLOv5x) architecture for 500 epochs with a learning rate of 0.01. A confusion matrix and receiver-operating characteristic (ROC) analysis were used in the statistical evaluation of the results. The sensitivity of the MB2 canal segmentation model was 0.92, the precision was 0.83, and the F1 score was 0.87. The area under the curve (AUC) in the ROC graph of the model was 0.84. The mAP value at 0.5 intersection over union (IoU) was found to be 0.88. The deep-learning algorithm showed high success in the detection of the MB2 canal. The success of endodontic treatment can be increased, and clinicians' time preserved, by using newly created artificial intelligence-based models to identify variations in root canal anatomy before treatment.
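The mAP at 0.5 IoU reported above counts a detection as correct when its predicted box overlaps the ground truth by at least 50%. A minimal sketch of the underlying IoU computation (`box_iou` is an illustrative helper, not from the study):

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)  # clamp to 0 when boxes don't overlap
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# Half-overlapping boxes: intersection 50, union 150
print(round(box_iou((0, 0, 10, 10), (5, 0, 15, 10)), 4))  # 0.3333
```

A prediction with IoU below the 0.5 threshold is scored as a false positive when computing mAP-0.5.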


Subjects
Artificial Intelligence, Dental Pulp Cavity, Humans, Dental Pulp Cavity/diagnostic imaging, Tooth Root, Maxilla/anatomy & histology, Cone-Beam Computed Tomography/methods
3.
Article in English | MEDLINE | ID: mdl-39024043

ABSTRACT

OBJECTIVE: This study aimed to assess the effectiveness of deep convolutional neural network (CNN) algorithms in the detection and segmentation of overhanging dental restorations in bitewing radiographs. METHOD AND MATERIALS: A total of 1160 anonymized bitewing radiographs were used to develop the artificial intelligence (AI) system for the detection and segmentation of overhanging restorations. The data were then divided into three groups: 80% for training (930 images, 2399 labels), 10% for validation (115 images, 273 labels), and 10% for testing (115 images, 306 labels). A CNN model known as You Only Look Once (YOLOv5) was trained to detect overhanging restorations in bitewing radiographs. After utilizing the remaining 115 radiographs to evaluate the efficacy of the proposed CNN model, the accuracy, sensitivity, precision, F1 score, and area under the receiver operating characteristic curve (AUC) were computed. RESULTS: The model demonstrated a precision of 90.9%, a sensitivity of 85.3%, and an F1 score of 88.0%. Furthermore, the model achieved an AUC of 0.859 on the receiver operating characteristic (ROC) curve. The mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5 was notably high at 0.87. CONCLUSION: The findings suggest that deep CNN algorithms are highly effective in the detection and diagnosis of overhanging dental restorations in bitewing radiographs. The high levels of precision, sensitivity, and F1 score, along with the significant AUC and mAP values, underscore the potential of these advanced deep learning techniques in revolutionizing dental diagnostic procedures.

4.
Dentomaxillofac Radiol ; 53(4): 256-266, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38502963

ABSTRACT

OBJECTIVES: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model. METHODS: In 101 CBCT scans, the MS was annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into 3 parts: 80 CBCT scans for training the model, 11 CBCT scans for model validation, and 10 CBCT scans for testing the model. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The performance of the model in automatically segmenting the MS on CBCT scans was assessed by several parameters, including F1 score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and intersection over union (IoU) values. RESULTS: The F1 score, accuracy, sensitivity, and precision values were found to be 0.96, 0.99, 0.96, and 0.96, respectively, for successful segmentation of the maxillary sinus in CBCT images. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. CONCLUSIONS: Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.
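The Dice coefficient and IoU used above to score segmentation overlap can be computed directly from binary masks, as in this illustrative sketch (not the study's evaluation code):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and IoU for two binary segmentation masks (numpy arrays)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())      # 2|A∩B| / (|A| + |B|)
    iou = inter / np.logical_or(pred, gt).sum()     # |A∩B| / |A∪B|
    return float(dice), float(iou)

# Two 4x4 squares on an 8x8 grid, overlapping in a 3x3 region (9 px)
pred = np.zeros((8, 8), int); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), int);   gt[3:7, 3:7] = 1
print(dice_and_iou(pred, gt))  # (0.5625, 0.391...)
```

Dice weights the overlap against the average mask size, while IoU weights it against the union, so Dice is always at least as large as IoU for the same pair of masks.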


Subjects
Artificial Intelligence, Cone-Beam Computed Tomography, Maxillary Sinus, Cone-Beam Computed Tomography/methods, Humans, Maxillary Sinus/diagnostic imaging, Software, Female, Male, Adult
5.
BMC Oral Health ; 24(1): 155, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38297288

ABSTRACT

BACKGROUND: This retrospective study aimed to develop a deep learning algorithm for the interpretation of panoramic radiographs and to examine the performance of this algorithm in the detection of periodontal bone losses and bone loss patterns. METHODS: A total of 1121 panoramic radiographs were used in this study. Bone losses in the maxilla and mandible (total alveolar bone loss) (n = 2251), interdental bone losses (n = 25303), and furcation defects (n = 2815) were labeled using the segmentation method. In addition, interdental bone losses were divided into horizontal (n = 21839) and vertical (n = 3464) bone losses according to the defect patterns. A convolutional neural network (CNN)-based artificial intelligence (AI) system was developed using the U-Net architecture. The performance of the deep learning algorithm was statistically evaluated by confusion matrix and ROC curve analysis. RESULTS: The system showed the highest diagnostic performance in the detection of total alveolar bone losses (AUC = 0.951) and the lowest in the detection of vertical bone losses (AUC = 0.733). The sensitivity, precision, F1 score, accuracy, and AUC values were 1, 0.995, 0.997, 0.994, and 0.951 for total alveolar bone loss; 0.947, 0.939, 0.943, 0.892, and 0.910 for horizontal bone losses; 0.558, 0.846, 0.673, 0.506, and 0.733 for vertical bone losses; and 0.892, 0.933, 0.912, 0.837, and 0.868 for furcation defects, respectively. CONCLUSIONS: AI systems offer promising results in determining periodontal bone loss patterns and furcation defects from dental radiographs. This suggests that CNN algorithms can also be used to provide more detailed information, such as automatic determination of periodontal disease severity and treatment planning, in various dental radiographs.
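The F1 scores reported above are the harmonic mean of precision and sensitivity (recall). As a minimal check (not the study's code), plugging in the reported horizontal bone-loss values reproduces the reported F1:

```python
def f1(precision, sensitivity):
    """Harmonic mean of precision and sensitivity (recall)."""
    return 2 * precision * sensitivity / (precision + sensitivity)

# Reported horizontal bone-loss values: precision 0.939, sensitivity 0.947
print(round(f1(0.939, 0.947), 3))  # 0.943
```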


Subjects
Alveolar Bone Loss, Deep Learning, Furcation Defects, Humans, Alveolar Bone Loss/diagnostic imaging, Panoramic Radiography/methods, Retrospective Studies, Furcation Defects/diagnostic imaging, Artificial Intelligence, Algorithms
6.
BMC Oral Health ; 24(1): 490, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658959

ABSTRACT

BACKGROUND: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study is to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in the detection of white spot lesions in post-orthodontic oral photographs using the limited data available, and to serve as a preliminary study for fully automated models that can be clinically integrated in the future. METHODS: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software, and white spot lesions were labeled. The labeled images were resized to 640 × 320 while maintaining their aspect ratio before model training. The labeled images were randomly divided into three groups (training: 349 images, 1589 labels; validation: 43 images, 181 labels; test: 43 images, 215 labels). The YOLOv5x algorithm was used to perform deep learning. The segmentation performance of the tested model was visualized and analyzed using ROC analysis and a confusion matrix. True positive (TP), false positive (FP), and false negative (FN) values were determined. RESULTS: Among the test group images, there were 133 TPs, 36 FPs, and 82 FNs. The model's precision, recall, and F1 score for detecting white spot lesions were 0.786, 0.618, and 0.692, respectively. The AUC value obtained from the ROC analysis was 0.712. The mAP value obtained from the precision-recall curve was 0.425. CONCLUSIONS: The model's accuracy and sensitivity in detecting white spot lesions remained lower than expected for practical application, but represent a promising and acceptable detection rate compared with previous studies. The current study provides preliminary insight that can be further improved by increasing the training dataset and applying modifications to the deep learning algorithm.
CLINICAL RELEVANCE: Deep learning systems can help clinicians distinguish white spot lesions that may be missed during visual inspection.


Subjects
Algorithms, Deep Learning, Dental Photography, Humans, Computer-Assisted Image Processing/methods, Dental Photography/methods, Pilot Projects
7.
J Oral Rehabil ; 50(9): 758-766, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37186400

ABSTRACT

BACKGROUND: The use of artificial intelligence has many advantages, especially in the field of oral and maxillofacial radiology. Early diagnosis of temporomandibular joint osteoarthritis by artificial intelligence may improve prognosis. OBJECTIVE: The aim of this study is to perform classification of temporomandibular joint (TMJ) osteoarthritis and TMJ segmentation on cone beam computed tomography (CBCT) sagittal images with artificial intelligence. METHODS: In this study, the success of the YOLOv5 architecture, an artificial intelligence model, in TMJ segmentation and osteoarthritis classification was evaluated on 2000 sagittal sections (500 healthy, 500 erosion, 500 osteophyte, 500 flattening images) obtained from the CBCT DICOM images of 290 patients. RESULTS: The sensitivity, precision, and F1 scores of the model for TMJ osteoarthritis classification were 1, 0.7678, and 0.8686, respectively. The accuracy value for classification was 0.7678. The prediction values of the classification model were 88% for healthy joints, 70% for flattened joints, 95% for joints with erosion, and 86% for joints with osteophytes. The sensitivity, precision, and F1 score of the YOLOv5 model for TMJ segmentation were 1, 0.9953, and 0.9976, respectively. The AUC value of the model for TMJ segmentation was 0.9723. In addition, the accuracy value of the model for TMJ segmentation was found to be 0.9953. CONCLUSION: The artificial intelligence model applied in this study can serve as a support method that saves time and offers convenience to physicians in the diagnosis of the disease, given its successful results in TMJ segmentation and osteoarthritis classification.


Subjects
Osteoarthritis, Temporomandibular Joint Disorders, Humans, Temporomandibular Joint Disorders/diagnostic imaging, Artificial Intelligence, Temporomandibular Joint/diagnostic imaging, Cone-Beam Computed Tomography/methods, Osteoarthritis/diagnostic imaging
8.
BMC Oral Health ; 23(1): 764, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848870

ABSTRACT

BACKGROUND: Panoramic radiographs, in which anatomic landmarks can be observed, are used to detect cases closely related to pediatric dentistry. The purpose of the study is to investigate the success and reliability of the detection of maxillary and mandibular anatomic structures observed on panoramic radiographs in children using artificial intelligence. METHODS: A total of 981 mixed images of pediatric patients were labelled for 9 different pediatric anatomic landmarks, including the maxillary sinus, orbit, mandibular canal, mental foramen, mandibular foramen, mandibular incisure, articular eminence, and condylar and coronoid processes. Training was carried out using 2D convolutional neural network (CNN) architectures over 500 training epochs, and PyTorch-implemented YOLO-v5 models were produced. The success rate of the AI model's predictions was tested on a 10% test data set. RESULTS: A total of 14,804 labels were made, comprising the maxillary sinus (1922), orbit (1944), mandibular canal (1879), mental foramen (884), mandibular foramen (1885), mandibular incisure (1922), articular eminence (1645), and condylar (1733) and coronoid (990) processes. The most successful F1 scores were obtained for the orbit (1), mandibular incisure (0.99), maxillary sinus (0.98), and mandibular canal (0.97). The best sensitivity values were obtained for the orbit, maxillary sinus, mandibular canal, mandibular incisure, and condylar process. The worst sensitivity values were obtained for the mental foramen (0.92) and articular eminence (0.92). CONCLUSIONS: The regular and standardized labelling, the relatively larger areas, and the success of the YOLO-v5 algorithm contributed to these successful results. Automatic segmentation of these structures will save physicians time in clinical diagnosis and will increase the visibility of pathologies related to these structures and physicians' awareness of them.


Subjects
Anatomic Landmarks, Artificial Intelligence, Humans, Child, Panoramic Radiography/methods, Anatomic Landmarks/diagnostic imaging, Reproducibility of Results, Mandible/diagnostic imaging
9.
Med Princ Pract ; 31(6): 555-561, 2022.
Article in English | MEDLINE | ID: mdl-36167054

ABSTRACT

OBJECTIVE: The purpose of the study was to create an artificial intelligence (AI) system for detecting idiopathic osteosclerosis (IO) on panoramic radiographs for automatic, routine, and simple evaluations. SUBJECT AND METHODS: In this study, a deep learning method was applied to panoramic radiographs obtained from healthy patients. A total of 493 anonymized panoramic radiographs were used to develop the AI system (CranioCatch, Eskisehir, Turkey) for the detection of IOs. The panoramic radiographs were acquired from the radiology archives of the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University. A GoogLeNet Inception v2 model implemented with the TensorFlow library was used for the detection of IOs. A confusion matrix was used to evaluate model performance. RESULTS: Fifty IOs were detected accurately by the AI model in the 52 test images, which contained 57 IOs. The sensitivity, precision, and F-measure values were 0.88, 0.83, and 0.86, respectively. CONCLUSION: A deep learning-based AI algorithm has the potential to detect IOs accurately on panoramic radiographs. AI systems may reduce the workload of dentists in terms of diagnostic effort.


Subjects
Deep Learning, Osteosclerosis, Humans, Artificial Intelligence, Panoramic Radiography, Algorithms, Osteosclerosis/diagnostic imaging
10.
Pol J Radiol ; 87: e516-e520, 2022.
Article in English | MEDLINE | ID: mdl-36250137

ABSTRACT

Purpose: Magnetic resonance imaging (MRI) has a special place in the evaluation of orbital and periorbital lesions. Segmentation is one of the tasks performed by deep learning methods. In this study, we aimed to perform segmentation of orbital and periorbital lesions. Material and methods: Contrast-enhanced orbital MRIs performed between 2010 and 2019 were retrospectively screened, and 302 cross-sections of contrast-enhanced, fat-suppressed, T1-weighted, axial MRI images of 95 patients obtained using 3 T and 1.5 T devices were included in the study. The dataset was divided into three subsets: training, test, and validation. The training and validation data were quadrupled by applying data augmentation (horizontal, vertical, and both). A PyTorch U-Net was used for training, with 100 epochs. The intersection over union (IoU) threshold (the Jaccard index) was set at 50%, and the results were calculated. Results: The 77th-epoch model provided the best results: true positives, 23; false positives, 4; and false negatives, 8. The precision, sensitivity, and F1 score were determined as 0.85, 0.74, and 0.79, respectively. Conclusions: Our study proved successful in segmentation by the deep learning method. It is one of the pioneering studies on this subject and will shed light on further segmentation studies in orbital MR images.

11.
BMC Med Imaging ; 21(1): 124, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34388975

ABSTRACT

BACKGROUND: Panoramic radiography is an imaging method for displaying maxillary and mandibular teeth together with their supporting structures. Panoramic radiography is frequently used in dental imaging due to its relatively low radiation dose, short imaging time, and low burden to the patient. We verified the diagnostic performance of an artificial intelligence (AI) system based on a deep convolutional neural network method to detect and number teeth on panoramic radiographs. METHODS: The data set included 2482 anonymized panoramic radiographs from adults from the archive of Eskisehir Osmangazi University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology. A Faster R-CNN Inception v2 model was used to develop an AI algorithm (CranioCatch, Eskisehir, Turkey) to automatically detect and number teeth on panoramic radiographs. Human observation and AI methods were compared on a test data set consisting of 249 panoramic radiographs. True positive, false positive, and false negative rates were calculated for each quadrant of the jaws. The sensitivity, precision, and F-measure values were estimated using a confusion matrix. RESULTS: The total numbers of true positive, false positive, and false negative results were 6940, 250, and 320 for all quadrants, respectively. Consequently, the estimated sensitivity, precision, and F-measure were 0.9559, 0.9652, and 0.9606, respectively. CONCLUSIONS: The deep convolutional neural network system was successful in detecting and numbering teeth. Clinicians can use AI systems to detect and number teeth on panoramic radiographs, which may eventually replace evaluation by human observers and support decision making.
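The sensitivity, precision, and F-measure above follow directly from the reported TP/FP/FN totals. A minimal sketch of that computation (illustrative, not the study's code):

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall), precision, and F1 from detection counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

# Totals reported across all quadrants: TP = 6940, FP = 250, FN = 320
s, p, f = detection_metrics(6940, 250, 320)
print(round(s, 4), round(p, 4), round(f, 4))  # 0.9559 0.9652 0.9606
```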


Subjects
Computer Neural Networks, Panoramic Radiography, Tooth/diagnostic imaging, Algorithms, Datasets as Topic, Deep Learning, Humans, Sensitivity and Specificity
12.
Acta Odontol Scand ; 79(4): 275-281, 2021 May.
Article in English | MEDLINE | ID: mdl-33176533

ABSTRACT

OBJECTIVES: Radiological examination has an important place in dental practice and is frequently used in intraoral imaging. The correct numbering of teeth on radiographs is a routine practice that takes the dentist's time. This study aimed to propose an automatic detection system for the numbering of teeth in bitewing images using the Faster Region-based Convolutional Neural Network (R-CNN) method. METHODS: The study included 1125 bitewing radiographs of patients who attended the Faculty of Dentistry of Ordu University from 2018 to 2019. Faster R-CNN, an advanced object-detection method, was used to identify the teeth. A confusion matrix was used as the metric to evaluate the success of the model. RESULTS: The deep CNN system (CranioCatch, Eskisehir, Turkey) was used to detect and number teeth in bitewing radiographs. Of 715 teeth in 109 bitewing images, 697 were correctly numbered in the test data set. The F1 score, precision, and sensitivity were 0.9515, 0.9293, and 0.9748, respectively. CONCLUSIONS: A CNN approach for the analysis of bitewing images shows promise for detecting and numbering teeth. This method can save dentists time by automatically preparing dental charts.


Subjects
Artificial Intelligence, Tooth, Dental Occlusion, Humans, Computer Neural Networks, Tooth/diagnostic imaging, Turkey
13.
Int J Comput Dent ; 24(1): 1-9, 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33634681

ABSTRACT

AIM: The aim of the study was to evaluate the success and reliability of an artificial intelligence (AI) application in the detection and classification of submerged teeth in panoramic radiographs. MATERIALS AND METHODS: Convolutional neural network (CNN) algorithms were used to detect and classify submerged molars. The detection module, based on the state-of-the-art Faster R-CNN architecture, processed a radiograph to define the boundaries of submerged molars. A separate testing set was used to evaluate the diagnostic performance of the system and compare it with that of experts in the field. RESULT: The success rate of the classification and identification of the system was high when evaluated according to the reference standard. The system was extremely accurate in comparison with the observers. CONCLUSIONS: The performance of the proposed computer-aided diagnosis solution is comparable to that of experts. It is useful to diagnose submerged molars with an AI application to prevent errors. In addition, this will facilitate diagnosis for pediatric dentists.


Subjects
Artificial Intelligence, Deep Learning, Child, Humans, Pilot Projects, Reproducibility of Results, Deciduous Tooth
14.
Tuberk Toraks ; 69(4): 486-491, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34957742

ABSTRACT

INTRODUCTION: Computed tomography (CT) is an auxiliary modality in the diagnosis of the novel coronavirus disease (COVID-19) and can guide physicians in the presence of lung involvement. In this study, we aimed to investigate the contribution of deep learning to diagnosis in patients with typical COVID-19 pneumonia findings on CT. MATERIALS AND METHODS: This study retrospectively evaluated 690 lesions obtained from 35 patients diagnosed with COVID-19 pneumonia based on typical findings on non-contrast high-resolution CT (HRCT) in our hospital. The patients' diagnoses were also confirmed by other necessary tests. HRCT images were assessed in the parenchymal window, and COVID-19 lesions were detected in the images obtained. For the deep convolutional neural network (CNN) algorithm, a confusion matrix was used, based on a TensorFlow framework in Python. RESULT: A total of 596 labeled lesions obtained from 224 sections of the images were used for training the algorithm, 89 labeled lesions from 27 sections were used for validation, and 67 labeled lesions from 25 images for testing. Fifty-six of the 67 lesions used in the testing stage were accurately detected by the algorithm, while the remaining 11 were not recognized. There were no false positives. The recall, precision, and F1 score values in the test group were 83.58%, 100%, and 91.06%, respectively. CONCLUSIONS: We successfully detected COVID-19 pneumonia lesions on CT images using algorithms created with artificial intelligence. The integration of deep learning into the diagnostic stage in medicine is an important step for the diagnosis of diseases that can cause lung involvement in possible future pandemics.


Subjects
COVID-19, Deep Learning, Artificial Intelligence, Humans, Retrospective Studies, SARS-CoV-2, X-Ray Computed Tomography
15.
J Oral Implantol ; 49(4): 344-345, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37527149
16.
J Eval Clin Pract ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38741561

ABSTRACT

BACKGROUND: Machine learning techniques (MLT) build models to detect complex patterns and solve new problems using big data. AIM: The present study aims to create a prediction interface, using MLT, for mothers breastfeeding exclusively for the first 6 months. METHOD: All mothers who had babies aged 6-24 months between 15.09.2021 and 15.12.2021 and to whom the surveys could be delivered were included. A "Personal Information Form" created by the researchers was used as the data collection tool. Data from the 514 mothers participating in the study were used for the MLT. Data from 70% of the mothers were used for training, and a prediction model was created. The data obtained from the remaining 30% of the mothers were used for testing. RESULTS: The best MLT algorithm for predicting exclusive breastfeeding for the first 6 months was determined to be the Random Forest Classifier. The top five variables affecting the likelihood of mothers breastfeeding exclusively for the first 6 months were as follows: "the mother not having any health problems during pregnancy," "there being no people who negatively affected the mother's morale about breastfeeding," "the amount of water the mother drinks in a day," "thinking that her milk supply is insufficient," and "having no problems breastfeeding the baby." CONCLUSIONS: Using the created prediction model may allow early identification of mothers at risk of not breastfeeding their babies exclusively for the first 6 months. In this way, mothers in the risk group can be closely monitored in the early period.
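A sketch of the pipeline described above, using scikit-learn's RandomForestClassifier on synthetic stand-in data. All features and the target here are invented placeholders for the questionnaire items; only the 70/30 split, the sample size, and the model family come from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# 514 respondents, 5 hypothetical yes/no questionnaire items
X = rng.integers(0, 2, size=(514, 5)).astype(float)
# synthetic target loosely driven by items 0 and 3 (purely illustrative)
y = (X[:, 0] + X[:, 3] + rng.random(514) > 1.5).astype(int)

# 70% of the data for training, 30% for testing, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# feature_importances_ ranks which items most affect the prediction,
# analogous to the study's "top five variables"
ranking = np.argsort(model.feature_importances_)[::-1]
print(model.score(X_te, y_te), list(ranking))
```

Random forests expose per-feature importances for free, which is likely why they lend themselves to the kind of "top variables" reporting used in the abstract.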

17.
Article in English | MEDLINE | ID: mdl-38632035

ABSTRACT

OBJECTIVE: The aim of this study is to assess the efficacy of employing a deep learning methodology for the automated identification and enumeration of permanent teeth in bitewing radiographs. The experimental procedures and techniques employed in this study are described in the following section. STUDY DESIGN: A total of 1248 bitewing radiography images were annotated using the CranioCatch labeling program, developed in Eskisehir, Turkey. The dataset was partitioned into 3 subsets: training (n = 1000, 80% of the total), validation (n = 124, 10% of the total), and test (n = 124, 10% of the total) sets. The images were subjected to a 3 × 3 clash operation in order to enhance the clarity of the labeled regions. RESULTS: The F1, sensitivity, and precision results of the artificial intelligence model obtained using the YOLOv5 architecture on the test dataset were found to be 0.9913, 0.9954, and 0.9873, respectively. CONCLUSION: The use of numerical identification of teeth within deep learning-based artificial intelligence algorithms applied to bitewing radiographs has demonstrated notable efficacy. Clinical decision support system software augmented by artificial intelligence has the potential to enhance the efficiency and effectiveness of dental practitioners.


Subjects
Artificial Intelligence, Bitewing Radiography, Humans, Pilot Projects, Bitewing Radiography/methods, Algorithms, Tooth/diagnostic imaging, Deep Learning, Sensitivity and Specificity, Turkey, Computer-Assisted Radiographic Image Interpretation
18.
J Stomatol Oral Maxillofac Surg ; : 101975, 2024 Jul 21.
Article in English | MEDLINE | ID: mdl-39043293

ABSTRACT

INTRODUCTION: Oral squamous cell carcinomas (OSCC) seen in the oral cavity are a category of disease that dentists may diagnose and even treat. This study evaluated the performance of diagnostic computer software developed to detect oral cancer lesions in retrospective intra-oral patient images. MATERIALS AND METHODS: Oral cancer lesions were labeled using the CranioCatch labeling program (CranioCatch, Eskisehir, Turkey) and the polygonal labeling method on a total of 65 anonymized retrospective intraoral images of the oral mucosa from individuals in our clinic whose oral cancer was diagnosed histopathologically by incisional biopsy. All images were rechecked and verified by experienced experts. This data set was divided into training (n = 53), validation (n = 6), and test (n = 6) sets. An artificial intelligence model was developed using the YOLOv5 architecture, a deep learning approach. Model success was evaluated with a confusion matrix. RESULTS: On the test images withheld from training, the F1, sensitivity, and precision results of the artificial intelligence model obtained using the YOLOv5 architecture were found to be 0.667, 0.667, and 0.667, respectively. CONCLUSIONS: Our study reveals that OSCC lesions carry discriminative visual appearances, which can be identified by a deep learning algorithm. Artificial intelligence shows promise in the prediagnosis of oral cancer lesions. Success rates will increase as training datasets are formed with more images.

19.
J Stomatol Oral Maxillofac Surg ; : 101817, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38458545

ABSTRACT

OBJECTIVE: The aim of this study is to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar using panoramic images before surgery. MATERIALS AND METHODS: The dataset consists of 708 panoramic radiographs of patients who applied to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored based on depth (V), angulation (H), relation with the maxillary sinus (S), and relation with the ramus (R) on panoramic images. The YOLOv5x architecture was used to perform automatic segmentation and classification. To prevent images used in training from being re-tested, the dataset was subdivided as follows: 80% training, 10% validation, and 10% test groups. RESULTS: The impacted upper third molar segmentation model showed the best success in sensitivity, precision, and F1 score, with 0.9705, 0.9428, and 0.9565, respectively. The S model had lower sensitivity, precision, and F1 score than the other models, with 0.8974, 0.6194, and 0.7329, respectively. CONCLUSION: The results showed that the proposed DL model could be effective for predicting the surgical difficulty of an impacted maxillary third molar using panoramic radiographs, and this approach might help as a decision-support mechanism for clinicians in the perisurgical period.

20.
Diagnostics (Basel) ; 14(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732305

ABSTRACT

This study aims to evaluate the effectiveness of employing a deep learning approach for the automated detection of pulp stones in panoramic imaging. A comprehensive dataset comprising 2409 panoramic radiography images (7564 labels) underwent labeling using the CranioCatch labeling program, developed in Eskisehir, Turkey. The dataset was stratified into three distinct subsets: training (n = 1929, 80% of the total), validation (n = 240, 10% of the total), and test (n = 240, 10% of the total) sets. To optimize the visual clarity of labeled regions, a 3 × 3 clash operation was applied to the images. The YOLOv5 architecture was employed for artificial intelligence modeling, yielding F1, sensitivity, and precision metrics of 0.7892, 0.8026, and 0.7762, respectively, during evaluation on the test dataset. Deep learning-based detection of pulp stones on panoramic radiographs has achieved remarkable success. It is expected that the success rates of trained models will increase with datasets consisting of a larger number of images. The use of artificial intelligence-supported clinical decision support system software has the potential to increase the efficiency and effectiveness of dentists.
