Results 1 - 2 of 2
1.
BMC Oral Health; 24(1): 490, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658959

ABSTRACT

BACKGROUND: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study was to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in detecting white spot lesions in post-orthodontic oral photographs using the limited data available, and to serve as a preliminary study for fully automated models that can be clinically integrated in the future.

METHODS: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software, and white spot lesions were labeled. Before model training, the labeled images were resized to 640 × 320 while maintaining their aspect ratio. The labeled images were randomly divided into three groups: training (349 images, 1589 labels), validation (43 images, 181 labels), and test (43 images, 215 labels). The YOLOv5x algorithm was used to perform deep learning. The segmentation performance of the tested model was visualized and analyzed using ROC analysis and a confusion matrix, and True Positive (TP), False Positive (FP), and False Negative (FN) values were determined.

RESULTS: Among the test group images, there were 133 TPs, 36 FPs, and 82 FNs. The model's precision, recall, and F1 score for detecting white spot lesions were 0.786, 0.618, and 0.692, respectively. The AUC obtained from the ROC analysis was 0.712, and the mAP obtained from the precision-recall curve was 0.425.

CONCLUSIONS: The model's accuracy and sensitivity in detecting white spot lesions remained lower than expected for practical application, but represent a promising and acceptable detection rate compared with previous studies. The current study provides preliminary insight that can be further improved by increasing the training dataset and applying modifications to the deep learning algorithm.

CLINICAL RELEVANCE: Deep learning systems can help clinicians distinguish white spot lesions that may be missed during visual inspection.
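
The reported precision, recall, and F1 score follow directly from the TP/FP/FN counts given above. A minimal Python sketch reproducing that arithmetic (illustrative only, not part of the study's pipeline):

    # Detection counts reported for the 43 test images.
    tp, fp, fn = 133, 36, 82

    precision = tp / (tp + fp)                          # 133 / 169 ≈ 0.786
    recall = tp / (tp + fn)                             # 133 / 215 ≈ 0.618 (sensitivity)
    f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.692

    print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")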


Subject(s)
Algorithms, Deep Learning, Dental Photography, Humans, Computer-Assisted Image Processing/methods, Dental Photography/methods, Pilot Projects
2.
BMC Oral Health; 24(1): 155, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38297288

ABSTRACT

BACKGROUND: This retrospective study aimed to develop a deep learning algorithm for the interpretation of panoramic radiographs and to examine the performance of this algorithm in the detection of periodontal bone loss and bone loss patterns.

METHODS: A total of 1121 panoramic radiographs were used in this study. Bone losses in the maxilla and mandible (total alveolar bone loss) (n = 2251), interdental bone losses (n = 25303), and furcation defects (n = 2815) were labeled using the segmentation method. In addition, interdental bone losses were divided into horizontal (n = 21839) and vertical (n = 3464) bone losses according to the defect patterns. A Convolutional Neural Network (CNN)-based artificial intelligence (AI) system was developed using the U-Net architecture. The performance of the deep learning algorithm was statistically evaluated by confusion matrix and ROC curve analysis.

RESULTS: The system showed the highest diagnostic performance in the detection of total alveolar bone loss (AUC = 0.951) and the lowest in the detection of vertical bone losses (AUC = 0.733). The sensitivity, precision, F1 score, accuracy, and AUC values were, respectively, 1, 0.995, 0.997, 0.994, and 0.951 for total alveolar bone loss; 0.947, 0.939, 0.943, 0.892, and 0.910 for horizontal bone losses; 0.558, 0.846, 0.673, 0.506, and 0.733 for vertical bone losses; and 0.892, 0.933, 0.912, 0.837, and 0.868 for furcation defects.

CONCLUSIONS: AI systems offer promising results in determining periodontal bone loss patterns and furcation defects from dental radiographs. This suggests that CNN algorithms can also be used to provide more detailed information, such as automatic determination of periodontal disease severity and treatment planning, from various dental radiographs.
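
The abstract states that segmentation was performed with a U-Net but gives no architecture details. The following is a minimal two-level U-Net-style sketch in Python/PyTorch; the channel counts, input size, and single output class are illustrative assumptions, not the study's configuration:

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU, as in the original U-Net design.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        """Two-level U-Net: encoder, bottleneck, decoder with one skip connection."""
        def __init__(self, in_ch=1, n_classes=1):
            super().__init__()
            self.enc1 = conv_block(in_ch, 32)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(32, 64)
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)            # 64 = 32 (skip) + 32 (upsampled)
            self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel logits

        def forward(self, x):
            e1 = self.enc1(x)
            b = self.bottleneck(self.pool(e1))
            d1 = self.dec1(torch.cat([self.up(b), e1], dim=1))
            return self.head(d1)

    # Example: one grayscale radiograph at a hypothetical 512x1024 resolution.
    model = TinyUNet(in_ch=1, n_classes=1)
    logits = model(torch.randn(1, 1, 512, 1024))
    print(logits.shape)  # torch.Size([1, 1, 512, 1024])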


Subject(s)
Alveolar Bone Loss, Deep Learning, Furcation Defects, Humans, Alveolar Bone Loss/diagnostic imaging, Panoramic Radiography/methods, Retrospective Studies, Furcation Defects/diagnostic imaging, Artificial Intelligence, Algorithms