Results 1 - 6 of 6
1.
Diagnostics (Basel) ; 13(23)2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38066803

ABSTRACT

Several artificial intelligence-based models have been presented for the detection of periodontal bone loss (PBL), mostly using convolutional neural networks, which are the state of the art in deep learning. Given the emerging breakthrough of transformer networks in computer vision, we aimed to evaluate various models for automated PBL detection. An image data set of 21,819 anonymized periapical radiographs from the upper/lower and anterior/posterior regions was assessed for PBL by calibrated dentists. Five vision transformer networks (ViT-base/ViT-large from Google, BEiT-base/BEiT-large from Microsoft, DeiT-base from Facebook/Meta) were utilized and evaluated. Accuracy (ACC), sensitivity (SE), specificity (SP), positive/negative predictive value (PPV/NPV) and area under the ROC curve (AUC) were statistically determined. Across all evaluated transformer networks, overall diagnostic ACC ranged from 83.4% to 85.2% and AUC from 0.899 to 0.918. Diagnostic performance differed by region: lower anterior (ACC 94.1-96.7%; AUC 0.944-0.970), upper anterior (86.7-90.2%; 0.948-0.958), lower posterior (85.6-87.2%; 0.913-0.937) and upper posterior teeth (78.1-81.0%; 0.851-0.875). Only minor differences in PBL detection were found among the tested networks. To increase diagnostic performance and to support the clinical use of such networks, further optimization with larger and manually annotated image data sets is needed.
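A minimal sketch of the kind of setup described above is shown below: a pretrained vision transformer is given a two-class head for PBL classification. The Hugging Face checkpoint name, label names and the dummy input are illustrative assumptions, not the study's actual configuration.

```python
# Minimal sketch: adapting a pretrained ViT to binary PBL classification.
# Checkpoint, labels and the random input are assumptions for illustration only.
import torch
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",        # assumed checkpoint, not the study's
    num_labels=2,                         # PBL present / PBL absent
    id2label={0: "no_PBL", 1: "PBL"},
    label2id={"no_PBL": 0, "PBL": 1},
    ignore_mismatched_sizes=True,         # replace the 1000-class ImageNet head
)

# Stand-in for a batch of preprocessed periapical radiographs (3 x 224 x 224).
pixel_values = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits
print(logits.softmax(dim=-1))             # class probabilities for the dummy image
```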

2.
J Clin Med ; 12(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-38002799

ABSTRACT

Interest in machine learning models and convolutional neural networks (CNNs) for diagnostic purposes is steadily increasing in dentistry, and CNNs can potentially help in the classification of periodontal bone loss (PBL). In this study, the diagnostic performance of five CNNs in detecting PBL on periapical radiographs was analyzed. A set of anonymized periapical radiographs (N = 21,819) was evaluated by a group of trained and calibrated dentists and classified into radiographs without PBL or with mild, moderate, or severe PBL. Five CNNs were trained over five epochs. Diagnostic performance was analyzed statistically using accuracy (ACC), sensitivity (SE), specificity (SP), and area under the receiver operating characteristic curve (AUC). Overall ACC ranged from 82.0% to 84.8%, SE from 88.8% to 90.7%, SP from 66.2% to 71.2%, and AUC from 0.884 to 0.913, indicating similar diagnostic performance of the five CNNs. However, performance differences were evident between the individual sextant groups: the highest values were found for the mandibular anterior teeth (ACC 94.9-96.0%) and the lowest for the maxillary posterior teeth (78.0-80.7%). It can be concluded that automatic assessment of PBL seems to be possible, but that diagnostic accuracy varies depending on the location in the dentition. Future research is needed to improve performance for all tooth groups.
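The diagnostic measures reported above can be derived from a binary classifier's outputs as in the short sketch below; the label and probability arrays are placeholder data, not study results.

```python
# Minimal sketch: deriving ACC, SE, SP and AUC from binary classifier output.
# The arrays are placeholder data, not values from the study.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                  # 1 = PBL present
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])  # predicted P(PBL)
y_pred = (y_prob >= 0.5).astype(int)                         # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc = (tp + tn) / (tp + tn + fp + fn)      # accuracy
se  = tp / (tp + fn)                       # sensitivity (true positive rate)
sp  = tn / (tn + fp)                       # specificity (true negative rate)
auc = roc_auc_score(y_true, y_prob)        # threshold-independent AUC
print(f"ACC={acc:.3f} SE={se:.3f} SP={sp:.3f} AUC={auc:.3f}")
```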

3.
NPJ Digit Med ; 6(1): 198, 2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37880375

ABSTRACT

Caries and molar-incisor hypomineralization (MIH) are among the most prevalent diseases worldwide and need to be reliably diagnosed. The use of dental photographs and artificial intelligence (AI) methods may potentially contribute to realizing accurate and automated diagnostic visual examinations in the future. Therefore, the present study aimed to develop an AI-based algorithm that can detect, classify and localize caries and MIH. The study included an image set of 18,179 anonymous photographs. Pixelwise image labeling was performed by trained and calibrated annotators using the Computer Vision Annotation Tool (CVAT). All annotations were made according to standard methods and were independently checked by an experienced dentist. The entire image set was divided into training (N = 16,679), validation (N = 500) and test sets (N = 1000). The AI-based algorithm was trained and fine-tuned over 250 epochs using image augmentation and adapting a vision transformer network (SegFormer-B5). Statistics included the intersection over union (IoU), average precision (AP) and accuracy (ACC). The overall IoU, AP and ACC of the fine-tuned model were 0.959, 0.977 and 0.978, respectively. The corresponding values for the most relevant caries classes, non-cavitations (0.630, 0.813 and 0.990) and dentin cavities (0.692, 0.830 and 0.997), were also high. MIH-related demarcated opacity (0.672, 0.827 and 0.993) and atypical restoration (0.829, 0.902 and 0.999) showed similar results. Here, we report that the model achieves excellent precision for pixelwise detection and localization of caries and MIH. Nevertheless, the model needs to be further improved and externally validated.
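The central segmentation metric reported above, intersection over union, can be computed per class from integer-coded label maps as in the sketch below; the random masks and the four-class layout are assumptions for illustration only.

```python
# Minimal sketch: per-class intersection over union (IoU) on pixelwise label maps.
# The random masks and the four-class layout are assumptions for illustration.
import numpy as np

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> list[float]:
    """IoU per class for integer-coded segmentation maps of identical shape."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        inter = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(inter / union if union else float("nan"))  # NaN if class absent
    return ious

rng = np.random.default_rng(0)
pred   = rng.integers(0, 4, size=(512, 512))   # e.g. background / non-cavitation / dentin cavity / opacity
target = rng.integers(0, 4, size=(512, 512))
print(per_class_iou(pred, target, num_classes=4))
```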

4.
Clin Oral Investig ; 26(9): 5923-5930, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35608684

ABSTRACT

OBJECTIVE: The aim of this study was to develop and validate a deep learning-based convolutional neural network (CNN) for the automated detection and categorization of teeth affected by molar-incisor hypomineralization (MIH) on intraoral photographs. MATERIALS AND METHODS: The data set consisted of 3241 intraoral images (767 teeth with no MIH/no intervention, 76 with no MIH/atypical restoration, 742 with no MIH/sealant, 815 with demarcated opacity/no intervention, 158 with demarcated opacity/atypical restoration, 181 with demarcated opacity/sealant, 290 with enamel breakdown/no intervention, 169 with enamel breakdown/atypical restoration, and 43 with enamel breakdown/sealant). These images were divided into a training sample (N = 2596) and a test sample (N = 649). All images were evaluated by an expert group, and each diagnosis served as a reference standard for cyclic training and evaluation of the CNN (ResNeXt-101-32x8d). Statistical analysis included the calculation of contingency tables, areas under the receiver operating characteristic curve (AUC) and saliency maps. RESULTS: The developed CNN was able to categorize teeth with MIH correctly with an overall diagnostic accuracy of 95.2%. The overall SE and SP amounted to 78.6% and 97.3%, respectively, indicating that the CNN performed better on healthy teeth than on teeth with MIH. The AUC values ranged from 0.873 (enamel breakdown/sealant) to 0.994 (atypical restoration/no MIH). CONCLUSION: It was possible to categorize the majority of clinical photographs automatically by using a trained deep learning-based CNN with an acceptably high diagnostic accuracy. CLINICAL RELEVANCE: Artificial intelligence-based algorithms may support dental diagnostics in the future, although their accuracy still needs to be improved.


Subjects
Dental Enamel Hypoplasia, Incisor, Artificial Intelligence, Dental Enamel Hypoplasia/diagnostic imaging, Dental Materials, Humans, Molar/diagnostic imaging, Prevalence
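A possible way to set up the transfer learning step described in record 4 is sketched below: the pretrained ResNeXt-101-32x8d backbone from torchvision is given a new nine-class head matching the MIH/intervention categories. The weight choice, layer freezing and dummy input are illustrative assumptions, not the study's training protocol.

```python
# Minimal sketch: adapting a pretrained ResNeXt-101-32x8d to the nine
# MIH/intervention categories. Weights, freezing and the dummy input are
# assumptions for illustration, not the study's actual setup.
import torch
import torch.nn as nn
from torchvision.models import resnext101_32x8d, ResNeXt101_32X8D_Weights

model = resnext101_32x8d(weights=ResNeXt101_32X8D_Weights.IMAGENET1K_V1)
for p in model.parameters():                   # optionally freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 9)  # 3 MIH states x 3 intervention states

logits = model(torch.rand(1, 3, 224, 224))     # stand-in intraoral photograph
print(logits.shape)                            # torch.Size([1, 9])
```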
5.
J Dent ; 121: 104124, 2022 06.
Article in English | MEDLINE | ID: mdl-35395346

ABSTRACT

OBJECTIVES: Intraoral photographs might be considered the machine-readable equivalent of a clinical visual examination and can potentially be used to detect and categorize dental restorations. The first objective of this study was to develop a deep learning-based convolutional neural network (CNN) for the automated detection and categorization of posterior composite, cement, amalgam, gold and ceramic restorations on clinical photographs. Second, this study aimed to determine the diagnostic accuracy of the developed CNN (test method) compared to that of an expert evaluation (reference standard). METHODS: The whole image set of 1761 images (483 of unrestored teeth, 570 of composite restorations, 213 of cements, 278 of amalgam restorations, 125 of gold restorations and 92 of ceramic restorations) was divided into a training set (N = 1407, 401, 447, 66, 231, 93, and 169, respectively) and a test set (N = 354, 82, 123, 26, 47, 32, and 44). The expert diagnoses served as a reference standard for cyclic training and repeated evaluation of the CNN (ResNeXt-101-32x8d), which was trained by using image augmentation and transfer learning. Statistical analysis included the calculation of contingency tables, areas under the receiver operating characteristic curve and saliency maps. RESULTS: After training was complete, the CNN categorized restorations correctly with the following diagnostic accuracy values: 94.9% for unrestored teeth, 92.9% for composites, 98.3% for cements, 99.2% for amalgam restorations, 99.4% for gold restorations and 97.8% for ceramic restorations. CONCLUSIONS: It was possible to categorize different types of posterior restorations on intraoral photographs automatically with good diagnostic accuracy. CLINICAL SIGNIFICANCE: Dental diagnostics might be supported by artificial intelligence-based algorithms in the future. However, further improvements are needed to increase accuracy and practicability.


Subjects
Deep Learning, Permanent Dental Restoration, Dental Photography, Tooth, Artificial Intelligence, Composite Resins, Dental Amalgam, Permanent Dental Restoration/methods, Gold, Neural Networks (Computer), Dental Photography/classification, Dental Photography/methods, Tooth/diagnostic imaging, Tooth/surgery
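The image augmentation mentioned in record 5 can be expressed as a torchvision transform pipeline like the one below; the specific transforms and parameters are assumptions for illustration, not the study's protocol.

```python
# Minimal sketch: an image augmentation pipeline of the kind paired with transfer
# learning on clinical photographs. Transform choices and parameters are assumptions.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random crop and resize
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # lighting variation
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],       # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),       # matching the pretrained backbone
])

# Typical use with a folder-per-class dataset of clinical photographs, e.g.:
# dataset = torchvision.datasets.ImageFolder("photos/train", transform=train_transforms)
```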
6.
Diagnostics (Basel) ; 11(9)2021 Sep 03.
Article in English | MEDLINE | ID: mdl-34573949

ABSTRACT

The aim of the present study was to investigate the diagnostic performance of a trained convolutional neural network (CNN) for detecting and categorizing fissure sealants on intraoral photographs, using the expert standard as reference. An image set consisting of 2352 digital photographs of permanent posterior teeth (461 unsealed tooth surfaces/1891 sealed surfaces) was divided into a training set (n = 1881/364/1517) and a test set (n = 471/97/374). All images were scored according to the following categories: unsealed molar, intact, sufficient and insufficient sealant. Expert diagnoses served as the reference standard for cyclic training and repeated evaluation of the CNN (ResNeXt-101-32x8d), which was trained by using image augmentation and transfer learning. A statistical analysis was performed, including the calculation of contingency tables and areas under the receiver operating characteristic curve (AUC). The results showed that the CNN accurately detected sealants in 98.7% of all test images, corresponding to an AUC of 0.996. The diagnostic accuracy and AUC were 89.6% and 0.951, respectively, for intact sealant; 83.2% and 0.888 for sufficient sealant; and 92.4% and 0.942 for insufficient sealant. On the basis of these results, it was concluded that good agreement with the reference standard can be achieved for automated sealant detection by using artificial intelligence methods. Nevertheless, further research is necessary to improve the model performance.
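For the four sealant categories, per-class AUC values like those reported above are typically computed one-vs-rest; a short sketch with placeholder predictions is given below (the random data are not study results).

```python
# Minimal sketch: overall accuracy and one-vs-rest AUC for the four sealant
# categories. The random labels and probabilities are placeholders, not study data.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

classes = ["unsealed", "intact", "sufficient", "insufficient"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=200)            # expert reference labels
y_prob = rng.dirichlet(np.ones(4), size=200)     # softmax-like class probabilities
y_pred = y_prob.argmax(axis=1)

print("overall accuracy:", accuracy_score(y_true, y_pred))
for c, name in enumerate(classes):
    auc_c = roc_auc_score((y_true == c).astype(int), y_prob[:, c])  # one-vs-rest
    print(f"{name}: AUC = {auc_c:.3f}")
```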
