1.
Clin Oral Investig ; 28(7): 364, 2024 Jun 08.
Article En | MEDLINE | ID: mdl-38849649

OBJECTIVES: Diagnosing oral potentially malignant disorders (OPMD) is critical to preventing oral cancer. This study aims to automatically detect and classify the most common pre-malignant oral lesions, such as leukoplakia and oral lichen planus (OLP), and to distinguish them from oral squamous cell carcinomas (OSCC) and healthy oral mucosa on clinical photographs using vision transformers. METHODS: 4,161 photographs of healthy mucosa, leukoplakia, OLP, and OSCC were included. Findings were annotated pixel-wise and reviewed by three clinicians. The photographs were divided into 3,337 for training and validation and 824 for testing. The training and validation images were further divided into five folds with stratification. A Mask R-CNN with a Swin Transformer backbone was trained five times with cross-validation, and the held-out test split was used to evaluate model performance. Precision, F1-score, sensitivity, specificity, and accuracy were calculated. The area under the receiver operating characteristic curve (AUC) and the confusion matrix of the most effective model are presented. RESULTS: The detection of OSCC with the employed model yielded an F1-score of 0.852 and an AUC of 0.974. The detection of OLP had an F1-score of 0.825 and an AUC of 0.948. For leukoplakia, the F1-score was 0.796 and the AUC was 0.938. CONCLUSIONS: OSCC was detected effectively with the employed model, whereas the detection of OLP and leukoplakia was moderately effective. CLINICAL RELEVANCE: Oral cancer is often detected at advanced stages. The demonstrated technology may support the detection and observation of OPMD, lowering the disease burden and identifying malignant oral cavity lesions earlier.
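The stratified five-fold split described in the methods can be sketched in plain Python (an illustrative sketch, not the authors' code; the function name and the round-robin assignment within each class are assumptions):

```python
from collections import defaultdict

def stratified_folds(labels, k=5):
    """Assign sample indices to k folds, keeping class proportions
    roughly equal by distributing each class round-robin."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Toy example: 10 photographs, two lesion classes
labels = ["OSCC"] * 6 + ["OLP"] * 4
folds = stratified_folds(labels, k=5)
```

In k-fold cross-validation, each fold serves once as the validation set while the remaining folds are used for training, giving five trained models as reported above.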


Leukoplakia, Oral , Lichen Planus, Oral , Mouth Neoplasms , Precancerous Conditions , Humans , Mouth Neoplasms/diagnosis , Precancerous Conditions/diagnosis , Lichen Planus, Oral/diagnosis , Leukoplakia, Oral/diagnosis , Sensitivity and Specificity , Photography , Diagnosis, Differential , Carcinoma, Squamous Cell/diagnosis , Male , Female , Photography, Dental , Image Interpretation, Computer-Assisted/methods
2.
BMC Oral Health ; 24(1): 387, 2024 Mar 26.
Article En | MEDLINE | ID: mdl-38532414

OBJECTIVE: Panoramic radiographs (PRs) provide a comprehensive view of the oral and maxillofacial region and are used routinely to assess dental and osseous pathologies. Artificial intelligence (AI) can be used to improve the diagnostic accuracy of PRs compared to bitewings and periapical radiographs. This study aimed to evaluate the advantages and challenges of using publicly available datasets in dental AI research, focusing on the novel task of simultaneously predicting tooth segmentations, FDI numbers, and tooth diagnoses. MATERIALS AND METHODS: Datasets from the OdontoAI platform (tooth instance segmentations) and the DENTEX challenge (tooth bounding boxes with associated diagnoses) were combined to develop a two-stage AI model. The first stage performed tooth instance segmentation with FDI numbering and extracted a region of interest around each tooth segmentation; the second stage performed multi-label classification to detect dental caries, impacted teeth, and periapical lesions in PRs. The performance of the automated tooth segmentation algorithm was evaluated using a free-response receiver operating characteristic (FROC) curve and mean average precision (mAP). The diagnostic accuracy of the detection and classification of dental pathology was evaluated with ROC curves and F1 and AUC metrics. RESULTS: The two-stage AI model achieved high accuracy in tooth segmentation, with an FROC score of 0.988 and an mAP of 0.848. High accuracy was also achieved in the diagnostic classification of impacted teeth (F1 = 0.901, AUC = 0.996), whereas moderate accuracy was achieved for deep caries (F1 = 0.683, AUC = 0.960), early caries (F1 = 0.662, AUC = 0.881), and periapical lesions (F1 = 0.603, AUC = 0.974). The model's performance correlated positively with the quality of the annotations in the public datasets used. Selected samples from the DENTEX dataset revealed cases of missing (false-negative) and incorrect (false-positive) diagnoses, which negatively influenced the performance of the AI model. CONCLUSIONS: The use and pooling of public datasets in dental AI research can significantly accelerate the development of new AI models and enable fast exploration of novel tasks. However, standardized quality assurance is essential before using such datasets, to ensure reliable outcomes and limit potential biases.
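The two-stage design described above — crop a region of interest around each tooth segmentation, then run multi-label classification on the crop — can be illustrated with a minimal sketch (function names, the margin, and the 0.5 threshold are hypothetical, not taken from the paper):

```python
def roi_from_box(box, margin, width, height):
    """Expand a tooth bounding box (x1, y1, x2, y2) by a relative
    margin and clip it to the image, giving the stage-2 crop."""
    x1, y1, x2, y2 = box
    dx = (x2 - x1) * margin
    dy = (y2 - y1) * margin
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(width, x2 + dx), min(height, y2 + dy))

def multilabel_from_scores(scores, threshold=0.5):
    """Independent per-diagnosis thresholding: one tooth may carry
    several findings (e.g. caries AND a periapical lesion) at once."""
    return [name for name, s in scores.items() if s >= threshold]
```

The multi-label formulation matters here because a single tooth can legitimately receive more than one diagnosis, unlike a mutually exclusive softmax classifier.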


Dental Caries , Tooth, Impacted , Tooth , Humans , Artificial Intelligence , Radiography, Panoramic , Bone and Bones
3.
J Dent ; 143: 104886, 2024 04.
Article En | MEDLINE | ID: mdl-38342368

OBJECTIVE: Secondary caries lesions adjacent to restorations, a leading cause of restoration failure, require accurate diagnostic methods to ensure an optimal treatment outcome. Traditional diagnostic strategies rely on visual inspection complemented by radiographs. Recent advancements in artificial intelligence (AI), particularly deep learning, offer potential improvements in caries detection. This study aimed to develop a convolutional neural network (CNN)-based algorithm for detecting primary caries and secondary caries around restorations on bitewings. METHODS: Clinical data from 7 general dental practices in the Netherlands, comprising 425 bitewings of 383 patients, were utilized. The study used the Mask R-CNN architecture for instance segmentation, supported by a Swin Transformer backbone. After data augmentation, model training was performed with ten-fold cross-validation. The diagnostic accuracy of the algorithm was evaluated by calculating the area under the free-response receiver operating characteristic (FROC) curve, sensitivity, precision, and F1-scores. RESULTS: The model achieved areas under the FROC curve of 0.806 and 0.804, and F1-scores of 0.689 and 0.719, for primary and secondary caries detection, respectively. CONCLUSION: An accurate CNN-based automated system was developed to detect primary and secondary caries lesions on bitewings, a significant advancement in automated caries diagnostics. CLINICAL SIGNIFICANCE: An accurate algorithm that integrates the detection of both primary and secondary caries will permit the development of automated systems to aid clinicians in daily clinical practice.
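The precision, sensitivity, and F1-scores reported above follow directly from lesion-level true-positive, false-positive, and false-negative counts; a minimal sketch of that arithmetic (illustrative only, not the study's evaluation code):

```python
def detection_metrics(tp, fp, fn):
    """Precision, sensitivity (recall), and F1-score from lesion-level
    true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return precision, sensitivity, f1
```

F1 is the harmonic mean of precision and sensitivity, so it penalizes models that trade many false positives for a few extra detections, which is why it is a common summary metric for lesion detection.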


Deep Learning , Dental Caries , Humans , Artificial Intelligence , Dental Caries Susceptibility , Neural Networks, Computer , ROC Curve , Dental Caries/therapy
4.
Sci Rep ; 13(1): 2296, 2023 02 09.
Article En | MEDLINE | ID: mdl-36759684

Oral squamous cell carcinoma (OSCC) is among the most common malignancies, with an estimated 377,000 new cases and 177,000 deaths worldwide each year. The interval between the onset of symptoms and the start of adequate treatment is directly related to tumor stage and the 5-year survival rate of patients. Early detection is therefore crucial for efficient cancer therapy. This study aims to detect OSCC on clinical photographs (CPs) automatically. 1,406 CPs were manually annotated and labeled as a reference. A deep-learning approach based on the Swin Transformer was trained and validated on 1,265 CPs. Subsequently, the trained algorithm was applied to a test set of 141 CPs. The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved a classification accuracy of 0.986 and an AUC of 0.99 for classifying OSCC on clinical photographs. Deep learning-based assistance of clinicians may raise the rate of early detection of oral cancer and hence the survival rate and quality of life of patients.
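The AUC reported above summarizes the ROC curve as a single number; a minimal sketch of how such an area can be computed from (false-positive-rate, true-positive-rate) points with the trapezoidal rule (illustrative, not the authors' implementation):

```python
def auc_trapezoid(roc_points):
    """Area under an ROC curve given (FPR, TPR) points,
    computed with the trapezoidal rule after sorting by FPR."""
    pts = sorted(roc_points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which puts the reported 0.99 near the ceiling of the scale.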


Carcinoma, Squamous Cell , Head and Neck Neoplasms , Mouth Neoplasms , Humans , Carcinoma, Squamous Cell/diagnosis , Carcinoma, Squamous Cell/pathology , Mouth Neoplasms/diagnosis , Mouth Neoplasms/pathology , Squamous Cell Carcinoma of Head and Neck , Quality of Life
5.
Sci Rep ; 12(1): 19596, 2022 11 15.
Article En | MEDLINE | ID: mdl-36379971

Mandibular fractures are among the most frequent facial traumas in oral and maxillofacial surgery, accounting for 57% of cases. An accurate diagnosis and an appropriate treatment plan are vital to achieving optimal re-establishment of occlusion, function, and facial aesthetics. This study aims to detect mandibular fractures on panoramic radiographs (PRs) automatically. 1,624 PRs with fractures were manually annotated and labelled as a reference. A deep-learning approach based on Faster R-CNN with a Swin Transformer backbone was trained and validated on 1,640 PRs with and without fractures. Subsequently, the trained algorithm was applied to a test set consisting of 149 PRs with and 171 PRs without fractures. The detection accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an F1-score of 0.947 and an AUC of 0.977. Deep learning-based assistance of clinicians may reduce misdiagnosis and hence severe complications.
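Detection metrics such as the F1-score reported above require matching predicted fracture boxes to annotated ones, typically by intersection over union (IoU); a minimal IoU sketch (illustrative; the matching threshold used in the study is not stated here):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction whose IoU with a ground-truth box exceeds a chosen threshold (0.5 is a common convention in detection benchmarks) counts as a true positive; unmatched predictions and annotations become false positives and false negatives, respectively.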


Deep Learning , Mandibular Fractures , Humans , Radiography, Panoramic/methods , Mandibular Fractures/diagnostic imaging , Algorithms , Area Under Curve