Results 1 - 6 of 6
1.
J Dent ; 150: 105323, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39197530

ABSTRACT

OBJECTIVES: This study aimed to develop and evaluate a fully automated method for visualizing and measuring tooth wear progression using pairs of intraoral scans (IOSs) in comparison with a manual protocol. METHODS: Eight patients with severe tooth wear progression were retrospectively included, with IOSs taken at baseline and 1-year, 3-year, and 5-year follow-ups. For alignment, the automated method segmented the arch into separate teeth in the IOSs. Tooth pair registration selected tooth surfaces that were likely unaffected by tooth wear and performed point set registration on the selected surfaces. Maximum tooth profile losses from baseline to each follow-up were determined based on signed distances using the manual 3D Wear Analysis (3DWA) protocol and the automated method. The automated method was evaluated against the 3DWA protocol by comparing tooth segmentations with the Dice-Sørensen coefficient (DSC) and intersection over union (IoU). The tooth profile loss measurements were compared with regression and Bland-Altman plots. Additionally, the relationship between the time interval and the measurement differences between the two methods was examined. RESULTS: The automated method completed within two minutes. It was very effective for tooth instance segmentation (826 teeth, DSC = 0.947, IoU = 0.907), and a correlation of 0.932 was observed for agreement on tooth profile loss measurements (516 tooth pairs, mean difference = 0.021 mm, 95% confidence interval = [-0.085, 0.138]). The variability in measurement differences increased for larger time intervals. CONCLUSIONS: The proposed automated method for monitoring tooth wear progression was faster than the manual protocol for full-arch IOSs, with no clinically significant difference in accuracy. CLINICAL SIGNIFICANCE: General practitioners and patients can benefit from the visualization of tooth wear, allowing quantifiable and standardized decisions concerning the therapy requirements of worn teeth.
The proposed method for tooth wear monitoring decreased the time required to less than two minutes compared with the manual approach, which took at least two hours.
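The Dice-Sørensen coefficient and intersection over union used above to evaluate the tooth segmentations can be computed directly from binary masks. A minimal NumPy sketch; the `dice_iou` helper is illustrative, not code from the study:

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Dice-Sørensen coefficient and IoU for two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())      # 2|A∩B| / (|A|+|B|)
    iou = inter / np.logical_or(pred, gt).sum()       # |A∩B| / |A∪B|
    return float(dsc), float(iou)

# Toy example: masks sharing one of three occupied pixels
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
dsc, iou = dice_iou(a, b)  # dsc = 0.5, iou ≈ 0.333
```

Note that DSC is always at least as large as IoU for the same pair of masks, which is consistent with the reported DSC = 0.947 exceeding IoU = 0.907.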

2.
Clin Oral Investig ; 28(7): 364, 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38849649

ABSTRACT

OBJECTIVES: Diagnosing oral potentially malignant disorders (OPMD) is critical to prevent oral cancer. This study aims to automatically detect and classify the most common pre-malignant oral lesions, such as leukoplakia and oral lichen planus (OLP), and to distinguish them from oral squamous cell carcinomas (OSCC) and healthy oral mucosa on clinical photographs using vision transformers. METHODS: 4,161 photographs of healthy mucosa, leukoplakia, OLP, and OSCC were included. Findings were annotated pixel-wise and reviewed by three clinicians. The photographs were divided into 3,337 for training and validation and 824 for testing. The training and validation images were further divided into five folds with stratification. A Mask R-CNN with a Swin Transformer was trained five times with cross-validation, and the held-out test split was used to evaluate model performance. The precision, F1 score, sensitivity, specificity, and accuracy were calculated. The area under the receiver operating characteristics curve (AUC) and the confusion matrix of the most effective model were presented. RESULTS: The detection of OSCC with the employed model yielded an F1 of 0.852 and an AUC of 0.974. The detection of OLP had an F1 of 0.825 and an AUC of 0.948. For leukoplakia, the F1 was 0.796 and the AUC 0.938. CONCLUSIONS: OSCC were effectively detected with the employed model, whereas the detection of OLP and leukoplakia was moderately effective. CLINICAL RELEVANCE: Oral cancer is often detected in advanced stages. The demonstrated technology may support the detection and observation of OPMD to lower the disease burden and identify malignant oral cavity lesions earlier.


Subject(s)
Oral Leukoplakia , Oral Lichen Planus , Mouth Neoplasms , Precancerous Lesions , Humans , Mouth Neoplasms/diagnosis , Precancerous Lesions/diagnosis , Oral Lichen Planus/diagnosis , Oral Leukoplakia/diagnosis , Sensitivity and Specificity , Photography , Differential Diagnosis , Squamous Cell Carcinoma/diagnosis , Male , Female , Dental Photography , Computer-Assisted Image Interpretation/methods
3.
BMC Oral Health ; 24(1): 387, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532414

ABSTRACT

OBJECTIVE: Panoramic radiographs (PRs) provide a comprehensive view of the oral and maxillofacial region and are used routinely to assess dental and osseous pathologies. Artificial intelligence (AI) can be used to improve the diagnostic accuracy of PRs compared to bitewings and periapical radiographs. This study aimed to evaluate the advantages and challenges of using publicly available datasets in dental AI research, focusing on solving the novel task of simultaneously predicting tooth segmentations, FDI numbers, and tooth diagnoses. MATERIALS AND METHODS: Datasets from the OdontoAI platform (tooth instance segmentations) and the DENTEX challenge (tooth bounding boxes with associated diagnoses) were combined to develop a two-stage AI model. The first stage implemented tooth instance segmentation with FDI numbering and extracted regions of interest around each tooth segmentation, after which the second stage implemented multi-label classification to detect dental caries, impacted teeth, and periapical lesions in PRs. The performance of the automated tooth segmentation algorithm was evaluated using a free-response receiver-operating-characteristics (FROC) curve and mean average precision (mAP) metrics. The diagnostic accuracy of detection and classification of dental pathology was evaluated with ROC curves and F1 and AUC metrics. RESULTS: The two-stage AI model achieved high accuracy in tooth segmentations, with an FROC score of 0.988 and an mAP of 0.848. High accuracy was also achieved in the diagnostic classification of impacted teeth (F1 = 0.901, AUC = 0.996), whereas moderate accuracy was achieved in the diagnostic classification of deep caries (F1 = 0.683, AUC = 0.960), early caries (F1 = 0.662, AUC = 0.881), and periapical lesions (F1 = 0.603, AUC = 0.974). The model's performance correlated positively with the quality of the annotations in the public datasets used. Selected samples from the DENTEX dataset revealed cases of missing (false-negative) and incorrect (false-positive) diagnoses, which negatively influenced the performance of the AI model. CONCLUSIONS: The use and pooling of public datasets in dental AI research can significantly accelerate the development of new AI models and enable fast exploration of novel tasks. However, standardized quality assurance is essential before using the datasets to ensure reliable outcomes and limit potential biases.
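The handoff between the two stages — extracting a region of interest around each detected tooth for the second-stage classifier — can be sketched as a margin-padded crop clipped to the image bounds. The `extract_roi` helper and the 25% margin are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def extract_roi(image: np.ndarray, box: tuple, margin: float = 0.25) -> np.ndarray:
    """Crop a margin-padded region of interest around a tooth bounding box.

    box is (x0, y0, x1, y1) in pixels; margin expands each side by a
    fraction of the box size; the crop is clipped to the image bounds."""
    x0, y0, x1, y1 = box
    mx = int((x1 - x0) * margin)
    my = int((y1 - y0) * margin)
    h, w = image.shape[:2]
    x0 = max(0, x0 - mx); y0 = max(0, y0 - my)
    x1 = min(w, x1 + mx); y1 = min(h, y1 + my)
    return image[y0:y1, x0:x1]

pr = np.zeros((1000, 2000), dtype=np.uint8)   # dummy panoramic radiograph
roi = extract_roi(pr, (100, 200, 180, 320))   # 80x120 box, 25% margin
# roi.shape == (180, 120)
```

Padding the crop gives the second-stage classifier context from the surrounding bone and neighboring teeth, which matters for findings such as periapical lesions that extend beyond the tooth outline.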


Subject(s)
Dental Caries , Impacted Tooth , Tooth , Humans , Artificial Intelligence , Panoramic Radiography , Bones
4.
J Dent ; 143: 104886, 2024 04.
Article in English | MEDLINE | ID: mdl-38342368

ABSTRACT

OBJECTIVE: Secondary caries lesions adjacent to restorations, a leading cause of restoration failure, require accurate diagnostic methods to ensure an optimal treatment outcome. Traditional diagnostic strategies rely on visual inspection complemented by radiographs. Recent advancements in artificial intelligence (AI), particularly deep learning, provide potential improvements in caries detection. This study aimed to develop a convolutional neural network (CNN)-based algorithm for detecting primary caries and secondary caries around restorations using bitewings. METHODS: Clinical data from 7 general dental practices in the Netherlands, comprising 425 bitewings of 383 patients, were utilized. The study used the Mask R-CNN architecture for instance segmentation, supported by a Swin Transformer backbone. After data augmentation, model training was performed through ten-fold cross-validation. The diagnostic accuracy of the algorithm was evaluated by calculating the area under the Free-Response Receiver Operating Characteristics (FROC) curve, sensitivity, precision, and F1 scores. RESULTS: The model achieved areas under the FROC curve of 0.806 and 0.804, and F1 scores of 0.689 and 0.719 for primary and secondary caries detection, respectively. CONCLUSION: An accurate CNN-based automated system was developed to detect primary and secondary caries lesions on bitewings, a significant advancement in automated caries diagnostics. CLINICAL SIGNIFICANCE: An accurate algorithm that integrates the detection of both primary and secondary caries will permit the development of automated systems to aid clinicians in their daily clinical practice.
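The ten-fold cross-validation described above amounts to splitting the sample indices into ten shuffled, near-equal folds and rotating which fold is held out. A minimal sketch; the fold construction is an assumption for illustration, not the study's exact split (which would also need to keep each patient's bitewings in one fold):

```python
import random

def kfold_indices(n: int, k: int = 10, seed: int = 0) -> list[list[int]]:
    """Split sample indices 0..n-1 into k shuffled, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(425, k=10)  # 425 bitewings, as in the study
# Each round trains on 9 folds and validates on the held-out fold:
train = [i for j, fold in enumerate(folds) if j != 0 for i in fold]
val = folds[0]
```

Cross-validation lets every bitewing serve once as validation data, which is valuable when the dataset is small relative to the model capacity.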


Subject(s)
Deep Learning , Dental Caries , Humans , Artificial Intelligence , Dental Caries Susceptibility , Neural Networks (Computer) , ROC Curve , Dental Caries/therapy
5.
Sci Rep ; 13(1): 2296, 2023 02 09.
Article in English | MEDLINE | ID: mdl-36759684

ABSTRACT

Oral squamous cell carcinoma (OSCC) is among the most common malignancies, with an estimated 377,000 new cases and 177,000 deaths worldwide. The interval between the onset of symptoms and the start of adequate treatment is directly related to tumor stage and the 5-year survival rate of patients. Early detection is therefore crucial for efficient cancer therapy. This study aims to detect OSCC on clinical photographs (CPs) automatically. 1406 CPs were manually annotated and labeled as a reference. A deep-learning approach based on the Swin Transformer was trained and validated on 1265 CPs. Subsequently, the trained algorithm was applied to a test set consisting of 141 CPs. The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved a classification accuracy of 0.986 and an AUC of 0.99 for classifying OSCC on clinical photographs. Deep learning-based assistance of clinicians may raise the rate of early detection of oral cancer and hence the survival rate and quality of life of patients.
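The AUC reported here can be read as the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one (the Mann-Whitney formulation of the area under the ROC curve). A minimal sketch with toy scores, not data from the study:

```python
def auc(scores_pos: list[float], scores_neg: list[float]) -> float:
    """AUC as the fraction of (positive, negative) pairs ranked correctly;
    ties count as half a correct ranking."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

a = auc([0.9, 0.8], [0.1, 0.2])  # perfectly separated scores -> 1.0
```

Unlike accuracy, this statistic is threshold-free, which is why both are usually reported together as they are in this abstract.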


Subject(s)
Squamous Cell Carcinoma , Head and Neck Neoplasms , Mouth Neoplasms , Humans , Squamous Cell Carcinoma/diagnosis , Squamous Cell Carcinoma/pathology , Mouth Neoplasms/diagnosis , Mouth Neoplasms/pathology , Squamous Cell Carcinoma of the Head and Neck , Quality of Life
6.
Sci Rep ; 12(1): 19596, 2022 11 15.
Article in English | MEDLINE | ID: mdl-36379971

ABSTRACT

Mandibular fractures are among the most frequent facial traumas in oral and maxillofacial surgery, accounting for 57% of cases. An accurate diagnosis and an appropriate treatment plan are vital to achieving optimal re-establishment of occlusion, function, and facial aesthetics. This study aims to detect mandibular fractures on panoramic radiographs (PRs) automatically. 1624 PRs with fractures were manually annotated and labelled as a reference. A deep learning approach based on Faster R-CNN and the Swin Transformer was trained and validated on 1640 PRs with and without fractures. Subsequently, the trained algorithm was applied to a test set consisting of 149 PRs with and 171 PRs without fractures. The detection accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an F1 score of 0.947 and an AUC of 0.977. Deep learning-based assistance of clinicians may reduce misdiagnosis and hence severe complications.
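Computing an F1 score for a detection task like this one requires first matching predicted boxes to reference annotations, typically at an IoU threshold. A minimal greedy-matching sketch; the 0.5 threshold and helper names are assumptions for illustration, not the authors' evaluation code:

```python
def box_iou(a, b):
    """IoU of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detection_f1(preds, gts, thr=0.5):
    """F1 after greedily matching predictions to ground truth at IoU >= thr."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda g: box_iou(p, g), default=None)
        if best is not None and box_iou(p, best) >= thr:
            tp += 1
            unmatched.remove(best)          # each ground truth matches once
    fp = len(preds) - tp                    # false alarms
    fn = len(unmatched)                     # missed fractures
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

gt = [(10, 10, 50, 50)]
pred = [(12, 12, 52, 52), (200, 200, 240, 240)]  # one hit, one false alarm
score = detection_f1(pred, gt)  # 2/3
```

Each unmatched prediction counts as a false positive and each unmatched annotation as a false negative, so the F1 of 0.947 reported above reflects both missed fractures and false alarms.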


Subject(s)
Deep Learning , Mandibular Fractures , Humans , Panoramic Radiography/methods , Mandibular Fractures/diagnostic imaging , Algorithms , Area Under Curve