Results 1 - 14 of 14
1.
Clin Oral Implants Res ; 34(6): 565-574, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36906917

ABSTRACT

OBJECTIVES: To develop and assess the performance of a novel artificial intelligence (AI)-driven convolutional neural network (CNN)-based tool for automated three-dimensional (3D) maxillary alveolar bone segmentation on cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS: A total of 141 CBCT scans were collected for training (n = 99), validation (n = 12), and testing (n = 30) of the CNN model for automated segmentation of the maxillary alveolar bone and its crestal contour. Following automated segmentation, 3D models with under- or overestimated segmentations were refined by an expert to generate a refined-AI (R-AI) segmentation. The overall performance of the CNN model was assessed. In addition, 30% of the testing sample was randomly selected and manually segmented to compare the accuracy of AI and manual segmentation, and the time required to generate a 3D model was recorded in seconds (s). RESULTS: The automated segmentation showed excellent values for all accuracy metrics. However, the manual method (95% HD: 0.20 ± 0.05 mm; IoU: 95% ± 3.0; DSC: 97% ± 2.0) performed slightly better than the AI segmentation (95% HD: 0.27 ± 0.03 mm; IoU: 92% ± 1.0; DSC: 96% ± 1.0). There was a statistically significant difference in time consumption among the segmentation methods (p < .001). The AI-driven segmentation (51.5 ± 10.9 s) was 116 times faster than the manual segmentation (5973.3 ± 623.6 s), and the R-AI method required an intermediate amount of time (1666.7 ± 588.5 s). CONCLUSION: Although the manual segmentation performed slightly better, the novel CNN-based tool also provided a highly accurate segmentation of the maxillary alveolar bone and its crestal contour while requiring 116 times less time than the manual approach.
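For readers unfamiliar with the overlap metrics quoted throughout these abstracts, the sketch below shows how the Dice similarity coefficient (DSC) and intersection over union (IoU) are typically computed from two binary masks with NumPy. The toy volumes are illustrative assumptions, not data from the study.

```python
# Minimal sketch of DSC and IoU on boolean segmentation masks (toy data only).
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray):
    """Return (Dice similarity coefficient, intersection over union) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2.0 * intersection / (pred.sum() + truth.sum())
    iou = intersection / union
    return float(dsc), float(iou)

# Toy example on a random 64^3 volume
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.5
pred = truth.copy()
pred[:2] = ~pred[:2]          # corrupt a few slices to mimic segmentation error
print(dice_and_iou(pred, truth))
```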


Subject(s)
Artificial Intelligence; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Cone-Beam Computed Tomography/methods
2.
Eur J Orthod ; 45(2): 169-174, 2023 03 31.
Article in English | MEDLINE | ID: mdl-36099419

ABSTRACT

OBJECTIVE: Tooth segmentation and classification from cone-beam computed tomography (CBCT) is a prerequisite for diagnosis and treatment planning in the majority of digital dental workflows. However, accurate and efficient segmentation of teeth in the presence of metal artefacts still remains a challenge. Therefore, this study aimed to validate an automated deep convolutional neural network (CNN)-based tool for the segmentation and classification of teeth with orthodontic brackets on CBCT images. METHODS: A total of 215 CBCT scans (1780 teeth) were retrospectively collected, consisting of pre- and post-operative images of patients who underwent combined orthodontic and orthognathic surgical treatment. All scans were acquired with a NewTom CBCT device. Inclusion criteria were a complete dentition with orthodontic brackets and high image quality. The dataset was randomly divided into three subsets with random allocation of all 32 tooth classes: a training set (140 CBCT scans, 400 teeth), a validation set (35 CBCT scans, 100 teeth), and a test set (pre-operative: 25, post-operative: 15 = 40 CBCT scans, 1280 teeth). A multiclass CNN-based tool was developed and its performance was assessed for automated segmentation and classification of teeth with brackets by comparison with a ground truth. RESULTS: The CNN model took 13.7 ± 1.2 s for the segmentation and classification of all the teeth on a single CBCT image. Overall, the segmentation performance was excellent, with a high intersection over union (IoU) of 0.99. Anterior teeth showed a significantly lower IoU (P < 0.05) compared to premolar and molar teeth. The Dice similarity coefficient score of anterior (0.99 ± 0.02) and premolar teeth (0.99 ± 0.10) in the pre-operative group was comparable to that of the post-operative group. Classification of teeth into the correct 32 classes had a high recall rate (99.9%) and precision (99%). CONCLUSIONS: The proposed CNN model outperformed other state-of-the-art algorithms in terms of accuracy and efficiency. It could act as a viable alternative for automatic segmentation and classification of teeth with brackets. CLINICAL SIGNIFICANCE: The proposed method could simplify the existing digital workflows of orthodontics, orthognathic surgery, restorative dentistry, and dental implantology by offering an accurate and efficient automated segmentation approach to clinicians, hence further enhancing treatment predictability and outcomes.
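The recall and precision reported for the 32-class tooth labelling can, in principle, be computed with standard multi-class metrics; the sketch below uses scikit-learn on made-up FDI-style labels, which are illustrative assumptions rather than the study's data.

```python
# Minimal sketch of multi-class recall/precision for tooth labelling (toy labels).
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical predicted vs. reference labels for a handful of teeth
y_true = np.array([11, 12, 21, 36, 46, 17, 27, 31])
y_pred = np.array([11, 12, 21, 36, 46, 17, 27, 41])   # one misclassified tooth

print("recall:", recall_score(y_true, y_pred, average="micro"))
print("precision:", precision_score(y_true, y_pred, average="micro"))
```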


Subject(s)
Image Processing, Computer-Assisted; Orthodontic Brackets; Humans; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Retrospective Studies
3.
J Dent ; 124: 104238, 2022 09.
Article in English | MEDLINE | ID: mdl-35872223

ABSTRACT

OBJECTIVES: The present study investigated the accuracy, consistency, and time-efficiency of a novel deep convolutional neural network (CNN)-based model for automated maxillofacial bone segmentation from cone beam computed tomography (CBCT) images. METHOD: A dataset of 144 scans was acquired from two CBCT devices and randomly divided into three subsets: a training set (n = 110), a validation set (n = 10), and a testing set (n = 24). A three-dimensional (3D) U-Net (CNN) model was developed, and the achieved automated segmentation was compared with a manual approach. RESULTS: The average time required for automated segmentation was 39.1 s, a 204-fold decrease in time consumption compared to manual segmentation (132.7 min). The model was highly accurate in identifying the bony structures of the anatomical region of interest, with a Dice similarity coefficient (DSC) of 92.6%. Additionally, the fully deterministic nature of the CNN model provided 100% consistency without any variability. The inter-observer consistency for expert-based minor correction of the automated segmentation showed an excellent DSC of 99.7%. CONCLUSION: The proposed CNN model provided a time-efficient, accurate, and consistent CBCT-based automated segmentation of the maxillofacial complex. CLINICAL SIGNIFICANCE: Automated segmentation of the maxillofacial complex could act as an alternative to conventional segmentation techniques for improving the efficiency of digital workflows. This approach could deliver accurate and ready-to-print 3D models, essential to patient-specific digital treatment planning for orthodontics, maxillofacial surgery, and implant dentistry.
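The abstract names a three-dimensional U-Net as the underlying architecture but gives no implementation details. The following is a minimal, single-skip-connection 3D U-Net-style network in PyTorch; all channel counts and hyperparameters are assumptions for illustration, not the study's actual model.

```python
# Minimal 3D U-Net-style encoder/decoder sketch (assumed hyperparameters).
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch: int = 1, n_classes: int = 1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = double_conv(32, 16)            # 16 skip + 16 upsampled channels
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, x):
        s1 = self.enc1(x)                          # full-resolution features
        s2 = self.enc2(self.pool(s1))              # half-resolution features
        up = self.up(s2)                           # back to full resolution
        return self.head(self.dec1(torch.cat([up, s1], dim=1)))

logits = TinyUNet3D()(torch.zeros(1, 1, 32, 32, 32))   # -> shape (1, 1, 32, 32, 32)
```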


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted/methods
4.
Sci Rep ; 12(1): 7523, 2022 05 07.
Article in English | MEDLINE | ID: mdl-35525857

ABSTRACT

An accurate three-dimensional (3D) segmentation of the maxillary sinus is crucial for multiple diagnostic and treatment applications, yet it is challenging and time-consuming when performed manually on a cone-beam computed tomography (CBCT) dataset. Recently, convolutional neural networks (CNNs) have proven to provide excellent performance in the field of 3D image analysis. Hence, this study developed and validated a novel automated CNN-based methodology for the segmentation of the maxillary sinus using CBCT images. A dataset of 264 sinuses was acquired from two CBCT devices and randomly divided into three subsets: training, validation, and testing. A 3D U-Net architecture CNN model was developed and compared to semi-automatic segmentation in terms of time, accuracy, and consistency. The average time was significantly reduced (p-value < 2.2e-16) by automatic segmentation (0.4 min) compared to semi-automatic segmentation (60.8 min). The model accurately identified the segmented region with a Dice similarity coefficient (DSC) of 98.4%. The inter-observer reliability for minor refinement of the automatic segmentation showed an excellent DSC of 99.6%. The proposed CNN model provided a time-efficient, precise, and consistent automatic segmentation, which could allow accurate generation of 3D models for diagnosis and virtual treatment planning.
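The abstract does not state which statistical test produced the reported p-value. Purely as an illustration, a paired comparison of per-scan segmentation times could be run with SciPy's Wilcoxon signed-rank test; the timings below are made-up values, not the study's data.

```python
# Illustrative paired timing comparison (hypothetical per-sinus times in seconds).
import numpy as np
from scipy.stats import wilcoxon

automatic = np.array([22.1, 25.3, 23.8, 26.0, 24.4, 21.9])
semi_auto = np.array([3590., 3710., 3655., 3820., 3600., 3745.])
stat, p = wilcoxon(automatic, semi_auto)
print(f"W={stat:.1f}, p={p:.4f}")
```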


Subject(s)
Maxillary Sinus; Neural Networks, Computer; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted; Maxillary Sinus/diagnostic imaging; Reproducibility of Results
5.
J Dent ; 122: 104139, 2022 07.
Article in English | MEDLINE | ID: mdl-35461974

ABSTRACT

OBJECTIVE: To assess the accuracy of a novel artificial intelligence (AI)-driven tool for automated detection of teeth and small edentulous regions on cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS: After AI training and testing with 175 CBCT scans (130 for training and 40 for testing), validation was performed on a total of 46 CBCT scans selected for this purpose. Scans were split into fully dentate and partially dentate patients (small edentulous regions). The AI-driven tool (Virtual Patient Creator, Relu BV, Leuven, Belgium) automatically detected, segmented, and labelled teeth and edentulous regions. Human performance served as the clinical reference. Accuracy and speed of the AI-driven tool in detecting and labelling teeth and edentulous regions in partially edentulous jaws were assessed. Automatic tooth segmentation was compared with manually refined segmentation, and segmentation accuracy, assessed by means of intersection over union (IoU) and 95% Hausdorff distance, served as a secondary outcome. RESULTS: The AI-driven tool achieved a general accuracy of 99.7% and 99% for detection and labelling of teeth and missing teeth for fully dentate and partially dentate patients, respectively. Automated detection took a median time of 1.5 s, while the human operator median time was 98 s (P < 0.0001). Segmentation accuracy measured by IoU was 0.96 and 0.97 for fully dentate and partially edentulous jaws, respectively. CONCLUSIONS: The AI-driven tool was accurate and fast for CBCT-based detection, segmentation, and labelling of teeth and missing teeth in partial edentulism. CLINICAL SIGNIFICANCE: The use of AI may represent a promising time-saving tool serving radiological reporting, with a major step forward towards automated dental charting, as well as surgical and treatment planning.
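The 95% Hausdorff distance used here as a secondary outcome is commonly computed as the 95th percentile of surface-to-surface distances. The sketch below is one way to do this on point clouds of surface coordinates; the point sets are illustrative assumptions, not study data.

```python
# Minimal sketch of a symmetric 95th-percentile Hausdorff distance on toy point clouds.
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    d_ab = cKDTree(points_b).query(points_a)[0]   # nearest-neighbour distances A -> B
    d_ba = cKDTree(points_a).query(points_b)[0]   # nearest-neighbour distances B -> A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

rng = np.random.default_rng(1)
surface_auto = rng.random((500, 3))               # hypothetical surface points (mm)
surface_manual = surface_auto + rng.normal(0, 0.05, (500, 3))
print(f"95% HD ≈ {hd95(surface_auto, surface_manual):.3f} mm")
```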


Subject(s)
Jaw, Edentulous; Mouth, Edentulous; Artificial Intelligence; Cone-Beam Computed Tomography/methods; Humans; Image Processing, Computer-Assisted; Jaw, Edentulous/diagnostic imaging; Mouth, Edentulous/diagnostic imaging; Neural Networks, Computer
6.
J Dent ; 119: 104069, 2022 04.
Article in English | MEDLINE | ID: mdl-35183696

ABSTRACT

OBJECTIVES: To assess the influence of dental fillings on the performance of an artificial intelligence (AI)-driven tool for tooth segmentation on cone-beam computed tomography (CBCT), according to the type of tooth. METHODS: A total of 175 CBCT scans (500 teeth) were collected for training (140 CBCT scans, 400 teeth) and validation (35 CBCT scans, 100 teeth) of the AI convolutional neural networks. The test dataset involved 74 CBCT scans (226 teeth), which was further divided into control and experimental groups depending on the presence of dental fillings: without filling (control group: 24 CBCT scans, 113 teeth) and with coronal and/or root filling (experimental group: 50 CBCT scans, 113 teeth). The segmentation performance for both groups was assessed. Additionally, 10% of each tooth type (anterior, premolar, and molar) was randomly selected for time analysis according to manual, AI-based, and refined-AI segmentation methods. RESULTS: The presence of fillings significantly influenced the segmentation performance (p < 0.05). However, the accuracy metrics showed an excellent range of values for both the control group (95% Hausdorff distance (95% HD): 0.01-0.08 mm; intersection over union (IoU): 0.97-0.99; Dice similarity coefficient (DSC): 0.98-0.99; precision: 1.00; recall: 0.97-0.99; accuracy: 1.00) and the experimental group (95% HD: 0.17-0.25 mm; IoU: 0.91-0.95; DSC: 0.95-0.97; precision: 1.00; recall: 0.91-0.95; accuracy: 0.99-1.00). The time analysis showed that the AI-based segmentation was significantly faster, with a mean time of 29.8 s (p < 0.001). CONCLUSIONS: The proposed AI-driven tool allowed an accurate and time-efficient approach for the segmentation of teeth on CBCT images, irrespective of the presence of high-density dental filling material and the type of tooth. CLINICAL SIGNIFICANCE: Tooth segmentation is a challenging and time-consuming task, mainly in the presence of artifacts generated by dental filling material. The proposed AI-driven tool could offer a clinically acceptable approach for tooth segmentation, to be applied in digital dental workflows considering its time efficiency and high accuracy regardless of the presence of dental fillings.
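The abstract reports that fillings significantly influenced segmentation performance (p < 0.05) without naming the test. Purely as an illustration, per-tooth DSC values from the control and experimental groups could be compared with a Mann-Whitney U test; all values below are made up.

```python
# Illustrative group comparison of per-tooth DSC values (hypothetical numbers).
from scipy.stats import mannwhitneyu

dsc_without_filling = [0.99, 0.98, 0.99, 0.98, 0.99]
dsc_with_filling    = [0.96, 0.95, 0.97, 0.95, 0.96]
stat, p = mannwhitneyu(dsc_without_filling, dsc_with_filling, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.4f}")
```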


Subject(s)
Spiral Cone-Beam Computed Tomography; Tooth; Artificial Intelligence; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted; Neural Networks, Computer
7.
J Dent ; 116: 103891, 2022 01.
Article in English | MEDLINE | ID: mdl-34780873

ABSTRACT

OBJECTIVES: The objective of this study was the development and validation of a novel artificial intelligence (AI)-driven tool for fast and accurate mandibular canal segmentation on cone beam computed tomography (CBCT). METHODS: A total of 235 CBCT scans from dentate subjects needing oral surgery were used in this study, allowing for development, training, and validation of a deep learning algorithm for automated mandibular canal (MC) segmentation on CBCT. Shape, diameter, and direction of the MC were adjusted on all CBCT slices using a voxel-wise approach. Validation was then performed on a random set of 30 CBCT scans, previously unseen by the algorithm, where voxel-level annotations allowed for assessment of all MC segmentations. RESULTS: Primary results show successful implementation of the AI algorithm for segmentation of the MC, with a mean intersection over union (IoU) of 0.636 (± 0.081), a median IoU of 0.639 (± 0.081), and a mean Dice similarity coefficient of 0.774 (± 0.062). Precision, recall, and accuracy had mean values of 0.782 (± 0.121), 0.792 (± 0.108), and 0.99 (± 7.64 × 10⁻⁵), respectively. The total time for automated AI segmentation was 21.26 s (± 2.79), which is 107 times faster than accurate manual segmentation. CONCLUSIONS: This study demonstrates a novel, fast, and accurate AI-driven module for MC segmentation on CBCT. CLINICAL SIGNIFICANCE: Given the importance of adequate pre-operative mandibular canal assessment, artificial intelligence could help relieve practitioners from the delicate and time-consuming task of manually tracing and segmenting this structure, helping prevent per- and post-operative neurovascular complications.
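Voxel-wise precision, recall, and accuracy, as reported above, follow directly from the confusion counts of two boolean masks. The sketch below illustrates this on toy data with a sparse foreground (roughly 2% of voxels); the arrays are assumptions, not study data.

```python
# Minimal sketch of voxel-wise precision, recall, and accuracy (toy masks only).
import numpy as np

def voxel_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    return {"precision": tp / (tp + fp),
            "recall": tp / (tp + fn),
            "accuracy": (tp + tn) / pred.size}

rng = np.random.default_rng(2)
truth = rng.random((64, 64, 64)) > 0.98          # sparse structure (~2% of voxels)
pred = truth ^ (rng.random(truth.shape) > 0.999) # flip a few voxels to mimic errors
print(voxel_metrics(pred, truth))
```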


Subject(s)
Deep Learning; Spiral Cone-Beam Computed Tomography; Artificial Intelligence; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted; Mandibular Canal
8.
J Dent ; 115: 103865, 2021 12.
Article in English | MEDLINE | ID: mdl-34710545

ABSTRACT

OBJECTIVES: Automatic tooth segmentation and classification from cone beam computed tomography (CBCT) have become an integral component of digital dental workflows. Therefore, the aim of this study was to develop and validate a deep learning approach for automatic tooth segmentation and classification from CBCT images. METHODS: A dataset of 186 CBCT scans was acquired from two CBCT machines with different acquisition settings. An artificial intelligence (AI) framework was built to segment and classify teeth. Teeth were segmented in a three-step approach, with each step consisting of a 3D U-Net; the second step also included classification. The dataset was divided into a training set (140 scans) to train the model based on ground-truth segmented teeth, a validation set (35 scans) to test the model performance, and a test set (11 scans) to evaluate the model performance compared to the ground truth. Different evaluation metrics were used, such as precision, recall rate, and time. RESULTS: The AI framework correctly segmented teeth with optimal precision (0.98 ± 0.02) and recall (0.83 ± 0.05). The difference between the AI model and the ground truth was 0.56 ± 0.38 mm based on the 95% Hausdorff distance, confirming the high performance of the AI compared to the ground truth. Furthermore, segmentation of all the teeth within a scan was more than 1800 times faster for the AI than for an expert. Tooth classification also performed optimally, with a recall rate of 98.5% and a precision of 97.9%. CONCLUSIONS: The proposed 3D U-Net based AI framework is an accurate and time-efficient deep learning system for automatic tooth segmentation and classification without expert refinement. CLINICAL SIGNIFICANCE: The proposed system might enable potential future applications for diagnostics and treatment planning in the field of digital dentistry, while reducing clinical workload.


Subject(s)
Deep Learning; Tooth; Artificial Intelligence; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Tooth/diagnostic imaging
9.
J Dent ; 114: 103786, 2021 11.
Article in English | MEDLINE | ID: mdl-34425172

ABSTRACT

OBJECTIVE: To develop and validate a layered deep learning algorithm which automatically creates three-dimensional (3D) surface models of the human mandible from cone-beam computed tomography (CBCT) imaging. MATERIALS & METHODS: Two convolutional networks using a 3D U-Net architecture were combined and deployed in a cloud-based artificial intelligence (AI) model. The AI model was trained in two phases and iteratively improved to optimize the segmentation result using 160 anonymized full-skull CBCT scans of orthognathic surgery patients (70 preoperative scans and 90 postoperative scans). Finally, the final AI model was tested by assessing timing, consistency, and accuracy on a separate testing dataset of 15 pre- and 15 postoperative full-skull CBCT scans. The AI model was compared to user-refined AI segmentations (RAI) and to semi-automatic segmentation (SA), which is the current clinical standard. The time needed for segmentation was measured in seconds. Intra- and inter-operator consistency were assessed to check whether the segmentation protocols delivered reproducible results. The following consistency metrics were used: intersection over union (IoU), Dice similarity coefficient (DSC), Hausdorff distance (HD), absolute volume difference, and root mean square (RMS) distance. To evaluate the match of the AI and RAI results to those of the SA method, their accuracy was measured using IoU, DSC, HD, absolute volume difference, and RMS distance. RESULTS: On average, SA took 1218.4 s. RAI showed a significant drop (p < 0.0001) in timing to 456.5 s (2.7-fold decrease). The AI method only took 17 s (71.3-fold decrease). The average intra-operator IoU for RAI was 99.5% compared to 96.9% for SA. For inter-operator consistency, RAI scored an IoU of 99.6% compared to 94.6% for SA. The AI method was always consistent by default. In both the intra- and inter-operator consistency assessments, RAI outperformed SA on all metrics, indicating better consistency. With SA as the ground truth, AI and RAI scored an IoU of 94.6% and 94.4%, respectively. All accuracy metrics were similar for AI and RAI, meaning that both methods produce 3D models that closely match those produced by SA. CONCLUSION: A layered 3D U-Net architecture deep learning algorithm, with and without additional user refinements, improves time-efficiency, reduces operator error, and provides excellent accuracy when benchmarked against the clinical standard. CLINICAL SIGNIFICANCE: Semi-automatic segmentation in CBCT imaging is time-consuming and prone to user-induced errors. Layered convolutional neural networks using a 3D U-Net architecture allow direct segmentation of high-resolution CBCT images. This approach creates 3D mandibular models in a more time-efficient and consistent way, and it is accurate when benchmarked against semi-automatic segmentation.
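Two of the agreement metrics listed above, absolute volume difference and root-mean-square (RMS) surface distance, are shown in the sketch below on toy data. The voxel spacing and point clouds are illustrative assumptions, not values from the study.

```python
# Minimal sketch of absolute volume difference and RMS surface distance (toy data).
import numpy as np
from scipy.spatial import cKDTree

def abs_volume_difference(mask_a, mask_b, voxel_volume_mm3=0.3 ** 3):
    # Assumes isotropic 0.3 mm voxels purely for illustration
    return abs(int(mask_a.sum()) - int(mask_b.sum())) * voxel_volume_mm3

def rms_surface_distance(points_a, points_b):
    d_ab = cKDTree(points_b).query(points_a)[0]
    d_ba = cKDTree(points_a).query(points_b)[0]
    return float(np.sqrt(np.mean(np.concatenate([d_ab, d_ba]) ** 2)))

rng = np.random.default_rng(3)
mask_a = rng.random((64, 64, 64)) > 0.5
mask_b = mask_a.copy()
mask_b[0] = ~mask_b[0]                                   # perturb one slice
print(abs_volume_difference(mask_a, mask_b))             # mm^3
surf_a = rng.random((300, 3))
surf_b = surf_a + rng.normal(0, 0.1, (300, 3))
print(rms_surface_distance(surf_a, surf_b))              # mm
```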


Subject(s)
Deep Learning; Artificial Intelligence; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted; Mandible/diagnostic imaging
10.
J Dent ; 111: 103705, 2021 08.
Article in English | MEDLINE | ID: mdl-34077802

ABSTRACT

OBJECTIVES: This study proposed and investigated the performance of a deep learning-based three-dimensional (3D) convolutional neural network (CNN) model for automatic segmentation of the pharyngeal airway space (PAS). METHODS: A dataset of 103 computed tomography (CT) and cone-beam CT (CBCT) scans was acquired from an orthognathic surgery patient database. The acquisition devices consisted of one CT scanner (128-slice multi-slice spiral CT, Siemens Somatom Definition Flash, Siemens AG, Erlangen, Germany) and two CBCT devices (Promax 3D Max, Planmeca, Helsinki, Finland and NewTom VGi evo, Cefla, Imola, Italy) with different scanning parameters. A 3D CNN-based model (3D U-Net) was built for automatic segmentation of the PAS. The complete CT/CBCT dataset was split into three sets: a training set (n = 48) for training the model based on ground-truth observer-based manual segmentation, a test set (n = 25) for assessing the final performance of the model, and a validation set (n = 30) for evaluating the model's performance against observer-based segmentation. RESULTS: The CNN model identified the segmented region with optimal precision (0.97 ± 0.01) and recall (0.96 ± 0.03). The maximal difference between the automatic segmentation and the ground truth, based on the 95% Hausdorff distance, was 0.98 ± 0.74 mm. The Dice score of 0.97 ± 0.02 confirmed the high similarity of the segmented region to the ground truth. The intersection over union (IoU) metric was also high (0.93 ± 0.03). Among the acquisition devices, the NewTom VGi evo CBCT showed better performance than the Promax 3D Max and the CT device. CONCLUSION: The proposed 3D U-Net model offered an accurate and time-efficient method for the segmentation of the PAS from CT/CBCT images. CLINICAL SIGNIFICANCE: The proposed method can allow clinicians to accurately and efficiently diagnose, plan treatment for, and follow up patients with dento-skeletal deformities and obstructive sleep apnea, which might influence the upper airway space, thereby further improving patient care.
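The reported split sizes (48/25/30 out of 103 scans) can be reproduced with a seeded random shuffle, as in the sketch below; the scan identifiers and the use of NumPy for the split are assumptions for illustration only.

```python
# Minimal sketch of a reproducible random split into training/test/validation subsets.
import numpy as np

scan_ids = np.arange(103)                       # 103 CT/CBCT scans (hypothetical IDs)
rng = np.random.default_rng(42)
rng.shuffle(scan_ids)
train, test, validation = scan_ids[:48], scan_ids[48:73], scan_ids[73:]
print(len(train), len(test), len(validation))   # 48 25 30
```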


Subject(s)
Cone-Beam Computed Tomography; Neural Networks, Computer; Databases, Factual; Finland; Humans; Image Processing, Computer-Assisted; Tomography, X-Ray Computed
11.
J Endod ; 47(5): 827-835, 2021 May.
Article in English | MEDLINE | ID: mdl-33434565

ABSTRACT

INTRODUCTION: Tooth segmentation on cone-beam computed tomographic (CBCT) imaging is a labor-intensive task considering the limited contrast resolution and potential disturbance by various artifacts. Fully automated tooth segmentation cannot be achieved by merely relying on CBCT intensity variations. This study aimed to develop and validate an artificial intelligence (AI)-driven tool for automated tooth segmentation on CBCT imaging. METHODS: A total of 433 Digital Imaging and Communications in Medicine images of single- and double-rooted teeth, randomly selected from 314 anonymized CBCT scans, were imported and manually segmented. An AI-driven tooth segmentation algorithm based on a feature pyramid network was developed to automatically detect and segment teeth, replacing manual user contour placement. The AI-driven tool was evaluated based on volume comparison, intersection over union, the Dice similarity coefficient, morphologic surface deviation, and total segmentation time. RESULTS: Overall, AI-driven and clinical reference segmentations resulted in very similar segmentation volumes. The mean intersection over union for full-tooth segmentation was 0.87 (±0.03) and 0.88 (±0.03) for semiautomated (SA, clinical reference) versus fully automated AI-driven (F-AI) and refined AI-driven (R-AI) tooth segmentation, respectively. R-AI and F-AI segmentation showed an average median surface deviation from SA segmentation of 9.96 µm (±59.33 µm) and 7.85 µm (±69.55 µm), respectively. SA segmentation of single- and double-rooted teeth took a mean total time of 6.6 minutes (±76.15 seconds), F-AI segmentation 0.5 minutes (±8.64 seconds, 12 times faster), and R-AI segmentation 1.2 minutes (±33.02 seconds, 6 times faster). CONCLUSIONS: This study presented a unique, fast, and accurate approach for AI-driven automated tooth segmentation on CBCT imaging. These results may open doors for AI-driven applications in surgical and treatment planning in oral health care.
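The abstract names a feature pyramid network (FPN) as the basis of the segmentation algorithm. torchvision ships a generic FPN module, shown below; the channel sizes and feature maps are illustrative assumptions, not the study's architecture.

```python
# Minimal usage sketch of torchvision's generic feature pyramid network (toy tensors).
from collections import OrderedDict
import torch
from torchvision.ops import FeaturePyramidNetwork

fpn = FeaturePyramidNetwork(in_channels_list=[64, 128, 256], out_channels=128)
features = OrderedDict(
    c3=torch.zeros(1, 64, 64, 64),    # high-resolution, low-level features
    c4=torch.zeros(1, 128, 32, 32),
    c5=torch.zeros(1, 256, 16, 16),   # low-resolution, high-level features
)
outputs = fpn(features)               # same keys, all with 128 output channels
print({k: tuple(v.shape) for k, v in outputs.items()})
```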


Subject(s)
Artificial Intelligence; Tooth; Artifacts; Cone-Beam Computed Tomography; Tooth Root
12.
Clin Oral Investig ; 25(4): 2257-2267, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32844259

ABSTRACT

OBJECTIVE: To evaluate the performance of a new artificial intelligence (AI)-driven tool for tooth detection and segmentation on panoramic radiographs. MATERIALS AND METHODS: In total, 153 radiographs were collected. A dentomaxillofacial radiologist labeled and segmented each tooth, serving as the ground truth. Class-agnostic crops containing one tooth each resulted in 3576 training teeth. The AI-driven tool combined two deep convolutional neural networks with expert refinement. Accuracy of the system in detecting and segmenting teeth was the primary outcome, and time analysis the secondary outcome. The Kruskal-Wallis test was used to evaluate differences in performance metrics among tooth groups and devices, and the chi-square test to verify associations between the number of corrections, the presence of false positives and false negatives, and the crown and root parts of teeth with potential AI misinterpretations. RESULTS: The system achieved a sensitivity of 98.9% and a precision of 99.6% for tooth detection. For tooth segmentation, lower canines presented the best results, with the following values for intersection over union, precision, recall, F1-score, and Hausdorff distance: 95.3%, 96.9%, 98.3%, 97.5%, and 7.9, respectively. Although still above 90%, segmentation results for both upper and lower molars were somewhat lower. The method showed a clinically significant 67% reduction in the time consumed compared with manual segmentation. CONCLUSIONS: The AI tool yielded highly accurate and fast detection and segmentation of teeth, faster than the manual ground-truth workflow alone. CLINICAL SIGNIFICANCE: An innovative clinical AI-driven tool detected and segmented teeth on panoramic radiographs faster and more accurately than manual segmentation.
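A Kruskal-Wallis comparison of a performance metric across tooth groups, as described in the methods, can be run with SciPy as sketched below; the scores are made-up illustrative values, not the study's data.

```python
# Illustrative Kruskal-Wallis test on per-group IoU scores (hypothetical values).
from scipy.stats import kruskal

iou_incisors  = [0.95, 0.94, 0.96, 0.93]
iou_premolars = [0.97, 0.96, 0.98, 0.97]
iou_molars    = [0.92, 0.93, 0.91, 0.94]
stat, p = kruskal(iou_incisors, iou_premolars, iou_molars)
print(f"H={stat:.2f}, p={p:.3f}")
```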


Subject(s)
Artificial Intelligence; Tooth; Molar; Neural Networks, Computer; Radiography, Panoramic
13.
Article in English | MEDLINE | ID: mdl-32466156

ABSTRACT

The purpose of the presented artificial intelligence (AI) tool was to automatically segment the mandibular molars on panoramic radiographs and extract the molar orientations in order to predict the eruption potential of third molars. In total, 838 panoramic radiographs were used for training (n = 588) and validation (n = 250) of the network. A fully convolutional neural network with a ResNet-101 backbone jointly predicted the molar segmentation maps and an estimate of the orientation lines, which was then iteratively refined by regression on the mesial and distal sides of the segmentation contours. Accuracy was quantified as the fraction of correct angulations (within predefined error intervals) compared to human reference measurements. Performance differences between the network and reference measurements were visually assessed using Bland-Altman plots. The quantitative analysis for automatic molar segmentation resulted in mean IoUs approximating 90%. Mean Hausdorff distances were lowest for first and second molars. The network angulation measurements reached accuracies of 79.7% [-2.5°; 2.5°] and 98.1% [-5°; 5°], combined with a clinically significant reduction in user time of >53%. In conclusion, this study validated a new and unique AI-driven tool for fast, accurate, and consistent automated measurement of molar angulations on panoramic radiographs. Complementing the dental practitioner with accurate AI tools will facilitate and optimize dental care and synergistically lead to ever-increasing diagnostic accuracy.
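The accuracy figures above are fractions of angulation errors falling within predefined intervals, and the Bland-Altman analysis summarises bias and limits of agreement. The sketch below computes both from made-up angle measurements, which are illustrative assumptions only.

```python
# Minimal sketch: interval-based angulation accuracy and Bland-Altman summary (toy angles).
import numpy as np

ai_angles  = np.array([12.1, 35.4, 8.2, 27.9, 41.0, 19.6])   # degrees (hypothetical)
ref_angles = np.array([11.5, 36.8, 8.0, 30.5, 40.2, 18.9])
errors = ai_angles - ref_angles

for limit in (2.5, 5.0):
    acc = np.mean(np.abs(errors) <= limit)
    print(f"within ±{limit}°: {acc:.1%}")

bias = errors.mean()
loa = 1.96 * errors.std(ddof=1)                 # Bland-Altman limits of agreement
print(f"bias {bias:.2f}°, limits of agreement ±{loa:.2f}°")
```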


Subject(s)
Artificial Intelligence; Molar; Radiography, Panoramic; Dentists; Humans; Molar/anatomy & histology; Molar/growth & development; Professional Role
14.
Proteomics Clin Appl ; 14(3): e1900040, 2020 05.
Article in English | MEDLINE | ID: mdl-31950592

ABSTRACT

The increasing storage of information, data, and forms of knowledge has led to the development of new technologies that can help accomplish complex tasks in different areas, such as dentistry. In this context, the role of computational methods, such as radiomics and artificial intelligence (AI) applications, has been progressing remarkably in dentomaxillofacial radiology (DMFR). These tools bring new perspectives for the diagnosis, classification, and prediction of oral diseases, for treatment planning, and for the evaluation and prediction of outcomes, minimizing the possibility of human error. This paper presents a comprehensive review of the state of the art in using radiomics and machine learning (ML) for imaging in oral healthcare. Although the number of published studies is still relatively low, the preliminary results are very promising, and in the near future an augmented dentomaxillofacial radiology (ADMFR) will combine the use of radiomics-based and AI-based analyses with the radiologist's evaluation. In addition to the opportunities and possibilities, challenges and limitations are also discussed for further investigation.


Subject(s)
Delivery of Health Care/methods; Image Processing, Computer-Assisted/methods; Machine Learning; Oral Health; Humans