Results 1 - 8 of 8
1.
Int J Paediatr Dent; 2024 May 09.
Article in English | MEDLINE | ID: mdl-38725105

ABSTRACT

BACKGROUND: Differences between healthy and inflamed pulp on periapical radiographs are traditionally so subtle that they may be imperceptible to human experts, limiting their potential use as an adjunct clinical diagnostic feature. AIM: This study aimed to investigate the feasibility of an image-analysis technique based on a convolutional neural network (CNN) for detecting irreversible pulpitis in primary molars on periapical radiographs (PRs). DESIGN: This retrospective study was performed in two health centres. Patients who received indirect pulp therapy at Peking University Hospital for Stomatology were retrospectively identified and randomly divided into training and validation sets (8:2). Using PRs as input to an EfficientNet CNN, the model was trained to categorise cases into either the success or the failure group and was externally tested on patients who presented to our affiliate institution. Model performance was evaluated using sensitivity, specificity, accuracy and F1 score. RESULTS: A total of 348 PRs with deep caries were enrolled from the two centres. The deep learning model achieved the highest accuracy of 0.90 (95% confidence interval: 0.79-0.96) in the internal validation set, with an overall accuracy of 0.85 in the external test set. The mean greyscale value was higher in the failure group than in the success group (p = .013). CONCLUSION: The deep learning-based model could detect irreversible pulpitis in primary molars with deep caries on PRs and provides a convenient, complementary method for assessing pulp status.
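The abstract gives no implementation details; as a minimal sketch, the following shows how a binary EfficientNet classifier of the kind described could be set up in PyTorch/torchvision. The variant (B0), input size, preprocessing, and optimizer settings are assumptions, not the authors' configuration.

```python
# Minimal sketch, not the authors' code: EfficientNet-B0 (variant assumed)
# fine-tuned as a binary success/failure classifier for periapical radiographs.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
# Swap the 1000-class ImageNet head for a 2-class head (success / failure).
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch; real PRs would be greyscale
# images replicated to 3 channels and resized (assumed preprocessing).
x = torch.rand(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))    # 0 = success, 1 = failure
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```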

2.
J Oral Maxillofac Surg; 81(8): 1011-1020, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37217163

ABSTRACT

PURPOSE: Zygomatic fractures involve complex anatomical structures of the mid-face, and their diagnosis can be challenging and labor-intensive. This research aimed to evaluate the performance of an automatic algorithm based on a convolutional neural network (CNN) for detecting zygomatic fractures on spiral computed tomography (CT). MATERIALS AND METHODS: We designed a cross-sectional, retrospective diagnostic trial. Clinical records and CT scans of patients with zygomatic fractures were reviewed. The sample consisted of patients with different zygomatic fracture statuses (positive or negative) treated at Peking University School of Stomatology from 2013 to 2019. All CT samples were randomly divided into training, validation, and test sets at a ratio of 6:2:2. All CT scans were viewed and annotated by three experienced maxillofacial surgeons, serving as the gold standard. The algorithm consisted of two modules: (1) segmentation of the zygomatic region on CT based on U-Net, a type of CNN model; and (2) fracture detection based on a 34-layer deep residual network (ResNet-34). The region segmentation model was used first to detect and extract the zygomatic region, and the detection model was then used to determine the fracture status. The Dice coefficient was used to evaluate the performance of the segmentation algorithm; sensitivity and specificity were used to assess the performance of the detection model. The covariates included age, gender, duration of injury, and etiology of the fractures. RESULTS: A total of 379 patients with an average age of 35.43 ± 12.74 years were included: 203 patients without fractures and 176 patients with 220 zygomatic fracture sites (44 patients had bilateral fractures). The Dice coefficients between the zygomatic region segmentation model and the manually labeled gold standard were 0.9337 (coronal plane) and 0.9269 (sagittal plane). The sensitivity and specificity of the fracture detection model were both 100% (p > .05). CONCLUSION: The performance of the CNN-based algorithm for zygomatic fracture detection was not statistically different from that of the gold standard (manual diagnosis), supporting its potential clinical application.
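A hedged sketch of the two-module design described above (segmentation followed by ResNet-34 classification): the U-Net output is replaced here by a hand-made dummy mask, and only the crop-and-classify step is shown. Shapes, preprocessing, and the cropping logic are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the two-module pipeline (not the authors' code): a
# segmentation mask of the zygomatic region defines a crop of the CT slice,
# which a ResNet-34 then classifies as fracture / no fracture.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def crop_to_mask(ct_slice: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Crop a 2D CT slice to the bounding box of a binary mask."""
    ys, xs = torch.nonzero(mask, as_tuple=True)
    return ct_slice[ys.min().item():ys.max().item() + 1,
                    xs.min().item():xs.max().item() + 1]

# Module 2: fracture status classifier with a ResNet-34 backbone.
classifier = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
classifier.fc = nn.Linear(classifier.fc.in_features, 2)

# Dummy data standing in for one CT slice and a U-Net-predicted zygoma mask.
ct_slice = torch.rand(512, 512)
mask = torch.zeros(512, 512)
mask[180:320, 120:260] = 1

roi = crop_to_mask(ct_slice, mask)
roi = F.interpolate(roi[None, None], size=(224, 224), mode="bilinear")
logits = classifier(roi.repeat(1, 3, 1, 1))   # shape [1, 2]
```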


Subjects
Zygomatic Fractures; Adult; Humans; Middle Aged; Young Adult; Cross-Sectional Studies; Neural Networks, Computer; Retrospective Studies; Tomography, X-Ray Computed/methods; Zygomatic Fractures/diagnostic imaging
3.
Int Wound J; 20(4): 910-916, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36054618

ABSTRACT

The study aimed to develop and validate a convolutional neural network (CNN)-based deep learning method for automatic diagnosis and grading of skin frostbite. A dataset of 71 annotated images was used for training, validation, and testing of a ResNet-50-based model. Performance was evaluated on the test set, and the diagnostic and grading performance of the approach was compared with that of two residents from the burns department. The approach correctly identified all degree IV frostbite (18/18, 100%), with one misclassification each for degree I (29/30, 96.67%), degree II (28/29, 96.55%), and degree III (37/38, 97.37%). The accuracy of the approach on the whole test set was 97.39% (112/115), whereas the accuracies of the two residents were 77.39% and 73.04%, respectively. A weighted kappa of 0.583 indicated good reliability between the two residents (P = .445). Kendall's coefficient of concordance was 0.326 (P = .548), indicating differences in accuracy between the approach and the two residents. Our CNN-based approach demonstrated encouraging performance for the automatic diagnosis and grading of skin frostbite, with higher accuracy and efficiency than the residents.
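For illustration only, the sketch below (made-up labels, not the study data) shows how the agreement statistics quoted above can be computed with scikit-learn; the kappa weighting scheme is an assumption, since the abstract does not state it.

```python
# Toy example, not the study data: accuracy against the reference grading and a
# weighted kappa between two raters, as in the comparison described above.
from sklearn.metrics import accuracy_score, cohen_kappa_score

reference  = [1, 2, 3, 4, 2, 1, 3, 4, 2, 1]   # frostbite degrees I-IV
model_pred = [1, 2, 3, 4, 2, 1, 3, 4, 2, 2]
resident_a = [1, 2, 2, 4, 3, 1, 3, 4, 2, 1]
resident_b = [2, 2, 3, 4, 2, 1, 2, 4, 1, 1]

print("model accuracy:     ", accuracy_score(reference, model_pred))
print("resident A accuracy:", accuracy_score(reference, resident_a))
# Inter-rater reliability between the residents; linear weighting is assumed.
print("weighted kappa:     ", cohen_kappa_score(resident_a, resident_b, weights="linear"))
```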


Assuntos
Congelamento das Extremidades , Interpretação de Imagem Assistida por Computador , Redes Neurais de Computação , Humanos , Congelamento das Extremidades/diagnóstico , Reprodutibilidade dos Testes , Índice de Gravidade de Doença
4.
Clin Oral Investig; 26(6): 4593-4601, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35218428

ABSTRACT

OBJECTIVES: This study aimed to evaluate the accuracy and reliability of convolutional neural networks (CNNs) for the detection and classification of mandibular fractures on spiral computed tomography (CT). MATERIALS AND METHODS: Between January 2013 and July 2020, 686 patients with mandibular fractures who underwent CT scanning were classified and annotated by three experienced maxillofacial surgeons, serving as the ground truth. An algorithm comprising two convolutional neural networks (U-Net and ResNet) was trained, validated, and tested using 222, 56, and 408 CT scans, respectively. The diagnostic performance of the algorithm was compared with the ground truth and evaluated by the Dice coefficient, accuracy, sensitivity, specificity, and area under the ROC curve (AUC). RESULTS: A total of 1,506 mandibular fractures in nine subregions were diagnosed in the 686 patients. The Dice coefficient of mandible segmentation using U-Net was 0.943. The accuracies for the nine subregions were all above 90%, with a mean AUC of 0.956. CONCLUSIONS: CNNs showed reliability and accuracy comparable to the ground truth in detecting and classifying mandibular fractures on CT. CLINICAL RELEVANCE: The algorithm for automatic detection and classification of mandibular fractures could help improve diagnostic efficiency and extend specialist expertise to areas with limited medical resources.
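As a small illustration of the reported metrics (toy data and assumed shapes, not the authors' evaluation code), the sketch below computes a Dice coefficient for a segmentation mask and a per-subregion ROC AUC.

```python
# Toy sketch of the evaluation metrics reported above: Dice for the U-Net
# mandible segmentation and ROC AUC for per-subregion fracture classification.
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

# Dummy 3D masks standing in for predicted and annotated mandible segmentations.
pred_mask = np.zeros((64, 64, 64)); pred_mask[20:40, 20:40, 20:40] = 1
true_mask = np.zeros((64, 64, 64)); true_mask[22:42, 20:40, 20:40] = 1
print("Dice:", round(dice(pred_mask, true_mask), 3))

# Dummy per-scan fracture probabilities and labels for one subregion.
labels = np.array([0, 1, 1, 0, 1, 0, 0, 1])
scores = np.array([0.1, 0.8, 0.7, 0.3, 0.9, 0.2, 0.4, 0.6])
print("AUC:", roc_auc_score(labels, scores))
```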


Assuntos
Fraturas Mandibulares , Algoritmos , Humanos , Fraturas Mandibulares/diagnóstico por imagem , Redes Neurais de Computação , Reprodutibilidade dos Testes , Tomografia Computadorizada por Raios X/métodos
5.
Clin Oral Investig; 26(1): 981-991, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34312683

ABSTRACT

OBJECTIVES: The objective of our study was to develop and validate a deep learning approach based on convolutional neural networks (CNNs) for automatic detection of the mandibular third molar (M3) and the mandibular canal (MC) and evaluation of the relationship between them on CBCT. MATERIALS AND METHODS: A dataset of 254 CBCT scans annotated by radiologists was used for training, validation, and testing. The proposed approach consisted of two modules: (1) detection and pixel-wise segmentation of the M3 and MC based on U-Nets; and (2) M3-MC relation classification based on ResNet-34. Performance was evaluated on the test set, and the classification performance of our approach was compared with that of two residents in oral and maxillofacial radiology. RESULTS: For segmentation, the M3 had a mean Dice similarity coefficient (mDSC) of 0.9730 and a mean intersection over union (mIoU) of 0.9606; the MC had an mDSC of 0.9248 and an mIoU of 0.9003. The classification models achieved a mean sensitivity of 90.2%, a mean specificity of 95.0%, and a mean accuracy of 93.3%, which was on par with the residents. CONCLUSIONS: Our CNN-based approach demonstrated encouraging performance for the automatic detection and evaluation of the M3 and MC on CBCT. CLINICAL RELEVANCE: An automated CNN-based approach for detection and evaluation of the M3 and MC on CBCT has been established, which can be utilized to improve diagnostic efficiency and facilitate precise diagnosis and treatment of the M3.
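The sketch below (toy data; a binary contact/no-contact relation is assumed purely for illustration, the paper's own class definitions may differ) illustrates the two families of metrics reported above: pixel-wise IoU for the segmentations and sensitivity/specificity for the M3-MC relation classification.

```python
# Toy illustration of the reported metrics, not the authors' evaluation code.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / (np.logical_or(pred, truth).sum() + eps)

pred = np.zeros((128, 128)); pred[30:90, 40:100] = 1
true = np.zeros((128, 128)); true[32:92, 40:100] = 1
print("IoU:", round(iou(pred, true), 3))

# Hypothetical binary relation labels (1 = M3 in contact with MC, assumed here).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```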


Assuntos
Aprendizado Profundo , Tomografia Computadorizada de Feixe Cônico Espiral , Tomografia Computadorizada de Feixe Cônico , Canal Mandibular , Dente Molar , Dente Serotino/diagnóstico por imagem
6.
J Dent; 144: 104931, 2024 May.
Article in English | MEDLINE | ID: mdl-38458378

ABSTRACT

OBJECTIVES: To develop a deep learning-based system for precise, robust, and fully automated segmentation of the mandibular canal on cone beam computed tomography (CBCT) images. METHODS: The system was developed on 536 CBCT scans (training set: 376, validation set: 80, testing set: 80) from one center and validated on an external dataset of 89 CBCT scans from 3 centers. Each scan was annotated using a multi-stage annotation method and refined by oral and maxillofacial radiologists. We proposed a three-step strategy for the mandibular canal segmentation: extraction of the region of interest based on 2D U-Net, global segmentation of the mandibular canal, and segmentation refinement based on 3D U-Net. RESULTS: The system consistently achieved accurate mandibular canal segmentation in the internal set (Dice similarity coefficient [DSC], 0.952; intersection over union [IoU], 0.912; average symmetric surface distance [ASSD], 0.046 mm; 95% Hausdorff distance [HD95], 0.325 mm) and the external set (DSC, 0.960; IoU, 0.924; ASSD, 0.040 mm; HD95, 0.288 mm). CONCLUSIONS: These results demonstrated the potential clinical application of this AI system in facilitating clinical workflows related to mandibular canal localization. CLINICAL SIGNIFICANCE: Accurate delineation of the mandibular canal on CBCT images is critical for implant placement, mandibular third molar extraction, and orthognathic surgery. This AI system enables accurate segmentation across different models, which could contribute to more efficient and precise dental automation systems.
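The surface-distance metrics quoted above (ASSD and HD95) can be computed from binary masks as in the sketch below; this is a generic implementation under assumed isotropic voxel spacing and toy masks, not the authors' evaluation code.

```python
# Generic ASSD / HD95 implementation for binary 3D masks (toy example, assumed
# voxel spacing); the DSC and IoU values above follow from simple overlap counts.
import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from the surface voxels of mask a to the surface of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    d_ab, d_ba = surface_distances(a, b, spacing), surface_distances(b, a, spacing)
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    d_ab, d_ba = surface_distances(a, b, spacing), surface_distances(b, a, spacing)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

pred = np.zeros((48, 48, 48)); pred[10:30, 10:30, 10:30] = 1   # dummy prediction
true = np.zeros((48, 48, 48)); true[11:31, 10:30, 10:30] = 1   # dummy ground truth
print("ASSD (mm):", round(assd(pred, true), 3), "HD95 (mm):", round(hd95(pred, true), 3))
```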


Assuntos
Tomografia Computadorizada de Feixe Cônico , Imageamento Tridimensional , Mandíbula , Tomografia Computadorizada de Feixe Cônico/métodos , Humanos , Mandíbula/diagnóstico por imagem , Mandíbula/anatomia & histologia , Imageamento Tridimensional/métodos , Aprendizado Profundo , Processamento de Imagem Assistida por Computador/métodos
7.
J Dent; 136: 104607, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37422206

ABSTRACT

OBJECTIVES: This study developed and validated a deep learning-based method to automatically segment and number teeth in panoramic radiographs across primary, mixed, and permanent dentitions. METHODS: A total of 6,046 panoramic radiographs were collected and annotated. The dataset encompassed primary, mixed and permanent dentitions and dental abnormalities such as tooth number anomalies, dental diseases, dental prostheses, and orthodontic appliances. A deep learning-based algorithm consisting of a U-Net-based region of interest extraction model, a Hybrid Task Cascade-based teeth segmentation and numbering model, and a post-processing procedure was trained on 4,232 images, validated on 605 images, and tested on 1,209 images. Precision, recall and Intersection-over-Union (IoU) were used to evaluate its performance. RESULTS: The deep learning-based teeth identification algorithm achieved good performance on panoramic radiographs, with precision and recall for teeth segmentation and numbering exceeding 97%, and the IoU between predictions and ground truths reaching 92%. It generalized well across all three dentition stages and complex real-world cases. CONCLUSIONS: By utilizing a two-stage training framework with a large-scale heterogeneous dataset, the automatic teeth identification algorithm achieved a performance level comparable to that of dental experts. CLINICAL SIGNIFICANCE: Deep learning can be leveraged to aid clinical interpretation of panoramic radiographs across primary, mixed, and permanent dentitions, even in the presence of real-world complexities. This robust teeth identification algorithm could contribute to the future development of more advanced, diagnosis- or treatment-oriented dental automation systems.
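As a purely hypothetical sketch of the kind of post-processing step mentioned above (the authors' actual procedure is not described in the abstract), the snippet below deduplicates per-tooth instance predictions so each FDI tooth number is reported at most once; all data structures and field names are assumptions.

```python
# Hypothetical post-processing sketch, not the authors' procedure: keep only the
# highest-confidence detection per FDI tooth number.
from dataclasses import dataclass

@dataclass
class ToothInstance:
    fdi_number: int   # e.g. 11-48 for permanent teeth, 51-85 for primary teeth
    score: float      # detection confidence
    mask_area: int    # pixels in the predicted instance mask

def deduplicate(instances: list[ToothInstance]) -> list[ToothInstance]:
    best: dict[int, ToothInstance] = {}
    for inst in instances:
        kept = best.get(inst.fdi_number)
        if kept is None or inst.score > kept.score:
            best[inst.fdi_number] = inst
    return sorted(best.values(), key=lambda t: t.fdi_number)

preds = [ToothInstance(36, 0.97, 5200), ToothInstance(36, 0.41, 900),
         ToothInstance(75, 0.88, 3100)]
print([t.fdi_number for t in deduplicate(preds)])   # -> [36, 75]
```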


Assuntos
Aprendizado Profundo , Radiografia Panorâmica , Dentição Permanente , Algoritmos
8.
Springerplus; 5(1): 1097, 2016.
Article in English | MEDLINE | ID: mdl-27468398

ABSTRACT

No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most existing no-reference image quality metrics are designed for one or a set of predefined distortion types and are unlikely to generalize to images degraded by other types of distortion, so there is a strong need for no-reference methods that are applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistics model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of the wavelet coefficients of the test image, so that only the fitted distribution parameters are needed to evaluate image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the algorithm is easy to implement and computationally efficient. Furthermore, the method can be applied to many well-known types of image distortion and achieves good quality-prediction performance.
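The core idea can be sketched as follows (the wavelet family, decomposition level, and feature choice are assumptions, not the paper's exact settings): decompose the image, fit a generalized Gaussian density to each detail subband, and keep only the fitted parameters as quality features.

```python
# Minimal sketch of the described approach (assumed wavelet and level): fit a
# generalized Gaussian density to the wavelet detail coefficients so that only
# the fitted shape/scale parameters summarize each subband.
import numpy as np
import pywt
from scipy.stats import gennorm

image = np.random.rand(256, 256)                  # stand-in for a test image
coeffs = pywt.wavedec2(image, wavelet="db4", level=3)

features = []
for detail_level in coeffs[1:]:                   # (cH, cV, cD) per level
    for subband in detail_level:
        beta, loc, scale = gennorm.fit(subband.ravel(), floc=0.0)
        features.extend([beta, scale])            # GGD shape and scale

print(len(features), "features:", np.round(features, 3))
# A quality score would then be predicted from these features, e.g. by comparing
# them with statistics learned from undistorted natural images.
```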
