ABSTRACT
Purpose: This study aimed to assess the performance of 2-dimensional (2D) imaging with a microscopy coil in delineating teeth and periodontal tissues, compared with conventional 3-dimensional (3D) imaging, on a 3 T magnetic resonance imaging (MRI) unit. Materials and Methods: Twelve healthy participants (4 men and 8 women; mean age: 25.6 years; range: 20-52 years) with no dental symptoms were included. The left mandibular first molars and surrounding periodontal tissues were examined using two sequences: 2D proton density-weighted (PDw) imaging and 3D enhanced T1 high-resolution isotropic volume excitation (eTHRIVE) imaging. The 2D images were acquired on a 3 T MRI unit with a 47 mm microscopy coil, while the 3D images were acquired on the same unit with a head-neck coil. Oral radiologists assessed dental and periodontal structures using a 4-point Likert scale. Inter- and intra-observer agreement was determined using the weighted kappa coefficient, and the Wilcoxon signed-rank test was used to compare 2D-PDw and 3D-eTHRIVE images. Results: Qualitative analysis showed significantly better visualization scores for 2D-PDw imaging than for 3D-eTHRIVE imaging (Wilcoxon signed-rank test). 2D-PDw images provided improved visibility of the tooth, root dental pulp, periodontal ligament, lamina dura, coronal dental pulp, gingiva, and nutrient tract. Inter-observer reliability ranged from moderate to almost perfect agreement, and intra-observer agreement was in a similar range. Conclusion: 2D-PDw images acquired using a 3 T MRI unit and a microscopy coil effectively visualized nearly all aspects of the teeth and periodontal tissues.
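A minimal sketch of the statistical analysis outlined above, assuming hypothetical 4-point Likert score arrays and linear kappa weighting (the abstract does not state the weighting scheme); the scipy and scikit-learn calls are used here purely for illustration, not as the authors' actual analysis code.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

# Hypothetical 4-point Likert visualization scores for one anatomical
# structure in the 12 participants (values are illustrative only).
scores_2d_pdw_obs1    = np.array([4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 4])  # observer 1, 2D-PDw
scores_3d_ethrive_obs1 = np.array([3, 2, 3, 3, 2, 3, 3, 2, 2, 3, 3, 2])  # observer 1, 3D-eTHRIVE
scores_2d_pdw_obs2    = np.array([4, 3, 3, 4, 4, 3, 4, 4, 3, 4, 3, 4])  # observer 2, 2D-PDw

# Paired comparison of the two sequences (Wilcoxon signed-rank test).
stat, p_value = wilcoxon(scores_2d_pdw_obs1, scores_3d_ethrive_obs1)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p_value:.4f}")

# Inter-observer agreement on the ordinal scale (weighted kappa;
# linear weighting is an assumption, not stated in the abstract).
kappa = cohen_kappa_score(scores_2d_pdw_obs1, scores_2d_pdw_obs2, weights="linear")
print(f"Weighted kappa (observer 1 vs. observer 2): {kappa:.2f}")
```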
ABSTRACT
OBJECTIVES: This study aimed to clarify the performance of magnetic resonance imaging (MRI)-based deep learning classification models in diagnosing temporomandibular joint osteoarthritis (TMJ-OA) and to compare the resulting diagnostic assistance with human observers. METHODS: The subjects were 118 patients who underwent MRI for examination of TMJ disorders. One hundred condyles with TMJ-OA and 100 condyles without TMJ-OA were enrolled. Deep learning was performed with four networks (ResNet18, EfficientNet b4, Inception v3, and GoogLeNet) using five-fold cross validation. Receiver operating characteristic (ROC) curves were drawn for each model and diagnostic metrics were determined. The performances of the four network models were compared using Kruskal-Wallis tests and post-hoc Scheffe tests, and the ROC curves of the best model and the human observers were compared using chi-square tests, with p < 0.05 considered significant. RESULTS: ResNet18 had areas under the curve (AUCs) of 0.91-0.93 and accuracies of 0.85-0.88, the highest among the four networks. There were significant differences in AUC and accuracy between ResNet and GoogLeNet (p = 0.0264 and p = 0.0418, respectively). The kappa values of the models were high: 0.95 for ResNet and 0.93 for EfficientNet. The experts achieved AUC and accuracy values similar to the ResNet metrics (0.94 and 0.85, and 0.84 and 0.84, respectively), but with a lower kappa of 0.67. The corresponding values for the dental residents were lower. There were significant differences in AUC between ResNet and the residents (p < 0.0001) and between the experts and the residents (p < 0.0001). CONCLUSIONS: Using a deep learning model, high performance was confirmed for MRI diagnosis of TMJ-OA.
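A minimal sketch of one configuration named above (ResNet18 with five-fold cross validation), assuming a hypothetical `condyle_dataset` of cropped condylar MRI patches with binary TMJ-OA labels; the PyTorch/torchvision stack, hyperparameters, and dataset interface are illustrative assumptions, not details taken from the study.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import models
from sklearn.model_selection import StratifiedKFold

def build_resnet18(num_classes: int = 2) -> nn.Module:
    # ImageNet-pretrained backbone with the classifier head replaced for
    # binary TMJ-OA vs. non-TMJ-OA classification (pretraining is an assumption).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def run_cross_validation(condyle_dataset, labels, device="cuda", epochs=30):
    """Five-fold cross validation; yields per-fold labels and probabilities for ROC analysis."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(skf.split(labels, labels)):
        model = build_resnet18().to(device)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()
        train_loader = DataLoader(Subset(condyle_dataset, train_idx), batch_size=16, shuffle=True)
        test_loader = DataLoader(Subset(condyle_dataset, test_idx), batch_size=16)

        model.train()
        for _ in range(epochs):
            for images, targets in train_loader:
                images, targets = images.to(device), targets.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), targets)
                loss.backward()
                optimizer.step()

        # Collect per-fold probabilities of the TMJ-OA class for the ROC curve.
        model.eval()
        probs, truths = [], []
        with torch.no_grad():
            for images, targets in test_loader:
                logits = model(images.to(device))
                probs.extend(torch.softmax(logits, dim=1)[:, 1].cpu().tolist())
                truths.extend(targets.tolist())
        yield fold, truths, probs
```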
ABSTRACT
OBJECTIVES: To clarify the performance of transfer learning with a small number of Waters' images at institution B in diagnosing maxillary sinusitis, based on a source model trained with a large number of panoramic radiographs at institution A. METHODS: The source model was created by a 200-epoch training process with 800 training and 60 validation datasets of panoramic radiographs at institution A using VGG-16. One hundred and eighty Waters' and 180 panoramic image patches with or without maxillary sinusitis at institution B were enrolled in this study and arbitrarily assigned to 120 training, 20 validation, and 40 test datasets, respectively. Transfer learning of 200 epochs was performed using the training and validation datasets of Waters' images based on the source model, yielding the target model. The test Waters' images were applied to the source and target models, and the performance of each model was evaluated. For comparison, transfer learning with panoramic radiographs and evaluation by two radiologists were also undertaken. Evaluation was based on the area under the receiver operating characteristic curve (AUC). RESULTS: When using Waters' images as the test dataset, the AUCs of the source model, target model, and radiologists were 0.780, 0.830, and 0.806, respectively. There were no significant differences between these models and the radiologists, whereas the target model performed better than the source model. For panoramic radiographs, the AUCs were 0.863, 0.863, and 0.808, respectively, with no significant differences. CONCLUSIONS: This study performed transfer learning using a small number of Waters' images, based on a source model created solely from panoramic radiographs, resulting in an improvement in performance to an AUC of 0.830 in diagnosing maxillary sinusitis, equivalent to that of the radiologists. Transfer learning is considered a useful method for improving diagnostic performance.
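A minimal sketch of the transfer-learning step described above, assuming the panoramic-radiograph source model is available as a PyTorch VGG-16 checkpoint at the hypothetical path "source_panoramic_vgg16.pt"; the original framework, optimizer, and learning rate are not stated in the abstract, so these choices are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def load_source_model(checkpoint_path: str, num_classes: int = 2) -> nn.Module:
    # Rebuild the VGG-16 architecture with a 2-class head (sinusitis vs. normal)
    # and load the weights learned from panoramic radiographs at institution A.
    model = models.vgg16(weights=None)
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    return model

def fine_tune(model, waters_train_loader, waters_val_loader, epochs=200, device="cuda"):
    """Continue training (transfer learning) on the small Waters' image dataset at institution B."""
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # illustrative settings
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for images, targets in waters_train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
        # A validation pass over waters_val_loader would go here to monitor overfitting.
    return model

# Example usage (hypothetical file and data loaders):
# target_model = fine_tune(load_source_model("source_panoramic_vgg16.pt"),
#                          waters_train_loader, waters_val_loader)
```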