ABSTRACT
The early detection of initial dental caries enables preventive treatment, and bitewing radiography is a good diagnostic tool for posterior initial caries. In medical imaging, the use of deep learning with convolutional neural networks (CNNs) to process various types of images has been actively researched, with promising performance. In this study, we developed a CNN model using a U-shaped deep CNN (U-Net) for caries detection on bitewing radiographs and investigated whether this model can improve clinicians' performance. The research complied with relevant ethical regulations. In total, 304 bitewing radiographs were used to train the CNN model, and 50 radiographs were used for performance evaluation. The diagnostic performance of the CNN model on the total test dataset was as follows: precision, 63.29%; recall, 65.02%; and F1-score, 64.14%, a moderately accurate result. When three dentists detected caries using the results of the CNN model as reference data, the overall diagnostic performance of all three clinicians significantly improved, as shown by an increased sensitivity ratio (D1, 85.34%; D1', 92.15%; D2, 85.86%; D2', 93.72%; D3, 69.11%; D3', 79.06%; p < 0.05). These increases were especially significant (p < 0.05) in the initial and moderate caries subgroups. The deep learning model may help clinicians to diagnose dental caries more accurately.
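The reported F1-score follows directly from the stated precision and recall as their harmonic mean; a minimal sketch in plain Python (no assumptions beyond the standard definition):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (works in percent or fractions)."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the CNN model on the total test dataset (percent).
precision = 63.29
recall = 65.02
print(round(f1_score(precision, recall), 2))  # → 64.14, matching the reported F1-score
```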
Subjects
Deep Learning; Dental Caries/diagnostic imaging; Dental Caries/diagnosis; Radiography, Bitewing; Humans; Neural Networks, Computer

ABSTRACT
In early gastric cancer (EGC), tumor invasion depth is an important factor for determining the treatment method. However, endoscopic ultrasonography has limitations when measuring the exact depth in a clinical setting, as endoscopists often depend on gross findings and personal experience. The present study aimed to develop a model optimized for EGC detection and depth prediction, and we investigated factors affecting artificial intelligence (AI) diagnosis. We employed a visual geometry group (VGG)-16 model for the classification of endoscopic images as EGC (T1a or T1b) or non-EGC. To induce the model to activate EGC regions during training, we proposed a novel loss function that simultaneously measured classification and localization errors. We experimented with 11,539 endoscopic images (896 T1a-EGC, 809 T1b-EGC, and 9,834 non-EGC). The areas under the curves of receiver operating characteristic curves for EGC detection and depth prediction were 0.981 and 0.851, respectively. Among the factors affecting AI prediction of tumor depth, only histologic differentiation was significantly associated, where undifferentiated-type histology exhibited a lower AI accuracy. Thus, the lesion-based model is an appropriate training method for AI in EGC. However, further improvements and validation are required, especially for undifferentiated-type histology.
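The abstract describes the novel loss only as simultaneously measuring classification and localization errors. A hypothetical sketch of such a combined objective in plain Python follows; the weighting factor `lam`, the lesion mask, and the use of a flattened activation map are illustrative assumptions, not the authors' exact formulation:

```python
import math

def cross_entropy(probs, label):
    """Classification term: negative log-likelihood of the true class."""
    return -math.log(probs[label])

def localization_penalty(activation_map, lesion_mask):
    """Localization term (assumed form): mean activation falling outside the
    annotated lesion region, pushing evidence onto the lesion itself."""
    outside = [a for a, m in zip(activation_map, lesion_mask) if m == 0]
    return sum(outside) / len(outside) if outside else 0.0

def combined_loss(probs, label, activation_map, lesion_mask, lam=0.5):
    """Classification error plus a weighted localization error."""
    return cross_entropy(probs, label) + lam * localization_penalty(
        activation_map, lesion_mask
    )

# Toy example: two-class output (non-EGC vs. EGC), flattened 4-pixel map.
probs = [0.2, 0.8]             # predicted class probabilities
act = [0.9, 0.1, 0.7, 0.2]     # flattened activation map
mask = [1, 0, 1, 0]            # 1 = pixel inside the annotated EGC region
loss = combined_loss(probs, 1, act, mask)
```

A loss of this shape lets ordinary backpropagation penalize both a wrong class label and activations that stray outside the lesion, which is one plausible way to "induce the model to activate EGC regions during training".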