Comparison of 3 deep learning neural networks for classifying the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs.
Fukuda, Motoki; Ariji, Yoshiko; Kise, Yoshitaka; Nozawa, Michihito; Kuwada, Chiaki; Funakoshi, Takuma; Muramatsu, Chisako; Fujita, Hiroshi; Katsumata, Akitoshi; Ariji, Eiichiro.
Affiliation
  • Fukuda M; Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan. Electronic address: halpop@dpc.agu.ac.jp.
  • Ariji Y; Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
  • Kise Y; Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
  • Nozawa M; Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
  • Kuwada C; Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
  • Funakoshi T; Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
  • Muramatsu C; Faculty of Data Science, Shiga University, Shiga, Japan.
  • Fujita H; Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan.
  • Katsumata A; Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan.
  • Ariji E; Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
Article in English | MEDLINE | ID: mdl-32444332
ABSTRACT

OBJECTIVE:

The aim of this study was to compare time and storage space requirements, diagnostic performance, and consistency among 3 image recognition convolutional neural networks (CNNs) in the evaluation of the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs.

STUDY DESIGN:

Of 600 panoramic radiographs, 300 each were assigned to noncontact and contact groups based on the relationship between the mandibular third molar and the mandibular canal. Each CNN was trained twice, using cropped image patches of 70 × 70 pixels and of 140 × 140 pixels. Time and storage space requirements were measured for each system. Accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were determined. Intra-CNN and inter-CNN consistency values were calculated.
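As a rough illustration of the patch-based evaluation described above, the sketch below crops square patches around an annotated point on each radiograph, classifies them as contact or noncontact, and computes accuracy, sensitivity, specificity, and AUC with scikit-learn. The crop helper, annotation format, threshold, and model interface are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the patch-based evaluation; helper names and the
    # model interface are illustrative assumptions, not the study's code.
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    def crop_patch(image, center, size):
        """Crop a size x size patch centered on (row, col); assumes the patch fits in the image."""
        half = size // 2
        r, c = center
        return image[r - half:r + half, c - half:c + half]

    def evaluate(model, images, centers, labels, patch_size=70):
        """Classify each patch (contact vs. noncontact) and report the four metrics."""
        patches = np.stack([crop_patch(img, ctr, patch_size)
                            for img, ctr in zip(images, centers)])
        probs = model.predict(patches)           # assumed to return P(contact)
        preds = (probs >= 0.5).astype(int)       # 0.5 threshold is an assumption
        tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
        return {
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "auc": roc_auc_score(labels, probs),
        }

The same routine would be run with patch_size=140 to compare the two patch sizes used in the study.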

RESULTS:

Time and storage space requirements depended on the depth of the CNN layers and the number of learned parameters, respectively. The highest AUC values, ranging from 0.88 to 0.93, were obtained by the CNNs trained with 70 × 70 pixel patches, and diagnostic performance did not differ significantly among the models trained with these smaller patches. Intra-CNN and inter-CNN consistency was good or very good for all CNNs.
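The abstract does not state which agreement statistic was used for the consistency values; the sketch below uses Cohen's kappa, a common choice for this kind of intra- and inter-observer comparison, applied to repeated outputs of one CNN and to outputs of two different CNNs on the same test images.

    # Illustrative consistency check; Cohen's kappa is an assumption,
    # not necessarily the statistic used in the study.
    from sklearn.metrics import cohen_kappa_score

    def intra_cnn_consistency(preds_run1, preds_run2):
        """Agreement between two training runs of the same CNN on the same test images."""
        return cohen_kappa_score(preds_run1, preds_run2)

    def inter_cnn_consistency(preds_cnn_a, preds_cnn_b):
        """Agreement between two different CNNs on the same test images."""
        return cohen_kappa_score(preds_cnn_a, preds_cnn_b)

    # One common interpretation (Altman): 0.61-0.80 "good", 0.81-1.00 "very good" agreement.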

CONCLUSIONS:

The size of the image patches should be chosen carefully to achieve high diagnostic performance and consistency.
Full text: 1 Database: MEDLINE Main subject: Deep Learning Type of study: Prognostic_studies Language: English Journal: Oral Surg Oral Med Oral Pathol Oral Radiol Year: 2020 Document type: Article
