Fully automated identification of cephalometric landmarks for upper airway assessment using cascaded convolutional neural networks.
Yoon, Hyun-Joo; Kim, Dong-Ryul; Gwon, Eunseo; Kim, Namkug; Baek, Seung-Hak; Ahn, Hyo-Won; Kim, Kyung-A; Kim, Su-Jung.
Affiliation
  • Yoon HJ; Department of Dentistry, Graduate School, Kyung Hee University, Seoul, Republic of Korea.
  • Kim DR; Department of Dentistry, Graduate School, Kyung Hee University, Seoul, Republic of Korea.
  • Gwon E; Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
  • Kim N; Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
  • Baek SH; Department of Radiology, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
  • Ahn HW; Department of Orthodontics, School of Dentistry, Seoul National University, Seoul, Republic of Korea.
  • Kim KA; Department of Orthodontics, School of Dentistry, Kyung Hee University, Seoul, Republic of Korea.
  • Kim SJ; Department of Orthodontics, School of Dentistry, Kyung Hee University, Seoul, Republic of Korea.
Eur J Orthod; 44(1): 66-77, 2022 Jan 25.
Article in En | MEDLINE | ID: mdl-34379120
ABSTRACT

OBJECTIVES:

The aim of this study was to evaluate the accuracy of a cascaded two-stage convolutional neural network (CNN) model in detecting upper airway (UA) soft tissue landmarks, in comparison with skeletal landmarks, on lateral cephalometric images.

MATERIALS AND METHODS:

The dataset contained 600 lateral cephalograms of adult orthodontic patients, and the ground-truth positions of 16 landmarks (7 skeletal and 9 UA landmarks) were obtained from a learning dataset of 500 images. We trained a UNet model with an EfficientNetB0 backbone through a region of interest-centred circular segmentation labelling process. Mean distance errors (MDEs, mm) of the CNN algorithm were compared with those of human examiners. Successful detection rates (SDRs, per cent), assessed within 1-4 mm precision ranges, were compared between skeletal and UA landmarks.
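The abstract does not give implementation details of the circular segmentation labelling, but the idea can be sketched as follows: each landmark coordinate is converted into a binary mask containing a filled circle centred on that point, and a predicted landmark is recovered as the centroid of the predicted mask. A minimal illustration (function names, image size, and circle radius are assumptions, not from the paper):

```python
import numpy as np

def circular_label(shape, center, radius):
    """Binary segmentation label: a filled circle of the given radius
    centred on the landmark coordinate (x, y). Illustrative only."""
    h, w = shape
    ys, xs = np.ogrid[:h, :w]
    cx, cy = center
    return ((xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2).astype(np.uint8)

def mask_to_landmark(mask):
    """Decode a predicted mask back to a point: the centroid of the
    foreground pixels, returned as (x, y)."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# A landmark at (30, 20) in a 64 x 64 patch, labelled with a radius-5 disc,
# decodes back to the same coordinate.
mask = circular_label((64, 64), (30, 20), 5)
point = mask_to_landmark(mask)
```

A segmentation network such as the UNet described above would then be trained to predict one such mask channel per landmark.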

RESULTS:

The proposed model achieved MDEs of 0.80 ± 0.55 mm for skeletal landmarks and 1.78 ± 1.21 mm for UA landmarks. The mean SDRs for UA landmarks were 72.22 per cent within the 2 mm range and 92.78 per cent within the 4 mm range, compared with 93.43 and 98.71 per cent, respectively, for skeletal landmarks. Relative to the mean interexaminer difference, however, the model showed higher detection accuracy for geometrically constructed UA landmarks on the nasopharynx (AD2 and Ss) but lower accuracy for anatomically located UA landmarks on the tongue (Td) and soft palate (Sb and St).
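The two evaluation metrics reported above are standard in cephalometric landmark detection: the MDE is the mean Euclidean distance between predicted and ground-truth points (in mm), and the SDR at a precision range r is the percentage of landmarks detected within r mm. A short sketch of how they are computed (the pixel spacing of 0.1 mm is an assumed example value, not from the paper):

```python
import numpy as np

def mde_sdr(pred, gt, pixel_mm=0.1, ranges=(1, 2, 3, 4)):
    """Mean distance error (mm) and successful detection rates (%)
    at the given precision ranges, from (N, 2) coordinate arrays."""
    # Euclidean distance per landmark, converted from pixels to mm.
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1) * pixel_mm
    sdr = {r: float((d <= r).mean() * 100) for r in ranges}
    return float(d.mean()), sdr

# Two landmarks: one detected exactly, one off by 50 pixels (5.0 mm).
mde, sdr = mde_sdr([[0, 0], [30, 40]], [[0, 0], [0, 0]], pixel_mm=0.1)
# mde is 2.5 mm; the SDR is 50 per cent at every range from 1 to 4 mm.
```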

CONCLUSION:

The proposed CNN model demonstrates the feasibility of automated cephalometric UA assessment that can be integrated with dentoskeletal and facial analysis.
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Neural Networks, Computer / Face Type of study: Diagnostic_studies Limits: Adult / Humans Language: En Journal: Eur J Orthod Year: 2022 Document type: Article