Three-Dimensional Postoperative Results Prediction for Orthognathic Surgery through Deep Learning-Based Alignment Network.
Jeong, Seung Hyun; Woo, Min Woo; Shin, Dong Sun; Yeom, Han Gyeol; Lim, Hun Jun; Kim, Bong Chul; Yun, Jong Pil.
Affiliation
  • Jeong SH; Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea.
  • Woo MW; Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea.
  • Shin DS; School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea.
  • Yeom HG; Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea.
  • Lim HJ; Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea.
  • Kim BC; Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea.
  • Yun JP; Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea.
J Pers Med; 12(6), 2022 Jun 18.
Article in En | MEDLINE | ID: mdl-35743782
To date, the diagnosis of dentofacial dysmorphosis has relied almost entirely on reference points, planes, and angles. This is time consuming, and it is also greatly influenced by the skill level of the practitioner. To solve this problem, we investigated whether deep neural networks could predict the postoperative results of orthognathic surgery without relying on reference points, planes, and angles. We used three-dimensional point cloud data of the skulls of 269 patients. The proposed method has two main stages. In step 1, the skull is divided into six parts by a segmentation network. In step 2, three-dimensional transformation parameters are predicted by an alignment network. The ground-truth transformation parameters are calculated with the iterative closest point (ICP) algorithm, which aligns each preoperative part of the skull to the corresponding postoperative part. We compared PointNet, PointNet++, and PointConv as the feature extractor of the alignment network. Moreover, we designed a new loss function that considers the distance error of the transformed points for better accuracy. The accuracy, mean intersection over union (mIoU), and Dice coefficient (DC) of the first segmentation network, which divides the skull into upper and lower parts, were 0.9998, 0.9994, and 0.9998, respectively. For the second segmentation network, which divides the lower part of the skull into five parts, they were 0.9949, 0.9900, and 0.9949, respectively. The mean absolute errors of the transverse, anterior-posterior, and vertical distances for part 2 (maxilla) were 0.765 mm, 1.455 mm, and 1.392 mm, respectively. For part 3 (mandible), they were 1.069 mm, 1.831 mm, and 1.375 mm, respectively, and for part 4 (chin), they were 1.913 mm, 2.340 mm, and 1.257 mm, respectively. With this method, postoperative results can be predicted simply by entering the point cloud data from computed tomography.
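The abstract describes two computational steps that lend themselves to a short illustration: computing ground-truth rigid transformations with ICP between corresponding preoperative and postoperative skull parts, and penalizing the distance error of transformed points. The sketch below is a minimal, hypothetical rendering of those two ideas, assuming Open3D for ICP and PyTorch for the loss; the function names, the 4x4 homogeneous parameterization, and the exact form of the loss are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch (assumed, not the authors' code): ICP ground truth with
# Open3D and an illustrative "transformed-point distance" loss in PyTorch.
import numpy as np
import open3d as o3d
import torch


def icp_ground_truth(pre_points: np.ndarray, post_points: np.ndarray,
                     threshold: float = 5.0) -> np.ndarray:
    """Estimate the 4x4 rigid transform aligning a preoperative skull part
    to the corresponding postoperative part with point-to-point ICP."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pre_points))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(post_points))
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # ground-truth transformation parameters


def transformed_point_distance_loss(pred_T: torch.Tensor,
                                    gt_T: torch.Tensor,
                                    points: torch.Tensor) -> torch.Tensor:
    """Mean distance between points moved by the predicted transform and by
    the ICP ground truth (one plausible form of the described loss).

    pred_T, gt_T: (B, 4, 4) homogeneous transforms
    points:       (B, N, 3) preoperative part point clouds
    """
    ones = torch.ones(*points.shape[:2], 1, device=points.device)
    homo = torch.cat([points, ones], dim=-1)             # (B, N, 4)
    pred = torch.einsum('bij,bnj->bni', pred_T, homo)[..., :3]
    gt = torch.einsum('bij,bnj->bni', gt_T, homo)[..., :3]
    return torch.linalg.norm(pred - gt, dim=-1).mean()   # mean point error
```

In this reading, the alignment network would regress the transformation parameters for each segmented part, and the loss would supervise them both through the parameter values and through where they actually move the points, which is consistent with the abstract's statement that the distance error of transformed points is considered.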
Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies / Risk_factors_studies Language: En Publication year: 2022 Document type: Article
