Results 1 - 3 of 3
1.
Angle Orthod ; 90(6): 823-830, 2020 11 01.
Article in English | MEDLINE | ID: mdl-33378507

ABSTRACT

OBJECTIVES: To determine the optimal quantity of learning data needed to develop artificial intelligence (AI) that can automatically identify cephalometric landmarks.

MATERIALS AND METHODS: A total of 2400 cephalograms were collected, and 80 landmarks were manually identified on each by a human examiner. Of these, 2200 images were chosen as the learning data to train AI; the remaining 200 images were used as the test data. A total of 24 combinations of learning-data quantity (50, 100, 200, 400, 800, 1600, and 2000 images, selected by random sampling without replacement) and number of detection targets per image (19, 40, and 80) were used in the AI training procedures. The training procedures were repeated four times, producing 96 different AIs. The accuracy of each AI was evaluated in terms of radial error.

RESULTS: AI accuracy increased linearly with the number of learning data sets on a logarithmic scale and decreased with increasing numbers of detection targets. To estimate the optimal quantity of learning data, a prediction model was built; at least 2300 sets of learning data appeared to be necessary to develop AI as accurate as human examiners.

CONCLUSIONS: A considerably large quantity of learning data was necessary to develop accurate AI. The present study may provide a basis for determining how much learning data is necessary when developing AI.
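The log-linear learning curve described above can be sketched as follows. The per-size error values and the 1.5 mm human benchmark below are assumed for illustration only; they are not taken from the study:

```python
import numpy as np

# Hypothetical mean radial errors (mm) for each learning-set size, chosen to
# mimic the reported trend: error falls linearly with log10(N).
n_samples = np.array([50, 100, 200, 400, 800, 1600, 2000])
mean_error = np.array([3.16, 2.86, 2.56, 2.26, 1.96, 1.66, 1.56])  # assumed

# Fit: error = a * log10(N) + b
a, b = np.polyfit(np.log10(n_samples), mean_error, 1)

# Extrapolate the dataset size at which the fitted error would reach an
# assumed human-examiner benchmark.
human_error = 1.5  # mm, assumed
n_needed = 10 ** ((human_error - b) / a)
```

With values shaped like these, the extrapolation lands near the ~2300-image estimate reported in the abstract; with real measurements, the same fit-and-invert step would give the study's prediction.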


Subject(s)
Artificial Intelligence , Deep Learning , Cephalometry , Humans , Radiography
2.
Angle Orthod ; 90(1): 69-76, 2020 01.
Article in English | MEDLINE | ID: mdl-31335162

ABSTRACT

OBJECTIVES: To compare detection patterns of 80 cephalometric landmarks identified by an automated identification system (AI), based on the recently proposed You-Only-Look-Once version 3 (YOLOv3) deep-learning method, with those identified by human examiners.

MATERIALS AND METHODS: The YOLOv3 algorithm was implemented with custom modifications and trained on 1028 cephalograms. A total of 80 landmarks, comprising 2 vertical reference points, 46 hard tissue landmarks, and 32 soft tissue landmarks, were identified. On the 283 test images, the same 80 landmarks were identified twice, by AI and by human examiners. Statistical analyses were conducted to detect whether any significant differences between AI and human examiners existed, and the influence of image factors on those differences was also investigated.

RESULTS: Upon repeated trials, AI always detected identical positions for each landmark, whereas human intraexaminer variability across repeated manual detections showed a detection error of 0.97 ± 1.03 mm. The mean detection error between AI and human was 1.46 ± 2.97 mm; the mean difference between human examiners was 1.50 ± 1.48 mm. In general, differences in detection error between AI and human examiners were less than 0.9 mm, which did not appear to be clinically significant.

CONCLUSIONS: AI identified cephalometric landmarks as accurately as human examiners did. AI may be a viable option for repeatedly identifying multiple cephalometric landmarks.
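The detection error compared here is the Euclidean distance between an AI-detected and a human-identified landmark position. A minimal sketch, using made-up coordinates for three landmarks:

```python
import numpy as np

# Hypothetical landmark coordinates (mm) on one cephalogram; rows are
# landmarks, columns are (x, y). Values are illustrative only.
human = np.array([[10.0, 20.0], [35.5, 12.0], [50.2, 44.1]])
ai    = np.array([[10.8, 20.6], [35.0, 13.1], [51.0, 44.0]])

# Per-landmark detection error: Euclidean (radial) distance between the
# AI-detected and human-identified positions.
radial_error = np.linalg.norm(ai - human, axis=1)

# Mean detection error across landmarks, the summary statistic the
# abstract reports (e.g., 1.46 mm between AI and human).
mean_error = radial_error.mean()
```

Averaging these per-landmark distances over all landmarks and test images yields the mean ± SD figures quoted in the abstract.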


Subject(s)
Algorithms , Anatomic Landmarks , Cephalometry , Automation , Humans , Radiography , Reproducibility of Results
3.
Angle Orthod ; 89(6): 903-909, 2019 11.
Article in English | MEDLINE | ID: mdl-31282738

ABSTRACT

OBJECTIVE: To compare the accuracy and computational efficiency of two of the latest deep-learning algorithms for automatic identification of cephalometric landmarks.

MATERIALS AND METHODS: A total of 1028 cephalometric radiographic images were selected as learning data to train the You-Only-Look-Once version 3 (YOLOv3) and Single Shot MultiBox Detector (SSD) methods, with 80 landmarks as detection targets per image. After the deep-learning process, the algorithms were tested on a new data set of 283 images. Accuracy was determined by measuring point-to-point error and success detection rate and was visualized with scattergrams. The computational time of both algorithms was also recorded.

RESULTS: The YOLOv3 algorithm outperformed SSD in accuracy for 38 of the 80 landmarks; the remaining 42 landmarks showed no statistically significant difference between YOLOv3 and SSD. Error plots of YOLOv3 showed not only a smaller error range but also a more isotropic tendency. The mean computational time per image was 0.05 seconds for YOLOv3 and 2.89 seconds for SSD. YOLOv3 showed approximately 5% higher accuracy than the top benchmarks in the literature.

CONCLUSIONS: Of the two deep-learning methods applied, YOLOv3 appeared more promising as a fully automated cephalometric landmark identification system for clinical practice.
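The success detection rate (SDR) used here is conventionally the fraction of landmarks whose point-to-point error falls within a clinical threshold (commonly 2 mm). A small sketch with assumed error values:

```python
import numpy as np

# Hypothetical point-to-point errors (mm) for eight detected landmarks;
# values are illustrative, not from the study.
errors = np.array([0.4, 1.1, 1.8, 2.3, 3.5, 0.9, 2.9, 4.2])

def success_detection_rate(errors, threshold_mm):
    """Fraction of landmarks whose error is within the given threshold."""
    return float((errors <= threshold_mm).mean())

sdr_2mm = success_detection_rate(errors, 2.0)  # → 0.5
```

Reporting SDR at several thresholds (e.g., 2, 2.5, 3, and 4 mm) is the usual way such benchmarks are compared, alongside the mean point-to-point error.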


Subject(s)
Algorithms , Cephalometry , Deep Learning , Reproducibility of Results