1.
Dentomaxillofac Radiol; 52(8): 20230065, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37869886

ABSTRACT

OBJECTIVES: To evaluate the reliability and reproducibility of an artificial intelligence (AI) software in identifying cephalometric points on lateral cephalometric radiographs under four brightness and contrast settings. METHODS AND MATERIALS: The brightness and contrast of 30 lateral cephalometric radiographs were adjusted to four different settings. The control examiner (ECont), the calibrated examiner (ECal), and the CEFBOT AI software (AIs) then each marked 19 cephalometric points on all radiographs. Reliability was assessed with a second analysis of the radiographs 15 days after the first. Statistical significance was set at p < 0.05. RESULTS: Reliability of landmark identification was excellent for the human examiners and the AIs regardless of the brightness and contrast setting (mean intraclass correlation coefficient > 0.89). When ECont and ECal were compared for reproducibility, more cephalometric points showed significant differences on the x-axis of the image with the highest contrast and the lowest brightness, namely N (p = 0.033), S (p = 0.030), Po (p < 0.001), and Pog' (p = 0.012). Between ECont and AIs, more cephalometric points likewise showed significant differences on the image with the highest contrast and the lowest brightness, namely N (p = 0.034), Or (p = 0.048), Po (p < 0.001), A (p = 0.042), Pog' (p = 0.004), Ll (p = 0.005), Ul (p < 0.001), and Sn (p = 0.001). CONCLUSIONS: Although the reliability of the AIs for cephalometric landmark identification was rated as excellent, low brightness and high contrast appeared to affect its reproducibility. The experienced human examiner did not show this loss of reproducibility; therefore, the AI software used in this study is an excellent auxiliary tool for cephalometric analysis but still depends on human supervision to be clinically reliable.


Subject(s)
Artificial Intelligence, Software, Humans, Reproducibility of Results, Radiography, Cephalometry/methods
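
For readers who want to see how the reliability statistic reported above is typically obtained, the following is a minimal sketch, not taken from the study, of computing a test-retest intraclass correlation coefficient for one landmark coordinate marked in two sessions. The ICC(2,1) form (two-way random effects, absolute agreement, single measure), the invented coordinate values, and all variable names are assumptions for illustration only.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1), Shrout & Fleiss: two-way random effects, absolute agreement,
    single measure. `ratings` is an (n subjects x k sessions) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()

    # Sums of squares from the two-way ANOVA decomposition (no replication)
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols

    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: x-coordinate (pixels) of one landmark on 30 radiographs,
# marked twice 15 days apart. Values are invented for illustration.
rng = np.random.default_rng(0)
session1 = rng.normal(120.0, 5.0, size=30)
session2 = session1 + rng.normal(0.0, 0.8, size=30)  # small test-retest noise
print(f"test-retest ICC(2,1) = {icc_2_1(np.column_stack([session1, session2])):.3f}")
```

With small test-retest noise, the resulting ICC approaches 1, which is the pattern the abstract describes as "excellent" reliability (mean ICC > 0.89).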
2.
Dentomaxillofac Radiol; 51(6): 20200548, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-33882247

ABSTRACT

OBJECTIVE: To assess the reliability of CEFBOT, an artificial intelligence (AI)-based cephalometry software, for cephalometric landmark annotation and for linear and angular measurements according to Arnett's analysis. METHODS: Thirty lateral cephalometric radiographs acquired with a Carestream CS 9000 3D unit (Carestream Health Inc., Rochester, NY) were used in this study. The 66 landmarks and the 10 selected linear and angular measurements of Arnett's analysis were identified on each radiograph by a trained human examiner (control) and by CEFBOT (RadioMemory Ltd., Belo Horizonte, Brazil). For both methods, landmark annotations and measurements were duplicated with an interval of 15 days between sessions, and the intraclass correlation coefficient (ICC) was calculated to determine reliability. The values obtained with the two methods were compared with an independent-samples t-test. RESULTS: CEFBOT was able to perform all but one of the 10 measurements. Of the nine measurements it performed, eight showed ICC values > 0.94, while the Frankfurt horizontal plane to true horizontal line (THL) angle showed the lowest reproducibility (human, ICC = 0.876; CEFBOT, ICC = 0.768). Measurements performed by the human examiner and by CEFBOT were not statistically different. CONCLUSION: Within the limitations of our methodology, we concluded that the AI contained in the CEFBOT software can be considered a promising tool for enhancing the capabilities of human radiologists.


Subject(s)
Artificial Intelligence, Software, Cephalometry/methods, Humans, Radiography, Reproducibility of Results
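
As a companion to the statistical comparison described in this abstract, here is a minimal sketch of how human and CEFBOT measurements could be compared with an independent-samples t-test in Python using SciPy. The measurement values and variable names are invented assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical data: one Arnett measurement (e.g., an angle in degrees)
# obtained on 30 radiographs by the human examiner and by CEFBOT.
# All values are invented for illustration only.
rng = np.random.default_rng(1)
human = rng.normal(87.0, 3.0, size=30)
cefbot = human + rng.normal(0.0, 1.0, size=30)  # software closely tracks the examiner

# Independent-samples t-test comparing the two sets of measurements
t_stat, p_value = stats.ttest_ind(human, cefbot)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```

A p-value above 0.05 corresponds to the abstract's finding that measurements by the human examiner and by CEFBOT were not statistically different.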