ABSTRACT
OBJECTIVE: The aim of this study was to evaluate whether fully automatic cephalometric analysis software based on artificial intelligence algorithms is as accurate as non-automated cephalometric analysis software for clinical diagnosis and research. MATERIALS AND METHODS: This retrospective archive study used lateral cephalometric radiographs taken from individuals aged 12-20 years. Cephalometric measurement data were first obtained from these radiographs by manual landmark marking with non-automated computer software (Dolphin 11.8). The same radiographs were then analysed with the fully automatic, artificial intelligence-based cephalometric analysis software OrthoDx™ (AI-Powered Orthodontic Imaging System, Phimentum) and WebCeph (Assemblecircle, Seoul, Korea); landmark marking and measurement were performed entirely by the software, without manual intervention by the researcher. RESULTS: The consistency tests showed a statistically significant, good level of consistency between Dolphin and OrthoDx™ and between Dolphin and WebCeph for angular measurements (ICC > 0.75, P < .01 for both comparisons). Consistency was weak for linear and soft tissue parameters in both software packages (ICC < 0.50, P < .05 for each), and the differences between measurements were statistically significantly different from 0. CONCLUSION: The results obtained from fully automatic, artificial intelligence-based cephalometric analysis software are similar to those of non-automated cephalometric analysis software, although there are differences in some parameters. To minimize the margin of error of artificial intelligence-based fully automatic cephalometric software, manual intervention by the observer is still needed.
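The agreement statistic in the abstract above is the intraclass correlation coefficient (ICC). As a hedged illustration only (the abstract does not state which ICC model was applied), the sketch below computes ICC(2,1), a common two-way absolute-agreement form, for a hypothetical set of angular measurements produced by two software packages; the patient values are invented for demonstration.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) array, e.g. one column per software package.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-software means

    # Mean squares from a two-way ANOVA without replication
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical SNA angle (degrees) measured by two packages for 5 patients
sna = np.array([[82.1, 81.7],
                [79.4, 80.0],
                [84.3, 83.9],
                [77.8, 78.5],
                [80.6, 80.2]])
print(f"ICC(2,1) = {icc_2_1(sna):.3f}")
```

With real data, one column would hold the Dolphin values and the other the corresponding OrthoDx™ or WebCeph values for the same patients; an ICC above 0.75 would correspond to the "good" consistency reported above.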
Subject(s)
Algorithms, Artificial Intelligence, Retrospective Studies, Software, Radiography, Cephalometry/methods, Reproducibility of Results
ABSTRACT
PURPOSE: An efficient physics-informed deep learning approach for extracting spinopelvic measures from X-ray images is introduced, and its performance is evaluated against manual annotations. METHODS: Two datasets, comprising a total of 1470 images, were collected to evaluate the model's performance. We propose a novel method of detecting landmarks as objects, incorporating their relationships as constraints (LanDet). Using this approach, we trained our deep learning model to extract five spine and pelvis measures: Sacrum Slope (SS), Pelvic Tilt (PT), Pelvic Incidence (PI), Lumbar Lordosis (LL), and Sagittal Vertical Axis (SVA). The results were compared to a manually labelled test dataset (GT) as well as to measures annotated separately by three surgeons. RESULTS: The LanDet model was evaluated on the two datasets separately and on an extended dataset combining both. The final accuracy for each measure, reported as Mean Absolute Error (MAE), Standard Deviation (SD), and Pearson correlation coefficient (R), was: SS: 3.7° (2.7°), R = 0.89; PT: 1.3° (1.1°), R = 0.98; PI: 4.2° (3.1°), R = 0.93; LL: 5.1° (6.4°), R = 0.83; SVA: 2.1 mm (1.9 mm), R = 0.96. The intraclass correlation coefficient (ICC) was used to assess model reliability and compare it against the surgeons. The model demonstrated better consistency with the surgeons than previously reported in the literature, with all ICC values above 0.88. CONCLUSION: The LanDet model exhibits competitive performance compared to the existing literature. The effectiveness of the physics-informed constraint method used in our landmark-detection-as-objects algorithm is highlighted. Furthermore, this novel approach addresses the limitations of heatmap-based anatomical landmark detection and mitigates mis-identification of similar or adjacent landmarks in place of the intended landmark.
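The accuracy figures above combine a mean absolute error, its standard deviation, and a Pearson correlation against the manually labelled reference. A minimal sketch of how such a per-measure report can be computed is shown below; the pelvic tilt values are invented for illustration and do not come from the paper.

```python
import numpy as np

def report_measure(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float, float]:
    """Return (MAE, SD of absolute error, Pearson R) for one spinopelvic measure."""
    abs_err = np.abs(pred - gt)
    mae = abs_err.mean()
    sd = abs_err.std(ddof=1)
    r = np.corrcoef(pred, gt)[0, 1]
    return mae, sd, r

# Hypothetical pelvic tilt predictions (degrees) vs. manually labelled ground truth
pt_pred = np.array([12.1, 18.4, 9.7, 22.9, 15.3])
pt_gt   = np.array([11.0, 19.2, 10.5, 21.8, 14.9])
mae, sd, r = report_measure(pt_pred, pt_gt)
print(f"PT: MAE={mae:.1f}° ({sd:.1f}°), R={r:.2f}")
```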
Subject(s)
Deep Learning, Lordosis, Humans, Reproducibility of Results, Sacrum/diagnostic imaging, Pelvis/diagnostic imaging, Lumbar Vertebrae/surgery
ABSTRACT
The localization and tracking of neurocranial landmarks are essential in modern medical procedures, e.g., transcranial magnetic stimulation (TMS). However, state-of-the-art treatments still rely on the manual identification of head targets and require setting retroreflective markers for tracking. This limits the applicability and scalability of TMS approaches, making them time-consuming, dependent on expensive hardware, and prone to errors when retroreflective markers drift from their initial position. To overcome these limitations, we propose a scalable method capable of inferring the position of points of interest on the scalp, e.g., the International 10-20 System's neurocranial landmarks. In contrast with existing approaches, our method does not require human intervention or markers; head landmarks are estimated by leveraging visible facial landmarks, optional head size measurements, and statistical head model priors. We validate the proposed approach on ground truth data from 1,150 subjects for whom 3D facial and head information is available; our technique achieves a localization RMSE of 2.56 mm on average, which is of the same order as that reported for high-end techniques in TMS. Our implementation is available at https://github.com/odedsc/ANLD.
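One plausible reading of the reported localization RMSE is the root-mean-square of the Euclidean distances between predicted and reference 3D landmark positions; the sketch below computes it under that assumption, with invented coordinates.

```python
import numpy as np

def localization_rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """RMSE of Euclidean distances between predicted and reference 3D landmarks.

    pred, gt: (n_landmarks, 3) arrays of coordinates in millimetres.
    """
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.sqrt((dists ** 2).mean()))

# Hypothetical scalp landmark positions (mm) for one subject
gt   = np.array([[0.0, 95.0, 60.0], [-70.0, 10.0, 55.0], [70.0, 10.0, 55.0]])
pred = gt + np.array([[1.5, -2.0, 0.8], [-1.1, 2.3, -1.7], [2.0, 0.5, 1.2]])
print(f"RMSE = {localization_rmse(pred, gt):.2f} mm")
```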
ABSTRACT
Shape analysis of infants' heads is crucial to diagnose cranial deformities and evaluate head growth. Currently available 3D imaging systems can be used to create 3D head models, facilitating head evaluation in clinical practice. However, manual analysis of 3D shapes is difficult and operator-dependent, causing inaccuracies in the analysis. This study aims to validate an automatic landmark detection method for head shape analysis. The detection results were compared with manual analysis at three levels: (1) distance error of landmarks; (2) accuracy of standard cranial measurements, namely cephalic ratio (CR), cranial vault asymmetry index (CVAI), and overall symmetry ratio (OSR); and (3) accuracy of the final diagnosis of cranial deformities. For each level, the intra- and interobserver variability was also studied by comparing manual landmark settings. The method achieved high landmark detection accuracy in 166 head models. Very strong agreement with manual analysis was also obtained for the cranial measurements, with intraclass correlation coefficients of 0.997, 0.961, and 0.771 for CR, CVAI, and OSR, respectively. Agreement with manual analysis reached 91% in the diagnosis of cranial deformities. Considering its high accuracy and reliability at the different evaluation levels, the method proved feasible for use in clinical practice for head shape analysis.
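For context, CR and CVAI are ratio measurements derived from a few cranial distances. The sketch below uses commonly cited definitions (head width over head length for CR, relative difference of the two oblique cranial diagonals for CVAI); the exact landmark and diagonal definitions used in the study are not given in the abstract, and the measurement values are invented.

```python
def cephalic_ratio(width_mm: float, length_mm: float) -> float:
    """Cephalic ratio (CR): head width as a percentage of head length."""
    return 100.0 * width_mm / length_mm

def cvai(diag_a_mm: float, diag_b_mm: float) -> float:
    """Cranial vault asymmetry index (CVAI): percentage difference between
    the two oblique cranial diagonals, relative to the shorter one."""
    longer, shorter = max(diag_a_mm, diag_b_mm), min(diag_a_mm, diag_b_mm)
    return 100.0 * (longer - shorter) / shorter

# Hypothetical distances measured between automatically detected landmarks
print(f"CR   = {cephalic_ratio(132.0, 158.0):.1f}")
print(f"CVAI = {cvai(148.0, 141.0):.1f}")
```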
Subject(s)
Three-Dimensional Imaging, Skull, Cephalometry/methods, Humans, Three-Dimensional Imaging/methods, Infant, Observer Variation, Reproducibility of Results, Skull/diagnostic imaging
ABSTRACT
The 3D musculoskeletal motion of animals is of interest for various biological studies and can be derived from X-ray fluoroscopy acquisitions by means of image matching or manual landmark annotation and mapping. While the image matching method requires a robust similarity measure (intensity-based) or an expensive computation (tomographic reconstruction-based), the manual annotation method depends on the experience of operators. In this paper, we tackle these challenges with an approach that consists of two building blocks: an automated 3D landmark extraction technique and a deep neural network for 2D landmark detection. For 3D landmark extraction, we propose a technique based on the shortest voxel coordinate variance to extract the 3D landmarks from the 3D tomographic reconstruction of an object. For 2D landmark detection, we propose a customized ResNet18-based neural network, BoneNet, to automatically detect geometrical landmarks in X-ray fluoroscopy images. With a deeper architecture than the original ResNet18 model, BoneNet can extract and propagate feature vectors for accurate 2D landmark inference. The 3D poses of the animal are then reconstructed by aligning the 2D landmarks extracted from the X-ray radiographs with the corresponding 3D landmarks in a 3D object reference model. Our proposed method is validated on X-ray images simulated from a real piglet hindlimb 3D computed tomography scan and does not require manual annotation of landmark positions. The simulation results show that BoneNet accurately detects the 2D landmarks in simulated, noisy 2D X-ray images, resulting in promising rigid and articulated parameter estimations.
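Aligning detected 2D landmarks with their 3D counterparts in a reference model to recover a rigid pose is commonly formulated as a Perspective-n-Point problem. The abstract does not specify the solver used, so the sketch below only illustrates the idea with OpenCV's generic solvePnP, invented landmark coordinates, and assumed pinhole intrinsics.

```python
import cv2
import numpy as np

# Hypothetical 3D landmarks (mm) in the reference model's coordinate frame.
object_points = np.array([[0.0, 0.0, 0.0],
                          [35.0, 5.0, 2.0],
                          [60.0, -8.0, 10.0],
                          [20.0, 30.0, -5.0],
                          [50.0, 25.0, 12.0],
                          [10.0, -20.0, 8.0]], dtype=np.float64)

# Hypothetical pinhole intrinsics approximating the imaging geometry.
K = np.array([[1500.0, 0.0, 512.0],
              [0.0, 1500.0, 512.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # no distortion assumed

# Simulate the 2D detections by projecting with a known ("true") pose;
# in practice these would come from the 2D landmark detector.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([5.0, -10.0, 600.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Recover the rigid pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print("recovered rotation (Rodrigues):", rvec.ravel())
print("recovered translation (mm):   ", tvec.ravel())
```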
ABSTRACT
Automatic detection of anatomical landmarks is an important step for a wide range of applications in medical image analysis. Manual annotation of landmarks is a tedious task and prone to observer errors. In this paper, we evaluate novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans. An artificial RL agent learns to identify the optimal path to the landmark by interacting with an environment, in our case 3D images. Furthermore, we investigate the use of fixed- and multi-scale search strategies with novel hierarchical action steps in a coarse-to-fine manner. Several deep Q-network (DQN) architectures are evaluated for detecting multiple landmarks using three different medical imaging datasets: fetal head ultrasound (US), adult brain magnetic resonance imaging (MRI), and cardiac MRI. The performance of our agents surpasses state-of-the-art supervised and RL methods. Our experiments also show that multi-scale search strategies perform significantly better than fixed-scale agents in images with a large field of view and a noisy background, such as cardiac MRI. Moreover, the novel hierarchical steps can significantly speed up the searching process, by a factor of 4-5.
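At inference time, a DQN landmark agent repeatedly observes a local patch around its current position and takes the highest-valued move. The sketch below is a minimal stand-in for that loop, with an assumed six-action space, a toy Q-network, and a random placeholder volume; the actual architectures, multi-scale scheduling, and hierarchical step sizes are defined in the paper, not here.

```python
import numpy as np
import torch
import torch.nn as nn

# Six discrete actions: +/- one voxel step along each axis (an assumption).
ACTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                    [0, 1, 0], [0, -1, 0],
                    [0, 0, 1], [0, 0, -1]])

class TinyQNet(nn.Module):
    """Stand-in Q-network scoring the six moves from a local 3D patch."""
    def __init__(self, patch: int = 15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3), nn.ReLU(),
            nn.Conv3d(8, 16, 3), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (patch - 4) ** 3, len(ACTIONS)),
        )

    def forward(self, x):
        return self.net(x)

def greedy_search(volume, qnet, start, steps=100, patch=15):
    """Follow the highest-valued action until the step budget is exhausted."""
    pos = np.array(start)
    half = patch // 2
    for _ in range(steps):
        lo, hi = pos - half, pos + half + 1
        crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        if crop.shape != (patch, patch, patch):
            break  # agent left the valid region
        state = torch.tensor(crop, dtype=torch.float32)[None, None]
        with torch.no_grad():
            action = int(qnet(state).argmax())
        pos = pos + ACTIONS[action]
    return pos

vol = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder image volume
print(greedy_search(vol, TinyQNet(), start=(32, 32, 32), steps=20))
```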
Subject(s)
Anatomic Landmarks, Brain/diagnostic imaging, Deep Learning, Head/diagnostic imaging, Heart/diagnostic imaging, Three-Dimensional Imaging/methods, Magnetic Resonance Imaging/methods, Adult, Female, Head/embryology, Humans, Pregnancy
ABSTRACT
PURPOSE: Cone-beam computed tomography (CBCT) is now an established component of 3D evaluation and treatment planning for patients with severe malocclusion and craniofacial deformities. Precise landmark plotting on 3D images for cephalometric analysis requires considerable effort and time, regardless of the operator's experience with landmark plotting, which raises the need to automate the process of 3D landmark plotting. Therefore, a knowledge-based algorithm for automatic detection of landmarks on 3D CBCT images was developed and tested. METHODS: A knowledge-based algorithm was developed in the MATLAB programming environment to detect 20 cephalometric landmarks. For the automatic detection, landmarks that are physically adjacent to each other were clustered into groups and were extracted through a volume of interest (VOI). Relevant contours were detected in the VOI and landmarks were detected using corresponding mathematical entities. The reference data for validation were generated by manual marking carried out by three orthodontists on a dataset of 30 CBCT images. RESULTS: Inter-observer ICC for manual landmark identification was excellent (>0.9) among the three observers. Euclidean distances between the coordinates of manual identification and automatic detection by the proposed algorithm were calculated for each landmark. The overall mean error for the proposed method was 2.01 mm with a standard deviation of 1.23 mm across all 20 landmarks. The overall landmark detection accuracy was 64.67%, 82.67%, and 90.33% within 2-, 3-, and 4-mm error ranges of the manual marking, respectively. CONCLUSIONS: The proposed knowledge-based algorithm for automatic detection of landmarks on 3D images was able to achieve relatively accurate results compared with currently available algorithms.
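The reported accuracy figures combine per-landmark Euclidean errors with the fraction of landmarks falling within 2-, 3-, and 4-mm of the manual reference. A minimal sketch of that evaluation is shown below; the coordinates are invented for illustration.

```python
import numpy as np

def detection_report(auto_pts: np.ndarray, manual_pts: np.ndarray):
    """Euclidean errors (mm) and success rates within 2/3/4 mm thresholds.

    auto_pts, manual_pts: (n_landmarks, 3) coordinate arrays in mm.
    """
    errors = np.linalg.norm(auto_pts - manual_pts, axis=1)
    rates = {t: 100.0 * (errors <= t).mean() for t in (2.0, 3.0, 4.0)}
    return errors.mean(), errors.std(ddof=1), rates

# Hypothetical coordinates for a handful of landmarks on one CBCT scan
manual = np.array([[10.2, 44.1, 30.5], [55.0, 60.3, 28.8], [33.7, 12.9, 47.1]])
auto = manual + np.array([[1.1, -0.7, 0.4], [2.6, 1.9, -0.8], [-0.9, 0.3, 1.5]])
mean_err, sd_err, rates = detection_report(auto, manual)
print(f"mean error {mean_err:.2f} mm (SD {sd_err:.2f}); "
      + ", ".join(f"<= {t:.0f} mm: {r:.0f}%" for t, r in rates.items()))
```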