ABSTRACT
Robotic arms are widely used across industries and offer cost savings, high productivity, and efficiency. Although they excel at repetitive tasks, they still need to be re-programmed and optimized whenever a new task is deployed, resulting in costly downtime. This paper therefore presents a learning from demonstration (LfD) robotic system that provides a more intuitive way for robots to perform tasks efficiently by learning from human demonstration, built on two major components: understanding the human demonstration and reproduction by the robot arm. To understand the human demonstration, we propose a vision-based spatial-temporal action detection method that detects human actions in real time, with a focus on fine hand movements, to establish an action base. An object trajectory inductive method is then proposed to obtain a key path for the objects manipulated by the human across multiple demonstrations. For robot reproduction, we integrate the sequence of actions in the action base with the key path derived by the object trajectory inductive method for motion planning, so that the robot reproduces the task demonstrated by the human user. Owing to this capability of learning from demonstration, the robot can use vision sensors to reproduce the demonstrated tasks in unseen contexts.
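The abstract does not spell out how the object trajectory inductive method derives the key path. As one minimal sketch, assuming each demonstration is recorded as a sequence of 3-D object positions, a key path could be formed by resampling the demonstrations onto a common, normalized time axis and averaging them; the function names, waypoint count, and averaging scheme below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: induce a "key path" from several demonstrated object
# trajectories by resampling each to a common length and averaging.
# Names and the waypoint count are illustrative, not from the paper.
import numpy as np

def induce_key_path(demos, n_waypoints=50):
    """demos: list of (T_i, 3) arrays of object positions, one per demonstration."""
    resampled = []
    for demo in demos:
        t_old = np.linspace(0.0, 1.0, len(demo))
        t_new = np.linspace(0.0, 1.0, n_waypoints)
        # Interpolate each coordinate onto the common, normalized time axis.
        resampled.append(
            np.stack([np.interp(t_new, t_old, demo[:, k]) for k in range(3)], axis=1)
        )
    return np.mean(resampled, axis=0)  # (n_waypoints, 3) key path

# Usage: key_path = induce_key_path([demo_a, demo_b, demo_c])
# The key path would then be handed to the motion planner together with the
# detected action sequence from the action base.
```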
Subject(s)
Robotics, Humans, Motion (Physics), Movement, Upper Extremity, Ocular Vision
ABSTRACT
OBJECTIVES: This study aimed to use a deep learning (DL) approach for automatic identification of ridge deficiency around dental implants from an image slice of cone-beam computed tomography (CBCT). MATERIALS AND METHODS: Single slices crossing the central long axis of 630 mandibular and 845 maxillary virtually placed implants (4-5 mm diameter, 10 mm length) in 412 patients were used. The ridges were classified based on the intraoral bone-implant support and the sinus floor location. The slices were either preprocessed by alveolar ridge homogenization prior to DL (preprocessed) or left unpreprocessed. A convolutional neural network with the ResNet-50 architecture was employed for DL. RESULTS: The model achieved an accuracy of >98.5% on the unpreprocessed image slices, which was superior to the accuracy observed on the preprocessed slices. On the mandible, model accuracy was 98.91 ± 1.45%, and the F1 score, a measure of a model's accuracy in binary classification tasks, was lowest (97.30%) for the ridge with a combined horizontal-vertical defect. On the maxilla, model accuracy was 98.82 ± 1.11%, and the ridge presenting an implant collar-sinus floor distance of 5-10 mm with a dehiscence defect had the lowest F1 score (95.86%). To achieve >90% model accuracy, ≥441 mandibular slices or ≥592 maxillary slices were required. CONCLUSIONS: Ridge deficiency around dental implants can be identified by DL from CBCT image slices without preprocessing by homogenization. The model can be further strengthened by incorporating more clinical expertise in dental implant treatment planning and by using multiple slices to classify the 3-dimensional implant-ridge relationship.
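The abstract names the ResNet-50 architecture but gives no implementation details. The sketch below shows one way such a slice classifier could be set up with PyTorch and torchvision; the number of ridge classes, the three-channel input handling, and the optimizer settings are assumptions for illustration, not details taken from the study.

```python
# Hedged sketch of a ResNet-50 ridge-deficiency classifier (assumed setup).
import torch
import torch.nn as nn
from torchvision import models

NUM_RIDGE_CLASSES = 6  # assumed number of ridge-deficiency categories

# Standard torchvision ResNet-50 with its final layer replaced for this task.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_RIDGE_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (B, 3, H, W) CBCT slices replicated to 3 channels; labels: (B,)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```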