ABSTRACT
In recent years, the integration of deep learning into robotic grasping algorithms has led to significant advances in the field. However, many existing deep learning-based grasping algorithms rely on extensive training data, which makes them less effective when they encounter unknown objects absent from the training dataset. This paper presents a simple and effective grasping algorithm that addresses this challenge by using a deep learning-based object detector for oriented detection of key features shared by most objects, namely straight edges and corners. By combining these features with information obtained through image segmentation, the proposed algorithm can logically deduce a grasping pose without being limited by the size of the training dataset. In experiments on actual robotic grasping of unknown objects over 400 trials, the proposed method achieves a grasp success rate of 98.25%, higher than that of existing methods.