ABSTRACT
Dragon fruit is one of the most popular fruits in China and Southeast Asia. However, it is still mainly picked manually, imposing high labor intensity on farmers. The hard branches and complex postures of dragon fruit make automated picking difficult. To pick dragon fruits with diverse postures, this paper proposes a new dragon fruit detection method that not only identifies and locates the fruit, but also detects the endpoints at its head and root, providing richer visual information for a dragon fruit picking robot. First, YOLOv7 is used to locate and classify the dragon fruit. Then, we propose a PSP-Ellipse method to further detect the endpoints of the dragon fruit, comprising dragon fruit segmentation via PSPNet, endpoint positioning via an ellipse-fitting algorithm (see the sketch below) and endpoint classification via ResNet. Experiments were conducted to test the proposed method. In dragon fruit detection, the precision, recall and average precision of YOLOv7 are 0.844, 0.924 and 0.932, respectively, and YOLOv7 outperforms several other models. In dragon fruit segmentation, PSPNet performs better than other commonly used semantic segmentation models, with a segmentation precision, recall and mean intersection over union of 0.959, 0.943 and 0.906, respectively. In endpoint detection, the distance error and angle error of endpoint positioning based on ellipse fitting are 39.8 pixels and 4.3°, and the classification accuracy of endpoints based on ResNet is 0.92. The proposed PSP-Ellipse method improves substantially on two keypoint regression methods based on ResNet and UNet. Orchard picking experiments verified that the proposed method is effective. The detection method proposed in this paper not only advances the automatic picking of dragon fruit, but also provides a reference for the detection of other fruits.
Subject(s)
Algorithms, Fruits, China
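As an illustration of the endpoint-positioning step in the abstract above, here is a minimal sketch using OpenCV's cv2.fitEllipse on a binary fruit mask such as PSPNet would produce. The helper name and the axis-direction convention are assumptions for illustration, not the paper's implementation; labeling the two returned points as head or root would be done by the separate ResNet classifier.

```python
import cv2
import numpy as np

def ellipse_endpoints(mask):
    """Return the two major-axis endpoints of an ellipse fitted to a
    binary fruit mask (hypothetical helper; head/root labeling of the
    two points is left to a separate classifier)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)       # largest blob = fruit
    (cx, cy), (w, h), angle = cv2.fitEllipse(contour)  # rotated-rect form
    theta = np.deg2rad(angle)
    if w >= h:                    # pick the longer axis as the major axis
        half, ux, uy = w / 2, np.cos(theta), np.sin(theta)
    else:
        half, ux, uy = h / 2, -np.sin(theta), np.cos(theta)
    p1 = (cx + half * ux, cy + half * uy)
    p2 = (cx - half * ux, cy - half * uy)
    return p1, p2
```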
ABSTRACT
The target recognition algorithm is one of the core technologies of Zanthoxylum pepper-picking robots. However, most existing detection algorithms cannot effectively detect Zanthoxylum fruit covered by branches, leaves and other fruits in natural scenes. To improve the work efficiency and adaptability of the Zanthoxylum-picking robot in natural environments, and to recognize and detect fruits in complex environments under different lighting conditions, this paper presents a target detection method for Zanthoxylum-picking robots based on an improved YOLOv5s. Firstly, an improved CBF module based on the CBH module in the backbone is proposed to improve detection accuracy (see the sketch below). Secondly, the Specter module based on CBF is presented to replace the bottleneck CSP module, which improves detection speed with a lightweight structure. Finally, the Zanthoxylum fruit detector is evaluated within the improved YOLOv5 framework, and the differences in detection among YOLOv3, YOLOv4 and YOLOv5 are analyzed and evaluated. With these improvements, the recall rate, recognition accuracy and mAP of the improved YOLOv5s are 4.19%, 28.7% and 14.8% higher than those of the original YOLOv5s, YOLOv3 and YOLOv4 models, respectively. Furthermore, the model is transferred to the robot's computing platform, an NVIDIA Jetson TX2 device. Several experiments on the TX2 yield an average inference time of 0.072 s, with an average GPU load over 30 s of 20.11%. This method can provide technical support for pepper-picking robots to detect multiple pepper fruits in real time.
Subject(s)
Robotics, Zanthoxylum, Algorithms, Environment, Research Projects
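The abstract above does not spell out the internals of the CBF module, so the following is only a generic PyTorch sketch of the conv-batchnorm-activation pattern that CBH/CBF-style blocks follow; the block name and the LeakyReLU activation are placeholder assumptions, not the paper's design.

```python
import torch.nn as nn

class ConvBnAct(nn.Module):
    """Conv + BatchNorm + activation block in the spirit of the CBH/CBF
    modules described above (the exact activation used in CBF is not
    given in the abstract, so LeakyReLU is a placeholder assumption)."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```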
ABSTRACT
Currently, the kiwifruit picking process relies mainly on manual labor, which has low productivity and high labor intensity; meanwhile, existing kiwifruit picking machinery also has low picking efficiency and easily damages fruit. In this regard, a kiwifruit picking robot suitable for orchard operations was developed in this paper for kiwifruit grown on orchard trellises. First, based on an analysis of kiwifruit growth patterns and cultivation parameters, the design requirements and objectives of a kiwifruit picking robot were proposed, and the expected workflow of the robot in the kiwifruit orchard environment was given, which in turn led to the design of a multi-fruit envelope-cutting kiwifruit picking robot. Then, the D-H method was used to establish the kinematic equations of the kiwifruit-picking robot, forward and inverse kinematic calculations were carried out, and the Monte Carlo method was used to analyze the robot's workspace (see the sketch below). By planning the trajectory of the robotic arm and calculating the critical nodes in the picking path, a trajectory planning scheme for the robot was given, and MATLAB was used to simulate the motion trajectory and to verify the feasibility of the trajectory planning scheme and the picking strategy. Finally, a kiwifruit picking test bed was set up to conduct picking tests on fruit clusters. The results show that the average time to pick each cluster of fruit was 9.7 s, the picking success rate was 88.0%, and the picking damage rate was 7.3%. All indicators met the design requirements of the kiwifruit-picking robot.
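The Monte Carlo workspace analysis mentioned above amounts to sampling random joint configurations within their limits and collecting the resulting end-effector positions from the D-H forward kinematics. Below is a minimal sketch; the D-H table and joint limits are hypothetical placeholders, since the abstract does not give the robot's actual parameters.

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

# Hypothetical D-H parameters (d, a, alpha) and joint limits, for illustration only.
DH = [(0.20, 0.0, np.pi / 2), (0.0, 0.40, 0.0), (0.0, 0.35, 0.0)]
LIMITS = [(-np.pi, np.pi), (-np.pi / 2, np.pi / 2), (-np.pi / 2, np.pi / 2)]

def monte_carlo_workspace(n=10000, seed=0):
    """Sample random joint configurations and collect end-effector points;
    the resulting point cloud approximates the reachable workspace."""
    rng = np.random.default_rng(seed)
    pts = np.empty((n, 3))
    for i in range(n):
        T = np.eye(4)
        for (d, a, alpha), (lo, hi) in zip(DH, LIMITS):
            T = T @ dh(rng.uniform(lo, hi), d, a, alpha)
        pts[i] = T[:3, 3]                  # end-effector position
    return pts
```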
ABSTRACT
A cotton-picking robot needs to locate the target object in space while picking in fields and other complex outdoor environments with strong light. The difficulty in this process lies in binocular matching. Therefore, this paper proposes an accurate and fast binocular matching method, which uses a deep learning model to obtain the position and shape of the target object and then matches the target with the matching equation proposed in this paper. The matching precision of this method for cotton was much higher than that of similar algorithms: 54.11%, 45.37%, 6.15% and 12.21% higher than block matching (BM), semi-global block matching (SGBM), pyramid stereo matching network (PSMNet) and geometry and context for deep stereo regression (GC-Net), respectively, and its speed was also the fastest. Using this new matching method, the cotton was matched and located in space. Experimental results show the effectiveness and feasibility of the algorithm.
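Once left and right detections of the same cotton boll are matched, its spatial position follows from standard rectified-stereo triangulation. The sketch below shows that textbook disparity relation, not the paper's own matching equation, which the abstract does not reproduce; all parameter names are illustrative.

```python
import numpy as np

def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Recover a 3D point (camera frame) from a matched pixel pair in a
    rectified stereo rig, via the standard relation Z = f * B / d."""
    d = u_left - u_right          # disparity in pixels
    Z = f * baseline / d          # depth from disparity
    X = (u_left - cx) * Z / f     # back-project through the left camera
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])
```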
ABSTRACT
Visual recognition is the most critical function of a harvesting robot, and the accuracy of the harvesting action depends on the performance of visual recognition. However, unstructured environments, with severe occlusion, fruit overlap, illumination changes, complex backgrounds and even heavy fog, pose a series of serious challenges to the detection accuracy of recognition algorithms. Hence, this paper proposes an improved YOLO v4 model, called YOLO v4+, to cope with the challenges posed by unstructured environments. The output of each Resblock_body in the backbone is processed with a simple, parameter-free attention mechanism for full-dimensional refinement of the extracted features. Further, to alleviate the loss of feature information, a multi-scale feature fusion module with fusion weights and a skip-connection structure is proposed. In addition, the focal loss function is adopted, with the hyperparameters α and γ set to 0.75 and 2 (see the sketch below). The experimental results show that the average precision of the YOLO v4+ model is 94.25% and the F1 score is 93%, which are 3.35% and 3% higher than those of the original YOLO v4, respectively. Compared with several state-of-the-art detection models, YOLO v4+ not only has the strongest overall performance but also generalizes better. Selecting an augmentation method suited to the specific working condition can greatly improve model detection accuracy. Applying the proposed method to harvesting robots may enhance the applicability and robustness of the robotic system.
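The focal loss with the hyperparameters stated above can be written down directly. Below is a minimal PyTorch sketch of the binary form with α = 0.75 and γ = 2; how it is wired into the YOLO v4+ heads is not specified in the abstract, so this stands alone as the loss itself.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.75, gamma=2.0):
    """Binary focal loss with the hyperparameters reported in the
    abstract (alpha = 0.75, gamma = 2)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)            # prob. of true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()     # down-weights easy examples
```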
ABSTRACT
Typical occlusion of cherry tomatoes in the natural environment is one of the most critical factors affecting accurate picking by cherry tomato picking robots. To recognize occluded cherry tomatoes accurately and efficiently using deep convolutional neural networks, a new occluded cherry tomato recognition model, DSP-YOLOv7-CA, is proposed. Firstly, images of cherry tomatoes with different degrees of occlusion are acquired, four occlusion areas and four occlusion methods are defined, and a cherry tomato dataset (TOSL) is constructed. Then, based on YOLOv7, the convolution modules on the original residual edges are replaced with null residual edges, depthwise-separable convolutional layers are added, and skip connections are added to reuse feature information. Next, a depthwise-separable convolutional layer is added to the SPPF module, which has fewer parameters, to replace the original SPPCSPC module and so address the loss of small-target information in the different pooled residual layers. Finally, a coordinate attention (CA) layer is introduced at a critical position in the enhanced feature extraction network to strengthen attention to occluded cherry tomatoes (see the sketch below). The experimental results show that the DSP-YOLOv7-CA model outperforms other target detection models, with a mean average precision (mAP) of 98.86%, and the number of model parameters is reduced from 37.62 MB to 33.71 MB; it also performs better in the actual detection of cherry tomatoes with less than 95% occlusion. Results were comparatively average for cherry tomatoes with an occlusion level higher than 95%, but such fruits are not targeted for picking. The DSP-YOLOv7-CA model can accurately recognize occluded cherry tomatoes in the natural environment, providing an effective solution for the accurate picking of cherry tomatoes by picking robots.
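Coordinate attention is a published module (Hou et al., 2021), so a minimal PyTorch sketch of such a block is given below; the reduction ratio and pooling details are illustrative assumptions rather than the exact DSP-YOLOv7-CA configuration. The direction-aware pooling is what lets the block encode where along each row and column the occluded fruit lies.

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Minimal coordinate attention block: pool along H and W separately,
    mix through a shared 1x1 conv, then re-weight the input per row and
    per column (reduction ratio is an illustrative choice)."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Direction-aware pooling: one descriptor per row, one per column.
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # row attention
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # column attention
        return x * a_h * a_w
```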
ABSTRACT
Aiming at the stability of hand-eye calibration in fruit-picking scenes, a simple hand-eye calibration method for picking robots, based on optimization and a TOF (Time of Flight) depth camera, is proposed. The method fixes the TOF depth camera at the end of the robot, operates the robot to photograph the calibration board from different poses, and records the current photographing poses, ensuring that each group of pictures is clear and complete, so that the TOF depth camera can image the calibration board. Multiple sets of calibration-board depth maps and corresponding point cloud data are thus obtained, constituting the "eye" data. A circle-center extraction and positioning algorithm extracts the circle-center points on each group of calibration plates, and a circle-center sorting method based on vector angles and centroid coordinates is designed to resolve the circle-center ordering errors caused by factors such as lens distortion, uneven illumination and different photographing poses. Then, through the tool center point of the end-effector, the coordinates of the circle-center points at the four corners of each calibration plate are located in turn in the robot end coordinate system, yielding the "hand" data. The hand-eye transformation is solved with the SVD method (see the sketch below); according to the resulting point residuals, the weight coefficients of the marker points are redistributed and the hand-eye parameters are iteratively optimized, which improves the accuracy and stability of the hand-eye calibration. The proposed method is also better able to locate gross errors in environments with large gross errors. To verify the feasibility of the hand-eye calibration method, an indoor picking experiment was simulated in which peaches were identified and positioned by combining deep learning and 3D vision. A JAKA six-axis robot and a TuYang depth camera were used to build the experimental platform. The experimental results show that the method is simple to operate and stable, that the calibration board is easy to manufacture and low in cost, and that the error between the actual and calculated coordinates of the peach meets the work accuracy requirements.
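The SVD step for recovering a rigid transform between corresponding "eye" and "hand" point sets is the classical Arun/Kabsch solution. Below is a minimal weighted sketch; the function name and weighting interface are assumptions, but the math is the standard least-squares solution that an iterative residual-based reweighting scheme, like the one described above, would call repeatedly.

```python
import numpy as np

def rigid_transform_svd(P, Q, w=None):
    """Least-squares rigid transform with Q ≈ R @ P + t, solved via SVD.
    P, Q: (n, 3) corresponding points (e.g. camera-frame and robot-frame
    circle centers); w: optional per-point weights."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.ones(len(P)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    p_bar, q_bar = w @ P, w @ Q                     # weighted centroids
    H = (P - p_bar).T @ np.diag(w) @ (Q - q_bar)    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation, det(R) = +1
    t = q_bar - R @ p_bar
    return R, t
```

In an iterative reweighting loop, each outer iteration would recompute the residuals ||R @ p_i + t - q_i||, shrink the weights of high-residual marker points, and re-solve, which is one way gross errors can be located and suppressed.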