Results 1 - 4 of 4
1.
Sensors (Basel) ; 23(15)2023 Aug 06.
Article in English | MEDLINE | ID: mdl-37571770

ABSTRACT

The precise localization of unmanned ground vehicles (UGVs) in industrial parks without prior GPS measurements presents a significant challenge. Simultaneous localization and mapping (SLAM) techniques can address this challenge by capturing environmental features with onboard sensors for real-time UGV localization. To increase the real-time localization accuracy and efficiency of UGVs and to improve the robustness of their odometry within industrial parks, thereby addressing motion control discontinuity and odometry drift, this paper proposes a tightly coupled LiDAR-IMU odometry method based on FAST-LIO2 that integrates ground constraints and a novel feature extraction method. A novel prior-map maintenance method is also proposed. The front-end module acquires the prior pose of the UGV by combining relocation detection and correction with point cloud registration. The proposed maintenance method then segments the prior maps hierarchically and by partition and maintains them in real time. At the back-end, real-time localization is achieved by the proposed tightly coupled LiDAR-IMU odometry incorporating ground constraints. Furthermore, a feature extraction method based on a bidirectional-projection plane slope difference filter is proposed, enabling efficient and accurate extraction of edge, planar and ground points. Finally, the proposed method is evaluated on self-collected datasets from industrial parks and on the KITTI dataset. Compared to FAST-LIO2 and to FAST-LIO2 with curvature-based feature extraction, the proposed method improved odometry accuracy by 30.19% and 48.24% on the KITTI dataset, and odometry efficiency by 56.72% and 40.06%, respectively. When leveraging prior maps, the UGV achieved centimeter-level localization accuracy. On the self-collected datasets, localization accuracy improved by 46.367% over FAST-LIO2 and localization efficiency by 32.33%, while the z-axis localization accuracy reached millimeter level. The proposed prior-map maintenance method reduced RAM usage by 64% compared to traditional methods.
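To illustrate the kind of feature extraction described in this abstract, the following minimal Python sketch classifies points along a single LiDAR scan line as ground, planar or edge points using the difference between forward and backward slopes, loosely in the spirit of a bidirectional slope-difference filter. The thresholds, the ground-height prior and the function name classify_scan_line are assumptions for illustration only, not the paper's implementation.

# Illustrative sketch (not the paper's code): label points on one ordered LiDAR
# scan line as ground / planar / edge from forward vs. backward slope differences.
import numpy as np

def classify_scan_line(points, ground_z=-1.5, slope_thresh=0.1, edge_thresh=0.5):
    """points: (N, 3) array ordered along the scan line."""
    labels = np.full(len(points), "planar", dtype=object)
    diffs = np.diff(points, axis=0)                      # steps between neighbors
    horiz = np.linalg.norm(diffs[:, :2], axis=1) + 1e-9  # horizontal spacing
    slope = diffs[:, 2] / horiz                          # rise over run
    fwd = np.concatenate([slope, [slope[-1]]])           # slope to the next point
    bwd = np.concatenate([[slope[0]], slope])            # slope from the previous point
    slope_diff = np.abs(fwd - bwd)                       # bidirectional difference
    # Flat points near the assumed ground height are labelled ground.
    flat = (np.abs(fwd) < slope_thresh) & (np.abs(bwd) < slope_thresh)
    labels[flat & (points[:, 2] < ground_z + 0.2)] = "ground"
    # Large slope discontinuities indicate edge (corner) points.
    labels[slope_diff > edge_thresh] = "edge"
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = np.linspace(0.0, 20.0, 200)
    pts = np.stack([xs, np.zeros_like(xs),
                    -1.5 + 0.01 * rng.standard_normal(200)], axis=1)
    pts[100:, 2] += 1.0  # a height step creates edge and non-ground planar points
    print(np.unique(classify_scan_line(pts), return_counts=True))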

2.
Sensors (Basel) ; 23(13)2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37447680

ABSTRACT

This article proposes a CBAM-ASPP-SqueezeNet model based on the attention mechanism and atrous spatial pyramid pooling (CBAM-ASPP) to solve the problem of robot multi-target grasping detection. Firstly, the paper establishes and expands a multi-target grasping dataset, and introduces transfer learning to pre-train the network on a single-target dataset and then fine-tune the model parameters on the multi-target dataset. Secondly, the SqueezeNet model is optimized and improved with the attention mechanism and the atrous spatial pyramid pooling module: the attention network weights the transmitted feature map in the channel and spatial dimensions, while parallel atrous convolutions with different atrous rates enlarge the receptive field and preserve features at different ranges. Finally, the CBAM-ASPP-SqueezeNet algorithm is verified on the self-constructed multi-target grasping dataset. With transfer learning, the various indicators converge after 20 training epochs. In physical grasping experiments conducted with the Kinova and SIASUN arms, a grasping success rate of 93% was achieved.
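The following minimal PyTorch sketch shows how a CBAM-style attention block and an ASPP module could be attached to SqueezeNet features for grasp detection. The channel counts, atrous rates and the five-parameter output head are assumptions made for illustration; the authors' actual architecture and training code are not reproduced here.

# Minimal sketch (assumptions, not the authors' model) of CBAM + ASPP on SqueezeNet.
import torch
import torch.nn as nn
import torchvision

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx)[:, :, None, None]
        # Spatial attention: convolution over channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch=128, rates=(1, 6, 12, 18)):
        super().__init__()
        # Parallel atrous convolutions with different rates enlarge the receptive field.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class CBAMASPPSqueezeNet(nn.Module):
    def __init__(self, num_outputs=5):  # assumed grasp parameters, e.g. (x, y, angle, width, score)
        super().__init__()
        self.backbone = torchvision.models.squeezenet1_1(weights=None).features  # untrained for the sketch
        self.cbam = CBAM(512)
        self.aspp = ASPP(512)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, num_outputs))

    def forward(self, x):
        return self.head(self.aspp(self.cbam(self.backbone(x))))

print(CBAMASPPSqueezeNet()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 5])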


Subject(s)
Algorithms, Learning, Technology, Machine Learning
3.
Entropy (Basel) ; 25(4)2023 Apr 03.
Article in English | MEDLINE | ID: mdl-37190398

ABSTRACT

To address the time-optimal trajectory planning (TOTP) problem with joint jerk constraints in a Cartesian coordinate system, we propose a time-optimal path-parameterization (TOPP) algorithm based on nonlinear optimization. The key insight of our approach is a comprehensive and effective iterative optimization framework for solving the optimal control problem (OCP) formulation of the TOTP problem in the (s, ṡ) phase plane. In particular, we identify two major difficulties: establishing TOPP in Cartesian space satisfying third-order constraints in joint space, and finding an efficient computational solution to TOPP, which includes nonlinear constraints. Experimental results demonstrate that the proposed method is an effective solution for time-optimal trajectory planning with joint jerk limits, and can be applied to a wide range of robotic systems.
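To make the phase-plane formulation concrete, the following sketch poses a simplified TOPP problem as a nonlinear program over the squared path velocity b(s) = ṡ², solved with SciPy's SLSQP. The discretization, the velocity and acceleration bounds, and the cost expression are assumptions; unlike the paper's method, this toy version omits the third-order (jerk) constraints in joint space.

# Toy TOPP-as-NLP sketch in the (s, sdot) phase plane; bounds and grid are assumed.
import numpy as np
from scipy.optimize import minimize

N = 50                            # number of path samples
ds = 1.0 / (N - 1)                # spacing of the path parameter s in [0, 1]
sdot_max, sddot_max = 2.0, 4.0    # assumed bounds on path velocity / acceleration

def total_time(b):
    # b_i = sdot_i^2; each segment takes about 2*ds / (sqrt(b_i) + sqrt(b_{i+1})).
    v = np.sqrt(np.maximum(b, 1e-9))
    return np.sum(2.0 * ds / (v[:-1] + v[1:]))

# Path acceleration sddot = (b_{i+1} - b_i) / (2*ds) must stay within its bound.
cons = [
    {"type": "ineq", "fun": lambda b: sddot_max - np.diff(b) / (2.0 * ds)},
    {"type": "ineq", "fun": lambda b: sddot_max + np.diff(b) / (2.0 * ds)},
]
bounds = [(0.0, sdot_max**2)] * N
bounds[0] = bounds[-1] = (0.0, 0.0)   # start and end at rest

res = minimize(total_time, x0=np.full(N, 0.5 * sdot_max**2),
               bounds=bounds, constraints=cons, method="SLSQP")
print("traversal time:", res.fun, "success:", res.success)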

4.
Opt Express ; 28(13): 19058-19073, 2020 Jun 22.
Article in English | MEDLINE | ID: mdl-32672191

ABSTRACT

RGB-D cameras (or color-depth cameras) play key roles in many vision applications. A typical RGB-D camera has only rough intrinsic and extrinsic calibrations that cannot provide the accuracy required in many vision applications. In this paper, we propose a novel and accurate sphere-based calibration framework for estimating the intrinsic and extrinsic parameters of a color-depth sensor pair. Additionally, a depth error correction method is suggested, and the principle of the correction is analyzed in detail. In our method, the feature extraction module automatically and reliably detects the center and edges of the sphere projection while excluding noise and outliers, and the projection of the sphere center onto the RGB and depth images is used to obtain a closed-form solution for the initial parameters. Finally, all the parameters are accurately estimated within a nonlinear global minimization framework. Compared to other state-of-the-art methods, our calibration method is easy to use and provides higher calibration accuracy. Detailed experimental analysis is performed to support our conclusions.
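One ingredient of such a sphere-based pipeline, fitting the sphere center and radius to depth points by linear least squares, can be sketched as follows. The synthetic sampling, the noise level and the function name fit_sphere are illustrative assumptions; the paper's full framework additionally detects the sphere projection in the RGB image and refines all intrinsic and extrinsic parameters by nonlinear global minimization, which is not reproduced here.

# Illustrative sketch: algebraic sphere fit to 3D points (e.g. back-projected depth pixels).
import numpy as np

def fit_sphere(points):
    """points: (N, 3) samples from a sphere surface; returns (center, radius)."""
    # |p|^2 = 2 p·c + (r^2 - |c|^2) is linear in the unknowns (c, d).
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_c, true_r = np.array([0.2, -0.1, 1.5]), 0.15
    dirs = rng.standard_normal((500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = true_c + true_r * dirs + 0.001 * rng.standard_normal((500, 3))  # noisy sphere samples
    print(fit_sphere(pts))  # recovers approximately ([0.2, -0.1, 1.5], 0.15)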
