1.
Sensors (Basel); 24(17), 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39275654

ABSTRACT

Simultaneous Localization and Mapping (SLAM) enables mobile robots to localize themselves and build maps autonomously in unknown environments. Although visual SLAM systems have made significant progress under ideal conditions, relying on a single robot and point features alone degrades mapping efficiency and accuracy in large-scale indoor environments with weak texture and structure. This paper therefore proposes a multi-robot collaborative mapping method based on point-line fusion, designed for localization and mapping in such weakly textured indoor environments. The feature-extraction stage combines point and line features, supplementing the existing point-feature pipeline with a line feature-extraction step; this preserves the accuracy of visual odometry estimation in scenes dominated by weak texture and structural features. For larger indoor scenes, a scene-recognition-based map-fusion method is proposed to improve mapping efficiency. It uses a visual bag of words to identify overlapping areas of the scene, and a photogrammetry-based keyframe-extraction method is introduced to improve robustness. The relative pose transformations between robots in overlapping scenes are resolved by combining the Perspective-3-Point (P3P) algorithm with Bundle Adjustment (BA), and the maps are fused based on these relative poses. We evaluated the algorithm on public datasets and on a mobile robot platform. The experimental results show that the proposed algorithm achieves higher robustness and mapping accuracy, and is particularly effective for mapping in weakly textured, weakly structured scenes and for small-scale map fusion.
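The abstract gives no implementation details, but the point-plus-line front end it describes can be illustrated with off-the-shelf detectors. The following minimal Python sketch assumes OpenCV's ORB for point features and its LSD line segment detector for line features; the file path, parameter values, and detector choices are assumptions for illustration, not the authors' actual pipeline.

import cv2

# Read one grayscale keyframe (the path is a placeholder).
frame = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "replace keyframe.png with a real image"

# Point features: ORB keypoints and binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kps, desc = orb.detectAndCompute(frame, None)

# Line features: LSD line segments. Availability depends on the OpenCV build;
# cv2.ximgproc.createFastLineDetector (opencv-contrib) is a common alternative.
lsd = cv2.createLineSegmentDetector()
lines = lsd.detect(frame)[0]

print(f"{len(kps)} point features, {0 if lines is None else len(lines)} line segments")

In a complete system the line segments would also be given descriptors (for example LBD) so they can be matched across frames and between robots, and the matched points and lines would jointly feed the visual odometry and, later, the P3P/BA-based map fusion.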

2.
Sensors (Basel); 22(23), 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36502063

ABSTRACT

SLAM (Simultaneous Localization and Mapping) is composed of five main parts: sensor data reading, front-end visual odometry, back-end optimization, loop closure detection, and map building. When the pose is estimated by visual odometry alone, cumulative drift inevitably occurs. Classical visual SLAM relies on loop closure detection to correct the trajectory, so if no loop closure is detected during operation, the drift cannot be corrected. To address this cumulative drift, this paper adds an Indoor Positioning System (IPS) to the back-end optimization of visual SLAM. A two-label orientation method estimates the heading angle of the mobile robot, yielding a pose measurement that contains both position and heading; this measurement is added to the optimization as an absolute constraint, providing global constraints for the trajectory optimization. Experiments on the AUTOLABOR mobile robot show that the localization accuracy of the IPS-fused back-end optimization remains between 0.02 m and 0.03 m, which meets indoor localization requirements, and that no cumulative drift occurs even without loop closure detection, alleviating the cumulative drift problem of visual SLAM to some extent.
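As a rough illustration of how such an absolute constraint can enter the back-end, the sketch below estimates the heading from two IPS label positions and forms a prior-style residual on an (x, y, theta) pose. The function names, the label layout, and the noise values are hypothetical; the paper's actual optimization back-end is not specified in the abstract.

import numpy as np

def heading_from_two_labels(p_front, p_rear):
    # Heading from two IPS labels mounted along the robot's longitudinal axis
    # (hypothetical layout): the vector from the rear label to the front label.
    d = np.asarray(p_front, dtype=float) - np.asarray(p_rear, dtype=float)
    return np.arctan2(d[1], d[0])

def absolute_pose_residual(pose, ips_pose, sqrt_info):
    # Prior-style residual tying an estimated SLAM pose (x, y, theta) to the IPS
    # measurement; in the back-end it would be summed with odometry/reprojection terms.
    dx = pose[0] - ips_pose[0]
    dy = pose[1] - ips_pose[1]
    # Wrap the heading error into [-pi, pi) so the residual stays well behaved.
    dth = (pose[2] - ips_pose[2] + np.pi) % (2.0 * np.pi) - np.pi
    return sqrt_info @ np.array([dx, dy, dth])

# Example with made-up numbers: two labels 0.4 m apart, midpoint used as the position.
theta = heading_from_two_labels(p_front=[1.20, 0.55], p_rear=[0.80, 0.55])
ips_pose = np.array([1.00, 0.55, theta])
sqrt_info = np.diag([1 / 0.02, 1 / 0.02, 1 / 0.05])  # assumed measurement standard deviations
print(absolute_pose_residual(np.array([1.01, 0.56, theta + 0.01]), ips_pose, sqrt_info))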


Subjects
Optical Devices, Algorithms