Results 1 - 2 of 2
1.
Heliyon; 10(4): e26042, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38390062

ABSTRACT

In this paper, we present a new generation of omnidirectional automated guided vehicles (omniAGV) used for transporting materials within a manufacturing factory, with the ability to navigate autonomously and intelligently by interacting with the environment, including people and other entities. The robot has to be integrated into the operating environment without significant changes to the existing facilities or heavy redefinition of the logistics processes already running. For this purpose, different vision-based systems and advanced methods in mobile and cognitive robotics are developed and integrated. In this context, vision and perception are key factors. Several dedicated modules support the robot during its navigation in the environment. Specifically, the localization module provides information about the robot pose by combining visual odometry and wheel odometry. The obstacle avoidance module detects obstacles and recognizes several object classes for adaptive navigation. Finally, the tag detection module aids the robot during the cart-picking phase and provides information for global localization. The smart integration of vision and perception is paramount for effectively using the robot in the industrial context. Extensive qualitative and quantitative results demonstrate the capability and effectiveness of the proposed AGV in navigating the considered industrial environment.
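The localization module above combines visual odometry and wheel odometry. Below is a minimal sketch of one way such per-frame pose increments could be blended for a planar robot; the fixed blending weight and the helper functions are illustrative assumptions, since the paper does not specify its actual fusion scheme.

# Minimal sketch of fusing wheel-odometry and visual-odometry increments
# for a planar robot pose (x, y, theta). The fusion weight and helper
# names are hypothetical, not the paper's actual localization module.
import math

def compose(pose, delta):
    """Compose an incremental motion (dx, dy, dtheta), expressed in the
    robot frame, onto a global pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def fuse_increments(wheel_delta, visual_delta, w_visual=0.7):
    """Blend the two odometry increments with a fixed weight
    (assumed value; a real system would weight by uncertainty)."""
    return tuple(w_visual * v + (1.0 - w_visual) * u
                 for v, u in zip(visual_delta, wheel_delta))

# Example: integrate a short trajectory from per-frame increments.
pose = (0.0, 0.0, 0.0)
wheel_log  = [(0.10, 0.00, 0.01), (0.10, 0.00, 0.02)]
visual_log = [(0.09, 0.01, 0.00), (0.11, -0.01, 0.02)]
for wd, vd in zip(wheel_log, visual_log):
    pose = compose(pose, fuse_increments(wd, vd))
print(pose)

A real implementation would weight each source by its estimated covariance (for example, down-weighting visual odometry on low-texture floor patches) rather than using a constant factor.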

2.
Sensors (Basel); 20(3), 2020 Feb 06.
Article in English | MEDLINE | ID: mdl-32041371

ABSTRACT

In this paper, we tackle the problem of indoor robot localization by using a vision-based approach. Specifically, we propose a visual odometer able to provide the relative pose of an omnidirectional automatic guided vehicle (AGV) that moves inside an indoor industrial environment. A monocular downward-looking camera, with its optical axis nearly perpendicular to the floor, is used to collect floor images. After a preliminary image analysis detects robust point features (keypoints), descriptors associated with the keypoints enable matching of the detected points across consecutive frames. A robust feature-correspondence filter based on statistical and geometric information is devised to reject incorrect matches, thus delivering better pose estimates. A camera pose compensation is further introduced to ensure better positioning accuracy. The effectiveness of the proposed methodology has been proven through several experiments, in the laboratory as well as in an industrial setting. Both quantitative and qualitative evaluations have been made. Outcomes show that the method provides a final positioning percentage error of 0.21% over an average distance of 17.2 m. A longer run in an industrial context provided comparable results (a percentage error of 0.94% after about 80 m). The average relative positioning error is about 3%, which is in good agreement with the current state of the art.
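The pipeline described above (keypoint detection on floor images, descriptor matching across consecutive frames, geometric rejection of wrong correspondences, relative-motion estimation) can be sketched with standard OpenCV calls. ORB features, brute-force Hamming matching, and RANSAC filtering below are assumed stand-ins; the paper's actual detector, descriptors, statistical filter, and pose compensation step are not reproduced here.

# Illustrative sketch of downward-looking-camera visual odometry between
# two consecutive floor images. ORB + brute-force matching + RANSAC are
# assumed substitutes for the paper's feature pipeline and filter.
import math
import cv2
import numpy as np

def relative_motion(prev_gray, curr_gray):
    """Estimate planar camera motion (dx, dy, dtheta) between two
    grayscale floor images, in pixels / radians."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None

    # Match descriptors between consecutive frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return None

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC-based geometric filter: rejects incorrect correspondences
    # and fits a similarity transform (rotation + translation + scale).
    M, inliers = cv2.estimateAffinePartial2D(
        pts1, pts2, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if M is None:
        return None

    dtheta = math.atan2(M[1, 0], M[0, 0])
    dx, dy = M[0, 2], M[1, 2]
    return dx, dy, dtheta

The returned translation is in pixels; converting it to meters requires the camera intrinsics and mounting height from calibration. For scale, the reported final error of 0.21% over 17.2 m corresponds to roughly 3.6 cm of drift.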
