A Vision-Based Odometer for Localization of Omnidirectional Indoor Robots.
Patruno, Cosimo; Colella, Roberto; Nitti, Massimiliano; Renò, Vito; Mosca, Nicola; Stella, Ettore.
Affiliation
  • Patruno C; Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, Italian National Research Council, STIIMA-CNR, G. Amendola 122 D/O, Bari 70126, Italy.
  • Colella R; Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, Italian National Research Council, STIIMA-CNR, G. Amendola 122 D/O, Bari 70126, Italy.
  • Nitti M; Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, Italian National Research Council, STIIMA-CNR, G. Amendola 122 D/O, Bari 70126, Italy.
  • Renò V; Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, Italian National Research Council, STIIMA-CNR, G. Amendola 122 D/O, Bari 70126, Italy.
  • Mosca N; Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, Italian National Research Council, STIIMA-CNR, G. Amendola 122 D/O, Bari 70126, Italy.
  • Stella E; Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, Italian National Research Council, STIIMA-CNR, G. Amendola 122 D/O, Bari 70126, Italy.
Sensors (Basel) ; 20(3)2020 Feb 06.
Article in English | MEDLINE | ID: mdl-32041371
ABSTRACT
In this paper we tackle the problem of indoor robot localization by using a vision-based approach. Specifically, we propose a visual odometer able to return the relative pose of an omnidirectional automatic guided vehicle (AGV) moving inside an indoor industrial environment. A monocular downward-looking camera, with its optical axis nearly perpendicular to the ground floor, is used for collecting floor images. After a preliminary analysis of the images detects robust point features (keypoints), specific descriptors associated with the keypoints enable matching of the detected points across consecutive frames. A robust correspondence filter based on statistical and geometrical information is devised to reject incorrect matches, thus delivering better pose estimates. A camera pose compensation is further introduced to ensure better positioning accuracy. The effectiveness of the proposed methodology has been proven through several experiments, in the laboratory as well as in an industrial setting. Both quantitative and qualitative evaluations have been made. Outcomes have shown that the method provides a final positioning percentage error of 0.21% on an average distance of 17.2 m. A longer run in an industrial context has provided comparable results (a percentage error of 0.94% after about 80 m). The average relative positioning error is about 3%, which is still in good agreement with the current state of the art.
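The statistical outlier-rejection step described in the abstract can be illustrated with a minimal sketch. The paper does not disclose its exact filter, so the rule below (reject matches whose frame-to-frame displacement magnitude deviates from the median by more than k median absolute deviations) is an assumed, simplified stand-in; the function name `filter_matches` and the threshold `k` are hypothetical.

```python
import math
import statistics

def filter_matches(displacements, k=2.5):
    """Reject tentative keypoint matches whose frame-to-frame displacement
    magnitude deviates from the median by more than k MADs.
    `displacements` is a list of (dx, dy) vectors, one per tentative match.
    NOTE: an assumed, simplified filter, not the paper's actual method."""
    mags = [math.hypot(dx, dy) for dx, dy in displacements]
    med = statistics.median(mags)
    mad = statistics.median(abs(m - med) for m in mags) or 1e-9
    return [d for d, m in zip(displacements, mags) if abs(m - med) <= k * mad]

# Example: nine mutually consistent matches plus one gross outlier
disps = [(1.0, 0.1), (1.1, 0.0), (0.9, 0.2)] * 3 + [(25.0, -14.0)]
good = filter_matches(disps)
print(len(good))  # → 9 (the outlier is rejected)
```

In a real pipeline the surviving matches would then feed a rigid-transform estimation (rotation plus translation of the floor-facing camera between frames), typically with an additional geometric consistency check such as RANSAC.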
Full text: 1 Database: MEDLINE Language: English Year of publication: 2020 Document type: Article
