Results 1 - 3 of 3
1.
Sensors (Basel); 22(6), 2022 Mar 20.
Article in English | MEDLINE | ID: mdl-35336558

ABSTRACT

Visual navigation is of vital importance for autonomous mobile robots. Most existing perception-aware visual navigation methods require precise metric maps constructed in advance, while learning-based methods rely on large amounts of training data to generalize. To improve the reliability of visual navigation, in this paper we propose a novel object-level topological visual navigation method. First, a lightweight object-level topological semantic map is constructed to remove the dependence on a precise metric map; the semantic associations between objects are stored in a graph memory and organized topologically. Then, we propose an object-based heuristic graph search method to select the shortest global topological path. Furthermore, to reduce the accumulated global error, a path segmentation strategy divides the global topological path on the basis of active visual perception and object guidance. Finally, to generate adaptively smooth trajectories, a Bernstein-polynomial-based trajectory refinement method casts trajectory generation as a nonlinear planning problem, achieving smooth multi-segment continuous navigation. Experimental results demonstrate the feasibility and efficiency of our method in both simulated and real-world scenarios. The proposed method also achieves a higher navigation success rate (SR) and success weighted by inverse path length (SPL) than state-of-the-art methods.
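
The abstract does not spell out the heuristic graph search, so below is a minimal Python sketch of one plausible reading: an A*-style search over an object-level topological map whose nodes are object landmarks. The graph layout, node names, positions table, and straight-line heuristic are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: A*-style heuristic search over an object-level
    # topological map. Nodes are object landmarks, edges carry traversal
    # costs. All names and values here are illustrative assumptions.
    import heapq
    import math

    def heuristic(a, b):
        # Straight-line distance between rough object positions; any
        # admissible estimate would work in its place.
        return math.dist(a, b)

    def topological_path(graph, positions, start, goal):
        """graph: {node: [(neighbor, edge_cost), ...]}; positions: {node: (x, y)}."""
        frontier = [(heuristic(positions[start], positions[goal]), 0.0, start, [start])]
        best = {start: 0.0}
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, cost
            for nbr, edge_cost in graph.get(node, []):
                new_cost = cost + edge_cost
                if new_cost < best.get(nbr, float("inf")):
                    best[nbr] = new_cost
                    est = new_cost + heuristic(positions[nbr], positions[goal])
                    heapq.heappush(frontier, (est, new_cost, nbr, path + [nbr]))
        return None, float("inf")

    # Example: a tiny map with three object landmarks.
    graph = {"door": [("table", 2.0), ("shelf", 5.0)],
             "table": [("shelf", 1.5)],
             "shelf": []}
    positions = {"door": (0, 0), "table": (2, 0), "shelf": (3, 1)}
    print(topological_path(graph, positions, "door", "shelf"))
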


Subjects
Robotics, Algorithms, Computer Simulation, Reproducibility of Results, Robotics/methods, Semantics
2.
Appl Opt; 58(20): 5516-5524, 2019 Jul 10.
Article in English | MEDLINE | ID: mdl-31504022

ABSTRACT

This study develops a novel automatic all-sky imaging system, an all-sky camera (ASC) system, for cloud cover assessment. The proposed system does not require conventional solar occulting devices and can capture complete hemispheric sky images. Cloud detection is performed with a convolutional neural network (an optimized U-Net model). Experiments demonstrate that the optimized U-Net model effectively detects clouds in sky images. In terms of cloud cover, the estimates of the ASC system correlate highly with manual observations, indicating the applicability of the ASC system to ground-based cloud observation and analysis.
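
As an illustration of the kind of model the abstract names, here is a minimal U-Net-style segmentation sketch in Python (PyTorch). The paper's "optimized U-Net" is not specified in the abstract, so the depth, channel widths, and sigmoid cloud-mask head below are assumptions for illustration only.

    # Minimal U-Net-style sketch for per-pixel cloud segmentation.
    # Architecture details are illustrative assumptions, not the paper's model.
    import torch
    import torch.nn as nn

    def block(in_ch, out_ch):
        # Two 3x3 conv layers with ReLU, the standard U-Net building block.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = block(3, 16), block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec = block(32, 16)            # 32 = upsampled 16 + skip 16
            self.head = nn.Conv2d(16, 1, 1)     # per-pixel cloud logit

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d = self.dec(torch.cat([self.up(e2), e1], dim=1))
            return torch.sigmoid(self.head(d))  # cloud probability mask

    # Cloud cover is then the fraction of pixels classified as cloud.
    mask = TinyUNet()(torch.rand(1, 3, 64, 64))
    print((mask > 0.5).float().mean().item())

Cloud cover estimation then reduces to thresholding the predicted mask and taking the cloudy-pixel fraction, as the final lines show.
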

3.
Sensors (Basel); 18(11), 2018 Nov 19.
Article in English | MEDLINE | ID: mdl-30463261

ABSTRACT

State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in robotics because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods degrade in complex conditions because of the limited field of view (FOV) of the camera. In this paper, we present a novel tightly coupled multi-keyframe visual-inertial odometry (called VINS-MKF) that provides accurate and robust state estimation for robots in indoor environments. We first extend monocular ORBSLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) to multiple fisheye cameras combined with an inertial measurement unit (IMU) to provide large-FOV visual-inertial information. Then, a novel VO framework is proposed to ensure efficient state estimation, adopting GPU (Graphics Processing Unit) based feature extraction and separating feature extraction from the tracking thread so that it runs in parallel with the mapping thread. Finally, a nonlinear optimization method is formulated for accurate state estimation; it is multi-keyframe, tightly coupled, and visual-inertial. In addition, an accurate initialization procedure and a novel MultiCol-IMU camera model further improve the performance of VINS-MKF. To the best of our knowledge, this is the first tightly coupled multi-keyframe visual-inertial odometry that fuses measurements from multiple fisheye cameras and an IMU. The performance of VINS-MKF was validated by extensive experiments on home-made datasets, showing improved accuracy and robustness over the state-of-the-art VINS-Mono.
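
The threading arrangement described above (feature extraction decoupled from tracking) follows a standard producer-consumer pattern; the Python sketch below shows that structure only. The queue sizes and placeholder functions (extract_features, tracking_worker) are hypothetical stand-ins, not VINS-MKF code.

    # Hypothetical sketch of the pipeline structure the abstract describes:
    # feature extraction runs in its own thread, decoupled from tracking,
    # so new frames are processed while the tracker consumes earlier results.
    import queue
    import threading

    frames = queue.Queue(maxsize=4)     # raw fisheye frames from the cameras
    features = queue.Queue(maxsize=4)   # extracted features handed to tracking

    def extract_features(frame):
        # Stand-in for GPU-based feature extraction over all fisheye images.
        return [f"feat-{frame}-{i}" for i in range(3)]

    def extraction_worker():
        while True:
            frame = frames.get()
            if frame is None:           # sentinel: shut down cleanly
                features.put(None)
                break
            features.put(extract_features(frame))

    def tracking_worker():
        while True:
            feats = features.get()
            if feats is None:
                break
            # Tracking consumes features while extraction handles the next frame.
            print("tracking with", feats)

    threads = [threading.Thread(target=extraction_worker),
               threading.Thread(target=tracking_worker)]
    for t in threads:
        t.start()
    for frame in range(3):
        frames.put(frame)
    frames.put(None)
    for t in threads:
        t.join()
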
