J Intell Robot Syst ; 101(2): 32, 2021.
Article in English | MEDLINE | ID: mdl-33519083

ABSTRACT

Various high-level robotics tasks require a robot to manipulate or interact with objects that lie in an unexplored part of the environment or outside its current field of view. Although much prior work searches for objects based on their colour or 3D context, we argue that text is a useful and functional visual cue for guiding the search. In this paper, we study the problem of active visual search (AVS) in large unknown environments and present an AVS system that relies on semantic information inferred from text found in the environment, allowing the robot to reduce search costs by avoiding unpromising regions for the target object. Our semantic planner reasons over the numbers detected on door signs to decide whether to perform goal-directed exploration toward unknown parts of the environment or to search carefully within the already known parts. We compared the performance of our semantic AVS system against two other search systems in four simulated environments: first, a greedy search system that considers no semantic information; second, human participants invited to teleoperate the robot while performing the search. Our results from simulation and real-world experiments show that text is a promising source of information, providing diverse semantic cues for AVS systems.
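The abstract describes a planner that uses room numbers read from door signs to choose between exploring unknown space and searching the known map. The following is a minimal hypothetical sketch of that decision rule, not the authors' actual implementation: it assumes room numbers vary roughly monotonically along a corridor (common in office buildings), so a target number inside the observed range suggests searching the known region, while one outside it suggests exploring toward the frontier.

```python
def plan_next_action(target_room, observed_signs):
    """Choose between exploration and known-area search.

    target_room: integer room number the robot is looking for.
    observed_signs: list of (room_number, position) tuples detected so far.

    Hypothetical sketch: assumes room numbers are roughly monotonic
    along the corridor, so the observed min/max bound the known region.
    """
    if not observed_signs:
        # No semantic cues yet: fall back to frontier exploration.
        return "explore"

    numbers = [n for n, _ in observed_signs]
    lo, hi = min(numbers), max(numbers)

    if lo <= target_room <= hi:
        # The target number falls inside the range already seen:
        # the target is likely in mapped space, so search it carefully.
        return "search_known"

    # The target lies beyond all observed numbers: continue
    # goal-directed exploration toward the unexplored frontier.
    return "explore_toward_target"
```

For example, with signs 101 and 110 already observed, a target of 105 would trigger a careful search of the known map, whereas a target of 120 would send the robot onward into unexplored space.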
