Results 1 - 7 of 7
1.
Sensors (Basel) ; 23(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37430762

ABSTRACT

The capability of a mobile robot to efficiently and safely perform complex missions is limited by its knowledge of the environment, namely the situation. Advanced reasoning, decision-making, and execution skills enable an intelligent agent to act autonomously in unknown environments. Situational Awareness (SA) is a fundamental capability of humans that has been deeply studied in fields such as psychology, the military, aerospace, and education. Nevertheless, it has yet to be considered as a whole in robotics, which has focused on single compartmentalized concepts such as sensing, spatial perception, sensor fusion, state estimation, and Simultaneous Localization and Mapping (SLAM). Hence, the present research aims to connect the broad existing multidisciplinary knowledge to pave the way for a complete SA system for mobile robotics, which we deem paramount for autonomy. To this aim, we define the principal components that structure robotic SA and their areas of competence. Accordingly, this paper investigates each aspect of SA, surveys the state-of-the-art robotics algorithms that cover it, and discusses their current limitations. Remarkably, essential aspects of SA remain immature, since current algorithms perform well only in specific environments. Nevertheless, Artificial Intelligence (AI), and Deep Learning (DL) in particular, has brought new methods to bridge the gap that keeps these fields from deployment in real-world scenarios. Furthermore, we identify an opportunity to interconnect the vastly fragmented space of robotic comprehension algorithms through the mechanism of the Situational Graph (S-Graph), a generalization of the well-known scene graph. Therefore, we finally shape our vision for the future of robotic situational awareness by discussing promising recent research directions.


Subject(s)
Artificial Intelligence, Robotics, Humans, Awareness, Algorithms, Intelligence
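
As an editorial illustration of the S-Graph idea mentioned above, the sketch below models a minimal layered graph whose nodes are robot keyframes, semantic objects, and rooms. All class and field names (Node, Edge, SituationalGraph, and the layer/relation strings) are hypothetical placeholders, not the API of any published S-Graphs implementation.

```python
# A minimal sketch of a situational-graph-like structure: a layered graph
# whose nodes are robot keyframes, detected objects, and rooms, and whose
# edges carry geometric or semantic relations. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    layer: str              # e.g. "keyframe", "object", "room"
    pose: tuple             # (x, y, theta) in the map frame
    label: str = ""         # semantic label, e.g. "chair"

@dataclass
class Edge:
    source: int
    target: int
    relation: str           # e.g. "observes", "inside", "adjacent"

@dataclass
class SituationalGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def relate(self, source: int, target: int, relation: str) -> None:
        self.edges.append(Edge(source, target, relation))

# Usage: a keyframe observes a chair that lies inside a room.
g = SituationalGraph()
g.add_node(Node(0, "keyframe", (0.0, 0.0, 0.0)))
g.add_node(Node(1, "object", (1.2, 0.5, 0.0), "chair"))
g.add_node(Node(2, "room", (2.0, 2.0, 0.0), "kitchen"))
g.relate(0, 1, "observes")
g.relate(1, 2, "inside")
```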
2.
Sensors (Basel) ; 22(23)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36501998

ABSTRACT

In recent years, Simultaneous Localization and Mapping (SLAM) systems have shown significant gains in performance, accuracy, and efficiency. In this regard, Visual Simultaneous Localization and Mapping (VSLAM) methods refer to the SLAM approaches that employ cameras for pose estimation and map reconstruction; they are preferred over Light Detection And Ranging (LiDAR)-based methods due to their lighter weight, lower acquisition costs, and richer environment representation. Hence, several VSLAM approaches have evolved that use different camera types (e.g., monocular or stereo), have been tested on various datasets (e.g., Technische Universität München (TUM) RGB-D or European Robotics Challenge (EuRoC)) and in different conditions (i.e., indoors and outdoors), and employ multiple methodologies to better understand their surroundings. These variations have made the topic popular among researchers and have produced a wide range of methods. Accordingly, the primary intent of this paper is to assimilate the wide range of works in VSLAM and present their recent advances, along with the existing challenges and trends. This survey provides a big-picture view of the current focuses of the robotics and VSLAM fields, based on the objectives and solutions pursued by the state of the art. The paper offers an in-depth literature survey of fifty impactful articles published in the VSLAM domain. These manuscripts are classified by different characteristics, including novelty domain, objectives, employed algorithms, and semantic level. The paper also discusses current trends and future directions of VSLAM techniques that may help researchers investigate them.


Subject(s)
Algorithms, Robotics, Robotics/methods, Costs and Cost Analysis, Semantics
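
To make the feature-based VSLAM front end described above concrete, here is a minimal two-view sketch using standard OpenCV calls: ORB keypoints are matched between consecutive frames, the essential matrix is estimated with RANSAC, and the up-to-scale relative pose is recovered. The image file names and the intrinsic matrix K are placeholder example values.

```python
# A minimal sketch of the frame-to-frame front end common to feature-based
# monocular VSLAM: match keypoints between two images, estimate the
# essential matrix with RANSAC, and recover the relative camera rotation
# and up-to-scale translation.
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.19],   # example pinhole intrinsics
              [0.0, 718.856, 185.22],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\nunit-scale translation:", t.ravel())
```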
3.
Light Sci Appl ; 11(1): 309, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36284089

ABSTRACT

The seemingly simple step of molding a cholesteric liquid crystal into spherical shape, yielding a Cholesteric Spherical Reflector (CSR), has profound optical consequences that open a range of opportunities for potentially transformative technologies. The chiral Bragg diffraction resulting from the helical self-assembly of cholesterics becomes omnidirectional in CSRs. This turns them into selective retroreflectors that are exceptionally easy to distinguish, regardless of background, by simple and low-cost machine vision, while at the same time they can be made largely imperceptible to human vision. This allows them to be distributed in human-populated environments, laid out in the form of QR-code-like markers that help robots and Augmented Reality (AR) devices operate reliably and identify items in their surroundings. At the scale of individual CSRs, unpredictable features within each marker turn them into Physical Unclonable Functions (PUFs), of great value for secure authentication. Via the machines that read them, CSR markers can thus act as trustworthy yet unobtrusive links between the physical world (buildings, vehicles, packaging, …) and its digital twin computer representation. This opens opportunities to address pressing challenges in logistics and supply chain management, recycling and the circular economy, sustainable construction of the built environment, and many other fields of individual, societal, and commercial importance.
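
As a loose illustration of why selective retroreflectors suit simple, low-cost machine vision, the sketch below finds near-saturated blobs with a plain threshold-and-contour pass; real CSR readers would also exploit the spectral and polarization selectivity described above, which is omitted here. The threshold and minimum-area values are made-up tuning parameters.

```python
# A hedged illustration: under coaxial illumination, retroreflective CSR
# markers return near-saturated blobs, so candidate detection can be a
# simple threshold plus contour pass. Values below are placeholders.
import cv2

frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
_, bright = cv2.threshold(frame, 240, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) < 20:      # ignore sensor noise
        continue
    (x, y), r = cv2.minEnclosingCircle(c)
    print(f"candidate CSR marker at ({x:.0f}, {y:.0f}), radius {r:.1f} px")
```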

4.
Sensors (Basel) ; 22(14)2022 Jul 07.
Article in English | MEDLINE | ID: mdl-35890800

ABSTRACT

One of the most relevant problems in Unmanned Aerial Vehicle (UAV) autonomous navigation for industrial inspection is localization, or pose estimation, relative to significant elements of the environment. This paper analyzes two different approaches in this regard, focusing on their application to unstructured scenarios where objects of considerable size are present, such as a truck, a wind tower, an airplane, or a building. The presented methods require a previously developed Computer-Aided Design (CAD) model of the main object to be inspected. The first approach is based on an occupancy map built from a horizontal projection of this CAD model and on the Adaptive Monte Carlo Localization (AMCL) algorithm, which reaches convergence by considering the likelihood field observation model between the 2D projection of 3D sensor data and the created map. The second approach uses a point cloud prior map of the 3D CAD model and a scan-matching algorithm based on the Iterative Closest Point (ICP) algorithm and the Unscented Kalman Filter (UKF). Both approaches have been extensively evaluated using simulated as well as previously recorded real flight data. We focus on aircraft inspection as a test example, but our results and conclusions extend directly to other applications; to support this assertion, a truck inspection has also been performed. Our tests showed that creating a 2D or 3D map from a standard CAD model and matching 3D laser scans against it can reduce processing time and resource usage while improving robustness. The techniques used to segment unexpected objects in 2D maps improved the performance of AMCL. In addition, moving around locations with relevant geometry after take-off while running AMCL enabled faster convergence and high accuracy, so it could serve as an initial position estimate for other localization algorithms. The ICP-NL method works well in environments containing elements other than the object to inspect, but it can provide better results if techniques to segment the new objects are applied. Furthermore, the proposed ICP-NL scan-matching method together with the UKF performed faster and more robustly than NDT, and it is not affected by flight height. However, the ICP-NL error may still be too high for applications requiring increased accuracy.
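
The paper's second approach builds on ICP-based scan matching; as a hedged reference point, the sketch below implements the classic point-to-point ICP loop (nearest-neighbour association plus a Kabsch/SVD alignment step). It is not the authors' ICP-NL variant, and the UKF fusion stage is omitted; `scan` and `model` stand for the laser scan and the CAD-derived prior map.

```python
# A minimal point-to-point ICP sketch: iteratively associate each scan
# point with its nearest model point, then solve the best rigid alignment
# in closed form (Kabsch/SVD). `scan` and `model` are N x 3 arrays.
import numpy as np
from scipy.spatial import cKDTree

def icp(scan, model, iters=30):
    R, t = np.eye(3), np.zeros(3)       # current rigid-body estimate
    tree = cKDTree(model)
    for _ in range(iters):
        moved = scan @ R.T + t
        _, idx = tree.query(moved)       # nearest model point per scan point
        corr = model[idx]
        # Kabsch: best rotation between the two centered point sets.
        p, q = moved - moved.mean(0), corr - corr.mean(0)
        U, _, Vt = np.linalg.svd(p.T @ q)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = corr.mean(0) - moved.mean(0) @ dR.T
        R, t = dR @ R, dR @ t + dt       # compose the incremental update
    return R, t                          # model ~= scan @ R.T + t
```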

5.
Sensors (Basel) ; 22(7)2022 Mar 30.
Article in English | MEDLINE | ID: mdl-35408264

ABSTRACT

Unsupervised learning for monocular camera motion and 3D scene understanding has gained popularity over traditional methods, which rely on epipolar geometry or non-linear optimization. Notably, deep learning can overcome many issues of monocular vision, such as perceptual aliasing, low-textured areas, scale drift, and degenerate motions. In addition, unlike supervised learning, it can fully leverage video stream data without the need for depth or motion labels. However, in this work, we note that rotational motion limits the accuracy of unsupervised pose networks more than the translational component does. Therefore, we present RAUM-VO, an approach based on a model-free epipolar constraint for frame-to-frame (F2F) motion estimation that adjusts the rotation during training and online inference. To this end, we match 2D keypoints between consecutive frames using the pre-trained deep networks SuperPoint and SuperGlue, while training a network for depth and pose estimation with an unsupervised training protocol. Then, we adjust the predicted rotation with the motion estimated by F2F from the 2D matches, initializing the solver with the pose network prediction. Ultimately, RAUM-VO shows a considerable accuracy improvement over other unsupervised pose networks on the KITTI dataset, while reducing the complexity of hybrid or traditional approaches and achieving comparable state-of-the-art results.


Subject(s)
Motion (Physics)
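
A hedged sketch of the rotation-adjustment idea follows: keep the network's translation but replace its rotation with one estimated geometrically from the 2D matches via the essential matrix. The function name and arguments are illustrative; the paper's SuperPoint/SuperGlue matching and its initialization of the solver with the network prediction are abstracted away into generic point arrays.

```python
# Illustrative sketch (not the authors' implementation): swap the pose
# network's predicted rotation for a frame-to-frame geometric estimate
# while keeping the learned, metrically scaled translation.
import cv2
import numpy as np

def adjust_rotation(pred_R, pred_t, pts1, pts2, K):
    # pred_R would seed the paper's F2F solver; cv2.recoverPose accepts
    # no initialization, so the prediction is unused in this sketch.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R_f2f, _, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R_f2f, pred_t   # geometric rotation, learned translation
```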
6.
Sensors (Basel) ; 23(1)2022 Dec 24.
Article in English | MEDLINE | ID: mdl-36616782

ABSTRACT

Efficient localisation plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), contributing to improved control, safety, and power economy. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities to enhance the localisation of UAVs and UGVs. In this paper, we review radio frequency (RF)-based approaches to localisation. We review the RF features that can be utilized for localisation and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localisation for both UAVs and UGVs is examined, the envisioned localisation enhancements offered by 5G NR are discussed, and future research directions are explored.
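
As a concrete instance of the range-based category, the sketch below solves linearised trilateration by least squares: subtracting one range equation from the others removes the quadratic term in the unknown position. Anchor coordinates and ranges are made-up example values; real RF ranges would carry noise and bias.

```python
# A minimal sketch of range-based localisation: linearised trilateration
# from anchor positions (e.g. base stations) and measured ranges.
import numpy as np

def trilaterate(anchors, ranges):
    # |x - a_i|^2 = r_i^2; subtracting the i = 0 equation yields the
    # linear system 2 (a_i - a_0) . x = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2.
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Synthetic example: four anchors, noiseless ranges to a known position.
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
truth = np.array([12.0, 30.0])
ranges = np.linalg.norm(anchors - truth, axis=1)
print(trilaterate(anchors, ranges))   # ~ [12. 30.]
```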

7.
Sensors (Basel) ; 19(21)2019 Nov 04.
Article in English | MEDLINE | ID: mdl-31689962

ABSTRACT

Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in the context of image recognition, object detection, and motion-control strategies. On this subject, the research community lacks robust approaches that compensate for the scarcity of extensive real-world data through realistic synthetic data and domain-adaptation techniques. In this work, synthetic-learning strategies have been used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver was learned with synthetic images and high-dimensional low-level continuous robot states, using deep learning for object detection and reinforcement learning for motion control. A novel motion-control strategy for object following is introduced in which the camera gimbal movement is coupled with the multirotor motion during the following maneuver. The results confirm that the proposed framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, providing good results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).
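
As a rough, hand-written stand-in for the learned policy described above, the sketch below couples gimbal motion with multirotor velocity using simple proportional terms: the gimbal rates centre the detected target in the image, while forward speed grows as the target appears smaller. All gains, the reference bounding-box area, and the 0.3 m/s clip are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of a coupled gimbal/velocity follower (not the authors'
# learned controller): proportional gimbal rates centre the target's
# bounding box; forward speed is driven by apparent target size.
import numpy as np

def follow_step(bbox_cx, bbox_cy, bbox_area, img_w=640, img_h=480,
                k_gimbal=0.002, k_fwd=1.5e-4, area_ref=9000.0):
    # Gimbal rates (rad/s) drive the image-centre error toward zero.
    pan_rate = k_gimbal * (bbox_cx - img_w / 2)
    tilt_rate = k_gimbal * (bbox_cy - img_h / 2)
    # Forward velocity grows when the target looks small (i.e., far away).
    v_fwd = np.clip(k_fwd * (area_ref - bbox_area), -0.3, 0.3)  # m/s
    return pan_rate, tilt_rate, v_fwd

print(follow_step(400.0, 250.0, 6000.0))
```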
