Results 1 - 11 of 11
1.
Sensors (Basel); 23(10), 2023 May 17.
Article in English | MEDLINE | ID: mdl-37430762

ABSTRACT

The capability of a mobile robot to efficiently and safely perform complex missions is limited by its knowledge of the environment, namely the situation. Advanced reasoning, decision-making, and execution skills enable an intelligent agent to act autonomously in unknown environments. Situational Awareness (SA) is a fundamental capability of humans that has been deeply studied in fields such as psychology, the military, aerospace, and education. Nevertheless, it has yet to be considered as a whole in robotics, which has focused on single compartmentalized concepts such as sensing, spatial perception, sensor fusion, state estimation, and Simultaneous Localization and Mapping (SLAM). Hence, the present research aims to connect the broad multidisciplinary existing knowledge to pave the way for a complete SA system for mobile robotics, which we deem paramount for autonomy. To this end, we define the principal components that structure robotic SA and their areas of competence. Accordingly, this paper investigates each aspect of SA, surveys the state-of-the-art robotics algorithms that cover it, and discusses their current limitations. Remarkably, essential aspects of SA remain immature, since current algorithms perform well only in specific environments. Nevertheless, Artificial Intelligence (AI), particularly Deep Learning (DL), has brought new methods that bridge the gap keeping these fields from deployment in real-world scenarios. Furthermore, we identify an opportunity to interconnect the vastly fragmented space of robotic comprehension algorithms through the mechanism of the Situational Graph (S-Graph), a generalization of the well-known scene graph. Therefore, we finally shape our vision for the future of robotic situational awareness by discussing promising recent research directions.
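
To make the S-Graph idea concrete, the following is a minimal sketch of a situational-graph-like structure in Python; all class and field names are illustrative assumptions and do not reproduce the paper's implementation:

```python
# Minimal, illustrative sketch of a situational-graph-like structure.
# All names are assumptions for illustration only, not the paper's S-Graph code.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    kind: str        # e.g., "robot_pose", "wall", "room", "object"
    estimate: list   # state estimate, e.g., [x, y, theta]

@dataclass
class Edge:
    source: int
    target: int
    relation: str    # e.g., "observes", "inside", "adjacent_to"

@dataclass
class SituationalGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node):
        self.nodes[node.node_id] = node

    def relate(self, source: int, target: int, relation: str):
        self.edges.append(Edge(source, target, relation))

# Geometric entities (poses, walls) and semantic entities (rooms) share one
# graph, so state estimation and high-level reasoning can operate on it jointly.
g = SituationalGraph()
g.add_node(Node(0, "robot_pose", [0.0, 0.0, 0.0]))
g.add_node(Node(1, "wall", [2.0, 0.0, 1.57]))
g.add_node(Node(2, "room", [1.0, 1.0, 0.0]))
g.relate(0, 1, "observes")
g.relate(1, 2, "inside")
```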


Subject(s)
Artificial Intelligence, Robotics, Humans, Awareness, Algorithms, Intelligence
2.
Sensors (Basel); 22(23), 2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36501998

ABSTRACT

In recent years, Simultaneous Localization and Mapping (SLAM) systems have shown significant gains in performance, accuracy, and efficiency. In this regard, Visual Simultaneous Localization and Mapping (VSLAM) methods refer to the SLAM approaches that employ cameras for pose estimation and map reconstruction; they are preferred over Light Detection And Ranging (LiDAR)-based methods due to their lighter weight, lower acquisition costs, and richer environment representation. Hence, several VSLAM approaches have evolved using different camera types (e.g., monocular or stereo), have been tested on various datasets (e.g., Technische Universität München (TUM) RGB-D or European Robotics Challenge (EuRoC)) and in different conditions (i.e., indoors and outdoors), and employ multiple methodologies to better understand their surroundings. These variations have made the topic popular among researchers and have produced a wide range of methods. Accordingly, the primary intent of this paper is to assimilate the wide range of works in VSLAM and present their recent advances, along with a discussion of the existing challenges and trends. This survey gives a big picture of the current research focuses in the robotics and VSLAM fields based on the objectives and solutions of the state of the art. The paper provides an in-depth literature survey of fifty impactful articles published in the VSLAM domain. These manuscripts are classified by different characteristics, including the novelty domain, objectives, employed algorithms, and semantic level. The paper also discusses the current trends and contemporary directions of VSLAM techniques that may help researchers investigate them.


Subject(s)
Algorithms, Robotics, Robotics/methods, Costs and Cost Analysis, Semantics
3.
Sensors (Basel); 23(1), 2022 Dec 24.
Article in English | MEDLINE | ID: mdl-36616782

ABSTRACT

Efficient localisation plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), contributing to improved control, safety, power economy, etc. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities to enhance the localisation of UAVs and UGVs. In this paper, we review radio frequency (RF)-based approaches to localisation. We review the RF features that can be utilized for localisation and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localisation for both UAVs and UGVs is examined, and the envisioned role of 5G NR in localisation enhancement, together with future research directions, is explored.
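
As a concrete illustration of the range-based category, the following Python sketch inverts the standard log-distance path-loss model to turn an RSSI measurement into a range estimate; the parameter values are illustrative and must be calibrated for each environment:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.7):
    """Invert the log-distance path-loss model:
    RSSI = tx_power - 10 * n * log10(d)  =>  d = 10^((tx_power - RSSI) / (10 n)).
    tx_power_dbm is the RSSI measured at 1 m; both parameters are
    environment-dependent and must be calibrated."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a reading of -67 dBm maps to an estimated range in metres.
print(round(rssi_to_distance(-67.0), 2))
```

Ranges from three or more anchors can then be combined by trilateration to obtain a position fix.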

4.
Sensors (Basel); 22(7), 2022 Mar 30.
Article in English | MEDLINE | ID: mdl-35408264

ABSTRACT

Unsupervised learning for monocular camera motion and 3D scene understanding has gained popularity over traditional methods, which rely on epipolar geometry or non-linear optimization. Notably, deep learning can overcome many issues of monocular vision, such as perceptual aliasing, low-textured areas, scale drift, and degenerate motions. In addition, unlike supervised learning, it can fully leverage video stream data without the need for depth or motion labels. However, in this work, we note that rotational motion can limit the accuracy of unsupervised pose networks more than the translational component. Therefore, we present RAUM-VO, an approach based on a model-free epipolar constraint for frame-to-frame motion estimation (F2F) to adjust the rotation during training and online inference. To this end, we match 2D keypoints between consecutive frames using the pre-trained deep networks SuperPoint and SuperGlue, while training a network for depth and pose estimation using an unsupervised training protocol. Then, we adjust the predicted rotation with the motion estimated by F2F from the 2D matches, initializing the solver with the pose network prediction. Ultimately, RAUM-VO shows a considerable accuracy improvement over other unsupervised pose networks on the KITTI dataset, while reducing the complexity of other hybrid or traditional approaches and achieving comparable state-of-the-art results.
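
A rough sketch of the frame-to-frame (F2F) step using OpenCV's essential-matrix solvers follows; here `pts1`/`pts2` stand in for the SuperPoint/SuperGlue correspondences, the intrinsics `K` are assumed known, and the pose-network initialization described in the paper is omitted:

```python
import cv2
import numpy as np

def f2f_rotation(pts1, pts2, K):
    """Estimate frame-to-frame rotation from matched 2D keypoints
    (pts1, pts2: Nx2 float arrays) via the essential matrix.
    Sketch only: the paper additionally initializes the solver with the
    pose-network prediction, which is not reproduced here."""
    # Five-point solver inside a RANSAC loop to reject outlier matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E and disambiguate via cheirality (points in front of camera).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation matrix and unit-norm translation direction
```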


Subject(s)
Motion (Physics)
5.
Sensors (Basel); 20(13), 2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32635375

ABSTRACT

The automatic detection of eye positions, their temporal consistency, and their mapping into a line of sight in the real world (to find where a person is looking) is known in the scientific literature as gaze tracking. It has become a very active topic in computer vision over the last decades, with a continuously growing number of application fields. A long journey has been made since the first pioneering works, and the search for more accurate solutions has been further boosted in the last decade, when deep neural networks revolutionized machine learning as a whole, and gaze tracking with it. In this arena, it is increasingly useful to find guidance in survey/review articles that collect the most relevant works, state the pros and cons of existing techniques, and introduce a precise taxonomy. Such manuscripts allow researchers and practitioners to choose the best path towards their application or scientific goals. The literature contains holistic and technology-specific surveys (even if not up to date), but, unfortunately, no overview discusses how the great advancements in computer vision have impacted gaze tracking. This work attempts to fill this gap, introducing a wider point of view that leads to a new taxonomy (extending the consolidated ones) by considering gaze tracking as a broader task that aims at estimating the gaze target from different perspectives: from the eye of the beholder (first-person view), from an external camera framing the beholder, from a third-person view looking at the scene in which the beholder is placed, and from an external view independent of the beholder.


Subject(s)
Eye Movements, Eye-Tracking Technology/instrumentation, Eye, Ocular Fixation, Computers, Humans, Computer Neural Networks
6.
Sensors (Basel); 16(3), 2016 Mar 11.
Article in English | MEDLINE | ID: mdl-26978365

ABSTRACT

Autonomous route following with road vehicles has gained popularity in the last few decades. To provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches rely on quite sophisticated and expensive sensors, and hence the development of a cost-efficient solution remains a challenging problem. This work proposes the use of a single monocular camera for automatic steering control, driver speed assistance, and localization of the vehicle on the road. Herein, we assume that the vehicle mainly travels along a predefined path, as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without human intervention or interruption.
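
A minimal sketch of how such a painted guide line might be detected with classical computer vision is shown below; the thresholds are placeholder values and the paper's actual pipeline may differ:

```python
import cv2
import numpy as np

def detect_guide_line(bgr_frame):
    """Return line segments that could belong to a painted guide line.
    Illustrative pipeline: grayscale -> blur -> edge detection -> Hough
    transform. All thresholds are placeholders, not the paper's values."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=40, maxLineGap=10)
    return segments  # Nx1x4 array of (x1, y1, x2, y2), or None
```

The lateral offset of the detected segments from the image centre could then feed a steering controller.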

7.
Sensors (Basel); 15(12): 31362-91, 2015 Dec 12.
Article in English | MEDLINE | ID: mdl-26703597

ABSTRACT

Poaching is an illegal activity that remains out of control in many countries. According to the 2014 report of the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year, which even helps to fund armed conflicts. Poaching activities around the world are pushing many animal species to the brink of extinction. Unfortunately, traditional methods to fight poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new sensor and algorithm technologies, as well as aerial platforms, is crucial to face the sharp increase in poaching activities in the last few years. Our work focuses on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing.

8.
Sci Rep; 14(1): 11968, 2024 May 25.
Article in English | MEDLINE | ID: mdl-38796556

ABSTRACT

This study presents a novel framework that integrates the universal jamming gripper (UG) with unmanned aerial vehicles (UAVs) to enable automated grasping with no human operator in the loop. Grounded in the principles of granular jamming, the UG exhibits remarkable adaptability and proficiency, navigating the complexities of soft aerial grasping with enhanced robustness and versatility. Central to this integration is a uniquely formulated constrained trajectory optimization using model predictive control, coupled with a robust force control strategy, increasing the level of automation and operational reliability in aerial grasping. This control structure, while simple, is a powerful tool for various applications, ranging from material handling to disaster response, and marks an advancement toward genuine autonomy in aerial manipulation tasks. The key contribution of this research is the combination of a UG with a suitable control strategy, that can be kept relatively straightforward thanks to the mechanical intelligence built into the UG. The algorithm is validated through numerical simulations and virtual experiments.
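
To illustrate the flavor of constrained trajectory optimization via model predictive control, the following is a toy receding-horizon sketch for a 1D double integrator (using the cvxpy library); it is a generic example, not the paper's formulation:

```python
import cvxpy as cp
import numpy as np

# Toy MPC for a 1D double integrator (position, velocity). Generic
# constrained-trajectory sketch only; not the paper's UAV/UG model.
dt, N = 0.1, 20
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
x0 = np.array([0.0, 0.0])       # current state
x_goal = np.array([1.0, 0.0])   # e.g., hover above the grasp target

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    # Penalize distance to goal and control effort.
    cost += cp.sum_squares(x[:, k + 1] - x_goal) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 2.0]  # actuator limit
cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[:, 0])  # apply only the first input, then re-solve (receding horizon)
```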

9.
Heliyon; 10(11): e31831, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38947485

ABSTRACT

Conventional solutions for wastewater collection focus on reducing overflow events in the sewage network, which can be achieved by adapting sewer infrastructure or, as a more cost-effective alternative, by implementing a non-engineering management solution. The state-of-the-art solution is centered on Real-Time Control (RTC), which is already having a positive impact on the environment by decreasing the volume of wastewater discharged into receiving waters. Researchers have continued upgrading RTC solutions for sewage systems, and a new approach, although rudimentary at first, was introduced in 1997: Pollution-based RTC (P-RTC), which explicitly adds water quality (concentration or load) information to the RTC algorithm. Formally, P-RTC encompasses several control methodologies that use a measurement or estimation of the concentration (e.g., COD or ammonia) of the sewage throughout the network. The use of P-RTC can result in better control performance, with a reduction in the concentration of overflowing wastewater accompanied by an increase in the concentration of sewage arriving at the Wastewater Treatment Plant (WWTP). The literature reveals that P-RTC can be differentiated by: (1) implementation method, (2) how water quality is incorporated, and (3) overall control objectives. Additionally, this paper evaluates the hydrological models used for P-RTC. The objective of this paper is to compile relevant research in pollution-based modelling and real-time control of sewage systems, explaining the general concepts within each P-RTC category and their differences.
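
To illustrate P-RTC in its simplest rule-based form, the following toy controller spills less when the measured COD is high; the thresholds and the rule itself are illustrative assumptions, not any specific published controller:

```python
def overflow_spill(inflow_m3s, cod_mg_l,
                   interceptor_capacity_m3s=1.0,
                   cod_threshold_mg_l=300.0):
    """Toy pollution-based rule for one overflow structure.
    When inflow exceeds the interceptor capacity, spill the dilute excess
    freely, but hold back most of the excess when the measured COD is high
    (store polluted water for later treatment at the WWTP).
    Purely illustrative; real P-RTC is typically model/optimization-based."""
    excess = max(0.0, inflow_m3s - interceptor_capacity_m3s)
    if excess == 0.0:
        return 0.0              # everything fits in the interceptor
    if cod_mg_l > cod_threshold_mg_l:
        return 0.3 * excess     # polluted: keep most excess in storage
    return excess               # dilute: spill the full excess

print(overflow_spill(1.4, 450.0))  # -> 0.12 m3/s spilled
```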

10.
Light Sci Appl; 11(1): 309, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36284089

ABSTRACT

The seemingly simple step of molding a cholesteric liquid crystal into a spherical shape, yielding a Cholesteric Spherical Reflector (CSR), has profound optical consequences that open a range of opportunities for potentially transformative technologies. The chiral Bragg diffraction resulting from the helical self-assembly of cholesterics becomes omnidirectional in CSRs. This turns them into selective retroreflectors that are exceptionally easy to distinguish, regardless of background, by simple and low-cost machine vision, while at the same time they can be made largely imperceptible to human vision. This allows them to be distributed in human-populated environments, laid out in the form of QR-code-like markers that help robots and Augmented Reality (AR) devices operate reliably and identify items in their surroundings. At the scale of individual CSRs, unpredictable features within each marker turn them into Physical Unclonable Functions (PUFs), of great value for secure authentication. Via the machines reading them, CSR markers can thus act as trustworthy yet unobtrusive links between the physical world (buildings, vehicles, packaging, ...) and its digital twin computer representation. This opens opportunities to address pressing challenges in logistics and supply chain management, recycling and the circular economy, sustainable construction of the built environment, and many other fields of individual, societal, and commercial importance.

11.
J Imaging; 6(8), 2020 Aug 04.
Article in English | MEDLINE | ID: mdl-34460693

ABSTRACT

The spread of Unmanned Aerial Vehicles (UAVs) in the last decade has revolutionized many application fields. Most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module builds semantic knowledge by leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information concerning what is sensed. All in all, object detection is undoubtedly the most important low-level task, and the sensors most employed to accomplish it are by far RGB cameras, due to their cost, dimensions, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions for UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by state-of-the-art works rather than by hardware, physical, and/or technological constraints.
