Results 1 - 6 of 6
1.
Sensors (Basel) ; 23(14)2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37514781

ABSTRACT

The presence of sinkholes has been widely studied due to their potential risk to infrastructure and to the lives of inhabitants and rescuers in urban disaster areas, a problem generally addressed in geotechnics and geophysics. In recent years, robotics has gained importance for the inspection and assessment of areas at risk of sinkhole formation, as well as for environmental exploration and post-disaster assistance. From a mobile robotics approach, this paper proposes RUDE-AL (Roped UGV DEployment ALgorithm), a methodology for deploying a Mobile Cable-Driven Parallel Robot (MCDPR) composed of four mobile robots and a cable-driven parallel robot (CDPR) for sinkhole exploration tasks and assistance to potentially trapped victims. During the mission's first stage, the fleet deploys in a node-edge formation, positioning itself around the area of interest and acting as anchors for the subsequent release of the cable robot. A key issue considered in this work is the selection of target points for the mobile robots (anchors) under the constraints of a roped fleet, avoiding collisions between the cables and positive obstacles. A genetic algorithm optimizes a fitness function that maximizes the covered area of the zone to explore and minimizes the route distance traveled by the fleet, generating feasible target routes for each mobile robot with a configurable balance between the two objectives. The main results show a robust method whose fitness is affected by the number of positive obstacles near the area of interest and by the shape characteristics of the sinkhole.
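The anchor-placement trade-off described above can be sketched as a weighted fitness function that a genetic algorithm would maximize. The sensing radius, the weights and the coverage model below are illustrative assumptions, not RUDE-AL's exact formulation.

```python
import math

def fitness(anchor_positions, area_points, w_cov=0.7, w_dist=0.3):
    """Toy fitness balancing covered area against route distance.

    anchor_positions: candidate (x, y) target points for the anchor robots.
    area_points: sample points of the zone to explore.
    The weights w_cov / w_dist give the configurable balance between
    the two objectives mentioned in the abstract (values assumed here).
    """
    radius = 5.0  # assumed sensing radius around each anchor
    # Coverage: fraction of area samples within range of any anchor.
    covered = sum(
        1 for p in area_points
        if any(math.dist(p, a) <= radius for a in anchor_positions)
    )
    coverage = covered / len(area_points)

    # Route cost: straight-line distance from a common deployment origin.
    start = (0.0, 0.0)
    route_cost = sum(math.dist(start, a) for a in anchor_positions)
    cost_norm = route_cost / (route_cost + 1.0)  # squash into [0, 1)

    return w_cov * coverage - w_dist * cost_norm
```

A GA would evaluate this over a population of candidate anchor placements, keeping those that cover the sinkhole rim without long approach routes or cable-obstacle collisions.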

2.
Sensors (Basel) ; 22(21)2022 Oct 24.
Article in English | MEDLINE | ID: mdl-36365844

ABSTRACT

In recent years, legged (quadruped) robots have been the subject of continuous technological study and development. These robots play a leading role in applications that require high mobility in complex terrain, as is the case of Search and Rescue (SAR). They stand out for their ability to adapt to different terrains, overcome obstacles and move within unstructured environments. Most recently developed implementations focus on data collection with sensors such as lidar or cameras. This work integrates a 6-DoF manipulator arm into the quadruped robot ARTU-R (A1 Rescue Tasks UPM Robot) by Unitree to perform manipulation tasks in SAR environments. The main contribution of this work is the high-level control of the robotic set (legged robot + manipulator) using Mixed Reality (MR). For the implementation, an optimization phase of the robotic set's workspace was previously developed in Matlab, as well as a simulation phase in Gazebo to verify the dynamic functionality of the set in reconstructed environments. The first and second generations of HoloLens glasses were used, and contrasted with a conventional interface, to develop the MR control part of the proposed method. Manipulations of first-aid equipment were carried out to evaluate the proposed method. The main results show that the proposed method allows better control of the robotic set than conventional interfaces, improving operator efficiency in robotic handling tasks and increasing confidence in decision-making. In addition, HoloLens 2 showed a better user experience in terms of graphics and latency.


Subjects
Augmented Reality, Robotics, Robotics/methods, Computer Simulation, Upper Extremity
3.
Sensors (Basel) ; 22(14)2022 Jul 07.
Article in English | MEDLINE | ID: mdl-35890800

ABSTRACT

One of the most relevant problems in Unmanned Aerial Vehicle (UAV) autonomous navigation for industrial inspection is localization, or pose estimation relative to significant elements of the environment. This paper analyzes two different approaches in this regard, focusing on their application to unstructured scenarios containing objects of considerable size, such as a truck, a wind tower, an airplane or a building. Both methods require a previously developed Computer-Aided Design (CAD) model of the main object to be inspected. The first approach builds an occupancy map from a horizontal projection of this CAD model and uses the Adaptive Monte Carlo Localization (AMCL) algorithm to reach convergence, considering the likelihood-field observation model between the 2D projection of 3D sensor data and the created map. The second approach uses a point-cloud prior map of the 3D CAD model and a scan-matching algorithm based on the Iterative Closest Point (ICP) algorithm and the Unscented Kalman Filter (UKF). The presented approaches have been extensively evaluated using simulated as well as previously recorded real flight data. We focus on aircraft inspection as a test example, but our results and conclusions extend directly to other applications; to support this assertion, a truck inspection has also been performed. Our tests showed that creating a 2D or 3D map from a standard CAD model and matching a 3D laser scan against the created maps can optimize processing time and resources and improve robustness. The techniques used to segment unexpected objects in 2D maps improved the performance of AMCL. In addition, we showed that moving around locations with relevant geometry after take-off enabled AMCL to converge faster and with high accuracy, so it could serve as an initial position estimate for other localization algorithms.
The ICP-NL method works well in environments with elements other than the object to inspect, but it can provide better results if techniques to segment the new objects are applied. Furthermore, the proposed ICP-NL scan-matching method together with the UKF performed faster and more robustly than the Normal Distributions Transform (NDT), and it is not affected by flight height. However, the ICP-NL error may still be too high for applications requiring increased accuracy.
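The scan-matching core of the second approach can be illustrated with one point-to-point ICP step: find nearest-neighbour correspondences, then solve the rigid alignment in closed form via SVD. This is a generic 2D sketch, not the paper's ICP-NL variant or its UKF integration.

```python
import numpy as np

def icp_step(source, target):
    """One point-to-point ICP alignment step in 2D.

    source, target: (N, 2) arrays of scan and map points.
    Returns the aligned source, the rotation R and translation t.
    """
    # Nearest-neighbour correspondences (brute force, for clarity).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]

    # Closed-form rigid alignment via SVD (Kabsch, no scaling).
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t, R, t
```

A full ICP loop repeats this step until the correspondences stop changing; a filter such as the UKF can then fuse the resulting pose with the vehicle's motion model.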

4.
Sensors (Basel) ; 21(21)2021 Nov 04.
Article in English | MEDLINE | ID: mdl-34770654

ABSTRACT

Technological breakthroughs in recent years have led to a revolution in fields such as machine vision and Search and Rescue (SAR) robotics, thanks to the development and application of new and improved neural network vision models together with modern optical sensors that incorporate thermal cameras, capable of capturing data in post-disaster environments (PDE) with harsh conditions (low luminosity, suspended particles, obstructive materials). Because PDE pose a high risk due to the potential collapse of structures, electrical hazards, gas leakage, etc., primary intervention tasks such as victim identification are carried out by robotic teams equipped with specific sensors such as thermal cameras, RGB cameras, and lasers. The application of Convolutional Neural Networks (CNN) to computer vision has been a breakthrough for detection algorithms. Conventional methods for victim identification in these environments use RGB image processing or trained dogs, but detection with RGB images is inefficient in the absence of light or in the presence of debris, while developments with thermal images have been limited to the field of surveillance. This paper's main contribution is a novel automatic method based on thermal image processing and CNN for victim identification in PDE, using a robotic system in which a quadruped robot captures data and transmits it to the central station. The robot's automatic data processing and control are carried out through the Robot Operating System (ROS). Several tests have been carried out in different environments to validate the proposed method, recreating PDE with varying light conditions, from which datasets have been generated to train three neural network models (Fast R-CNN, SSD, and YOLO).
The method has been tested against another CNN-based method using RGB images for the same task, showing greater effectiveness in PDE. The main results show that the proposed method achieves an efficiency greater than 90%.
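Before a CNN classifies candidates, a thermal frame can be pre-filtered for pixels in the human body-temperature band. The sketch below shows that idea only; the temperature thresholds and the single-blob bounding box are illustrative assumptions, not the trained detectors (Fast R-CNN, SSD, YOLO) used in the paper.

```python
import numpy as np

def thermal_candidates(frame, t_low=30.0, t_high=40.0):
    """Return a bounding box over pixels in the body-temperature band.

    frame: 2D array of per-pixel temperatures in degrees Celsius.
    Returns (x_min, y_min, x_max, y_max) or None if nothing is warm.
    Thresholds are assumed values for illustration.
    """
    mask = (frame >= t_low) & (frame <= t_high)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # Bounding box of the warm region (single-blob assumption).
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

In a full pipeline, each candidate box would be cropped and passed to the CNN for victim/non-victim classification.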


Subjects
Rescue Work, Robotics, Algorithms, Animals, Dogs, Computer-Assisted Image Processing, Computer Neural Networks
5.
Biomimetics (Basel) ; 8(3)2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37504177

ABSTRACT

Robots with bio-inspired locomotion systems, such as quadruped robots, have recently attracted significant scientific interest, especially those designed to tackle missions in unstructured terrains, such as search-and-rescue robotics. At the same time, artificial intelligence systems have allowed the locomotion capabilities of these robots to be improved and adapted to specific terrains, imitating the natural behavior of quadruped animals. The main contribution of this work is a method to adjust adaptive gait patterns to overcome unstructured terrains using the ARTU-R (A1 Rescue Task UPM Robot) quadruped robot, based on a central pattern generator (CPG) and on the automatic identification of the terrain and characterization of its obstacles (number, size, position and superability analysis) through convolutional neural networks for pattern regulation. To develop this method, a study of dog gait patterns was carried out, with validation and adjustment through simulation of the robot model in ROS-Gazebo and subsequent transfer to the real robot. Outdoor tests were carried out to evaluate and validate the efficiency of the proposed method in terms of its success rate in overcoming stretches of unstructured terrain, as well as the kinematic and dynamic variables of the robot. The main results show that the proposed method achieves over 93% efficiency in terrain characterization (terrain identification, segmentation and obstacle characterization) and over 91% success in overcoming unstructured terrains. The work was also compared against the main developments in the state of the art and against benchmark models.
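A CPG of the kind mentioned above can be sketched as coupled phase oscillators, one per leg, pulled toward fixed phase offsets that encode a gait (here a trot, where diagonal legs move together). The coupling gain, frequency and initial phases below are illustrative assumptions, not ARTU-R's tuned values.

```python
import math

def cpg_phases(n_legs=4, steps=200, dt=0.01, freq=2.0):
    """Minimal phase-oscillator CPG converging to a trot pattern.

    Legs ordered LF, RF, LH, RH; diagonal pairs share a phase and
    opposite pairs are offset by pi. Returns each leg's phase
    relative to leg 0 after the simulation.
    """
    offsets = [0.0, math.pi, math.pi, 0.0]  # trot phase offsets
    k = 2.0                                  # coupling gain (assumed)
    phases = [0.1, 0.2, 0.3, 0.4]            # arbitrary initial phases
    for _ in range(steps):
        new = []
        for i in range(n_legs):
            # Each leg advances at the gait frequency and is pulled
            # toward its desired offset relative to leg 0.
            err = (phases[0] + offsets[i]) - phases[i]
            new.append(phases[i] + dt * (2 * math.pi * freq + k * err))
        phases = new
    return [(p - phases[0]) % (2 * math.pi) for p in phases]
```

Terrain characterization would then modulate the frequency and offsets (e.g. switching from trot to walk on difficult stretches), which is the adaptive part of the method.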

6.
Front Robot AI ; 9: 808484, 2022.
Article in English | MEDLINE | ID: mdl-35572379

ABSTRACT

The development of new sensory and robotic technologies in recent years and the increase in the consumption of organic vegetables have allowed the generation of specific applications in precision agriculture that seek to satisfy market demand. This article analyzes the use and advantages of specific optical sensory systems for data acquisition and processing in precision agriculture for the robotic fertilization process. The SUREVEG project evaluates the benefits of growing vegetables in rows using different technological tools, such as sensors, embedded systems, and robots. For this purpose, a robotic platform has been developed consisting of three SICK AG LMS100 laser scanners, multispectral and RGB sensors, and a robotic arm equipped with a fertilization system. Tests were carried out with the robotic platform in cabbage and red-cabbage crops; the information captured with the different sensors allowed the crop rows to be reconstructed and information to be extracted for fertilization with the robotic arm. The main advantages of each sensory system have been analyzed through a quantitative comparison based on the information each one provides (Normalized Difference Vegetation Index, RGB histograms, point cloud clusters). The Robot Operating System (ROS) processes this information to generate trajectory planning for the robotic arm and apply individual treatment to plants. The main results show that vegetable characterization has been carried out with an efficiency of 93.1% using point cloud processing, while vegetable detection obtained an error of 4.6% with RGB images.
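The Normalized Difference Vegetation Index mentioned above follows the standard per-pixel definition NDVI = (NIR - Red) / (NIR + Red). The sketch below computes it from two multispectral bands; the 0.3 vegetation threshold is a common rule of thumb assumed here, not the project's calibrated value.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI from near-infrared and red band arrays.

    Values near +1 indicate dense vegetation, near 0 bare soil.
    eps avoids division by zero on dark pixels.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    index = (nir - red) / (nir + red + eps)
    vegetation_mask = index > 0.3  # assumed rule-of-thumb threshold
    return index, vegetation_mask
```

Masking the image with `vegetation_mask` isolates the crop-row pixels on which per-plant treatment decisions can then be made.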
