ABSTRACT
Robots with bio-inspired locomotion systems, such as quadruped robots, have recently attracted significant scientific interest, especially those designed to tackle missions in unstructured terrains, as in search-and-rescue robotics. At the same time, artificial intelligence systems have allowed the locomotion capabilities of these robots to be improved and adapted to specific terrains, imitating the natural behavior of quadruped animals. The main contribution of this work is a method to adjust adaptive gait patterns to overcome unstructured terrains with the ARTU-R (A1 Rescue Task UPM Robot) quadruped robot, based on a central pattern generator (CPG) and on the automatic identification of terrain and characterization of its obstacles (number, size, position and superability analysis) through convolutional neural networks for pattern regulation. To develop this method, a study of dog gait patterns was carried out, with validation and adjustment through simulation on the robot model in ROS-Gazebo and subsequent transfer to the real robot. Outdoor tests were carried out to evaluate and validate the efficiency of the proposed method in terms of its percentage of success in overcoming stretches of unstructured terrains, as well as the kinematic and dynamic variables of the robot. The main results show that the proposed method achieves an efficiency of over 93% for terrain characterization (terrain identification, segmentation and obstacle characterization) and over 91% success in overcoming unstructured terrains. The proposed method was also compared against the main state-of-the-art developments and benchmark models.
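As an illustrative aside, a CPG of the kind mentioned in the abstract above is commonly built from coupled limit-cycle oscillators. The following minimal sketch integrates a single Hopf oscillator, a standard CPG building block, with Euler steps; the parameters and the single-oscillator setup are assumptions for illustration, not taken from the ARTU-R implementation.

```python
import math

def hopf_cpg_step(x, y, dt, mu=1.0, omega=2.0 * math.pi):
    """One Euler step of a Hopf oscillator, a common CPG building block.

    mu sets the squared amplitude of the limit cycle and omega the
    stride frequency (rad/s). Both values here are illustrative.
    """
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

def simulate(steps=5000, dt=0.001):
    # Integrate from a small initial state; the trajectory converges
    # onto the limit cycle of radius sqrt(mu) = 1.
    x, y = 0.1, 0.0
    for _ in range(steps):
        x, y = hopf_cpg_step(x, y, dt)
    return x, y
```

In a full gait generator, one such oscillator would drive each leg's phase, and the terrain-classification stage described in the abstract would modulate mu and omega to adapt stride amplitude and frequency.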
ABSTRACT
The presence of sinkholes has been widely studied due to their potential risk to infrastructure and to the lives of inhabitants and rescuers in urban disaster areas, a topic generally addressed in geotechnics and geophysics. In recent years, robotics has gained importance for the inspection and assessment of areas at potential risk of sinkhole formation, as well as for environmental exploration and post-disaster assistance. From a mobile robotics approach, this paper proposes RUDE-AL (Roped UGV DEployment ALgorithm), a methodology for deploying a Mobile Cable-Driven Parallel Robot (MCDPR) composed of four mobile robots and a cable-driven parallel robot (CDPR) for sinkhole exploration tasks and assistance to potentially trapped victims. The fleet is deployed in a node-edge formation during the mission's first stage, positioning itself around the area of interest and acting as anchors for the subsequent release of the cable robot. One of the relevant issues considered in this work is the selection of target points for the mobile robots (anchors) subject to the constraints of a roped fleet. Collisions of the cables with positive obstacles are avoided through a fitness function, optimized with genetic algorithms, that maximizes the covered area of the zone to explore and minimizes the route distance traveled by the fleet, generating feasible target routes for each mobile robot with a configurable balance between the parameters of the fitness function. The main results show a robust method whose fitness function is affected by the number of positive obstacles near the area of interest and the shape characteristics of the sinkhole.
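The configurable balance between covered area and route cost described above can be pictured with a minimal fitness function of the kind a genetic algorithm would maximize. The weighting alpha, the polygonal coverage model, and the straight-line route cost are simplifying assumptions for illustration; RUDE-AL's actual formulation also accounts for cable-obstacle collisions, which are omitted here.

```python
import math

def polygon_area(pts):
    # Shoelace formula for the area enclosed by the anchor polygon.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def route_cost(starts, targets):
    # Straight-line travel cost of sending each robot to its target.
    return sum(math.dist(s, t) for s, t in zip(starts, targets))

def fitness(starts, targets, alpha=0.7):
    # alpha trades covered area against travel cost (the "configurable
    # balance" of the abstract; the exact weighting form is assumed).
    return alpha * polygon_area(targets) - (1.0 - alpha) * route_cost(starts, targets)
```

A GA would evolve candidate target sets, score each with `fitness`, and keep the best anchor placements around the sinkhole.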
ABSTRACT
In recent years, legged (quadruped) robots have been the subject of continuous technological study and development. These robots play a leading role in applications that require high mobility on complex terrain, as is the case in Search and Rescue (SAR). They stand out for their ability to adapt to different terrains, overcome obstacles and move within unstructured environments. Most recently developed implementations focus on data collection with sensors such as lidar or cameras. This work seeks to integrate a 6-DoF manipulator arm into the quadruped robot ARTU-R (A1 Rescue Tasks UPM Robot) by Unitree to perform manipulation tasks in SAR environments. The main contribution of this work is the high-level control of the robotic set (legged robot + manipulator) using Mixed Reality (MR). An optimization phase of the robotic set's workspace was previously developed in MATLAB, as well as a simulation phase in Gazebo to verify the dynamic functionality of the set in reconstructed environments. The first and second generations of HoloLens glasses were used and contrasted with a conventional interface to develop the MR control part of the proposed method. Manipulations of first-aid equipment were carried out to evaluate the proposed method. The main results show that the proposed method allows better control of the robotic set than conventional interfaces, improving operator efficiency in robotic handling tasks and increasing confidence in decision-making. In addition, HoloLens 2 showed a better user experience in terms of graphics and latency.
Subjects
Augmented Reality, Robotics, Robotics/methods, Computer Simulation, Upper Extremity
ABSTRACT
The development of new sensory and robotic technologies in recent years and the increase in the consumption of organic vegetables have enabled specific applications in precision agriculture that seek to satisfy market demand. This article analyzes the use and advantages of specific optical sensory systems for data acquisition and processing in precision agriculture, applied to a robotic fertilization process. The SUREVEG project evaluates the benefits of growing vegetables in rows using different technological tools, such as sensors, embedded systems, and robots. For this purpose, a robotic platform was developed consisting of three SICK AG LMS100 laser scanners, multispectral and RGB sensors, and a robotic arm equipped with a fertilization system. Tests were conducted with the robotic platform in cabbage and red cabbage crops; the information captured with the different sensors allowed the crop rows to be reconstructed and fertilization information to be extracted for the robotic arm. The main advantages of each sensor were analyzed through a quantitative comparison based on the information provided by each one, such as the Normalized Difference Vegetation Index (NDVI), RGB histograms, and point cloud clusters. The Robot Operating System (ROS) processes this information to generate trajectory planning for the robotic arm and apply individual treatment to the plants. The main results show that vegetable characterization was carried out with an efficiency of 93.1% using point cloud processing, while vegetable detection obtained an error of 4.6% through RGB images.
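For reference, the Normalized Difference Vegetation Index mentioned above is computed per pixel from the near-infrared and red reflectance bands; healthy vegetation reflects strongly in NIR and absorbs red, pushing the index toward +1. A minimal sketch:

```python
def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red) for one pixel (or band means).

    Inputs are reflectance values in [0, 1]; eps guards against a
    division by zero on dark pixels.
    """
    return (nir - red) / (nir + red + eps)
```

In a multispectral pipeline like the one described, this would be mapped over the co-registered NIR and red images to produce a vegetation map for segmentation.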
ABSTRACT
Technological breakthroughs in recent years have led to a revolution in fields such as machine vision and Search and Rescue (SAR) robotics, thanks to the application and development of new and improved neural network vision models together with modern optical sensors incorporating thermal cameras, capable of capturing data in post-disaster environments (PDE) under harsh conditions (low luminosity, suspended particles, obstructive materials). Due to the high risk posed by PDE because of potential structural collapse, electrical hazards, gas leakage, etc., primary intervention tasks such as victim identification are carried out by robotic teams equipped with specific sensors such as thermal cameras, RGB cameras, and lasers. The application of Convolutional Neural Networks (CNN) to computer vision is a breakthrough for detection algorithms. Conventional methods for victim identification in these environments use RGB image processing or trained dogs, but detection with RGB images is inefficient in the absence of light or the presence of debris; on the other hand, developments with thermal images have been limited to the field of surveillance. This paper's main contribution is the implementation of a novel automatic method based on thermal image processing and CNN for victim identification in PDE, using a robotic system in which a quadruped robot captures data and transmits it to the central station. The robot's automatic data processing and control were carried out through the Robot Operating System (ROS). Several tests were carried out in different environments to validate the proposed method, recreating PDE with varying light conditions, from which datasets were generated for the training of three neural network models (Fast R-CNN, SSD, and YOLO).
The method was tested against another CNN-based method using RGB images for the same task, showing greater effectiveness in PDE. The main results show that the proposed method has an efficiency greater than 90%.
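To build intuition for why thermal imagery helps in dark or debris-filled scenes, the sketch below finds connected regions of a thermal image whose temperature lies in a band typical of human body heat. This is deliberately naive thresholding plus flood fill; the paper's actual detectors are trained CNNs (Fast R-CNN, SSD, YOLO), and the thresholds here are illustrative assumptions.

```python
def hot_regions(img, t_lo=30.0, t_hi=40.0):
    """Label 4-connected regions of a 2D temperature grid (degrees C)
    falling inside a human-body-heat band. Returns a list of regions,
    each a list of (row, col) pixels."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or not (t_lo <= img[r][c] <= t_hi):
                continue
            # Flood-fill one warm blob starting at (r, c).
            stack, blob = [(r, c)], []
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                blob.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and t_lo <= img[ny][nx] <= t_hi):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            regions.append(blob)
    return regions
```

Unlike an RGB pipeline, this signal is available with no ambient light at all, which is the core advantage the abstract exploits.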
Subjects
Rescue Work, Robotics, Algorithms, Animals, Dogs, Computer-Assisted Image Processing, Computer Neural Networks
ABSTRACT
Hyper-redundant robots are highly articulated devices that present numerous technical challenges in their design, control and remote operation. However, they offer superior kinematic capabilities compared to traditional robots in multiple applications. This work proposes an original, custom-made design for a discrete hyper-redundant manipulator. It comprises 7 sections actuated by cables, with 14 degrees of freedom, and has been optimized to be robust, accurate and capable of moving payloads with high dexterity. Furthermore, it has been efficiently controlled from the actuator level up to high-level strategies based on the management of its shape. However, such highly articulated systems often exhibit complex shapes that frustrate their spatial understanding. Immersive technologies emerge as a good solution to remotely and safely teleoperate the presented robot in an inspection task in a hazardous environment. Experimental results validate the proposed robot design and control strategies. It is concluded that hyper-redundant robots and immersive technologies should play an important role in the near future of automated and remote applications.
ABSTRACT
A crop monitoring system was developed for the supervision of organic fertilization status in tomato plants at early stages. An automatic, nondestructive approach was used to analyze tomato plants given different levels of water-soluble organic fertilizer (3 + 5 NK) and vermicompost. The evaluation system was composed of a multispectral camera with five lenses (green, 550 nm; red, 660 nm; red edge, 735 nm; near infrared, 790 nm; and RGB) and a computational image processing system. The water-soluble fertilizer was applied weekly in four different treatments (T0: 0 mL, T1: 6.25 mL, T2: 12.5 mL and T3: 25 mL), and the vermicompost was added in Weeks 1 and 5. The trial was conducted in a greenhouse and 192 images were taken with each lens. A plant segmentation algorithm was developed and several vegetation indices were calculated. In addition to these indices, multiple morphological features were obtained through image processing techniques. The morphological features proved more effective than the vegetation indices at distinguishing between the control and the organically fertilized plants. The system was developed to be assembled on a precision organic fertilization robotic platform.
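Morphological features of the kind the study found discriminative can be extracted directly from the binary plant mask produced by segmentation. The sketch below computes area, centroid and bounding box from a 0/1 mask; the specific features used in the paper are not listed in the abstract, so this selection is an illustrative assumption.

```python
def mask_features(mask):
    """Simple morphological features of a binary plant mask
    (list of rows of 0/1 values): pixel area, centroid, bounding box."""
    pix = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pix:
        return None  # empty mask: no plant segmented
    rs = [p[0] for p in pix]
    cs = [p[1] for p in pix]
    area = len(pix)
    return {
        "area": area,
        "centroid": (sum(rs) / area, sum(cs) / area),
        "bbox": (min(rs), min(cs), max(rs), max(cs)),
    }
```

Tracking how such features evolve week to week is what lets the system separate fertilization treatments without destructive sampling.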
Subjects
Fertilizers, Computer-Assisted Image Processing, Solanum lycopersicum/anatomy & histology, Spectral Analysis, Algorithms, Linear Models, Probability, Robotics
ABSTRACT
Aerial robotic swarms have shown benefits for performing search and surveillance missions in open spaces in the past. Among other properties, these systems are robust, scalable and adaptable to different scenarios. In this work, we propose a behavior-based algorithm to carry out a surveillance task in a rectangular area with a flexible number of quadcopters flying at different speeds. Once the efficiency of the algorithm is quantitatively analyzed, the robustness of the system is demonstrated with 3 different tests: loss of broadcast messages, positioning errors, and failure of half of the agents during the mission. Experiments are carried out in an indoor arena with micro quadcopters to support simulation results. Finally, a case study is proposed to show a realistic implementation in the test bed.
ABSTRACT
This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, these robots are underdetermined systems. Therefore, they exhibit an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD), named natural-CCD, is proposed to solve this issue. It takes its name from the fact that it generates very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant robots and continuum soft robots.
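The classic CCD baseline that natural-CCD builds on fits in a few lines for a planar chain: sweep the joints from tip to base, rotating each so the end effector swings toward the target. The sketch below assumes equal unit link lengths and revolute joints; the natural-CCD refinements (harmonious golden-spiral-like motion, prismatic joints, 3D operation) are not shown.

```python
import math

def fk(thetas, link=1.0):
    """Forward kinematics of a planar chain with equal link lengths.
    Returns all joint positions, base first, end effector last."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for t in thetas:
        a += t
        x += link * math.cos(a)
        y += link * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_ik(thetas, target, iters=100, tol=1e-4):
    """Classic cyclic coordinate descent for a planar revolute chain."""
    thetas = list(thetas)
    for _ in range(iters):
        if math.dist(fk(thetas)[-1], target) < tol:
            break
        for i in range(len(thetas) - 1, -1, -1):
            pts = fk(thetas)
            jx, jy = pts[i]          # current joint position
            ex, ey = pts[-1]         # current end-effector position
            # Rotate joint i so the effector direction aligns with
            # the direction toward the target.
            a_e = math.atan2(ey - jy, ex - jx)
            a_t = math.atan2(target[1] - jy, target[0] - jx)
            thetas[i] += a_t - a_e
    return thetas
```

Because each joint update is a closed-form angle correction, the method scales gracefully to many degrees of freedom, which is what makes the CCD family attractive for hyper-redundant arms.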
ABSTRACT
Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can range from inefficiencies to accidents. This work focuses on the study of future operator interfaces for multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated through the performance of twenty-four operators who supervised eight multi-robot fire surveillance and extinguishing missions. The results of the workload and situational awareness tests show that virtual reality improves situational awareness without increasing operator workload, whereas the effects of predictive components are not significant and depend on their implementation.
ABSTRACT
Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability for reaching and easily covering large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, they currently lack the capability to act. Ground vehicles, on the other hand, have enough power to perform interventions but face more mobility constraints. This paper proposes a multi-robot sense-act system composed of aerial and ground vehicles. This combination allows autonomous tasks to be performed in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment, and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system and proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved suitable for reducing the amount of herbicide applied in agricultural treatments. Although herbicides are very polluting, they are massively deployed on complete agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map, which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except when required by Spanish law for safety reasons.
Subjects
Aircraft, Environmental Monitoring/methods, Robotics/methods, Wireless Technology, Agriculture, Herbicides, Humans, Weeds, Software
ABSTRACT
The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. Their control and monitoring may require large sensor networks; as a consequence, mobile sensory systems may be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, seeking to provide flexibility and improve performance. The multi-robot sensory system measures the temperature, humidity, luminosity and carbon dioxide concentration at ground level and at different heights. Nevertheless, these measurements can be complemented with others (e.g., the concentration of various gases or images of crops) without considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as mission planning and task allocation; the guidance, navigation and control of robots in greenhouses; and the coordination between ground and aerial vehicles. This work has an eminently practical approach, and therefore the system has been extensively tested both in simulations and in field experiments.
ABSTRACT
This paper describes the design, construction and validation of a mobile sensory platform for greenhouse monitoring. The complete system consists of a sensory system on board a small quadrotor (i.e., a four-rotor mini-UAV). The goals of this system include taking measures of temperature, humidity, luminosity and CO2 concentration and plotting maps of these variables. These features could potentially allow for climate control, crop monitoring or failure detection (e.g., a break in a plastic cover). The sensors have been selected by considering the climate and plant growth models and the requirements for their integration onboard the quadrotor. The sensor layout and placement have been determined through a study of quadrotor aerodynamics and the influence of the airflows from its rotors. All components of the system have been developed, integrated and tested through a set of field experiments in a real greenhouse. The primary contributions of this paper are the validation of the quadrotor as a platform for measuring environmental variables and the determination of the optimal location of sensors on a quadrotor.
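The variable maps mentioned above can be produced, in their simplest form, by binning the quadrotor's georeferenced point measurements into grid cells and averaging. The sketch below is a minimal stand-in under that assumption; the paper's actual mapping and interpolation method is not specified in the abstract.

```python
def grid_map(samples, cell=1.0):
    """Average point measurements into grid cells.

    samples: iterable of (x, y, value) tuples, e.g. temperature
    readings tagged with the quadrotor's position.
    Returns {(col, row): mean_value} keyed by cell index.
    """
    acc = {}
    for x, y, v in samples:
        key = (int(x // cell), int(y // cell))
        s, n = acc.get(key, (0.0, 0))
        acc[key] = (s + v, n + 1)
    return {k: s / n for k, (s, n) in acc.items()}
```

Flying the greenhouse at several heights and repeating this per variable yields the layered temperature, humidity, luminosity and CO2 maps the system targets.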
Subjects
Air Conditioning, Carbon Monoxide/isolation & purification, Plant Development, Remote Sensing Technology, Climate, Humans, Humidity, Plastics, Temperature
ABSTRACT
In this study, a device based on patient motion capture is developed for the reliable and non-invasive diagnosis of neurodegenerative diseases. The primary objective of this study is the differential diagnosis between Parkinson's disease (PD) and essential tremor (ET). The DIMETER system has been used in the diagnosis of a significant number of patients at two medical centers in Spain. Research studies on classification have primarily focused on the use of well-known and reliable diagnostic criteria developed by qualified personnel. Here, we first present a literature review of the methods used to detect and evaluate tremor; then, we describe the DIMETER device in terms of the software and hardware used and the battery of tests developed to obtain the best diagnoses. All of the tests are classified and described in terms of the characteristics of the data obtained. A list of parameters obtained from the tests is provided, and the results obtained using multilayer perceptron (MLP) neural networks are presented and analyzed.
Subjects
Neurologic Diagnostic Techniques/instrumentation, Neurodegenerative Diseases/diagnosis, Software, Touch, Tremor/diagnosis, Biomechanical Phenomena, Humans, Movement, Computer Neural Networks, Automated Pattern Recognition
ABSTRACT
Micro Electro-Mechanical Systems (MEMS) are currently being considered in the space sector due to their suitable level of performance for spacecraft in terms of mechanical robustness, low power consumption, small mass and size, and significant advantages in system design and accommodation. However, there is still a lack of understanding regarding the performance and testing of these new sensors, especially in planetary robotics. This paper presents what has been missing in the field: a complete methodology for the characterization and modeling of MEMS sensors with direct application. A reproducible and complete approach is described, including all the intermediate steps, tools and laboratory equipment. The process, from sensor error characterization and modeling through to the final integration in the sensor fusion scheme, is explained in detail. Although the concept of fusion is relatively easy to comprehend, carefully characterizing and filtering sensor information is not an easy task and is essential for good performance. The strength of the approach has been verified with representative tests of novel high-grade MEMS inertial sensors and exemplary planetary rover platforms, with promising results.
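A minimal sketch of the fusion idea discussed above, assuming a complementary filter that blends a drifting but smooth gyro rate with a noisy but drift-free accelerometer tilt estimate. The blending weight alpha is illustrative; the paper's actual characterization-driven fusion scheme is considerably more elaborate (e.g., Kalman-style filtering informed by the modeled sensor errors).

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate (rad/s, drifts over time) with accelerometer
    tilt (rad, noisy but unbiased) into one attitude estimate.

    alpha near 1 trusts the integrated gyro short-term while the
    accelerometer slowly corrects the drift.
    """
    angle = accel_angles[0]  # initialize from the absolute sensor
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return out
```

With zero gyro drift and a constant true tilt, the estimate stays locked to the accelerometer reference, which is the behavior the blend is designed to preserve while smoothing accelerometer noise.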