Results 1 - 20 of 26
1.
Sensors (Basel) ; 23(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36850936

ABSTRACT

Hazardous object detection (escalators, stairs, glass doors, etc.) and avoidance are critical functional safety modules for autonomous mobile cleaning robots. Conventional object detectors are less accurate at detecting low-feature hazardous objects, and their missed-detection and false-classification rates are high when the object is under occlusion. Missed detection or false classification of hazardous objects poses an operational safety issue for mobile robots. This work presents a deep-learning-based context-aware multi-level information fusion framework for autonomous mobile cleaning robots to detect and avoid hazardous objects with a higher confidence level, even when the object is under occlusion. First, an image-level contextual-encoding module was proposed and incorporated into the Faster RCNN ResNet 50 object detector to improve detection of low-feature and occluded hazardous objects in an indoor environment. Further, a safe-distance-estimation function was proposed to avoid hazardous objects; it computes the distance of the hazardous object from the robot's position and steers the robot into a safer zone using the detection results and object depth data. The proposed framework was trained with a custom image dataset using fine-tuning techniques and tested in real time with an in-house-developed mobile cleaning robot, BELUGA. The experimental results show that the proposed algorithm detected low-feature and occluded hazardous objects with a higher confidence level than the conventional object detector and scored an average detection accuracy of 88.71%.
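The safe-distance-estimation function is not detailed in the abstract; one plausible sketch — estimating the hazard's distance as the median depth inside its detection box and steering away below a threshold — is shown below. The function names, box format, and 1 m threshold are assumptions, not the paper's implementation:

```python
# Sketch of a safe-distance check: median depth inside a detection
# box approximates the hazard's distance; below a threshold, steer away.
from statistics import median

def hazard_distance(depth_map, box):
    """depth_map: 2D list of depths in metres; box: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    samples = [depth_map[y][x] for y in range(y1, y2) for x in range(x1, x2)]
    return median(samples)  # median is robust to depth noise and holes

def steer_command(depth_map, box, safe_distance_m=1.0):
    """Return 'avoid' when the detected hazard is inside the safety zone."""
    return "avoid" if hazard_distance(depth_map, box) < safe_distance_m else "proceed"
```

Any real implementation would also account for the robot's speed and the depth sensor's valid range before committing to a steering action.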

2.
Sci Rep ; 12(1): 15938, 2022 09 24.
Article in English | MEDLINE | ID: mdl-36153413

ABSTRACT

Floor-cleaning robots are widely used in public places like food courts, hospitals, and malls to perform frequent cleaning tasks. However, frequent cleaning adversely impacts the robot's performance and consumes more cleaning accessories (such as brushes, scrubbers, and mopping pads). This work proposes a novel selective-area/spot-cleaning framework for indoor floor-cleaning robots using an RGB-D vision-sensor-based Closed-Circuit Television (CCTV) network, deep learning algorithms, and an optimal complete-waypoint path planning method. In this scheme, the robot cleans only dirty areas instead of the whole region. The selective-area/spot-cleaning region is identified by combining two strategies: tracing human traffic patterns and detecting stains and trash on the floor. Here, a deep Simple Online and Realtime Tracking (SORT) human tracking algorithm was used to trace high-human-traffic regions, and the Single Shot Detector (SSD) MobileNet object detection framework was used to detect dirty regions. Further, optimal shortest-waypoint coverage path planning using evolutionary-based optimization was incorporated to traverse the robot efficiently to the designated selective-area/spot-cleaning regions. The experimental results show that the SSD MobileNet algorithm scored 90% accuracy for stain and trash detection on the floor. Further, compared to conventional methods, the evolutionary-based optimization path planning scheme reduces navigation time by 15% and energy consumption by 10%.
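The evolutionary waypoint optimizer itself is not specified in the abstract; as a baseline for the routing problem it solves, a greedy nearest-neighbour ordering of the detected spot-cleaning waypoints (all names here are assumptions) can be sketched as:

```python
# Greedy nearest-neighbour tour over spot-cleaning waypoints --
# a simple baseline for the evolutionary route optimizer described above.
from math import hypot

def order_waypoints(start, waypoints):
    """Visit each dirty-region waypoint, always moving to the nearest unvisited one."""
    tour, pos, remaining = [], start, list(waypoints)
    while remaining:
        nxt = min(remaining, key=lambda p: hypot(p[0] - pos[0], p[1] - pos[1]))
        remaining.remove(nxt)
        tour.append(nxt)
        pos = nxt
    return tour
```

A genetic or other evolutionary optimizer would instead search over permutations of the waypoint list, using total tour length (and possibly energy) as the fitness; the greedy tour is a reasonable seed for such a search.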


Subject(s)
Deep Learning , Robotics , Algorithms , Floors and Floorcoverings , Humans , Robotics/methods
3.
Sci Rep ; 12(1): 14557, 2022 08 25.
Article in English | MEDLINE | ID: mdl-36008439

ABSTRACT

This work presents the vision pipeline for our in-house-developed autonomous reconfigurable pavement-sweeping robot, Panthera. As the goal of Panthera is to be an autonomous self-reconfigurable robot, it has to understand the type of pavement it is moving on so that it can adapt smoothly to changing pavement width and perform cleaning operations more efficiently and safely. A deep learning (DL)-based vision pipeline is proposed for the Panthera robot to recognize pavement features, including pavement type identification, pavement surface condition prediction, and pavement width estimation. The DeepLabv3+ semantic segmentation algorithm was customized for pavement type classification, and an eight-layer CNN was proposed for pavement surface condition prediction. Furthermore, pavement width was estimated by fusing the segmented pavement region with the depth map. Finally, a fuzzy inference system was implemented that takes the detected pavement width and surface condition as inputs and outputs a safe operational speed. The vision pipeline was trained on a custom pavement image dataset. The performance was evaluated using offline tests and real-time field-trial images captured through the reconfigurable robot Panthera's stereo vision sensor. In the experimental analysis, the DL-based vision pipeline components scored 88.02% and 93.22% accuracy for pavement segmentation and pavement surface condition assessment, respectively, and took approximately 10 ms to process a single image frame from the vision sensor on the onboard computer.
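The abstract does not give the width-estimation formula; under a pinhole-camera assumption, fusing a segmentation row with depth reduces to scaling the segmented pixel span by depth over focal length. The function below is an illustrative sketch, not the paper's implementation:

```python
# Sketch: pavement width from a segmentation mask row fused with depth.
# For a pinhole camera, a horizontal span of n pixels at depth z
# covers roughly n * z / fx metres (fx = focal length in pixels).
def row_width_m(mask_row, depth_row, fx):
    """mask_row: booleans marking pavement pixels; depth_row: depths in metres."""
    pav = [i for i, m in enumerate(mask_row) if m]
    if not pav:
        return 0.0  # no pavement visible in this row
    span_px = pav[-1] - pav[0] + 1
    z = sum(depth_row[i] for i in pav) / len(pav)  # mean pavement depth
    return span_px * z / fx
```

In practice the estimate would be taken over several image rows (or the full segmented region) and smoothed over time before being fed to the fuzzy speed controller.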


Subject(s)
Robotics , Algorithms , Semantics
4.
Sensors (Basel) ; 22(14)2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35890883

ABSTRACT

Cleaning is an important task practiced in every domain and has prime importance. The significance of cleaning has led to several new technologies in the domestic and professional cleaning domains. However, strategies for auditing the cleanliness delivered by the various cleaning methods remain manual and are often ignored. This work presents a novel domestic dirt image dataset for cleaning-auditing applications, including AI-based dirt analysis and robot-assisted cleaning inspection. One of the significant challenges in AI-based, robot-aided cleaning auditing is the absence of a comprehensive dataset for dirt analysis. We bridge this gap by identifying nine classes of commonly occurring domestic dirt and curating a labeled dataset of 3000 microscope dirt images from a semi-indoor environment. The dirt dataset, gathered using the adhesive dirt-lifting method, can enhance current dirt sensing and dirt composition estimation for cleaning auditing. The dataset's quality was analyzed through an AI-based dirt analysis and a robot-aided cleaning-auditing task using six standard classification models. The models trained with the dirt dataset yielded a classification accuracy above 90% in the offline dirt analysis experiment and 82% in real-time tests.


Subject(s)
Soil , Datasets as Topic
5.
Sensors (Basel) ; 22(13)2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35808427

ABSTRACT

Mosquito-borne diseases can pose serious risks to human health; therefore, mosquito surveillance and control programs are essential for the wellbeing of the community. Further, human-assisted mosquito surveillance and population mapping methods are time-consuming, labor-intensive, and require skilled manpower. This work presents an AI-enabled mosquito surveillance and population mapping framework using our in-house-developed robot, 'Dragonfly', which uses the You Only Look Once (YOLO) V4 Deep Neural Network (DNN) algorithm and a two-dimensional (2D) environment map generated by the robot. The Dragonfly robot was designed with a differential drive mechanism and a mosquito-trapping module to attract mosquitoes in the environment. YOLO V4 was trained with three mosquito classes, namely Aedes aegypti, Aedes albopictus, and Culex, to detect and classify mosquito breeds on the mosquito glue trap. The efficiency of the mosquito surveillance framework was determined in terms of classification accuracy and detection confidence level in offline and real-time field tests in a garden, a drain perimeter area, and a covered car-parking area. The experimental results show that the trained YOLO V4 DNN model detects and classifies the mosquito classes with an 88% confidence level on offline mosquito test image datasets and an average 82% confidence level in real-time field trials. Further, to generate the mosquito population map, the detection results were fused into the robot's 2D map, which helps in understanding mosquito population dynamics and species distribution.


Subject(s)
Aedes , Culex , Robotics , Animals , Mosquito Vectors
6.
Sensors (Basel) ; 21(24)2021 Dec 13.
Article in English | MEDLINE | ID: mdl-34960425

ABSTRACT

Cleaning is one of the fundamental tasks of prime importance in our day-to-day life, and its importance drives research efforts toward bringing leading-edge technologies, including robotics, into the cleaning domain. However, an effective method to assess the quality of cleaning is an equally important research problem. The first step toward addressing the fundamental question of "How clean is clean" is an autonomous cleaning-auditing robot that audits the cleanliness of a given area. This research work focuses on a novel reinforcement-learning-based, experience-driven dirt exploration strategy for a cleaning-auditing robot. The proposed approach uses the Proximal Policy Optimization (PPO) on-policy learning method to generate waypoints and sampling decisions for exploring the probable dirt accumulation regions in a given area. The policy network is trained in multiple environments with simulated dirt patterns. Experimental trials have been conducted to validate the trained policy in both simulated and real-world environments using an in-house-developed cleaning-audit robot called BELUGA.


Subject(s)
Robotics
7.
Sci Rep ; 11(1): 22378, 2021 11 17.
Article in English | MEDLINE | ID: mdl-34789747

ABSTRACT

Drain blockage is a crucial problem in the urban environment: it heavily affects the ecosystem and human health. Hence, routine drain inspection is essential. Manual drain inspection is a tedious task, prone to accidents and water-borne diseases. This work presents a drain inspection framework using a convolutional neural network (CNN)-based object detection algorithm and an in-house-developed reconfigurable teleoperated robot called 'Raptor'. The CNN-based object detection model was trained using a transfer learning scheme with our custom drain-blocking-objects dataset. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials. The experimental results indicate that our trained object detection algorithm detected and classified drain-blocking objects with 91.42% accuracy for both offline and online test images and can process 18 frames per second (FPS). Further, the maneuverability of the robot was evaluated in various open and closed drain environments. The field trial results show that the robot's maneuverability was stable, and its mapping and localization were accurate in a complex drain environment.

8.
Sensors (Basel) ; 21(21)2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34770593

ABSTRACT

Human visual inspection of drains is laborious, time-consuming, and prone to accidents. This work presents an AI-enabled, robot-assisted remote drain inspection and mapping framework using our in-house-developed reconfigurable robot Raptor. A four-layer Internet of Robotic Things (IoRT) framework serves as a bridge between the users and the robots, through which seamless information sharing takes place. The Faster RCNN ResNet50, Faster RCNN ResNet101, and Faster RCNN Inception-ResNet-v2 deep learning frameworks were trained using a transfer learning scheme with six typical concrete defect classes and deployed in the IoRT framework for the remote defect detection task. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials using the SLAM technique. The experimental results indicate that the robot's maneuverability was stable, and its mapping and localization were accurate in different drain types. Finally, for effective drain maintenance, a SLAM-based defect map was generated by fusing the defect detection results into the lidar-SLAM map.


Subject(s)
Raptors , Robotics , Algorithms , Animals , Humans
9.
Sensors (Basel) ; 21(18)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34577486

ABSTRACT

Staircase cleaning is a crucial and time-consuming task in the maintenance of multistory apartments and commercial buildings. Many autonomous cleaning robots are commercially available for building maintenance, but few are designed for staircase cleaning. A key challenge in automating staircase cleaning robots is the design of an Environmental Perception System (EPS), which assists the robot in detecting and navigating staircases. This system also recognizes obstacles and debris for safe navigation and efficient cleaning while climbing the staircase. This work proposes an operational framework leveraging a vision-based EPS for the modular reconfigurable maintenance robot sTetro. The proposed system uses an SSD MobileNet real-time object detection model to recognize staircases, obstacles, and debris. Furthermore, the model filters out false staircase detections by fusing depth information through a MobileNet and SVM. The system uses a contour detection algorithm to localize the first step of the staircase and a depth clustering scheme for obstacle and debris localization. The framework was deployed on the sTetro robot using NVIDIA Jetson Nano hardware and tested on multistory staircases. The experimental results show that the entire framework takes an average of 310 ms to run and achieves 94.32% accuracy for staircase recognition and 93.81% accuracy for obstacle and debris detection during real operation of the robot.


Subject(s)
Deep Learning , Form Perception , Robotics , Algorithms
10.
Sensors (Basel) ; 21(16)2021 Aug 06.
Article in English | MEDLINE | ID: mdl-34450767

ABSTRACT

Routine rodent inspection is essential to curbing rat-borne diseases and infrastructure damage within the built environment. Rodents find false ceilings a perfect spot to seek shelter and construct their habitats. However, manual false-ceiling inspection for rodents is laborious and risky. This work presents an AI-enabled IoRT framework for rodent activity monitoring inside a false ceiling using an in-house-developed robot called "Falcon". The IoRT serves as a bridge between the users and the robots, through which seamless information sharing takes place. The images shared by the robot are inspected through a Faster RCNN ResNet 101 object detection algorithm, which automatically detects signs of rodents inside a false ceiling. The efficiency of the rodent activity detection algorithm was tested in a real-world false-ceiling environment, and detection accuracy was evaluated with standard performance metrics. The experimental results indicate that the algorithm detects rodent signs and 3D-printed rodents with a good confidence level.


Subject(s)
Neural Networks, Computer , Rodents , Algorithms , Animals , Rats
11.
Sensors (Basel) ; 21(13)2021 Jun 24.
Article in English | MEDLINE | ID: mdl-34202746

ABSTRACT

Cleaning is an important factor in most aspects of our day-to-day life. This research work offers a solution to the fundamental question of "How clean is clean" by introducing a novel framework for auditing the cleanliness of built infrastructure using mobile robots. The proposed system presents a strategy for assessing the quality of cleaning in a given area and a novel exploration strategy that facilitates auditing of a given location by a mobile robot. An audit sensor that works by a "touch and inspect" analogy, assigning an audit score corresponding to its area of inspection, has been developed. A vision-based, dirt-probability-driven exploration is proposed to empower a mobile robot with an on-board audit sensor to perform auditing tasks effectively. The quality of cleaning is quantified using a dirt density map representing location-wise audit scores, the dirt distribution pattern obtained by kernel density estimation, and a cleaning benchmark score representing the extent of cleanliness. The framework is realized in an in-house-developed audit robot that performs cleaning audits in indoor and semi-outdoor environments. The proposed method is validated by experimental trials estimating the cleanliness in five different locations using the developed audit sensor and dirt-probability-driven exploration.
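The dirt distribution pattern above comes from kernel density estimation; a minimal 1D Gaussian KDE conveys the idea (the paper works in 2D over floor coordinates, and the function names and bandwidth here are assumptions):

```python
# Gaussian kernel density estimate over 1D audit-score locations --
# the same idea, reduced to one dimension, behind the dirt distribution map.
from math import exp, pi, sqrt

def kde(x, samples, bandwidth=1.0):
    """Estimated density at x, given point samples and a Gaussian kernel."""
    norm = 1.0 / (sqrt(2 * pi) * bandwidth * len(samples))
    return norm * sum(exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
```

Evaluating `kde` on a grid of floor positions (in 2D, with a product kernel) yields a smooth dirt-density surface from the discrete audit scores; the bandwidth controls how sharply local the map is.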


Subject(s)
Robotics
12.
Sensors (Basel) ; 21(5)2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802434

ABSTRACT

Regular washing of public pavements is necessary to ensure that the public environment is sanitary for social activities. This is a challenge for autonomous cleaning robots, as they must adapt to environments with varying pavement widths while avoiding pedestrians. A self-reconfigurable pavement-sweeping robot, named Panthera, has the mechanisms to reconfigure its width to enable smooth cleaning operations, and it changes its behavior based on the environment dynamics of moving pedestrians and changing pavement widths. Reconfiguration of the robot's width is possible due to the scissor mechanism at the core of the robot's body, which is driven by a lead-screw motor. Panthera performs locomotion and reconfiguration based on the proposed perception-sensor feedback control using a Red-Green-Blue-Depth (RGB-D) camera. The proposed control scheme involves publishing robot kinematic parameters for reconfiguration during locomotion. Experiments were conducted on outdoor pavements to demonstrate autonomous reconfiguration during locomotion, avoiding pedestrians while complying with varying pavement widths in a real-world scenario.


Subject(s)
Pedestrians , Robotics , Feedback , Humans , Locomotion , Perception
13.
Sensors (Basel) ; 21(8)2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33917223

ABSTRACT

The pavement inspection task, which mainly includes crack and garbage detection, is essential and carried out frequently. Rather than relying on manual labor or a dedicated inspection system, the inspection can easily be carried out by integrating it with pavement-sweeping machines. This work proposes a deep-learning-based pavement inspection framework for the self-reconfigurable robot Panthera. The semantic segmentation framework SegNet was adopted to segment the pavement region from other objects. Deep Convolutional Neural Network (DCNN)-based object detection is used to detect and localize pavement defects and garbage. Furthermore, a Mobile Mapping System (MMS) was adopted for geotagging the defects. The proposed system was implemented and tested on the Panthera robot equipped with NVIDIA GPU cards. The experimental results showed that the proposed technique identifies pavement defects and litter or garbage with high accuracy. Results on crack and garbage detection are presented, and the proposed technique is found suitable for real-time deployment for garbage detection and, eventually, sweeping or cleaning tasks.

14.
Sensors (Basel) ; 22(1)2021 Dec 21.
Article in English | MEDLINE | ID: mdl-35009556

ABSTRACT

Vibration is an indicator of performance degradation or operational safety issues in mobile cleaning robots. Therefore, predicting the source of vibration at an early stage helps avoid functional losses and hazardous operational environments. This work presents an artificial intelligence (AI)-enabled predictive maintenance framework for mobile cleaning robots to identify performance degradation and operational safety issues through vibration signals. A four-layer 1D CNN framework was developed and trained with a vibration-signal dataset generated from the in-house-developed autonomous steam-mopping robot 'Snail' under different health conditions and hazardous operational environments. The vibration signals were collected using an IMU sensor and categorized into five classes: normal operational vibration, hazardous-terrain-induced vibration, collision-induced vibration, loose-assembly-induced vibration, and structure-imbalance vibration signals. The performance of the trained predictive maintenance framework was evaluated through various real-time field trials with statistical measurement metrics. The experimental results indicate that the proposed framework accurately predicted the performance degradation and operational safety issues by analyzing the vibration signal patterns arising from the cleaning robot in different test scenarios. Finally, a predictive maintenance map was generated by fusing the vibration signal classes onto the 2D environment map generated by the Cartographer SLAM algorithm.
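The abstract does not describe how the IMU stream is fed to the four-layer 1D CNN; a common preprocessing step (assumed here, not taken from the paper) is to slice the signal into fixed-length windows, optionally summarizing each by a feature such as RMS energy:

```python
# Preprocessing sketch for a 1D-CNN input pipeline: slice an IMU
# vibration stream into fixed-length windows, and summarise each
# window by its RMS energy as a quick sanity-check feature.
from math import sqrt

def windows(signal, size, step):
    """Fixed-length, possibly overlapping windows over a 1D signal."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def rms(window):
    """Root-mean-square energy of one window."""
    return sqrt(sum(v * v for v in window) / len(window))
```

In a CNN pipeline the raw windows (not the RMS values) would be the network input; RMS per window is mainly useful for spotting obviously anomalous segments before training.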


Subject(s)
Artificial Intelligence , Robotics , Algorithms , Vibration
15.
Sensors (Basel) ; 22(1)2021 Dec 30.
Article in English | MEDLINE | ID: mdl-35009802

ABSTRACT

Periodic inspection of false ceilings is mandatory to ensure building and human safety. Generally, false ceiling inspection includes identifying structural defects, degradation in Heating, Ventilation, and Air Conditioning (HVAC) systems, electrical wire damage, and pest infestation. Human-assisted false ceiling inspection is a laborious and risky task. This work presents a false ceiling deterioration detection and mapping framework using a deep-neural-network-based object detection algorithm and the teleoperated 'Falcon' robot. The object detection algorithm was trained with our custom false ceiling deterioration image dataset composed of four classes: structural defects (spalling, cracks, pitted surfaces, and water damage), degradation in HVAC systems (corrosion, molding, and pipe damage), electrical damage (frayed wires), and infestation (termites and rodents). The efficiency of the trained CNN algorithm and deterioration mapping was evaluated through various experiments and real-time field trials. The experimental results indicate that the deterioration detection and mapping results were accurate in a real false-ceiling environment and achieved an 89.53% detection accuracy.


Subject(s)
Deep Learning , Robotics , Algorithms , Animals , Neural Networks, Computer , Rodents
16.
Sensors (Basel) ; 20(18)2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32942750

ABSTRACT

Insect detection and control at an early stage are essential in the built environment (human-made physical spaces such as homes, hotels, camps, hospitals, parks, pavements, and food industries) and in agricultural fields. Currently, such insect control measures are manual, tedious, unsafe, and time-consuming, labor-dependent tasks. With recent advancements in Artificial Intelligence (AI) and the Internet of Things (IoT), several maintenance tasks can be automated, which significantly improves productivity and safety. This work proposes a real-time remote insect-trap monitoring system and insect detection method using IoT and Deep Learning (DL) frameworks. The remote trap monitoring system is constructed using IoT and the Faster RCNN (Region-based Convolutional Neural Network) ResNet50 unified object detection framework. The Faster RCNN ResNet50 object detection framework was trained with built-environment and farm-field insect images and deployed in the IoT. The proposed system was tested in real time using a four-layer IoT with built-environment insect images captured through sticky trap sheets; farm-field insects were tested through a separate insect image database. The experimental results show that the proposed system automatically identifies built-environment and farm-field insects with an average of 94% accuracy.


Subject(s)
Deep Learning , Insects , Internet of Things , Pest Control , Animals , Neural Networks, Computer
17.
Sensors (Basel) ; 20(16)2020 Aug 09.
Article in English | MEDLINE | ID: mdl-32784888

ABSTRACT

Infectious diseases are caused by pathogenic microorganisms, whose transmission can lead to global pandemics like COVID-19. Contact with contaminated surfaces or objects is one of the major channels of spreading infectious diseases in the community. Therefore, typically contaminable surfaces, such as walls and handrails, should be cleaned often using disinfectants. Nevertheless, safety and efficiency are the major concerns with using human labor in this process. Thus, attention has shifted toward developing robotic solutions for disinfecting contaminable surfaces. A robot intended for disinfecting walls should be capable of following the wall concerned while maintaining a given distance to be effective. The ability to operate in an unknown environment while coping with uncertainties is crucial for a wall disinfection robot intended for deployment in public spaces. Therefore, this paper contributes to the state of the art by proposing a novel method of establishing wall-following behavior for a wall disinfection robot using fuzzy logic. A non-singleton Type 1 Fuzzy Logic System (T1-FLS) and a non-singleton Interval Type 2 Fuzzy Logic System (IT2-FLS) are developed in this regard. The wall-following behavior of the two fuzzy systems was evaluated through simulations considering heterogeneous wall arrangements. The simulation results validate the real-world applicability of the proposed FLSs for establishing wall-following behavior for a wall disinfection robot. Furthermore, the statistical outcomes show that the IT2-FLS performs significantly better than the T1-FLS in this application.
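The paper's non-singleton T1 and IT2 fuzzy systems are beyond an abstract-level sketch, but a much simpler singleton Type-1 controller conveys the wall-following idea: fuzzify the distance error from the desired wall offset, fire a small rule base, and defuzzify to a steering correction. All membership parameters and rule outputs below are illustrative assumptions, not the paper's design:

```python
# Minimal singleton Type-1 fuzzy wall-follower sketch: distance error
# (metres from the desired wall offset) -> steering correction (rad/s).
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(error):
    """Weighted-average defuzzification over three illustrative rules."""
    rules = [  # (membership degree, crisp steering output)
        (tri(error, -1.0, -0.5, 0.0), +0.4),  # too close -> turn away
        (tri(error, -0.5,  0.0, 0.5),  0.0),  # on track  -> go straight
        (tri(error,  0.0,  0.5, 1.0), -0.4),  # too far   -> turn toward
    ]
    num = sum(m * out for m, out in rules)
    den = sum(m for m, _ in rules)
    return num / den if den else 0.0
```

A non-singleton system would additionally model the sensor reading itself as a fuzzy set (capturing measurement noise), and an IT2-FLS would replace each membership function with an interval-valued one before type reduction.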

18.
Sensors (Basel) ; 20(11)2020 Jun 10.
Article in English | MEDLINE | ID: mdl-32531960

ABSTRACT

Periodic cleaning of all frequently touched social surfaces, such as walls, doors, locks, handles, and windows, has become the first line of defense against infectious diseases. Among these, manually cleaning large wall areas is always a tedious, time-consuming, and demanding task. Although numerous cleaning companies are interested in deploying robotic cleaning solutions, they mostly do not address wall cleaning. To this end, we propose a new vision-based wall-following framework that acts as an add-on for any professional robotic platform to perform wall cleaning. The proposed framework uses a Deep Learning (DL) framework to visually detect, classify, and segment the wall/floor surface and instructs the robot to follow the wall to execute the cleaning task. We also summarize the system architecture of the Toyota Human Support Robot (HSR), which was used as our testing platform. We evaluated the performance of the proposed framework on the HSR robot under various defined scenarios. Our experimental results indicate that the proposed framework successfully classifies and segments the wall/floor surface, detects obstacles on the wall and floor with high accuracy, and demonstrates robust wall-following behavior.

19.
Sensors (Basel) ; 20(12)2020 Jun 23.
Article in English | MEDLINE | ID: mdl-32585864

ABSTRACT

The role of mobile robots for cleaning and sanitation purposes is increasing worldwide. Disinfection and hygiene are two integral parts of any safe indoor environment, and these factors become more critical in COVID-19-like pandemic situations. Door handles are highly sensitive contact points that are prone to contamination. Automating the door-handle cleaning task is important not only for ensuring safety but also for improving efficiency. This work proposes an AI-enabled framework for automating cleaning tasks through a Human Support Robot (HSR). The overall cleaning process involves mobile base motion, door-handle detection, and control of the HSR manipulator to complete the cleaning tasks. The detection part exploits a deep-learning technique to classify the image space and provides a set of coordinates for the robot. The cooperative control between spraying and wiping is developed in the Robot Operating System. The control module uses the information obtained from the detection module to generate a task/operational space for the robot, along with evaluating the desired position to actuate the manipulators. The complete strategy is validated through numerical simulations and experiments on a Toyota HSR platform.


Subject(s)
Betacoronavirus , Coronavirus Infections/prevention & control , Disinfection/instrumentation , Pandemics/prevention & control , Pneumonia, Viral/prevention & control , Robotics/instrumentation , Algorithms , COVID-19 , Coronavirus Infections/transmission , Coronavirus Infections/virology , Deep Learning , Disinfection/methods , Equipment Design , Humans , Maintenance , Motion , Pneumonia, Viral/transmission , Pneumonia, Viral/virology , Robotics/methods , Robotics/statistics & numerical data , SARS-CoV-2
20.
Sensors (Basel) ; 20(6)2020 Mar 18.
Article in English | MEDLINE | ID: mdl-32197483

ABSTRACT

This work presents a table cleaning and inspection method using a Human Support Robot (HSR), which can operate in a typical food court setting. The HSR performs a cleanliness inspection and also cleans food litter on the table by implementing a deep learning technique and a planner framework. A lightweight Deep Convolutional Neural Network (DCNN) is proposed to recognize food litter on top of the table. In addition, a planner framework is proposed for the HSR to accomplish the table cleaning task: it generates the cleaning path according to the detected food litter, and the cleaning action is then carried out. The effectiveness of the food litter detection module is verified in a cleanliness inspection task using the Toyota HSR, and its detection results are verified with standard quality metrics. The experimental results show that the food litter detection module achieves an average of 96% detection accuracy, which makes the HSR suitable for performing cleanliness inspections and also helps in selecting different cleaning modes. Further, the planner was tested through table cleaning tasks. The experimental results show that the planner generates the cleaning path in real time, and the generated path is optimal, reducing cleaning time through grouping-based cleaning actions for removing food litter from the table.


Subject(s)
Algorithms , Deep Learning , Neural Networks, Computer , Robotics/instrumentation , Sanitation/instrumentation , Food , Humans , Image Processing, Computer-Assisted , Interior Design and Furnishings/instrumentation , Limit of Detection , Robotics/methods , Self-Help Devices , Workload