Results 1 - 20 of 67
1.
Sensors (Basel) ; 24(3)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339635

ABSTRACT

This study presents a human-computer interaction system combining a brain-machine interface (BMI) and obstacle detection for remote control of a wheeled robot through movement imagery, providing a potential solution for individuals facing challenges with conventional vehicle operation. The primary focus of this work is the classification of surface EEG signals related to mental activity when envisioning movement and deep relaxation states. Additionally, this work presents a system for obstacle detection based on image processing. The implemented system constitutes a complementary part of the interface. The main contributions of this work include the proposal of a modified 10-20 electrode setup suitable for motor imagery classification, the design of two convolutional neural network (CNN) models employed to classify signals acquired from sixteen EEG channels, and the implementation of an obstacle detection system based on computer vision integrated with the brain-machine interface. The models developed in this study achieved an accuracy of 83% in classifying EEG signals. The resulting classification outcomes were subsequently used to control the movement of a mobile robot. Experimental trials conducted on a designated test track demonstrated real-time control of the robot. The findings indicate the feasibility of integrating the obstacle detection system for collision avoidance with motor imagery classification for brain-machine interface control of vehicles. The proposed solution could help paralyzed patients safely control a wheelchair through EEG and effectively prevent unintended vehicle movements.
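The classification pipeline described above can be sketched in miniature: a depthwise 1-D convolution over the sixteen EEG channels, global average pooling, and a softmax over the two mental states (imagery vs. relaxation). This is an illustrative toy, not the paper's CNN; the kernels and weights here are placeholders that a real model would learn.

```python
import math

def conv1d(signal, kernel):
    """Valid 1-D convolution of one EEG channel with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def classify(window, kernels, weights):
    """window: 16 channels x T samples; one kernel per channel (depthwise).
    Returns class probabilities [P(imagery), P(relaxation)]."""
    feats = []
    for ch, k in zip(window, kernels):
        fm = relu(conv1d(ch, k))
        feats.append(sum(fm) / len(fm))  # global average pooling per channel
    logits = [sum(w * f for w, f in zip(row, feats)) for row in weights]
    return softmax(logits)
```

A trained model would have learned `kernels` and `weights`; here they only demonstrate the data flow from a 16-channel window to a two-class probability.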


Subjects
Brain-Computer Interfaces; Wheelchairs; Humans; Electroencephalography/methods; Neural Networks, Computer; Imagery, Psychotherapy; Movement; Algorithms
2.
Sensors (Basel) ; 24(10)2024 May 09.
Article in English | MEDLINE | ID: mdl-38793871

ABSTRACT

The sky may seem too big for two flying vehicles to ever collide, but the facts show that mid-air collisions still occur occasionally and are a significant concern. Pilots learn manual tactics to avoid collisions, such as see-and-avoid, but these rules have limitations. Automated solutions have reduced collisions, but these technologies are not mandatory in all countries or airspaces, and they are expensive. These problems have prompted researchers to continue the search for low-cost solutions. One attractive option is to use computer vision to detect obstacles in the air, owing to its reduced cost and weight. A well-trained deep learning solution is appealing because object detection is fast in most cases, but it relies entirely on the training data set. The algorithm chosen for this study is optical flow: optical flow vectors can help separate the motion caused by camera motion from the motion caused by incoming objects, without relying on training data. This paper describes the development of an optical flow-based airborne obstacle detection algorithm to avoid mid-air collisions. The approach uses the visual information from a monocular camera and detects obstacles using morphological filters, optical flow, the focus of expansion, and a data clustering algorithm. The proposal was evaluated using realistic vision data obtained with a self-developed simulator, which provides different environments, trajectories, and altitudes of flying objects. The results showed that the optical flow-based algorithm detected all incoming obstacles along their trajectories in the experiments, with an F-score greater than 75% and a good balance between precision and recall.
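The focus of expansion mentioned above can be estimated directly from flow vectors: under pure camera translation, every flow vector lies on a line through the FOE, so a least-squares intersection of those lines recovers it. A minimal sketch of that step only (illustrative; the paper's full pipeline also uses morphological filters and clustering):

```python
def focus_of_expansion(points, flows):
    """Least-squares FOE: each flow vector (u, v) at point (x, y) defines
    the line v*X - u*Y = v*x - u*y through that point; the FOE is the
    point minimizing the summed squared residual over all such lines."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), (u, v) in zip(points, flows):
        c = v * x - u * y
        a11 += v * v
        a12 += -v * u
        a22 += u * u
        b1 += v * c
        b2 += -u * c
    det = a11 * a22 - a12 * a12   # normal-equations determinant
    fx = (a22 * b1 - a12 * b2) / det
    fy = (a11 * b2 - a12 * b1) / det
    return fx, fy
```

In a detection pipeline, flow vectors that are inconsistent with the radial pattern around the recovered FOE are candidates for independently moving (incoming) objects.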

3.
Sensors (Basel) ; 24(15)2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39123998

ABSTRACT

This paper addresses the challenge of detecting unknown or unforeseen obstacles in railway track transportation, proposing an innovative detection strategy that integrates an incremental clustering algorithm with lightweight segmentation techniques. In the detection phase, the paper innovatively employs the incremental clustering algorithm as a core method, combined with dilation and erosion theories, to expand the boundaries of point cloud clusters, merging adjacent point cloud elements into unified clusters. This method effectively identifies and connects spatially adjacent point cloud clusters while efficiently eliminating noise from target object point clouds, thereby achieving more precise recognition of unknown obstacles on the track. Furthermore, the effective integration of this algorithm with lightweight shared convolutional semantic segmentation algorithms enables accurate localization of obstacles. Experimental results using two combined public datasets demonstrate that the obstacle detection average recall rate of the proposed method reaches 90.3%, significantly enhancing system reliability. These findings indicate that the proposed detection strategy effectively improves the accuracy and real-time performance of obstacle recognition, thereby presenting important practical application value for ensuring the safe operation of railway tracks.
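The core merging step of an incremental clustering scheme like the one described can be sketched as follows: each incoming point joins any cluster whose boundary (implicitly dilated by a radius) it falls within, merging clusters it bridges, while undersized clusters are discarded as noise. This is a simplified 2-D illustration, not the paper's point-cloud algorithm:

```python
def incremental_cluster(points, radius, min_pts=2):
    """Greedy incremental clustering. A new point joins every existing
    cluster containing a point within `radius` (an implicit dilation of
    the cluster border) and merges any clusters it bridges; isolated
    points start new clusters. Clusters smaller than min_pts are noise."""
    clusters = []
    r2 = radius * radius
    for p in points:
        hits = [c for c in clusters
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r2
                       for q in c)]
        merged = [p]
        for c in hits:          # merge all clusters this point connects
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return [c for c in clusters if len(c) >= min_pts]
```

The `min_pts` filter plays the role of the noise-elimination step; the real system operates on 3-D LiDAR points and feeds surviving clusters to the segmentation stage.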

4.
Sensors (Basel) ; 23(13)2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37447830

ABSTRACT

Container yard congestion can become a bottleneck in port logistics and result in accidents. Therefore, transfer cranes, which were previously operated manually, are being automated to increase their work efficiency, and LiDAR is used for recognizing obstacles. However, LiDAR cannot distinguish obstacle types; thus, cranes must move slowly in the risk area regardless of the obstacle, which reduces their work efficiency. This study proposes a novel method for recognizing the position and class of trained and untrained obstacles around a crane using cameras installed on the crane. First, a semantic segmentation model, trained on images of obstacles and the ground, recognizes the obstacles in the camera images. Then, an image filter extracts the obstacle boundaries from the segmented image. Finally, a coordinate mapping table converts the obstacle boundaries from the image coordinate system to the real-world coordinate system. Estimating the distance of a truck with our method resulted in a 32 cm error at a distance of 5 m and a 125 cm error at a distance of 30 m. The error of the proposed method is large compared with that of LiDAR; however, it is acceptable because vehicles in ports move at low speeds, and the error decreases as obstacles move closer.
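A coordinate mapping table of the kind described can be built by precomputing, for every pixel, a ground-plane projection from image to world coordinates. A sketch assuming a calibrated 3x3 ground-plane homography `H` (the values below are illustrative, not the paper's calibration):

```python
def image_to_world(H, px, py):
    """Map an image pixel to ground-plane world coordinates via a 3x3
    homography H (homogeneous coordinates, then perspective divide)."""
    X = H[0][0] * px + H[0][1] * py + H[0][2]
    Y = H[1][0] * px + H[1][1] * py + H[1][2]
    W = H[2][0] * px + H[2][1] * py + H[2][2]
    return X / W, Y / W

def build_mapping_table(H, width, height):
    """Precompute the pixel -> world map for every pixel, which is what
    a lookup-table approach amortizes at runtime."""
    return {(u, v): image_to_world(H, u, v)
            for v in range(height) for u in range(width)}
```

At runtime the extracted obstacle boundary pixels are simply looked up in the table, avoiding per-frame projective arithmetic.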

5.
Sensors (Basel) ; 23(5)2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36905003

ABSTRACT

Walking independently is essential to maintaining our quality of life, but safe locomotion depends on perceiving hazards in the everyday environment. To address this problem, there is an increasing focus on developing assistive technologies that can alert the user to the risk of destabilizing foot contact with either the ground or obstacles, which can lead to a fall. Shoe-mounted sensor systems designed to monitor foot-obstacle interaction are being employed to identify tripping risk and provide corrective feedback. Advances in smart wearable technologies, integrating motion sensors with machine learning algorithms, have led to developments in shoe-mounted obstacle detection. The focus of this review is gait-assisting wearable sensors and hazard detection for pedestrians. This literature represents a research front that is critically important in paving the way towards practical, low-cost, wearable devices that can make walking safer and reduce the increasing financial and human costs of fall injuries.


Subjects
Self-Help Devices; Wearable Electronic Devices; Humans; Quality of Life; Biomechanical Phenomena; Gait; Walking
6.
Sensors (Basel) ; 23(5)2023 Mar 04.
Article in English | MEDLINE | ID: mdl-36905026

ABSTRACT

The degradation of visual-sensor image quality in foggy weather and the loss of information after defogging pose great challenges to obstacle detection during autonomous driving. Therefore, this paper proposes a method for detecting driving obstacles in foggy weather. Detection is realized by combining the GCANet defogging algorithm with a detection algorithm trained on fused edge and convolution features, exploiting the fact that target edge features are prominent after GCANet defogging and that the defogging and detection algorithms are therefore well matched. Based on the YOLOv5 network, the obstacle detection model is trained using clear-day images and corresponding edge feature images to fuse edge features with convolution features and to detect driving obstacles in a foggy traffic environment. Compared with the conventional training method, this method improves mAP by 12% and recall by 9%. In contrast to conventional detection methods, it can better identify the image edge information after defogging, which significantly enhances detection accuracy while preserving time efficiency. This is of great practical significance for improving the perception of driving obstacles under adverse weather conditions and ensuring the safety of autonomous driving.
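The edge-feature images paired with clear-day frames for fusion training can be produced with a standard gradient operator. A pure-Python Sobel magnitude sketch (the abstract does not name its edge extractor, so Sobel here is an assumption):

```python
def sobel_edges(img):
    """Sobel gradient magnitude over a grayscale image given as a list
    of rows; border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Each clear-day training image would be paired with its edge map so the detector learns both intensity and edge cues, the latter surviving defogging better.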

7.
Sensors (Basel) ; 23(12)2023 Jun 07.
Article in English | MEDLINE | ID: mdl-37420553

ABSTRACT

Maritime obstacle detection is critical for safe navigation of autonomous surface vehicles (ASVs). While the accuracy of image-based detection methods has advanced substantially, their computational and memory requirements prohibit deployment on embedded devices. In this paper, we analyze the current best-performing maritime obstacle detection network, WaSR. Based on the analysis, we then propose replacements for the most computationally intensive stages and propose its embedded-compute-ready variant, eWaSR. In particular, the new design follows the most recent advancements of transformer-based lightweight networks. eWaSR achieves comparable detection results to state-of-the-art WaSR with only a 0.52% F1 score performance drop and outperforms other state-of-the-art embedded-ready architectures by over 9.74% in F1 score. On a standard GPU, eWaSR runs 10× faster than the original WaSR (115 FPS vs. 11 FPS). Tests on a real embedded sensor OAK-D show that, while WaSR cannot run due to memory restrictions, eWaSR runs comfortably at 5.5 FPS. This makes eWaSR the first practical embedded-compute-ready maritime obstacle detection network. The source code and trained eWaSR models are publicly available.


Subjects
Autonomous Vehicles; Electric Power Supplies; Software
8.
Sensors (Basel) ; 23(10)2023 May 19.
Article in English | MEDLINE | ID: mdl-37430834

ABSTRACT

Road obstacle detection is an important component of intelligent assisted driving technology. Existing obstacle detection methods ignore the important direction of generalized obstacle detection. This paper proposes an obstacle detection method based on the fusion of roadside units and vehicle-mounted cameras and illustrates the feasibility of a detection method combining a monocular camera, an inertial measurement unit (IMU), and a roadside unit (RSU). A generalized obstacle detection method based on vision and IMU is combined with a roadside-unit obstacle detection method based on background differencing to achieve generalized obstacle classification while reducing the spatial complexity of the detection area. In the generalized obstacle recognition stage, a VIDAR (vision-IMU-based detection and ranging) generalized obstacle recognition method is proposed, solving the problem of low accuracy of obstacle information acquisition in driving environments where generalized obstacles exist. For generalized obstacles that cannot be detected by the roadside unit, VIDAR obstacle detection is performed on the target generalized obstacles through the vehicle-mounted camera, and the detection results are transmitted to the roadside device via UDP (User Datagram Protocol) to achieve obstacle recognition and pseudo-obstacle removal, thereby reducing the error recognition rate of generalized obstacles. In this paper, generalized obstacles are defined to include pseudo-obstacles, obstacles lower than the maximum passing height of the vehicle, and obstacles higher than the maximum passing height of the vehicle. Pseudo-obstacles refer to objects without height that appear as "patches" on the imaging interface obtained by visual sensors, as well as obstacles lower than the vehicle's maximum passing height. VIDAR uses the IMU to obtain the distance and pose of the camera movement and, through inverse perspective transformation, calculates the height of objects in the image. The VIDAR-based obstacle detection method, the roadside-unit-based obstacle detection method, YOLOv5 (You Only Look Once version 5), and the method proposed in this paper were compared in outdoor experiments. The results show that the accuracy of the proposed method is improved by 2.3%, 17.4%, and 1.8%, respectively, compared with the other three methods, and that the speed of obstacle detection is improved by 1.1% compared with the roadside-unit obstacle detection method. The experimental results show that the method can expand the detection range of road vehicles relative to vehicle-based obstacle detection and can quickly and effectively eliminate false obstacle information on the road.
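The inverse-perspective height estimate at the heart of VIDAR can be illustrated with similar triangles: a true ground point's back-projected ground distance shrinks by exactly the camera's forward motion, while a raised point's shrinks faster. A sketch of that relation (my reading of the geometry, not the paper's exact formulation):

```python
def vidar_height(cam_height, delta_move, d_app1, d_app2):
    """Estimate a point's height from two ground-plane back-projections.

    For a point at height h and distance d seen by a camera at height Hc,
    the ground-plane (inverse perspective) projection lands at
        d_app = Hc * d / (Hc - h),
    so after the camera moves forward by delta_move the apparent distance
    drops by Hc * delta_move / (Hc - h). Solving for h:
        h = Hc * (1 - delta_move / (d_app1 - d_app2)).
    A ground point (h = 0) drops by exactly delta_move."""
    delta_app = d_app1 - d_app2
    return cam_height * (1.0 - delta_move / delta_app)
```

Points whose estimated height is (near) zero are pseudo-obstacles, e.g. road markings or shadows, and can be removed, which is the filtering role VIDAR plays in the fusion pipeline.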

9.
Sensors (Basel) ; 23(11)2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37299996

ABSTRACT

Visually impaired people seek social integration, yet their mobility is restricted. They need a personal navigation system that can preserve privacy and increase their confidence, improving their quality of life. In this paper, based on deep learning and neural architecture search (NAS), we propose an intelligent navigation assistance system for visually impaired people. Deep learning models have achieved significant success through well-designed architectures, and NAS has proved to be a promising technique for automatically searching for optimal architectures, reducing the human effort of architecture design. However, NAS requires extensive computation, which limits its wide use; as a result, it has been less investigated for computer vision tasks, especially object detection. Therefore, we propose a fast NAS method that searches for an object detection framework with efficiency in mind. The NAS explores the feature pyramid network and the prediction stage of an anchor-free object detection model and is based on a tailored reinforcement learning technique. The searched model was evaluated on a combination of the COCO dataset and the Indoor Object Detection and Recognition (IODR) dataset. The resulting model outperformed the original model by 2.6% in average precision (AP) with acceptable computational complexity. The achieved results prove the efficiency of the proposed NAS for custom object detection.


Subjects
Deep Learning; Self-Help Devices; Sensory Aids; Visually Impaired Persons; Humans
10.
Sensors (Basel) ; 23(6)2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36991649

ABSTRACT

As technology continues to develop, computer vision (CV) applications are becoming increasingly widespread in the intelligent transportation systems (ITS) context. These applications are developed to improve the efficiency of transportation systems, increase their level of intelligence, and enhance traffic safety. Advances in CV play an important role in solving problems in the fields of traffic monitoring and control, incident detection and management, road usage pricing, and road condition monitoring, among many others, by providing more effective methods. This survey examines CV applications in the literature, the machine learning and deep learning methods used in ITS applications, the applicability of computer vision applications in ITS contexts, the advantages these technologies offer and the difficulties they present, and future research areas and trends, with the goal of increasing the effectiveness, efficiency, and safety level of ITS. The present review, which brings together research from various sources, aims to show how computer vision techniques can help transportation systems to become smarter by presenting a holistic picture of the literature on different CV applications in the ITS context.

11.
J Exp Biol ; 225(4)2022 02 15.
Article in English | MEDLINE | ID: mdl-35067721

ABSTRACT

Insects are remarkable flyers and capable of navigating through highly cluttered environments. We tracked the head and thorax of bumblebees freely flying in a tunnel containing vertically oriented obstacles to uncover the sensorimotor strategies used for obstacle detection and collision avoidance. Bumblebees presented all the characteristics of active vision during flight by stabilizing their head relative to the external environment and maintained close alignment between their gaze and flightpath. Head stabilization increased motion contrast of nearby features against the background to enable obstacle detection. As bees approached obstacles, they appeared to modulate avoidance responses based on the relative retinal expansion velocity (RREV) of obstacles and their maximum evasion acceleration was linearly related to RREVmax. Finally, bees prevented collisions through rapid roll manoeuvres implemented by their thorax. Overall, the combination of visuo-motor strategies of bumblebees highlights elegant solutions developed by insects for visually guided flight through cluttered environments.
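Taking RREV as the retinal expansion rate relative to the current angular size (the inverse of time-to-contact; an assumption about the study's exact definition), a discrete estimate from two angular-size samples is:

```python
def rrev(theta_prev, theta_now, dt):
    """Relative retinal expansion velocity: the rate of angular expansion
    divided by the current angular size (units 1/s). For an object of
    size L approached at speed v from distance d, this approximates v/d,
    the inverse of time-to-contact."""
    return (theta_now - theta_prev) / dt / theta_now
```

Under this reading, scaling evasion acceleration linearly with the peak of this quantity (RREVmax) means bees react more strongly the sooner a collision would occur.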


Subjects
Flight, Animal; Vision, Ocular; Acceleration; Animals; Bees; Flight, Animal/physiology; Insects; Motion
12.
Sensors (Basel) ; 22(18)2022 Sep 13.
Article in English | MEDLINE | ID: mdl-36146281

ABSTRACT

Problems such as low light, similar background colors, and noisy image acquisition often occur when collecting images of lunar surface obstacles. To address these problems, this study focuses on the AD-Census stereo matching algorithm. In the original Census algorithm, the bit string is computed against the central pixel of the window, so any noise affecting that pixel corrupts the entire bit string and causes mismatches. We introduce an improved algorithm that uses the average of the window pixels as the reference value, removing the susceptibility to the central pixel value and improving accuracy. Experiments show that object contours in the disparity grayscale map obtained by the improved algorithm are more apparent and that image edges are significantly improved, matching the real scene more closely. In addition, because the traditional Census algorithm matches with a fixed rectangular window, it is difficult to obtain a suitable window across image regions with different textures, which affects the timeliness of the algorithm. We therefore propose region-growing adaptive window matching and apply the improved Census algorithm within the AD-Census algorithm. The results show that the improved AD-Census algorithm reduces average run time by 5.3% and achieves better matching than the traditional AD-Census algorithm on all tested image sets. Finally, the improved algorithm was applied in a simulation environment, where experimental results show that obstacles in the image can be effectively detected. The improved algorithm has important practical application value for improving the feasibility and reliability of obstacle detection in lunar exploration projects.
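The improvement to the Census transform described above, replacing the noise-prone central pixel with the window mean as the comparison reference, can be sketched as:

```python
def census_bits(win, use_mean=True):
    """Census bit string over a square window (centre pixel excluded).
    The classic transform compares each neighbour to the centre pixel;
    the improved variant compares to the window mean, so a single noisy
    centre value no longer corrupts every bit."""
    n = len(win)
    c = n // 2
    flat = [v for row in win for v in row]
    ref = sum(flat) / len(flat) if use_mean else win[c][c]
    return tuple(1 if v > ref else 0
                 for i, row in enumerate(win) for j, v in enumerate(row)
                 if not (i == c and j == c))

def hamming(a, b):
    """Matching cost between two Census bit strings."""
    return sum(x != y for x, y in zip(a, b))
```

In AD-Census this Hamming cost is blended with an absolute-difference cost; the example below shows how a corrupted centre pixel flips the classic bit string but not the mean-referenced one.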

13.
Sensors (Basel) ; 22(23)2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36501841

ABSTRACT

Robust maritime obstacle detection is critical for safe navigation of autonomous boats and timely collision avoidance. The current state-of-the-art is based on deep segmentation networks trained on large datasets. However, per-pixel ground truth labeling of such datasets is labor-intensive and expensive. We propose a new scaffolding learning regime (SLR) that leverages weak annotations consisting of water edges, the horizon location, and obstacle bounding boxes to train segmentation-based obstacle detection networks, thereby reducing the required ground truth labeling effort by a factor of twenty. SLR trains an initial model from weak annotations and then alternates between re-estimating the segmentation pseudo-labels and improving the network parameters. Experiments show that maritime obstacle segmentation networks trained using SLR on weak annotations not only match but outperform the same networks trained with dense ground truth labels, which is a remarkable result. In addition to the increased accuracy, SLR also increases domain generalization and can be used for domain adaptation with a low manual annotation load. The SLR code and pre-trained models are freely available online.


Subjects
Labor, Obstetric; Learning; Pregnancy; Female; Humans; Acclimatization; Water; Image Processing, Computer-Assisted
14.
Sensors (Basel) ; 22(2)2022 Jan 09.
Article in English | MEDLINE | ID: mdl-35062435

ABSTRACT

Increasing demand for rail transportation results in denser and higher-speed usage of the existing railway network, making new and more advanced vehicle safety systems necessary. Furthermore, high traveling speeds and the large weights of trains lead to long braking distances, all of which necessitates a Long-Range Obstacle Detection (LROD) system capable of detecting humans and other objects more than 1000 m in advance. According to current research, only a few sensor modalities are capable of reaching this far and recording sufficiently accurate data to distinguish individual objects. The limitation of such sensors, for example a 1D Light Detection and Ranging (LiDAR) sensor, is however a very narrow Field of View (FoV), making it necessary to use high-precision means of orienting to target them at possible areas of interest. To close this research gap, this paper presents a high-precision pointing mechanism for use in a future railway obstacle detection system, capable of targeting a 1D LiDAR at humans or objects at the required distance. The approach addresses the challenges of a low target price and restricted access to high-precision machinery and equipment, as well as the unique requirements of the target application. By combining established elements from 3D printers and Computer Numerical Control (CNC) machines with a double-hinged lever system, simple and low-cost components are capable of precisely orienting an arbitrary sensor platform. The system's pointing accuracy was evaluated in a controlled, indoor, long-range experiment, where the device demonstrated a precision of 6.179 mdeg, at the limit of the measurable precision of the designed experiment.


Subjects
Computers; Transportation; Data Collection; Humans
15.
Sensors (Basel) ; 22(22)2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36433511

ABSTRACT

This paper presents the design, development, and testing of an IoT-enabled smart stick for visually impaired people to navigate the outside environment with the ability to detect and warn about obstacles. The proposed design employs ultrasonic sensors for obstacle detection, a water sensor for sensing the puddles and wet surfaces in the user's path, and a high-definition video camera integrated with object recognition. Furthermore, the user is signaled about various hindrances and objects using voice feedback through earphones after accurately detecting and identifying objects. The proposed smart stick has two modes; one uses ultrasonic sensors for detection and feedback through vibration motors to inform about the direction of the obstacle, and the second mode is the detection and recognition of obstacles and providing voice feedback. The proposed system allows for switching between the two modes depending on the environment and personal preference. Moreover, the latitude/longitude values of the user are captured and uploaded to the IoT platform for effective tracking via global positioning system (GPS)/global system for mobile communication (GSM) modules, which enable the live location of the user/stick to be monitored on the IoT dashboard. A panic button is also provided for emergency assistance by generating a request signal in the form of an SMS containing a Google maps link generated with latitude and longitude coordinates and sent through an IoT-enabled environment. The smart stick has been designed to be lightweight, waterproof, size adjustable, and has long battery life. The overall design ensures energy efficiency, portability, stability, ease of access, and robust features.
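The ultrasonic ranging mode rests on standard time-of-flight arithmetic: distance is the speed of sound times half the echo round-trip time. A sketch of that conversion (the specific sensor and the temperature compensation are assumptions, not from the paper):

```python
def ultrasonic_distance_cm(echo_time_s, temp_c=20.0):
    """Convert an ultrasonic echo round-trip time to distance in cm.
    The speed of sound in air is approximately 331.3 + 0.606*T m/s,
    with T in degrees Celsius; the echo travels out and back, hence /2."""
    v = 331.3 + 0.606 * temp_c          # m/s
    return v * echo_time_s / 2.0 * 100.0  # metres -> centimetres
```

A threshold on this distance would drive the vibration motors in the stick's first mode; the second mode would instead hand the camera frame to the object recognizer.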


Subjects
Self-Help Devices; Sensory Aids; Visually Impaired Persons; Humans; Equipment Design; Canes
16.
Sensors (Basel) ; 22(12)2022 Jun 07.
Article in English | MEDLINE | ID: mdl-35746094

ABSTRACT

Detecting the objects surrounding a moving vehicle is essential for autonomous driving and for any kind of advanced driving assistance system; such a system can also be used for analyzing the surrounding traffic as the vehicle moves. The most popular techniques for object detection are based on image processing; in recent years, they have become increasingly focused on artificial intelligence. Systems using monocular vision are increasingly popular for driving assistance, as they do not require complex calibration and setup. The lack of three-dimensional data is compensated for by the efficient and accurate classification of the input image pixels. The detected objects are usually identified as cuboids in the 3D space, or as rectangles in the image space. Recently, instance segmentation techniques have been developed that are able to identify the freeform set of pixels that form an individual object, using complex convolutional neural networks (CNNs). This paper presents an alternative to these instance segmentation networks, combining much simpler semantic segmentation networks with light, geometrical post-processing techniques, to achieve instance segmentation results. The semantic segmentation network produces four semantic labels that identify the quarters of the individual objects: top left, top right, bottom left, and bottom right. These pixels are grouped into connected regions, based on their proximity and their position with respect to the whole object. Each quarter is used to generate a complete object hypothesis, which is then scored according to object pixel fitness. The individual homogeneous regions extracted from the labeled pixels are then assigned to the best-fitted rectangles, leading to complete and freeform identification of the pixels of individual objects. The accuracy is similar to instance segmentation-based methods but with reduced complexity in terms of trainable parameters, which leads to a reduced demand for computational resources.
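The quarter-label grouping described above admits a simple geometric core: each detected quarter region, knowing which quarter it is, is doubled away from its outer corner to produce a whole-object rectangle hypothesis. An illustrative sketch with axis-aligned boxes (the actual method scores hypotheses by object-pixel fitness, omitted here):

```python
def object_hypothesis(quarter, box):
    """Expand one quarter region into a full-object rectangle.

    quarter: one of "tl", "tr", "bl", "br" (the semantic label).
    box: (x0, y0, x1, y1) bounding box of the quarter region.
    A top-left quarter spans the object's left half horizontally and
    top half vertically, so doubling it away from its outer corner
    yields the whole-object hypothesis; the other quarters mirror this."""
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    dx = 0 if quarter in ("tl", "bl") else -w   # left quarters keep x0
    dy = 0 if quarter in ("tl", "tr") else -h   # top quarters keep y0
    return (x0 + dx, y0 + dy, x0 + dx + 2 * w, y0 + dy + 2 * h)
```

When the four quarters of one object are segmented consistently, all four hypotheses coincide, which is what makes consensus scoring across quarters effective.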


Subjects
Artificial Intelligence; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Semantics
17.
Sensors (Basel) ; 22(11)2022 Jun 03.
Article in English | MEDLINE | ID: mdl-35684892

ABSTRACT

People with visual impairment are the second largest affected category with limited access to assistive products. A complete, portable, and affordable smart assistant for helping visually impaired people to navigate indoors and outdoors and to interact with the environment is presented in this paper. The prototype consists of a smart cane and a central unit; communication between the user and the assistant is carried out through voice messages, making the system suitable for any user regardless of their IT skills. The assistant is equipped with GPS, an electronic compass, Wi-Fi, ultrasonic sensors, an optical sensor, and an RFID reader to help the user navigate safely. Navigation functionalities work offline, which is especially important in areas where Internet coverage is weak or missing altogether. Physical condition monitoring, medication, shopping, and weather information facilitate the interaction between the user and the environment, supporting daily activities. The proposed system uses different components for navigation and provides independent navigation indoors and outdoors, both day and night, regardless of weather conditions. Preliminary tests provide encouraging results, indicating that the prototype has the potential to help visually impaired people achieve a high level of independence in daily activities.


Subjects
Self-Help Devices; Sensory Aids; Visually Impaired Persons; Canes; Equipment Design; Humans
18.
Sensors (Basel) ; 22(13)2022 Jun 21.
Article in English | MEDLINE | ID: mdl-35808174

ABSTRACT

Obstacle detection for autonomous navigation through semantic image segmentation using neural networks has grown in popularity for use in unmanned ground and surface vehicles because of its ability to rapidly create a highly accurate pixel-wise classification of complex scenes. Due to the lack of available training data, semantic networks are rarely applied to navigation in complex water scenes such as rivers, creeks, canals, and harbors. This work seeks to address the issue by making a one-of-a-kind River Obstacle Segmentation En-Route By USV Dataset (ROSEBUD) publicly available for use in robotic SLAM applications that map water and non-water entities in fluvial images taken from water level. ROSEBUD provides a challenging baseline for surface navigation in complex environments using complex fluvial scenes. The dataset contains 549 images encompassing various water qualities, seasons, and obstacle types that were taken on narrow inland rivers and then hand-annotated for use in semantic network training. The difference between the ROSEBUD dataset and existing marine datasets was verified. Two state-of-the-art networks were trained on existing water segmentation datasets and tested for generalization to the ROSEBUD dataset. Results from further training show that modern semantic networks custom-made for water recognition, and trained on marine images, can properly segment large areas, but they struggle to properly segment small obstacles in fluvial scenes without further training on the ROSEBUD dataset.


Subjects
Image Processing, Computer-Assisted; Vision, Monocular; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Rivers; Semantics
19.
Sensors (Basel) ; 22(16)2022 Aug 11.
Article in English | MEDLINE | ID: mdl-36015750

ABSTRACT

Environmental perception for transportation at open-pit mines faces many difficulties, such as unpaved roads, dusty environments, and high requirements for the detection and tracking stability of small irregular obstacles. To solve these problems, a new multi-target detection and tracking method is proposed based on the fusion of LiDAR and millimeter-wave radar. It introduces a secondary segmentation algorithm suited to open-pit mine production scenarios to improve the detection distance and accuracy for small irregular obstacles on unpaved roads. In addition, the paper proposes an adaptive heterogeneous multi-source fusion strategy for filtering dust, which can significantly improve the detection and tracking ability of the perception system for various targets in dusty environments by adaptively adjusting the confidence of the output targets. Finally, test results in an open-pit mine show that the method can stably detect obstacles 30-40 cm in size at 60 m in front of the mining truck and effectively filter out false alarms caused by dust concentrations, proving the reliability of the method.


Subjects
Mining; Motor Vehicles; Dust/analysis; Radar; Reproducibility of Results
20.
Sensors (Basel) ; 22(17)2022 Aug 30.
Article in English | MEDLINE | ID: mdl-36080993

ABSTRACT

Obstacle detection is an essential task for autonomous robot navigation. The task becomes more complex in dynamic, cluttered environments. In this context, the RGB-D camera is one of the most common sensors, providing a quick and reasonable estimation of the environment in the form of RGB and depth images. This work proposes an efficient obstacle detection and tracking method using depth images to facilitate quick detection of dynamic obstacles. To achieve early detection of dynamic obstacles and stable estimation of their states, as in previous methods, we apply a u-depth map for obstacle detection. Unlike existing methods, the present method provides dynamic thresholding on the u-depth map to detect obstacles more accurately. We propose a restricted v-depth map technique, applied as post-processing after the u-depth map processing, to obtain a better prediction of obstacle dimensions. We also propose a new algorithm to track obstacles while they remain within the field of view (FOV). We evaluate the performance of the proposed system on different kinds of datasets. The proposed method outperformed vision-based state-of-the-art (SoA) methods in terms of state estimation of dynamic obstacles and execution time.
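A u-depth map as used above is a column-wise histogram of depth values: pixels of an obstacle share a depth, so the obstacle appears as a dense band, and thresholding per-cell counts flags candidate columns. A minimal sketch with a simple dynamic threshold (the exact thresholding rule is this paper's contribution and is not reproduced here; the fraction below is an assumption):

```python
def u_depth_map(depth, n_bins, d_max):
    """Column-wise depth histogram: cell [b][u] counts the pixels in
    image column u whose depth falls into bin b of [0, d_max)."""
    w = len(depth[0])
    umap = [[0] * w for _ in range(n_bins)]
    for row in depth:
        for u, d in enumerate(row):
            if 0 < d < d_max:
                b = int(d / d_max * n_bins)
                umap[b][u] += 1
    return umap

def detect_columns(umap, row_height_frac, img_height):
    """Flag (bin, column) cells where enough of the column's pixels share
    one depth bin; row_height_frac scales the count threshold to the
    image height, a simple stand-in for dynamic thresholding."""
    thr = row_height_frac * img_height
    return {(b, u) for b, row in enumerate(umap)
            for u, c in enumerate(row) if c >= thr}
```

Connected flagged cells give an obstacle's horizontal extent and depth; a v-depth map (the row-wise analogue) then recovers its vertical extent.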


Subjects
Robotic Surgical Procedures; Robotics; Algorithms; Robotics/methods